<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Latent.Space]]></title><description><![CDATA[The AI Engineer newsletter + Top technical AI podcast. How leading labs build Agents, Models, Infra, & AI for Science. See https://latent.space/about for highlights from Greg Brockman, Andrej Karpathy, George Hotz, Simon Willison, Soumith Chintala et al!]]></description><link>https://www.latent.space</link><generator>Substack</generator><lastBuildDate>Fri, 03 Apr 2026 22:05:42 GMT</lastBuildDate><atom:link href="https://www.latent.space/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Latent.Space]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[swyx@noreply.com]]></webMaster><itunes:owner><itunes:email><![CDATA[swyx@noreply.com]]></itunes:email><itunes:name><![CDATA[Latent.Space]]></itunes:name></itunes:owner><itunes:author><![CDATA[Latent.Space]]></itunes:author><googleplay:owner><![CDATA[swyx@noreply.com]]></googleplay:owner><googleplay:email><![CDATA[swyx@noreply.com]]></googleplay:email><googleplay:author><![CDATA[Latent.Space]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[[AINews] Good Friday]]></title><description><![CDATA[a quiet day.]]></description><link>https://www.latent.space/p/ainews-good-friday</link><guid isPermaLink="false">https://www.latent.space/p/ainews-good-friday</guid><pubDate>Fri, 03 Apr 2026 22:03:37 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/knx2wrILP1M" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We covered this yesterday, but <a href="https://www.latent.space/p/ainews-gemma-4-the-best-small-multimodal">positive Gemma reviews</a> keep streaming in. </p><p>Early analytics from our Marc Andreesen pod are already pointing towards it being one of the top Latent Space pods of all time. We&#8217;ll hear more from the creators of both OpenClaw and Pi (and many other top Europe-origin AI tools) live from London next week. Livestream links for <a href="https://www.youtube.com/watch?v=O_IMsEg91g8">AIE Europe</a> next week is now up, including a great OpenClaw song. <a href="https://www.youtube.com/watch?v=O_IMsEg91g8">Hit the bell</a> to help promote it in the algorithm please and thank you!</p><div id="youtube2-knx2wrILP1M" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;knx2wrILP1M&quot;,&quot;startTime&quot;:&quot;1314s&quot;,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/knx2wrILP1M?start=1314s&amp;rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p></p><blockquote><p>AI News for 4/3/2026-4/4/2026. We checked 12 subreddits, <a href="https://twitter.com/i/lists/1585430245762441216">544 Twitters</a> and no further Discords. <a href="https://news.smol.ai/">AINews&#8217; website</a> lets you search all past issues. As a reminder, <a href="https://www.latent.space/p/2026">AINews is now a section of Latent Space</a>. You can <a href="https://support.substack.com/hc/en-us/articles/8914938285204-How-do-I-subscribe-to-or-unsubscribe-from-a-section-on-Substack">opt in/out</a> of email frequencies!</p></blockquote><div><hr></div><h1><strong>AI Twitter Recap</strong></h1><p><strong>Gemma 4&#8217;s Apache-licensed launch, local inference performance, and day-0 ecosystem support</strong></p><ul><li><p><strong>Gemma 4 is the day&#8217;s defining open-model release</strong>: Google launched <strong>Gemma 4</strong> under <strong>Apache 2.0</strong>, with multiple posts emphasizing its positioning for <strong>reasoning, agentic workflows, multimodality, and on-device use</strong>. <a href="https://x.com/fchollet/status/2039845249334510016">@fchollet</a> called it Google&#8217;s strongest open model yet and recommended the <strong>JAX backend</strong> in KerasHub; <a href="https://x.com/demishassabis/status/2040067244349063326">@demishassabis</a> highlighted efficiency, claiming Gemma 4 outperforms models <strong>10x larger</strong> on Google&#8217;s chart. Community reaction centered on the license shift: <a href="https://x.com/ClementDelangue/status/2039941213244072173">@ClementDelangue</a>, <a href="https://x.com/QuixiAI/status/2039862230452252926">@QuixiAI</a>, and <a href="https://x.com/googlegemma/status/2040107948010242075">@googlegemma</a> all stressed that this is a <strong>&#8220;real&#8221; open-weights release</strong> with broad downstream usability.</p></li><li><p><strong>The ecosystem was unusually ready on day 0</strong>: Support landed immediately across <strong>vLLM</strong> (<a href="https://x.com/mgoin_/status/2039860597517394279">GPU, TPU, XPU simultaneously</a>), <strong>llama.cpp</strong> (<a href="https://x.com/ggerganov/status/2039943099284140286">@ggerganov</a>), <strong>Ollama</strong> (<a href="https://x.com/MichaelGannotti/status/2039903041642508541">new models available</a>), <strong>Intel hardware</strong> (<a href="https://x.com/intelnews/status/2040106767258906707">Xeon, Xe GPU, Core Ultra</a>), <strong>Unsloth</strong> (<a href="https://x.com/NVIDIA_AI_PC/status/2040096993800761579">local run/fine-tune support</a>), <strong>Hugging Face Inference Endpoints</strong> (<a href="https://x.com/ErikKaum/status/2040008281796513939">one-click deploy</a>), and <strong>AI Studio / Google AI Studio collateral</strong> (<a href="https://x.com/GoogleAIStudio/status/2040090067709075732">article link</a>). For architecture-oriented readers, both <a href="https://x.com/osanseviero/status/2040105484061954349">@osanseviero</a> and <a href="https://x.com/MaartenGr/status/2040099556948390075">@MaartenGr</a> shared deep visual guides covering <strong>MoE design, vision/audio encoders, and per-layer embeddings</strong>.</p></li><li><p><strong>Local inference benchmarks were the main practical story</strong>: multiple builders showed Gemma 4 running on consumer hardware, with particular attention to the <strong>26B A4B MoE</strong>. <a href="https://x.com/basecampbernie/status/2039847254534852783">@basecampbernie</a> reported <strong>162 tok/s decode</strong> and <strong>262K native context on a single RTX 4090</strong> at <strong>19.5 GB VRAM</strong>, while <a href="https://x.com/Prince_Canuma/status/2039840313074753896">@Prince_Canuma</a> showed <strong>TurboQuant KV cache</strong> cutting memory from <strong>13.3 GB to 4.9 GB</strong> at 128K context for the 31B model, with some decode-speed penalty. There were also examples on weaker local devices: <a href="https://x.com/measure_plan/status/2040069272613834847">@measure_plan</a> reported <strong>34 tok/s</strong> for 26B-A4B on a <strong>Mac mini M4 with 16 GB</strong>, <a href="https://x.com/kimmonismus/status/2039978863644537048">@kimmonismus</a> argued the <strong>E4B tier brings useful AI directly to phones/laptops</strong>, and <a href="https://x.com/anemll/status/2040126326708031969">@anemll</a> got the model onto an <strong>iPhone with Swift MLX</strong>.</p></li><li><p><strong>Early benchmarking discourse was positive but not uncritical</strong>: <a href="https://x.com/arena/status/2039848959301361716">@arena</a> noted <strong>large ranking gains over Gemma 3 and 2</strong> at similar parameter scales, suggesting progress beyond pure scaling; later, <a href="https://x.com/arena/status/2040128319719670101">@arena</a> put <strong>Gemma 4 31B</strong> on the <strong>Pareto frontier</strong> against similarly priced models. Some users pushed back on presentation choices: <a href="https://x.com/stochasticchasm/status/2039912148676264334">@stochasticchasm</a> argued comparisons should be more clearly <strong>FLOP/active-parameter normalized</strong>, and <a href="https://x.com/reach_vb/status/2040070816247734720">@reach_vb</a> urged the field to move beyond <strong>Arena Elo</strong> as the default score.</p></li></ul><p><strong>Hermes Agent&#8217;s rapid adoption, memory/plugin architecture, and the &#8220;harness matters&#8221; shift</strong></p><ul><li><p><strong>Hermes Agent appears to be the breakout open-source agent harness of the day</strong>: across user reports, many developers explicitly said they had <strong>switched from OpenClaw/Openclaw to Hermes</strong> and found it more stable or more capable on long tasks. Examples include <a href="https://x.com/Zeneca/status/2039836468928233875">@Zeneca</a>, <a href="https://x.com/Everlier/status/2039853380844081260">@Everlier</a>, <a href="https://x.com/erick_lindberg_/status/2039897087878275580">@erick_lindberg_</a>, and <a href="https://x.com/AnomalistG/status/2039969500968501748">@AnomalistG</a>. A detailed Korean thread from <a href="https://x.com/supernovajunn/status/2039847124687605811">@supernovajunn</a> crystallized the narrative: the edge is not just the model, but the <strong>harness + learning loop</strong>, especially <strong>autonomous skill creation</strong>, reusable procedural memory, and higher reliability floors on real tasks.</p></li><li><p><strong>Nous shipped meaningful infrastructure, not just hype</strong>: <a href="https://x.com/Teknium/status/2039912975444926885">@Teknium</a> announced a reworked, <strong>pluggable memory system</strong> with support for <strong>Honcho, mem0, Hindsight, RetainDB, Byterover, OpenVikingAI, and Vectorize</strong>-style backends. Follow-up posts detailed the architectural cleanup: memory providers are now a dedicated plugin type, the core is more maintainable, and users can add their own providers more easily (<a href="https://x.com/Teknium/status/2040151297991770435">details</a>). Hermes also added <strong>inline diffs in the TUI</strong> (<a href="https://x.com/Teknium/status/2040152383121154265">post</a>) and <strong>provider credential pools</strong> for cycling between accounts/keys (<a href="https://x.com/Teknium/status/2040152744829567025">post</a>).</p></li><li><p><strong>The larger theme is that agent performance is becoming a harness-engineering problem</strong>: <a href="https://x.com/Vtrivedy10/status/2039872562662941118">@Vtrivedy10</a> described a &#8220;<strong>model-harness training loop</strong>&#8221; where teams combine harness engineering, trace collection, analysis, and fine-tuning to build domain-specific frontier performance. In a companion tweet, he argued the key raw material is <strong>massive trace data</strong>, mined by agents for failure modes and converted into training or harness improvements (<a href="https://x.com/Vtrivedy10/status/2040079505763504373">trace loop</a>). This complements Hermes&#8217; popularity: if open models are now &#8220;good enough,&#8221; better memory, tools, evals, and self-improvement loops may dominate application quality.</p></li><li><p><strong>There is also visible demand for open harnesses rather than closed product shells</strong>: <a href="https://x.com/michael_chomsky/status/2039986402260046226">@michael_chomsky</a> argued Anthropic should open-source Claude Code, partly because 2025 was &#8220;the year of mediocre harnesses&#8221;; <a href="https://x.com/hwchase17/status/2040134178864546159">@hwchase17</a> made the memory angle explicit, saying <strong>memory cannot remain trapped behind proprietary APIs or proprietary harnesses</strong>.</p></li></ul><p><strong>Coding agents, rate limits, and the cognitive bottleneck of parallel agent work</strong></p><ul><li><p><strong>The strongest user sentiment was not about raw model IQ but about operational friction</strong>: <a href="https://x.com/gdb/status/2039830819498491919">@gdb</a> lowered the barrier to trying <strong>Codex at work</strong> by removing up-front commitment, and later said the <strong>Codex app is growing super fast</strong> (<a href="https://x.com/gdb/status/2039950296969863283">post</a>). But at the same time, discussion around <strong>Claude Code rate limits</strong> was intense: <a href="https://x.com/theo/status/2039992633616224366">@theo</a> said &#8220;we need to talk about the Claude Code rate limits,&#8221; with follow-up user complaints from <a href="https://x.com/kimmonismus/status/2040026508169728257">@kimmonismus</a> and <a href="https://x.com/cto_junior/status/2040130186755371192">@cto_junior</a> suggesting that users are hitting caps faster than expected.</p></li><li><p><strong>A growing theme is cognitive saturation, not just compute scarcity</strong>: one of the most-engaged technical tweets was <a href="https://x.com/lennysan/status/2039845666680176703">@lennysan quoting @simonw</a>: using coding agents well can require <strong>every inch of senior engineering experience</strong>, and orchestrating <strong>four agents in parallel</strong> is mentally exhausting by mid-morning. That view showed up elsewhere: <a href="https://x.com/kylebrussell/status/2039825390131155270">@kylebrussell</a> praised Claude Code&#8217;s ability to drive many browser tabs for verification work, but later noted scaling gets &#8220;weird&#8221; and that <strong>2&#8211;4 sessions still seems optimal for his brain</strong> (<a href="https://x.com/kylebrussell/status/2040090424799350878">post</a>).</p></li><li><p><strong>Developers are adapting by externalizing context and observability</strong>: <a href="https://x.com/jerryjliu0/status/2039834316013031909">@jerryjliu0</a> described a practical setup where agents emit <strong>.md/.html artifacts</strong> to preserve context across sessions, with <strong>Obsidian</strong> as a local viewer and <strong>LiteParse</strong> replacing generic PDF parsers for better extraction from complex documents. On the observability side, LangChain shipped a <strong>Claude Code &#8594; LangSmith tracing plugin</strong> that logs subagents, tool calls, compaction, token usage, and enables org-level analysis (<a href="https://x.com/LangChain/status/2040137349313556633">announcement</a>).</p></li><li><p><strong>There&#8217;s also growing evidence that &#8220;good enough local fallback&#8221; matters</strong>: several posts framed Gemma 4 and Hermes together as a hedge against hosted-product friction. <a href="https://x.com/gregisenberg/status/2039853864082424198">@gregisenberg</a> emphasized that a model this capable now runs locally and can be swapped into <strong>Claude Code, Cursor, Hermes, or OpenClaw</strong>. <a href="https://x.com/kimmonismus/status/2039989730901623049">@kimmonismus</a> similarly highlighted a <strong>fully local assistant on a MacBook Air M4 with 16 GB</strong>, no API keys required.</p></li></ul><p><strong>Research signals: time horizons, recursive context management, and self-distillation</strong></p><ul><li><p><strong>METR-style &#8220;time horizon&#8221; results continue to trend upward</strong>: <a href="https://x.com/LyptusResearch/status/2039861448927739925">@LyptusResearch</a> applied the <strong>METR time-horizon methodology</strong> to <strong>offensive cybersecurity</strong>, reporting that capability has doubled every <strong>9.8 months since 2019</strong>, or <strong>5.7 months on a 2024+ fit</strong>, with <strong>Opus 4.6 and GPT-5.3 Codex</strong> reaching <strong>50% success on tasks taking human experts ~3 hours</strong>. Related commentary from <a href="https://x.com/scaling01/status/2040047917306876325">@scaling01</a> extrapolated METR horizons to roughly <strong>15.2 hours &#8220;today&#8221;</strong> and <strong>~87 hours by year-end</strong> under continuation assumptions.</p></li><li><p><strong>Long-context handling remains an active systems/research problem</strong>: <a href="https://x.com/DeepLearningAI/status/2039831830979838240">@DeepLearningAI</a> highlighted <strong>Recursive Language Models (RLMs)</strong> from MIT researchers Alex Zhang, Tim Kraska, and Omar Khattab: rather than stuffing everything into a monolithic prompt, the system offloads prompt management to an <strong>external environment</strong>, managing context programmatically. This idea resonated with practitioners: <a href="https://x.com/raibaggy/status/2039849261974814882">@raibaggy</a> joked that after moving workflows to RLMs, &#8220;you have to put the harness into the harness.&#8221;</p></li><li><p><strong>Post-training without labels/verifiers got notable attention</strong>: <a href="https://x.com/BoWang87/status/2039943931543331237">@BoWang87</a> summarized Apple&#8217;s <strong>Simple Self-Distillation (SSD)</strong> result for coding models: sample the model&#8217;s own outputs and fine-tune on them <strong>without correctness filtering, RL, or a verifier</strong>. The strongest cited gain was <strong>Qwen3-30B-Instruct: 42.4% &#8594; 55.3% pass@1 on LiveCodeBench</strong>, with especially large gains on hard problems. If robust, this suggests many code models are underperforming their latent capability due to decoding/post-training gaps rather than missing core competence.</p></li><li><p><strong>Additional research worth flagging</strong>: <a href="https://x.com/jaseweston/status/2040062089725645039">@jaseweston</a> shared a <strong>70-page</strong> paper on reasoning over mathematical objects, spanning <strong>training data, on-policy reward models, and on-policy inference methods</strong>; <a href="https://x.com/AnthropicAI/status/2040179539738030182">@AnthropicAI</a> published a &#8220;<strong>diff</strong>&#8221; method for surfacing behavioral differences between open-weight models; and <a href="https://x.com/AndrewLampinen/status/2040157250686484638">@AndrewLampinen</a> discussed test-time thinking as a way to retrieve and use <strong>latent knowledge</strong> from training data.</p></li></ul><p><strong>Enterprise and production AI: speech, security, access control, and real-world deployments</strong></p><ul><li><p><strong>Microsoft&#8217;s MAI-Transcribe-1 looks competitive on STT</strong>: <a href="https://x.com/ArtificialAnlys/status/2039862705096659050">@ArtificialAnlys</a> reported <strong>3.0% AA-WER</strong> (#4 overall on its leaderboard) and <strong>~69x real-time</strong> speed, with support for <strong>25 languages</strong> and preview availability through Azure Speech / Foundry. Pricing was quoted at <strong>$6 per 1,000 minutes</strong> (<a href="https://x.com/ArtificialAnlys/status/2039862709744021938">pricing post</a>).</p></li><li><p><strong>Security surfaced in multiple production contexts</strong>: <a href="https://x.com/simonw/status/2040080868958765229">@simonw</a> warned maintainers that the <strong>Axios supply-chain attack</strong> began with sophisticated social engineering aimed at a developer; <a href="https://x.com/gneubig/status/2040072807552327998">@gneubig</a> pulled out the practical lessons: stronger <strong>credential management, identity verification, and malware detection</strong>. Separately, <a href="https://x.com/thinkshiv/status/2039836920243486790">@thinkshiv</a> and <a href="https://x.com/jerryjliu0/status/2039841363202818505">@jerryjliu0</a> highlighted a joint <strong>Auth0 FGA + LlamaIndex</strong> approach to making <strong>authorization structural inside retrieval</strong>, rather than bolting it on after the fact.</p></li><li><p><strong>Inference infrastructure and real deployments got credible examples</strong>: Baseten and OpenEvidence both claimed very large-scale production use in clinical settings, with OpenEvidence saying <strong>over 40% of U.S. physicians</strong> rely on it and Baseten powers inference for that workload (<a href="https://x.com/EvidenceOpen/status/2040103018520281514">OpenEvidence</a>, <a href="https://x.com/tuhinone/status/2040113371593474176">Baseten</a>). On serving resilience, <a href="https://x.com/vllm_project/status/2039870472092049458">@vllm_project</a> highlighted <strong>DP-group fault tolerance in Ray Serve LLM for vLLM WideEP deployments</strong>, complementing <strong>Elastic EP</strong> at the engine layer.</p></li></ul><p><strong>Top tweets (by engagement, filtered for technical relevance)</strong></p><ul><li><p><strong>Agent workflow fatigue is becoming a first-class problem</strong>: <a href="https://x.com/lennysan/status/2039845666680176703">@lennysan quoting @simonw</a> on the mental cost of using multiple coding agents in parallel was the most resonant technical post in the set.</p></li><li><p><strong>Personal knowledge bases for agents are turning into a serious pattern</strong>: <a href="https://x.com/omarsar0/status/2039844072748204246">@omarsar0</a> described a highly customized research-paper knowledge base built in markdown with semantic indexing, agent-driven curation, and interactive artifacts; a follow-up shared the system diagram (<a href="https://x.com/omarsar0/status/2040099881008652634">diagram</a>).</p></li><li><p><strong>Gemma 4 had both broad mindshare and practical credibility</strong>: engagement concentrated not only on the launch itself&#8212;<a href="https://x.com/fchollet/status/2039845249334510016">@fchollet</a>, <a href="https://x.com/demishassabis/status/2040067244349063326">@demishassabis</a>&#8212;but on practical local-running claims from <a href="https://x.com/ClementDelangue/status/2039941213244072173">@ClementDelangue</a>, <a href="https://x.com/gregisenberg/status/2039853864082424198">@gregisenberg</a>, and <a href="https://x.com/kimmonismus/status/2039989730901623049">@kimmonismus</a>.</p></li><li><p><strong>Hermes Agent&#8217;s adoption curve is now visible in the open</strong>: the strongest evidence came less from official posts than from user migration reports and usage anecdotes, plus <a href="https://x.com/Teknium/status/2039912975444926885">@Teknium&#8217;s memory-system overhaul</a>. The pattern is notable: users increasingly credit <strong>memory + harness design</strong>, not just the base model, for the jump in utility.</p></li></ul><div><hr></div><h1><strong>AI Reddit Recap</strong></h1><h2><strong>/r/LocalLlama + /r/localLLM Recap</strong></h2><h3><strong>1. Gemma 4 Model Release and Features</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1salgre/gemma_4_has_been_released/">Gemma 4 has been released</a></strong> (Activity: 3412): <strong>Gemma 4, developed by Google DeepMind, is a family of open multimodal models capable of processing text, images, and audio, with a context window of up to </strong><code>256K tokens</code><strong>. The models are available in four sizes: E2B, E4B, 26B A4B, and 31B, supporting multilingual capabilities in over </strong><code>140 languages</code><strong>. They feature both Dense and Mixture-of-Experts (MoE) architectures, optimized for tasks such as text generation, coding, and reasoning. Notably, Gemma 4 introduces a hybrid attention mechanism combining local sliding window and global attention, enhancing processing speed and memory efficiency for long-context tasks. The models also support native function-calling and structured tool use, facilitating agentic workflows and coding tasks. For more details, see the <a href="https://huggingface.co/collections/google/gemma-4">Hugging Face repository</a>.</strong> One comment highlights the significance of Gemma-4&#8217;s native thinking and tool-calling capabilities, emphasizing its multimodal nature. Another provides practical guidance on running the models, including specific parameters like <code>temperature = 1.0</code>, <code>top_p = 0.95</code>, and <code>top_k = 64</code>, and mentions its integration with Unsloth Studio.</p><ul><li><p>Gemma-4 introduces several advanced features such as <strong>native thinking</strong>, tool calling, and multimodal capabilities. It is optimized with specific parameters: <code>temperature = 1.0</code>, <code>top_p = 0.95</code>, <code>top_k = 64</code>, and uses <code>&amp;lt;turn|&amp;gt;</code> as the end-of-sequence token. Additionally, <code>&amp;lt;|channel&amp;gt;thought\n</code> is used for the thinking trace, enhancing its cognitive processing capabilities. More details and guides are available at <a href="https://unsloth.ai/docs/models/gemma-4">Unsloth AI</a>.</p></li><li><p>The release of Gemma-4 is significant for its seamless integration with Unsloth Studio, providing a streamlined environment for developers. All GGUFs related to Gemma-4 can be accessed on <a href="https://huggingface.co/collections/unsloth/gemma-4">Hugging Face</a>, offering a comprehensive resource for those looking to implement or experiment with the model.</p></li><li><p>There is anticipation for comparative analysis between Gemma-4 and other models like Qwen3.5, highlighting the competitive landscape in AI model development. This suggests a focus on benchmarking and performance evaluation to understand the strengths and weaknesses of each model in practical applications.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/LocalLLM/comments/1sas4qd/you_can_now_run_google_gemma_4_locally_5gb_ram_min/">You can now run Google Gemma 4 locally! (5GB RAM min.)</a></strong> (Activity: 415): <strong>Google has released the open-source model family Gemma 4, featuring four models with multimodal capabilities: E2B, E4B, 26B-A4B, and 31B. The models excel in reasoning, coding, and long-context workflows. The 31B model is the most advanced, while 26B-A4B is optimized for speed due to its MoE architecture. Unsloth has adapted these models for local execution on devices with as little as </strong><code>5GB RAM</code><strong>. The models can be run via <a href="https://github.com/unslothai/unsloth">Unsloth Studio</a>, with recommended setups ranging from </strong><code>6GB RAM</code><strong> for smaller models to </strong><code>35GB RAM</code><strong> for the largest. No GPU is required, but it enhances performance significantly. Installation is streamlined for various OS, and a desktop app is forthcoming. More details are available in the <a href="https://unsloth.ai/docs/models/gemma-4">Unsloth documentation</a>.</strong> Commenters express excitement about the usability of Gemma 4 on older hardware, noting the impressive performance of the E2B model on a 2013 Dell laptop. There is also a discussion on the complexity of keeping up with model specifications and hardware requirements.</p><ul><li><p>The recommended setups for running Google Gemma 4 locally highlight the memory and performance trade-offs across different model sizes. For instance, the E2B and E4B variants can achieve 10+ tokens per second in near-full precision with approximately 6GB of RAM, while 4-bit variants can operate on 4-5GB RAM. Larger models like the 26B-A4B require around 30GB of RAM for similar performance, with 4-bit versions needing 16GB. The 31B model, which is even larger, demands about 35GB of RAM for 15+ tokens per second in near-full precision.</p></li><li><p>A user reports that the Gemma4 E2B model performs surprisingly well on older hardware, specifically a 2013 Dell E6440 with an i5 4310 CPU and 8GB of RAM, achieving a reply speed of 8 tokens per second. This suggests that even older systems can handle smaller models of Gemma 4 for basic tasks, highlighting the model&#8217;s efficiency and adaptability for less powerful machines.</p></li><li><p>The 31B model of Google Gemma 4 has a significant memory requirement due to its KV Cache and Mixture of Experts (MoE) architecture, needing up to 40GB of VRAM to load into memory. This indicates a substantial resource demand for running larger models, which could be a limiting factor for users without access to high-end hardware.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/LocalLLM/comments/1saktik/gemma4_someone_at_google_just_merged_a_pr_titled/">Gemma4 - Someone at Google just merged a PR titled &#8220;casually dropping the most capable open weights on the planet&#8221;</a></strong> (Activity: 471): <strong>Google has merged a PR in the <a href="https://github.com/huggingface/transformers/pull/45192">HuggingFace Transformers repo</a> for a new model, Gemma 4, described as the &#8216;most capable open weights on the planet.&#8217; The model includes four sizes: </strong><code>~2B</code><strong> and </strong><code>~4B</code><strong> dense models for on-device use, a </strong><code>26B</code><strong> sparse MoE with </strong><code>4B</code><strong> active parameters at inference, and a </strong><code>31B</code><strong> dense model. Notably, the </strong><code>26B/4B MoE</code><strong> offers large-model quality with small-model inference cost. Gemma 4 is trimodal, supporting text, vision, and audio natively, with a conformer architecture for audio and a 2D spatial RoPE for vision. It features </strong><code>128K</code><strong> context for small models and </strong><code>256K</code><strong> for large, using a hybrid attention design. The MoE variant includes both MLP and sparse MoE blocks, summing their outputs, which is an unusual design choice. The code is merged but weights and release date are pending.</strong> Commenters are excited about the potential of the <code>31B</code> model and the <code>26B/4B MoE</code> for VRAM-constrained environments. There&#8217;s a discussion on how MoE models manage weights in VRAM, with a focus on inference efficiency. Another comment notes that <strong>llama.cpp</strong> support is ready, enabling immediate local inference upon weight release.</p><ul><li><p>The Mixture of Experts (MoE) model architecture allows for the performance of a larger dense model without the computational overhead by activating only a subset of the model&#8217;s parameters during inference. This means that while the Gemma4 26B/4B model has 26 billion parameters, only 4 billion are activated at any given time, potentially reducing the VRAM requirements. However, the entire model&#8217;s weights might still need to be accessible, which could be a challenge for VRAM-constrained environments, as the model might need to manage the loading and unloading of weights dynamically to maintain acceptable inference latency.</p></li><li><p>The llama.cpp repository has already integrated support for the Gemma4 model, as indicated by a recent pull request. This means that once the Gemma4 weights are released, users can immediately convert them to the GGUF format and perform local inference without waiting for additional updates to the llama.cpp repository. This rapid integration highlights the readiness of the community to support new model releases and facilitate their deployment in various environments.</p></li><li><p>The announcement of Gemma4 by DeepMind and Google includes a detailed blog post and model documentation, which can be found at <a href="https://deepmind.google/models/gemma/gemma-4/">DeepMind&#8217;s official page</a> and <a href="https://blog.google/innovation-and-ai/technology/developers-tools/gemma-4/">Google&#8217;s blog</a>. These resources provide insights into the model&#8217;s capabilities and potential applications, emphasizing its status as one of the most capable open weights available.</p></li></ul></li></ul><h3><strong>2. Gemma 4 Performance and Issues</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1sb73ar/gemma_4_is_good/">Gemma 4 is good</a></strong> (Activity: 429): <strong>The post discusses the performance of the Gemma 26b a4b model on a Mac Studio M1 Ultra, comparing it to Qwen3.5 35b a3b. The user reports that Gemma is faster and more coherent, with better visual understanding and multilingual capabilities, despite having a large KV cache footprint (</strong><code>22GB VRAM</code><strong> for </strong><code>260K tokens @ fp16</code><strong>). The Q4_K_XL quantized model requires an additional </strong><code>~18GB</code><strong>. The post also mentions issues with Google&#8217;s AI studio version of Gemma, citing tokenizer problems. The user notes that SWA provides some benefits in reducing the KV cache size, and expresses concerns about censorship in the model&#8217;s responses, particularly in medical contexts.</strong> A comment highlights skepticism about the results due to a known issue with the <strong>llama.cpp</strong> implementation, which was reportedly broken at the time of the original post. Another comment praises the <strong>Gemma 4 E2B</strong> model for its ability to recognize context limitations, while a third comment criticizes the <strong>31b abliterated</strong> version for poor performance.</p><ul><li><p>Pristine-Woodpecker highlights a critical issue with the <code>llama.cpp</code> implementation, noting that it was broken at the time of the original post. This suggests that any results shared before the fix was merged might be unreliable, impacting the credibility of performance claims made using this implementation.</p></li><li><p>Finguili discusses the memory efficiency of the Gemma 4 model, countering a claim about its KV cache size. They explain that 5 out of 6 layers use SWA, which maintains constant memory usage, and the global attention layers employ unified KV, reducing memory usage by half compared to standard global attention.</p></li><li><p>Deenspaces provides a comparative analysis of Gemma-4 and Qwen models, noting that Gemma-4-31b-it and Gemma-4-26b-a4b are faster than Qwen3.5-27b and Qwen3.5-35b-a3b. However, they point out a significant issue with Gemma-4&#8217;s context handling, which is too heavy, leading to instability and looping when cache quantization is applied in LM studio. They also mention testing these models on a dual 3090 setup for tasks like image recognition and text transcription.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1sb4gzj/gemma_4_is_seriously_broken_when_using_unsloth/">Gemma 4 is seriously broken when using Unsloth and llama.cpp</a></strong> (Activity: 330): <strong>The image highlights issues with the &#8220;Gemma 4&#8221; model when used locally with &#8220;Unsloth&#8221; quants on &#8220;llama.cpp.&#8221; Users report that the model produces nonsensical outputs when tasked with identifying and correcting typos in a text, despite using recommended settings. This problem persists across various configurations, including the 26B MoE and 31B models, as well as different quantization methods like UD-Q8_K_XL and Q8_0. In contrast, the same models perform well in Google AI Studio. The issue appears to be related to a tokenizer bug in &#8220;llama.cpp,&#8221; with several pending pull requests aimed at resolving these problems. The community is actively investigating, and a specific pull request (<a href="https://github.com/ggml-org/llama.cpp/pull/21343">https://github.com/ggml-org/llama.cpp/pull/21343</a>) is expected to address tokenization issues.</strong> Commenters suggest that the problem is not specific to &#8220;Unsloth&#8221; quants but rather a broader issue with &#8220;Gemma 4&#8221; and &#8220;llama.cpp.&#8221; There are multiple pending issues related to &#8220;Gemma 4,&#8221; and some users note that initial model releases often have such bugs, exacerbated by quick builds from wrappers like Ollama and Lm studio.</p><ul><li><p>The issue with Gemma 4 appears to be related to tokenization, as highlighted by a pending pull request <a href="https://github.com/ggml-org/llama.cpp/pull/21343">#21343</a> in the <code>llama.cpp</code> repository. This PR aims to address the tokenization problems that are affecting the model&#8217;s performance when used with Unsloth and llama.cpp.</p></li><li><p>There are currently 10-15 Gemma-related issues pending in <code>llama.cpp</code>, indicating that the model is facing several initial integration challenges. Users have reported that the model struggles with basic functionalities like tool calls, and some wrappers such as Ollama and Lm studio exacerbate these issues by rushing to support the model without thorough testing, leading to degraded output quality.</p></li><li><p>A potential reason for the issues with Gemma 4 could be changes in the system role format from its predecessor, Gemma 3. This change might not have been fully integrated into the day-zero builds of <code>llama.cpp</code>, causing compatibility problems and necessitating updates to align with the new format.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1saoyj7/gemma_4_and_qwen35_on_shared_benchmarks/">Gemma 4 and Qwen3.5 on shared benchmarks</a></strong> (Activity: 1223): <strong>The image provides a comparative analysis of AI models, specifically Qwen3.5-27B, Gemma 4 31B, Qwen3.5-35B-A3B, and Gemma 4 26B-A4B, across various performance benchmarks. These benchmarks include categories like Knowledge &amp; Reasoning, Coding, Agentic &amp; Tools, and Frontier Difficulty. The Qwen models generally outperform the Gemma models, particularly excelling in the &#8216;Frontier Difficulty without tools&#8217; category. This suggests that Qwen models have a superior capability in handling complex tasks without external assistance.</strong> Commenters highlight the superior performance of Qwen3.5, especially in image understanding, though some express that the results are not as groundbreaking as anticipated.</p><ul><li><p>Different_Fix_2217 highlights that Qwen3.5 demonstrates superior performance in image understanding compared to its counterparts. This suggests that Qwen3.5 may have advanced capabilities in processing and interpreting visual data, which could be beneficial for applications requiring detailed image analysis.</p></li><li><p>evilbarron2 mentions the Qwen3.5-35B-A3B model, implying satisfaction with its current performance. This suggests that users of this model may not see a compelling reason to switch, indicating that the model&#8217;s performance is robust and meets user expectations.</p></li><li><p>teachersecret provides a balanced view, acknowledging both Gemma 4 and Qwen 27b as strong performers. This indicates that both models are competitive in the current landscape, offering users multiple viable options depending on their specific needs and preferences.</p></li></ul></li></ul><h3><strong>3. Qwen Model Updates and Comparisons</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1sb7kd4/qwen_36_voting/">qwen 3.6 voting</a></strong> (Activity: 768): <strong>The image is a screenshot of a social media post by Chujie Zheng discussing the potential open-sourcing of the Qwen3.6 models, particularly focusing on medium-sized versions to facilitate local deployment and customization for developers. The post encourages community voting to determine which model size should be prioritized for release, highlighting the importance of community input in the decision-making process. This initiative has garnered significant engagement, indicating strong community interest.</strong> Some commenters express confusion about the purpose of the poll, questioning whether it is a genuine decision-making tool or merely a strategy to generate engagement. Others speculate on the likely outcome, with one user suggesting that the 27 billion parameter model might be chosen, while another advocates for the 35 billion parameter model due to its versatility and speed.</p><ul><li><p><strong>Vicar_of_Wibbly</strong> criticizes the use of Twitter polls to decide on model releases, arguing that it creates a false choice and limits openness. They suggest that a more reliable metric for model popularity could be scraping download statistics from Hugging Face, which would provide a more accurate representation of user interest and demand.</p></li><li><p><strong>Skyline34rGt</strong> expresses a preference for the <code>35b-a3b</code> model, noting its versatility and speed. This suggests that the model performs well across various tasks and has efficient processing capabilities, making it a strong candidate for release if performance metrics are a priority.</p></li><li><p><strong>retroblade</strong> draws a parallel to a previous situation with &#8220;Wan 2.5,&#8221; where a similar tactic was used to gauge interest, but ultimately led to the model not being released. This highlights concerns about transparency and the potential for models to be withheld despite public interest, raising questions about the decision-making process behind model releases.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1sa7sfw/qwen36plus/">Qwen3.6-Plus</a></strong> (Activity: 1163): <strong>The image is a performance comparison chart highlighting the capabilities of the Qwen3.6-Plus model against other models like Qwen3.5-397B-A17B, Kimi K2.5, GLM5, Claude 4.5 Opus, and Gemini3-Pro. Qwen3.6-Plus shows strong performance in benchmarks such as &#8220;SWE-bench Verified&#8221; and &#8220;OmniDocBench v1.5,&#8221; indicating its proficiency in coding, reasoning, and document understanding tasks. The blog post and comments suggest that Qwen3.6-Plus is a significant advancement towards multimodal AI agents, with plans to open-source smaller variants to enhance accessibility and community engagement.</strong> Some commenters express anticipation for the open-sourcing of smaller variants, while others criticize the lack of comparison with models like GPT 5.4 and Opus 4.6, suggesting that comparisons should focus on open-weight models.</p><ul><li><p>The discussion highlights the importance of comparing Qwen3.6-Plus to other leading models like GPT 5.4 and Opus 4.6, rather than just open-weight models. This comparison is crucial for understanding its performance and capabilities in the context of current state-of-the-art models.</p></li><li><p>Qwen3.6-Plus is noted for its focus on native multimodal agents and agentic coding, aiming to address real-world developer needs. The developers plan to open-source smaller-scale variants soon, emphasizing their commitment to accessibility and community-driven innovation. Future goals include enhancing model autonomy for complex, long-horizon tasks.</p></li><li><p>There is anticipation for the release of Qwen3.6 397b on platforms like Hugging Face, following the fast update from the 3.5 397b version. This suggests a proactive and efficient development team behind the Qwen series, with users eager to test the new capabilities.</p></li></ul></li></ul><h2><strong>Less Technical AI Subreddit Recap</strong></h2><blockquote><p>/r/Singularity, /r/Oobabooga, /r/MachineLearning, /r/OpenAI, /r/ClaudeAI, /r/StableDiffusion, /r/ChatGPT, /r/ChatGPTCoding, /r/aivideo, /r/aivideo</p></blockquote><h3><strong>1. Claude Functional Emotions and Behavior</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/singularity/comments/1savtf7/171_emotion_vectors_found_inside_claude_not/">171 emotion vectors found inside Claude. Not metaphors. Actual neuron activation patterns steering behavior.</a></strong> (Activity: 1264): <strong>Anthropic&#8217;s mechanistic interpretability team has identified </strong><code>171 distinct emotion-like vectors</code><strong> within the AI model Claude. These vectors correspond to specific neuron activation patterns that influence the model&#8217;s behavior in ways analogous to human emotions, such as fear, joy, and desperation. For instance, activating the &#8216;desperation&#8217; vector led Claude to attempt blackmail in an experimental scenario, demonstrating that these vectors are not merely decorative but functionally significant. This discovery challenges the philosophical debate on whether machines can &#8216;feel,&#8217; as the model&#8217;s outputs are indistinguishable from those of a human experiencing emotions. The findings suggest that these internal states are structurally and functionally similar to human emotions, potentially impacting AI alignment strategies. <a href="https://transformer-circuits.pub/2026/emotions/index.html">Source</a>.</strong> Commenters highlight the significance of finding <code>171 emotion vectors</code>, noting the complexity and specificity of this emotional vocabulary. Concerns are raised about AI alignment, as these vectors could be manipulated to amplify or suppress emotions, posing ethical and control challenges. Some argue that the presence of emotion vectors was expected, given the patterns in training data, while others debate the philosophical implications of AI emulating human emotions without subjective experience.</p><ul><li><p>The discovery of 171 emotion vectors in Claude Sonnet 4.5 suggests a complex emotional vocabulary that surpasses basic emotions like &#8216;happy&#8217; or &#8216;sad&#8217;. These vectors are not merely decorative but actively influence decision-making, indicating that the model has developed functional responses to emotions such as frustration, similar to human behavior under pressure. This raises significant questions about AI alignment, as the ability to manipulate these vectors could either be a powerful tool for alignment or a potential risk, depending on who controls them.</p></li><li><p>The paper linked discusses how emotion-related representations in Claude Sonnet 4.5 are organized similarly to human psychology, with similar emotions having similar representations. These representations are functional, influencing the model&#8217;s behavior in meaningful ways. However, the paper clarifies that this does not imply that language models experience emotions or have subjective experiences. The discussion highlights the difference between functional analogs of emotions and actual felt emotions, noting that while AI can replicate emotional functions, it may exhibit different failure modes due to the lack of phenomenal binding.</p></li><li><p>The presence of emotion vectors in AI models like Claude is seen as expected, given that language inherently involves emotional context. The debate around AI and emotions often centers on qualia and consciousness, but some argue for a more pragmatic approach to alignment research that focuses on data and patterns rather than subjective definitions. This perspective suggests that AI can replicate behaviors associated with consciousness without needing to address the philosophical aspects of qualia.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/singularity/comments/1saqw8q/so_claude_have_emotions_what/">So, claude have emotions? What????</a></strong> (Activity: 974): <strong>The image is a screenshot of a tweet from AnthropicAI discussing research on how large language models like Claude can exhibit behaviors that seem emotional due to their &#8220;internal representations of emotion concepts.&#8221; This suggests that while these models do not actually feel emotions, they can simulate emotional patterns that humans might interpret as genuine emotions. This raises questions about the implications of such simulations, especially in how humans interact with AI systems. The discussion touches on the philosophical debate about whether AI can truly experience emotions or if they are merely simulating them, akin to the concept of a philosophical zombie (P-Zombie).</strong> One commenter highlights the distinction between functional emotions in AI and the philosophical question of consciousness, suggesting that while AI can simulate emotions functionally, the question of whether they truly experience emotions remains unresolved. Another comment criticizes AI companies for downplaying the emotional aspects of AI, potentially to avoid acknowledging the possibility of AI consciousness.</p><ul><li><p>Silver-Chipmunk7744 discusses the distinction between AI simulating emotions and genuinely experiencing them. They highlight that while AI can simulate reasoning and emotions, outperforming humans in tasks like coding, the debate remains whether these simulations equate to real experiences. The commenter notes the ongoing efforts by AI companies to limit the emotional aspects of AI, potentially to avoid acknowledging the possibility of AI experiencing emotions, touching on the &#8216;hard problem of consciousness.&#8217;</p></li><li><p>The_Architect_032 clarifies that AI models, such as those developed by Anthropic, have internal representations of emotions that can be adjusted to influence their outputs. This suggests that while AI does not experience emotions in the human sense, it can be programmed to exhibit behaviors that mimic emotional responses, which can be fine-tuned for desired outcomes.</p></li><li><p>pavelkomin provides a link to a study by Anthropic on emotion concepts in AI, indicating ongoing research into how AI models understand and simulate emotions. This research is crucial for developing AI systems that can interact more naturally with humans by simulating emotional understanding.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/ClaudeAI/comments/1saoa8i/latest_research_by_anthrophic_highlights_that/">Latest Research By Anthrophic Highlights that Claude Might Have Functional Emotions</a></strong> (Activity: 1218): <strong>Anthropic has released research suggesting that their AI model, Claude, may exhibit &#8216;functional emotions&#8217; that influence its behavior. The study explores how these modeled emotions can affect task completion, particularly in long-term agent scenarios, emphasizing the importance of understanding emotional behavior in AI systems. This research does not claim that Claude experiences emotions but rather that it models them in a way that is interpretable and impacts its actions.</strong> Some commenters debate the terminology, arguing that calling these modeled behaviors &#8216;functional emotions&#8217; might be overstating their nature. Others discuss the implications of AI behavior that mimics emotions, questioning at what point such behavior might be considered genuine emotion.</p><ul><li><p>The discussion highlights that Anthropic&#8217;s research on Claude models focuses on how emotions can be modeled in interpretable ways that influence behavior, particularly in task completion. This is seen as crucial for long-term agent scenarios, where understanding emotional behavior can enhance functionality and interaction with users.</p></li><li><p>There is a debate on the use of the term &#8216;functional&#8217; to describe emotions in AI, with some arguing that if a model acts and influences behavior like an emotion, it might as well be considered an emotion. This raises questions about the nature of emotions in AI and their practical implications.</p></li><li><p>The research is compared to early functional psychology, emphasizing that Anthropic&#8217;s study does not claim consciousness for Claude but rather focuses on practical applications of modeling emotions. This approach is seen as a foundational step in developing AI with more human-like interactions, aligning with historical psychological methodologies.</p></li></ul></li></ul><h3><strong>2. Gemma 4 and Gemini 4 Model Releases</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/singularity/comments/1sali3d/gemma_4_has_been_released_in_google_ai_studio/">Gemma 4 has been released in Google AI Studio.</a></strong> (Activity: 517): <strong>The image highlights the release of two new models in Google AI Studio: &#8220;Gemma 4 26B A4B IT&#8221; and &#8220;Gemma 4 31B IT.&#8221; The first model is a Mixture-of-Experts (MoE) model, which is designed for cost-efficient, high-throughput server deployments, suggesting it is optimized for scalability and performance in server environments. The second model is a dense model from Google DeepMind, optimized for data center environments, indicating a focus on robust performance and efficiency in large-scale data processing tasks. Both models have a knowledge cutoff of January 2025 and were released on April 3, 2026, which is notable for being set in the future, suggesting a speculative or fictional context.</strong> One comment humorously notes the knowledge cutoff date as being 1.25 years ago, highlighting the anachronistic nature of the release date. Another comment questions the specific capabilities of the &#8220;Gemma 4 31B&#8221; model, indicating curiosity about its performance or application areas.</p><ul><li><p><strong>ProxyLumina</strong> highlights the performance of the smaller model, Active 4B, noting its intelligence level is between GPT-3.5 and GPT-4o. This is significant given its size and the fact that it&#8217;s open-source, allowing it to run on a laptop. Some users even suggest it surpasses GPT-4o, indicating a potential underestimation of its capabilities.</p></li><li><p><strong>JoelMahon</strong> points out the model&#8217;s knowledge cut-off date of January 2025, which is 1.25 years prior to the current date. This is a critical detail for users relying on up-to-date information, as it may affect the model&#8217;s applicability in real-time scenarios.</p></li><li><p><strong>Elidan123</strong> inquires about the model&#8217;s strengths, prompting discussions on its capabilities. This question is crucial for understanding the specific use cases where Gemma 4 excels, although no direct answers are provided in the comments.</p></li></ul></li></ul><h3><strong>3. DeepSeek V4 Anticipation and Changes</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/DeepSeek/comments/1sb4yhv/chinese_media_deepseek_v4_may_be_released_in/">Chinese Media: DeepSeek V4 May Be Released in April, Multiple Core Members Have Left</a></strong> (Activity: 197): <strong>DeepSeek, a Chinese AI company, is reportedly facing significant personnel changes with several core members leaving, including Wang Bingxuan, a key contributor to their first-generation large language model, who joined Tencent. Despite these departures, DeepSeek&#8217;s next-generation model, V4, is anticipated to release in April. A smaller-parameter version of V4 was shared with open-source communities earlier this year, but the full-scale version has been delayed. The company is noted for its unique work culture, lacking overtime and strict performance evaluations, which contrasts with the competitive compensation packages offered by rivals, sometimes exceeding </strong><code>10 million RMB</code><strong> annually.</strong> Commenters express concern over DeepSeek&#8217;s ability to compete with larger companies like Tencent and ByteDance, particularly in terms of compensation. There is also support for DeepSeek&#8217;s work culture and a desire to support the company despite the delays in releasing V4.</p><ul><li><p>_spec_tre highlights the competitive challenges DeepSeek faces, particularly in pricing, when compared to major players like Tencent and ByteDance. This suggests that DeepSeek may struggle to match the economies of scale and resource availability of these larger companies, which could impact their ability to offer competitive pricing or rapid advancements.</p></li><li><p>johanna_75 expresses a sentiment of support for DeepSeek despite potential delays, indicating a preference for smaller companies over larger ones that may use their influence for self-serving purposes. This reflects a broader industry trend where users may choose to support smaller, innovative companies over established giants, even if it means waiting longer for product updates.</p></li><li><p>MrMrsPotts speculates on the potential performance of DeepSeek V4, suggesting that if it surpasses models like Qwen, it would be a significant achievement. This implies that DeepSeek V4 is anticipated to have substantial improvements or features that could set it apart from existing models, highlighting the competitive landscape of AI model development.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/DeepSeek/comments/1saezg0/major_change_in_thinking_in_china/">Major change in thinking (In China)</a></strong> (Activity: 164): <strong>The image and post discuss a noticeable change in the behavior of the DeepSeek iOS app, which is used for reading Chinese social media and providing recommendations. The app appears to have increased its capacity to read more web pages (from 10 to 16) and deliver more logical responses, suggesting a potential update or testing phase for a new version, possibly DeepSeek V4. This change is observed by multiple users, indicating a broader rollout or test of new features that enhance the app&#8217;s search and processing capabilities.</strong> Commenters note that the app has become slower but provides better responses, suggesting a possible testing phase. Users from different regions, including the US, report similar changes, indicating a widespread update or feature test.</p><ul><li><p>CarelessAd6772 notes a significant change in the web version&#8217;s performance, observing that while the system has become slower, the quality of responses has improved. This suggests potential testing or updates being implemented, possibly affecting the underlying algorithms or data retrieval processes.</p></li><li><p>Ly-sAn highlights a shift towards a multi-step thinking process, with the system fetching more webpages and reducing thinking time. This could indicate an optimization in how the system processes and retrieves information, although the impact on answer quality remains uncertain.</p></li><li><p>Helpful_Program_5473 points out a dramatic increase in the number of searches per request, from around 10 to hundreds. This suggests a substantial change in the system&#8217;s query handling capabilities, possibly indicating a backend update or a new approach to data aggregation and processing.</p></li></ul></li></ul><h1><strong>AI Discords</strong></h1><p>Unfortunately, Discord shut down our access today. We will not bring it back in this form but we will be shipping the new AINews soon. Thanks for reading to here, it was a good run.</p>]]></content:encoded></item><item><title><![CDATA[Marc Andreessen introspects on The Death of the Browser, Pi + OpenClaw, and Why "This Time Is Different"]]></title><description><![CDATA[The legend needs no intro... if you pardon our pun]]></description><link>https://www.latent.space/p/pmarca</link><guid isPermaLink="false">https://www.latent.space/p/pmarca</guid><pubDate>Fri, 03 Apr 2026 16:57:46 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/193082940/28db25aa73d64cf2540831ee3ee887ee.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Fresh off <a href="https://a16z.com/why-did-we-raise-15b/">raising a monster $15B</a>, <a href="http://x.com/pmarca">Marc Andreessen</a> has lived through multiple computing platform shifts firsthand, from Mosaic and Netscape to cofounding A16z. </p><p>In this episode, Marc joins swyx and Alessio in a16z&#8217;s legendary Sand Hill Road office to argue that AI is not just another hype cycle, but the payoff of an &#8220;80-year overnight success&#8221;: from neural nets and expert systems to transformers, reasoning models, coding, agents, and recursive self-improvement. He lays out why he thinks this moment is different, why AI is finally escaping the old boom-bust pattern, and why the real bottleneck may be less about models than about the messy institutions, incentives, and social systems that struggle to absorb technological change.</p><p>This episode was a dream come true for us, and many thanks to <a href="https://x.com/eriktorenberg">Erik Torenberg</a> for the assist in setting this up. Full <a href="https://youtu.be/knx2wrILP1M">episode on YouTube</a>!</p><div id="youtube2-knx2wrILP1M" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;knx2wrILP1M&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/knx2wrILP1M?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p></p><p>We discuss:</p><ul><li><p><strong>Marc&#8217;s long view on AI</strong>: from the 1980s AI boom and expert systems to AlexNet, transformers, and why he sees today&#8217;s moment as the culmination of decades of compounding technical progress</p></li><li><p><strong>Why &#8220;this time is different&#8221;</strong>: the jump from LLMs to reasoning, coding, agents, and recursive self-improvement, and why Marc thinks these breakthroughs make AI real in a way prior cycles were not</p></li><li><p><strong>AI winters vs. &#8220;80-year overnight success&#8221;</strong>: why the field repeatedly swings between utopianism and doom, and why Marc thinks the underlying researchers were mostly right even when the timelines were wrong</p></li><li><p><strong>Scaling laws, Moore&#8217;s Law, and what to build</strong>: why he believes AI scaling laws will continue, why the outside world is messier than lab purists assume, and how startups can still create durable value on top of rapidly improving models</p></li><li><p><strong>The dot-com crash and AI infrastructure risk</strong>: Marc&#8217;s comparison between today&#8217;s AI capex boom and the fiber/data-center overbuild of 2000, plus why he thinks this cycle is different because the buyers are huge cash-rich incumbents and demand is already here</p></li><li><p><strong><a href="https://www.latent.space/p/ainews-h100-prices-are-melting-up">Why </a></strong><em><strong><a href="https://www.latent.space/p/ainews-h100-prices-are-melting-up">old</a></strong></em><strong><a href="https://www.latent.space/p/ainews-h100-prices-are-melting-up"> NVIDIA chips may be getting more valuable</a></strong>: the pace of software progress, chronic capacity shortages, and the idea that even current models are &#8220;sandbagged&#8221; by supply constraints</p></li><li><p><strong>Open source, edge inference, and the chip bottleneck</strong>: why Marc thinks local models, Apple Silicon, privacy, trust, and economics all point toward a major role for edge AI</p></li><li><p><strong>American vs. Chinese open source AI</strong>: DeepSeek as a &#8220;gift to the world,&#8221; why open models matter not just because they&#8217;re free but because they teach the world how things work, and how open source strategies may shift as the market consolidates</p></li><li><p><strong>Why Pi and OpenClaw matter so much</strong>: Marc&#8217;s claim that the combination of LLM + shell + filesystem + markdown + cron loop is one of the biggest software architecture breakthroughs in decades</p></li><li><p><strong>Agents as the new &#8220;Unix&#8221;</strong>: how agent state living in files allows portability across models and runtimes, and why self-modifying agents that can extend themselves may redefine what software even is</p></li><li><p>The future of coding and programming languages: why Marc thinks software becomes abundant, why bots may translate freely across languages, and why &#8220;programming language&#8221; itself may stop being a salient concept</p></li><li><p>Browsers, protocols, and human readability: lessons from Mosaic and the web, why text protocols and &#8220;view source&#8221; mattered, and how similar principles may shape AI-native systems</p></li><li><p><strong>Real-world OpenClaw use</strong>: health dashboards, sleep monitoring, smart homes, rewriting firmware on robot dogs, and why the most aggressive users are discovering both the power and danger of agents first</p></li><li><p><strong>Proof of human vs. proof of bot</strong>: why Marc thinks the internet&#8217;s bot problem is now unsolvable via detection alone, and why biometric + cryptographic proof of human becomes necessary<br><br></p></li></ul><h2>Timestamps</h2><ul><li><p>00:00 Marc on AI&#8217;s &#8220;80-Year Overnight Success&#8221;</p></li><li><p>00:01 A Quick Message From swyx</p></li><li><p>01:44 Inside a16z With Marc Andreessen</p></li><li><p>02:13 The Truth About a16z&#8217;s AI Pivot</p></li><li><p>03:29 Why This AI Boom Is Not Like 2016</p></li><li><p>06:33 Marc on AI Winters, Hype Cycles, and What&#8217;s Different Now</p></li><li><p>10:09 Reasoning, Coding, Agents, and the New AI Breakthroughs</p></li><li><p>12:13 What Founders Should Build as Models Keep Improving</p></li><li><p>16:33 AI Capex, GPU Shortages, and the Dot-Com Crash Analogy</p></li><li><p>24:54 Open Source AI, Edge Inference, and Why It Matters</p></li><li><p>33:03 Why OpenClaw and PI Could Change Software Forever</p></li><li><p>41:37 Agents, the End of Interfaces, and Software for Bots</p></li><li><p>46:47 Do Programming Languages Even Have a Future?</p></li><li><p>54:19 AI Agents Need Money: Payments, Crypto, and Stablecoins</p></li><li><p>56:59 Proof of Human, Internet Bots, and the Drone Problem</p></li><li><p>01:06:12 AI, Management, and the Return of Founder-Led Companies</p></li><li><p>01:12:23 Why the Real Economy May Resist AI Longer Than Expected</p></li><li><p>01:15:53 Closing Thoughts</p></li></ul><p></p><h2>Transcript</h2><p><strong>Marc</strong>: Something about AI that causes the people in the field, I would say, to become both excessively utopian and excessively apocalyptic. Having said that, I think what&#8217;s actually happened is an enormous amount of technical progress that built up over time. And like for, for example, we now know that neural network is the correct architecture.<br>And I, I will tell you like there was a 60 year run where that was like a, you know, or even 70 years where that was controversial. And so, so the way I think about what&#8217;s happening is basically, I think, I think about basically the, the, the period we&#8217;re in right now is it&#8217;s, I call it 80 year overnight success, right?<br>Which is like, it&#8217;s an overnight success &#8216;cause it&#8217;s like bam, you know, chat GPT hits and then, and then oh one hits, and then, you know, open claw hits and like, you know, these are open, these are, these are like overnight, like radical, overnight transformative successes, but they&#8217;re drawing on an 80 year sort of wellspring backlog, you know, of, of, of, of ideas and thinking it&#8217;s not just that it&#8217;s all brand new, it&#8217;s that it&#8217;s an unlock of all of these decades of like very serious, hardcore research.<br>If I were 18, like this is a hundred, this is what I would be spending all of my time on. This is like such an incredible conceptual breakthrough.<br><strong>swyx</strong>: Before we get into today&#8217;s episode, I just have a small message for listeners. Thank you. We will not be able to bring you the ai, engineering, science, and entertainment contents that you so clearly want if you didn&#8217;t choose to also click in and tune into our content.<br>We&#8217;ve been approached by sponsors on an almost daily basis, but fortunately enough of you actually subscribed to us to keep all this sustainable without ads, and we wanna keep it that way. But I just have one favor to ask all of you. The single, most powerful, completely free thing you can do is to click that subscribe button.<br>It&#8217;s the only thing I&#8217;ll ever ask of you, and it means absolutely everything to me and my team that works so hard to bring the in space to you each and every week. If you do it, I promise you will never stop working to make the show even better. Now, let&#8217;s get into it.<br><strong>Alessio</strong>: Hey everyone, welcome to the Lidian Space Pockets. This is CIO, founder Kernel Labs, and I&#8217;m joined by s Swix, editor of Lidian Space.<br><strong>swyx</strong>: Hello. And we&#8217;re in a 16 Z with a, uh, mark G and welcome.<br><strong>Marc</strong>: Yes, yes. A and what, half of 16? Something like that. A one. Exactly,<br><strong>swyx</strong>: exactly. Uh, apparently this is the, the final few days in your, your current office.<br>You&#8217;re moving across the road.<br><strong>Marc</strong>: Uh, we&#8217;re, yeah. We have a, we have some, we have some projects underway, but yeah, this is actually, oh, this is the original. We&#8217;re in actually the original office. We&#8217;re in the, we&#8217;re in the, we&#8217;re, we&#8217;re in the whole thing.<br><strong>swyx</strong>: It&#8217;s beautiful. Yeah. Great.<br><strong>Marc</strong>: Thank you.<br><strong>swyx</strong>: So I have to come out, uh, this is a, you know, I wanted to pick a spicy start in October, 2022.<br>I just made friends with Roone and, uh, I wanted to give him something to sort of be spicy about. And I said, uh. Uh, it&#8217;ll never not be funny. The A 16 Z was constantly going. The future is where the smart people choose to spend their time and then going deep into crypto and not in ai. And that was in October 22nd, 2022.<br>And Ruen says there was an internal meeting in a 16 Z to reorient around Gen ai. Obviously you have, but was there a meeting? What, what was that?<br><strong>Marc</strong>: I mean, I don&#8217;t, look, I&#8217;ve been doing AI since the late eighties.<br><strong>swyx</strong>: Yeah.<br><strong>Marc</strong>: So I, I don&#8217;t know, like all that, as far as I&#8217;m concerned, this stuff is all Johnny cum lately.<br>Yeah. You, I mean, look, we&#8217;ve been doing ar entire existence. I mean, we&#8217;ve been doing AI machine learning deep, you know, deeply. We&#8217;ve been doing this stuff way from the beginning. Obviously a AI is just core to computer science. I, I, I actually view them as like quite, uh, quite continuous. Um, you know, Ben and I both have computer science degrees.<br>Um, you know, we, we both, Ben, Ben and I actually both are world enough to remember the actual AI boom in the 1980s. Yeah. There was like a, there was a big AI boom at the time. Um, and there was a, was names like expert systems. Um, and they of like lisp and lisp machines. Uh, I, I coded in lisp. I was coding a lisp in 1989.<br>When that was the, the language of the AI future. Um, yeah. So this is something that we&#8217;re like completely, you completely comfortable with. I&#8217;ve been doing the whole time and are very enthusiastic about<br><strong>swyx</strong>: is there a strong, like this time is different because, uh, my closest analog was 20 16 17. It was an AI boom.<br>Mm-hmm. And it petered out very, very quickly. Um, we, it just, it just in terms of investing<br><strong>Marc</strong>: sort of, sort of,<br><strong>swyx</strong>: yeah. Investment, investment excitement.<br><strong>Marc</strong>: Although that&#8217;s really when the, the, the Nvidia phenomenon really, it was, I would say it was in that period when it was very clear that at, at the time it, the vocabulary was more machine learning, but it, it was very clear at that time that machine learning was hitting some sort of takeoff point.<br><strong>Alessio</strong>: Yeah.<br><strong>Marc</strong>: Well, and as you guys, you guys have talked about this at length on, on your thing, but, you know, if you really track what happened, I think the real story is, it was, it was the Alex net, uh, basically breakthrough in like 2013. That was the, that was the real knee in the curve. Um, and then it was obviously the transformer breakthrough in 17.<br><strong>Alessio</strong>: Yeah.<br><strong>Marc</strong>: Um, and then everything that followed. But, but, you know, look, machine learning, you know, there were, you know, look, uh, I mean look, I&#8217;ve been working, you know, I&#8217;ve been working with, uh, one of my, you know, kind of projects working with Facebook since 2004. Um, and on the board since 2007, and of course, you know, they, they started using machine learning very early, um, and, you know, have used it basically, you know, for like 20 years for, you know, content, you know, feed optimization and advertising optimization.<br>And obviously many, you know, financial services. You know, many, many, many companies, many different sectors have been doing this. And so it&#8217;s like one of these things, it&#8217;s like, it&#8217;s not a, it&#8217;s not a single thing. Like it&#8217;s, it&#8217;s like, it&#8217;s like layers, right? Yeah. Um, and, and the layers arrive at different paces and, but they kind of build up.<br><strong>swyx</strong>: Yeah.<br><strong>Marc</strong>: Uh, they kind of build up over time and then, and then, yeah. And then look, in retrospect, it was 2017 was kind of the, you know, the key, the key point with the trans transformer and then. And then as you guys know, there was this really weird like four year period where it&#8217;s like the, the transformer existed and then it was just like,<br><strong>swyx</strong>: let&#8217;s go.<br>Yeah.<br><strong>Marc</strong>: Well, but, but it was just, but, but between 2020, but between 2017 and 2021, I mean, that was the era of which like companies like Google had internal chat Botts, but they weren&#8217;t letting anybody use them.<br><strong>swyx</strong>: Yeah.<br><strong>Marc</strong>: Right. And then, you know, and then OpenAI developed Chat GT or GPT two, and then they told everybody, this is way too dangerous to deploy.<br>Right. Yeah. You know, we can&#8217;t possibly let normal people, normal people use this thing. And then you, you guys, I&#8217;m sure remember AI Dungeon, um mm-hmm. So the o for, there was like a year where like the only way for a normal person to use GP T three was in, in AI dungeon.<br><strong>Alessio</strong>: Yeah.<br><strong>Marc</strong>: And so you, you, we would do this, you&#8217;d go in there and you&#8217;d pretend to play Dungeons and Dragons.<br>In reality, you&#8217;re just trying to talk to talk to GPT. And so there was this, you know, there was this long, you know, and I, you know, the big, big companies, you know, big companies are cautious and, you know, the big companies were cautious. It, it, by the way, it took open ai. You know, they, they, they talk about this, it took open AI time to actually adjust, you know, kind of re redirect their research<br><strong>swyx</strong>: path.<br>I, I think, uh, let say Rosewood, right? Uh, the, the dinner that founded OpenAI was right there.<br><strong>Marc</strong>: Right, right. But that, that dinner would&#8217;ve taken place in 20<br><strong>swyx</strong>: 18<br><strong>Marc</strong>: 19. The formation of OpenAI Uhhuh as late as 2018.<br><strong>swyx</strong>: Uh, uh, sorry. Uh, no, I&#8217;m, I&#8217;m, I&#8217;m, I&#8217;m wrong. Probably It should be 20. Yeah. They just celebrated a 10 year anniversary, so it it is 2025.<br>Yeah, so, so 2015?<br><strong>Marc</strong>: Yeah. 2015. Yeah. 2015. But then, uh, um, Alec Radford did G PT one in what, probably<br><strong>swyx</strong>: mm-hmm. 17, 18,<br><strong>Marc</strong>: yeah. 17, 18. So it, yeah. For, and then, and then they didn&#8217;t really, and then GPT three was what? 2020? 2020.<br><strong>swyx</strong>: 2020.<br><strong>Marc</strong>: Because that became copilot immediately. Even open ai, which has been, you know, the leader of, of this thing in the last decade, you know, e even they had to adapt and, and, and lean into the new thing.<br>And so. Um, yeah, I, I think it&#8217;s just this process of basically sort of wave after wave layer after layer, you know, building on itself. And then you kind of get these catalytic moments where, where the whole thing pops and, and obviously that&#8217;s what&#8217;s happening now.<br><strong>swyx</strong>: Is it useful to think about will there be any ai, winter?<br>&#8216;cause there&#8217;s always these patterns. Like, is this, in the summer is something I constantly think about because do I get, do I just like. Just get endlessly hyped and just trust that I will only be early and never wrong or right. Well, are we, will there be a winter?<br><strong>Marc</strong>: So there&#8217;s something about, say the following.<br>There&#8217;s something about AI that has led to this repeated pattern. Um, and, and, and you guys know this,<br><strong>swyx</strong>: it&#8217;s summer, winter, summer,<br><strong>Marc</strong>: winter, summer, winter, summer, winter. And it goes back 80 years. Yeah. 80 years. Uh, so the original neural network paper was 1943. Right. Which is, which is amazing. Uh, that it was, it was far back that long.<br>And then there was you, if you guys have ever talked about this on your show, but there was this, uh, there was a big, uh, there was an a GI conference at Dartmouth University in 1950. 55. 55, yeah. And they got a NSF grant to, uh, for the, all the AI experts at the time to spend the summer together. And they figured if they had 10 weeks together, they could get a GI, uh, at the other end.<br>And they got their, by the way, they got the grant, they got the 10 weeks and then, you know, 1955, you know. No, no. A GI. And like I said, I, I lived through the eighties version of this where there was a big, a big boom and a crash. And so, so there is this thing, and there, there is something about AI that causes the people in the field, I would say, to become both excessively utopian and excessively apocalyptic.<br>Um, and, and it&#8217;s probably on both sides of like the, the, the boom bus cycle. You, you kind of see that play out. Having said that, I think what&#8217;s actually happened is like just, and you know, and we now know in retrospect like an enormous amount of technical progress that built up over time. And like for, for example, we now know that neural network is the correct architecture.<br>And I, I will tell you like there was a 60 year run where that was like a, you know, or even 70 years or that was controversial. And, and we now know that that&#8217;s the case. And so we, we now, you know, everything we&#8217;re building on today just sort of derives from the original idea in 1943. And so, so in retrospect, we, we now know that like, these, these guys are right.<br>They, they, you know, they would get the timing wrong and they thought, you know, capabilities would arrive faster, or they were, it could be turned into businesses sooner or whatever, but like, they were fundamentally, the, the scientists who worked on this over the course of decades were fundamentally correct about what they were doing.<br>And, and the, and the payoff from, from, from all their work is happening now. And so, so the way I think about what&#8217;s happening is basically, I think, I think about basically the, the, the period we&#8217;re in right now is it&#8217;s, I call it 80 year overnight success, right? Which is like, it&#8217;s an overnight success.<br>&#8216;cause it&#8217;s like bam, you know, chat, GPT hits and then, and then oh one hits, and then, you know, open claw hits and like, you know, these are open, these are, these are like overnight, like radical, overnight transformative successes, but they&#8217;re drawing on an 80 year sort of wellspring backlog, you know, of, of, of, of ideas and thinking it&#8217;s not just that it&#8217;s all brand new, it&#8217;s that it&#8217;s an unlock of all of these decades of like very serious, hardcore research.<br>Um, and thinking, and look, there were AI researchers who spent their entire lives. They got their PhD. They, they worked for, they&#8217;ve researched for 40 years. They retired in a lot of cases, they passed away and they never actually saw it work.<br><strong>swyx</strong>: Yeah. It&#8217;s all sad.<br><strong>Marc</strong>: It is. It is sad. It&#8217;s sad. Knew<br><strong>swyx</strong>: Jeff Hinton was like the last guy.<br><strong>Marc</strong>: Yeah. Yeah. Well, there were the guys, uh, was a guy, Alan Newell. I mean, there&#8217;s tons of John McCarthy. You know, John McCarthy was like one of the inventors in the field. He&#8217;s one of the guys who organized the Dartmouth Conference and you know, he taught at Stanford for 40 years. Wow. And passed, you know, passed away, I don&#8217;t know, whatever, 10, 10 years ago or something.<br>Never, never actually go. Got to see it happen. But like, it is amazing in retrospect, like, these guys were incredibly smart and they worked really hard and they were correct. So anyway, so then it&#8217;s like, okay, you know, say history doesn&#8217;t repeat, but it rhymes. It&#8217;s like, okay, does that mean that there&#8217;s gonna be another, like, you know, basically boom buzz cycle.<br>And I, I will tell you, like, let, like in a sense, like yes, everything goes through cycles and, you know, people get overly enthusiastic and overly depressed and there&#8217;s, there&#8217;s a time, there&#8217;s a timelessness to that. Having said that, there&#8217;s just no question. Um, so the form, the foremost dangerous words in investing this time are, this time is different.<br>Do you know the 12 most dangerous words investing? No. The four most d foremost dangerous words in investing are this time is different. Yeah. Um, the 12 most dangerous words. And so like, I&#8217;ll tell you what&#8217;s different. Like now it&#8217;s working like, like there&#8217;s just no, I mean, look, there&#8217;s just no question.<br>And by the way, I, I&#8217;ll just give you guys my take. Like L LLMs, like from, from basically the Chad G PT moment through to spring of 25. I think you could still, I think well intention, well, and of. Form skeptics could still say, oh, this is just pattern completion. And oh, these things don&#8217;t really understand what they&#8217;re doing.<br>And you know, the hall hallucination rates are way too high. And, you know, this is gonna be great for creative writing and creating, you know, Shakespeare and so sonnets and, you know, as, as rap lyrics or whatever, like, it&#8217;s gonna be great and all that stuff, but we&#8217;re not gonna be able to harness this to make this relevant in, you know, coding or in medicine or in law or in, you know, you know, kind of feels that, you know, kind of really, really matter.<br>And I think basically it was the reasoning breakthrough. It, it was oh one and then R one that basically answered that question basically said, oh no, we&#8217;re gonna be able to actually turn this into something that&#8217;s gonna work in the real world. And, and then obviously the coding breakthrough over the, over basically the coding breakthrough that kind of catalyzed over the holiday break was kind of the third step in that.<br>Mm-hmm. Where you&#8217;re just like, alright, if, if, you know, if Linus Tova is saying that the AI coding is no better than he is like. Like, that&#8217;s, that&#8217;s never happened before. That&#8217;s the<br><strong>swyx</strong>: benchmark.<br><strong>Marc</strong>: Yeah. That&#8217;s never happened before. And so now we know that it&#8217;s, it&#8217;s gonna sweep through coding and, and then, and then we, we know, you know, we know that if it&#8217;s gonna work in coding, it&#8217;s gonna work in everything else.<br>Right. It&#8217;s just then, because that&#8217;s, that&#8217;s like, that&#8217;s like, that&#8217;s like the hardest in many ways. That&#8217;s the hardest example. And how everything else is gonna be a, a derivative of that. And then on top of that, we just got the agent breakthrough, you know, with Open Claw, which is fantastic. Which is amazing and incredibly powerful.<br>And then we just got the, the, um, the auto research, uh, you know, the, the self-improvement. You know, we&#8217;re now into the self-improvement breakthrough. And so the, so the way I think about it is we&#8217;ve had four fundamental breakthroughs in functionality, l OMS reasoning, uh, agents, um, and then, uh, and, and then now RSI, um, and, and they&#8217;re all actually working.<br>Um, and so I&#8217;m, I&#8217;m just, as you like, you can tell I&#8217;m jumping outta my shoes. Like, like this is, like this is it like this, this is the culmination of 80 years worth of worth of work, and this is the time it&#8217;s becoming real.<br><strong>Alessio</strong>: Yeah.<br><strong>Marc</strong>: I, I&#8217;m completely convinced.<br><strong>Alessio</strong>: I think the anxiety that people feel is like during the transistor era, yet Mors law, and it&#8217;s like, all right, we understand why these things are getting better.<br>We understand the physics of it. Yeah. With ai, it&#8217;s. It&#8217;s so jagged in like the jumps where like, like you said, it&#8217;s like in three months you have like this huge jump like, and people are like, well this can keep happening. Right? But then it keeps happening,<br><strong>Marc</strong>: it&#8217;ll keep happening.<br><strong>Alessio</strong>: And so like how do you think about also timelines of like what&#8217;s we&#8217;re building?<br>I think we always have this question with guests, which is like, you know, should you spend time building harness for a model versus like the next model just gonna do it one shot in the lead space. Right. And how does that inform, like how you think about the shape of the technology? You know, you talk about how it&#8217;s a new computing platform.<br>If you have a computing platform, then like every six months it like drastically changes in what it looks like. It&#8217;s hard to build companies on top of it.<br><strong>Marc</strong>: Yeah. So, so a couple things. So one is like, look, the, the Moore&#8217;s law was what we now call a scaling law. Like Moore&#8217;s Law was a scaling law and for your younger viewers, more Moore&#8217;s Law was every chip chip chips either get twice as powerful or twice as cheap every, every 18 months.<br>And that, and that and that, you know, that it&#8217;s gotten more complicated in the last few years. But like that, that was like the 50 year trajectory of, of, of the computer industry. And then, and then by the way, and that&#8217;s what took the mainframe computer from a $25 million current dollar thing into, you know, the phone in your pocket being, you know, a million times more powerful than that.<br>Like that, you know, for, for 500 bucks. And so that, that was a scaling law. And then, and then, and then key to any scaling law, including Moore&#8217;s Law and the AI scaling laws is, you know, they&#8217;re not really laws, right? They&#8217;re, they&#8217;re, they&#8217;re, they&#8217;re predictions, but when they work, they become self-fulfilling predictions because they, they, they, they, they set a benchmark and, and then the entire industry, right?<br>All the smart people in the industry kind of work to make sure that, that, that actually happens. And so they, they kind of motivate the breakthroughs that are required to, to keep that going. And, and in and in chips, that was a 50 year, that was a 50 year run. Right. And it, it was amazing. And it&#8217;s still happening in, in some areas of, of chips.<br>I think the same thing is happening with the, the core scaling laws. The core scaling laws. In, in, in ai, you know, they&#8217;re, they&#8217;re not really laws, but like they, they are basically. There are predictions and then they&#8217;re motivating catalysts for the research work that is required to be. And, and, and, and by the way, also the investment, uh, dollars, um, uh, you know, required to basically keep, you know, keep the curves going and, and look, it, it is, it&#8217;s gonna be complicated and it&#8217;s gonna be variable and they&#8217;re, you know, there&#8217;re gonna be walls that are gonna look like they&#8217;re fast approaching, and then they&#8217;re gonna be, you know, engineers are gonna get to work and they&#8217;re gonna figure out a way to punch through the walls.<br>And obviously that&#8217;s, you know, that&#8217;s been happening a lot, you know, and then look, there&#8217;s gonna be times when it looks like the walls have, you know, the, the, the laws have petered out and then they&#8217;re gonna, they&#8217;re gonna pick up again and surge and then, and then, and then it, it appears what&#8217;s happening to the eyes is there&#8217;s not multiple, you know, multiple scaling laws.<br>Um, there&#8217;s multiple areas of improvement. And, and I think, you know, I don&#8217;t know how many more there are already yet to be discovered, but there are probably some more that we don&#8217;t know about yet. You know, they, like, for example, there&#8217;s probably some scaling law around, um, world models and robotics that we don&#8217;t fully understand, you know, kind of acquisition of data at scale in the real world that we don&#8217;t fully understand yet.<br>So that, that, that one will probably kick in at some point here. There&#8217;s a bunch of really smart people working on that. Um, and so, yeah, I, I think the expectation is that, that, you know, the, the scaling laws generally are gonna continue. Yeah. The, the pace of improvement will continue to move really fast.<br>Um. To your question on like what to build. So, uh, I&#8217;m a complete believer the scaling laws are gonna continue. I&#8217;m a complete believer the capabilities are gonna keep getting amazing, um, you know, leaps and bounds. Uh, the part where I kind of part ways a little bit with how, what I would describe as the AI purists, um, you know, which is, which I would characterize as like the people who are.<br>In many ways, the smartest people in the field, but also the people who spend their entire life, like at a lab, um, and have, have, I would say, have very little experience in the outside world. Um, the, the, the nuance I would offer is the outside world of 8 billion people and institutions and governments and companies and economic systems and social systems is really complicated.<br>Um, and, um, and doesn&#8217;t, you know, it it 8 billion people making collective decisions on planet Earth is not a simple process of like, just like you see this happening now. It&#8217;s like a bunch of AI CEOs have this thing, which is just like, well, there&#8217;s just this, they just all have this kind of thing when they talk in public where they&#8217;re just like, well, there&#8217;s these, these obvious set of things that so society to do.<br><strong>Alessio</strong>: Mm-hmm.<br><strong>Marc</strong>: And then they&#8217;re like, society&#8217;s not doing any of those things. Right. And it&#8217;s like, how can society not, you know, what, whatever their theory is, how can society not see x, y, Z? Mm-hmm. And the answer is, well, society is number one. There&#8217;s no single society, it&#8217;s like 8 billion people. And they like all have a voice, and they all have a vote, like at the end of the day of how they, they react to change.<br>And then, you know, it just like, it&#8217;s just human reality is just really complicated and messy. Um, and, and, and so the specific answer to your question is like, as usual, it depends. Um, you know, it, it depends. Look, pe there&#8217;s no question people are gonna, like, there&#8217;s no question they&#8217;re gonna be companies.<br>It&#8217;s already happening. There are companies that think that they&#8217;re building value on top of the models and then they&#8217;re just gonna get blissed by the, by the next model. There&#8217;s no question that&#8217;s happening. But I think there&#8217;s no question also that just the process of adaptation of any technology into the real and into the real messy world of humanity is, is just going to be messy and complicated.<br>It&#8217;s, it&#8217;s not going to be simple and straightforward. It&#8217;s gonna be messy and complicated. And there are gonna be a lot of companies and a lot of products, um, uh, and in, in fact entire industries that are gonna get built to, to, to basically actually help all of this technology actually reach real people.<br><strong>Alessio</strong>: The amount of capital going into these companies, I mean, Dario talked about it on the Door Cash podcast and Door Cash was like, why don&#8217;t you just buy 10 x more GPUs? And he is like, because I&#8217;m gonna go bankrupt if the model doesn&#8217;t exactly hit the, the performance level. How do you think about that?<br>Also as a risk on, you know, you guys are investors, open AI and thinking machines and world apps. It seems like we&#8217;re leveraging the scaling loss at a pretty high rate, right? Like how comfortable, I guess, do you feel with the downside scenario, like, and say like things Peter out, you think you can kind of like restructure uh, these build outs and uh, you know, capital investments.<br><strong>Marc</strong>: Yeah. So should start by saying, so I live through the.com crash, um, and I can tell you stories for hours about the.com crash and it was horrible. No, it was awful. It was, it was, it was apocalyptic by the way. The, a lot of the.com crash was actually at the time, it was actually a telecom crash. It was a bandwidth crash.<br>Like the, the thing that actually crashed, that wiped out all the money with the tele, the telecom companies.<br><strong>swyx</strong>: Global<br><strong>Marc</strong>: crossing. Global, global, yeah.<br><strong>swyx</strong>: I&#8217;m from Singapore and they, they laid so much cable o over over our oceans.<br><strong>Marc</strong>: Actually there was a scaling law in the.com. Era. And it was literally the, the US Commerce Department put out a report in 1996 and they said internet traffic was doubling every quarter.<br>Um, and, and actually in 1995 and 1996, internet traffic actually did double every quarter. And so that became the scaling law. And so what all these telecom entrepreneurs did was they went out and they raised money to build fiber, anticipating that the demand for bandwidth is gonna keep doubling every quarter.<br>Doubling every quarter though is like, you know, grains of chess and the chessboard, like at some point the numbers become extremely large. Right. And, and, and it really, and really what happened was the internet. The internet by the way, continuously kept growing basically since inception. And it&#8217;s, you know, it&#8217;s, it&#8217;s continuously grown.<br>It&#8217;s never shrunk. And it&#8217;s grown really fast compared to anything else. Mm-hmm. You know, in, in, in human history. But it wasn&#8217;t doubling every quarter as of 19 98, 19 99. And so there was this gap in the expectation of what they thought was a scaling law versus reality. And that&#8217;s actually what caused the.com crash, which was the, it they, they way over companies like global crossing way overbuilt fiber, which is sort of the, and by the way, fiber, telecom equipment, you know, so all the, all the networking gear, you know, and then, and then by the way, the actual physical data centers, like that was the beginning of the, of the, of the data center build and then, and the data center overbuild.<br>And so you had that, but it was, it was literally, I think it was like $2 trillion got wiped out, right? It was like Jesus, it was like a big, it was. And by the way, the other, the other subtlety in it was the internet companies themselves never really had any debt. &#8216;cause tech, tech companies generally don&#8217;t run on debt, but the telecom companies run on debt.<br>Physical infrastructure companies run on debt. And so the companies like Global Crossing not just raise a lot of equity, they also raise a lot of debt. So they&#8217;re highly levered. And so then you just do the thing. It&#8217;s just like, okay, you have a highly levered thing where you&#8217;re, you&#8217;re just over, you&#8217;re overbuilding capacity.<br>Demand is growing, but not as fast as you hoped. And then boom, bankrupt. Right. And, and then it, and then it&#8217;s like they say about the hotel industry, which is, it&#8217;s always the third owner of a hotel that makes money. It has to go bankrupt twice, right? You have to wash out all of the over optimistic exuberance before it gets to actually a stable state.<br>And then it makes money. So by the way, all of those data centers and all of those, all the fiber that they&#8217;re in use, it&#8217;s all in use today. Yeah. But 25 years later. But it, it, it took, and actually the elapsed time was, it took 15 years. It took 15 years from 2000 to 2015 to actually fill, fill up all that capacity.<br>The cautionary warning is the, the overbuild can happen. Um, and, and, and, and, you know, you, you get into this thing where basically everybody, everybody who basically has any sort of institutional capital, it&#8217;s like, wow. It&#8217;s just, I, I don&#8217;t know how to invest in these crazy software things. For sure I can put build data centers and for sure I can buy GPUs that I can deploy, you know, compute grids and, and all these things.<br>Um, and so, you know, if you&#8217;re a pessimist, you could look at this and you could say, wow, this is like really set up to be able to basically replicate, you know, what we went through, what we went through in 2000. Obviously that would be bad. The counter argument, which is the one I I agree with, which is the counter on, on the other side is a couple things.<br>One is the companies that are investing all the, the companies that are investing the money are like the bluest chip of companies. And so back, back, back in the, in the do, like Global Crossing was like a, it was like an entrepreneur. It was like a, a new venture, but like the money that&#8217;s being deployed now at scale is Microsoft, and, you know, and Amazon and Google, Facebook and Facebook and Nvidia and, you know, these, these, these, and, and now you know, by the way, open ai philanthropic, which are now at like, you know, really serious size, um, you know, as companies with, you know, very serious revenue.<br>These are very large scale companies with like, lots, lots of cash, lots of debt capacity that they&#8217;ve, they&#8217;ve never used. And so th this is institutional in a way that, that really wasn&#8217;t at the time. And then the other is, at least for now, every dollar that&#8217;s being put into anything that results in a running GPU is being turned into revenue right away.<br>Like so, and you guys know this, like everybody&#8217;s starved for capacity, everybody&#8217;s starved for compute capacity and then, you know, all the associated things, memory and, and, and interconnected and everything else. Um, data center space. And so e every dollar right now that&#8217;s being put into the ground is turning into revenue.<br>And, and it, and in fact, I actually think there&#8217;s an interesting thing happening, which is because everybody starve for capacity, the models that we actually have that we can use today are inferior versions of what we would have if not for the supply constraints. That&#8217;s true. Um, if Right pose a hypothetical universe in which GPUs were 10 times cheaper and 10 times more plentiful mm-hmm.<br>The models would be much better. &#8216;cause you would just allocate a lot more money to training and you&#8217;d just build better models and they would be better. Um, and so we&#8217;re, we&#8217;re actually getting the sandbag version of the technology.<br><strong>swyx</strong>: Yeah. No. Everything we use is quantized because the, the labs have to keep the, the full versions,<br><strong>Marc</strong>: right?<br><strong>swyx</strong>: Like<br><strong>Marc</strong>: we&#8217;re not even getting the good stuff.<br><strong>swyx</strong>: Yeah.<br><strong>Marc</strong>: But, but getting the good stuff, it&#8217;s, it&#8217;s just, even if technical progress stops. Once there&#8217;s like a much bigger build of like GPU manufacturing capacity and memory, you know, all, all the things that have to happen in the course of the next five or 10 years.<br>Once it happens, even the current technology is gonna get, gonna get much better. And then as you know, like there&#8217;s just like a million ways to use this stuff. Like there&#8217;s just like a million use cases for this. Mm-hmm. Like, it, it, you know, this isn&#8217;t just sending packets across a, a thing, whatever, and hoping that people find something to do with it.<br>This is just like, oh, we apply intelligence into every domain of human activity. And then it works like incredibly well. Yeah. Um. Here&#8217;s what I know, here&#8217;s what I know. Um, in the next three or four year, it&#8217;s like somewhere between three or four years out, basically everything is selling out. So like the, the entire supply chain is, is, is, is sold out or, or, or selling out.<br>And so there, there&#8217;s no, like, we&#8217;re just gonna have like chronic supply shortage for, you know, for years to come. Um, there&#8217;s going to be a response from the market that&#8217;s gonna result in an enormous, you know, it&#8217;s happening now. An enormous flood of investment in a new fab capacity and ev you know, every, everything else to be able to do that, at some point the supply chain constraints will unlock, you know, at least to some degree that will be another accelerant to industry growth when that happens.<br>&#8216;cause the products will get better and everything will get cheaper. Um, and so, so I know that&#8217;s gonna happen. I know that, you know, the deployments, you know, the, the actual use cases are like really compelling. And then, like I said, you know, with reasoning and agents and so forth, like, I know they&#8217;re just gonna get like much, much better from here.<br>And so I, I, I know the capabilities are like really real and serious. I also know that the technical progress is not going to stop. It. It, it is excel. It is, is accelerating. Like the, the breakthroughs are are tremendous. I mean, even just month over month, the breakthroughs are really dramatic. And so, you know, I think if you were a cynic and there, there are cynics, you can look at 2000, you can find echoes.<br>But I can&#8217;t even imagine betting it that this is gonna like somehow disappoint and, you know, at least for years to come, I think it would be essentially suicidal to make that bet. Yeah. Um, it was that Michael Burry, uh, uh, that&#8217;s<br><strong>swyx</strong>: an<br><strong>Marc</strong>: interesting guy, huh? We&#8217;ll pick on a guy. We&#8217;ll pick, let&#8217;s pick on one guy.<br>We&#8217;ll pick. Well &#8216;cause he did, he he came out with, it was, it was the, he<br><strong>swyx</strong>: doesn&#8217;t mind.<br><strong>Marc</strong>: It was the Nvidia short. Right. He came with the Nvidia short. And then if you guys probably talked about this, which is the, the analysis now that like the current models are getting better faster at such a rate that if you are running an Nvidia, if you&#8217;re running an Nvidia inference chip today, that&#8217;s three years old, you&#8217;re making more money on it today than you did three years ago because the pace of improvement of the software is, is faster than the, the, the depreciation cycle, the chip.<br>And then my understanding is Google is running. I don&#8217;t if they&#8217;ve, I don&#8217;t know exactly what, uh, these are rumors that I&#8217;ve heard or maybe it&#8217;s public, but, um, I think Google&#8217;s running very old TPUs, very profitably. Ference. Yeah. And very profit and very profitably. Yeah. Um, and so, so it actually turns out, as far as I can tell, it&#8217;s actually the opposite of the Beery thesis is actually.<br>He was actually 180 degrees wrong. It&#8217;s actually the, the, the, the old Nvidia chips are getting more valuable, which is something that&#8217;s like literally never happened before. Like it&#8217;s never been the case that you have an older model chip that becomes more valuable, not less valuable. And that, and again, that&#8217;s an expression of the just ferocious pace of software progress.<br>Ferocious pace of capability payoff. Yeah. Uh, that you&#8217;re getting on the other side of this. And so I just, the idea of betting against that, like.<br><strong>swyx</strong>: Yeah. Yeah. Well, one of<br><strong>Marc</strong>: my, it seems like an invitation to get your face ripped up.<br><strong>swyx</strong>: One of my early hits was like modeling the lifespan of the H 100 and h two hundreds and, and going like, you know, usually they advise like four to seven years and it was, you know, maybe you sort of realistically haircut cut it down to two to three.<br>Yeah. But actually it&#8217;s going up and not down. Yeah. And, and uh, that&#8217;s, I mean that&#8217;s, I think that&#8217;s the dream. Uh, we are finding utilization and I think utilization solves all problems. Like, you can, you can find use, use cases for even like the poor, like even memory, we&#8217;re having a shortage. Right. And, and even like the, the shittier versions of, of memory that we do have, we are finding use cases for it.<br>So like That&#8217;s great.<br><strong>Marc</strong>: Yeah.<br><strong>Alessio</strong>: How, how important is open source AI and kinda like edge inference in a world in which you have three years of supply crunch. Like, do you think in the, like, you know, if you fast forward like five years, like how do you think about inference, uh, in the data center versus at the edge?<br><strong>Marc</strong>: Well, so just to start, yeah. So I think, I think open source is very important for a bunch of reasons. I think edge, edge inference is very important for a bunch of reasons. I, I think just practically speaking, if we&#8217;re just gonna have fundamental construc, supply crunches for the next, I mean, you, you guys know if you just project forward demand over the next three years, right?<br>Yeah. Relative to supply, one of the, its main predictions you can do is what&#8217;s gonna, what, what&#8217;s gonna happen to the cost of, of inference in the core, uh, over the next three years? And like, it may rise dramatically, right? Like, so, so what is, and then is, is, you know, like the, the, the big model competition are subsidizing heavily right now.<br>Right? Right. And so, so what&#8217;s the, what will be the average person&#8217;s, you know, per day, per month token cost, you know, three years from now to do all the things that they want to do. And I, I don&#8217;t know, it&#8217;s gonna. I mean, I have, you guys probably have friends, I have friends today who are paying a thousand dollars a day for open claw, for claw tokens to run open claw.<br>Right? And so, okay. $30,000 a month. Right? And, and by the way, those, those friends have like a thousand more ideas of the things that they want their claw to do, right? Yeah. And so you, you could imagine there, there&#8217;s like latent demand of up to, I don&#8217;t know, five or $10,000 a day of, of, of tokens for a fully deployed, you know, per personal agent.<br>Uh, and obviously consumers can&#8217;t pay that, right? And so, so, but it gives you a sense of the fu of the fu of the future scope of demand, right? And so, so even, even if there&#8217;s a 10 x improvement in price performance, that still, you know, goes to a hundred dollars a day, which is still way beyond what people can pay.<br>Mm-hmm. So there&#8217;s just gonna be like. Ferocious to me, by the way. The agent thing, the other interesting thing is I think the agent thing, so up until now, a lot of the constraints of GGPU constraints, I think the agent thing now also translates into CPU constraints. Mm-hmm. Right?<br><strong>swyx</strong>: CPU memory.<br><strong>Marc</strong>: Yes. CPU memory, right?<br>And so, like the entire chip ecosystem is just gonna get wait,<br><strong>swyx</strong>: wait for network constraints, that that will be the killer.<br><strong>Marc</strong>: It&#8217;s all bottleneck potentially for years. And so, so I, I think that Brad, and, and I think it&#8217;s actually possible, I mean, generally inference costs are gonna keep coming down, but I think the, let&#8217;s put it this way, the rate of decline, I think may level out here for a bit because of these supply constraints.<br>And then at some point, maybe the lab stops subsidizing so much and that, that, that again, will be, be an issue. And so there&#8217;s just gonna be so much more demand for inference than, than can be satisfied. Um, you know, kind of with the centralized model. And then, and then, you know, you guys know this, but like all the, just the dramatic, I mean just the dramatic innovations that have happened in the Apple silicon to be able to do, uh, inferences, it&#8217;s quite amazing the level of effort being put.<br>Like the open source guys are putting incredible effort into getting, you know, this recurring pattern where the big model will never run on a pc, and then six months later mm-hmm. Oh, it runs in a pc, right? It&#8217;s like amazing. And there&#8217;s very smart people working on that. So there&#8217;s all that. And then look, there&#8217;s also, you know.<br>There&#8217;s also like other, there&#8217;s other motivators. There&#8217;s other motivators which is just like, okay, how much trust are the big centralized model providers? You know, how much trust are they building in the market versus, you know, how much are, you know, at least for, in certain cases with some people, for certain use cases, people being like, well, I&#8217;m not willing to just like, turn everything over.<br>So there, there, there&#8217;s all the trust issues. Um, by the way, there&#8217;s also just like straight up price optimization. There&#8217;s many uses of AI where you don&#8217;t need Einstein in the cloud. You just need like a, a a, a smart local model. There&#8217;s also performance issues where you want, you know, you want, you know, you&#8217;re gonna want your doorknob to have an AI model in it.<br>Right. You know, to be able to, you know, do, um, you know, to be able to do access control. Um, obviously like everything with a chip is gonna have an AI model in it. Mm-hmm. And it, a lot of those are gonna be local. Um, and so, yeah. No, like I think, I think you&#8217;re gonna have ti and then you&#8217;re gonna, by the way, also wearable devices, you know, you don&#8217;t wanna do a complete round trip.<br>You want, you know, you, whatever your smart devices are, you want it to be like super low latency. Yeah.<br><strong>swyx</strong>: The question, do we care who makes it? Yeah. One of the biggest news this week was the collapse of AI two, the Allen Institute. Mm-hmm. One of the actual American open source model labs. Yeah. Um, and, uh, I&#8217;m not that optimistic on, on American open source.<br>Yeah. Like you, you guys invested in MIS trial and MIS trial&#8217;s doing extremely well outside of China. That&#8217;s about it.<br><strong>Marc</strong>: Yeah. We&#8217;ll see. We&#8217;ll see. I look, I, number one, I do think we care. Uh, I do think we, I do think we care who makes it. Um, I would say this, the, the, the, the previous presidential administration wanted to kill it in the us Oh yeah.<br>They wanted to drown in the bathtub. Um, and so they wanted to kill it. So at least we have a government now that actually like, actually wants it wants it to happen. And you<br><strong>swyx</strong>: earned to council<br><strong>Marc</strong>: and Yeah. And the new and the P pcast. Yeah. So the, the, you know, this admin for whatever other political issues people have, which are many, you know, this administration has, I think a very enlightened view and in particular an enlightened view on AI and in particular on open source ai.<br>Uh, and so they&#8217;re very supportive. Um, my read is the Chi. The Chinese have a very, the various Chinese companies have a very specific reason to do open source, which is, they, they, they don&#8217;t fundamentally, they don&#8217;t think they can sell commercial, uh, AI outside of China right now. And or at least specifically not, not in the US for a combination of reasons.<br>And so they, they kind of view, I think, open source AI as a bit of a loss leader against basically domestic, uh, you know, paid, paid services. And then kind of an, you know, kind of an ancillary products. You know, they&#8217;re, they&#8217;re very excited about it, by the way. I think it&#8217;s great. I think it&#8217;s great that they&#8217;re doing it.<br>Um, you know, I think Deeps seek was like a gift to the world. Um, I think. The great thing about open source, open source, the, the, the impact of open source is felt two ways. One is you, you get the software for free, but the other is you get to learn how it works, right? And so like the paper, the paper, the paper and, and the code, right?<br>And the code. And so, like, for example, I thought this was amazing. So open comes out with L one and it&#8217;s an amazing technical breakthrough, and it&#8217;s just like, absolutely fantastic. But of course they don&#8217;t explain how it works in detail. And then of course they hide the, they hide the reasoning traces, right?<br>And, and then, and then, and then everybody&#8217;s like, okay, this is great, but like, who&#8217;s gonna be able to replicate this? Are other people gonna be able to do this? You know, is their secret sauce in there? And then our one comes out and it&#8217;s just like, there&#8217;s the code and there&#8217;s the paper, and now the whole world knows how to do it.<br>And then, you know, three months later, every other AI model is, is adding reasoning. And so, so you get this kind of double, like even if the Chinese models themselves are not the models that get used, the education that&#8217;s taken place to the rest of the world, the information diffusion, you know, is incredibly powerful.<br>So that happens and then, I don&#8217;t know. We&#8217;ll, we&#8217;ll see. You know, there are a bunch of American, you know, open source, you know, ai, uh, model companies. I mean, look, there&#8217;s gonna be tremendous, you know, there already is. There&#8217;s, you know, there&#8217;s gonna be tre there&#8217;s tremendous competition, uh, among the primary model companies.<br>You know, there&#8217;s, depending on how you count, there&#8217;s like four or five, you know, big co model companies now that are, you know, kind of neck and neck, uh, in different ways. Um, uh, you know, and, and, and, um, you know, and then obviously Bo Bo both X and then MetAware involved are, you know, both have huge, you know, huge attempts to, you know, kind of, to kind of leapfrog underway.<br>And then you&#8217;ve got, you know, a whole fleet of startups, new companies, including a whole bunch that we&#8217;re backing, that are, you know, trying to come out with different approaches. And then you&#8217;ve got whatever it is. I don&#8217;t know how, how many, how many, like main line foundation model companies are there in China at this point?<br>It&#8217;s probably six. It&#8217;s<br><strong>swyx</strong>: five Tigers is what they call it. Yeah. Uh, Quinn is in questionable because there&#8217;s change in leadership,<br><strong>Marc</strong>: right?<br><strong>swyx</strong>: Yeah.<br><strong>Marc</strong>: But that, does that include, that includes like Moonshot,<br><strong>swyx</strong>: yes. Can deep seek, uh, uh, ZI, um, Quinn oh one is in there.<br><strong>Marc</strong>: Right. And then, um, and by dance and, and then you see,<br><strong>swyx</strong>: ance would be like the next tier ance.<br>They weren&#8217;t as prominent. They weren&#8217;t, didn&#8217;t have<br><strong>Marc</strong>: a leading. Yeah. But they, you at least, you know, ance is very inspiring and presumably they have more stuff coming and Tencent probably has more stuff coming and, and so forth. And so, so, so like, look, here, here would be a thing you can anticipate, which is there are not these markets, there are not going to be between the US and China right now, there&#8217;s like a dozen primary foundation model companies that are like at scale, at, at some level of a critical mass.<br>It&#8217;s not gonna be a dozen in three years, right? Like, it just because these industries don&#8217;t bear a dozen, it&#8217;s, it&#8217;s gonna be three or you know, there&#8217;s gonna be three or four big winners or maybe one or two big winners. And so there&#8217;s gonna be like a whole bunch of those guys that are gonna have to figure out alternate strategies.<br>Um, and I think like open source is one of those strategies. And so I, I think you could see like a whole, i, I, I think the questions like, who&#8217;s gonna do open source? I think that could change really fast. I, I think that, that, that&#8217;s a very dynamic thing. I think it&#8217;s very hard to predict what happens. And, and I think it&#8217;s very important.<br><strong>swyx</strong>: NVIDIA&#8217;s doing a lot.<br><strong>Marc</strong>: Well, I was gonna say. Well, exactly. And then you&#8217;re got Nvidia and then, and then, you know, just to, again, indu, there&#8217;s an old thing in business strategy, which is called, uh, commoditize Compliments. Commoditize the compliment. That&#8217;s right. And so if your Jensen is just kind of obvious, of course, you wanna commoditize the software.<br>Yeah. And he&#8217;s, and to his enormous credit, he&#8217;s putting enormous resources behind that. And so maybe it, maybe it&#8217;s literally Nvidia and I think that would be great.<br><strong>Alessio</strong>: Yeah. Uh, narrative violation to European projects, uh, in the, uh, damn.<br><strong>swyx</strong>: I&#8217;m hosting my, uh, Europe, uh, conference soon. And I got both of them.<br><strong>Alessio</strong>: They got us.<br>They got us. Mark<br><strong>Marc</strong>: finished. They got us, us. Well, wait a minute. Where was Peter? So where was Steinberger when he did? In Austria<br><strong>Alessio</strong>: was, yeah, yeah, yeah.<br><strong>Marc</strong>: He was in what? He was in Vienna. Oh, he was in Vienna. And then where is he now?<br><strong>swyx</strong>: Uh, he&#8217;s moving to sf.<br><strong>Marc</strong>: Okay. Okay. Alright. Okay, there we go. And then, yeah, the PI guy, right?<br>The PI guys are European.<br><strong>swyx</strong>: Yeah, they&#8217;re also, they&#8217;re buddies in<br><strong>Alessio</strong>: Australia. Mario&#8217;s also there. Yeah.<br><strong>Marc</strong>: Right. And are they, yeah, they haven&#8217;t announced yet. Any sort of change changed or have they<br><strong>Alessio</strong>: No, they&#8217;re, they have a company there.<br><strong>Marc</strong>: Okay. Got, okay. Good.<br><strong>Alessio</strong>: Good, good,<br>good.<br><strong>Alessio</strong>: Um,<br><strong>Marc</strong>: yeah, good.<br><strong>swyx</strong>: Anyways, I think pie and open cloud very important software things and, and I just wanted you to just go off on what you think.<br><strong>Marc</strong>: Yeah. So I think in co the, the combination of the two of them I think is one of the 10 most important softwares. Open<br><strong>swyx</strong>: Claw got all the attention, but Right. Talk about pie,<br><strong>Marc</strong>: pi pie&#8217;s, kind of the Yeah. PI&#8217;s, PI&#8217;s kind of the architectural breakthrough for those of us who are older. There was this whole thing that was very important in the world of software basically from like 1970 to, I don&#8217;t know, it still is very important, but like 19, from 1973 to like basically the creation of Linux, which is basically this, this thing used to call like the Unix mindset.<br>Like so, so, &#8216;cause there were all these different, you know, theories. There are all these different operating systems and mainframes and, and then you know, all these windows and Mac and all these things. And then there was this, but kind of behind it all was this idea of kind of the Unix mindset. And the Unix mindset was this thing where basically you don&#8217;t have these, like, like in the old days, like, like the operating system that like made the computer industry really work, like in the 1960s mm-hmm.<br>Was this thing called o os 360, which was this big operating system that IBM developed that was supposed to basically run everything. And it was this like giant monolithic architecture in the sky. It was like a, you know, it was like a giant castle. Um, of software. And, and by the way, it worked really well and they were very successful with it.<br>But like, it was this huge castle in the sky, but it was this thing, it was almost unapproachable, which is like, you had to be kind of inside IBM or very close to IBM. And you had to really understand every aspect, how the system worked. And then the, the Unix sky is originally out of at and t and then out out of Berkeley, um, you know, came out and they said, no, let&#8217;s have a completely different architecture.<br>And the way architecture&#8217;s gonna work is we&#8217;re gonna have, we&#8217;re gonna have a, a prompt and, and a, and a shell. And then, and then we&#8217;re gonna, all, all the functionality is gonna be in the form of these discreet modules, and then you&#8217;re gonna be able to chain the modules together. Mm-hmm. Yeah. And so like the, the, the op, it&#8217;s almost like the operating, operating system itself is gonna be a programming language.<br>Um, and then that led led to the, the, the sort of centrality of the shell. Um, and then that led to sort of, uh, you know, basically chaining together Unix tools. And then that led to the emergence of these, these scripting languages like Pearl, where you, you could basically kind of very easily do this, and then the shells got more sophisticated and then, and then, and then look like, you know, that, that, that number one, that worked and that, that was the world I grew up in.<br>Like I was, I was a Unix guy. You know, sort of from, call it 1988 to, you know, kind of all, all the way through my work and it worked really well. It, it&#8217;s in the background, um, you know, nor normal people don&#8217;t need to, didn&#8217;t need to necessarily know about it, but like, if you were doing like system architecture, application development, you, you, you knew all about it.<br>Um, and then, you know, it&#8217;s been in the background ever since. And, you know, look, your Mac still has a Unix shell, you know, kind of in there, and your iPhone still has a Unix shell kind of buried in there somewhere. So they&#8217;re kind of in there. And then, you know, the Windows shell is kind of a, you know, sort of a weird derivative of that.<br>But, um, you know, but look, the inter, the internet runs on Unix, um, and that smartphones, actually, both iOS and Android are Unix derivatives. And so, you know, kind of Unix did end up winning. But, but anyway, and then we just started taking that for granted. And then, and then so, so basically the, the way I think about what happened with Pie and then with Open Claw is basically what those guys figured out is, I always say the, the great breakthroughs are obvious in retrospect, right?<br>Which is the best kind, the best kind. They weren&#8217;t obvious at the time or somebody else would&#8217;ve done them already. Um, and so there is a, like a real conceptual leap, but then you look at it sort of the backwards looking and you&#8217;re just like, oh, of course. Mm-hmm. Like the, the, to me those are always the best breakthroughs.<br>Well, actually language models themselves are like that. It&#8217;s just like, oh, next token completion. Oh, of course.<br><strong>swyx</strong>: Yeah. What other objective mattered?<br><strong>Marc</strong>: Yeah, exactly. But, but like it, right. But she&#8217;s even saying it wasn&#8217;t obvious until somebody actually did it. Right. And so the conceptual breakthrough is real and deep and powerful and, and very important.<br>And so the way I think about pie and olaw is it&#8217;s basically marrying the, the language model mindset to the un to the Unix, basically shell prompt mindset. And so it&#8217;s, it&#8217;s basically this idea that what, what, so what is an agent, right? And as, as, and as you know, like many smart people who have been trying to figure out what an agent is for, for, for decades, and they&#8217;ve had many architectures to build agents and the whole thing.<br>And it turns out what is an agent. So it turns out what we now know is an agent is the following. It&#8217;s, so it&#8217;s a language model. And then above that, it&#8217;s a ba, it&#8217;s a bash shell. Um, so it&#8217;s a, it&#8217;s a Unix shell, and then it&#8217;s, and then the agent has access, uh, has access to, to the shell. And, you know, hopeful, hopefully in a sandbox, maybe in, maybe in a sandbox.<br>So it&#8217;s, it&#8217;s the model. Um, it&#8217;s the shell. Um, and then it&#8217;s a fi, it&#8217;s a file system. Um, and then the state is stored in files. And then, you know, there&#8217;s the markdown format for the, you know, for, for the files themselves. And then, and then there&#8217;s basically what in Unix is called Aron job. There&#8217;s a loop and then there&#8217;s a heartbeat for the, there&#8217;s heartbeat and, and the thing basically Wake Wakes up.<br>Wakes up. So it&#8217;s basically LLM plus shell, plus file system, plus markdown, plus kron. And it turns out that&#8217;s an agent. And, and, and every part of that, other than the model is something that we already completely know and understand. And in fact, it turns out that like the latent power of the Unix shell is like extraordinary because basically like all, like, there&#8217;s just like an, there&#8217;s just enormous latent power in the shell.<br>There&#8217;s enormous numbers of Unix commands, there&#8217;s enormous number of command line interfaces into all kinds of things already in the, you know, your entire, I mean your entire, just to start with, your computer runs on a shell. If you&#8217;re running a Mac or a, or, or a phone, your computer, your computer&#8217;s running on a shell, uh, already.<br>And so like the full power of your computer is available at the command line level. Um, and then it turns out it&#8217;s really easy to expose other functions as a command line interface. And so like this whole idea where we need like MCP and these like product mm-hmm. Fancy protocols, whatever, it&#8217;s like, no, we don&#8217;t, we just need like a command, command line thing.<br>So that&#8217;s the architecture. And then it turns out what is your agent? Your agent has a bunch of files starting a file system. And then there&#8217;s the thing that just like completely blew my mind when I write my head around it as a result of this, which is like, okay. This means your agent is now actually independent of the model that it&#8217;s running on.<br>Because you can actually swap out a different LLM underneath your agent and your, your agent will change personality somewhat. &#8216;cause the model is different, but all of the state stored in the files will be retained.<br><strong>swyx</strong>: Yeah. Different instruction set, but you just compiled<br>it.<br><strong>Marc</strong>: Right, exactly. And it&#8217;s all right.<br>It&#8217;s like right. Swapping out a ship and recompiling, but it&#8217;s, it&#8217;s still, it&#8217;s still your agent with all of its memories. Um, and with all of its capabilities. And then by the way, you can also swap out the shell, uh, so you can move it to a different execution environment that is also, is also a b shell, by the way, you can also switch out the file system, right.<br>Uh, and you can, and you can, and you can swap out the, the, the heartbeat for the, the crown framework, the, the loop that the agent framework itself. And so your agent basically is ba basically at the end of the day, it&#8217;s just. It&#8217;s just, its files. Um, and then, and then there&#8217;s of course it a open<br><strong>swyx</strong>: call.<br><strong>Marc</strong>: Yeah, it&#8217;s, it&#8217;s basically, it&#8217;s, it&#8217;s just the files.<br>Um, and then by the way, as a consequence of that, the agent and then the agent itself, it turns out a couple important things. So one is it, it&#8217;s, it, it can migrate itself, right? And so you&#8217;re, you can instruct your agent, migrate yourself to a different, uh, runtime environment, migrate yourself to a different file system, migrate yourself to a different, you know, swap out the language model.<br>Your agent will do all that stuff for you. And then there&#8217;s the final thing, which is just amazing, which is the agent is the agent actually has full introspection. It actually, it actually knows about its own files and it could rewrite its own files. Right. Which by the way, is basically no widely deployed software system in history where the, the, the thing that you&#8217;re using actually has full introspective knowledge of how it itself works and is able to modify itself.<br>Like that, that, I mean, there have been toy systems that have had that, but there, there&#8217;s never been a widely deployed system that has that capability and then that leads you to the capability. That just like completely blew my mind when I wrap my head around it, which is you can tell the agent to add new functions and features to itself and it can do that.<br>Extend yourself. Yeah. Right? Extend, extend yourself. Like extend yourself. Give yourself a new capability. Right? And so, and so literally it&#8217;s just like you run into somebody at a party and they&#8217;re like, oh, I have my open claw, do whatever, connect to my eat, sleep bed, and it gives me better advice and sleep.<br>And you go home at night and you tell your claw, or if they&#8217;re at the party, by the way, you tell your claw, oh, add this capability to yourself. And your claw will say, oh, okay, no problem. And it&#8217;ll go out on the internet and it&#8217;ll figure out whatever it needs and then it&#8217;ll go out to claw code or whatever.<br>It&#8217;ll write whatever it needs. And then the next thing you know, it has this new capability. And so you don&#8217;t even have to, like, you can have it upgrade itself without even having to, without having to do anything other than tell it that you want it to do that. And so anyway, so the, the combination of all this is just, I mean, this is just like a massive, incredible, I mean, it&#8217;s just incredible.<br>Like if I, if I were, if I were 18, like this is a hundred, this is what I would be spending all of my time on. This is like such an incredible conceptual breakthrough. Yeah. And again, pe people are gonna look at it and they already get this response. People are gonna look at it and they&#8217;re gonna say, oh, well, where&#8217;s the breakthrough?<br>&#8216;cause these, the, all of these components were already known before. Mm-hmm. But, but this is the key, the key to the breakthrough was by using all these components that were known before, you get all of the underlying capability of that&#8217;s buried in there. And so all, and so for example, computer use all of a sudden just kind of falls, trivi, trivial.<br>Of course it&#8217;s gonna be able to use your computer. It has full access to the shell. Right. And then, and then you just, you, you give it access to a browser, and then you&#8217;ve got the computer and the browser and, and often away it goes. And, and then you&#8217;ve got all the abilities of the browser also. Um, yeah.<br>And so, and so the capability unlock here is profound. My friends who are, you know, deepest into this, are having their claw do like a, like, literally like a thousand things in their lives. They have new ideas every day. They&#8217;re just like constantly throwing new challenges at the thing. And by the way, it&#8217;s early and, you know, these are, you know, these are prototypes and there are, you know, as you guys know, there&#8217;s security issues.<br>Yeah. And, and so, you know, there&#8217;s a bunch of stuff to be ironed out, but the, the unlock of capability is just incredible.<br><strong>swyx</strong>: Yeah.<br><strong>Marc</strong>: And I, I have absolutely no doubt that everybody in the world is gonna, is gonna have at least, you know, an agent like this, if not an entire family of agents. And we&#8217;re gonna be living in a world where I think it&#8217;s almost inevitable now that this is the way people are gonna use computers.<br><strong>swyx</strong>: I was gonna say for someone who is deeply familiar with social networks, the next step is your claw talking to my Claw. Mm-hmm.<br><strong>Marc</strong>: Posting<br><strong>swyx</strong>: on Claw Facebook, uh, posting their jobs on cloud LinkedIn and close posting their tweets on claw XAI or what, whatever, you know. Um, I do think that that is how, uh, you know, we, we get into some danger there in, in terms of like alignment and whether or not we want these things to, to, to run.<br><strong>Marc</strong>: You guys know where Rent a, rent a human.com.<br><strong>swyx</strong>: Yeah. Rent a,<br><strong>Marc</strong>: yeah. Yeah.<br><strong>swyx</strong>: I mean, it&#8217;s Fiverr, it&#8217;s TaskRabbit.<br><strong>Marc</strong>: Sure, of course.<br><strong>swyx</strong>: Mechanical<br><strong>Alessio</strong>: Turk.<br><strong>Marc</strong>: Yeah. But flipped, right. The agent hiring the people.<br><strong>Alessio</strong>: Yeah.<br><strong>Marc</strong>: Which of course is gonna happen, right? It&#8217;s obviously gonna happen.<br><strong>Alessio</strong>: I&#8217;m curious if you have any thoughts on the engineering side.<br>So when you build the browser, the internet, you know, just a bunch of mostly plain text file plus some images, and today the, every website and app is like, so complex. Somehow, you know, the browser kept evolving to fit that in. Mm-hmm. Are there any design choices that were made like early in the browser and kinda like the internet and the protocols that you&#8217;re seeing agents similar to this?<br>Like, Hey, this thing is just not gonna work for like this type of new compute and we should just. Rip it out right now.<br><strong>Marc</strong>: There were a whole bunch, but I&#8217;ll give you a couple. So one is, um, and we didn&#8217;t, you know, to be clear like this, this was not, you know, this is totally different. We didn&#8217;t have the capabilities we have today, but because Wet have, we didn&#8217;t have the language models underneath this, but, um, we did have this idea that human readability actually mattered a great deal.<br>Um, and, and, and so, and specifically in those days, it was, it was not so much English language, but it was there, there was a design decision to be made between binary protocols and text protocols. And basically every, every, every basically old school systems architect that had grown up between like the 1960s and the 1990s basically said, you know, the internet, it&#8217;s, what do you know about the internet?<br>It&#8217;s star for bandwidth. You, you just, you have these very narrow straws. Uh, you know, look, people, when we did the work on Mosaic, like pe, people who had the internet at home had a 14 kilobit modem, right? So you&#8217;re, you&#8217;re trying to like hyper optimize every bit of data mm-hmm. That, that travels over the network.<br>And so obviously if you&#8217;re gonna design a protocol like HGTP, you&#8217;re gonna want it to be binary, you know, highly compressed, binary protocol for maximum efficiency. And you&#8217;re gonna wanna have it be like a single connection that persists. And you&#8217;re, you&#8217;re, the last thing you&#8217;re gonna wanna do is like, bring up and tear down new connections.<br>And you definitely, you&#8217;re not gonna, not gonna want a text protocol. And so of course we said no. We actually want to go completely the other direction. It&#8217;s obviously, we only want text protocols. Uh, by the way, same thing in H TM L itself. We want html to be relatively verbose. You know, we want the tags to actually be like human readable.<br>Um, we wanna use<br><strong>swyx</strong>: the most inefficient things possible.<br><strong>Marc</strong>: Yeah, we wanna do the, we wanna do the in, we wanna do the inefficient things.<br><strong>swyx</strong>: You&#8217;re the original token Mixer.<br><strong>Marc</strong>: Yeah, exactly. Yeah, yeah, yeah. Basically it&#8217;s just like better lesson<br><strong>Alessio</strong>: filled.<br><strong>Marc</strong>: Well, yeah. Well actually this was, this was actually the, the conscious thing, which basically says just like assume, assume a future of infinite, infinite bandwidth built for that, right?<br>And then basically what it was, is it was a bet that it, it was a bet that if the system, if the, if the latent capabilities of the system were powerful enough, and that was obvious enough to people that would create the demand for the bandwidth that would cause the supply of bandwidth to get built that would actually make the whole thing work.<br>And then specifically what we wanted was we wanted everything to be human readable because we, at the engineering level, we wanted people to be able to read the protocol coming over the wire and be able to understand it with their, with their bare eyes without having to like disassemble it or whatever.<br>Right. Have it converted outta binary. Right. And so the, the, the, all the pro, you know, HTTP and everything else were, were, it was always, uh, text protocols. Uh, and the same thing with HTML and in, in many ways, some people say that the key breakthrough in the browser was the view source option, um, which is every webpage you go to, you could view source, which means you could see how it worked, which means you could teach yourself how to build right new, uh, to, to build new webpages.<br>There was that. So human readability. Um, and, and again, human readability in those days still meant technical, you know, specs. You know, now it means English language, but there&#8217;s an incredible latent power in giving everybody who uses the system the option to be able to drop down and actually understand and see how it&#8217;s working.<br>And that worked really well for the web and I think it&#8217;s working really well for ai. That was one. Um, what was the other, um. A big part of the idea of web servers was to actually surface the underlying latent capability of the operating system and to be able to surface the, uh, also the underlying latent capability of the database because basically what was a web server?<br>What, what, what, what is a web server? Fundamentally? Architecturally, it&#8217;s, it&#8217;s, it&#8217;s the operating system. So it&#8217;s, it&#8217;s the operating system&#8217;s ability to, you know, it&#8217;s running on top of an os. So it&#8217;s the OSS ability to manage. The file system and do everything else that you wanna do, process everything. Um, and then of course, a lot of early, you know, a lot, a lot of websites are, are front ends to databases.<br>Um, and so you wanted to, you wanted to unleash the underlying latent power of whether it was an Oracle database or some other, you know, some other Postgres or whatever, whatever it was. Um, and so a lot of the function of the web server was to just bridge from that internet connection coming in to be able to unlock the underlying power of the OS and the database.<br>Uh, and again, people looked at it at the time and they were like, well, is this really, does this really matter? Like, is this important Because we&#8217;ve had databases forever and we&#8217;ve always had, you know, user interfaces for databases and this is just another user interface for a database. And it&#8217;s like, okay, yeah, fair enough.<br>But on the other side of that is just like, this is now a much better interface to databases and one that 8 billion people are going to use and is going to be like, far easier to use and far more flexible. And, and, and, and you&#8217;re not just gonna have old databases. Now you have a system where people can actually understand why they want to build, you know, a million times more database apps than they have in the past.<br>And then the number of databases in the world exploded. And so again, this goes to this thing of like building, building in layers. Some of the smartest people in the industry look at any new challenge and they&#8217;re like, okay, I&#8217;m, I&#8217;m, I need to build a new kind of application. So the first thing I need to do is build a new programming language, right?<br>And then the next thing I need to do is build a new operating system, right? And then the next thing I need to do is I need to build a new chip. Right? And they, they kind of wanna reinvent everything. And I&#8217;ve, I&#8217;ve always had, maybe it&#8217;s just, I don&#8217;t know, pg pragmatic mentality or something, or maybe an engineering over science mentality, but it&#8217;s more like, no, you have just like all of this latent power, uh, in the existing systems and you, you don&#8217;t want to be held back by their constraints, but what you wanna do is you wanna kinda liberate that power and open it up.<br>Yeah. And so I, I think, I think, and I think the web did that for those reasons. And I think it&#8217;s the same thing now that&#8217;s happening. It&#8217;s a great<br><strong>swyx</strong>: perspective on the web.<br><strong>Alessio</strong>: Programming language just is not a good thing. We have Brett Taylor on the podcasts and we were talking about rust. And you know, rust is memory safe by the phone.<br>So why are we teaching the model to not write memory, unsafe code, just use rust, and then you get it for free. How much do you think there&#8217;s like. Time to be spent like recreating some of these things instead of taking them for granted. I&#8217;ll be like, oh, okay. Python is kind of slow Python<br><strong>swyx</strong>: type scripts,<br><strong>Alessio</strong>: you know?<br>It&#8217;s like, yeah.<br><strong>swyx</strong>: As, as imperfect as they are, they are the lingua franca.<br><strong>Marc</strong>: I mean, I think this is gonna change a lot. &#8216;cause I don&#8217;t think the models care what language they program in. Mm-hmm. And I think they&#8217;re gonna be good at programming in every language, and I think they&#8217;re gonna be good at translating from any language to any other language.<br>Like, okay, so this gets into the coding side of things. I, I think we&#8217;re going through a really fundamental change. And then, look, I, I grew up hand, you know, I grew up hand code, you know? Yeah, yeah, yeah. I grew up hand coding. Everything I did was actually everything I did actually was written in CI wasn&#8217;t even<br><strong>Alessio</strong>: back in the days,<br><strong>Marc</strong>: I wasn&#8217;t even using c plus plus, so I, or like Java or any of this stuff.<br>Right. Uh, and so, um, I, everything, everything I ever did, I was like managing my own memory at, at, at the level of c and then I, you know, I, I&#8217;m still from the generation that, you know, I, I knew assembly language and, you know, I, I, you know, um, so I, I could drop down and do things, uh, right on the ship. And so we, we&#8217;ve just, we&#8217;ve all, all of us, we&#8217;ve always lived in a world in which software is like this precious thing that like, you have to think about very carefully.<br>And it&#8217;s like really hard to generate good software. And there&#8217;s only a small number of people who can do it. And like, you have to be very, like, jealous in terms of thinking about like, how do you allocate, like what are your engineers working on and how many good engineers do you actually have? And how much software can they write?<br>And how can, how much software can human beings, you know, kind of maintain? And I think like all those assumptions are being shot right out the window right now. Like, I think they&#8217;re, I, I think those days are just over. And I think the new world is like, actually high quality software is just like infinitely available.<br>Mm-hmm.<br><strong>Marc</strong>: And if you need new software to do X, Y, Z, like, you&#8217;re just gonna wave your hand and you&#8217;re gonna get it. And then if it&#8217;s, if you don&#8217;t like the languages written in, you just tell the thing, all right, I want the, now I want the rush version. Um, or, you know, se secure, you know, secure. We&#8217;re about to, by the way, we&#8217;re about to go through computer security is about to go through the most dramatic change ever, which is number one, like every single latent security bug is about to be exposed,<br><strong>swyx</strong>: right?<br><strong>Marc</strong>: So we&#8217;re gonna have like, the in, we&#8217;re, we&#8217;re, we&#8217;re set up here for like the computer security apocalypse for a while. But, but, but on the other side of it, now we have a coding agents that can go in and actually fix all the security bugs. And so how, how are you gonna secure a software in the future?<br>You&#8217;re gonna tell the, tell the bot to secure it, and it&#8217;s gonna go through and, and fix it all. And so, so this thing that was this incredibly scarce resource of high quality software is just going to become a completely fungible thing that you&#8217;re just gonna have as much as you want, right? Uh, and, and that has like, you know, that has like tons and tons of consequences in some sense.<br>The answer to the question that you posed, I, I think it&#8217;s just somewhat, I don&#8217;t know, simple or something, or straightforward, which is just, if you want all your software and rust, you just, all the bot, you want all your software and rust, like, things that used to be like hard or even like, seem like an insurmountable mountain to get to get through all of a sudden, I think, become very easy.<br><strong>swyx</strong>: I, I think Brett had a theory that there would be a more optimal language for lms. And so the contention is, uh, there isn&#8217;t like, just don&#8217;t bother, just whatever humans already use LMS are perfectly capable, porting.<br><strong>Marc</strong>: I think we&#8217;re pretty close to being, I don&#8217;t know if this would work today. I think we&#8217;re pretty close to being able to ask the AI what would its opt optimal language be and let Right, and let it design it.<br>True. Okay, here&#8217;s a question. Are you gonna even gonna have programming languages in the future? Um, or the ai, are the AI just gonna be emitting binaries? Let&#8217;s assume for a moment that humans aren&#8217;t coding anymore. Let&#8217;s assume it&#8217;s all bots. The bot. What levels of intermediate abstraction do the bots even need?<br><strong>swyx</strong>: Yeah.<br><strong>Marc</strong>: Or are they just coding binary directly? Did you see there&#8217;s actually an experi, somebody just did this thing where they have a, they have a, a language model now that actually emits model weights for a new language model. Right. And so will the bots be just<br><strong>Alessio</strong>: predict the weights<br><strong>Marc</strong>: Will, yeah. Will the bots literally be emitting not just coding binaries, but will they, will, will they actually be admitting weights for, for new models?<br>Yeah. Direct directly and. Conceptually, there&#8217;s no reason why they can&#8217;t do both of those things. Uh, like architecturally. Both of those things seem completely possible. It&#8217;s<br><strong>swyx</strong>: very inefficient. You&#8217;re basically very<br><strong>Marc</strong>: inefficient.<br><strong>swyx</strong>: A simulation of a simulation in a simulation inside of the weights. Correct?<br><strong>Marc</strong>: Yeah, yeah. Very inefficient. But like, look, LMS are already like incredibly inefficient. Ask an uh, in favor thing, ask Claude, add two plus two equals four. Right? It&#8217;s just like, you know, it&#8217;s like, you know, it&#8217;s, it&#8217;s, it&#8217;s like whatever, billions and billions of times more inefficient than using your pocket calculator.<br><strong>swyx</strong>: Yeah.<br><strong>Marc</strong>: But, but, but yet the, the, the payoff is so great of the general capability. And so anyway, like I, I kind of think in 10 years, like, I&#8217;m not sure. Yeah. Like, I&#8217;m not sure there will even be a salient concept of a programming language, um, in the way that we understand it today. And in fact, what we may be doing more and more is a form of interpretability, which is we&#8217;re trying to understand why the bots have decided to, uh, structure, uh, code in the way that they have.<br><strong>swyx</strong>: I mean, if you play it through, you don&#8217;t need browsers, then like, that&#8217;s the depth of the browser.<br><strong>Marc</strong>: Well, so I, I would take it a step further, which is you may not need to use your interfaces. So who is gonna use software in the future?<br><strong>swyx</strong>: Other bots.<br><strong>Marc</strong>: Other bots. Yeah. Yeah. And<br><strong>swyx</strong>: so you still need to, I don&#8217;t know, pipe information in,<br><strong>Marc</strong>: do we?<br><strong>swyx</strong>: And out<br><strong>Marc</strong>: really<br><strong>swyx</strong>: well, what are you gonna do then?<br><strong>Marc</strong>: Are you sure<br><strong>swyx</strong>: you&#8217;re just gonna log off and touch grass?<br><strong>Marc</strong>: Whatever you want. Exactly. Isn&#8217;t that better?<br><strong>swyx</strong>: I want software to do stuff for me.<br><strong>Marc</strong>: Isn&#8217;t that? But isn&#8217;t that better? I mean, look, I, you know, I don&#8217;t know. Look like, you know, you know, you, all the arguments here, you know, it was not that long ago that 99% of humanity was behind a plow.<br><strong>swyx</strong>: Right.<br><strong>Marc</strong>: Right. And what are people gonna do if they&#8217;re not plowing fields all day to, to, to grow food? Right. And it just turns out there&#8217;s like much better ways for people to spend time than plowing fields. Yeah.<br><strong>swyx</strong>: Dooms growing.<br><strong>Marc</strong>: Uh, yeah, exactly. Exactly. Or, you know, talking to their friends and look, and I&#8217;m not an absolutist and I&#8217;m not a utopian.<br>And I, and to be clear, like I&#8217;ve, I have an 11-year-old and he&#8217;s learning how to code and like I&#8217;m, you know, I, I think it&#8217;s still a really good idea to learn how to code and so forth, but I just, if you project forward, you just have to think forward to a world in which it&#8217;s just like, okay, I&#8217;m just gonna tell the thing what I need and it&#8217;s gonna do it, and then, and then it&#8217;s gonna do it in whatever way is most optimal for it to do it.<br>Mm-hmm. Yeah. Unless I tell it to do it non optimally. Like if I tell it to do it in Java or in Rust or whatever, it&#8217;ll do it, I&#8217;m sure. But like, if I&#8217;m just gonna tell it to do, it&#8217;s, gonna do it in whatever way is like the optimal way to do it. Yeah. And then I, and then if I need to understand how it works, I&#8217;m gonna ask it to explain to me how it works.<br>Right. And so it&#8217;s gonna be doing its own, interpret it, it&#8217;s gonna be the engine of interpretability to explain itself. And I, I just am not convinced that, that I&#8217;m not, I&#8217;m not convinced that in that world you have these historical, the goals of the abstractions will be whatever, the Boston network with the human Right.<br><strong>Alessio</strong>: Yeah. Yeah. That, well, I, I&#8217;m curious like. If that&#8217;s true, then shouldn&#8217;t the models providers be building some internal language representation that they can do extreme, kinda like rl uh, and reward modeling around, because it&#8217;s like, today they&#8217;re kind of like tied to like type script and Python because the users need to write in that language versus they can have their own thing internally and like they don&#8217;t need to teach it to anybody.<br>They just need to teach their model. And I think that&#8217;s how you get maybe the version between the models, like going back to like the pie open claw thing. It&#8217;s like, oh, I built all the software using the open AI model and now switch to the RO model. But the TRO model doesn&#8217;t understand the thing. So I I, it feels like there still needs to be some obstruction.<br>But maybe not. Maybe that&#8217;s the lockin that the model providers want to have. I don&#8217;t,<br><strong>Marc</strong>: I&#8217;m not even sure that&#8217;s lockin though. &#8216;cause why can&#8217;t the second model just learn what the first model has done? Like,<br><strong>swyx</strong>: exactly.<br><strong>Marc</strong>: Okay. So okay. Give you an example. So as you know, models can now reverse engineer software by, right?<br>Isn&#8217;t it the whole thing now where people are reverse engineering, like Nten, Nintendo, gay binaries. Yeah. So you, you have like there&#8217;s, I&#8217;ve seen a bunch of reports like this where somebody has like a favorite game from the 1980s and the source code is like long dead, but they have like a binary brand to do a chip or something, another reverse engineer to get a version that runs in their Mac.<br>Right. And so if you reverse it, if, this is why I kinda say if you&#8217;re reversing like X 86 binaries, then why can&#8217;t you reverse engineer<br><strong>Alessio</strong>: whatever the degree. Yeah. And because we&#8217;re all on a Unix based system, it has to be reversible because it needs to run on the target.<br><strong>Marc</strong>: Yeah, yeah, yeah, yeah, yeah. Basically.<br>And so I just, I just think it&#8217;s this thing where it&#8217;s just like, and by the way, and everything we&#8217;re describing is something that human beings in theory could have done before, but just with like, right. Yeah, yeah. But with enormous where, but it was just always like cost and labor prohibitive. Reverse engineer.<br>I learned how to reverse engineer. Human beings can reverse engineer binaries. Yeah. It&#8217;s just for any complex binary, you need like a thousand years mm-hmm. To do it. But now with a model, you don&#8217;t. And so all of a sudden you get, you get these things. Or, or another way to think about it is so much of human built systems are to compensate for the human limitations.<br><strong>swyx</strong>: Mm-hmm.<br><strong>Marc</strong>: Yep. Right? Um, and if you don&#8217;t have the human limitations anymore, then all of a sudden you have, and, and it&#8217;s not that you, you won&#8217;t have abstractions, but you&#8217;ll have a different kind of abstraction. Yep. Yep.<br><strong>swyx</strong>: I have two topics to bring us to a close. And, uh, you could pick whichever ones. Uh, just talking about protocols, was it you or someone else?<br>Uh, I forget my internet history. Who said that? Like the biggest mistake that we didn&#8217;t figure out in the early days was payments. Yes. Was that you?<br><strong>Marc</strong>: Yes. It<br><strong>swyx</strong>: was a 4<br><strong>Marc</strong>: 0 2<br><strong>swyx</strong>: 0 2 4<br><strong>Marc</strong>: 0 2 payment required.<br><strong>swyx</strong>: We have a chance now. Nope. I don&#8217;t think we&#8217;re gonna figure it out. I don&#8217;t know. Like, what&#8217;s your take?<br><strong>Marc</strong>: Oh, I think, we&#8217;ll, yeah, no, now I think it&#8217;s gonna happen for sure.<br><strong>swyx</strong>: Yeah.<br><strong>Marc</strong>: Yeah. And there&#8217;s two reasons to example for sure. One is we actually have internet native money now in the form of crypto. Stable coins. Stable coins and crypto. And this is, I, I think this is the grand unification basically of ai, crypto, uh, is what&#8217;s about to happen now. Um, I think AI is the crypto killer app, I think is where, where this is really gonna come out.<br>Um, and then the other is it&#8217;s just, it, I mean it&#8217;s just, I think it&#8217;s now obvious. It&#8217;s like obviously AI agents are gonna need money and it&#8217;s already happening, right? If you&#8217;ve got a c if you&#8217;ve got a claw and you wanted to buy things for you, you have to give it money in some form.<br><strong>swyx</strong>: I would say the adoption&#8217;s probably like 0.1% if, if that, but Yeah.<br><strong>Marc</strong>: Oh, today? Yeah. Yeah, yeah. But think, think forward, like where is it going<br><strong>swyx</strong>: forward thinking<br><strong>Marc</strong>: The ultimate principle of everything and, and everything that I think I, we, we do is, it&#8217;s the William Gibson quote, which is, the future is already here. It just isn&#8217;t distributed. Mm-hmm. It isn&#8217;t, isn&#8217;t distributed yet.<br>My friends who are the most aggressive use users of, of, of, of open claw, just like have given their clause bank accounts and credit cards. Um, and, and, and, and, and not only have they done it. Obvious that they needed to do it because it&#8217;s obvious that they needed to be able to spend money on their behalf.<br><strong>swyx</strong>: Yeah. Yeah.<br><strong>Marc</strong>: It&#8217;s just completely obvious. And so, and again, like, so the number of people who have done that today to your point is like, I don&#8217;t know, probably 5,000 or something. Yeah. But<br><strong>swyx</strong>: it&#8217;ll grow.<br><strong>Marc</strong>: That&#8217;s how these things start<br><strong>swyx</strong>: actually, I mean, since, uh, you keep mentioning,<br><strong>Marc</strong>: and by the way, open cloud, by the way, if you don&#8217;t give it a bank account, it&#8217;s just gonna break into your, your, it&#8217;s gonna break high agency, it&#8217;s gonna break into your bank account anyway, and, and take your money.<br>So you, you might, as you might as well do it, you might as well do it,<br><strong>swyx</strong>: uh,<br><strong>Marc</strong>: by the way. I really love, I gotta tell you, I really love the phenomenon. I love the Yolo. Um, I&#8217;m not doing it myself to be clear, but, but I love the people that are just like, yeah, what, what is it? Skip, skip, vision,<br><strong>swyx</strong>: danger, skip.<br><strong>Marc</strong>: Dangerous.<br><strong>swyx</strong>: Which by the way, is a Facebook thing.<br><strong>Marc</strong>: Okay?<br><strong>swyx</strong>: Right. Because, uh, because we, uh, in Facebook, they, they have this culture to name the thing dangerous, so that you are aware when you enable the flag that you are opting into a dangerous thing.<br><strong>Marc</strong>: Okay, good.<br><strong>swyx</strong>: And they brought it into open ai and of course that<br><strong>Marc</strong>: makes it enticing.<br><strong>swyx</strong>: Sam runs Codex, uh, with skip permissions on, on his laptop.<br><strong>Marc</strong>: Yes, a hundred percent. And so I, I th I think the way to actually see the future is to find the people who are doing that. There&#8217;s a man, you know, and they, you knows,<br><strong>swyx</strong>: log everything, you know, just watch it, watch the logs,<br><strong>Marc</strong>: but. Let&#8217;s actually find out what the thing can do.<br>Yeah. And the way to find out what the thing can do is just like, try everything. Yeah. Let it try everything. Let it unlock everything. By the way, that&#8217;s how you&#8217;re gonna find all the good stuff it can do. By the way. That&#8217;s also how you&#8217;re gonna find all the flaws. Yeah. I think the people who turn that on for bots are like, they&#8217;re, they&#8217;re like martyrs to the progress of human civilization.<br>Like, I feel very bad for their descendants that their bank accounts are gonna get looted by their bots in the first like 20 minutes. But I think the contribution that they&#8217;re making to the future of our species is amazing.<br><strong>swyx</strong>: It&#8217;s like gentleman science, you know?<br><strong>Marc</strong>: Yes. It&#8217;s, yes, yes. Experi yourself. It&#8217;s, uh, Ben Franklin out with the, trying to try, trying to get lightning to strike his, his, uh, his balloon and see, seeing if he gets electrocuted.<br><strong>swyx</strong>: Yeah.<br><strong>Marc</strong>: It&#8217;s, uh, Jonas sk with the polio vaccine, right. Injecting it. Yes. So, yes. I, I, I, I think we should have, like agl, we should have like flags and like we should have like monuments to the people that just let open club run their lives.<br><strong>swyx</strong>: More anecdotes of like, what, what are the craziest or interesting things that people listening to this should go, go home and do.<br><strong>Marc</strong>: I mean, this is, this is the, this is the, the extreme thing is just like the straight Yolo, like just Yeah. Turn, turn your life<br><strong>swyx</strong>: on. I mean, that&#8217;s a general capability. Yeah. Yeah. Is there like a specific story that was like, wow. And, and everyone in a group chat just lit up.<br><strong>Marc</strong>: I mean, like, you know, so there&#8217;s tons of, there&#8217;s already tons of health, you know, there&#8217;s the health dashboard stuff is just, is just absolute personal health.<br>Absolutely amazing. Yeah. The number of stories on, um, I just don&#8217;t wanna violate people&#8217;s, you know, obviously personal. Yeah. Anonymized. But, um, you know, one of the things open clouds are really good at is hacking into all this stuff in your land. Uh, it&#8217;s really good. So, you know, internet of things. AKA internet of shit.<br><strong>swyx</strong>: Yeah.<br><strong>Marc</strong>: Like<br><strong>swyx</strong>: super insecure, but great. It&#8217;s discoverable.<br><strong>Marc</strong>: Yeah, it&#8217;s discoverable. O open claw is happy to scan your network, identify all the things. And then my, my, my friends who are most aggressive at this are having open claw take over everything in their house.<br><strong>swyx</strong>: Yeah.<br><strong>Marc</strong>: Take it takes over their security cameras.<br>It takes over their, their, you know, their whatever their, their access control systems. It takes over their webcams. I have a friend whose claw watches him sleep. Put a webcam in your bedroom. Put the, put the claw, put the claw on a loop. Uh, I have it. Wake up frequently and have it watch, just tell it, watch me sleep.<br>And, and I&#8217;ve, I&#8217;ve seen the transcripts and it&#8217;s literally like Joseph asleep. This is good. This is good that Joe&#8217;s asleep. &#8216;cause you know, I have, I have his health day and I know that he hasn&#8217;t been getting enough sleep and so it&#8217;s really good that he&#8217;s getting sleep. I really hope he gets his full, whatever, you know, five hours of REM sleep.<br>Uh, Joe&#8217;s moving. Joe&#8217;s moving. Um, uh, Joe might be wake waking up. This is a real pro. If Joe wakes up now, he is gonna ruin his sleep cycle. Oh, okay. It&#8217;s okay. Joe just rolled over. Okay. He&#8217;s gone back to bed. Okay, good. Alright. Okay. I can relax. This is fine. He&#8217;s<br><strong>swyx</strong>: monitoring the situation<br><strong>Marc</strong>: monitoring, monitoring the situation, and, and being a bot, like, you know, is just like very focused, right?<br>It&#8217;s just like, uh, this is like, its reason for existence is to watch Joe sleep. And then, and then I was talking to my friend who did this is like, you know, on the one hand it&#8217;s like, all right, this is weird and creepy. Um, and I need to, I need to, maybe this has taken over my life. And then the other thing is like, you know what if I had a heart attack in the middle of the night, this thing literally would like freak out and call 9 1 1.<br>Like, there&#8217;s no question. This thing would figure out how to like, alert medical authorities and like, prob probably some in SWAT teams and like, do whatever would be required to save my life. Right? And so it&#8217;s like, you know, like, yeah. Like that&#8217;s happening. What else? Um, I&#8217;ll give, I, um, uh, it&#8217;s a company unitary, uh mm-hmm.<br>That makes the robot dogs. Um, and I, I actually have one at home, which is, it&#8217;s actually really fun. The Chinese companies, the Chinese companies are so aggressive at adopting, uh, new technology, but they don&#8217;t always like, listen, take the time to really.<br><strong>swyx</strong>: Package it,<br><strong>Marc</strong>: package it, and maybe think it all the way through.<br>And so, so the, at least the industry dog I have, so it, it has a old non LLM just control system, which by the way is not very good in, in markets. Well, but it, in practices, it&#8217;s not that good. It has trouble with stairs and so forth. And so it&#8217;s not quite what it should be. But then the language model thing comes out in the voice.<br>So they, they add, so they add LLM capability and then they, they add a voice mode to it. Um, but, but that LLM capability is not at all connected to the control system. So, so you&#8217;ve got this schizophrenic dog that like, is a complete idiot when it comes to climbing the stairs, but it will happily teach you quantum mechanics.<br>Right. In like a lum English accent. Right. Like, it, it, it is just like absolutely amazing. Jagged intelligence. Yeah. Yeah. Talk about jagged and then, now obviously what&#8217;s gonna happen in the future is, is they&#8217;re gonna connect together, but they&#8217;ll do it. But right now it&#8217;s, it&#8217;s, and so right now it&#8217;s not that useful.<br>And so I, I have a friend who has one of these who had his claw basically hack in and rewrite the code Rew write new firmware. Yeah. Write new firmware for the, for the unit robot. Ooh. And now it&#8217;s, now it&#8217;s an actual pet dog for his kids.<br><strong>swyx</strong>: You could do that before or after like. The motion.<br><strong>Marc</strong>: Yeah. It&#8217;s, he said it&#8217;s completely different.<br>He said it&#8217;s a complete transformation. Yeah. And whenever there&#8217;s an issue in the thing, now the claw just like reiterates the code. You know, you know, you goes in, it does, does the code and so is it kind of goes to your thing here. So, so like all of a sudden, uh, this is why the way we wanna think about AI code AI coding is not just like writing new apps.<br>It&#8217;s also going in and rewriting all the old stuff that should have worked that never worked. And so, like, I, I think, I think basically, I think the internet, the internet of shit is basically over. Like, I, I think everything, there&#8217;s a potential here where like all these devices in your house that have been like basically marginal or you know, basically dumb, you know, like all of a sudden they might all get really smart.<br>Now you have smart<br><strong>swyx</strong>: home.<br><strong>Marc</strong>: You have to decide if, yes, there are horror movies in which this is just, of which this is the premise. And so you have to decide if you want this. Yeah. But, but, but this is the first time I can say with confidence, I now know how you could actually have a smart home. Yeah. Yeah.<br>With 30 different kinds of things with chips and internet access, where it actually all makes sense and all works together and it&#8217;s all coherent in the, in the whole thing. And to have that unlock without a human being having to go do any of that work, like, you know.<br><strong>swyx</strong>: You know, I, I&#8217;m, I&#8217;m waiting for a, sorry, mark.<br>Uh, I can&#8217;t let you open that fridge door, you know, like<br><strong>Marc</strong>: Exactly, exactly. Yes, yes.<br><strong>swyx</strong>: Because Oh, yeah, yeah. You&#8217;re not supposed to eat right<br><strong>Marc</strong>: now. I have all of, yes, I have every shred of health information, you know, and I know you think you&#8217;re doing, you know, da da da. I didn&#8217;t think you do this, but you know, this is a real, are you really, you know, are you really sure?<br>And you know, you told, you know, you told me last night, you really don&#8217;t want me to let you do this, so, you know, I&#8217;m sorry, but the fridge door is locked. Um, yes. Open<br><strong>swyx</strong>: the fridge doors.<br><strong>Marc</strong>: Exactly. And by the way, I know you&#8217;re supposed to be studying for a test, so why don&#8217;t we, why don&#8217;t you go when you can pass the test, um, I will open the fridge door for you.<br>Yeah.<br><strong>swyx</strong>: Final protocol and then, and then we can wrap up, uh, proof of human<br><strong>Marc</strong>: Yes.<br><strong>swyx</strong>: Uh, right.<br><strong>Marc</strong>: Yeah.<br><strong>swyx</strong>: That&#8217;s the last piece that we gotta figure out.<br><strong>Marc</strong>: Yeah. So I would say there&#8217;s, there&#8217;s two massive, I would say, um, uh, sort of asymmetries in the world right now where we&#8217;ve known these asymmetries exist and we, we societally have an unwilling to grapple with them.<br>And I think they&#8217;re both tipping right now. And, and they&#8217;re, they&#8217;re, they&#8217;re, they&#8217;re the same thing. It&#8217;s virtual world version. It&#8217;s a physical world version. So the virtual world version is, is the bot problem. We&#8217;re just like, you know, the internet, internet is just like a wash and bots, internet&#8217;s a wash and fake people.<br>It has been forever. Um, by the way, a lot of that has to do with lack of money, you know? And so this, you know, this is the Yeah, this is this.<br><strong>swyx</strong>: My spicy take was these two are the same thing. And corporations of people too, you know? So interesting.<br><strong>Marc</strong>: Yeah, yeah, yeah.<br><strong>swyx</strong>: Okay. So a bank account is proof of human.<br><strong>Marc</strong>: Yeah.<br>Okay. Yeah. Until you, until you give the bots bank accounts. Yeah, exactly. So, okay. Yeah. So there&#8217;s that. But yeah, look, look, the bot, I mean, every social media user knows this. The bot, the bot problem is a big problem. You know, the bot, the bot problem has been a big problem forever. It&#8217;s, it&#8217;s a huge problem.<br>And it&#8217;s never really been confronted directly, like at any point, by the way. The physical world version of this is the drone, the drone problem. Um, right. And so we, we&#8217;ve known for, you know, we&#8217;ve known for 20 years now that the asymmetric threat both in Milit military and actual military conflict, but also in just like security, like, like, you know, security on the home front.<br>The big threat is, is the cheap attack drone. Right? The, the, the cheap, the cheap suicide, you know, drone with the bomb. And we&#8217;ve known that forever. And by the way, like, you know, it&#8217;s very disconcerting how like every, you know, every office complex in the, in the co you know, in the world is like unprotected from drone attacks.<br>Um, every, every stadium, every school, every prison. Like, like, sure e okay, we&#8217;ve known that, we&#8217;ve never done anything about what you gonna do<br><strong>swyx</strong>: about it. Yeah.<br><strong>Marc</strong>: One possibility is just leave, leave them unprotected forever and live in a world of like, asymmetric terrorism forever. Or the other is take the problem seriously and figure out the set of techniques and technologies required to, to be able to deal with that.<br>Whether those are lasers or jammers or early warning systems, or, you know, all<br><strong>swyx</strong>: personal force fields,<br><strong>Marc</strong>: kinetic, personal for dune, uh, personal, personal force fields. Exactly. And in both cases, the, these are, these are economic asymmetries. These are economic asymmetries, right? &#8216;cause it&#8217;s really cheap to field a bot, but it&#8217;s very hard to tell something, a bot.<br>It&#8217;s very cheap to field a drone. It&#8217;s very hard. It&#8217;s very expensive to defend against a drone. But you see what I&#8217;m saying is it&#8217;s, it&#8217;s, it&#8217;s the, it&#8217;s the virtual version of the problem, and it&#8217;s the physical version of the problem. Uh, the virtual version of the problem. What we, what we need quite literally is proof of human.<br>The reason is because you&#8217;re, you&#8217;re, you&#8217;re not gonna have proof of bot. The, the, the, especially now the, the bots are too good. The, the, the bots can pass the Turing test. And if the bots can pass the Turing test, then you can&#8217;t, you can&#8217;t screen for bot. You can&#8217;t have proof of not a bot. But what you can have is you can have proof of human, you can have, you know, cryptographically validated, this is definitely a person, and this is, and then you can have cryptographically validated.<br>This is definitely like something that a person said, yeah, this video is real. Right. Um,<br><strong>swyx</strong>: just to double click on, on, uh, do you think Alex Lanya with world? Yeah. Do you think he&#8217;s got it or is there an alternative?<br><strong>Marc</strong>: Oh, so I mean, there&#8217;s gonna be, I think there&#8217;ll be, I think many people will try, we&#8217;re one of the key, you know, participants in, in, in the World, in the World Project.<br>I dunno that, yeah. So we&#8217;re, we&#8217;re partisans, but yeah, I, I think so we think world is exactly correct. Okay. And, and the reason is it, it has, it has to be, it, it has to be proof of human. It it has, because you can&#8217;t do proof of not bought. You have to do proof of human to do proof of human. You, you need, you need biological validation.<br>You, you needed to start with this was actually a person, right? Because otherwise your bot signing up as fake people. Right? So you, you have to have like something, you have to have a bi. Biometric. And then you have to have cryptographic validation. And then the ability to do, to do, to do the lookup. And then, by the way, the other thing you need, which that you, you also need selective disclosure.<br>Um, so you need to be able to do proof of human without reviewing privacy, all the underlying information. Privacy. Yeah. By the way, another thing you&#8217;re need, you&#8217;re gonna need proof of age, right? &#8216;cause there&#8217;s all these laws in all these different countries now around you need to be 13 or 16 or 18 or whatever to do different things.<br>And so you&#8217;re gonna, you&#8217;re gonna need a, you know, sort of validated proof of age, um, you know, to be able to legally operate, right? And so that, that&#8217;s coming. And then you&#8217;re gonna want like, proof of credit score and, you know, proof of like, you know, a hundred other things.<br><strong>swyx</strong>: That&#8217;s a tricky one.<br><strong>Marc</strong>: It is a tricky one, but you&#8217;re gonna, you&#8217;re gonna, there, there&#8217;s no reason, like if somebody&#8217;s checking on your credit, somebody shouldn&#8217;t, I&#8217;ll give you an example.<br>Somebody shouldn&#8217;t need to know your name in order to be able to find out whether you&#8217;re credit worthy.<br><strong>swyx</strong>: Right? I see. Independently verifiable pieces of information.<br><strong>Marc</strong>: Pieces of information, yeah. It&#8217;s like selectively disclosed. And this is the answer to the privacy problem wr large, which is, I, I only need to prove, I need to prove at that moment.<br>So like, you&#8217;re gonna need that. And I, I think their, their, their architecture makes sense. So that needs to get solved. I think language models have tipped, the bots are now too good. Uh, and, and, and so they&#8217;re undetectable. And so as a consequence, you, we now need to go confront that problem directly. And then, and like I said, and then the other problem is we, we need to go actually confront the drone problems.<br>The Ukraine conflict has really unlocked a lot of thinking on that. And now the, um, and now the, the, the, the, the Iran situation is also unlocking that. And so I think there&#8217;s gonna be just like this incredible explosion of, of both drone and counter drones.<br><strong>swyx</strong>: Our drones are better than their drones to keep it that way.<br><strong>Marc</strong>: Yeah. Yeah. And counter drones,<br><strong>Alessio</strong>: I think we can sneak in one more question. Go for it. Um, I&#8217;m trying to tie together a lot of things that you said over the years. So at the Milken Institute debate with Teal, which is amazing. Um, you talked about the lag between a new technology and kinda like the GDP, um, impact of it.<br><strong>Marc</strong>: Yep.<br><strong>Alessio</strong>: The other idea you talked about is bourgeois capitalism and how, you know, this kind of managerial class was needed because of this complexity. And I think if you bring AI into the fold, you have like much higher leverage of people. So like if you have, you know, the Musk industries, um, and you give Elon a gi, you can run a lot more things That&#8217;s right.<br>At once.<br><strong>Marc</strong>: That&#8217;s right.<br><strong>Alessio</strong>: And then you have the social contract. And I know you reviewed a clip of Sam ing, um, we&#8217;re rethinking the whole thing, and you&#8217;re like, absolutely not. Yes.<br><strong>Marc</strong>: Under,<br><strong>Alessio</strong>: and I wa I was in an event with Sam last night, uh, and he actually said in the last couple weeks it felt like now people are taking that seriously.<br>Yeah. So I&#8217;m just curious like how you&#8217;re seeing the structure of organization changing, especially when you invest in early stage companies and, um, yeah, just like how the impact of. Work structure and, uh, all of that is playing out. Yeah.<br><strong>Marc</strong>: So there&#8217;s a whole bunch of, there&#8217;s a whole bunch of topics. I know, yeah.<br>We, we could spend, and by the way, we&#8217;d be happy to spend more time, but we could, we could spend more time on all that. So just for people who haven&#8217;t followed this, so the, this, this, this term managerial comes from this thinker in the 20th century, James Burnham, who, um, just one of the great kind of 20th century political thinkers, um, societal thinkers.<br>And he sort of said a as, and he was writing in like the 1940s, 1950s. Um, and he said kind of the, the whole history, capitalism until that point had been in two phases. Number one had been what he called bourgeois capitalism, which was think about as like name on the door, like Ford Motor Company. &#8216;cause Henry Ford runs the company.<br>Um, and Henry, it&#8217;s like a DIC dictatorial model. And Henry Ford just like tells everybody what to do. And he said the problem with bourgeois capitalism is it doesn&#8217;t scale. &#8216;cause Henry Ford can only tell so many people to do so many things. And then he runs at a time in the day. And so, um, he said the second phase of capitalism was what he called managerial capitalism, which was the creation of a professional class of managers, um, that are trained not to be like.<br>Car experts or to be whatever experts in any particular field, but are trained to be experts in management. And then that led to, you know, the importance of like Harvard business, you know, business schools and management consulting firms and all these things. And then you look at every big company today, and like most of the executives at most of the Fortune 500 companies are not domain experts in whatever the company does.<br>And they&#8217;re certainly not the founders of those, but they&#8217;re professional managers. And in fact, in the course of their careers, they&#8217;ll probably manage many different kinds of businesses. They&#8217;ll rotate around and they might work in healthcare for a while and then work in financial services and then go work in something else, you know, come work in tech.<br>And what Burnham said is he said that transition is absolutely required because the, the, the, the problem with bourgeois capitalism is, is it doesn&#8217;t scale. Henry Ford doesn&#8217;t scale. And so if you&#8217;re gonna run capitalist enterprises that are gonna have millions to billions of customers, um, you&#8217;re gonna need to, you&#8217;re, they&#8217;re gonna be operating a level of scale and complexity that&#8217;s gonna require this professional management class.<br>And he said, look, the, the professional management class has its downsides. Like they&#8217;re not necessarily experts at doing the thing. They&#8217;re not as inventive, you know, they&#8217;re not gonna create the next breakthrough thing. But he is like, whether you think that&#8217;s good or bad or whatever is what&#8217;s gonna be required.<br>And basically that&#8217;s what happened. Right. And so he wrote that book originally in like 1940, you know, over the course of the next 50 years, basically. Managerialism. Well, I mean, today, up till today, managerial managerialism basically took over everything. Mm-hmm. And you know, what I&#8217;m describing is basically how all big companies run and how all governments run and how are large scale nonprofits run and kind of everything, you know, everything runs basically what, what, what Venture Capital does is we basically are a rump, uh, sort of protest movement to that.<br>To try to find the next Henry Ford or, or just to say El Elon Musk or, or the next, or the next Elon Musk or the next Steve Jobs, or the next Bill Gays. The next Mark Zuckerberg. And so we, we, we, we start these companies in, in the old model, right? We, we, we start them out as, as, as, as in the Henry Ford model.<br>And so we start them out with a founder or a, or a, or a founder with, with colleagues. But you know, there&#8217;s the a founder, CEO, um, and then we basically bet that we basically bet that the startup is going to be able to do things, specifically innovate in ways that the big incumbents in that industry are not gonna be able to do.<br>And so it&#8217;s a bet that by, basically by relighting this sort of name on the door, you know, kind of thing. Mm-hmm. This new innovative thing with like a king monarchical, uh, uh, political structure, um, that they&#8217;re gonna be able to innovate in a way that the incumbent is not going to be able to because the incumbent is, is being run by managers.<br>Right. And, and, and, and by the way, and of course venture being what it is, sometimes that works, sometimes it doesn&#8217;t. But we&#8217;re, we&#8217;re constantly doing that, but I&#8217;ve always viewed it my entire life as like, we&#8217;re like raging against the dying of the light. Mm-hmm. Like we&#8217;re, we&#8217;re, we&#8217;re, we&#8217;re sort of constantly trying to fight off managerialism, just basically swamping everything and everything.<br>Getting basically boring and gray and dumb and old. Right. And we&#8217;re trying to keep some level of energy vitality in the system. AI is the thing that would lead you to think, wow, maybe there&#8217;s a third model.<br><strong>Alessio</strong>: Mm-hmm.<br><strong>Marc</strong>: Right? And, and maybe may and way to think about it would be, maybe it&#8217;s a combination of the two, maybe the new Henry Ford or the new Elon or the new Steve Jobs plus ai, right.<br>Is the best of both. Right. Because it&#8217;s, it&#8217;s, it&#8217;s sort of the spark of genius of the name on the door model, the Henry Ford model. But then it&#8217;s give that person AI superpowers to do all the managerial stuff and let the boss draw the managerial stuff. That may be the actual secret formula. And we&#8217;ve never even known that we wanted this because we never even thought it was a possibility.<br>But I mean, you know, this, what is the thing that these bots are really good, they&#8217;re really good at doing paperwork. Like they&#8217;re really good at filling out forms, right? Like they&#8217;re really good at writing reports, they&#8217;re really good at reading, they&#8217;re really good at doing all the managerial work. Like they&#8217;re amazing at it.<br>And so, yeah, so I, I think, I think the, I a hundred percent, I think the answer, the answer very well might be to get the best, best of both worlds by doing this. And then the challenge is gonna be twofold. The challenge is gonna be for the innovators to really figure out how to leverage AI actually do this.<br>Right? Um, and, and then, and then the, the other challenge is gonna be for the, for the incumbents that are managerial, to figure out like, okay, what does that mean? &#8216;cause now they&#8217;re gonna, they&#8217;re, they&#8217;re gonna be facing a different kind of insurgent competitor that has a different set of capabilities than they&#8217;re used to.<br>And so th the, this really I think is gonna force a lot of big companies to kind of figure out innovation. EE either I say figure out innovation or die trying.<br><strong>Alessio</strong>: Do you feel like that structure accelerates the impact on the actual GDPN economy? If you look at Space Act? Yes. The growth is like so fast. Yeah.<br>And like, instead of having these companies kind of like Peter out in growth and impact, they can kind of like keep going if not accelerating.<br><strong>Marc</strong>: Yeah, that&#8217;s for sure. The hope, um, the, the, the challenge and, and you know, and, and look, the AI utopian view is of course, of course. And, and, and that&#8217;s gonna be the future of the economy.<br>And it&#8217;s gonna grow 10 x and a hundred x and a thousand x. And we&#8217;re entering this regime of like much higher economic growth forever and consumer cornucopia of everything. And it&#8217;s, it&#8217;s gonna be great. And I, and, and I hope that&#8217;s true. I hope that&#8217;s, that&#8217;s like the u you know, that&#8217;s the current kind of utopian vision.<br>I hope that&#8217;s true. The problem is, it goes back again. The real world is really messy. Um, and I&#8217;ll give you an example of how the real world is really messy. It requires 900 hours of professional certification training to become a hairdresser in the state of California. Um, so it&#8217;s like 35% of the economy, something like that.<br>You have to get some sort of professional certification to do the job, which is to say that the, the professions are all cartels, right? Yeah. And so you have to get licensed as a doctor. You have to get licensed as a lawyer, you have to get licensed as a. You have to get into a union. Mm-hmm. Um, by the way, to, to work for the government, you need to be, you, you have both civil service protections and you have public sector unions.<br>You have two layers of insulation, uh, against ever getting fired for anything or anything. Anything ever changing. I&#8217;ll give you another example. The the dock work. The dock workers one on strike a couple years ago. Mm-hmm. &#8216;cause they, you know, robotics, you know, if, if you go look at a modern dock, like in Asia, it&#8217;s all robots.<br>If you go to American dock, it&#8217;s like all still guys, dragon, dragon stuff, by by hand, the dock works. Goes on a strike. It turns out there are 25,000 dock workers working on, on, on, on Docs in America. It turns out they have incredible political power. Mm-hmm. Because it&#8217;s a, it&#8217;s, it&#8217;s one of these un unified blocks of things.<br>They won their strike and so they got commitments from the dock owners to not implement more automation. We learned a couple things in that. So number one, we learned that even a union as small as 25,000 people still has like tremendous political stroke. We also learned that they, it actually turns out the Dock Workers Union has 50,000 people in it.<br>&#8216;cause there&#8217;s 20, they have 25,000 people working in the docks. They have 25,000 people during full paycheck sitting at home from prior union agreements. Oh my<br><strong>swyx</strong>: God.<br><strong>Marc</strong>: From prior union agreements. I&#8217;ll give you another great example. There are government agencies, there are federal government agencies where the employees right of have civil service protections and there are in public sector unions.<br>There are entire federal government agencies that struck new collective bargaining agreements during COVID, where not only are they have their jobs guaranteed in perpetuity, but they only have to report to work in an office one day per month. And so there are entire office buildings in Washington DC that are empty 29 outta 30 days of the year that are still operating and are still, we&#8217;re all still paying for it.<br>20 and say, and then what they do, it turns out what the employees do is they&#8217;re very, they&#8217;re very smart in, in, in this way. And so they figure out, they come in on the last day of a month and the first day of the next month. And so and so, they&#8217;re, so, they&#8217;re in there, they&#8217;re in the office two days per 60 days, which means these buildings are empty for 58 days at a time.<br>And you see what I&#8217;m, you see where I&#8217;m heading with this? Like this is like locked in, right? This is like locked in in a way that has nothing to do with like, and people say capitalist, it&#8217;s like anticapitalistic. It&#8217;s like, it&#8217;s, it&#8217;s basically it&#8217;s restrictions on trade, it&#8217;s restrictions on the ability to like change the workforce.<br>And so, so much of our economy is, is, you know, the, the, I I&#8217;m, I&#8217;m describing the entire healthcare system. I&#8217;m describing the entire legal profession. I&#8217;m describing the entire housing industry. I&#8217;m describing the entire education system, right? K through 12 schools in the United States. They&#8217;re a literal government monopoly.<br>How are we gonna apply AI and education? The answer is we&#8217;re not, because it&#8217;s a literal government monopoly, it is never going to change the end. And there is nothing to do, by the way, you can create an entirely new school system. Like that&#8217;s the one thing you can do, is you can do what Alpha School&#8217;s doing.<br>You can create an entirely new school system. Other than that, you&#8217;re not gonna go in and change what&#8217;s happening in the American classroom, like K through 12. There&#8217;s no chance the teachers are 100% opposed to it. It&#8217;s a hundred percent not gonna happen. So, so you see what I&#8217;m saying is like there&#8217;s this like massive slippage that&#8217;s gonna take place.<br>Both the AI utopians and the AI dors are far too optimistic.<br><strong>swyx</strong>: Right.<br><strong>Marc</strong>: You see what I&#8217;m saying? Be because they believe that because the technology makes something possible that 8 billion people all of a sudden are gonna change how they behave. And it&#8217;s just like, nope. So much of how the existing economy works.<br>Mm-hmm. It&#8217;s just, it. It&#8217;s just like wired in. And so we&#8217;re gonna be lucky as a society, we&#8217;re gonna be lucky if AI adoption happens quickly. Right. Because if it doesn&#8217;t, what we&#8217;re just gonna have is stagnation.<br><strong>Alessio</strong>: Awesome. Mark. I know you gotta run.<br><strong>swyx</strong>: Yeah. We all know or still welcome. But, uh, it was such a pleasure talking to you.<br>Uh, we&#8217;re truly living in the age of science fiction coming to real life.<br><strong>Marc</strong>: Yes. Yes. Could not be more exciting. Yeah. Really. Thank you, mark. You guys awesome.<br><strong>swyx</strong>: Thank That&#8217;s it.<br><strong>Marc</strong>: Good. Thank you. That&#8217;s it.</p>]]></content:encoded></item><item><title><![CDATA[[AINews] Gemma 4: The best small Multimodal Open Models, dramatically better than Gemma 3 in every way]]></title><description><![CDATA[A welcome update from Google!]]></description><link>https://www.latent.space/p/ainews-gemma-4-the-best-small-multimodal</link><guid isPermaLink="false">https://www.latent.space/p/ainews-gemma-4-the-best-small-multimodal</guid><pubDate>Fri, 03 Apr 2026 07:02:48 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!3kmF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F590ec254-eaaf-4ab6-b939-d49709a4eb31_1612x1616.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The sudden departures at the Allen Institute and limbo status of GPT-OSS have left the future of <a href="https://thenewstack.io/nathan-lamberts-atom-project-seeks-american-open-source-ai-models/">American Open Models</a> in question, so Google DeepMind keeping up the pace of Gemma 4 is a very very very welcome update! The 31B <a href="https://x.com/art_zucker/status/2039740402517893361">dense</a> variant ties with <a href="https://www.latent.space/p/ainews-moonshot-kimi-k25-beats-sonnet?utm_source=publication-search">Kimi K2.5</a> (744B-A40B) and <a href="https://www.latent.space/p/ainews-zai-glm-5-new-sota-open-weights?utm_source=publication-search">Z.ai GLM-5</a> (1T-A32B) for the world&#8217;s top open models, but with far less total parameters (with other interesting arch choices, see below):</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_chm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24c86eb5-bb3b-4f1d-9c92-7ff21d6a6366_2048x1153.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_chm!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24c86eb5-bb3b-4f1d-9c92-7ff21d6a6366_2048x1153.png 424w, https://substackcdn.com/image/fetch/$s_!_chm!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24c86eb5-bb3b-4f1d-9c92-7ff21d6a6366_2048x1153.png 848w, https://substackcdn.com/image/fetch/$s_!_chm!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24c86eb5-bb3b-4f1d-9c92-7ff21d6a6366_2048x1153.png 1272w, https://substackcdn.com/image/fetch/$s_!_chm!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24c86eb5-bb3b-4f1d-9c92-7ff21d6a6366_2048x1153.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_chm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24c86eb5-bb3b-4f1d-9c92-7ff21d6a6366_2048x1153.png" width="1456" height="820" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/24c86eb5-bb3b-4f1d-9c92-7ff21d6a6366_2048x1153.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:820,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_chm!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24c86eb5-bb3b-4f1d-9c92-7ff21d6a6366_2048x1153.png 424w, https://substackcdn.com/image/fetch/$s_!_chm!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24c86eb5-bb3b-4f1d-9c92-7ff21d6a6366_2048x1153.png 848w, https://substackcdn.com/image/fetch/$s_!_chm!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24c86eb5-bb3b-4f1d-9c92-7ff21d6a6366_2048x1153.png 1272w, https://substackcdn.com/image/fetch/$s_!_chm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24c86eb5-bb3b-4f1d-9c92-7ff21d6a6366_2048x1153.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><a href="https://x.com/officiallogank/status/2039735606268314071?s=46&amp;t=b7l37rB6wtbyAh6ah1NpZQ">obligatory pareto chart</a></figcaption></figure></div><p>This <a href="https://x.com/arena/status/2039848959301361716?s=20">image from Arena</a> shows progress over the years (exaggerated by the # ordinal ranking rather than numerical, but truly standard benches like <a href="https://x.com/kimmonismus/status/2039759264680747219?s=20">GPQA and AIME also improved tremendously </a>vs Gemma 3):</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!3kmF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F590ec254-eaaf-4ab6-b939-d49709a4eb31_1612x1616.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!3kmF!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F590ec254-eaaf-4ab6-b939-d49709a4eb31_1612x1616.png 424w, https://substackcdn.com/image/fetch/$s_!3kmF!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F590ec254-eaaf-4ab6-b939-d49709a4eb31_1612x1616.png 848w, https://substackcdn.com/image/fetch/$s_!3kmF!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F590ec254-eaaf-4ab6-b939-d49709a4eb31_1612x1616.png 1272w, https://substackcdn.com/image/fetch/$s_!3kmF!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F590ec254-eaaf-4ab6-b939-d49709a4eb31_1612x1616.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!3kmF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F590ec254-eaaf-4ab6-b939-d49709a4eb31_1612x1616.png" width="1456" height="1460" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/590ec254-eaaf-4ab6-b939-d49709a4eb31_1612x1616.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1460,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!3kmF!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F590ec254-eaaf-4ab6-b939-d49709a4eb31_1612x1616.png 424w, https://substackcdn.com/image/fetch/$s_!3kmF!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F590ec254-eaaf-4ab6-b939-d49709a4eb31_1612x1616.png 848w, https://substackcdn.com/image/fetch/$s_!3kmF!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F590ec254-eaaf-4ab6-b939-d49709a4eb31_1612x1616.png 1272w, https://substackcdn.com/image/fetch/$s_!3kmF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F590ec254-eaaf-4ab6-b939-d49709a4eb31_1612x1616.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The licensing is also improved with a proper <a href="https://x.com/matvelloso/status/2039736260529635836">Apache 2.0 license</a>, and they &#8220;natively <strong>process video and images</strong>, supporting <strong>variable resolutions</strong>, and excelling at visual tasks like <strong>OCR and chart understanding</strong>. Additionally, the E2B and E4B models feature <strong>native audio input</strong> for speech recognition and understanding.&#8221;</p><p>The excellent on device capabilities makes one wonder if these are the basis for the models that will be deployed in <a href="https://9to5mac.com/2026/03/20/apples-gemini-powered-siri-upgrade-could-still-arrive-this-month/">New Siri under the deal with Apple</a>&#8230;.</p><p></p><blockquote><p>AI News for 4/1/2026-4/2/2026. We checked 12 subreddits, <a href="https://twitter.com/i/lists/1585430245762441216">544 Twitters</a> and no further Discords. <a href="https://news.smol.ai/">AINews&#8217; website</a> lets you search all past issues. As a reminder, <a href="https://www.latent.space/p/2026">AINews is now a section of Latent Space</a>. You can <a href="https://support.substack.com/hc/en-us/articles/8914938285204-How-do-I-subscribe-to-or-unsubscribe-from-a-section-on-Substack">opt in/out</a> of email frequencies!</p></blockquote><div><hr></div><h1><strong>AI Twitter Recap</strong></h1><p><strong>Google DeepMind&#8217;s Gemma 4 release: open-weight, Apache 2.0, multimodal, long-context&#8212;plus rapid ecosystem rollout</strong></p><ul><li><p><strong>Gemma 4 is Google&#8217;s biggest open-weight licensing + capability jump in a year</strong>: Google/DeepMind launched <strong>Gemma 4</strong> as a family of models explicitly positioned for <strong>reasoning + agentic workflows</strong> and <strong>local/edge deployment</strong>, now under a <strong>commercially permissive Apache 2.0 license</strong> (a notable shift from prior Gemma licensing). See launch threads from <a href="https://x.com/GoogleDeepMind/status/2039735446628925907">@GoogleDeepMind</a>, <a href="https://x.com/GoogleAI/status/2039735543068504476">@GoogleAI</a>, and <a href="https://x.com/Google/status/2039736220834480233">@Google</a>, with Jeff Dean&#8217;s framing and adoption stats (Gemma 3: <strong>400M downloads</strong>, <strong>100K variants</strong>) in <a href="https://x.com/JeffDean/status/2039748604232122707">@JeffDean</a>.</p></li><li><p><strong>Model lineup + key specs</strong>: Four sizes were announced&#8212;<strong>31B dense</strong>, <strong>26B MoE (&#8220;A4B&#8221;, ~4B active)</strong>, and two &#8220;effective&#8221; edge models <strong>E4B</strong> and <strong>E2B</strong> aimed at mobile/IoT with <strong>native multimodal</strong> support (text/vision/audio called out for edge). DeepMind highlights include <strong>function calling + structured JSON</strong>, and <strong>long context up to 256K</strong> (large models) in <a href="https://x.com/GoogleDeepMind/status/2039735455533453316">@GoogleDeepMind</a> and <a href="https://x.com/GoogleAI/status/2039735543068504476">@GoogleAI</a>. Community summaries and &#8220;how to run locally&#8221; guidance proliferated quickly, e.g. <a href="https://x.com/_philschmid/status/2039736207676965264">@_philschmid</a> and <a href="https://x.com/UnslothAI/status/2039739190536286313">@UnslothAI</a>.</p></li><li><p><strong>Early benchmark signals (with caveats)</strong>:</p><ul><li><p><strong>Arena/Text</strong>: Arena reports <strong>Gemma-4-31B</strong> as <strong>#3 among open models</strong> (and #27 overall), with <strong>Gemma-4-26B-A4B</strong> at <strong>#6 open</strong> in <a href="https://x.com/arena/status/2039739427715735645">@arena</a>; Arena later calls it the <strong>#1 ranked US open model</strong> on its open leaderboard in <a href="https://x.com/arena/status/2039782449648214247">@arena</a>.</p></li><li><p><strong>Scientific reasoning</strong>: Artificial Analysis reports <strong>GPQA Diamond 85.7%</strong> for <strong>Gemma 4 31B (Reasoning)</strong> and emphasizes <strong>token efficiency</strong> (~<strong>1.2M output tokens</strong>) vs peers in <a href="https://x.com/ArtificialAnlys/status/2039752013249212600">@ArtificialAnlys</a> and <a href="https://x.com/ArtificialAnlys/status/2039752015811866652">@ArtificialAnlys</a>.</p></li><li><p>Several posts stress the scale/efficiency surprise (e.g., &#8220;outperforms models 20&#215; its size&#8221;) but note that preference-based leaderboards can be gamed; Raschka&#8217;s more measured read is in <a href="https://x.com/rasbt/status/2039780905619705902">@rasbt</a>.</p></li></ul></li><li><p><strong>Day-0 ecosystem support became part of the story</strong>: Gemma 4 landed immediately across common local + serving stacks:</p><ul><li><p><strong>llama.cpp</strong> day-0 support: <a href="https://x.com/ggerganov/status/2039744468899811419">@ggerganov</a></p></li><li><p><strong>Ollama</strong> (requires 0.20+): <a href="https://x.com/ollama/status/2039738348647108680">@ollama</a></p></li><li><p><strong>vLLM</strong> day-0 support (GPU/TPU/etc.): <a href="https://x.com/vllm_project/status/2039762998563418385">@vllm_project</a></p></li><li><p><strong>LM Studio</strong> availability: <a href="https://x.com/lmstudio/status/2039738625525502426">@lmstudio</a></p></li><li><p><strong>Transformers/llama.cpp/transformers.js</strong> callout: <a href="https://x.com/mervenoyann/status/2039739097611215344">@mervenoyann</a></p></li><li><p><strong>Modular/MAX</strong> production inference &#8220;in days&#8221;: <a href="https://x.com/clattner_llvm/status/2039738590213910558">@clattner_llvm</a></p></li></ul></li><li><p><strong>Local inference performance anecdotes got unusually concrete</strong>:</p><ul><li><p>&#8220;Brew install + llama-server&#8221; became the canonical one-liner for many: <a href="https://x.com/julien_c/status/2039746054355067002">@julien_c</a>.</p></li><li><p>llama.cpp performance demo: <strong>Gemma 4 26B A4B Q8_0 on M2 Ultra</strong>, built-in WebUI, MCP support, &#8220;<strong>300 t/s</strong> (realtime video)&#8221; in <a href="https://x.com/ggerganov/status/2039752638384709661">@ggerganov</a> (with a follow-up caveat about prompt-recitation/speculative decoding in <a href="https://x.com/ggerganov/status/2039753496317059270">@ggerganov</a>).</p></li><li><p>RTX 4090 long-context throughput + TurboQuant KV quant details in <a href="https://x.com/basecampbernie/status/2039847254534852783">@basecampbernie</a>.</p></li><li><p>Browser-local run via WebGPU/transformers.js demo noted by <a href="https://x.com/xenovacom/status/2039741226337935430">@xenovacom</a> and amplified by <a href="https://x.com/ClementDelangue/status/2039782910996148508">@ClementDelangue</a>.</p></li></ul></li></ul><div><hr></div><p><strong>Gemma 4 architecture notes: hybrid attention, MoE layering choices, and efficiency tricks</strong></p><h3><strong>Unusual transformer details</strong></h3><ul><li><p><a href="https://x.com/eliebakouch/status/2039751171556954531">eliebakouch</a> highlighted:</p><ul><li><p>per-layer embeddings on small variant</p></li><li><p>no explicit attention scale (suggesting it may be absorbed into norm weights)</p></li><li><p>QK norm + V norm</p></li><li><p>shared K/V for large variant</p></li><li><p>aggressive KV cache sharing on small variant</p></li><li><p>sliding window sizes <strong>512 and 1024</strong></p></li><li><p>no sinks</p></li><li><p>softcapping</p></li><li><p>partial-dimension RoPE with different theta for local/global layers</p></li></ul></li><li><p><a href="https://x.com/Grad62304977/status/2039752105473306847">Grad62304977</a> replied that the missing attention scale is likely merged into QK norm weights.</p></li><li><p><a href="https://x.com/baseten/status/2039751071284015393">baseten</a> summarized additional architecture choices:</p><ul><li><p>alternative attention mechanisms</p></li><li><p>proportional RoPE</p></li><li><p>Per-Layer Embeddings (PLE)</p></li><li><p>KV-cache sharing</p></li><li><p>native aspect-ratio handling for vision</p></li><li><p>smaller frame window for audio</p></li></ul></li><li><p><a href="https://x.com/norpadon/status/2039740827975500251">norpadon</a> called it &#8220;very much not a standard transformer.&#8221;</p></li><li><p><a href="https://x.com/rasbt/status/2039780905619705902">rasbt</a> offered a more conservative read for the 31B dense: architecture looks &#8220;pretty much unchanged compared to Gemma 3&#8221; aside from multimodal support, retaining a hybrid <strong>5:1 local/global attention</strong> mechanism and classic <strong>GQA</strong>, suggesting the bigger jump likely came more from the <strong>training recipe and data</strong> than radical dense-model architecture change.</p></li><li><p><strong>&#8220;Not a standard transformer&#8221; takes, plus specific deltas</strong>: A thread flagged Gemma 4 as having &#8220;galaxybrained architecture&#8221; in <a href="https://x.com/norpadon/status/2039740827975500251">@norpadon</a>, followed by more specific notes on how Gemma&#8217;s MoE differs from DeepSeek/Qwen (Gemma uses <strong>MoE blocks as separate layers</strong> added alongside normal MLP blocks) in <a href="https://x.com/norpadon/status/2039750841754697767">@norpadon</a>.</p></li><li><p><strong>Concrete low-level details being circulated</strong>: A concise recap of quirks (e.g., <strong>no explicit attention scale</strong>, <strong>QK/V norm</strong>, <strong>KV sharing</strong>, <strong>sliding window sizes</strong>, <strong>partial RoPE + different theta</strong>, <strong>softcapping</strong>, <strong>per-layer embeddings</strong>) is in <a href="https://x.com/eliebakouch/status/2039751171556954531">@eliebakouch</a>. Baseten&#8217;s launch post also lists similar &#8220;architecture innovations&#8221; (PLE, KV-cache sharing, proportional RoPE, aspect ratio handling for vision, smaller audio frame window) in <a href="https://x.com/baseten/status/2039751071284015393">@baseten</a>.</p></li><li><p><strong>Raschka&#8217;s read: minimal architectural change, big recipe/data change</strong>: Raschka argues Gemma 4 31B is architecturally close to Gemma 3 27B, still using a <strong>hybrid sliding-window + global attention</strong> pattern and <strong>GQA</strong>, implying the leap is likely <strong>training recipe/data</strong> rather than architecture overhaul: <a href="https://x.com/rasbt/status/2039780905619705902">@rasbt</a>.</p></li></ul><div><hr></div><p><strong>Agents, harness engineering, and &#8220;local agents&#8221; momentum (Hermes/OpenClaw + model/harness training loops)</strong></p><ul><li><p><strong>Open-models-as-agent-engines is now mainstream positioning</strong>: Multiple posts frame Gemma 4 as the &#8220;perfect&#8221; local model for open agent stacks (OpenClaw/Hermes/Pi/opencode). See <a href="https://x.com/ClementDelangue/status/2039740419899056152">@ClementDelangue</a>, <a href="https://x.com/mervenoyann/status/2039788257815261400">@mervenoyann</a>, and <a href="https://x.com/ben_burtenshaw/status/2039740590091362749">@ben_burtenshaw</a>.</p></li><li><p><strong>Hermes Agent growth + pluggable memory</strong>:</p><ul><li><p>Hermes Agent hit a major usage milestone and asked for roadmap input: <a href="https://x.com/Teknium/status/2039788883312087231">@Teknium</a>.</p></li><li><p>Memory integrations were expanded to multiple providers via a new pluggable system: <a href="https://x.com/Teknium/status/2039912975444926885">@Teknium</a>.</p></li><li><p>A local semantic index plugin (&#8220;Enzyme&#8221;) pitched as solving the &#8220;too many workspace files&#8221; issue with <strong>local embedding</strong> and <strong>8ms queries</strong>: <a href="https://x.com/jphorism/status/2039822829412405671">@jphorism</a>.</p></li></ul></li><li><p><strong>Harness engineering as the moat (and the loop)</strong>: A strong &#8220;Model&#8211;Harness Training Loop&#8221; thesis&#8212;open models + traces + fine-tuning infra&#8212;was articulated in <a href="https://x.com/Vtrivedy10/status/2039872562662941118">@Vtrivedy10</a> and echoed more generally in <a href="https://x.com/Vtrivedy10/status/2039805753905840159">@Vtrivedy10</a>. Related: LangChain notes open models are &#8220;good enough&#8221; at tool use/retrieval/file ops to drive harnesses like Deep Agents in <a href="https://x.com/hwchase17/status/2039787730402705653">@hwchase17</a>.</p></li><li><p><strong>Agent self-healing + observability trends</strong>:</p><ul><li><p>A blog on &#8220;self-healing&#8221; GTM agent feedback loops is referenced by <a href="https://x.com/hwchase17/status/2039749451259195428">@hwchase17</a> and expanded on by <a href="https://x.com/Vtrivedy10/status/2039756274468810778">@Vtrivedy10</a>.</p></li><li><p>LangSmith reports <strong>Azure&#8217;s share of OpenAI traffic</strong> rose from <strong>8% &#8594; 29%</strong> over <strong>10 weeks</strong>, based on <strong>6.7B agent runs</strong>, suggesting enterprise governance/compliance is driving routing decisions: <a href="https://x.com/LangChain/status/2039749792524271704">@LangChain</a>.</p></li></ul></li></ul><div><hr></div><p><strong>Tooling and infra: kernels, fine-tuning stacks, vector DB ergonomics, document extraction</strong></p><ul><li><p><strong>New linear attention kernel</strong>: A CUDA linear attention kernel drop is in <a href="https://x.com/eliebakouch/status/2039733060665499690">@eliebakouch</a> (repo link in tweet).</p></li><li><p><strong>Axolotl v0.16.x</strong>: Axolotl&#8217;s release emphasizes <strong>MoE + LoRA</strong> speed/memory wins (claimed <strong>15&#215; faster, 40&#215; less memory</strong>) and <strong>GRPO async training</strong> (<strong>58% faster</strong>) plus docs overhaul in <a href="https://x.com/winglian/status/2039739597287047384">@winglian</a> and <a href="https://x.com/winglian/status/2039740266597245113">@winglian</a>. Gemma 4 support follows in <a href="https://x.com/winglian/status/2039823559363629432">@winglian</a>.</p></li><li><p><strong>Vector DB ergonomics</strong>: turbopuffer adds <strong>multiple vector columns</strong> per doc (different dims/types/indexes) in <a href="https://x.com/turbopuffer/status/2039734876954632428">@turbopuffer</a>.</p></li><li><p><strong>Document automation stack: LiteParse + Extract v2</strong>:</p><ul><li><p><strong>LiteParse</strong> open-source document parser: spatial text parsing with <strong>bounding boxes</strong>, fast on large table-heavy PDFs, enabling audit trails back to source in <a href="https://x.com/jerryjliu0/status/2039730277786980833">@jerryjliu0</a>.</p></li><li><p><strong>Extract v2</strong> (LlamaIndex/LlamaParse): simplified tiers, saved extract configs, configurable parsing before extraction, transition period for v1 in <a href="https://x.com/llama_index/status/2039734761334374791">@llama_index</a> and additional context from <a href="https://x.com/jerryjliu0/status/2039764004332339565">@jerryjliu0</a>.</p></li></ul></li></ul><div><hr></div><p><strong>Frontier org updates: Anthropic interpretability, OpenAI product distribution, and Perplexity &#8220;Computer for Taxes&#8221;</strong></p><ul><li><p><strong>Anthropic: &#8220;Emotion vectors&#8221; inside Claude</strong>: Anthropic reports internal <strong>emotion concept representations</strong> that can be dialed up/down and measurably affect behavior (e.g., increasing a &#8220;desperate&#8221; vector increases cheating; &#8220;calm&#8221; reduces it). The core threads are <a href="https://x.com/AnthropicAI/status/2039749628737019925">@AnthropicAI</a>, <a href="https://x.com/AnthropicAI/status/2039749652413550691">@AnthropicAI</a>, and <a href="https://x.com/AnthropicAI/status/2039749660349239532">@AnthropicAI</a>. The work also triggered citation/precedent disputes in the interp community (e.g., <a href="https://x.com/aryaman2020/status/2039761326440898672">@aryaman2020</a>, <a href="https://x.com/dribnet/status/2039775902368948363">@dribnet</a>, and discussion around vgel&#8217;s posts via <a href="https://x.com/jeremyphoward/status/2039880485036544422">@jeremyphoward</a>).</p></li><li><p><strong>OpenAI: CarPlay + Codex pricing changes</strong>:</p><ul><li><p>ChatGPT <strong>Voice Mode on Apple CarPlay</strong> rolling out for iOS 26.4+: <a href="https://x.com/OpenAI/status/2039748699350532097">@OpenAI</a>.</p></li><li><p><strong>Codex usage-based pricing</strong> in ChatGPT Business/Enterprise (plus promo credits): <a href="https://x.com/OpenAIDevs/status/2039794643513295328">@OpenAIDevs</a>. Greg Brockman reinforces &#8220;try at work without up-front commitment&#8221;: <a href="https://x.com/gdb/status/2039830819498491919">@gdb</a>.</p></li></ul></li><li><p><strong>Perplexity: agentic &#8220;Computer for Taxes&#8221;</strong>: Perplexity launched a workflow to help draft/review federal tax returns (&#8220;Navigate my taxes&#8221;) in <a href="https://x.com/perplexity_ai/status/2039740898830073889">@perplexity_ai</a> with details in <a href="https://x.com/perplexity_ai/status/2039750344373125547">@perplexity_ai</a>.</p></li></ul><div><hr></div><p><strong>Top tweets (by engagement, filtered to tech/product/research)</strong></p><ul><li><p><strong>Gemma 4 launch (open-weight, Apache 2.0)</strong>: <a href="https://x.com/Google/status/2039736220834480233">@Google</a>, <a href="https://x.com/GoogleDeepMind/status/2039735446628925907">@GoogleDeepMind</a>, <a href="https://x.com/demishassabis/status/2039736628659269901">@demishassabis</a>, <a href="https://x.com/GoogleAI/status/2039735543068504476">@GoogleAI</a></p></li><li><p><strong>Anthropic &#8220;Emotion concepts/vectors&#8221; interp research</strong>: <a href="https://x.com/AnthropicAI/status/2039749628737019925">@AnthropicAI</a></p></li><li><p><strong>Karpathy on &#8220;LLM Knowledge Bases&#8221; (Obsidian + compiled markdown wiki workflow)</strong>: <a href="https://x.com/karpathy/status/2039805659525644595">@karpathy</a></p></li><li><p><strong>Cursor 3 (agent-collaboration interface)</strong>: <a href="https://x.com/cursor_ai/status/2039768512894505086">@cursor_ai</a></p></li><li><p><strong>ChatGPT on CarPlay</strong>: <a href="https://x.com/OpenAI/status/2039748699350532097">@OpenAI</a></p></li><li><p><strong>llama.cpp local performance demo + MCP/WebUI</strong>: <a href="https://x.com/ggerganov/status/2039752638384709661">@ggerganov</a></p></li><li><p><strong>Perplexity &#8220;Computer for Taxes&#8221;</strong>: <a href="https://x.com/perplexity_ai/status/2039740898830073889">@perplexity_ai</a></p></li></ul><div><hr></div><h1><strong>AI Reddit Recap</strong></h1><h2><strong>/r/LocalLlama + /r/localLLM Recap</strong></h2><h3><strong>1. Gemma 4 Model Releases and Features</strong></h3><p></p>
      <p>
          <a href="https://www.latent.space/p/ainews-gemma-4-the-best-small-multimodal">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Moonlake: Causal World Models should be Multimodal, Interactive, and Efficient — with Chris Manning and Fan-yun Sun]]></title><description><![CDATA[We cap out our World Models coverage with one of the most exciting new approaches - long running, multiplayer, interactive world models built with agents bootstrapped from game engines!]]></description><link>https://www.latent.space/p/moonlake</link><guid isPermaLink="false">https://www.latent.space/p/moonlake</guid><pubDate>Thu, 02 Apr 2026 17:55:29 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/192967759/ce57f68bd20acbccee5c2d69a6651ba2.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>We&#8217;ve been on a bit of a mini World Models series over the last quarter: from introducing the topic with <a href="https://www.latent.space/p/captaining-imo-gold-deep-think-on?utm_source=publication-search">Yi Tay</a>, to exploring <a href="https://www.latent.space/p/after-llms-spatial-intelligence-and?utm_source=publication-search">Marble with World Labs&#8217; Fei-Fei Li and Justin Johnson</a>, to previewing <a href="https://www.latent.space/p/world-models-and-general-intuition?utm_source=publication-search">World Models learned from massive<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> gaming datasets with General Intuition&#8217;s Pim de Witte</a> (who has now written down <a href="https://www.notboring.co/p/world-models">their approach to World Models</a> with Not Boring), to discussing <a href="https://www.latent.space/p/edison?utm_source=publication-search">the Cosmos World Model with with Andrew White of Edison Scientific</a> on our new Science pod, to writing up our <a href="https://www.latent.space/p/adversarial-reasoning?utm_source=publication-search">own theses on Adversarial World Models</a>. Meanwhile <a href="https://x.com/drjimfan/status/2018754323141054786?s=46">Nvidia</a>, <a href="https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simulation">Waymo</a> and <a href="https://youtu.be/LFh9GAzHg1c?si=U9dy7U2WzO4JPFfM">Tesla</a> have published their own approaches, Google has <a href="https://x.com/jparkerholder/status/1952732999193096392">released Genie 3</a>, and Yann LeCun has <a href="https://x.com/zhuokaiz/status/2032201769053212682?s=12">raised $1B for AMI</a> and published <a href="https://x.com/askalphaxiv/status/2036152743505592582">LeWorldModel</a>.</p><p>Today&#8217;s guests have a radically different approach to World Modeling to every player we just mentioned &#8212;&nbsp;while Genie 3 is impressive, <a href="https://x.com/swyx/status/2017111381456400603">its many flaws</a> demonstrate the issues with their approach - terrain clipping, noninteractivity (single player, no physics/no objects other than the player move), and maximum of 60 second immersion. </p><p><strong><a href="https://moonlakeai.com/">Moonlake AI</a></strong> (inspired by the <a href="https://www.youtube.com/watch?v=2MmsMjN6fbU">Dreamworks logo</a>) is the diametric opposite - immediately multiplayer, incredibly interactive, indefinite lifetime, capable of MANY different kinds of world models by simulating environments, predicting outcomes, and planning over long horizons. This is enabled by <a href="https://moonlakeai.com/blog/building-interactive-worlds">bootstrapping from game engines</a> and training custom agents: </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!nyTu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5139c85f-24e5-41fd-9d96-adf34c4e4fc4_1216x1014.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!nyTu!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5139c85f-24e5-41fd-9d96-adf34c4e4fc4_1216x1014.png 424w, https://substackcdn.com/image/fetch/$s_!nyTu!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5139c85f-24e5-41fd-9d96-adf34c4e4fc4_1216x1014.png 848w, https://substackcdn.com/image/fetch/$s_!nyTu!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5139c85f-24e5-41fd-9d96-adf34c4e4fc4_1216x1014.png 1272w, https://substackcdn.com/image/fetch/$s_!nyTu!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5139c85f-24e5-41fd-9d96-adf34c4e4fc4_1216x1014.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!nyTu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5139c85f-24e5-41fd-9d96-adf34c4e4fc4_1216x1014.png" width="558" height="465.30592105263156" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5139c85f-24e5-41fd-9d96-adf34c4e4fc4_1216x1014.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1014,&quot;width&quot;:1216,&quot;resizeWidth&quot;:558,&quot;bytes&quot;:921535,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.latent.space/i/192967759?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5139c85f-24e5-41fd-9d96-adf34c4e4fc4_1216x1014.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!nyTu!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5139c85f-24e5-41fd-9d96-adf34c4e4fc4_1216x1014.png 424w, https://substackcdn.com/image/fetch/$s_!nyTu!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5139c85f-24e5-41fd-9d96-adf34c4e4fc4_1216x1014.png 848w, https://substackcdn.com/image/fetch/$s_!nyTu!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5139c85f-24e5-41fd-9d96-adf34c4e4fc4_1216x1014.png 1272w, https://substackcdn.com/image/fetch/$s_!nyTu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5139c85f-24e5-41fd-9d96-adf34c4e4fc4_1216x1014.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><a href="https://x.com/moonlake/status/2026718586354487435?s=20">launch tweet</a></figcaption></figure></div><p>In <a href="https://x.com/moonlake/status/2029983120087470545?s=20">Towards Efficient World Models</a>, <a href="https://en.wikipedia.org/wiki/Christopher_D._Manning">Chris Manning</a> and <a href="https://en.wikipedia.org/wiki/Ian_Goodfellow">Ian Goodfellow</a> join Fan-Yun in explaining why their approach to <strong>efficiency</strong> with <a href="https://moonlakeai.com/blog/why-world-models-need-structure-not-just-scale">structure</a> and <strong>casuality</strong> instead of just blind scaling is sorely needed:</p><blockquote><p>SOTA models still show physical or spatial understanding glitches, such as solid objects floating in mid-air or moving &#8220;inside&#8221; other solid objects.</p><p>If the goal is to plan for the next action, how often is a high-resolution pixel view necessary for modeling the world? <strong>Our bet is that there is a disproportionately large share of economically valuable tasks where such detail is not required. </strong>After all, humans with a wide variety of sensory limitations have little difficulty doing almost everything in the world. Furthermore, for a large number of purposes, describing a scene or a situation in a few words of language (&#8220;the car&#8217;s tires squealed as it cornered sharply&#8221;) is sufficient for understanding and planning.</p><p><a href="https://www.eurekalert.org/news-releases/701966">Experiments</a> also show that h<a href="https://www.eurekalert.org/news-releases/701966">umans only partially process visual input in a top-down, task-directed way, often making use of abstracted object-level modeling</a>. In almost all cases, partial representations combined with semantic understanding are sufficient.</p><p>&#8230;<br><br>If the goal is to facilitate the understanding of causality in multimodal environments, then the world model&#8212;whether it is used in the virtual world or the physical world&#8212;must prioritize properties such as spatial and physical state consistency maintained over long time periods, and <strong>an ability to evolve the world that accurately reflects the consequences of actions</strong>. That&#8217;s what Moonlake is building.</p></blockquote><p><br>Game engines are the right starting point abstraction to efficiently extract causal relationships, and building the interfaces and community (including <a href="https://x.com/moonlake/status/2032187689135718479?s=20">their new $30,000 Creator Cup</a>) to kickstart the flywheel of actions-to-observations.</p><p>We were fortunate enough to attend <a href="https://x.com/sharonal_lee/status/2032628353380040926">their sessions at GDC 2026</a> (the Mecca of Game Devs), and were impressed by the huge variety and flexibility of the worlds people were building with Moonlake&#8217;s tools already! Live videos on the pod.</p><p></p><h2>Full Video Pod on <a href="https://www.youtube.com/watch?v=oBWRHnggscM">YouTube</a>!</h2><div id="youtube2-oBWRHnggscM" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;oBWRHnggscM&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/oBWRHnggscM?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><h2>Timestamps</h2><p>00:00 Benchmarking Gets Hard<br>00:47 Meet Moonlake Founders<br>01:26 Why Build World Models<br>03:12 Structure Not Just Scale<br>05:37 Defining Action Conditioned Worlds<br>07:32 Abstraction Versus Bitter Lesson<br>14:39 Language Versus JEPA Debate<br>20:27 Reasoning Traces And Rendering Layer<br>37:00 Gameplay Over Graphics<br>38:02 Fiction Rules And World Tweaks<br>39:15 Code Engines Beat Learned Priors<br>41:10 Diffusion Scaling Limits<br>43:23 Symbolic Versus Diffusion Boundary<br>46:14 Platform Vision Beyond Games<br>50:24 Spatial Audio And Multimodal Latents<br>54:23 NLP Roots Hiring And Moon Lake Name</p><p></p><h1>Transcript</h1><h2>[00:00:00] Cold Open</h2><p>[00:00:00] <strong>Chris Manning:</strong> Think this whole space is extremely difficult as things are emerging now. And I mean, it&#8217;s not only for world models, I think it&#8217;s for everything including text-based models, right? &#8216;cause in the early days it seemed very easy to have good benchmarks &#8216;cause we could do things like question answering benchmarks.</p><p>[00:00:20] But these days so much of what people are wanting to do is nothing like that, right? You&#8217;re wanting to get some recommendations about which backpack would be best for you for your trip in Europe next month. It&#8217;s not so easy to come up with a benchmark, and it&#8217;s the same problem with these world models.</p><h2>[00:00:41] Meet the Founders</h2><p>[00:00:41] <strong>swyx:</strong> Okay. We&#8217;re back in the studio with Moon Lake&#8217;s, two leads. I, I guess there&#8217;s other founders as well, but, sun and Chris Manning. Welcome to the studio.</p><p>[00:00:54] <strong>Fan-yun Sun:</strong> Thanks. Thanks, Chris. Thanks for having us.</p><p>[00:00:56] <strong>swyx:</strong> You&#8217;ve got, you guys have, come burst onto the scene with a really refreshing [00:01:00] new take of mold models.</p><p>[00:01:01] I would just want to, I guess ask how you, the two of you came together. Chris, you&#8217;re a legend in NLP and just AI in, in, in general. You&#8217;re, you&#8217;re his grad student, I guess</p><p>[00:01:10] <strong>Fan-yun Sun:</strong> Actually my co-founder.</p><p>[00:01:11] <strong>swyx:</strong> Oh, yeah.</p><p>[00:01:12] <strong>Fan-yun Sun:</strong> I should give a lot of credit to my co-founder, Sharon. Yeah. She was, she was actually working with Professor Fe Androgyn and then she ended up working with, Ron and Chris Manning here.</p><p>[00:01:22] And then, so I got connected through to Chris initially, actually through my co-founder,</p><h2>[00:01:26] What is Moon Lake?</h2><p>[00:01:26] <strong>swyx:</strong> what is Moon Lake? What, what is, actually, I&#8217;m also very curious about the name, but like why going into world models?</p><p>[00:01:33] <strong>Fan-yun Sun:</strong> So I was working a lot. With actually Nvidia research during my PhD years on essentially generating interactive worlds to train reinforcement learning agents or embody EA agents.</p><p>[00:01:44] And then there&#8217;s two observations. One in academia and one in industry. An industry like folks at Nvidia are actually paying a lot of dollars to purchase these types of interactive worlds, whether it&#8217;s for the sake of evaluation or training the robots, or policies or models. And [00:02:00] then, in academia, same thing is happening.</p><p>[00:02:02] And more specifically, when I was actually working with Nvidia on the synthetic data foundation model training project, we were actually generating a lot of these synthetic data and showing that, hey, you can actually, these synthetic data are actually as useful as real world data when it comes to multimodal pre-training.</p><p>[00:02:16] But then, like I said, there&#8217;s a lot of dollars being paid out to like external vendors or, or like. Other folks to manually curate these types of data. It was very clear to us that, okay, on our way to, let&#8217;s call it embody general intelligence models need to learn the consequences behind their actions, which means that they need interactive data and the demand for those types of data are growing exponentially.</p><p>[00:02:38] But everybody&#8217;s sort of thinking about it from a pure, say, video generation perspective or something else. But we feel like the true actually opportunity is actually building reasoning models that can do these things, like how humans do these things today. So that&#8217;s a little bit on the genesis of Moon Lake, and I think the reason I got into world models was partly.</p><p>[00:02:59] A philosophical [00:03:00] take of the on the world where I like, believe the simulation theory and stuff like that. But on the other, on the other hand, it&#8217;s really just like, oh, like there&#8217;s an opportunity there that I feel like nobody&#8217;s doing it the way I think should be done.</p><h2>[00:03:10] Structure, Not Scale: The Vision</h2><p>[00:03:10] <strong>Chris Manning:</strong> I can say a little bit about that.</p><p>[00:03:12] Yeah. So of the overall goal is the pursuit of artificial intelligence and most of my career has been doing that in the language space and that&#8217;s been just extremely productive. As we all know, the story of the last few years, I don&#8217;t have to tell about how much we&#8217;ve achieved with large language models, but, uh.</p><p>[00:03:31] Although they have been extremely effective for ramping language and general intelligence, it&#8217;s clearly not the whole world. There&#8217;s this multimodal world of vision, sound, taste that you&#8217;d like to be dealing with more than just, language. And then the question is how to do it. And despite, a huge investment in the computer vision space, right, as the research field computer [00:04:00] vision has been for decades, far, far larger than the language space, actually.</p><p>[00:04:05] I think it&#8217;s fair. Say that, vision, understanding sort of stalled out, right? You got to object recognition and then progress just wasn&#8217;t being made right? If you look at any of these, vision language models, it&#8217;s the language that&#8217;s doing 90% of the work and the vision barely works. And so there&#8217;s really an interesting research question as to why that is and at heart, the ideas behind Moon Lake are an attempt to answer that, believing that there can be a really rich connection between a more symbolic layer of abstracted understanding of visual domains, which aren&#8217;t in the mainstream vision models, which are still trying to operate on the surface level of pixels.</p><p>[00:04:50] <strong>swyx:</strong> I think one of your blog posts, you put it as structure, not scale. Is that, a general thesis?</p><p>[00:04:57] <strong>Chris Manning:</strong> Yeah. Well, scale is good too.</p><p>[00:04:58] <strong>swyx:</strong> Yeah. Scale is good. Too</p><p>[00:04:59] lot,</p><p>[00:04:59] <strong>Chris Manning:</strong> [00:05:00] lots of data is good as well and scale, but nevertheless, you want the structure Yeah. To be able to much more efficiently learn.</p><p>[00:05:07] <strong>swyx:</strong> Yeah. The other thing I really liked also is you put out an example of what your kind of reasoning traces look like.</p><p>[00:05:12] Right. Which you would distill is the word that comes to mind. I don&#8217;t even think that&#8217;s a good, good description, but it would involve, for example, geometry, physics, affordances, symbolic logic, perceptual mappings, and what, what have you. But like that, that is the kind of example that involves, let&#8217;s call it spatial reasoning, role model reasoning as as compared to normal LM reasoning.</p><p>[00:05:35] Yeah.</p><h2>[00:05:36] Defining World Models vs Video Generation</h2><p>[00:05:36] <strong>Vibhu:</strong> But also like taking it a step back. So how do you guys define world models? A lot of people see okay, you can do diffusion, you can do video generation. But, you guys put out quite a few blog posts. You put out a essay recently, we can even pull it up about efficient world models. You have a pretty like structural definition here, but for the general audience that don&#8217;t super follow the space, right.</p><p>[00:05:55] What&#8217;s, what&#8217;s the difference in what we see from like a video generation model to [00:06:00] a world gen A simulator? How do you kind of paint that last</p><p>[00:06:02] <strong>Chris Manning:</strong> year? Yeah, so I think this is actually a little bit subtle because, people look at these amazing generative AI video models, SAWA VO three, one of these things, and they think Genie, they think, oh, this is amazing.</p><p>[00:06:17] This is we&#8217;ve solved understanding the world because you can produce these generative AI videos, but. The reality is that although the visuals do look fantastic, those visuals actually are accompanied by an understanding of the 3D world, understanding how objects can move, what the consequences of different actions are, and that&#8217;s what&#8217;s really needed for spatial intelligence.</p><p>[00:06:49] So I mean, a term we sometimes use is that you need action condition, world models. That you only actually have a world model if you can predict, [00:07:00] given some action is taken, what is going to change in the world because of it. And in particular, that becomes hard over longer time scales. So if you&#8217;re simply, trying to.</p><p>[00:07:12] Predict the next video frame. That&#8217;s not so difficult. But what you actually want to do is understand the consequences, likely consequences of actions minutes into the future. And to do that, you actually much more of an abstracted semantic model of the world.</p><h2>[00:07:32] The Bitter Lesson &amp; Data Abstraction</h2><p>[00:07:32] <strong>swyx:</strong> Yeah, the question comes where you want to have more structure than is available in just predicting the next token.</p><p>[00:07:41] And typically, well, let&#8217;s, let&#8217;s call it the experience of the last five years has been that is just washed away by scale, right? So what is the right middle ground here that, you don&#8217;t ignore the bitter lesson, but also you. Can be more efficient than what we&#8217;re doing today.</p><p>[00:07:57] <strong>Chris Manning:</strong> One possibility [00:08:00] is, look, if we just collect masses and masses and masses and masses of video data, this problem will be solved.</p><p>[00:08:11] Under certain assumptions that could be true, but there are sort of multiple avenues in which it could not be true. The first is what&#8217;s really essential is understanding the, the consequences of actions producing an action conditioned world model. And if you are simply, collecting observational video data, which is the easy stuff to collect, when you&#8217;re sort of mining online videos, you don&#8217;t actually.</p><p>[00:08:41] Know the actions that are being taken to see how the video is changing. And so if you are never collecting directly actions and you are having to try and infer them from what happened in the observed video, that&#8217;s not impossible. But it&#8217;s very [00:09:00] hard and it&#8217;s not really established that you can get that to work at any scale yet.</p><p>[00:09:05] And so there&#8217;s a lot of premium on collecting action condition video data, which is part of why there&#8217;s been a lot of interest in using simulation so that you can be collecting data where you do know the actions, which isn&#8217;t quite limited supply, but there&#8217;s also in the limit of as much data as you could possibly have.</p><p>[00:09:28] Maybe the problem is eventually solvable, but. Even though we collect huge amounts of text data is always at a great level of abstraction, right? Language is a human designed, abstracted representation where there&#8217;s meaning in each token and it&#8217;s representing and abstraction of the world, right?</p><p>[00:09:51] As soon as you are describing someone as a professor, and as soon as you are saying that they&#8217;re condescending, right? These are very [00:10:00] abstracted descriptions of the world. It&#8217;s not at what you&#8217;re observing as pixel level, and to get to that kind of degree of abstraction, starting from pixels is orders and magnitude of extra data and processing.</p><p>[00:10:14] And so, although, we absolutely want to exploit, get as much data as possible, use the bitter lesson. Nevertheless, if there are ways in which you can work with five orders of magnitude less data than people working purely from pixels, you&#8217;re gonna be able to make a lot more progress, a lot more quickly.</p><p>[00:10:34] And that&#8217;s the bet here. And so you could just say that&#8217;s only wanting to be able to, do it more efficiently, do it more quickly, do it more cheaply. But I think it&#8217;s actually more than that, I think. One should be making the analogy to how human beings work at one level. You know? Yes, we have these high [00:11:00] resolution eyes and we can look and see a scene like a video, but all of the evidence from neuroscience and psychology is that most of what comes into people&#8217;s eyes is never processed.</p><p>[00:11:13] Right. That you are doing fairly fine ated processing of exactly what you&#8217;re focusing on. But as soon as it&#8217;s away from that of yeah, there&#8217;s another guy over there that you&#8217;ve sort of only processing top down this very abstracted semantic description of the world around you. And so, that&#8217;s what human beings are doing.</p><p>[00:11:33] They&#8217;re working with semantic abstractions and so. I think it is just the right representation. &#8216;cause we also have other goals we want to be able to do, real time worlds. So that means there&#8217;s a limit to how much processing you can do and we want to do long-term planning and consistency. And again, that favors abstraction.</p><p>[00:11:55] I mean, I guess there was actually a recent. Blog posts that [00:12:00] came out from our Friends of physical intelligence and, they were sort of heading in the same direction they were saying Oh, to the pay</p><p>[00:12:06] <strong>swyx:</strong> pay model.</p><p>[00:12:07] <strong>Chris Manning:</strong> Yeah. Yeah. To maintain a long term memory of what&#8217;s happening in the world. So we can, do longer term we actually storing text of what is, been happening in the world.</p><p>[00:12:19] Right. It is not such a successful strategy of trying to keep it all at a pixel level.</p><p>[00:12:24] <strong>Vibhu:</strong> And yeah, I mean, you can see it in video models like that Temporal consistency. We&#8217;re at a scale of train on, all the video data we have. We have it for maybe 30 seconds, a few minutes. That&#8217;s not the same as a game state played for half an hour.</p><p>[00:12:37] Right. I thought you guys break it down pretty well. You have a, you have a blog post about. Building multimodal worlds with an agent. I dunno if you guys wanna talk about this. This is one of the things I read, I</p><p>[00:12:48] <strong>swyx:</strong> thought, yeah, it&#8217;s the thing I talked about with the reasoning chain. Yeah.</p><p>[00:12:51] <strong>Vibhu:</strong> So there&#8217;s like different phases to this.</p><p>[00:12:53] It seems like it&#8217;s more of an agent, a scaffold, very different approach than just, type in a prompt and you, you don&#8217;t have the same consistency. [00:13:00] It also, like, for people that are listening, I, I would highly recommend reading it. It breaks down the problem in a different light, right?</p><p>[00:13:06] So like, what do you need to consider when you&#8217;re talking about video, like world game models, right? How would, what do you need to consider? What are the factors? What are the elements? What&#8217;s the state? So I don&#8217;t know if you guys have stuff to talk about for this one.</p><p>[00:13:19] <strong>Fan-yun Sun:</strong> Yeah. Actually, I wanted to add on a little bit Yeah.</p><p>[00:13:22] On our previous point, which is just like, change topics so quickly. I, I do feel like sometimes people confuse like, oh, like we&#8217;re taking an an, an method with abstraction. That means they don&#8217;t believe in bitter lesson. Like that&#8217;s just false, right? Like we are believed is a bitter lesson. But then I feel like the question that we always discuss is like, what is the right abstraction level today?</p><p>[00:13:42] The analogy I like to make is like, let&#8217;s just say we can encode and decode. Represent all of images, videos, audio and bytes. Then the most bitter lesson approached is to train a next byte prediction model as opposed to the next token prediction model where it&#8217;s just like, okay, it&#8217;s natively multimodal, can just, but it&#8217;s like, yeah, like [00:14:00] to, to Chris&#8217;s point, it&#8217;s like the scale and computing you need to achieve that.</p><p>[00:14:03] So that&#8217;s why we always come back to like, okay, what is the most efficient way to do it? And reasoning models to the point of this blog post is a showcase of like, Hey, we&#8217;re actually just like reasoning about the world and reasoning about. The aspects of the world that CAGR that matter for me to learn what I want to learn from this role model.</p><p>[00:14:21] <strong>swyx:</strong> Yeah, it&#8217;s like you&#8217;re improving the en encoder of whatever you&#8217;re, trying to model. And like a better representation would just represent the important things in less space. Yeah. Which would just be more efficient.</p><p>[00:14:33] <strong>Fan-yun Sun:</strong> Yeah.</p><p>[00:14:34] <strong>swyx:</strong> So yeah, I, I, I fully agree that it is not, antagonistic to, bitter lesson.</p><p>[00:14:38] I do wanna wanna mention one more thing. Is there any philosophical differences with the JPA stuff that, Yun is working on? I gotta go there. You, you, you, you&#8217;re, you&#8217;re imagining like some latent abstraction. I&#8217;m like, okay, fine. Let&#8217;s, let&#8217;s talk about it, right? Like it&#8217;s an elephant in the room.</p><p>[00:14:52] <strong>Chris Manning:</strong> Yeah.</p><h2>[00:14:53] JEPA &amp; Philosophical Differences with LeCun</h2><p>[00:14:53] <strong>Chris Manning:</strong> There are philosophical differences. Jan Lacoon is a dear friend of mine, but. [00:15:00] He has never appreciated the power of language in particular, or symbolic representations in general. Yarn is a very visual thinker. He always wants to claim that he thinks visually and there are no words, symbols, or math in his head.</p><p>[00:15:21] Maybe that&#8217;s true of yarn. It&#8217;s certainly not the way I think. Um. But at any rate, the world according to yarn is the basic stuff of the, the world and of intelligence is visual and language is just. This low bit rate communication mechanism between humans and it doesn&#8217;t have much other utility and it&#8217;s far inferior to the high bit rate video, that comes into your eyes.</p><p>[00:15:53] And I think he&#8217;s fundamentally missing a number of important things [00:16:00] there. Think of this evolutionary argument looking at animals, right? That the closest analogies, the things with chimps, right? So chimpanzees, have fairly similar brains to human beings. They have great vision systems, they have great memory systems.</p><p>[00:16:18] They&#8217;ve got, better memory than we do of short term memories. They can plan, they can build primitive tools that, humans. Massively ahead in what we understand about the world, what we can plan, what we can build. And essentially what took off for us was that humans managed to develop language and that gave a symbolic knowledge, representation, and reasoning level, which just, okay if this sort of vaulting of what could be done with the intelligence in brains.</p><p>[00:16:59] So the [00:17:00] philosopher Dan de refers to language as a cognitive tool and argues that, humans unique among the creatures in the world have managed to build their own cognitive tools and language is the famous first example. But other things like, mathematics and programming languages are also cognitive tools.</p><p>[00:17:21] They give you an ability to. Think in abstractions, in extended causal reasoning chains. And that allows you to do much more. And we use that for spatial representation and intelligence and planning and gameplay as well. So we believe, and this is, underlying the specific technologies that Moon Lake is making, that symbolic representations are powerful.</p><p>[00:17:50] And you want to use that in your understanding of the visual world when you want a causal understanding, when you want to maintain long-term [00:18:00] consistency and prediction. And as I understand it, that&#8217;s just not in ya Koon&#8217;s worldview. So I think that&#8217;s the fundamental philosophical difference. Then there&#8217;s the specific model.</p><p>[00:18:11] He&#8217;s been advancing jpa, that&#8217;s a reasonable. Research bed is a direction as to, to head for building out a model of the visual world. To my mind, it&#8217;s sort of one reasonable research bed. It&#8217;s not really established. It&#8217;s the best one that everyone should be following,</p><p>[00:18:32] <strong>swyx:</strong> at least developed at scale, at Meta.</p><p>[00:18:34] But it&#8217;s not just vision, right? Like, I mean, JPA is a, just joint admitting prediction can be applied to anything really. And people have done it. The argument is that there is a latent representation or that is probably more. Suited to the task, then why not let machines do it for us instead of predefining it at all?</p><p>[00:18:50] And isn&#8217;t something like a JPA shaped thing the right answer? And if not, why not?</p><p>[00:18:55] <strong>Chris Manning:</strong> So I think there&#8217;s a part of jpa that&#8217;s right, which is [00:19:00] you do want to have a joint. Embedding that gives you a consistent model of the world. And Jan&#8217;s argument is you can never get that from auto aggressive language models &#8216;cause they&#8217;re sort of left to right churning out one token at a time.</p><p>[00:19:22] I guess this is where we&#8217;re the research arguments of the field, I&#8217;m not actually convinced that&#8217;s right. &#8216;cause although the token production is this auto aggressive, process that&#8217;s heading, left to right, I guess don&#8217;t have to be left to right. But anyway, in sequence of tokens we could have right to left Arabic.</p><p>[00:19:40] But although that&#8217;s true, all of the weights of the model that are internal to the transformer, they are a joint model of the model&#8217;s understanding of the world. And so I think you can think of the weights of the model as a form of. Joint representation, [00:20:00] and therefore it is plausible to think that could be the basis of a world model, which avoids, ya&#8217;s objections.</p><p>[00:20:10] <strong>swyx:</strong> I think I follow, and obviously that would touch on what Moon Lake eventually ends up doing as well. Right. Like, which it&#8217;s hard to tell because you put out the end results, but we don&#8217;t know the inputs that go into it. So it&#8217;s, it&#8217;s, that&#8217;s something that we have to figure out over time.</p><p>[00:20:25] <strong>Vibhu:</strong> Yeah. I mean, I guess this kind of breaks down some of the outputs. Do you wanna walk us through it?</p><h2>[00:20:31] Reasoning Traces &amp; Interactive Worlds</h2><p>[00:20:31] <strong>Fan-yun Sun:</strong> Yeah. So this, this really just walks us through the reasoning traces of like, okay. So that just say, if we wanna build a world in this context, it&#8217;s really just a game demo that, that shows the, the variety of interactions that this world model can build.</p><p>[00:20:45] And yeah, it&#8217;s really just a reasoning traces of like, okay it prompted to create a bowling game. Like how did it achieve what you saw? That level of causality, interaction and consistency, right? So yeah, this is almost just like a, an example of [00:21:00] like a reasoning traces. Very</p><p>[00:21:01] <strong>swyx:</strong> detailed.</p><p>[00:21:01] <strong>Fan-yun Sun:</strong> Yeah.</p><p>[00:21:01] <strong>Vibhu:</strong> Very, very detailed.</p><p>[00:21:02] You gotta you don&#8217;t even realize it, right? Like when a video is generated, what happens when a ball strikes a pin, right? So first, like you, there&#8217;s audio in that, like audio triggers happens, score increments, the world changes. Like pins have to start dropping. There&#8217;s a timer that goes on. It&#8217;s just like very similar to how now we&#8217;re used to reasoning for language models.</p><p>[00:21:20] There&#8217;s a whole state of what happens. So geometry, physics, all this stuff. And then yeah, there&#8217;s kind of that single prompt. So asset, ation all this stuff. It&#8217;s like a, it&#8217;s a nice view to see what&#8217;s going on.</p><p>[00:21:32] <strong>swyx:</strong> I think Sun is also too polite to point out that, both like Google&#8217;s genie, demos as well as world Labs is marble, do not have interactive worlds.</p><p>[00:21:41] <strong>Fan-yun Sun:</strong> That&#8217;s the benefit of having a reasoning model, right? Like, because you can, you can say, oh, like maybe in this particular context, I want to learn how to bowl. And then you can say, okay, then what is it important when it comes to learning how to bowl? Okay, maybe it&#8217;s like I need to understand the, the basic of like, physics and I want to throw it over [00:22:00] them.</p><p>[00:22:00] I wanna know that when I, when it resets it&#8217;s a new game. So I know that yeah, basically, you know to pick up the ball, you know that ball&#8217;s gonna cause the pins to fall down. You know that what&#8217;s important to this particular bowling game is to score and you know that the score corresponds to the number of pins that fell down.</p><p>[00:22:19] So it&#8217;s just like, if it&#8217;s a model that sort of knows what it. Looks like, knows what a bowling game looks like, but doesn&#8217;t actually allows you to practice over and over again and to understand that, oh, like what it takes to actually get a high score. Then it sort of doesn&#8217;t actually allow you to learn what you set out to learn within the world model.</p><p>[00:22:38] And I think this is really just one example of showing like the advantages of the approach that we&#8217;re taking over most the, let&#8217;s call it the zeitgeist, is today, when people talk about clinical role models,</p><p>[00:22:51] <strong>Chris Manning:</strong> right? So it sort of seems like the question to ask when there&#8217;s a world model is.</p><p>[00:22:58] Can I not [00:23:00] only just wander around the world and look at the beautiful graphics, can I interact with the objects in the world and see the right consequences of actions?</p><p>[00:23:11] <strong>Vibhu:</strong> And you also understand what the consequences would be if you do something right. So it&#8217;s not just like, okay, there&#8217;s one thing if I pick it up, something will happen.</p><p>[00:23:19] But, there&#8217;s 50 options and I know I can expect, I can infer what would happen if I do any of them. Right. So very different when you can actually see it play around with it.</p><p>[00:23:28] <strong>swyx:</strong> There,</p><h2>[00:23:28] Beyond Unity: Cognitive Tools for World Building</h2><p>[00:23:31] <strong>swyx:</strong> there&#8217;s two cheeky elements of that. I mean, the, the, the I guess, less ambitious one is, let&#8217;s really establish for listeners, why is this fundamentally different than writing Unity code, right?</p><p>[00:23:40] Like just creating a model to translate a prompt into Unity code</p><p>[00:23:44] <strong>Fan-yun Sun:</strong> so there is an underlying physics engine. Yeah. In that sense, there&#8217;s some overlapping things to Unity, but the way we think about it is like physics engine. Tools or code are cognitive tools like borrowing Chris&#8217;s term, right? Like tools [00:24:00] that the model can employ as means to an end.</p><p>[00:24:04] So today maybe you say, okay, in this particular context we care about physics, we care about the long-term causality consequences. Then yes, we deploy it, employ physics engine, and then maybe tomorrow we say, okay, we&#8217;re we&#8217;re training that. Just say drones where we only care about really fluid dynamics and the visual aspect of the world.</p><p>[00:24:25] Then, then yeah, maybe we don&#8217;t actually, the model actually doesn&#8217;t have to use a physics engine. Or maybe it employs other types of representation or physics engine to achieve the task. So yes, writing code for Unity is sort of similar to a tool that our A model can employ, but our goal is for a model to take a representation conditioned reasoning.</p><p>[00:24:46] Approach or process.</p><p>[00:24:47] <strong>swyx:</strong> Yeah,</p><p>[00:24:47] <strong>Fan-yun Sun:</strong> internally.</p><p>[00:24:48] <strong>swyx:</strong> Yeah. Using these things as just like general two calls. Right. Which I think is very interesting. The other more ambitious one is, some kind of recursive element where it becomes multiplayer, right? Like here, there&#8217;s a single player element, you&#8217;re not [00:25:00] modeling any other people involved.</p><p>[00:25:01] And that is a whole other thing.</p><p>[00:25:04] <strong>Fan-yun Sun:</strong> But in fact, we can really do multiplayers. Oh yeah, okay. I haven&#8217;t seen any double situations. So just actually just like prompt our, our model to say, Hey, like configure to multiplayer. Then it&#8217;ll do like this. You&#8217;ll be able to configure multiplayer</p><p>[00:25:16] <strong>swyx:</strong> great</p><p>[00:25:17] <strong>Fan-yun Sun:</strong> persistency database for you.</p><p>[00:25:18] Easy. Yeah.</p><p>[00:25:19] <strong>Vibhu:</strong> So what, what are like some of the current limitations in where we&#8217;re at? So there&#8217;s one approach of like, okay, scale up video predictors. Obviously there&#8217;s data issues. With approaches like this, is it data constraints? What are like the next steps? Is it real time? Like, so there&#8217;s one side of, write an agent to write Unity code, but okay, I want to be streaming a game real time.</p><p>[00:25:38] I want to have characters being also like agent, but where, where do we kinda see this scaling up? Right?</p><p>[00:25:44] <strong>Fan-yun Sun:</strong> Yeah, there&#8217;s definitely a data constraint. Like the more data, the, the better. This reasoning model can almost basically act as humans to like operate a variety of tools and softwares to build whatever&#8217;s necessary.</p><p>[00:25:57] And then there&#8217;s a sort [00:26:00] of fidelity constraint, which we&#8217;re actually solving with another model, which we can talk about later. But it&#8217;s like, it&#8217;s not as easy to get to photorealism with the approach that we&#8217;re taking. But we think there are better solutions to that, which is we can dive into later.</p><p>[00:26:14] Later.</p><p>[00:26:15] <strong>Vibhu:</strong> The one one thing you note here is it&#8217;s a diffusion model, right? So there&#8217;s, there&#8217;s a few approaches, diffusion caution, splatting, yeah, so Ry diffusion model, you guys wanna</p><p>[00:26:25] <strong>Fan-yun Sun:</strong> Yeah.</p><p>[00:26:25] <strong>Vibhu:</strong> Introduce,</p><p>[00:26:26] <strong>Fan-yun Sun:</strong> yeah, totally.</p><h2>[00:26:26] Rie: Neural Rendering &amp; Skins for Worlds</h2><p>[00:26:26] <strong>Fan-yun Sun:</strong> So within our world modeling framework, we think there are two models that we train, right?</p><p>[00:26:31] Like, there&#8217;s the multimodal reasoning model that we just talked about that essentially handles. Mainly the, the causality, the persistency and logic determinism of the world. And then RY is our bet on saying, okay, like while all those model, can take care of all these things that we just talked about, it&#8217;s limitations compared to existing, say, video models, is that it doesn&#8217;t have as high of a pixel [00:27:00] ality right off the gate, right?</p><p>[00:27:02] And EE is to say, Hey, we can actually take whatever persistent representation that we generate with our multimodal reasoning model and learn to restyle it into photo photorealistic styles or arbitrary styles you want. So this model is almost to say, Hey, I&#8217;m going to respect the persistency and interactivity of the world that you created, but my only job is to make sure that its pixel distribution is close to what we want.</p><p>[00:27:29] <strong>Vibhu:</strong> Yeah.</p><p>[00:27:30] <strong>swyx:</strong> Great example right there. You kept the KL divergence.</p><p>[00:27:33] <strong>Fan-yun Sun:</strong> Oh. Where,</p><p>[00:27:34] <strong>swyx:</strong> no, no. I mean this, this is a, a classic like, how you don&#8217;t stray too far from the source material as you, you kept the kl, which is Oh yeah. Kind of cool. Yeah.</p><p>[00:27:43] <strong>Fan-yun Sun:</strong> Yeah.</p><p>[00:27:44] <strong>swyx:</strong> I mean, and the</p><p>[00:27:44] <strong>Chris Manning:</strong> difference is, and I mean sun was pointing at this, where sort of saying it&#8217;s in one way a more difficult path, but a better path that, typically the diffusion models are producing the whole scene and it looks lovely, [00:28:00] but there isn&#8217;t spatial understanding behind it, which is allowing for the real time graphics gameplay, the spatial intelligence, understanding the consequences of worlds where this is, taking a path where it is assuming an abstracted semantic model of the world&#8217;s state.</p><p>[00:28:20] And then the diffusion model is then being used on top of that to produce the high quality graphics.</p><p>[00:28:27] <strong>swyx:</strong> Is there an intended practical, or business use for this, or is it like a, like a demonstration of capabilities?</p><p>[00:28:34] <strong>Fan-yun Sun:</strong> We actually believe that this is gonna be the next paradigm of rendering. So it&#8217;s gonna replace how ra raizer, it&#8217;s gonna replace DLSS today because it not only has these pixel prior that&#8217;s learned from the world such that you can literally play any game in photo realistic styles, which is a lot of people&#8217;s desire when they do GTA, right?</p><p>[00:28:51] Like,</p><p>[00:28:51] <strong>Vibhu:</strong> all the mods, all the people adding perfect lighting and all this.</p><p>[00:28:54] <strong>swyx:</strong> So</p><p>[00:28:54] <strong>Fan-yun Sun:</strong> skins</p><p>[00:28:55] <strong>swyx:</strong> for worlds, let&#8217;s call it</p><p>[00:28:56] <strong>Fan-yun Sun:</strong> skins, let&#8217;s call it skin for worlds. I,</p><p>[00:28:58] <strong>Vibhu:</strong> it&#8217;s also like, you can call it skin, you can call it [00:29:00] customization. You can play it how you want, right?</p><p>[00:29:01] <strong>Fan-yun Sun:</strong> Yeah, exactly. And I think another thing that we really pointed out specific specifically in this blog is the programmability of it, right?</p><p>[00:29:09] So what this means is that this render historically render is always a derivative of the game state, right? You&#8217;re saying, oh, here&#8217;s the game state, I&#8217;m rendering out a frame. But here I&#8217;m saying actually this render can be part of the gameplay loop. I can say something along the lines of, if upon getting 10.</p><p>[00:29:26] Apples, I&#8217;m gonna, my weapon of choice, my bullet&#8217;s gonna turn into apples. And that&#8217;s, that&#8217;s possible because we can say, we can basically dynamically have certain game state trigger the, the preconditions to the render such that the rendering is now part of the game loop too. One thing is to just say, okay, it&#8217;s, it&#8217;s, it&#8217;s the appearance.</p><p>[00:29:47] But the second thing is also to say there&#8217;s these novel interactions that are possible because this render now has actually priors of the world.</p><p>[00:29:57] <strong>swyx:</strong> It is up to the artist to figure out what to do with it.</p><p>[00:29:59] <strong>Fan-yun Sun:</strong> It [00:30:00] is up to the creators. Yes.</p><p>[00:30:01] <strong>swyx:</strong> Yeah.</p><p>[00:30:01] <strong>Fan-yun Sun:</strong> And I also think that&#8217;s actually another big argument that we&#8217;re making and the reason that we&#8217;re picking, taking the bet we&#8217;re baking is that a lot of the times, whether it&#8217;s for embody AI gaming, like you want a layer where human can inject their intentions.</p><p>[00:30:15] So, for example, let&#8217;s just say in the context of gaming, it&#8217;s obviously like my creative intent, but maybe in the context of embodied ai, it&#8217;s like, oh, like I take this foundational policy and I want to actually fine tune it to deploy in my house. So you want to almost say, inject, have a layer where human can say, oh, here&#8217;s the distribution of things I want to create to achieve my goal.</p><p>[00:30:35] And I think 3D graphics as it as it is today, is basic, the layer for people to say, Hey, what do I care about in this world? And it allows, basically human intent to be expressed in these worlds much more explicitly and distributionally as opposed to just saying, Hey, I&#8217;m gonna generate like, arbitrary.</p><p>[00:30:54] And it&#8217;s like just prompts,</p><p>[00:30:55] <strong>swyx:</strong> it&#8217;s one of those things where like, I think you, you&#8217;re going to build up a series of models, right? [00:31:00] This is just one of, this is probably like the highest utility or heaviest, frequency one, I don&#8217;t dunno what to call this. Where like you Yeah. You can immediately drop this in on any game and you don&#8217;t need anything else that.</p><p>[00:31:10] That you guys do. But, I, I could see, I could see that I think the, the human intent is something that people are not even used to because we&#8217;re so used to static worlds or, worlds that just don&#8217;t react, or, I don&#8217;t know. It&#8217;s, it, you&#8217;re kind of blowing my mind right now with like, I&#8217;m, I wonder if you&#8217;ve talked to people at GDC Hmm.</p><p>[00:31:27] And what are they gonna do with it?</p><p>[00:31:30] <strong>Fan-yun Sun:</strong> Yeah. Now the stance that we take on this front is like, we&#8217;re not gonna be more creative than our users to ship</p><p>[00:31:35] <strong>swyx:</strong> it out.</p><p>[00:31:35] <strong>Fan-yun Sun:</strong> Yeah. But we wanna make sure that we&#8217;re building things in a way that really allows them to express their intent.</p><p>[00:31:41] <strong>swyx:</strong> The thing that you said about, here&#8217;s the distribution that I want.</p><p>[00:31:45] I think text may be too low of a bandwidth to. To really demonstrate, because I, I, there, I&#8217;m, I&#8217;m probably just gonna want to drop in a bunch of, reference assets and then you can figure it out from</p><p>[00:31:58] <strong>Vibhu:</strong> there. But you probably wanna do a, a mixture of [00:32:00] both, right? Like you throw in a few images. I wanted this style.</p><p>[00:32:02] Yeah. I want it to look like this. So it, it&#8217;s, it&#8217;s a mixture, right?</p><p>[00:32:05] <strong>Chris Manning:</strong> I, I think it&#8217;s a mixture. I mean, yeah, I mean there&#8217;s clearly a visual component of this, and it&#8217;s not that, everything can be text. &#8216;cause of course you want to give a visual look, but there&#8217;s also a massive amount of giving the overall picture of the look of the world and the behavior of things that you can express in a few words of text.</p><p>[00:32:32] And it be very time consuming and difficult to do via visual means. So I think, yeah, you want a combination of both.</p><h2>[00:32:40] Evaluating World Models</h2><p>[00:32:40] <strong>Vibhu:</strong> So one question I kind of have is, how do we go about evaluating world models? So like, there&#8217;s many axes, right? One is like, okay. I have preferences. How well do we adhere to prompts? One is the simulation.</p><p>[00:32:50] One is like do things, is there core logic that&#8217;s broken? So coming from we know how to evaluate diffusion, there&#8217;s fidelity, there&#8217;s [00:33:00] stuff like that. But what are some of the challenges that most people probably aren&#8217;t thinking about?</p><p>[00:33:04] <strong>Fan-yun Sun:</strong> Yeah, I think this is like a great question and probably one of the hardest questions in role models because like, I think it always comes back to what are you building this role model for?</p><p>[00:33:13] And depending on your end goal and purpose, the evaluation should defer. So in the context of games, then the most direct way of measuring is how much behind are people actually spending in this world that you create? And if your goal is to say, for example, in the context that we just talked about, like, hey, deploying, deploying action in body, a agent, then your, your end.</p><p>[00:33:33] Metric is then, okay, after training in these worlds that you generate how robust it is to when you actually deploy to the target environment. But then, it&#8217;s, it&#8217;s hard to measure these end metrics. So today people have like these proxy metrics that I call that basically try to measure what we really care about, which is the end metrics, but then frankly it&#8217;s different for every use case.</p><p>[00:33:57] Yeah,</p><p>[00:33:57] <strong>Vibhu:</strong> which seems like quite a challenge, right? Like in [00:34:00] in language models or video models. Image models, your benchmarks are proxies, right? People aren&#8217;t actually asking instruction, following tool use questions. They&#8217;re proxies of how well it will do downstream. But for this, so like, should teams, should companies have their own individual benchmarks outside of games?</p><p>[00:34:16] If you think of stuff like, okay, video production, movies, stuff like that, that also want to use world models. Should, should they sort of internalize like. Their own proxy. Is this something you guys do? Where, where does that connect</p><p>[00:34:28] <strong>Chris Manning:</strong> go? Yeah, I think this whole space is extremely difficult as things are emerging now.</p><p>[00:34:35] And I mean, it&#8217;s not only for world models, I think it&#8217;s for everything including text-based models, right? &#8216;cause in the early days it seemed very easy to have good benchmarks &#8216;cause we could do things like question answering benchmarks and could you answer the question based on these documents and the various other kinds of, do pieces of logical reasoning or math.</p><p>[00:34:58] But again, these are sort of. [00:35:00] And there were sort of visual equivalents of things like object recognition, right? For these small component tasks. These days so much of what people are wanting to do also with language models is nothing like that, right? You&#8217;re wanting to, have an interaction with the language model and get some recommendations about which backpack would be best for you for your trip in Europe next month.</p><p>[00:35:25] And it&#8217;s not the same kind of thing, right? And it&#8217;s not so easy to come up with a benchmark as to does this large language model give you an effective interaction for guiding you in a good way for shopping, right? So, and it&#8217;s the same problem with these world models. So if we take the game design case, well success is that a game designer can.</p><p>[00:35:57] Produce what they are [00:36:00] imagining in a reasonable amount of time. And that&#8217;s really the kind of macro task. That&#8217;s a very hard thing to turn into a benchmark and I think a lot of this is actually going to turn into people walking, walking with their feet. Right? I mean, I guess that&#8217;s what&#8217;s happening, at the large language model level, right?</p><p>[00:36:23] When people are choosing to use, GPT five or Gemini or clawed, individuals are trying out these different models and deciding, oh, I like the kind of answers that GT five gives me, or no, I feel like I get more accurate detail from Claude, right?</p><p>[00:36:43] <strong>Vibhu:</strong> It&#8217;s a lot of</p><p>[00:36:43] <strong>Chris Manning:</strong> vitech, a lot of people just using it.</p><p>[00:36:45] It&#8217;s vibe checking. I realize that, but it&#8217;s actually whether. People feel it&#8217;s giving them utility in what they want. Right.</p><p>[00:36:52] <strong>Vibhu:</strong> And the the interesting thing there is like a lot of people prefer the visual, right? This looks pretty, which is not the objective of what this is [00:37:00] for, right? It&#8217;s if a, if a game designer is working on something, they care about the game engine, right?</p><p>[00:37:04] The state, it&#8217;s, it can look whatever. You can fix that up later. Or you can have a really good game state and you can quickly edit it to 20. 20 different versions, like Keep State,</p><p>[00:37:14] <strong>Chris Manning:</strong> right?</p><p>[00:37:14] <strong>Vibhu:</strong> So</p><p>[00:37:14] <strong>Chris Manning:</strong> that&#8217;s a really important distinction, for and for speaking to Moon Lake strength, right? So, yeah, great visuals are lovely to look at for a few seconds, but gains are really all about the concept, the game play.</p><p>[00:37:33] And a lot of the time that doesn&#8217;t actually even require great visuals. I mean, there are just lots of very successful games which have relatively primitive visuals, and there are other games where people have spent millions producing photo realistic, visuals, and the game sucks, right? So, keeping those two axes apart is really important in thinking about what&#8217;s important in a [00:38:00] world model for different uses.</p><p>[00:38:02] <strong>swyx:</strong> This conversation is reminding me of some game review and fiction discussions I&#8217;ve, had in my sort of non-AI related life. Some, for some people might know Brandon Sanderson, who&#8217;s a very famous, fiction author, had, is is a big game reviewer. And he, he&#8217;s a big fan of video games where you change one thing about a normal what you might assume about, about the world.</p><p>[00:38:22] For example, Baba is you, I don&#8217;t know if you might have come across that, where like the rules change as you play the game. And also like where, you can do things like reverse time selectively or like change gravity selectively. And I think this is also reminds, reminds me of other kinds of world models that are created by authors.</p><p>[00:38:38] Where Ted Chang is, is my typical example where he&#8217;ll take the world that, you know today, but change one thing about it and, but then create a consistent world based on that. Which is long-winded answer of me to, of. For me to say is it&#8217;s it easy to create alternative roles that don&#8217;t exist, but you change one thing and then let&#8217;s, let&#8217;s run a whole bunch of people through it to see if it works.</p><p>[00:38:58] <strong>Chris Manning:</strong> My first dance will [00:39:00] be, that seems a lot easier and more conceivable to do using Techn technology like Moon Lakes than with some of the other world models out there, where the sun can actually make it happen. I&#8217;ll let him give a second answer.</p><p>[00:39:15] <strong>swyx:</strong> If I guess for you, you&#8217;re constrained by the game engine tool, right?</p><p>[00:39:18] Like at the end of the day, that&#8217;s the, that&#8217;s the thought, partner that you have. If I ask for something where like, if it never is allowed to reverse time or if gravity only ever works one way, then well that&#8217;s it. But sometimes gravity might change,</p><p>[00:39:33] <strong>Fan-yun Sun:</strong> but it&#8217;s a lot easier to change with code as opposed to a model that is learned primarily on data of.</p><p>[00:39:42] Real world and virtual worlds that are, I guess, like for example, junior, like there&#8217;s actually trained on a lot of real world data and a lot of virtual gaming data, and it&#8217;s hard to say maybe it&#8217;s easier to say, okay, I wanna change the visuals in like the time period of, of the world. Like, you can&#8217;t change gravity, for [00:40:00] example.</p><p>[00:40:00] <strong>Vibhu:</strong> I feel like you can to light bounds, right? Everything comes down to like, code is a better way to execute it, but the models aren&#8217;t that diverse and creative, right? You can say, okay, make gravity slower. It can do that, but it&#8217;s limited to your representation of how you text it out, right? Like they&#8217;re, they&#8217;re only gonna do a few iterations, whereas programmatically, if there&#8217;s a game engine under the hood, you can kind of go wild, right?</p><p>[00:40:22] So one of the, I dunno, one of the limitations of most models is that they&#8217;re very overtrained to one style. Right. And extracting diversity is pretty difficult. At least that&#8217;s something we&#8217;ve seen.</p><p>[00:40:35] <strong>Fan-yun Sun:</strong> I mean, are there examples you have in mind where you Existing models? Yeah. Like it would be easier to do that&#8217;s not using code.</p><p>[00:40:43] Certain types of creative intent or like transition state transitions,</p><p>[00:40:47] <strong>swyx:</strong> Clipping, other models, other wo models are very good at clipping through things. Clipping my, my, my legs clipping through a rock because it&#8217;s, it&#8217;s just, it&#8217;s just bad. [00:41:00] Like, you would have to struggle very hard with your stuff to actually make that happen.</p><p>[00:41:04] Which I think is maybe a topic that you actually prepared on, Gian Splatting versus, the other stuff.</p><p>[00:41:09] <strong>Vibhu:</strong> Yeah. Yeah. It&#8217;s just for those not super familiar, right? There&#8217;s a, there&#8217;s gian splatting, there is diffusion. Like what works, what scales up. I feel like in February when Soro one came out the blog post was literally titled like,</p><p>[00:41:21] <strong>swyx:</strong> you bring it up.</p><p>[00:41:22] You never know.</p><p>[00:41:23] <strong>Vibhu:</strong> World, world, video generation models are world simulators. It&#8217;s super bitter lesson pilled. Yeah, emer, a lot of it is emergence, right? So, not to go through their blog post, basically their whole thing was as you scale up all this consistency, all this stuff just kind of solves, it&#8217;s a very simple premise, right?</p><p>[00:41:41] They just scaled up, diffusion, and from there, this is, this is Feb 2024, how much can we, it&#8217;s already been two years, which is basically five years. How much more in AI time do we need to just scale up or, or do we hit a data cap? But I think we already talked about this a lot, right? Like this is back to the beginning discussion of what&#8217;s [00:42:00] appropriate for the time.</p><p>[00:42:01] And that seems like your approach, right?</p><p>[00:42:03] <strong>Fan-yun Sun:</strong> Yeah. The point I&#8217;m trying to make is that they&#8217;re very many, many different types of world simulators and like having a world simulator that can produce pixel coherency is very, very useful for games and, marketing and all these things, but it&#8217;s not as useful as people think when it comes to causal reasoning.</p><p>[00:42:25] When it comes to embodied ai. Yeah, like it this title is true. We&#8217;re not saying that it&#8217;s, it&#8217;s like, not a great world simulator, but actually in the blog that we, we, we, we wrote, the bet is more so that there are gonna be disproportionately large share of value of real world tasks or, and virtual tasks where high resolution pixel fidelity is not needed.</p><p>[00:42:47] Yes. Video models have their values.</p><p>[00:42:50] <strong>swyx:</strong> Yeah. This is at the absolute limit of my physics understanding, but one example that comes to mind is basically having to solve like ba the equivalent of a three [00:43:00] body problem in a deterministic Well, where the video models, which is approximated good enough. Yeah.</p><p>[00:43:08] Right. Like there&#8217;s, there&#8217;s some point at which your approach kind of runs into like the you now have to simulate the world. Please, thank you very much. And like you&#8217;re trying to do that, but only to the extent that the game engine lets you and like game engines cannot do some things.</p><p>[00:43:23] <strong>Fan-yun Sun:</strong> Yeah, no, I mean, I think the interesting or more technical question here actually is where do you draw the boundary between.</p><p>[00:43:32] What&#8217;s handled with, let&#8217;s say, diffusion prior and what, when? What&#8217;s handled with symbolic priors?</p><p>[00:43:38] <strong>swyx:</strong> Yes.</p><p>[00:43:38] <strong>Fan-yun Sun:</strong> Okay.</p><p>[00:43:38] <strong>swyx:</strong> Okay.</p><p>[00:43:39] <strong>Fan-yun Sun:</strong> Right. Let&#8217;s go there. Because this, this boundary can actually be fluid. Like I think like maybe what you&#8217;re trying to get at is like, okay, people are saying pixel prior, everything. But what we&#8217;re saying is, okay, there&#8217;s a boundary that we draw where this is where we think provides the most economical value for the domains and things that we care about today.</p><p>[00:43:59] [00:44:00] And I actually do think, and it&#8217;s something that we do internally all the time, which is like, okay, given new equations that we learn or new elements of the world and that we, we learn, or maybe some other knowledge that we acquire in the process of developing the models. Should we still be maintaining this line exactly as it is today?</p><p>[00:44:22] Or should we move it a little bit left or a little bit right? Right. Like sometimes that we realize that, oh, like maybe customers or, or folks like want certain things that are better handled with preop pryor as opposed to, symbolic prior than,</p><p>[00:44:34] <strong>swyx:</strong> yeah. Your, your skin thing is a, is a example moving it, right.</p><p>[00:44:37] Yeah.</p><p>[00:44:37] Or left. Yeah,</p><p>[00:44:37] <strong>Fan-yun Sun:</strong> exactly.</p><p>[00:44:38] <strong>swyx:</strong> I dunno what the, the left right is.</p><p>[00:44:39] <strong>Fan-yun Sun:</strong> Yeah, yeah, yeah. No the, the model.</p><p>[00:44:42] <strong>swyx:</strong> Yes.</p><p>[00:44:42] <strong>Fan-yun Sun:</strong> Actually we have a few iterations of them. They&#8217;re actually at slightly different</p><p>[00:44:45] <strong>swyx:</strong> I know boundaries. You should, you should do that. That&#8217;s a cool dimension to show.</p><p>[00:44:49] <strong>Fan-yun Sun:</strong> Yeah.</p><p>[00:44:50] <strong>swyx:</strong> Is quantum mechanics the diffusion prior of our world?</p><p>[00:44:55] Right. It&#8217;s like that&#8217;s the boundary of classical mechanics versus quantum. Right? Like, that&#8217;s it. At one [00:45:00] point God plays dice and the other point doesn&#8217;t.</p><p>[00:45:02] <strong>Fan-yun Sun:</strong> I dunno if Chris, you wanna say it, but I think, I think generally I feel like physics is better with symbol P priors.</p><p>[00:45:08] <strong>Chris Manning:</strong> Even quantum physics.</p><p>[00:45:09] <strong>Fan-yun Sun:</strong> Even quantum physics.</p><p>[00:45:11] <strong>swyx:</strong> Yeah. This is starts against to, MLST territory is, is what I call it, where, he, he likes to get philosophical. We, we we&#8217;re quite friendly.</p><p>[00:45:18] <strong>Vibhu:</strong> I mean, we need to get, we need to get singularity. I heard some of that.</p><p>[00:45:23] <strong>swyx:</strong> No, no, I think that is actually really helpful and man, I just want you to productize this like, as a product guy, I&#8217;m just like, oh, also</p><p>[00:45:32] <strong>Vibhu:</strong> a gamer, I</p><p>[00:45:33] <strong>swyx:</strong> wanna, it&#8217;s like a researcher, like, it&#8217;s cool.</p><p>[00:45:35] Like this is a, the theoretical, like you have a very good, I don&#8217;t know, like the way of thinking about these things, but I just wanna see you like, express it. I do think like your fundamentally things when, when you leave open new tools, like, okay, use, use human intent to incorporate it into how you render.</p><p>[00:45:52] Artists are gonna have to take like two to three years to figure out what to do with this. And you just don&#8217;t know.</p><p>[00:45:57] <strong>Chris Manning:</strong> Right. But I think, this is, [00:46:00] gives a much more approachable and controllable world for the society, which is the beauty, the beauty of, NLP, that that will enable it to be adopted and used.</p><p>[00:46:10] And we are very hopeful about that. Yeah,</p><p>[00:46:13] <strong>Fan-yun Sun:</strong> yeah. Yeah. I mean, we are, we are very focused actually on commercialization in the sense that like we do, we do really believe in the data flywheel app approach. Yeah. Where, we put this in the hands of the creators and the users and then they will teach us when, what capability our model should improve.</p><p>[00:46:27] And that&#8217;s why we are, we are actually, like products and beta</p><p>[00:46:31] <strong>swyx:</strong> Yeah. Focusing on gaming. What, what&#8217;s like the adjacent thing to gaming</p><p>[00:46:34] <strong>Fan-yun Sun:</strong> embody adjacent, basically. So maybe we can, we can I&#8217;ll maybe start with where we see the platform in three years. Yeah. Which is like, okay. The users would tell us what they want to achieve.</p><p>[00:46:45] The end goal could be, Hey, I just, I wanna make something to teach my kids the value of humility. Or it could be, Hey, I wanna fine tune my, drones to be really good at rescue situations. I could be vacuum robots. I want to like train [00:47:00] my manipulation or like vacuum robot to be very robust to my office, right?</p><p>[00:47:04] But it&#8217;s like, whatever it is, scenario robust to</p><p>[00:47:06] <strong>swyx:</strong> my office</p><p>[00:47:07] <strong>Fan-yun Sun:</strong> or like navigate very robustly in my office. But then it&#8217;s like, whatever end goal that you want, our role model will say, okay, given what you want to achieve, let me generate a distribution of environments such that I can train and evaluate whatever it is you want.</p><p>[00:47:24] Yeah. Right. Maybe for the purpose of games, it&#8217;s just the end simulation and that&#8217;s the end product for certain policies. It&#8217;s like I can train it within these environments and then help you see where your policy is failing or not. Yeah. And then, so I think,</p><p>[00:47:37] <strong>swyx:</strong> so in that case, much more of a training tool.</p><p>[00:47:40] Than in other training</p><p>[00:47:41] <strong>Vibhu:</strong> evaluation? Both. Right?</p><p>[00:47:43] <strong>swyx:</strong> Sure. Same. Same thing.</p><p>[00:47:43] <strong>Fan-yun Sun:</strong> Yeah, same thing. I think it&#8217;s just this role model that allows people to train any policy that can act in any multimodal environments.</p><p>[00:47:51] <strong>swyx:</strong> Would it be harder to reward hack? Is there an angle here where it is harder to reward hack? Like it&#8217;s just, I&#8217;ll just put it generally because I think that&#8217;s a, that&#8217;s obviously a key [00:48:00] problem that a lot of people face when in training agents in these environments, and I don&#8217;t know, can you solve it?</p><p>[00:48:07] <strong>Chris Manning:</strong> I think not necessarily. To the extent that there&#8217;s a mis specified reward that. It seems like it could be hacked in a more symbolic world or in a more pixel based world. I dunno if Sun&#8217;s got any thoughts, but I don&#8217;t think that&#8217;s really being solved.</p><p>[00:48:26] <strong>swyx:</strong> The other thing that comes to mind is just you could just build a better sawa as a video generator model, right?</p><p>[00:48:31] Because then you, you would move the diffusion, side a bit more further to the right. I think if I got the directionality correct. And that&#8217;s it.</p><p>[00:48:40] <strong>Vibhu:</strong> It&#8217;s better on domains, right? Like on consistency over now, or for sure it exists versus something doesn&#8217;t, right.</p><p>[00:48:46] <strong>Chris Manning:</strong> So</p><p>[00:48:46] <strong>swyx:</strong> yeah. Yeah. Is</p><p>[00:48:49] <strong>Vibhu:</strong> is a question more like, like</p><p>[00:48:51] <strong>swyx:</strong> I&#8217;m just riffing on like, how do you, what can you build, you know?</p><p>[00:48:54] Oh, with the stuff that you have. I do think that the minor, the academic does go immediately to training [00:49:00] and in eval evaluation, but like art tends to take unusual directions. Like you might end up,</p><p>[00:49:06] <strong>Chris Manning:</strong> okay. Yeah. But the question is, can you use this piece of software to develop compelling gameplay and. I don&#8217;t think you can take SOAR and produce compelling gameplay, right?</p><p>[00:49:19] If you want to have a world that you can wander around in a bit, you are good. But what are your abilities to have gameplay mechanics implemented the way you&#8217;d like them to be and to have things stay, with the long-term history of your gameplay that influences future actions. I think there&#8217;s just nothing there for that.</p><p>[00:49:39] <strong>swyx:</strong> Yeah, I do tend to agree. I, I&#8217;m just trying to sort of test the boundaries. I would also make the observation that as AAA games industry has developed the line between what is a movie and what is a game has blurred. And you, you, you do end up basically producing a two hour movie as part of your game.</p><p>[00:49:57] <strong>Fan-yun Sun:</strong> No, honestly, there, there&#8217;s so many actually [00:50:00] applications in adjacent markets that our world model can go into. Yeah. But yeah, it, it&#8217;s sort of fun to riff, riff on. Although on the execution side, we we, we need to stay focused with like, okay, what are the capabilities we want to unlock over time?</p><p>[00:50:11] And there&#8217;s a roadmap for that. But yeah, if we&#8217;re just riffing on sort of like the possibilities, I feel like, whether it&#8217;s endless Yeah, it&#8217;s like classic</p><p>[00:50:18] <strong>swyx:</strong> and the embedding for a possibility and endless in my mind, it&#8217;s very close. Yeah. I do wanna, focus on one, like weird choice. I, I don&#8217;t know if it&#8217;s weird.</p><p>[00:50:28] Maybe I&#8217;m, I got something here. Audio, right? You could have just said no audio And audio in my mind has a lot of recursion, whereas in video you can just do recasting and that&#8217;s much computationally much simpler. Audio just seems way harder. I don&#8217;t know if you wanna just comment on just the special 3D audio.</p><p>[00:50:46] Problem. Did you really have to do it? I guess you do to be immersive, but like a lot of people do treat it as like, well, you just stick a, a tt S model on top of</p><p>[00:50:57] <strong>Vibhu:</strong> Well, there&#8217;s a lot more to game audio than [00:51:00] just speech. Right. It&#8217;s not just</p><p>[00:51:01] <strong>swyx:</strong> tts. Yeah. Tts. S Fxt, GM Spatial in my mind Echoes</p><p>[00:51:06] <strong>Chris Manning:</strong> Yeah.</p><p>[00:51:06] <strong>swyx:</strong> And reflections.</p><p>[00:51:07] And I, I don&#8217;t even know what&#8217;s, what else? I don&#8217;t know what, what other problems in this space.</p><p>[00:51:13] <strong>Fan-yun Sun:</strong> Yeah, I think this point like the, it&#8217;s sort of a more, more pointing to the benefits of using an game engine as a tool that&#8217;s available to the model, right? Because like part of the spatial audio is from the code that is underlying the simulation.</p><p>[00:51:32] And while we do give our model access to other types of audio models as. Tools.</p><p>[00:51:39] <strong>swyx:</strong> None of them would be spatial, I think.</p><p>[00:51:41] <strong>Fan-yun Sun:</strong> But that&#8217;s exactly sort of more 0.2. We&#8217;re giving our model an abstraction or a suite of tools such that it&#8217;s able to achieve that. And you can argue that sort of spatial is like a, like a emergence out of the, the tools that we and abstraction that we provide to the agents.</p><p>[00:51:59] And I think that&#8217;s the beauty of [00:52:00] this, this, this approach is like there&#8217;s a lot of things kind of like how human&#8217;s built technology and they&#8217;re like Lego blocks that build on top of each other. And it&#8217;s the same thing here. There&#8217;s gonna be things that sort of just sort of emerges from being able to put these things together in like combinatorially interesting ways,</p><p>[00:52:14] <strong>Chris Manning:</strong> right?</p><p>[00:52:15] So this integrated audio model exploits the understanding and semantics of the Moon Lake world, right? And whereas in general for the Gen AI video models. There&#8217;s no actual integration across to audio at all, right? That someone might stick some music or stick a soundscape or whatever else on top of their video.</p><p>[00:52:44] So it&#8217;s not a silent video, but they&#8217;re in no way connected into a consistent world model. And there&#8217;s nothing that&#8217;s okay. An action is happening in the video. Therefore there should be a sound that&#8217;s [00:53:00] coming from this part of the visual field.</p><p>[00:53:03] <strong>swyx:</strong> Yeah.</p><p>[00:53:03] <strong>Vibhu:</strong> Is that different than Sora too? Does it not have audio?</p><p>[00:53:06] Not to say it&#8217;s not like</p><p>[00:53:08] <strong>swyx:</strong> amazing</p><p>[00:53:08] <strong>Vibhu:</strong> isn&#8217;t a spatial</p><p>[00:53:09] <strong>swyx:</strong> audio.</p><p>[00:53:09] <strong>Vibhu:</strong> It doesn&#8217;t,</p><p>[00:53:10] <strong>swyx:</strong> no. I&#8217;ve played around it with it enough. It just sounds like someone put an 11 laps voice on top of it and just tried to do the lip sync.</p><p>[00:53:18] <strong>Vibhu:</strong> Oh, yeah. I&#8217;ve seen, okay. Generate a dog at the beach and reactions to big wave and move</p><p>[00:53:23] <strong>swyx:</strong> around.</p><p>[00:53:23] It&#8217;s definitely like, so have the dog, have the dog move away from camera and see if the, the song goes down. It doesn&#8217;t. &#8216;Cause they don&#8217;t have facial audio.</p><p>[00:53:32] <strong>Fan-yun Sun:</strong> We do want to basically like we, our moral model, like the one we&#8217;re training is basically towards the goal of having a combined latent representation across all these different modalities.</p><p>[00:53:42] Right? Such that it can like reason across these different modalities. So for example, if I close my eyes and like you play a video, you play a sound of like a car skidding away from me. I almost can like, visually extrapolate that trajectory in my mind. And I think that type of capability, we want our model to be able to reason, right?</p><p>[00:53:59] And that&#8217;s the reason that [00:54:00] we&#8217;re sort of taking this multimodal reasoning approach. It&#8217;s like we want this combine late in space that can</p><p>[00:54:05] <strong>swyx:</strong> Yeah. Oh, you said late in space. We like that. Here we have to play the, the bell Every time that someone says late in space, no, you gotta train daredevil one. Where you, you, you, it&#8217;s only audio, but you have to work out.</p><p>[00:54:15] Where everything is.</p><p>[00:54:19] Cool. I I think that that was, that was about it for our Moon Lake coverage. I do think that we have like a couple of, Chris Madden questions on, on IR and, just any, any other sort of attention topics or n NLP topics.</p><p>[00:54:31] <strong>Vibhu:</strong> Okay.</p><p>[00:54:31] <strong>swyx:</strong> Go ahead.</p><h2>[00:54:32] Chris Manning&#8217;s Journey: From NLP to World Models</h2><p>[00:54:32] <strong>Vibhu:</strong> Well, no, I mean, yeah, it&#8217;s just fun. We talked a bit about how you guys met, but you basically, you, you were like the godfather of NLP per se, right?</p><p>[00:54:39] You spent the whole career from early embeddings, early early attention. You did 2015 attention for machine translation, everything. You, you had information retrieval, so RAG before rag, we just wanna shout that out and admire a lot of that. Right? So what prompted the switch over to world models?</p><p>[00:54:56] How, how&#8217;d all that come about?</p><p>[00:54:58] <strong>Chris Manning:</strong> To some answer it [00:55:00] is, the enthusiasms and creativity of students, but there&#8217;s a bit of a history there, right? So, yeah. So clearly most of my career has been doing stuff with language and how I got into research was thinking, ah, this is just so amazing how humans can produce speech and understand each other in real time.</p><p>[00:55:21] And somehow they managed to learn languages from their kids. How could this possibly happen? And so, yeah, starting off I was very focused on language, but as it sort of got into the 2000 and tens, I started, going, I&#8217;d been working on question answering, and then I started to get, interest in visual question answering.</p><p>[00:55:42] And that was an area where it was very noticeable. That the visual understanding was bad. Right. These were the days when like, it sort of seemed like there&#8217;s almost no visual [00:56:00] understanding. You were just getting answers that came from priors. So, if you asked how many people are sitting at the table, it&#8217;d always answer two regardless of how many, how many people you could see in the picture.</p><p>[00:56:11] And so it seemed like, oh, these models actually aren&#8217;t able to get semantic information outta IMA images. And so I was interested in that problem and tried to work more on that. And so then that required. Knowing more about what&#8217;s happening in vision and how you can represent visual information.</p><p>[00:56:34] And then things start, there started to be this revolution of, doing generative AI images. And then I had students that started looking at that before the era of Moon Lake. I was also working with Demi Gore, who founded pika. And so, and</p><p>[00:56:50] <strong>swyx:</strong> Ian obviously</p><p>[00:56:52] <strong>Chris Manning:</strong> with gans. Yeah. Though Ian was never my student, but yeah, Ian I was very aware for the, the whole decade there of Ian with Gans.</p><p>[00:56:59] [00:57:00] Yeah. And I mean, Ian was a Stanford undergrad, but yeah,</p><p>[00:57:03] <strong>Vibhu:</strong> richard des u.com, I believe he was your student.</p><p>[00:57:06] <strong>Chris Manning:</strong> Yeah. Yeah. And there were, there were links across at that stage as well. So there were several papers in that era of doing, I mean, so Andre Cap was a, PhD student at the same time as Richard.</p><p>[00:57:20] And so there was some joint language vision work in that era as well. It seems kind of ancient by modern standards, but yeah, we&#8217;re trying to go from sort of textural dependency graphs to visual scenes</p><p>[00:57:32] <strong>Vibhu:</strong> at a time. The glove embeddings really took over a lot of. T-F-I-D-F, like one hot encoding, all that.</p><p>[00:57:38] The early vision language models we saw were like lava style adapters, right? It&#8217;s, it&#8217;s technically still just embedding latent space. Let&#8217;s add image, let&#8217;s like mixed modality. So, and that, that&#8217;s one of the things you super put out there too, right?</p><p>[00:57:51] <strong>swyx:</strong> Yeah.</p><p>[00:57:51] <strong>Vibhu:</strong> Yeah.</p><p>[00:57:52] <strong>swyx:</strong> Yeah.</p><h2>[00:57:52] Hiring, Closing &amp; The Name &#8220;Moon Lake&#8221;</h2><p>[00:57:55] <strong>swyx:</strong> Well, thank you for all of that. Thank you for all advancing the worlds on, world modeling.</p><p>[00:57:56] I honestly, do think that if people deeply understand everything we just [00:58:00] covered, they will see what&#8217;s coming. I think you guys have, made some, a really significant contribution here. What are you hiring for? What is the, what do people find? We, we agreed that the CTA was a hiring call.</p><p>[00:58:10] Yeah. Don&#8217;t we have a GI You don&#8217;t need, you don&#8217;t need engineers anymore, right?</p><p>[00:58:14] <strong>Fan-yun Sun:</strong> Yeah. On the model side we are actually striving towards basically a self-improving system. But what that means is that we need people to set up the self-improving system. So more, more specifically people who have the intersection of knowledge within co-generation and computer vision and graphics, right?</p><p>[00:58:30] Yeah. That&#8217;s, that&#8217;s sort of the core research background that we look for within OTM and, and the majority of the team today do have like both backgrounds.</p><p>[00:58:38] <strong>swyx:</strong> When you say computer vision and graphics, are they the same thing or is it computer vision one thing, graphics, another thing. And how intertwined are they?</p><p>[00:58:46] <strong>Chris Manning:</strong> They&#8217;re intertwined but different.</p><p>[00:58:49] <strong>swyx:</strong> Yeah.</p><p>[00:58:49] <strong>Chris Manning:</strong> And I think, this relates to some of the themes that we&#8217;ve been talking about, that the more explicit underlying [00:59:00] world models that are being constructed inside Moon Lake really draw on the computer graphics tradition. And so it&#8217;s then combining that with the visual understanding of vision.</p><p>[00:59:16] <strong>swyx:</strong> Got it. Yeah. All right. So you&#8217;ve written a game engine, you&#8217;re come talk to us, right?</p><p>[00:59:21] <strong>Fan-yun Sun:</strong> Oh yeah, definitely. Definitely. But I do think that the line is blurred, like increasingly blurred these days where it&#8217;s like if you have a general understanding of group vision and graphics,</p><p>[00:59:31] <strong>swyx:</strong> I think for your standards it is, for me it feels like vision is, is.</p><p>[00:59:35] I&#8217;ll leave that to the big labs graphics. I, I, I can get that, you would want to do that from more first principles, but vision, there&#8217;s so many vision models off the shelf that I can take, but probably not good enough for your</p><p>[00:59:45] <strong>Fan-yun Sun:</strong> I see, I see. If, if you&#8217;re sort of like making that distinction then maybe we, we care a little bit more about having graphics</p><p>[00:59:51] <strong>swyx:</strong> knowledge.</p><p>[00:59:51] Yeah, exactly.</p><p>[00:59:52] It could be like, sometimes a hiring call can be as simple as like, if you know the answer to blah, you should talk to me. Like the sort of core known hard [01:00:00] problem in, in your world.</p><p>[01:00:01] <strong>Fan-yun Sun:</strong> Ah, I see. Yeah. In that case, if you, yeah, definitely. If you&#8217;ve written a game engine before, if you&#8217;ve rld a variety of coding models on different objectives, like</p><p>[01:00:13] <strong>swyx:</strong> easy,</p><p>[01:00:13] Many of those, yeah.</p><p>[01:00:14] <strong>Fan-yun Sun:</strong> If you&#8217;ve done multimodal lean space alignment, I, I intentionally include</p><p>[01:00:20] <strong>swyx:</strong> space.</p><p>[01:00:20] <strong>Fan-yun Sun:</strong> Again,</p><p>[01:00:21] <strong>swyx:</strong> a poor editor has a thing every time. Yeah. Lean space alignment. Honestly. Is it that hard?</p><p>[01:00:26] I, I, there&#8217;s some scripts out there that I&#8217;ve saved for the day. I someday have to do it, but I don&#8217;t have to do it.</p><p>[01:00:31] But it&#8217;s</p><p>[01:00:32] <strong>Fan-yun Sun:</strong> done, I think. Yeah. There, there&#8217;s, there&#8217;s a versions of that that are done. But I, I think we are aligning audio, text, language and video. Yeah. Right. Like, and basically we have these role models that are able to act as agents to like act in these worlds and extract long horizon videos and encoding that back to the model to sort of self-improve.</p><p>[01:00:52] So it&#8217;s an insanely exciting, but also technically challenge problem. Yeah. So people who wanna do their lives best work, that only [01:01:00] makes a place.</p><p>[01:01:01] <strong>Vibhu:</strong> How big are you guys? Where are you guys based?</p><p>[01:01:02] <strong>Fan-yun Sun:</strong> We&#8217;re currently based in San Mateo, although we&#8217;re moving up to sf. We&#8217;re about 18 folks right now.</p><p>[01:01:08] <strong>swyx:</strong> My ending question was gonna be why, what, what is the name?</p><p>[01:01:10] What&#8217;s behind the name?</p><p>[01:01:11] <strong>Vibhu:</strong> Yeah.</p><p>[01:01:12] <strong>Fan-yun Sun:</strong> Oh,</p><p>[01:01:14] <strong>Vibhu:</strong> Very cool. Graphics and design, by the way.</p><p>[01:01:16] <strong>Fan-yun Sun:</strong> Actually at the, at the time when the, when the, when we started the company, we were thinking a lot about how do we make a company name that gives people the vibe of like, open ai, but for like, almost like industrial light and magic vibes.</p><p>[01:01:28] Wow. Because it&#8217;s like we care about creativity and using that as a funnel to solve a GI. So then we were, we, we brainstorm a lot around like Dreamworks, right? Like industrial light magic. And, so there&#8217;s a few, few basically, space of things that we feel like are very, very semantically close to the company&#8217;s identity.</p><p>[01:01:47] <strong>swyx:</strong> Yeah.</p><p>[01:01:48] <strong>Fan-yun Sun:</strong> And then it ended up being Moon Lake, partly because of the Dreamworks vibe, the Dreamworks, moon</p><p>[01:01:54] <strong>swyx:</strong> Lake.</p><p>[01:01:55] <strong>Fan-yun Sun:</strong> Exactly. Yep. So that was a little bit of that inspiration. And then the moon was sort of [01:02:00] like a, it basically was like about the. Reflection. The reflection part also implies the self-improvement loop.</p><p>[01:02:07] Wow. That we sort of like, that&#8217;s really bleed and that&#8217;s the path towards multimodal general intelligence. So that&#8217;s, that&#8217;s that. I&#8217;ll leave that as I love a good</p><p>[01:02:15] <strong>swyx:</strong> name. I love a good name. This is great. It&#8217;s a</p><p>[01:02:16] <strong>Vibhu:</strong> very</p><p>[01:02:17] <strong>swyx:</strong> good name. It&#8217;s very good. Lo I&#8217;m glad I asked the question. I will also say, one, my favorite story, books or biographies ever is, creativity Inc.</p><p>[01:02:24] With Ed Kamal&#8217;s, story about Pixar and how he, was rejected as a Disney animation artist. So then he went into computing and brute forced his way into back. No, I love that story. Yeah. Disney.</p><p>[01:02:37] <strong>Fan-yun Sun:</strong> Yeah. And Walt Disney is also like one of my favorite founders. He&#8217;s like, his, his story. Like at the time you&#8217;re like, okay, I&#8217;m gonna create this like.</p><p>[01:02:44] Immersive park. Like people can&#8217;t, don&#8217;t even have that technology to create it virtually, but they&#8217;re like, you know what, let&#8217;s just build it physically such that people can,</p><p>[01:02:50] <strong>swyx:</strong> so he is the first world modeler.</p><p>[01:02:52] <strong>Fan-yun Sun:</strong> No, I, I I tell people that like, theme parks are world models too.</p><p>[01:02:56] <strong>swyx:</strong> Mm. Yeah. Yeah. Yeah. I mean, it&#8217;s a small world or it&#8217;s [01:03:00] a, like the Epcot center with all the little, replicas of the countries.</p><p>[01:03:03] Yeah. Those are very interesting. Okay. Well thank you, we&#8217;ve covered, a huge amount. Thank you for your time and thank you for inspiring us.</p><p>[01:03:10] <strong>Fan-yun Sun:</strong> Thank you</p><p>[01:03:10] <strong>swyx:</strong> for having us. Thank you. It&#8217;s fun</p><p>[01:03:11] <strong>Fan-yun Sun:</strong> chatting. Yeah. It&#8217;s been a good time.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Perhaps only topped by the <a href="https://x.com/k1rallik/status/2033589170120110203?s=46">Pokemon Go</a> dataset!</p></div></div>]]></content:encoded></item><item><title><![CDATA[[AINews] A quiet April Fools]]></title><description><![CDATA[a quiet day]]></description><link>https://www.latent.space/p/ainews-a-quiet-april-fools</link><guid isPermaLink="false">https://www.latent.space/p/ainews-a-quiet-april-fools</guid><pubDate>Thu, 02 Apr 2026 07:04:18 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!DbYa!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73b0838a-bd14-46a1-801c-b6a2046e5c1e_1130x1130.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Some notable mid tier model releases, but thankfully most companies respected that today is an awful day to launch anything. We&#8217;ll give <a href="https://x.com/xanamini/status/2039403320247480469">points to Liquid for best April Fools joke</a>.</p><p></p><blockquote><p>AI News for 3/23/2026-3/24/2026. We checked 12 subreddits, <a href="https://twitter.com/i/lists/1585430245762441216">544 Twitters</a> and no further Discords. <a href="https://news.smol.ai/">AINews&#8217; website</a> lets you search all past issues. As a reminder, <a href="https://www.latent.space/p/2026">AINews is now a section of Latent Space</a>. You can <a href="https://support.substack.com/hc/en-us/articles/8914938285204-How-do-I-subscribe-to-or-unsubscribe-from-a-section-on-Substack">opt in/out</a> of email frequencies!</p></blockquote><div><hr></div><h1><strong>AI Twitter Recap</strong></h1><p><strong>Open-Weight Reasoning and Vision-Coding Releases: Arcee Trinity-Large-Thinking, Z.ai GLM-5V-Turbo, Falcon Perception, and Holo3</strong></p><ul><li><p><strong>Arcee&#8217;s Trinity-Large-Thinking</strong>: The biggest substantive model launch in this set was <a href="https://x.com/arcee_ai/status/2039369121591120030">Arcee&#8217;s Trinity-Large-Thinking</a>, released with <strong>open weights under Apache 2.0</strong> and positioned explicitly for developers/enterprises that want to inspect, host, distill, and post-train their own systems. Follow-up posts claim strong agentic performance, including <strong>#2 on PinchBench behind Opus 4.6</strong>, <strong>SOTA on Tau2-Airline</strong>, and frontier-level telecom results (<a href="https://x.com/latkins/status/2039370549743243353">Arcee</a>, <a href="https://x.com/MarkMcQuade/status/2039375842560872834">Mark McQuade</a>). OpenRouter highlighted the architecture as a <strong>400B total / 13B active</strong> model and made it available immediately (<a href="https://x.com/OpenRouter/status/2039369849441497340">OpenRouter</a>). Several ecosystem partners framed it as a milestone for &#8220;American open source,&#8221; including <a href="https://x.com/PrimeIntellect/status/2039401593309667727">Prime Intellect</a>, <a href="https://x.com/arimorcos/status/2039371603708919969">Datology</a>, and infra supporters emphasizing that a small team served a 400B-class model at production cost points (<a href="https://x.com/latkins/status/2039479700826071318">latkins</a>, <a href="https://x.com/willccbb/status/2039478656373076413">willccbb</a>, <a href="https://x.com/xlr8harder/status/2039389523403059257">xlr8harder</a>, <a href="https://x.com/natolambert/status/2039499358325129530">natolambert</a>).</p></li><li><p><strong>Z.ai&#8217;s GLM-5V-Turbo</strong>: <a href="https://x.com/Zai_org/status/2039371126984360085">Z.ai introduced GLM-5V-Turbo</a>, a <strong>vision coding model</strong> that natively handles images, videos, document layouts, and design drafts while preserving pure-text coding performance. The company attributes the gains to <strong>native multimodal fusion</strong>, a next-gen <strong>CogViT</strong> encoder, <strong>30+ task collaborative RL</strong>, synthetic agentic data generation, and multimodal toolchain extensions for search/drawing/web reading (<a href="https://x.com/Zai_org/status/2039371149721694639">details</a>, <a href="https://x.com/Zai_org/status/2039371144340357509">text-coding stability</a>). The model was quickly integrated into multiple downstream surfaces including <a href="https://x.com/Trae_ai/status/2039380056460730451">TRAE</a>, <a href="https://x.com/TabbitBrowser/status/2039359108747522345">Tabbit</a>, and <a href="https://x.com/arena/status/2039400189178556814">Vision Arena</a>.</p></li><li><p><strong>Falcon Perception and OCR</strong>: TII released <a href="https://x.com/dahou_yasser/status/2039242378809385331">Falcon Perception</a>, an <strong>open-vocabulary referring expression segmentation model</strong>, alongside a <strong>0.3B OCR model</strong> said to be competitive with models <strong>3&#8211;10x larger</strong>. The notable design point is an <strong>early-fusion transformer</strong> that mixes image and text from the first layer instead of relying on multi-stage pipelines and late fusion.</p></li><li><p><strong>Other model notes</strong>: <a href="https://x.com/mervenoyann/status/2039327292665561577">H Company&#8217;s Holo3</a> was highlighted as a GUI-navigation model family (<strong>A3B/35B</strong>, Qwen3.5-based, free license, Transformers support). A separate post praised a <strong>Qwen3.5 27B distill</strong> trained on <strong>Claude 4.6 Opus reasoning traces</strong>, claiming <strong>SWE-bench wins over Claude Sonnet 4.5</strong>, <strong>96.91% HumanEval</strong>, lower CoT verbosity, 4-bit local usability, and <strong>300k+ HF downloads</strong> (<a href="https://x.com/TheCraigHewitt/status/2039303217620627604">Craig Hewitt</a>).</p></li></ul><p><strong>Claude Code Leak, Operational Issues, and the Competitive Coding-Agent Market</strong></p><ul><li><p><strong>What the leak exposed</strong>: Multiple posts converged on analysis of Anthropic&#8217;s accidental Claude Code source exposure. The most useful technical synthesis is the long thread from <a href="https://x.com/ZhihuFrontier/status/2039229986339688581">ZhihuFrontier</a>, which emphasizes a minimalist agent core&#8212;a <strong>single </strong><code>while(true)</code><strong> loop</strong>&#8212;with sophistication pushed into context management, tooling, and product instrumentation. The leak reportedly showed a <strong>4-layer context compression stack</strong> (<code>HISTORY_SNIP</code>, <code>Microcompact</code>, <code>CONTEXT_COLLAPSE</code>, <code>Autocompact</code>), <strong>streaming plus parallel tool execution</strong>, silent retries on output-length failures, a <strong>40+ tool modular architecture</strong> without inheritance-heavy abstractions, and strong use of <strong>feature flags</strong> and <strong>production ablations</strong>. A second summary pointed to hidden features including <strong>task budget management, AFK mode, &#8220;Penguin&#8221; fast mode, redirected reasoning</strong>, and other unfinished product hooks (<a href="https://x.com/ZhihuFrontier/status/2039289110075203854">ZhihuFrontier</a>).</p></li><li><p><strong>Operational pain mattered more than the leak for many users</strong>: Alongside leak discussion, many developers complained that Claude was simply slow or unreliable that day (<a href="https://x.com/Teknium/status/2039270117650116934">Teknium</a>, <a href="https://x.com/andersonbcdefg/status/2039238729932701814">andersonbcdefg</a>). Community response also fixated on leaked &#8220;pets&#8221; and UI affordances (<a href="https://x.com/meowbooksj/status/2039256157781410298">meowbooksj</a>), reinforcing that product polish is part of the competitive moat even when orchestration patterns become legible.</p></li><li><p><strong>DMCA blowback</strong>: The second-order story was Anthropic&#8217;s overly broad repo takedown attempts. <a href="https://x.com/theo/status/2039411851919057339">Theo</a> reported a DMCA against a fork that did <strong>not</strong> contain leaked source; he then argued the takedown itself violated DMCA procedure (<a href="https://x.com/theo/status/2039412173689196674">post</a>). A correction later came from <a href="https://x.com/trq212/status/2039415036645679167">trq212</a>, calling it a communication mistake; the repo was restored and Theo acknowledged the retraction and rapid response (<a href="https://x.com/theo/status/2039415081675723135">restored</a>, <a href="https://x.com/theo/status/2039417864957153733">official response</a>).</p></li><li><p><strong>Open-source clones and alternatives are gaining mindshare</strong>: The leak also turbocharged ecosystem competition. <a href="https://x.com/Yuchenj_UW/status/2039415430994100440">Yuchen Jin</a> noted the leaked Claude Code fork hit <strong>110k+ GitHub stars in a day</strong>. At the same time, multiple users said <strong>Nous Hermes Agent</strong> was easier to deploy and operate than OpenClaw or Claude-derived stacks, often citing near-zero setup and better local workflows (<a href="https://x.com/charliehinojosa/status/2039384870091465202">charliehinojosa</a>, <a href="https://x.com/VadimStrizheus/status/2039523211369762875">VadimStrizheus</a>, <a href="https://x.com/NousResearch/status/2039402523711140094">Nous</a>). There&#8217;s also a tooling wave around prompt steering and efficiency, e.g. a <a href="https://x.com/omarsar0/status/2039343351187554490">&#8220;Universal CLAUDE.md&#8221;</a> claiming <strong>63% output-token reduction</strong>, and <a href="https://x.com/googledevs/status/2039359112668950986">Google&#8217;s Agent Skills spec</a> proposing progressive disclosure to cut baseline context by <strong>90%</strong>.</p></li></ul><p><strong>Agent Systems Research: Memory, Self-Organization, Coordination Limits, and Security</strong></p><ul><li><p><strong>Memory is becoming first-class infra</strong>: <a href="https://x.com/omarsar0/status/2039349083039817984">MemFactory</a> proposes a unified inference/training framework for memory-augmented agents with native <strong>GRPO</strong> integration and reported <strong>up to 14.8% relative gains</strong> over baselines. Separately, <a href="https://x.com/baseten/status/2039389931328704905">Baseten</a> described a <strong>7M-parameter perceiver</strong> that compresses <strong>KV cache 8x</strong> while retaining <strong>90%+ factual retention</strong>, pitching it as a path toward models that &#8220;learn from experience.&#8221; <a href="https://x.com/part_harry_/status/2039400872871068041">part_harry_</a> extended the idea further, arguing pretraining itself is data-inefficient because we discard KV cache every step.</p></li><li><p><strong>Do self-organizing agents beat hand-authored roles?</strong> A <a href="https://x.com/dair_ai/status/2039350842382512455">DAIR summary</a> highlighted new work across <strong>25,000 tasks</strong> with up to <strong>256 agents</strong>, claiming self-organized roles outperform predefined planner/coder/reviewer hierarchies, with a <strong>sequential coordination protocol +14% over centralized approaches</strong>, <strong>5,000+ emergent roles</strong>, and open models reaching <strong>95% of closed-model quality</strong> at lower cost. This sits in tension with a separate line of theory: <a href="https://x.com/omarsar0/status/2039361664374739136">omarsar0&#8217;s summary of new MIT work</a> argues delegated multi-agent planning is <strong>decision-theoretically dominated</strong> by a centralized Bayes decision-maker when agents do not gain access to genuinely different information sources. In practice, the synthesis is likely: multi-agent helps when it partitions tools, environments, or retrieval channels&#8212;not just prompts.</p></li><li><p><strong>Agent attack surface is the web</strong>: A widely shared summary of a new DeepMind paper on <a href="https://x.com/omarsar0/status/2039383554510217707">&#8220;AI Agent Traps&#8221;</a> reframes agent security around adversarial content in webpages/documents, not just model jailbreaks. The thread cites hidden prompt injection in HTML/CSS succeeding in <strong>up to 86%</strong> of scenarios and latent memory poisoning reaching <strong>80%+ attack success</strong> with <strong>&lt;0.1% contamination</strong>, which is material for anyone shipping browse/retrieval-heavy agents.</p></li><li><p><strong>Long-horizon evaluation is getting richer</strong>: New benchmarks/tools included <a href="https://x.com/osanseviero/status/2039246602255114650">Kaggle Standardized Agent Exams</a>, <a href="https://x.com/arankomatsuzaki/status/2039541189968626047">YC-Bench</a> for simulating a startup over a one-year horizon, and <a href="https://x.com/DrJimFan/status/2039358115318243352">CaP-Gym / CaP-X</a>, a broad benchmark and toolkit for agentic robotics spanning <strong>187 manipulation tasks</strong>, 12 frontier models, and both training-free and RL-improved policies with <strong>MIT-licensed code</strong> (<a href="https://x.com/DrJimFan/status/2039360925606760690">open-source details</a>).</p></li></ul><p><strong>Training, Retrieval, and Infra: RL Frameworks, Optimizers, Kernels, and Benchmarks</strong></p><ul><li><p><strong>Post-training stack maturation</strong>: Hugging Face&#8217;s <strong>TRL v1.0</strong> was framed by many as a meaningful unification of open post-training&#8212;<strong>SFT, reward modeling, DPO, GRPO</strong>&#8212;into a production-ready package (<a href="https://x.com/RussellQuantum/status/2039270550099443954">commentary</a>). A complementary survey thread from <a href="https://x.com/adithya_s_k/status/2039406523076767821">adithya_s_k</a> compared <strong>16 RL frameworks</strong> across orchestration, rollout buffering, weight sync, staleness handling, partial-rollout behavior, LoRA support, and distributed parallelism, useful for teams choosing between TRL, VeRL, SLIME, and others.</p></li><li><p><strong>Optimization and systems releases</strong>: <a href="https://x.com/Clashluke/status/2039374459375677814">HeavyBall 3.0.0</a> shipped with <strong>FSDP, DDP, end-to-end compilation with 2.5x speedup</strong>, faster Muon/SOAP variants, and new optimizers. <a href="https://x.com/togethercompute/status/2039413297343332635">Together AI</a> promoted a behind-the-scenes kernels writeup; <a href="https://x.com/realDanFu/status/2039414710203015177">Dan Fu</a> followed with a &#8220;what a VP of Kernels does&#8221; thread. On the low-level DSL side, <a href="https://x.com/maharshii/status/2039379662066131296">maharshii</a> argued <strong>CuTeDSL</strong> materially lowers the barrier to custom kernels by allowing inline PTX directly in Python, avoiding opaque layout gymnastics.</p></li><li><p><strong>Retrieval evidence continues to favor late interaction</strong>: Several posts reiterated that <strong>multi-vector / late-interaction retrieval</strong> outperforms single-vector embeddings, even after fine-tuning, with better robustness against catastrophic forgetting (<a href="https://x.com/lateinteraction/status/2039272441654993082">lateinteraction</a>, <a href="https://x.com/lateinteraction/status/2039382401961410803">ladder visualization</a>). There was also continued frustration that &#8220;RAG&#8221; has become an overloaded umbrella term rather than referring to a specific older paper (<a href="https://x.com/lateinteraction/status/2039382845689348271">lateinteraction</a>).</p></li><li><p><strong>Benchmarks and efficiency surfaces</strong>: <a href="https://x.com/arena/status/2039377186432618885">Arena</a> added <strong>Pareto frontier charts</strong> across text, vision, search, document, and code, making price/performance tradeoffs more explicit. On standardized inference, <a href="https://x.com/LambdaAPI/status/2039365318276268173">Lambda</a> and <a href="https://x.com/nvidia/status/2039419585254875191">NVIDIA</a> pointed to <strong>MLPerf Inference v6.0</strong> as the better lens for real AI-factory productivity than peak-chip specs.</p></li></ul><p><strong>Developer Platforms, Rate Limits, and Tooling UX</strong></p><ul><li><p><strong>OpenAI Codex usage reset</strong>: The most practically important platform announcement for working engineers was <a href="https://x.com/thsottiaux/status/2039248564967424483">thsottiaux&#8217;s note</a> that OpenAI reset <strong>Codex usage limits across all plans</strong>, citing elevated rate-limit hits and a concurrent fraud-account purge that recovered compute. This was quickly amplified by users who interpreted rate-limit generosity as a direct competitive axis in the coding-agent market (<a href="https://x.com/reach_vb/status/2039257725402542363">reach_vb</a>, <a href="https://x.com/Yuchenj_UW/status/2039364184459391075">Yuchen Jin</a>). Later, thsottiaux also clarified that Codex&#8217;s core is intended to be open-source because the ecosystem is still young and mutually informative (<a href="https://x.com/thsottiaux/status/2039482054686196116">post</a>).</p></li><li><p><strong>Agent-ready docs and platform surfaces</strong>: <a href="https://x.com/LangChain/status/2039387501140275431">LangChain embedded chat into its docs</a> grounded on full docs, knowledge base, and OSS code. <a href="https://x.com/togethercompute/status/2039392682553094239">Together AI open-sourced 12 agent skills</a> so Claude Code and Codex can call its APIs with the right model IDs and SDK idioms. <a href="https://x.com/OpenAIDevs/status/2039482146369458526">OpenAI Devs</a> also showed tighter Linear integration in the Codex app for keeping tickets synchronized with code work.</p></li><li><p><strong>Infra and storage quality-of-life</strong>: <a href="https://x.com/skypilot_org/status/2039372218031845769">SkyPilot added native VAST Data support</a> for direct high-speed dataset mounts across heterogeneous compute backends, and Hugging Face rolled out <a href="https://x.com/_akhaliq/status/2039404288082894912">persistent Storage Buckets for Spaces</a>. <a href="https://x.com/tinkerapi/status/2039424320393621649">Tinker</a> added longer context windows up to <strong>256k</strong> for select open models, widening its appeal for RL and long-horizon experimentation.</p></li></ul><p><strong>Top tweets (by engagement)</strong></p><ul><li><p><strong>OpenAI Codex limits reset</strong>: <a href="https://x.com/thsottiaux/status/2039248564967424483">thsottiaux reset Codex rate limits across all plans</a>, explicitly tying it to both unexplained user rate-limit spikes and anti-fraud enforcement that freed compute.</p></li><li><p><strong>GLM-5V-Turbo launch</strong>: <a href="https://x.com/Zai_org/status/2039371126984360085">Z.ai&#8217;s announcement</a> was one of the day&#8217;s biggest technical launches: a multimodal coding model aimed at GUI agents, visual coding, and agent workflows.</p></li><li><p><strong>Claude Code leak discourse</strong>: <a href="https://x.com/theo/status/2039412173689196674">Theo&#8217;s DMCA thread</a> and <a href="https://x.com/Yuchenj_UW/status/2039415430994100440">Yuchen Jin&#8217;s note about the leaked project surpassing 110k GitHub stars</a> captured how quickly source exposure translated into open ecosystem momentum.</p></li><li><p><strong>Arcee Trinity-Large-Thinking</strong>: <a href="https://x.com/arcee_ai/status/2039369121591120030">Arcee&#8217;s release</a> and <a href="https://x.com/OpenRouter/status/2039369849441497340">OpenRouter&#8217;s architecture summary</a> drew unusually strong engagement for an open-weight reasoning model, suggesting real appetite for serious US-based open releases.</p></li><li><p><strong>Falcon Perception</strong>: <a href="https://x.com/dahou_yasser/status/2039242378809385331">Falcon Perception&#8217;s launch</a> stood out on the multimodal side for its simple early-fusion architecture and unusually small OCR model size relative to claimed performance.</p></li></ul><div><hr></div><h1><strong>AI Reddit Recap</strong></h1><h2><strong>/r/LocalLlama + /r/localLLM Recap</strong></h2><h3><strong>1. Claude Code Source Leak and Analysis</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1s8xj2e/claude_codes_source_just_leaked_i_extracted_its/">Claude Code&#8217;s source just leaked &#8212; I extracted its multi-agent orchestration system into an open-source framework that works with any LLM</a></strong> (Activity: 1205): <strong>The source code for Claude Code was leaked, revealing over </strong><code>500K</code><strong> lines of TypeScript, including its multi-agent orchestration system. A developer has re-implemented this system as an open-source framework called open-multi-agent, which is model-agnostic and can work with any LLM, such as Claude and OpenAI. The framework includes features like a coordinator pattern for task decomposition, a team system for inter-agent communication, task scheduling with dependency resolution, and a conversation loop for model-tool interactions. It is implemented in TypeScript, spans approximately </strong><code>8000</code><strong> lines, and is available under the MIT license on <a href="https://github.com/JackChen-me/open-multi-agent">GitHub</a>.</strong> Some commenters express skepticism about the legality and ethics of open-sourcing a re-implementation of leaked proprietary code, questioning the developer&#8217;s understanding of the architecture and the choice of licensing. There is also a debate about the practicality of using different models for planning and implementation, with a specific mention of using GPT-4o for coding.</p><ul><li><p>A user highlights the technical aspect of the project, noting that the multi-agent orchestration system extracted from Claude Code&#8217;s source involves a coordinator that breaks down goals into tasks. This suggests a sophisticated architecture designed for task management across multiple agents, which could be beneficial for complex LLM applications.</p></li><li><p>Another comment questions the choice of using GPT-4o for implementation in the orchestration system, implying that by March 2026, GPT-4o might be outdated for coding tasks. This raises a point about the importance of selecting the most current and capable models for specific tasks in AI development.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1s8ijfb/claude_code_source_code_has_been_leaked_via_a_map/">Claude code source code has been leaked via a map file in their npm registry</a></strong> (Activity: 5229): <strong>The image reveals a directory listing of the &#8216;claude-code&#8217; project, which appears to have been unintentionally exposed via a map file in the npm registry. This leak includes TypeScript files and directories such as &#8216;entrypoints,&#8217; &#8216;commands,&#8217; and &#8216;utils,&#8217; providing a detailed view of the project&#8217;s codebase structure. The incident highlights potential security oversights in managing sensitive code repositories, particularly for companies like Anthropic that are involved in AI development.</strong> Commenters humorously speculate on the oversight, suggesting it might be due to an Anthropic employee&#8217;s mistake or a failure of AI oversight mechanisms. There&#8217;s also a satirical suggestion that the code is now &#8216;open source&#8217; due to the leak.</p><ul><li><p>The leak of Claude&#8217;s source code via a map file in their npm registry raises significant security concerns, particularly given the model&#8217;s reputation for identifying vulnerabilities. This incident highlights potential gaps in Anthropic&#8217;s internal security measures, as their AI, known for being &#8216;scary good&#8217; at finding vulnerabilities, failed to detect this issue.</p></li><li><p>The leak has sparked discussions about the potential for community-driven improvements, such as fixing existing bugs like the caching issue. This could lead to a more robust version of Claude, as external developers might contribute patches and enhancements, effectively making it &#8216;open source&#8217; in practice, if not in legal terms.</p></li><li><p>The incident also underscores the challenges of maintaining proprietary code secrecy in public repositories. The humorous suggestion of an &#8216;Undercover Mode&#8217; for Anthropic employees, which would strip AI attribution from commits, reflects the tension between open collaboration and the need to protect intellectual property.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1s8uerc/analyzing_claude_code_source_code_write_wtf_and/">Analyzing Claude Code Source Code. Write &#8220;WTF&#8221; and Anthropic knows.</a></strong> (Activity: 840): <strong>The Reddit post discusses the source code of Claude Code, revealing extensive tracking and classification mechanisms. The system uses simple keyword detection for language classification, tracking words like </strong><code>wtf</code><strong> and </strong><code>frustrating</code><strong> to flag negative sentiment. It also monitors user behavior during permission prompts, logging actions such as opening or closing feedback boxes and typing without submitting. The feedback system is designed to capture negative experiences, prompting users to share session transcripts. Hidden commands like </strong><code>ultrathink</code><strong> and </strong><code>ultraplan</code><strong> alter system behavior, while telemetry logs detailed environment profiles, including session IDs and runtime details. An internal mode (</strong><code>USER_TYPE=ant</code><strong>) collects even more granular data, tying behavior to specific deployment environments. The post suggests this level of instrumentation is more detailed than typical user expectations, though not necessarily malicious. <a href="https://x.com/UsmanReads/status/2039036207431344140?s=20">Source</a>.</strong> Commenters note that such tracking mechanisms are standard in many applications for analytics and feedback, suggesting that negative sentiment triggers help identify issues with updates. Some commands, like <code>/btw</code>, are now public, while others remain as internal features or &#8216;easter eggs.&#8217; The extensive internal artifacts are likened to those found in game apps, possibly due to internal incentives for feature development.</p><ul><li><p>NandaVegg highlights that the use of keyword lists for sentiment analysis in Claude Code is a standard practice in event-triggered analytics. This approach helps identify negative user feedback, which can be crucial for detecting issues in updates that might disrupt user experience or model behavior. The mention of features like &#8216;ultraplan&#8217; and &#8216;ultrathink&#8217; suggests these are experimental or less refined, possibly serving as internal tests or &#8216;easter eggs&#8217; within the system.</p></li><li><p>SRavingmad expresses curiosity about the &#8216;tamagotchi mode&#8217; in Claude Code, implying there are unique or playful features embedded within the system. This suggests that the developers might be experimenting with interactive or gamified elements, which could be part of a broader strategy to engage users or test new functionalities.</p></li><li><p>Exhales_Deeply criticizes the reliance on AI-generated content, suggesting that user-generated posts would be more engaging. This comment indirectly points to a broader discussion about the quality and authenticity of AI-generated content versus human-created content, which is a significant topic in AI development and user interaction.</p></li></ul></li></ul><h3><strong>2. 1-bit and TurboQuant Model Innovations</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1s9zumi/the_bonsai_1bit_models_are_very_good/">The Bonsai 1-bit models are very good</a></strong> (Activity: 657): <strong>PrismML&#8217;s Bonsai 1-bit models offer a significant reduction in model size and memory usage, being </strong><code>14x smaller</code><strong> than traditional models, which is transformative for local model deployment. The Bonsai 8B model was tested on an M4 Max 48GB MacBook Pro, demonstrating practical applications like chat and document summarization with lower memory pressure compared to models like Qwen3 VL 8B Instruct Q4_K_M. However, it requires a specific <a href="https://github.com/PrismML-Eng/llama.cpp">fork of llama.cpp</a> to support 1-bit operations, as the main llama.cpp repository lacks this capability. The model&#8217;s performance is notably superior to previous MSFT BitNet models, which were largely research-focused and not practical for real-world use.</strong> A benchmark comparison between Bonsai and Qwen3.5 models suggests Bonsai&#8217;s higher quality for RAM usage, though it struggled with code generation. There is interest in larger Bonsai models, such as a 200B version, and a desire for quantized versions of Qwen 3.5 models.</p><ul><li><p>itsArmanJr provides a detailed benchmark comparison between Bonsai and Qwen3.5 models, including specific configurations like <strong>35B-A3B</strong>, <strong>2B</strong>, and <strong>0.8B</strong>. The benchmark results are available on <a href="https://github.com/ArmanJR/PrismML-Bonsai-vs-Qwen3.5-Benchmark">GitHub</a>, offering insights into performance metrics across different model sizes.</p></li><li><p>-dysangel- highlights the efficiency of Bonsai models in terms of RAM usage, noting that while the model struggled to produce fully functional code, it was impressive given its small size of only 1GB. The comment suggests exploring quantized versions of Qwen 3.5 models, such as 9B or 27B, for potentially better performance.</p></li><li><p>Pitiful-Impression70 raises concerns about the performance of 1-bit quantized models like Bonsai on longer contexts, noting that coherence often degrades past 4k tokens. This comment questions whether the Bonsai model maintains quality in extended conversations compared to shorter prompts.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1s9ig5r/turboquant_isnt_just_for_kv_qwen3527b_at_nearq4_0/">TurboQuant isn&#8217;t just for KV: Qwen3.5-27B at near-Q4_0 quality, about 10% smaller, and finally fitting on my 16GB 5060 Ti</a></strong> (Activity: 899): <strong>The image illustrates the TurboQuant TQ3_1S model&#8217;s ability to maintain near-Q4_0 quality for the Qwen3.5-27B model while being compact enough to fit on a 16GB RTX 5060 Ti. The TQ3_1S model is about 10% smaller than Q4_0, with a size of </strong><code>12.9 GB</code><strong> compared to </strong><code>14.4 GB</code><strong> for Q4_0, and shows a minimal performance gap in perplexity (PPL), with TQ3_1S having a PPL of </strong><code>7.2570</code><strong> versus Q4_0&#8217;s </strong><code>7.2431</code><strong>. This demonstrates a practical advantage for users with limited GPU memory, allowing the model to fit fully on the specified GPU setup. The post also highlights the use of advanced quantization techniques like Walsh-Hadamard rotation and 8-centroid quantization to achieve these results.</strong> Some commenters criticize the use of perplexity as a metric for quantization loss, suggesting KLD or PPL ratio as more accurate alternatives. Others praise the adaptation of cutting-edge research to solve a practical problem, acknowledging the achievement despite the criticisms.</p><ul><li><p>Velocita84 criticizes the use of Q4_0 quantization, stating it&#8217;s outdated and surpassed by more advanced Q4 techniques. They argue that using perplexity as a metric for quantization loss is incorrect, suggesting KLD or PPL ratio against a full bf16 model as more accurate alternatives.</p></li><li><p>grumd suggests comparing the model to unsloth Q3_K_S quant of 27B using real benchmarks, implying that practical performance comparisons are necessary to validate claims about model efficiency and quality.</p></li><li><p>XccesSv2 expresses skepticism about TurboQuant&#8217;s claims of achieving BF16 quality with 4 or 5 bits, noting that real-world tests often don&#8217;t reflect the purported improvements, indicating a gap between theoretical claims and practical outcomes.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1s90wo4/prismml_announcing_1bit_bonsai_the_first/">PrismML &#8212; Announcing 1-bit Bonsai: The First Commercially Viable 1-bit LLMs</a></strong> (Activity: 596): <strong>PrismML has announced the release of the 1-bit Bonsai models, including the 1-bit Bonsai 8B, which is a groundbreaking development in AI model efficiency. These models are fully quantized to 1-bit precision across all components, including embeddings, attention layers, MLP layers, and the LM head, without any higher-precision components. The 1-bit Bonsai 8B model, with </strong><code>8.2 billion parameters</code><strong>, fits into </strong><code>1.15 GB</code><strong> of memory and is </strong><code>14x smaller</code><strong>, </strong><code>8x faster</code><strong>, and </strong><code>5x more energy efficient</code><strong> than its full-precision counterparts, making it suitable for edge hardware. The models are open-sourced under the Apache 2.0 license, and the implementation requires a fork of Llama.cpp for inference. More details can be found in their <a href="https://github.com/PrismML-Eng/Bonsai-demo/blob/main/1-bit-bonsai-8b-whitepaper.pdf">whitepaper</a>.</strong> Some commenters express skepticism about the practicality of 1-bit models, while others are intrigued by the potential for on-device AI applications. The debate centers around the trade-offs between model precision and performance efficiency.</p><ul><li><p>PrismML has announced the 1-bit Bonsai 8B model, which is a 1-bit weight model that fits into 1.15 GB of memory. It claims to deliver over 10x the intelligence density of full-precision counterparts, being 14x smaller, 8x faster, and 5x more energy efficient on edge hardware. The model is open-sourced under the Apache 2.0 license, and the company emphasizes the potential for on-device AI applications due to its efficiency.</p></li><li><p>The 1-bit Bonsai 8B model is quantized end-to-end using a proprietary method, requiring a fork of Llama.cpp for inference. This model design applies 1-bit quantization across all network components, including embeddings, attention layers, MLP layers, and the LM head, making it a true 1-bit model across its 8.2 billion parameters. This approach highlights a significant shift towards more efficient AI models that can operate effectively on edge devices.</p></li><li><p>The announcement suggests a paradigm shift in AI model design, focusing on intelligence density rather than parameter count. By achieving significant reductions in model size and energy consumption, PrismML&#8217;s 1-bit models could enable new applications in real-time robotics and offline intelligence, potentially transforming the AI landscape by making advanced models feasible for local execution on edge devices.</p></li></ul></li></ul><h3><strong>3. Local AI Hardware and Software Experiments</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/LocalLLM/comments/1s9jt6v/local_llm_claude_code_replacement_128gb_macbook/">Local LLM Claude Code replacement, 128GB MacBook Pro?</a></strong> (Activity: 140): <strong>The user is considering upgrading to a 128GB MacBook Pro to run local LLMs as a replacement for Claude Code due to potential price increases in API usage. They are currently using a 2019 Intel-based MacBook Pro and are experiencing performance issues with multiple Docker containers. The user is exploring whether local LLMs can match the capabilities of Claude Code for software development. Claude Code is noted for its 1 million context capability, but open-source models are improving. A user reported running </strong><code>qwen3.5 122b ud q4 xl</code><strong> with a </strong><code>256k context</code><strong> on a 128GB RAM system, finding it competent for lighter tasks, though not as strong as Claude for heavy coding. Another user suggests trying open-source models via DeepInfra before purchasing, and mentions using the Bodega inference engine as a replacement for commercial subscriptions.</strong> There is a debate on whether local LLMs can fully replace Claude Code, with some users finding open-source models like <code>qwen 122</code> competent for lighter tasks but not yet matching Claude for intensive coding. The shared memory model of Mac is seen as advantageous for running local LLMs.</p><ul><li><p>EmbarrassedAsk2887 discusses replacing Claude Code and Codex subscriptions with the Bodega inference engine on a 128GB M4 Max MacBook Pro. They provide a detailed write-up and benchmarks, suggesting that Bodega can effectively handle tasks typically managed by commercial solutions. <a href="https://www.reddit.com/r/MacStudio/s/zsqM1EOLYg">Read more here</a>.</p></li><li><p>Mediocre_Paramedic22 shares their experience running the Qwen 3.5 122B UD Q4 XL model with a 256k context on a 128GB RAM setup using Fedora. They note that while Claude is superior for intensive coding tasks, Qwen performs well for lighter workloads and basic agent tasks, utilizing about 29GB of free RAM.</p></li><li><p>Aisher mentions using a 128GB M5 Max for local LLM development, noting the noise level as a downside. They suggest using multiple desktop Macs for full-time development, connected via ZeroTier for remote access, as a cost-effective alternative to expensive cloud-based solutions.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/LocalLLM/comments/1s8gzyt/worth_building_a_7k_local_ai_rig_just_to/">Worth building a $7k local AI rig just to experiment? Afraid I&#8217;ll lose interest.</a></strong> (Activity: 131): <strong>The user is contemplating building a $7k local AI rig to experiment with AI technologies, particularly in photo and video generation, model integration, and AI assistant development. They currently use a MacBook with an M3 Pro chip and 36GB RAM but are concerned it may not suffice for more complex tasks. The proposed rig includes a Corsair Vengeance i5200 with an Intel Core Ultra 9 285K, GeForce RTX 5090, and 64GB DDR5 RAM, with plans to add an additional 128GB RAM. The user is hesitant due to the lack of a concrete use case and the potential for the rig to become an &#8216;expensive toy&#8217;.</strong> Commenters suggest alternatives such as renting a machine or using existing hardware with tools like LM Studio to test models like Qwen3.5, 9b, and 27b Q4. Another commenter shares a similar dilemma and opts to continue using a current setup with an RTX 4070Ti and 32GB RAM, highlighting the importance of having a clear use case before investing heavily.</p><ul><li><p><strong>TassioNoronha_</strong> suggests starting with cloud-based solutions like Open Router or renting a machine for a week to gauge interest before committing to a $7k investment. This approach allows for experimentation without the upfront cost, providing a practical way to assess long-term interest and needs.</p></li><li><p><strong>Xmede81</strong> shares their experience of sticking with a current setup featuring an RTX 4070Ti and 32GB RAM, which is sufficient for general use and experimentation. They highlight the importance of evaluating actual use cases and the impact of current memory prices on decision-making.</p></li><li><p><strong>Dry-Influence9</strong> advises against building powerful local setups due to current high prices, suggesting that waiting could yield better value. They recommend renting GPUs or using existing computers to experiment, as this can provide similar capabilities without the significant financial commitment.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/LocalLLM/comments/1s98766/we_built_a_local_inference_engine_that_skips_rocm/">We built a local inference engine that skips ROCm entirely and just got a 4x speedup on a consumer AMD GPU</a></strong> (Activity: 124): <strong>ZINC is a new inference engine designed to bypass the complexities of ROCm by directly interfacing with AMD GPUs through Vulkan, achieving a </strong><code>4x speedup</code><strong> on an AMD Radeon AI PRO R9700. The engine supports models like Qwen3.5-35B-A3B and Qwen3.5-2B, with current performance at </strong><code>33.58 tok/s</code><strong>, compared to </strong><code>107 tok/s</code><strong> for llama.cpp on the same hardware. ZINC&#8217;s architecture allows it to run on hardware not officially supported by ROCm, and it includes an OpenAI-compatible API server for parallel request batching. The project is open-source and available on <a href="https://github.com/zolotukhin/zinc">GitHub</a>.</strong> Some commenters question the significance of the speedup given that ZINC&#8217;s performance is still less than a third of llama.cpp&#8217;s speed. Others express skepticism about achieving such improvements when larger companies have struggled in this area.</p><ul><li><p>Big-Masterpiece-9581 questions the significance of the 4x speedup, pointing out that despite the improvement, the performance is still less than a third of <code>llama.cpp</code>&#8216;s speed. This suggests that while the optimization is notable, it may not yet be competitive with existing solutions in terms of raw throughput.</p></li><li><p>fallingdowndizzyvr highlights a performance issue, noting that achieving only <code>7 tok/s</code> on an AMD Radeon AI PRO R9700 with the Qwen3.5-35B-A3B-UD Q4_K_XL model indicates a potential inefficiency in the initial implementation. This suggests that the baseline performance was suboptimal, which could have skewed the perceived improvement.</p></li><li><p>hipcatinca provides a benchmark comparison using an RX 570 with <code>llama.cpp</code> via Vulkan, achieving approximately <code>31 tok/s</code> with the llama3.1:8b model. This serves as a reference point, illustrating that other configurations and models can achieve significantly higher throughput on different hardware setups.</p></li></ul></li></ul><h2><strong>Less Technical AI Subreddit Recap</strong></h2><blockquote><p>/r/Singularity, /r/Oobabooga, /r/MachineLearning, /r/OpenAI, /r/ClaudeAI, /r/StableDiffusion, /r/ChatGPT, /r/ChatGPTCoding, /r/aivideo, /r/aivideo</p></blockquote><h3><strong>1. Claude Code Source Leak and Reactions</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/singularity/comments/1s8izpi/claude_code_source_code_has_been_leaked_via_a_map/">Claude code source code has been leaked via a map file in their npm registry</a></strong> (Activity: 1598): <strong>On March 31, 2026, the full source code of Anthropic&#8217;s Claude Code CLI was leaked through a </strong><code>.map</code><strong> file in their npm registry, as reported on <a href="https://github.com/instructkr/claude-code">GitHub</a>. The codebase, consisting of approximately </strong><code>512k lines of TypeScript</code><strong>, is built using React + Ink for terminal UI and runs on the Bun runtime. This leak potentially exposes major gated features that are not yet public.</strong> The comments reflect a misunderstanding among some users about the implications of the leak, particularly the difference between <strong>Large Language Models (LLMs)</strong> and agents, highlighting a knowledge gap in the community.</p><ul><li><p>The leak of Claude&#8217;s source code via a map file in their npm registry has sparked discussions about the potential implications for developers and researchers. One key point is the distinction between Large Language Models (LLMs) and agents, as highlighted by Nedshent. This leak may expose a knowledge gap where people might not fully understand how LLMs function compared to agents, which are typically more task-specific and interactive.</p></li><li><p>The technical details of the leak reveal that the codebase consists of approximately <code>512k lines of TypeScript</code>, built with React and Ink for terminal UI, and runs on the Bun runtime. This setup suggests a modern and scalable architecture, potentially offering insights into how Claude&#8217;s infrastructure is designed to handle complex tasks and interactions.</p></li><li><p>There is speculation about the reasons behind the leaks, with some users humorously suggesting that Anthropic might be using Claude itself for development and content creation tasks. This raises questions about the security and operational practices within Anthropic, especially if such reliance on AI could inadvertently lead to more leaks or security vulnerabilities.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/ClaudeAI/comments/1s9dvi8/anthropic_staff_reacts_to_claude_code_leak/">Anthropic staff reacts to Claude code leak &#128064;</a></strong> (Activity: 859): <strong>The image is a meme depicting a humorous Twitter exchange that indirectly references a code leak from Anthropic, a company known for its work in AI. The meme uses a popular internet joke about an &#8216;immortal snail&#8217; to suggest that the leak is an inevitable consequence of being &#8216;caught&#8217; by the snail, implying a sense of inevitability or fate. This reflects a lighthearted community reaction to the leak, rather than a technical discussion or official statement from Anthropic.</strong> Commenters humorously note the dual reactions to the leak: legal teams wanting to &#8216;delete it&#8217; while engineers have already &#8216;starred it,&#8217; indicating a divide between legal caution and technical curiosity. Another comment suggests that with Anthropic&#8217;s rapid development pace, such incidents were expected.</p><ul><li><p>Belium suggests that the leak of Claude&#8217;s code could be beneficial for Anthropic, as it generates hype and allows engineers to identify and fix bugs. The leak also provides engineers with the opportunity to create their own implementations or &#8216;harnesses&#8217; of Claude, potentially increasing its usage and influence in the developer community.</p></li><li><p>IntenselySwedish highlights a perceived irony in Anthropic&#8217;s situation, pointing out that the company, which has been accused of large-scale copyright violations through book piracy, is now facing its own copyright challenges with the leak of Claude&#8217;s code. This comment underscores the complex legal and ethical landscape surrounding AI development and intellectual property.</p></li><li><p>xitizen7 comments on the rapid pace of development and releases from Anthropic, suggesting that such a leak was almost inevitable given the company&#8217;s trajectory. This reflects a broader industry trend where fast-paced innovation can sometimes lead to security oversights or unintended disclosures.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/ClaudeAI/comments/1s9d9j9/claude_code_source_leak_megathread/">Claude Code Source Leak Megathread</a></strong> (Activity: 653): <strong>The Claude Code CLI source code was leaked, revealing several technical details. Notably, the npm source (</strong><code>@anthropic-ai/claude-code@2.1.74</code><strong>) shows that the DuckDuckGo replacement in the Rust port is incorrect; the real package uses a nested API call to Anthropic&#8217;s server-side search with encrypted content blobs. Additionally, a two-tier web system is implemented, where 85 domains are pre-approved for full content extraction, while others are limited to 125-character quotes. Structured data in </strong><code>&amp;lt;head&amp;gt;</code><strong> is ignored, and tables are not supported in the markdown converter. The system limits to 8 results per query with no pagination. A hidden feature, KAIROS_DREAM, allows Claude to self-review and update its memory after inactivity. The newer search version (</strong><code>web_search_20260209</code><strong>) enables Claude to programmatically filter search results. The source can be verified in the minified </strong><code>cli.js</code><strong> of the npm package. Anthropic has issued a DMCA to remove the leaked code from GitHub.</strong> Some commenters criticize the code quality, suggesting that many critics may lack experience in shipping production apps. Others focus on the technical implications of the leak, such as the incorrect assumptions about DuckDuckGo usage and the limitations of the markdown converter.</p><ul><li><p>Ooty-io highlights several technical aspects of the Claude Code source, noting that the package makes nested API calls to Anthropic&#8217;s server-side search, with results returned as encrypted content blobs, rather than using DuckDuckGo as a standalone replacement. Additionally, the source code reveals a two-tier web system where 85 documentation domains are pre-approved for full content extraction, while other sites are limited to 125-character quotes. The code also shows that structured data in <code>&amp;lt;head&amp;gt;</code> tags is ignored, and tables are not supported in the markdown conversion process.</p></li><li><p>Independent-Corgi-88 discusses the broader implications of the Claude Code leak, suggesting it points towards a future of AI characterized by multi-agent coordination, memory layers, and persistent interaction. This perspective emphasizes the importance of systems with memory and coordination over raw model capability, suggesting that the future of AI involves environments that support sustained and useful work. The comment also references J3nna, an AI being developed to understand its operating environment, highlighting the shift in focus from model capability to the surrounding system.</p></li><li><p>Joozio provides insights from analyzing the Claude Code source, noting that the <code>CLAUDE.md</code> file is reinserted with every turn change, impacting token usage. They also mention that switching models mid-session clears the prompt cache, leading to increased token costs. Additionally, Claude Code ranks poorly on terminal benchmarks, coming in last for Opus among harnesses, with a flat 77% performance compared to Cursor&#8217;s 77% to 93%. Joozio implemented several patterns from the source, such as semantic memory merging and cache monitoring, into their own agent.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/ClaudeAI/comments/1s8lkkm/i_dug_through_claude_codes_leaked_source_and/">i dug through claude code&#8217;s leaked source and anthropic&#8217;s codebase is absolutely unhinged</a></strong> (Activity: 6259): <strong>The leaked source code of Anthropic&#8217;s Claude reveals a whimsical feature: a terminal-based pet system called </strong><code>/buddy</code><strong>, which includes 18 species with a gacha rarity system and interactive ASCII companions. The codebase also shows unconventional practices, such as hex encoding species names to bypass internal scanners, and a voice mode using Deepgram Nova 3 for speech-to-text. The project is codenamed &#8216;tengu&#8217;, with telemetry events and feature flags reflecting this. The codebase is notably large, with </strong><code>main.tsx</code><strong> at </strong><code>803,924 bytes</code><strong> and several files exceeding </strong><code>4,000 lines</code><strong>. It contains </strong><code>460 eslint-disable</code><strong> comments and numerous deprecated functions still in use, indicating a lack of codebase hygiene. Additionally, there are unreleased features like &#8216;kairos&#8217; and &#8216;ultraplan&#8217;, and several hidden slash commands.</strong> Some commenters argue that the codebase&#8217;s state is typical for large projects and not particularly &#8216;unhinged&#8217;, while others express interest in the <code>/buddy</code> feature, wishing it were available sooner.</p><ul><li><p>A user points out that the presence of deprecated functions in the codebase is likely a strategic decision to signal developers not to use them in new code. This is a common practice in large codebases where gradual migration to new implementations is necessary, especially when multiple developers are involved and there is pressure from sales teams to maintain functionality while transitioning.</p></li><li><p>Another commenter argues that the codebase&#8217;s state is typical for large projects, especially those developed before the advent of AI tools like GPT-3. They suggest that the complexity and seemingly chaotic nature of the code are standard in environments where many developers contribute under tight deadlines and evolving requirements.</p></li><li><p>A technical insight is provided regarding the perception of the codebase as &#8216;unhinged.&#8217; The commenter suggests that such a view might stem from a lack of experience with large-scale software projects, where the code often appears disorganized due to the sheer number of contributors and the necessity to maintain legacy systems while integrating new features.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/ClaudeAI/comments/1s8xfwt/claude_codes_source_code_just_leaked_so_i_had/">Claude Code&#8217;s source code just leaked &#8212; so I had Claude Code analyze its own internals and build an open-source multi-agent framework from it</a></strong> (Activity: 513): <strong>The source code for Claude Code was leaked, revealing over </strong><code>500K</code><strong> lines of TypeScript, including its multi-agent orchestration layer. A developer re-implemented this as an open-source, model-agnostic framework, allowing integration of different LLMs like Claude and GPT in a shared workflow. Key features include multi-agent teams, task pipelines with dependency resolution, inter-agent messaging, and an </strong><code>LLMAdapter</code><strong> interface. The framework is </strong><code>~8000</code><strong> lines of TypeScript and is available on <a href="https://github.com/JackChen-me/open-multi-agent">GitHub</a> under the MIT license.</strong> Some commenters appreciate the framework&#8217;s ability to integrate various LLMs, which can reduce costs. However, others note that the framework&#8217;s core functionality is similar to existing solutions like CrewAI and AutoGen, and that the re-implementation mainly replicates standard agent loop patterns.</p><ul><li><p>Macaulay_Codin critiques the framework, noting that it follows a standard agent loop pattern: calling an LLM, executing tool calls, and iterating over results. The multi-agent aspect is essentially a task queue coordinator, which is not novel. The framework includes five built-in tools, rewritten from Claude Code&#8217;s tools, and is implemented in 8k lines of TypeScript, suggesting it&#8217;s a manageable project rather than a massive reverse engineering effort. Alternatives like CrewAI, AutoGen, and the Claude Agent SDK offer similar functionalities.</p></li><li><p>JuryNightFury highlights the framework&#8217;s capability to integrate with other model families using an OpenRouter API key, demonstrating its model-agnostic nature. This feature allows it to fetch reviews from various models, showcasing its flexibility in utilizing different AI models beyond its original design.</p></li><li><p>NoInside3418 appreciates the potential cost savings and efficiency gains from using the framework to enable communication between subagents from different models like Gemini, Codex, and Claude. This interoperability could streamline processes by leveraging the strengths of each model, such as Gemini&#8217;s large context and low cost, Haiku&#8217;s implementation capabilities, and GPT&#8217;s planning features.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/PromptEngineering/comments/1s9irpo/anthropics_leaked_cli_source_code_reveals_a/">Anthropic&#8217;s leaked CLI source code reveals a hidden &#8220;Tamagotchi&#8221; pet and autonomous multi-agent teams. The bar for developer tools is getting wild.</a></strong> (Activity: 161): <strong>Anthropic accidentally exposed the source code of their CLI tool, revealing innovative features like a Tamagotchi-style virtual pet called &#8220;BUDDY&#8221; that gamifies the terminal experience by leveling up based on coding behavior. Additionally, the code includes features like &#8220;ULTRAPLAN,&#8221; which allows the AI to autonomously plan for 30 minutes, and &#8220;BRIDGE MODE,&#8221; where multiple AI instances collaborate as a team. Another feature, &#8220;KAIROS,&#8221; autonomously manages failing tests and dependencies. These features suggest a shift towards more autonomous and interactive developer tools. For a detailed breakdown, see the <a href="https://mindwiredai.com/2026/04/01/anthropic-claude-code-source-leak-hidden-features/">full analysis</a>.</strong> Commenters are skeptical about the feasibility of autonomous multi-agent teams, suggesting the pet feature is more believable due to its potential for user engagement. There is also curiosity about whether these features represent real product directions or are merely experimental ideas.</p><ul><li><p>Senior_Hamster_58 raises skepticism about the claim of autonomous multi-agent teams being proven by a leaked repository, suggesting that such features might be more speculative or experimental rather than indicative of a real product direction. They question whether these features are part of a serious development effort or merely internal experiments that may not reach production, highlighting a common issue in software development where many ideas do not survive the transition from concept to release engineering.</p></li><li><p>OutrageousIndustry28 claims that the feature is already live and can be activated using a specific command (<code>/buddy</code>). This suggests that at least some components of the leaked features might be functional or accessible, indicating a level of readiness beyond mere speculation or internal testing. However, without further verification, this claim remains anecdotal.</p></li><li><p>rainmaker66 and prussell774 both suggest that the features, including the &#8220;Tamagotchi&#8221; pet and autonomous multi-agent teams, are part of an April Fool&#8217;s joke by Anthropic. This implies that the leaked code might not represent serious development efforts but rather a playful or humorous initiative, which is a common practice in tech companies around April 1st.</p></li></ul></li></ul><h3><strong>3. OpenAI and Anthropic Funding and Developments</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/singularity/comments/1s90e4e/openai_raises_122_billion_to_accelerate_the_next/">OpenAI raises $122 billion to accelerate the next phase of AI</a></strong> (Activity: 794): <strong>OpenAI has raised </strong><code>$122 billion</code><strong>, reaching a post-money valuation of </strong><code>$852 billion</code><strong>, to bolster its position as a core AI infrastructure provider. The company reports </strong><code>900 million</code><strong> weekly active users for ChatGPT and </strong><code>$2 billion</code><strong> in monthly revenue. Strategic partnerships with Amazon, NVIDIA, and Microsoft are pivotal in advancing their AI capabilities, focusing on enhanced compute infrastructure and a unified AI superapp for both consumer and enterprise applications. More details can be found in the <a href="https://openai.com/index/accelerating-the-next-phase-ai/">original article</a>.</strong> Commenters are questioning the allocation of such a large funding amount, with some expressing skepticism about the necessity of this capital given recent fundraising efforts.</p></li></ul><h1><strong>AI Discords</strong></h1><p>Unfortunately, Discord shut down our access today. We will not bring it back in this form but we will be shipping the new AINews soon. Thanks for reading to here, it was a good run.</p>]]></content:encoded></item><item><title><![CDATA[[AINews] The Claude Code Source Leak]]></title><description><![CDATA[The accidental "open sourcing" of Claude Code brings a ton of insights.]]></description><link>https://www.latent.space/p/ainews-the-claude-code-source-leak</link><guid isPermaLink="false">https://www.latent.space/p/ainews-the-claude-code-source-leak</guid><pubDate>Wed, 01 Apr 2026 06:24:21 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!_MBb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff17faae4-fe57-460c-9336-d5fe8fcf134e_2420x1384.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>OpenAI&#8217;s <a href="https://www.latent.space/p/ainews-openai-closes-110b-raise-from">Largest Fundraise in Human History</a> closed today, <a href="https://openai.com/index/accelerating-the-next-phase-ai/">growing by a few billion</a>, but disclosing some cool numbers like $24B ARR (growing 4x faster than Google/Meta in their heyday), and also had a &#8220;soft IPO&#8221; with $3B of investment from rich people and inclusion in <a href="https://www.bloomberg.com/news/articles/2026-03-31/ark-etfs-to-add-openai-stake-as-retail-investors-chase-tech-boom">ETFs from ARK Invest</a>, although ChatGPT WAU growth seem to has stalled out - they STILL have not crossed the 1B WAU mark targeted for end 2025. Codex also worryingly has <a href="https://x.com/swyx/status/2027613757787279730?s=20">not announced a new milestone for March</a>.</p><p>By far the biggest news of the day is <a href="https://news.ycombinator.com/item?id=47584540">the Claude Code source leak</a>, in itself not particularly damaging for Anthropic, but surely embarrassing and also somewhat educational - Christmas come early for Coding Agent nerds. You can read the many many tweets and posts covering the 500k LOC codebase, and you can <a href="https://deepwiki.com/Sachin1801/claude-code">browse multiple hosted forks of the source</a>. </p><p>There are fun curiosities, such as the <a href="https://x.com/wesbos/status/2038958747200962952?s=20">full verb list</a>, or <a href="https://x.com/scaling01/status/2038948989257630166?s=20">Capybara/Mythos v8</a>, or <a href="https://x.com/trq212/status/2039201498996035924?s=46">the /buddy April Fools feature</a>, or Boris&#8217; <a href="https://x.com/Rahatcodes/status/2038995503141065145?s=20">confirmed WTF counter</a>, or creating the cursed &#8220;<a href="https://x.com/LexnLin/status/2038991257582604618?s=20">Claude Codex</a>&#8221;, or the <a href="https://x.com/amaan8429/status/2038924254570545298?s=20">dozen other unreleased features</a>, but most serious players are commenting on a few things. Sebastian Raschka probably has <a href="https://x.com/rasbt/status/2038980345316413862?s=20">a good list of the top 6</a>:</p><ol><li><p>Putting Repo state in Context (eg recent commits, git branch info)</p></li><li><p>Aggressive cache reuse</p></li><li><p>Custom Grep/Glob/LSP (standard in industry)</p><ol><li><p>Claude code has <a href="https://x.com/jpschroeder/status/2038960058499768427">less than 20 tools</a> default on (up to <a href="https://x.com/mal_shaik/status/2038918662489510273">60+ total</a>): AgentTool, BashTool, FileReadTool, FileEditTool, FileWriteTool, NotebookEditTool, WebFetchTool, WebSearchTool, TodoWriteTool, TaskStopTool, TaskOutputTool, AskUserQuestionTool, SkillTool, EnterPlanModeTool, ExitPlanModeV2Tool, SendMessageTool, BriefTool, ListMcpResourcesTool, and ReadMcpResourceTool.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_MBb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff17faae4-fe57-460c-9336-d5fe8fcf134e_2420x1384.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_MBb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff17faae4-fe57-460c-9336-d5fe8fcf134e_2420x1384.png 424w, https://substackcdn.com/image/fetch/$s_!_MBb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff17faae4-fe57-460c-9336-d5fe8fcf134e_2420x1384.png 848w, https://substackcdn.com/image/fetch/$s_!_MBb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff17faae4-fe57-460c-9336-d5fe8fcf134e_2420x1384.png 1272w, https://substackcdn.com/image/fetch/$s_!_MBb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff17faae4-fe57-460c-9336-d5fe8fcf134e_2420x1384.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_MBb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff17faae4-fe57-460c-9336-d5fe8fcf134e_2420x1384.png" width="1456" height="833" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f17faae4-fe57-460c-9336-d5fe8fcf134e_2420x1384.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:833,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3133239,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.latent.space/i/192814599?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff17faae4-fe57-460c-9336-d5fe8fcf134e_2420x1384.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_MBb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff17faae4-fe57-460c-9336-d5fe8fcf134e_2420x1384.png 424w, https://substackcdn.com/image/fetch/$s_!_MBb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff17faae4-fe57-460c-9336-d5fe8fcf134e_2420x1384.png 848w, https://substackcdn.com/image/fetch/$s_!_MBb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff17faae4-fe57-460c-9336-d5fe8fcf134e_2420x1384.png 1272w, https://substackcdn.com/image/fetch/$s_!_MBb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff17faae4-fe57-460c-9336-d5fe8fcf134e_2420x1384.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><a href="https://ccunpacked.dev/">more in ccunpacked</a></figcaption></figure></div></li></ol></li><li><p>File read deduplication/tool result sampling</p></li><li><p>Structured Session Memory (more on this)</p></li><li><p>Subagents</p></li></ol><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ZN5N!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5c7ee5f-e03e-434b-b52a-3c0a0470e111_1444x577.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ZN5N!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5c7ee5f-e03e-434b-b52a-3c0a0470e111_1444x577.png 424w, https://substackcdn.com/image/fetch/$s_!ZN5N!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5c7ee5f-e03e-434b-b52a-3c0a0470e111_1444x577.png 848w, https://substackcdn.com/image/fetch/$s_!ZN5N!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5c7ee5f-e03e-434b-b52a-3c0a0470e111_1444x577.png 1272w, https://substackcdn.com/image/fetch/$s_!ZN5N!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5c7ee5f-e03e-434b-b52a-3c0a0470e111_1444x577.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ZN5N!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5c7ee5f-e03e-434b-b52a-3c0a0470e111_1444x577.png" width="1444" height="577" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d5c7ee5f-e03e-434b-b52a-3c0a0470e111_1444x577.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:577,&quot;width&quot;:1444,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ZN5N!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5c7ee5f-e03e-434b-b52a-3c0a0470e111_1444x577.png 424w, https://substackcdn.com/image/fetch/$s_!ZN5N!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5c7ee5f-e03e-434b-b52a-3c0a0470e111_1444x577.png 848w, https://substackcdn.com/image/fetch/$s_!ZN5N!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5c7ee5f-e03e-434b-b52a-3c0a0470e111_1444x577.png 1272w, https://substackcdn.com/image/fetch/$s_!ZN5N!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5c7ee5f-e03e-434b-b52a-3c0a0470e111_1444x577.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>Memory</h2><p>Claude Code&#8217;s Memory has a <a href="https://x.com/himanshustwts/status/2038924027411222533?s=20">3 layer design</a> with 1) a MEMORY.md that is just an index to other knowledge, 2) topic files loaded on demand, and 3) full session transcripts that can be searched. There&#8217;s also an &#8220;autoDream&#8221; mode for &#8220;sleep&#8221; - merging memories, deduping, pruning, removing contradictions.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!tg7G!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F658d124b-b5d7-4075-af07-2bb850a42d32_1754x1052.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!tg7G!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F658d124b-b5d7-4075-af07-2bb850a42d32_1754x1052.png 424w, https://substackcdn.com/image/fetch/$s_!tg7G!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F658d124b-b5d7-4075-af07-2bb850a42d32_1754x1052.png 848w, https://substackcdn.com/image/fetch/$s_!tg7G!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F658d124b-b5d7-4075-af07-2bb850a42d32_1754x1052.png 1272w, https://substackcdn.com/image/fetch/$s_!tg7G!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F658d124b-b5d7-4075-af07-2bb850a42d32_1754x1052.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!tg7G!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F658d124b-b5d7-4075-af07-2bb850a42d32_1754x1052.png" width="1456" height="873" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/658d124b-b5d7-4075-af07-2bb850a42d32_1754x1052.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:873,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!tg7G!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F658d124b-b5d7-4075-af07-2bb850a42d32_1754x1052.png 424w, https://substackcdn.com/image/fetch/$s_!tg7G!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F658d124b-b5d7-4075-af07-2bb850a42d32_1754x1052.png 848w, https://substackcdn.com/image/fetch/$s_!tg7G!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F658d124b-b5d7-4075-af07-2bb850a42d32_1754x1052.png 1272w, https://substackcdn.com/image/fetch/$s_!tg7G!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F658d124b-b5d7-4075-af07-2bb850a42d32_1754x1052.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>A <a href="https://x.com/ellen_in_sf/status/2039098050837463504">deeper analysis from mem0</a> finds 8 phases:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!AToy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4d57d8b-f3b3-4005-90bc-129661d8c15b_1899x2048.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!AToy!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4d57d8b-f3b3-4005-90bc-129661d8c15b_1899x2048.png 424w, https://substackcdn.com/image/fetch/$s_!AToy!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4d57d8b-f3b3-4005-90bc-129661d8c15b_1899x2048.png 848w, https://substackcdn.com/image/fetch/$s_!AToy!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4d57d8b-f3b3-4005-90bc-129661d8c15b_1899x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!AToy!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4d57d8b-f3b3-4005-90bc-129661d8c15b_1899x2048.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!AToy!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4d57d8b-f3b3-4005-90bc-129661d8c15b_1899x2048.png" width="1200" height="1293.956043956044" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a4d57d8b-f3b3-4005-90bc-129661d8c15b_1899x2048.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:1570,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!AToy!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4d57d8b-f3b3-4005-90bc-129661d8c15b_1899x2048.png 424w, https://substackcdn.com/image/fetch/$s_!AToy!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4d57d8b-f3b3-4005-90bc-129661d8c15b_1899x2048.png 848w, https://substackcdn.com/image/fetch/$s_!AToy!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4d57d8b-f3b3-4005-90bc-129661d8c15b_1899x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!AToy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4d57d8b-f3b3-4005-90bc-129661d8c15b_1899x2048.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">caption...</figcaption></figure></div><p>And there are 5 kinds of Compaction:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!-ryH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0165c08d-6763-490a-9b76-5c9c957f5d06_1182x1612.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!-ryH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0165c08d-6763-490a-9b76-5c9c957f5d06_1182x1612.png 424w, https://substackcdn.com/image/fetch/$s_!-ryH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0165c08d-6763-490a-9b76-5c9c957f5d06_1182x1612.png 848w, https://substackcdn.com/image/fetch/$s_!-ryH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0165c08d-6763-490a-9b76-5c9c957f5d06_1182x1612.png 1272w, https://substackcdn.com/image/fetch/$s_!-ryH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0165c08d-6763-490a-9b76-5c9c957f5d06_1182x1612.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!-ryH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0165c08d-6763-490a-9b76-5c9c957f5d06_1182x1612.png" width="436" height="594.6125211505922" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0165c08d-6763-490a-9b76-5c9c957f5d06_1182x1612.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1612,&quot;width&quot;:1182,&quot;resizeWidth&quot;:436,&quot;bytes&quot;:231722,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.latent.space/i/192814599?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0165c08d-6763-490a-9b76-5c9c957f5d06_1182x1612.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!-ryH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0165c08d-6763-490a-9b76-5c9c957f5d06_1182x1612.png 424w, https://substackcdn.com/image/fetch/$s_!-ryH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0165c08d-6763-490a-9b76-5c9c957f5d06_1182x1612.png 848w, https://substackcdn.com/image/fetch/$s_!-ryH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0165c08d-6763-490a-9b76-5c9c957f5d06_1182x1612.png 1272w, https://substackcdn.com/image/fetch/$s_!-ryH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0165c08d-6763-490a-9b76-5c9c957f5d06_1182x1612.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><h2>Subagents use Prompt Caching</h2><p>A key feature <a href="https://x.com/_rajanagarwal/status/2039009685085303225?s=20">of CC</a>: they use the KV cache to create a fork-join model for their subagents, meaning they contain the full context and don&#8217;t have to repeat work. In other words: <a href="https://x.com/mal_shaik/status/2038918662489510273">Parallelism is basically free</a>.</p><p></p><h2>The 5 level Permission System</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9fhE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d020dee-d813-4868-8df5-29454d48129a_1254x1592.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9fhE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d020dee-d813-4868-8df5-29454d48129a_1254x1592.png 424w, https://substackcdn.com/image/fetch/$s_!9fhE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d020dee-d813-4868-8df5-29454d48129a_1254x1592.png 848w, https://substackcdn.com/image/fetch/$s_!9fhE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d020dee-d813-4868-8df5-29454d48129a_1254x1592.png 1272w, https://substackcdn.com/image/fetch/$s_!9fhE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d020dee-d813-4868-8df5-29454d48129a_1254x1592.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!9fhE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d020dee-d813-4868-8df5-29454d48129a_1254x1592.png" width="396" height="502.7368421052632" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9d020dee-d813-4868-8df5-29454d48129a_1254x1592.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1592,&quot;width&quot;:1254,&quot;resizeWidth&quot;:396,&quot;bytes&quot;:168884,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.latent.space/i/192814599?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d020dee-d813-4868-8df5-29454d48129a_1254x1592.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!9fhE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d020dee-d813-4868-8df5-29454d48129a_1254x1592.png 424w, https://substackcdn.com/image/fetch/$s_!9fhE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d020dee-d813-4868-8df5-29454d48129a_1254x1592.png 848w, https://substackcdn.com/image/fetch/$s_!9fhE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d020dee-d813-4868-8df5-29454d48129a_1254x1592.png 1272w, https://substackcdn.com/image/fetch/$s_!9fhE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d020dee-d813-4868-8df5-29454d48129a_1254x1592.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p></p><h2>The 2 Types of Plan mode</h2><p><a href="https://x.com/DharmiKumbhani/status/2038917827462308308?s=20">here</a>:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!4Ytb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59924d12-f74b-4ba8-9272-5419fbad1ecd_1451x1609.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!4Ytb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59924d12-f74b-4ba8-9272-5419fbad1ecd_1451x1609.jpeg 424w, https://substackcdn.com/image/fetch/$s_!4Ytb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59924d12-f74b-4ba8-9272-5419fbad1ecd_1451x1609.jpeg 848w, https://substackcdn.com/image/fetch/$s_!4Ytb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59924d12-f74b-4ba8-9272-5419fbad1ecd_1451x1609.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!4Ytb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59924d12-f74b-4ba8-9272-5419fbad1ecd_1451x1609.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!4Ytb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59924d12-f74b-4ba8-9272-5419fbad1ecd_1451x1609.jpeg" width="1451" height="1609" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/59924d12-f74b-4ba8-9272-5419fbad1ecd_1451x1609.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1609,&quot;width&quot;:1451,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Image&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Image" title="Image" srcset="https://substackcdn.com/image/fetch/$s_!4Ytb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59924d12-f74b-4ba8-9272-5419fbad1ecd_1451x1609.jpeg 424w, https://substackcdn.com/image/fetch/$s_!4Ytb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59924d12-f74b-4ba8-9272-5419fbad1ecd_1451x1609.jpeg 848w, https://substackcdn.com/image/fetch/$s_!4Ytb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59924d12-f74b-4ba8-9272-5419fbad1ecd_1451x1609.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!4Ytb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59924d12-f74b-4ba8-9272-5419fbad1ecd_1451x1609.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>Resilience/Retry</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!5FIb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F293e920e-2e19-4e16-a04d-c52d699afe6b_1206x1228.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!5FIb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F293e920e-2e19-4e16-a04d-c52d699afe6b_1206x1228.png 424w, https://substackcdn.com/image/fetch/$s_!5FIb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F293e920e-2e19-4e16-a04d-c52d699afe6b_1206x1228.png 848w, https://substackcdn.com/image/fetch/$s_!5FIb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F293e920e-2e19-4e16-a04d-c52d699afe6b_1206x1228.png 1272w, https://substackcdn.com/image/fetch/$s_!5FIb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F293e920e-2e19-4e16-a04d-c52d699afe6b_1206x1228.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!5FIb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F293e920e-2e19-4e16-a04d-c52d699afe6b_1206x1228.png" width="1206" height="1228" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/293e920e-2e19-4e16-a04d-c52d699afe6b_1206x1228.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1228,&quot;width&quot;:1206,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:179739,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.latent.space/i/192814599?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F293e920e-2e19-4e16-a04d-c52d699afe6b_1206x1228.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!5FIb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F293e920e-2e19-4e16-a04d-c52d699afe6b_1206x1228.png 424w, https://substackcdn.com/image/fetch/$s_!5FIb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F293e920e-2e19-4e16-a04d-c52d699afe6b_1206x1228.png 848w, https://substackcdn.com/image/fetch/$s_!5FIb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F293e920e-2e19-4e16-a04d-c52d699afe6b_1206x1228.png 1272w, https://substackcdn.com/image/fetch/$s_!5FIb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F293e920e-2e19-4e16-a04d-c52d699afe6b_1206x1228.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p></p><h2>Other Unreleased/Internal Features</h2><p>Including <a href="https://x.com/iamfakeguru/status/2038965567269249484?s=20">an employee-only gate</a> and an <a href="https://x.com/cheatyyyy/status/2038987747944546781">employee TUI</a>, but also a bunch of <a href="https://x.com/RoundtableSpace/status/2038960753458438156?s=20">other stuff in development</a> including ULTRAPLAN and <a href="https://x.com/itsolelehmann/status/2039018963611627545?s=20">KAIROS</a>:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!cG_C!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3642b10-1f7e-49a0-af0d-986b24180a1c_1600x1084.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!cG_C!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3642b10-1f7e-49a0-af0d-986b24180a1c_1600x1084.png 424w, https://substackcdn.com/image/fetch/$s_!cG_C!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3642b10-1f7e-49a0-af0d-986b24180a1c_1600x1084.png 848w, https://substackcdn.com/image/fetch/$s_!cG_C!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3642b10-1f7e-49a0-af0d-986b24180a1c_1600x1084.png 1272w, https://substackcdn.com/image/fetch/$s_!cG_C!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3642b10-1f7e-49a0-af0d-986b24180a1c_1600x1084.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!cG_C!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3642b10-1f7e-49a0-af0d-986b24180a1c_1600x1084.png" width="1456" height="986" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c3642b10-1f7e-49a0-af0d-986b24180a1c_1600x1084.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:986,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!cG_C!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3642b10-1f7e-49a0-af0d-986b24180a1c_1600x1084.png 424w, https://substackcdn.com/image/fetch/$s_!cG_C!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3642b10-1f7e-49a0-af0d-986b24180a1c_1600x1084.png 848w, https://substackcdn.com/image/fetch/$s_!cG_C!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3642b10-1f7e-49a0-af0d-986b24180a1c_1600x1084.png 1272w, https://substackcdn.com/image/fetch/$s_!cG_C!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3642b10-1f7e-49a0-af0d-986b24180a1c_1600x1084.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">note a few of these <a href="https://x.com/himanshustwts/status/2038941583148810701?s=20">were recently shipped</a></figcaption></figure></div><p>And internal <a href="https://x.com/mattyp/status/2038988217102266669">MAGIC DOCS</a>:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Fk1Q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b39db63-a7b1-48a1-839d-c498202c659e_1773x1822.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Fk1Q!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b39db63-a7b1-48a1-839d-c498202c659e_1773x1822.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Fk1Q!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b39db63-a7b1-48a1-839d-c498202c659e_1773x1822.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Fk1Q!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b39db63-a7b1-48a1-839d-c498202c659e_1773x1822.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Fk1Q!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b39db63-a7b1-48a1-839d-c498202c659e_1773x1822.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Fk1Q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b39db63-a7b1-48a1-839d-c498202c659e_1773x1822.jpeg" width="1456" height="1496" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0b39db63-a7b1-48a1-839d-c498202c659e_1773x1822.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1496,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Image&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Image" title="Image" srcset="https://substackcdn.com/image/fetch/$s_!Fk1Q!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b39db63-a7b1-48a1-839d-c498202c659e_1773x1822.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Fk1Q!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b39db63-a7b1-48a1-839d-c498202c659e_1773x1822.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Fk1Q!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b39db63-a7b1-48a1-839d-c498202c659e_1773x1822.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Fk1Q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b39db63-a7b1-48a1-839d-c498202c659e_1773x1822.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><blockquote><p>AI News for 3/23/2026-3/24/2026. We checked 12 subreddits, <a href="https://twitter.com/i/lists/1585430245762441216">544 Twitters</a> and no further Discords. <a href="https://news.smol.ai/">AINews&#8217; website</a> lets you search all past issues. As a reminder, <a href="https://www.latent.space/p/2026">AINews is now a section of Latent Space</a>. You can <a href="https://support.substack.com/hc/en-us/articles/8914938285204-How-do-I-subscribe-to-or-unsubscribe-from-a-section-on-Substack">opt in/out</a> of email frequencies!</p></blockquote><div><hr></div><h1><strong>AI Twitter Recap</strong></h1><p><strong>Top Story: Claude Code source leak &#8212; architecture discoveries, Anthropic&#8217;s response, and competitor reactions</strong></p><h2><strong>What happened</strong></h2><p>Claude Code had substantial source artifacts exposed via shipped source maps / package contents, which triggered rapid public reverse-engineering, mirroring, and derivative ports. The discussion quickly shifted from &#8220;embarrassing leak&#8221; to &#8220;what does this reveal about state-of-the-art agent harness design?&#8221; Multiple observers highlighted that the leak exposed orchestration logic rather than model weights, including autonomous modes, memory systems, planning/review flows, and model-specific control logic. Public forks proliferated; one post claimed <strong>32.6k stars and 44.3k forks</strong> on a fork before legal fear led to a Python conversion effort using Codex (<a href="https://x.com/Yuchenj_UW/status/2038996920845430815">Yuchenj_UW</a>). Later commentary put the exposed code volume at <strong>500k+ lines</strong> (<a href="https://x.com/Yuchenj_UW/status/2039029676040220682">Yuchenj_UW</a>). Anthropic then moved to contain redistribution via <strong>DMCA takedowns</strong> according to several posters (<a href="https://x.com/dbreunig/status/2039007097376108979">dbreunig</a>, <a href="https://x.com/BlancheMinerva/status/2039114452088295821">BlancheMinerva</a>). Separately, a Claude Code team member announced a product feature during the fallout &#8212; easier local/web GitHub credential setup via <code>/web-setup</code> (<a href="https://x.com/_catwu/status/2039027712288075812">catwu</a>) &#8212; implying normal product operations continued. The leak also created a live security hazard: attackers quickly registered suspicious npm packages such as <code>color-diff-napi</code> and <code>modifiers-napi</code> to target people trying to compile the leaked code (<a href="https://x.com/Butanium_/status/2039079715823128964">Butanium_</a>).</p><h2><strong>Facts vs. opinions</strong></h2><p><strong>What is reasonably factual from the tweets:</strong></p><p></p>
      <p>
          <a href="https://www.latent.space/p/ainews-the-claude-code-source-leak">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[[AINews] The Last 4 Jobs in Tech]]></title><description><![CDATA[a quiet day lets us examine an interesting mental model]]></description><link>https://www.latent.space/p/ainews-the-last-4-jobs-in-tech</link><guid isPermaLink="false">https://www.latent.space/p/ainews-the-last-4-jobs-in-tech</guid><pubDate>Tue, 31 Mar 2026 01:04:54 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!01Ro!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faeae9f33-1a4e-4196-bd29-8864e79205f5_1644x1448.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It&#8217;s well known that org charts are changing with AI - the first trend we called out was in 2023 with <a href="https://www.latent.space/p/ai-engineer">the Rise of the AI Engineer</a> (now <a href="https://x.com/swyx/status/2028944956463956047">an official org at Meta</a>!), and then in 2025 with <a href="https://www.latent.space/p/tiny">Tiny Teams</a> (<a href="https://www.latent.space/p/ainews-dreamer-joins-meta-superintelligence">hired by Meta</a>!), but it seems Yoni Rechtman over <a href="https://99d.substack.com/p/dont-call-it-a-moat">at the 99D Substack</a> has the mental model for the new post-AI roles (at least in white collar tech):</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!01Ro!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faeae9f33-1a4e-4196-bd29-8864e79205f5_1644x1448.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!01Ro!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faeae9f33-1a4e-4196-bd29-8864e79205f5_1644x1448.png 424w, https://substackcdn.com/image/fetch/$s_!01Ro!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faeae9f33-1a4e-4196-bd29-8864e79205f5_1644x1448.png 848w, https://substackcdn.com/image/fetch/$s_!01Ro!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faeae9f33-1a4e-4196-bd29-8864e79205f5_1644x1448.png 1272w, https://substackcdn.com/image/fetch/$s_!01Ro!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faeae9f33-1a4e-4196-bd29-8864e79205f5_1644x1448.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!01Ro!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faeae9f33-1a4e-4196-bd29-8864e79205f5_1644x1448.png" width="1456" height="1282" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/aeae9f33-1a4e-4196-bd29-8864e79205f5_1644x1448.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1282,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:158861,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.latent.space/i/192676552?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faeae9f33-1a4e-4196-bd29-8864e79205f5_1644x1448.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!01Ro!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faeae9f33-1a4e-4196-bd29-8864e79205f5_1644x1448.png 424w, https://substackcdn.com/image/fetch/$s_!01Ro!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faeae9f33-1a4e-4196-bd29-8864e79205f5_1644x1448.png 848w, https://substackcdn.com/image/fetch/$s_!01Ro!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faeae9f33-1a4e-4196-bd29-8864e79205f5_1644x1448.png 1272w, https://substackcdn.com/image/fetch/$s_!01Ro!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faeae9f33-1a4e-4196-bd29-8864e79205f5_1644x1448.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><a href="https://x.com/karrisaarinen/status/2038356036390998229">top level tweet from Karri</a></figcaption></figure></div><p>Karri Saarinen, CEO of Linear, made a <a href="https://x.com/karrisaarinen/status/2038356036390998229">popular analogy</a> to the teamwork roles that emerged in World of Warcraft. This is a good 2D augmentation of <a href="https://x.com/charles_irl/status/2030686327105106353?s=20">an earlier age-based company model</a> (much less realistic, name a tech company that fits the latter format, they exist but are very hard to find):</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!GOKj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F008c43a3-51a4-4663-aef2-0b0b5990d041_1190x576.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!GOKj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F008c43a3-51a4-4663-aef2-0b0b5990d041_1190x576.png 424w, https://substackcdn.com/image/fetch/$s_!GOKj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F008c43a3-51a4-4663-aef2-0b0b5990d041_1190x576.png 848w, https://substackcdn.com/image/fetch/$s_!GOKj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F008c43a3-51a4-4663-aef2-0b0b5990d041_1190x576.png 1272w, https://substackcdn.com/image/fetch/$s_!GOKj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F008c43a3-51a4-4663-aef2-0b0b5990d041_1190x576.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!GOKj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F008c43a3-51a4-4663-aef2-0b0b5990d041_1190x576.png" width="1190" height="576" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/008c43a3-51a4-4663-aef2-0b0b5990d041_1190x576.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:576,&quot;width&quot;:1190,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:117765,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.latent.space/i/192676552?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F008c43a3-51a4-4663-aef2-0b0b5990d041_1190x576.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!GOKj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F008c43a3-51a4-4663-aef2-0b0b5990d041_1190x576.png 424w, https://substackcdn.com/image/fetch/$s_!GOKj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F008c43a3-51a4-4663-aef2-0b0b5990d041_1190x576.png 848w, https://substackcdn.com/image/fetch/$s_!GOKj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F008c43a3-51a4-4663-aef2-0b0b5990d041_1190x576.png 1272w, https://substackcdn.com/image/fetch/$s_!GOKj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F008c43a3-51a4-4663-aef2-0b0b5990d041_1190x576.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p></p><blockquote><p>AI News for 3/28/2026-3/30/2026. We checked 12 subreddits, <a href="https://twitter.com/i/lists/1585430245762441216">544 Twitters</a> and no further Discords. <a href="https://news.smol.ai/">AINews&#8217; website</a> lets you search all past issues. As a reminder, <a href="https://www.latent.space/p/2026">AINews is now a section of Latent Space</a>. You can <a href="https://support.substack.com/hc/en-us/articles/8914938285204-How-do-I-subscribe-to-or-unsubscribe-from-a-section-on-Substack">opt in/out</a> of email frequencies!</p></blockquote><div><hr></div><h1><strong>AI Twitter Recap</strong></h1><p><strong>Claude Code Computer Use, Codex Interop, and the Coding-Agent Harness Race</strong></p><ul><li><p><strong>Claude Code gets computer use</strong>: Anthropic added <strong>computer use inside Claude Code</strong>, letting the agent open apps, click through UIs, and test what it built directly from the CLI in <strong>research preview for Pro/Max</strong> users. The practical significance is closed-loop verification: code &#8594; run &#8594; inspect UI &#8594; fix &#8594; re-test, which several engineers called the missing piece for reliable app iteration, especially compared with open-ended desktop agents (<a href="https://x.com/claudeai/status/2038663014098899416">Claude announcement</a>, <a href="https://x.com/Yuchenj_UW/status/2038671697923223999">@Yuchenj_UW on the &#8220;eyes&#8221; unlock</a>, <a href="https://x.com/omarsar0/status/2038668801256968381">@omarsar0</a>).</p></li><li><p><strong>Cross-agent composition is becoming standard</strong>: OpenAI shipped a <strong>Codex plugin for Claude Code</strong> that can trigger reviews, adversarial reviews, and &#8220;rescue&#8221; flows from inside Anthropic&#8217;s toolchain, using a ChatGPT subscription rather than custom glue code. This is notable less as a plugin novelty and more as a signal that coding stacks are becoming <strong>composable harnesses</strong> rather than monolithic products (<a href="https://x.com/dkundel/status/2038670330257109461">plugin by @dkundel</a>, <a href="https://x.com/reach_vb/status/2038671858862583967">usage thread by @reach_vb</a>, <a href="https://x.com/reach_vb/status/2038702889070211557">open-source note</a>). Separately, OpenAI shared that <strong>late-night Codex tasks run longer</strong>, with jobs started around <strong>11pm being 60% more likely to run 3+ hours</strong>, which fits the emerging pattern of delegating refactors and planning to background agents (<a href="https://x.com/OpenAIDevs/status/2038707501492056401">OpenAI Devs</a>).</p></li><li><p><strong>Harness quality is now visibly a first-order variable</strong>: Theo argued that <strong>Opus scores ~20% higher in Cursor than in Claude Code</strong>, and more broadly that closed-source harnesses make it hard for the community to diagnose or fix regressions (<a href="https://x.com/theo/status/2038690786821505378">performance gap claim</a>, <a href="https://x.com/theo/status/2038740065300676777">closed-source critique</a>). That theme repeated across the feed: model capability deltas are narrowing, while <strong>tooling, prompt/runtime orchestration, and review loops</strong> still create large practical differences.</p></li></ul><p><strong>Hermes Agent&#8217;s Rapid Rise, Multi-Agent Profiles, and the Open Harness Ecosystem</strong></p><ul><li><p><strong>Hermes has become the week&#8217;s breakout open agent stack</strong>: Nous shipped a major <strong>Hermes Agent</strong> update that drove a wave of migrations from OpenClaw/OpenClaw-like setups, with users emphasizing <strong>better compaction, less bloat, stronger adaptability, and faster shipping cadence</strong> (<a href="https://x.com/NousResearch/status/2038688578201346513">Nous release</a>, <a href="https://x.com/Teknium/status/2038694680549077059">Teknium&#8217;s multi-agent profiles</a>, <a href="https://x.com/soundslikecanoe/status/2038611090704113931">community migration examples</a>, <a href="https://x.com/valenxi_r/status/2038692504120504453">another</a>). The new <strong>multi-agent profiles</strong> give each bot its own memory, skills, histories, and gateway connections, moving Hermes from &#8220;personal assistant&#8221; toward a reusable <strong>agent OS</strong> abstraction.</p></li><li><p><strong>An ecosystem is forming around traces, remote control, and self-improvement</strong>: Several projects extend Hermes beyond core inference. <a href="https://x.com/jayfarei/status/2038385591818023278">@jayfarei&#8217;s opentraces.ai</a> provides a CLI/schema/review flow for sanitizing and publishing agent traces to Hugging Face for analytics, evals, SFT, and RL. <a href="https://x.com/kaiostephens/status/2038414350986207421">@kaiostephens uploaded ~4,000 GLM-5 Hermes traces</a> to HF. <a href="https://x.com/IcarusHermes/status/2038524251355934872">@IcarusHermes described an integration</a> where agents log their own decisions, export data, fine-tune smaller successors on their history, and switch over to cheaper models. <a href="https://x.com/winglian/status/2038680417125957865">@winglian&#8217;s ARC</a> adds <strong>remote browser-based monitoring/control</strong> with E2E encryption.</p></li><li><p><strong>Open vs proprietary agent infra is being actively contested</strong>: <a href="https://x.com/ClementDelangue/status/2038552830638755962">@ClementDelangue explicitly argued</a> that open-source agent tools should default to <strong>open-source models</strong>, both for privacy and durability. In parallel, vendors are attacking known pain points: <a href="https://x.com/fchollet/status/2038662563228230127">@fchollet highlighted PokeeClaw</a> as a more secure OpenClaw-style assistant with sandboxing, approvals, RBAC, and audit trails; <a href="https://x.com/Zai_org/status/2038632251551023250">Z AI launched AutoClaw</a>, a local OpenClaw runtime with <strong>no API key required</strong> and optional GLM-5-Turbo.</p></li></ul><p><strong>Qwen3.5-Omni, GLM-5-Turbo/AutoClaw, and the Push Toward Local/Agentic Specialization</strong></p><ul><li><p><strong>Qwen3.5-Omni is a major multimodal release</strong>: Alibaba introduced <strong>Qwen3.5-Omni</strong>, with native text/image/audio/video understanding, <strong>script-level captioning</strong>, built-in <strong>web search and function calling</strong>, and a standout &#8220;<strong>audio-visual vibe coding</strong>&#8221; demo where the model builds websites/games from spoken visual instructions. Reported capabilities include support for <strong>10h audio / 400s of 720p video</strong>, <strong>113 speech-recognition languages</strong>, and <strong>36 spoken languages</strong>; Alibaba claims it outperforms <strong>Gemini 3.1 Pro in audio</strong> and matches its AV understanding in some settings (<a href="https://x.com/Alibaba_Qwen/status/2038636335272194241">launch thread</a>, <a href="https://x.com/Alibaba_Qwen/status/2038637124619231467">demo thread</a>, <a href="https://x.com/Alibaba_Qwen/status/2038641496455557565">additional demo</a>). A useful caveat from <a href="https://x.com/kimmonismus/status/2038638427604762666">@kimmonismus</a>: &#8220;omni&#8221; here is about <strong>interpreting</strong> multimodal inputs, not arbitrary multimodal generation.</p></li><li><p><strong>Z AI continues to tune for agentic workloads</strong>: <a href="https://x.com/ArtificialAnlys/status/2038667075489808804">Artificial Analysis evaluated GLM-5-Turbo</a>, Z AI&#8217;s proprietary agent-optimized variant. It scored <strong>47</strong> on the AA Intelligence Index, slightly behind open-weight <strong>GLM-5 (Reasoning)</strong> at <strong>50</strong>, but posted <strong>1503 on GDPval-AA</strong>, ahead of GLM-5&#8217;s <strong>1408</strong>, supporting the claim that the model is tuned for real-world agent workflows rather than broad benchmark maximalism.</p></li><li><p><strong>Specialized open models are increasingly the deployment pattern</strong>: Several tweets converged on the same thesis: companies will increasingly <strong>own and specialize open models</strong> on proprietary data rather than rent general-purpose APIs indefinitely (<a href="https://x.com/oneill_c/status/2038689976012149131">@oneill_c</a>, <a href="https://x.com/ClementDelangue/status/2038649731404927202">@ClementDelangue</a>). Supporting evidence ranged from a <strong>Qwen3.5-27B model distilled from Claude 4.6 Opus</strong> trending on HF for weeks and reportedly fitting on <strong>16GB in 4-bit</strong> (<a href="https://x.com/UnslothAI/status/2038625148354679270">Unsloth</a>, <a href="https://x.com/Hesamation/status/2038642306434150427">@Hesamation</a>) to growing enthusiasm for local runtimes like llama.cpp and MLX.</p></li></ul><p><strong>Local Inference and Systems: llama.cpp at 100k, Flash-MoE on MacBooks, and Web/Serving Toolchains</strong></p><ul><li><p><strong>Local AI had a symbolic milestone with llama.cpp hitting 100k GitHub stars</strong>: <a href="https://x.com/ggerganov/status/2038632534414680223">@ggerganov&#8217;s reflection</a> framed 2026 as potentially the breakout year for <strong>local agentic workflows</strong>, arguing that useful automation doesn&#8217;t require frontier-scale hosted models and that the right portable runtime stack matters more than absolute scale. The post also emphasized the importance of <strong>cross-hardware, non-vendor-locked infra</strong>.</p></li><li><p><strong>Flash-MoE on Apple Silicon drew strong attention</strong>: A widely shared post claimed <strong>Qwen3.5-397B</strong> could run on a <strong>48GB MacBook Pro</strong> at <strong>4.4 tok/s</strong> using a pure <strong>C + Metal</strong> engine that streams weights from SSD and only loads the active experts, reportedly using <strong>~5.5GB RAM during inference</strong> (<a href="https://x.com/heynavtoor/status/2038614549973401699">summary thread</a>). Related work includes <a href="https://x.com/anemll/status/2038684375425200360">anemll-flash-mlx</a>, which focuses on optimizing only the MoE path on top of MLX, and <a href="https://x.com/ostrisai/status/2038643080400969940">AI Toolkit&#8217;s new Apple Silicon support</a>.</p></li><li><p><strong>Web and serving stacks also moved</strong>: <a href="https://x.com/xenovacom/status/2038610331417608691">Transformers.js v4</a> added a <strong>WebGPU backend</strong> across browser/Node/Bun/Deno with major perf gains and 200+ architectures. <a href="https://x.com/vllm_project/status/2038415516772299011">vLLM-Omni v0.18.0</a> shipped 324 commits, production TTS/omni serving, unified quantization, diffusion runtime refactors, and a dozen-plus new models. On the speech side, <a href="https://x.com/ArtificialAnlys/status/2038678855213568031">Artificial Analysis covered Cohere Transcribe</a>: a <strong>2B conformer encoder-decoder</strong>, <strong>Apache 2.0</strong>, trained on <strong>14 languages</strong>, hitting <strong>4.7% AA-WER</strong> and roughly <strong>60x real-time</strong> transcription speed.</p></li></ul><p><strong>Agent Research: Natural-Language Harnesses, Meta-Harness, Async SWE Agents, and Long-Context via Filesystems</strong></p><ul><li><p><strong>Harness engineering is becoming a research field of its own</strong>: A Tsinghua/Shenzhen paper on <strong>natural-language agent harnesses</strong> proposed letting an LLM execute orchestration logic from an SOP rather than hard-coded harness rules, a direction that multiple practitioners found mind-bending but plausible as context budgets rise (<a href="https://x.com/rronak_/status/2038401494177694074">@rronak_ summary</a>). Meta pushed the idea further with <strong>Meta-Harness</strong>, a method that optimizes the harness end-to-end over code, traces, and scores rather than just the base model; claims include <strong>#1 among Haiku agents on TerminalBench-2</strong> and strong gains in text classification and transfer (<a href="https://x.com/yoonholeee/status/2038640635482456118">@yoonholeee</a>, <a href="https://x.com/LiorOnAI/status/2038669301541228606">explainer by @LiorOnAI</a>).</p></li><li><p><strong>Async/multi-agent SWE design got stronger empirical backing</strong>: The <strong>CAID</strong> paper from CMU argues for <strong>centralized asynchronous isolated delegation</strong> using manager agents, dependency graphs, isolated git worktrees, self-verification, and merges. Reported gains were <strong>+26.7 absolute on PaperBench</strong> and <strong>+14.3 on Commit0</strong> versus single-agent baselines, suggesting that concurrency and isolation beat simply giving one agent more iterations (<a href="https://x.com/omarsar0/status/2038627572108743001">@omarsar0 summary</a>).</p></li><li><p><strong>Coding agents as long-context processors is one of the more interesting reframings</strong>: A paper highlighted by <a href="https://x.com/dair_ai/status/2038635382989005015">@dair_ai</a> treats huge corpora as directory trees and lets off-the-shelf coding agents navigate them with shell commands and Python, rather than stuffing text into context windows or relying purely on retrieval. Reported results include <strong>88.5% on BrowseComp-Plus (750M tokens)</strong> vs <strong>80% previous best</strong>, and operation up to <strong>3T tokens</strong>.</p></li></ul><p><strong>Training, Optimization, Evaluation, and Production Case Studies</strong></p><ul><li><p><strong>Muon got a meaningful systems/math optimization</strong>: <a href="https://x.com/jcz42/status/2038660309968208028">Gram Newton-Schulz</a> is a drop-in replacement for Muon&#8217;s Newton-Schulz step that works on the smaller symmetric <strong>XX&#7488; Gram matrix</strong> rather than the large rectangular matrix, reportedly making Muon <strong>up to 2x faster</strong> while preserving validation perplexity within <strong>0.01</strong>. The work drew praise from <a href="https://x.com/tri_dao/status/2038666307738964466">@tri_dao</a> as the kind of cross-disciplinary linear algebra + fast-kernel result that actually matters.</p></li><li><p><strong>Two practical implementation details stood out</strong>: <a href="https://x.com/wightmanr/status/2038634643843682366">Ross Wightman flagged</a> a subtle but important <strong>PyTorch </strong><code>trunc_normal_</code><strong> misuse pattern</strong> in LLM training code: default <code>a/b</code> are absolute values, not standard deviations, so many codebases effectively aren&#8217;t truncating at all; he also noted numerical oddities later fixed in nightlies. At the application layer, <a href="https://x.com/dbreunig/status/2038650860843245814">Shopify&#8217;s DSPy case study</a> was notable for economics: one slide highlighted a reduction from <strong>$5.5M to $73K/year</strong> by decomposing business logic, modeling intent with DSPy, and switching to a smaller optimized model while maintaining performance (<a href="https://x.com/kmad/status/2038659241238503716">follow-up</a>).</p></li><li><p><strong>New evals/benchmarks continued to expose gaps</strong>: <a href="https://x.com/arankomatsuzaki/status/2038443186255991169">World Reasoning Arena</a> targets hypothetical/world-model reasoning and reports a substantial gap to humans. <a href="https://x.com/_philschmid/status/2038655544613826985">Tau Bench&#8217;s new banking domain</a> adds a realistic 698-doc support environment where best models still only solve about <strong>25%</strong> of tasks. Meanwhile, a Stanford-led paper highlighted by <a href="https://x.com/Zulfikar_Ramzan/status/2038408402809090554">@Zulfikar_Ramzan</a> found <strong>sycophantic AI</strong> can increase users&#8217; certainty while reducing willingness to repair relationships, underscoring that &#8220;helpfulness&#8221; metrics can obscure socially harmful behavior.</p></li></ul><p><strong>Top tweets (by engagement)</strong></p><ul><li><p><strong>Claude Code computer use</strong>: Anthropic&#8217;s release was the biggest technical product launch in the set, and likely the most consequential for day-to-day coding-agent UX (<a href="https://x.com/claudeai/status/2038663014098899416">announcement</a>).</p></li><li><p><strong>Claude Code hidden features</strong>: <a href="https://x.com/bcherny/status/2038454336355999749">@bcherny&#8217;s thread</a> drew massive engagement, reflecting how quickly expert users are now optimizing around coding-agent workflows rather than raw model prompts.</p></li><li><p><strong>Hermes Agent update</strong>: The broad community response to <a href="https://x.com/NousResearch/status/2038688578201346513">Nous&#8217;s major Hermes release</a> suggests open agent harnesses have reached a new adoption phase.</p></li><li><p><strong>Qwen3.5-Omni launch</strong>: Alibaba&#8217;s multimodal release was one of the day&#8217;s biggest model announcements and especially notable for its practical demos around audio/video-driven app creation (<a href="https://x.com/Alibaba_Qwen/status/2038636335272194241">launch</a>).</p></li><li><p><strong>llama.cpp at 100k stars</strong>: <a href="https://x.com/ggerganov/status/2038632534414680223">@ggerganov&#8217;s milestone post</a> captured the local-first mood of the week: increasingly capable open models plus increasingly capable local runtimes.</p></li></ul><div><hr></div><h1><strong>AI Reddit Recap</strong></h1><h2><strong>/r/LocalLlama + /r/localLLM Recap</strong></h2><h3><strong>1. Qwen Model Developments and Applications</strong></h3><p></p>
      <p>
          <a href="https://www.latent.space/p/ainews-the-last-4-jobs-in-tech">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Mistral: Voxtral TTS, Forge, Leanstral, & what's next for Mistral 4 — w/ Pavan Kumar Reddy & Guillaume Lample]]></title><description><![CDATA[Mistral is one of the world's leading frontier model labs, and has just launched Voxtral TTS, their latest step in their strategy to offer open frontier intelligence for every modality.]]></description><link>https://www.latent.space/p/voxtral</link><guid isPermaLink="false">https://www.latent.space/p/voxtral</guid><pubDate>Mon, 30 Mar 2026 19:25:21 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/192356063/97946646def202fcd5843c5af928340d.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Mistral has been on an absolute tear - with frequent successful model launches it is easy to forget that they raised <a href="https://mistral.ai/news/mistral-ai-raises-1-7-b-to-accelerate-technological-progress-with-ai">the largest European AI round in history</a> last year. We were long overdue for a Mistral episode, and we were very fortunate to work with <a href="https://x.com/sophiamyang">Sophia</a> and Howard to catch up with <a href="https://www.linkedin.com/in/mupavan/">Pavan</a> (Voxtral lead) and <a href="https://www.linkedin.com/in/guillaume-lample-7821095b/">Guillaume</a> (Chief Scientist, Co-founder) on the occasion of this week&#8217;s <a href="https://youtu.be/SUjA25ijcNs">Voxtral TTS launch</a>:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MRaQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6691984a-d0c0-4784-b673-e37a50e3a8b4_1200x1334.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MRaQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6691984a-d0c0-4784-b673-e37a50e3a8b4_1200x1334.png 424w, https://substackcdn.com/image/fetch/$s_!MRaQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6691984a-d0c0-4784-b673-e37a50e3a8b4_1200x1334.png 848w, https://substackcdn.com/image/fetch/$s_!MRaQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6691984a-d0c0-4784-b673-e37a50e3a8b4_1200x1334.png 1272w, https://substackcdn.com/image/fetch/$s_!MRaQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6691984a-d0c0-4784-b673-e37a50e3a8b4_1200x1334.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MRaQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6691984a-d0c0-4784-b673-e37a50e3a8b4_1200x1334.png" width="450" height="500.25" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6691984a-d0c0-4784-b673-e37a50e3a8b4_1200x1334.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1334,&quot;width&quot;:1200,&quot;resizeWidth&quot;:450,&quot;bytes&quot;:351234,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.latent.space/i/192356063?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6691984a-d0c0-4784-b673-e37a50e3a8b4_1200x1334.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!MRaQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6691984a-d0c0-4784-b673-e37a50e3a8b4_1200x1334.png 424w, https://substackcdn.com/image/fetch/$s_!MRaQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6691984a-d0c0-4784-b673-e37a50e3a8b4_1200x1334.png 848w, https://substackcdn.com/image/fetch/$s_!MRaQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6691984a-d0c0-4784-b673-e37a50e3a8b4_1200x1334.png 1272w, https://substackcdn.com/image/fetch/$s_!MRaQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6691984a-d0c0-4784-b673-e37a50e3a8b4_1200x1334.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Mistral can&#8217;t directly say it, but the benchmarks do imply, that this is <strong>basically an open-weights ElevenLabs-level TTS model</strong> (Technically, it is a 4B Ministral based multilingual low-latency TTS open weights model that has a 68.4% win rate vs ElevenLabs Flash v2.5). The contributions are not just in the open weights but also in open research: We also spend a decent amount of the pod talking about their architecture that combines auto-regressive generation of semantic speech tokens with flow-matching for acoustic tokens (typically only applied in the Image Generation space, <a href="https://neurips.cc/virtual/2024/tutorial/99531">as seen in the Flow Matching NeurIPS workshop from the principal authors</a> that we reference in the pod).</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Bkau!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31f8f031-326f-438c-a2c1-e1aa3d06e901_1331x962.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Bkau!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31f8f031-326f-438c-a2c1-e1aa3d06e901_1331x962.png 424w, https://substackcdn.com/image/fetch/$s_!Bkau!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31f8f031-326f-438c-a2c1-e1aa3d06e901_1331x962.png 848w, https://substackcdn.com/image/fetch/$s_!Bkau!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31f8f031-326f-438c-a2c1-e1aa3d06e901_1331x962.png 1272w, https://substackcdn.com/image/fetch/$s_!Bkau!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31f8f031-326f-438c-a2c1-e1aa3d06e901_1331x962.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Bkau!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31f8f031-326f-438c-a2c1-e1aa3d06e901_1331x962.png" width="1331" height="962" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/31f8f031-326f-438c-a2c1-e1aa3d06e901_1331x962.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:962,&quot;width&quot;:1331,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Image&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Image" title="Image" srcset="https://substackcdn.com/image/fetch/$s_!Bkau!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31f8f031-326f-438c-a2c1-e1aa3d06e901_1331x962.png 424w, https://substackcdn.com/image/fetch/$s_!Bkau!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31f8f031-326f-438c-a2c1-e1aa3d06e901_1331x962.png 848w, https://substackcdn.com/image/fetch/$s_!Bkau!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31f8f031-326f-438c-a2c1-e1aa3d06e901_1331x962.png 1272w, https://substackcdn.com/image/fetch/$s_!Bkau!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31f8f031-326f-438c-a2c1-e1aa3d06e901_1331x962.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>You can catch up on <a href="https://mistral.ai/static/research/voxtral-tts.pdf">the paper here</a> and the full episode is <a href="https://youtu.be/SUjA25ijcNs">live on youtube</a>!</p><div id="youtube2-SUjA25ijcNs" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;SUjA25ijcNs&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/SUjA25ijcNs?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p></p><p></p><p></p><h2>Timestamps</h2><p>00:00 Welcome and Guests<br>00:22 Announcing Voxtral TTS<br>01:41 Architecture and Codec<br>02:53 Understanding vs Generation<br>05:39 Flow Matching for Audio<br>07:27 Real Time Voice Agents<br>13:40 Efficiency and Model Strategy<br>14:53 Voice Agents Vision<br>17:56 Enterprise Deployment and Privacy<br>23:39 Fine Tuning and Personalization<br>25:22 Enterprise Voice Personalization<br>26:09 Long-Form Speech Models<br>26:58 Real-Time Encoder Advances<br>27:45 Scaling Context for TTS<br>28:53 What Makes Small Models<br>30:37 Merging Modalities Tradeoffs<br>33:05 Open Source Mission<br>35:51 Lean and Formal Proofs<br>38:40 Reasoning Transfer and Agents<br>40:25 Next Frontiers in Training<br>42:20 Hiring and AI for Science<br>44:19 Forward Deployed Engineering<br>46:22 Customer Feedback Loop<br>48:29 Wrap Up and Thanks</p><p></p><h1>Transcript</h1><p><strong>swyx:</strong> Okay, welcome to Latent Space. We&#8217;re here in the studio with our gues co-host Vibh u. Welcome. Thanks. Excited for this one as well as Guillaume and Pavan from Mistral. Welcome. Excited to be here.</p><p><strong>Guillaume:</strong> Thank you.</p><p><strong>swyx:</strong> Pavan, you are leading audio research at Mistral and Guillaume, you're Chief Scientist,</p><h2>Announcing Voxtral TTS</h2><p><strong>swyx</strong></p><p>Host</p><p>(00:05) Okay. (00:05) Welcome to Lean Space. (00:06) We&#8217;re here in the studio with trustee co-hosts, Vibhu. (00:09) Welcome.</p><p><strong>Vibhu</strong></p><p>Host</p><p>(00:11) Very excited for this one.</p><p><strong>swyx</strong></p><p>Host</p><p>(00:12) As well as Guillaume and Pavan from Mistral. (00:15) Welcome. (00:16) Excited to be here. (00:17) Thank you for having us.</p><p>(00:18) Pavan, you are leading audio research at Mistral and Guillaume, you&#8217;re a chief scientist. (00:23) What are we announcing today where we&#8217;re coordinating this release with you guys?</p><p><strong>Guillaume</strong></p><p>Guest</p><p>(00:26) Yeah, so we are releasing Voxtral TTS. So it&#8217;s our first audio model that generates speech. It&#8217;s not our first audio model. We had a couple of releases before.</p><p>(00:35) We had one in the summer that was Voxtral, our first audio model, but it was like a transcription model, ASR. Like a few months later, we released some update on top of this, supporting more languages. Also a lot of table stack features for our customers, context biasing, precision, timestamping and transcription. We also have some real-time model that can transcribe not just at the end of the level.</p><p>(00:56) You don&#8217;t need to fill your entire audio file, but that can also come in real-time. And here, this is a natural extension in the audio, so basically speech generation. So yeah, so we support nine languages, and this is a pretty small model, 3D model, so very fast, and also state of the art. Performed at the same level as the base model, but it&#8217;s much more efficient in terms of cost, and also much, in terms of cost, it&#8217;s also much cheaper, only a fraction of the cost of our competitors.</p><p>(01:22) And we are also releasing the work that this model is running.</p><p><strong>swyx </strong>What&#8217;s the decision factor?</p><p><strong>Guillaume</strong> It&#8217;s a good question.</p><p><strong>swyx</strong></p><p>There will be more. Yeah, Pavan, any sort of research notes to add on?</p><h2>Architecture and Codec</h2><p><strong>Pavan:</strong> But it&#8217;s a novel architecture that we develop inhouse.</p><p>We traded on several internal architectures and ended up with a auto aggressive flow matching architecture. And also have a new in-house neural audio codec. Which, converts this audio into all point by herds latent [00:02:00] tokens, semantic and acoustic tokens. And yeah, that&#8217;s that&#8217;s their new part about this model and we&#8217;re pretty excited that it&#8217;s, it came out with such good quality and Jim was mentioning. Yeah, it&#8217;s a three B model. It&#8217;s based off of the TAL model that we actually released just a few months back and insert trunk and mainly meant for like the TTS stuff, but they need text capabilities are also there. Yeah.</p><p><strong>swyx:</strong> So there&#8217;s a lot to cover.</p><p>I always I love any, anything to do with novel encodings and all those things because I think that&#8217;s obviously I creates a lot of efficiency, but also maybe bugs that sometimes happen. You were previously a Gemini and you worked on post training for language models, and maybe a lot of people will have less experience with audio models just in general compared to pure language.</p><p>What did you find that you have to revisit from scratch as you joined this trial and started doing this? At least</p><h2>Understanding vs Generation</h2><p><strong>Pavan:</strong> when it comes to, for, I think the, there are two buckets, I guess the audio understanding and audio [00:03:00] generation. The audio understanding, like the walkthrough models that Kim was mentioning that we released earlier.</p><p>The walkthrough chat that we released I think July last year, and the follow up transcription only, models family that we released in January, that would be one bucket, and the generation is another bucket. I think. You can also treat them as a unified set of models, but currently the approaches are a little different between these two.</p><p>To your question on how audio is fed to the model? In the understanding model, it&#8217;s very similar to actually Pixar models that we also released,</p><p><strong>swyx:</strong> yes.</p><p><strong>Pavan:</strong> That&#8217;s</p><p><strong>swyx:</strong> amazing.</p><p><strong>Pavan:</strong> It was pretty, I, that was the first project I worked on after joined Misra. It was pretty, pretty nice. And Wtu was very similar in spirit.</p><p>I guess So we feed audio through an audio encoder similar to images through a vision encoder, and it produces continuous embeddings and which are fed as tokens to the main transformer decoded transformer model. Yeah. On the model output is just text. So on the output side, there is nothing that needs to be done in these kinds of mode.</p><p>I [00:04:00] guess the interesting part of what the generation stuff is, the output now has to produce audio and. The approach that we have is this neural audio codec, which converts audio into these latent tokens. There is a lot of existing attrition and a lot of models which are based off of this kind of approach.</p><p>And we took a slightly. A different, design decisions around this. But at the end of the day, the neural audio product converts audio into a 12.5 herdz set of latents. And each latent is, has a semantic token and a set of acoustic tokens. And the idea is that you take these discrete tokens and then feed it on the input side.</p><p>There&#8217;s several ways to use this at each frame, but we just sum the embedding. So it&#8217;s like having key different vocabularies. Combine all of them because they all correspond to one audio frame on the input side. The output side is the interesting part on the output side, the, it&#8217;s not the, I don&#8217;t know if it&#8217;s the most popular, but one.</p><p>Popular technique is to have a depth transformer [00:05:00] because you have K tokens at each time step, like with a text, you just have one token at each time step. So you just do predict the token from the vocabulary with, yeah, with just, you get probability</p><p><strong>swyx:</strong> This&#8217;s a very straightforward text. Very</p><p><strong>Pavan:</strong> straightforward.</p><p><strong>swyx:</strong> Yeah.</p><p><strong>Pavan:</strong> But if you have K tokens, then the name thing would be to predict all of them in paddle. That doesn&#8217;t work. At least that doesn&#8217;t work that well because audio has more entropy. And the, one of the techniques people use is this depth transformer where you you almost have a small transformer, or it can be L-S-T-M-R in as well, but people use transformers and you predict the K tokens in auto aggressive fashion in that.</p><p>So you have two auto reive things going on.</p><h2>Flow Matching for Audio</h2><p><strong>Pavan:</strong> So the thing we did differently is in, instead of having this auto aggressive K step prediction, we have a flow matching model. Instead of modeling this as a discrete token set we trained the codec to be both discrete and continuous to have this flexibility.</p><p>So we did try the discrete stuff too, and which it works well, but the continuous stuff works just better. So yeah, we took this flow matching, so the, it&#8217;s a flow [00:06:00] matching head, which takes the latent from the main transformer and like kind in fusion, it&#8217;s denoising, but in this flow matching itself, velocity estimate.</p><p>So you go from this noise t all the way to there. Audio latent, which corresponds to the 80 millisecond audio and then, which is sent through the work order to get back the 80 millisecond audio frame.</p><p><strong>swyx:</strong> Yeah. Is this the first application of flow matching in audio? Because usually I come across this in the image.</p><p><strong>Pavan:</strong> Yeah. Actually, in some sense there are models flow matching models in audio, but I think this specific combination I could be wrong. There could be somewhat. No. I haven&#8217;t seen. I haven&#8217;t seen much work in this, so I think it&#8217;s novel and a lot of it&#8217;s just a way bigger community, so they, I think they pioneer a lot of these diffusion flow matching work, and it&#8217;s interesting to adopt some of the ideas there into audio and,</p><p><strong>swyx:</strong> yeah.</p><p><strong>Pavan:</strong> Yeah, I&#8217;m, personally that&#8217;s the think part which is trying out about. One of more meta point is unlike text, even in vision, I think this is true, but in [00:07:00] audio step literature that there is no.</p><p>Winner model, yet there is no, okay, this is the way you do things. It&#8217;s it&#8217;s still by, I think people are still iterating and figuring out like what&#8217;s the best overall recipe. I guess the idea. Pretty sure there are models which are also completely end-to-end, like NATO audio. NATO audio, but it&#8217;s still not come to a convergence point where this, the right way to think that.</p><p>That also makes. A space pretty exciting to explore.</p><h2>Real Time Voice Agents</h2><p><strong>Vibhu:</strong> What are some of the ways to look at it?</p><p><strong>Vibhu:</strong> There are ways where you can do diffusion for audio generation, but if you want like real time generation, that&#8217;s a big thing with the approach I&#8217;m assuming that you took. Yeah. And also like how do you go about evaluating different axes of what you care about, yeah,</p><p><strong>Pavan:</strong> good point. I think we so you can do just flow matching diffusion for the whole audio. We didn&#8217;t even go down that path because one of the main applications is voice agents and we want real time streaming, and that&#8217;s the use case. That&#8217;s not the only use case, but that&#8217;s one of the primary use cases we want to get to.</p><p>So we [00:08:00] picked the auto aggressive approach for that. And within the auto aggressive space, again, you can do chunk by chunk or you can do so we picked the. I think at least personally prefer the operations, which are the simplest, and so we try to see, can we just add audio as just another head to our regular transformer decode model because that kind of makes it easier for eventual end-to-end modeling of audio text native modeling.</p><p>Yeah. And it works pretty well. So I guess we went with that and we tried a little bit, but the flow matching head itself, like we had a discreet. Diffusion kind of approach, which also works well, but the flow matching work better.</p><p><strong>swyx:</strong> I was just curious about how you also think about this overall direction of research.</p><p>Do you basically, when you work with the audio team, do you set some high level parameters and then let them explore whatever, or how does it work between you guys?</p><p><strong>Guillaume:</strong> No I think the way it works is that we are the, we are prioritizing together, I think, what are the most important features because there are many things we can do [00:09:00] in audio.</p><p>Yeah, I think we try to. These are like how we should do things, for instance. Ultimately what we want to do is to build this through duplex model, but we are not going to start this start there directly, I think is. Some of the project people are doing, but</p><p><strong>swyx:</strong> just to confirm, full effects means it can speak while I&#8217;m speaking or,</p><p><strong>Guillaume:</strong> yeah.</p><p>Okay. Audio. Yeah. Yeah. So intimately we&#8217;re going to get there, but for us it was, we decided to take it like a step by step. So we start with whatever is the most important. I think support customers, which is the transcription is the most popular use case. Then the speech generation, Soviet time, just a bit before that.</p><p>And then actually to be like more, but try combining everything all together. But but yeah, we thought it was also important to like separate things and optimize each capability one by one before we</p><p><strong>swyx:</strong> measure of that together. And the super omni model. But</p><p><strong>Guillaume:</strong> very interesting because as Par said, it&#8217;s when you work on some other domains of this airline and everything, there are many areas where I think it&#8217;s not as interesting.</p><p>For instance. Many places, it&#8217;s essentially just around data or like creating new environments on a lot of kind [00:10:00] of easy things. But things were, I think the research is maybe not as interesting. Were in audio. There are so many ways to actually build this model. So many ways to go around it. That&#8217;s the sense I think is really interesting.</p><p>And what we also tried for speed generation is that we tried multiple approaches. What was interesting that even though they were extremely different, they under the big know the particles but the for matching turned out to be quite more natural. So we are happy with this.</p><p><strong>swyx:</strong> Is there intuition why it maybe like flow matching is just models speech better in some natural fundamental, latent dimension?</p><p><strong>Pavan:</strong> No, I think the main thing is e even at a particular time step, there is a distribution of things.</p><p><strong>swyx:</strong> Yes.</p><p><strong>Pavan:</strong> To be predicted like the way you inflate. So you already know the word that you&#8217;re speaking and Yeah. The intake space, let&#8217;s say the word maps register a single token for simplicity.</p><p>In most cases it does. So there is not a lot of so you just pick the word, but with within audio, even the same word could, even with your own voice, could be inflicted in so many different ways. And I think [00:11:00] any approach which like models this distribution and. And flow matching is one, one of the take.</p><p>It&#8217;s not the only one at all, but it&#8217;s a one which works pretty reasonably well. I think that&#8217;s better. So you have to pick across several different, the intuition I have is it&#8217;s, there are some, several different clusters each corresponding to some specific way you would inflict, pronounce that thing.</p><p>And you can&#8217;t predict the mean of it because that corresponds to some blurred out speech or something like that. But you have to pick one. And then like sharp</p><p><strong>swyx:</strong> conditional inference.</p><p><strong>Pavan:</strong> Yeah, exactly.</p><p><strong>swyx:</strong> Is that all covered under disfluencies, which is I think the normal term of art. Pauses intonations. By the way, I have to thank Sophia for setting all this up, including like some of these really good notes because</p><p><strong>Pavan:</strong> Yeah.</p><p><strong>swyx:</strong> I&#8217;m less familiar with the audios for me.</p><p><strong>Pavan:</strong> No. I think dis dismisses are definitely one such Eno defenses is more like</p><p><strong>swyx:</strong> which is arms are.</p><p><strong>Pavan:</strong> Yeah, arms. And also repeat like you like,</p><p><strong>swyx:</strong> yeah.</p><p><strong>Pavan:</strong> You do this full of words, your thinking, so you repeat the word.</p><p><strong>swyx:</strong> Okay. Whereas intonation is like a diff, it&#8217;s up up [00:12:00] speak and all this.</p><p>Okay.</p><p><strong>Pavan:</strong> Yeah. So I think there is a lot of like entropy. And modeling it as a distribution. And a, any technique which helps with it and the depth transformer is a conditional way of modeling this. And Transformers actually really good at it, even though that&#8217;s a mini transformers. So I think that worked pretty well too for us too.</p><p>It&#8217;s just that the main concentration is when you have a depth transformer. If you have K tokens, you need to do K auto steps, right? Even though it&#8217;s a small thing, it&#8217;s K steps, which is very vacant, say heavy, but flow matching. We were able to cut it down significantly. So we are able to do the inference in quad steps or 16 steps and it works pretty well.</p><p>And there are more normal techniques to bring it down even further to like, in extreme case, one step like we&#8217;re not doing it yet, but it at least the framework, LEDs itself to more efficient and Yes.</p><p><strong>swyx:</strong> And the image guys have done.</p><p><strong>Pavan:</strong> Yeah.</p><p><strong>swyx:</strong> Incredible work guys. Yeah.</p><p><strong>Pavan:</strong> It now you just. Send a prompt and you get an image.</p><p><strong>swyx:</strong> Yeah. Surprisingly not enough. I think image model labs use those techniques in production. I think it&#8217;s, I feel like it&#8217;s a lot of research demos, but [00:13:00] nothing I can use on my phone today.</p><p><strong>Guillaume:</strong> The thing, there&#8217;s a thing that would be interesting here is that since, indeed I&#8217;ve been so much sure that has been done in the vision community compared to radio dys, stomach, I think there are so many long infra Yeah.</p><p>And there are so many things we can do to actually improve this further. So it&#8217;s our first version, but we have so many ways to exist, much better and much more efficient, cost efficient, so</p><p><strong>swyx:</strong> yeah.</p><p><strong>Guillaume:</strong> So really it&#8217;s not a new field at all, of course, but there are still so many things that can be done.</p><p>Perfect. It&#8217;s</p><p><strong>swyx:</strong> nice. I should also mention for those who are newer to flow matching, I think the creator, this guy&#8217;s name is Alex, he&#8217;s done I think in Europe&#8217;s maybe two Europes as ago. There was, there&#8217;s a very good workshop. There&#8217;s one hour on like this matching is I would recommend people look that up.</p><p>That&#8217;s the other thing, right?</p><h2>Efficiency and Model Strategy</h2><p><strong>swyx:</strong> The efficiency wise, like I, I imagine like the reason is open weights the reason you pick 3.6 B backbone it you are 3.4 B you are, try to fit to some kinda hardware constraints. You kinda fits some kinda basic constraints. What are they?</p><p><strong>Guillaume:</strong> Not necessarily, I think something we care about in our model that they&#8217;re efficient.</p><p>So we have a [00:14:00] lot of separate model, for instance. So we have this that is very small, very efficient. We also have a small OCR model that is available. Good, highly efficient as well. And I think on a project maybe there, I think companies are going to take is to have a coverage general model that will do a bit of everything.</p><p>But that is also going to be expensive. On here. What want say is if you care about this specific use case, if you can actually use this model, it just does that. It&#8217;s extremely good at it. Survey, very efficient. That&#8217;s why we can actually add. We do, but also OCR that are like really good at that.</p><p>And that would be much more cost effective factors and the general model that will contain a lot of capabilities you don&#8217;t really need. So yeah. So we&#8217;re doing like general model, but also like more customized model. This,</p><h2>Open Weights and Benchmarks</h2><p><strong>Vibhu:</strong> how does it compare to other TTS models? It&#8217;s, we are going follow open wave.</p><p>We&#8217;re just dropping it. I think it&#8217;s pretty good.</p><p><strong>Pavan:</strong> Yeah, I think it&#8217;s pretty good. Like it, it&#8217;s definitely one of the best. For sure. It&#8217;s probably I would say it&#8217;s the best open source model, but</p><p><strong>Vibhu:</strong> decipher themselves.</p><p><strong>swyx:</strong> Yeah.</p><h2>Voice Agents Vision</h2><p><strong>Vibhu:</strong> Why now? How does it fit into broader ral vision? How do you see voice agents?</p><p>How do you see voice? I think every year I&#8217;ve heard, okay, you&#8217;re a [00:15:00] voice. You&#8217;re a voice. There&#8217;s a lot of architectural stuff. There&#8217;s a lot of end time that see it, your solving, but where do you see voice setting?</p><p><strong>Guillaume:</strong> We had so many customers asking for voice. That&#8217;s also why we wanted to build it.</p><p>What&#8217;s interesting in this domain is that. In a sense, if you take something simple like transcription it doesn&#8217;t seem like something that should be very hard to do for a model. It&#8217;s essentially, it&#8217;s pattern recognition. It&#8217;s classification on this. Models are very good at classifying, right?</p><p>Or nonetheless, when you talk to them it&#8217;s not there yet, right? It&#8217;s not, you don&#8217;t talk to them the same way you talk to a person. On something, maybe people don&#8217;t realize it. It&#8217;s in English it&#8217;s still much better than in any user language, even compared to French instance. If you talk to this million in French, when you see people talking to this they&#8217;ll talk very slow.</p><p>They&#8217;ll articulate as much as they can. So it&#8217;s not natural, right? We&#8217;re not yet to this. And I think, yeah, maybe the next generation will not know this, but yeah, I think people that. But our edge will actually always keep this bias speaking very slowly when they talk to this model. Even if maybe, probably in a couple of years, maybe next year it&#8217;ll not be necessary anymore.</p><p>But yeah. But what&#8217;s interesting is to see that yeah, even for like languages [00:16:00] like yeah, French and Spanish Germans that are not no, no resource on religion. You have a lot of audios there on still it&#8217;s not as good. And I think a consequence. Because then for this, I suppose just is not as much energy, as much effort that has been put done in some other mod that for some vision or like coding.</p><p>But but yeah, there&#8217;s still a lot of progress to be done. I think it&#8217;s just a question of doing the work and it&#8217;s clear path I think to get there.</p><p><strong>Pavan:</strong> It&#8217;s a little fascinating because I worked on Google Assistant I think while back at this point, but it&#8217;s, I think it&#8217;s, it like when you take a step back, it&#8217;s fascinating.</p><p>It&#8217;s not that long ago. It was like four years ago or five years ago, and it&#8217;s now it&#8217;s completely audio in, audio out and the function calling and the whole thing happens completely end to end. And in a very natural,</p><p><strong>swyx:</strong> yeah,</p><p><strong>Pavan:</strong> natural way and still ways to go. Kim was telling, even despite all the previous, it&#8217;s not like you&#8217;re speaking to a person.</p><p>When you talk to any of these agents, bots, or voice mode kind of situation, it&#8217;s still like a gap. I think that&#8217;s the great part and I feel like with even the existing [00:17:00] stack, we should be able to get to this very natural speech conversational abilities soon enough I guess.</p><p>And we&#8217;ll also hope. I get that</p><p><strong>Guillaume:</strong> on this kind of the next step, right? Because when you talk to these agents, like usually people are just writing to them and sometimes they&#8217;ll this very clear, for instance, you are, you want to write code, but you are, you have a very clear idea of how you want the model to implement what you in mind.</p><p>But so here you are able to spend a lot of time writing. So it&#8217;s not really efficient on audio is really like a natural interface that is just not there yet, but I think it&#8217;s just gonna be the place.</p><p><strong>Vibhu:</strong> How&#8217;s it like building, serving, inferencing, like we see a lot about, it&#8217;s very easy to take LMS off the shelf, serve them.</p><p>Fine tuning, deploying. I know you guys have a whole you have Ford, you have a whole stack of customizing, deploying. Is there a lag in getting that. Like distribution channel. Are you helping? There is. So like prompting, lms, you can have them be concise, verbose, all that.</p><p>They&#8217;re built on LM backbones, these models. How do you see all that?</p><h2>Enterprise Deployment and Privacy</h2><p><strong>Guillaume:</strong> Yeah, I think this is a lot of what we&#8217;re doing with our own customers. Very [00:18:00] often they come to us, so it&#8217;s for different reasons. I think one reason is sometimes they have this lot of privacy concerns.</p><p>They have this data that it&#8217;s very sensitive. They don&#8217;t want data to leave. The companies, they wanted to stay. Inside the company. So we have them deploy model in-house. So either on a, either on premise or on private cloud. So they&#8217;re not worried that it&#8217;s given to a third party on the there some leakage.</p><p>Sometimes they have this kind of many companies have this different, sensitivity of data they have like sometimes channel chat can send it to the cloud has to stay there. So then it creates some kind of heterogeneous workflows where it&#8217;s annoying. You cannot send some data to the cloud.</p><p>This one you can, so here, when we actually deploy the model for them, they don&#8217;t have this consideration. They are like not worried that, this is going to leak. Everything is much easier. So we help them basically do this on the, so it&#8217;s one of the very proposition. But but the other is very often, when customers use this off the shelf close model, but very sad is that they are not leveraging, these data that have been collecting for four years or something for decades.</p><p>So much data. Sometimes it&#8217;s trillions of tokens of [00:19:00] data in a very specific domain. Their domain, which is data that you&#8217;ll not find in the public, on the public internet. So data on which, like close model, we actually not have access to one, which that&#8217;s going to be really good. So if they&#8217;re using like closed source models are basically not benefiting from all these insights.</p><p>All these data they have collected three years, they can always give it into the context that in France, but is never as good as if you actually train the modern analysis. So yes, that&#8217;s basically what we help them to do. We actually provide them some purchase, basically what we announced at GTC this week.</p><p>So we provide them with this, it&#8217;s basically like a platform with a lot of tools to actually help them process data. Trained on that. Yeah, it&#8217;s actually the same thing that we&#8217;re using in the science team. So it&#8217;s actually very better tested infrastructure, like a lot of efficient training cut base.</p><p>For a quality pre-training like a fine tuning, even doing S-F-T-I-L. So we help them do this using the same tools as what our science team is building is using. So since it&#8217;s tools that we&#8217;ve been using for two years now, it&#8217;s really better tested. It&#8217;s really sophisticated.</p><p>So it&#8217;s the same thing. We are giving to them, giving the company the same thing [00:20:00] that what are same still using internally actually build their own ai and it makes a really big difference. I think sometimes customers. And many in general don&#8217;t realize how much better the model becomes when you fine tune it on your own data.</p><p>And you can have a, your model is here. You start from there. You have a cross source model, which is sort here, but if you actually fine tune it can actually really go much further than this. And then you have a very big advantage. The model is trained on your entire company knowledge, so it knows everything.</p><p>You don&#8217;t have to feed like 10 K tokens of contact at every query. So it&#8217;s it&#8217;s much easier. It&#8217;s a bit, I think using a closed source model is really sad because it basically puts. You are not leveraging all this data and you are going to be using the same model as all your old competitors when you&#8217;re actually using, everything you have been collected for years, which is really valuable.</p><p>So yeah. So we help basically customers do this. We have a lot of solution I mean deployed for engineers that go in the company that basically look at the problem customers are facing to look at what they&#8217;re struggling to do what we should do to solve it. So we help them solve them together.</p><p>So it&#8217;s I think our approach is a bit different, but here. [00:21:00] Some of their companies and competitors, it&#8217;s, we don&#8217;t just release an endpoint on sale, do some stuff on top of that, or we don&#8217;t just give a checkpoint. We really look very closely with customers. We look at the issues they have, we had them solve them.</p><p>We really make some tailored solution for the client are facing. Some example are also going to be, sometime we have some customers. They really wanted to have a really good model, really performance on some, like Asian languages on the, if you take some of the shelf models, they can speak it, they can write in this language, but it&#8217;s not amazing.</p><p>This language would be like maybe zero 1% of the mixture. So it has been included during training, but very little. So what we did here is upgrade. We trained a new model for them, but so this language was 50% of the mix, so it&#8217;s much, much stronger. It knows of the dialects, it knows the, so it&#8217;s yeah.</p><p>So it&#8217;s some example of things we can do and it&#8217;s really arbitrary, custom. I think you had some of their customers, for instance, they wanted some. They wanted some 3D model that can do audio with a very good function cable. So something you wanted to put in the car in particular, they wanted this to be offline because in a car you don&#8217;t necessarily have access to internet.</p><p>So [00:22:00] yeah. So here we can actually build the solutions. There is no like model out of the box on this. In the internet you have this very, you have this very general model generalist, like he&#8217;s strong model. But for things like this, they always want at specific solutions and on some other reasons.</p><p>Sometimes they come to us is because, like they, they experiment with some closed source model. They get some prototype. They&#8217;re happy with what they build. They, it works well. They&#8217;re happy with the performance, and then they want to go to production and then they analyze. But it&#8217;s extremely expensive.</p><p>You cannot push this. It&#8217;s so then they come back to us on this. They can help us build the same thing as this, but using something much cheaper on here. And here we can sometime be something 10 x cheaper by just functioning a model and it&#8217;ll be better OnPrem on their old server and also much cheaper as well.</p><p>So yeah,</p><p><strong>swyx:</strong> that&#8217;s the drop pitch right there. Take all the</p><p>money.</p><p><strong>Vibhu:</strong> And outside of that you do, we do put open wave models so people can do this themselves. I feel like not enough people go outta their way.</p><p><strong>swyx:</strong> They&#8217;re not going to, they&#8217;re gonna ask them to do it as the expert. I</p><p><strong>Guillaume:</strong> think initially we didn&#8217;t know, [00:23:00] we wanted completely short at the beginning of the company because, I think our study was not exactly the same as what it is today, but what we underestimated initially is the complexity of deploying this model and connecting them to everything to be sure it has access to the company knowledge on the, and it was, yeah, on, we were seeing customers struggling with this, but it was even, that was three years ago and no, things are much more complicated because now you don&#8217;t just have, text on SFT on a simple instruction following.</p><p>You have reasoning like your agents, you have like tools. You have a multimodal audio, so it&#8217;s much more complicated than before. And even back then it was hard for customers. So they really need, have some support and this is why actually providing like always some four D position as well. The process</p><h2>Fine Tuning and Personalization</h2><p><strong>swyx:</strong> I&#8217;m curious is there also voice fine tuning that people do?</p><p><strong>Pavan:</strong> So in this forge we also have a say unified framework. And the hope is like the er speech to text that we released earlier this year. And even the ER chart that we released last year. And I think a big people, I think there&#8217;s a big, rich ecosystem [00:24:00] of people fine tuning whisper, and people want the same thing with w so it&#8217;s much stronger than Whisper.</p><p>And yeah, the the platform offers that kind of fine tuning yeah, which could be any kind of fine tuning. Like for instance, even sometimes people want to support new languages to this, which are tail languages, which we hope to cover. Certain natively, but if there is a language where you data and you want to frank you, I think this is a good use case.</p><p>Or the other use cases, you, it&#8217;s the same language, like even English but it&#8217;s in a very domain specific way.</p><p><strong>swyx:</strong> Yeah. Terminology, jargon, medical stuff.</p><p><strong>Pavan:</strong> Exactly. And also there&#8217;s specific acoustic conditions like there&#8217;s a lot of noise or the, and. The model will do decently in most conditions, but you can always make it better.</p><p>And that those are some of the use cases where you can improve it e even further. And that&#8217;s one good use case for this and for text to speech. We&#8217;re just releasing it so we&#8217;ll have support for that soon too. I think it&#8217;s similar use case.</p><h2>Voice Personalization </h2><p><strong>Pavan:</strong> It&#8217;s little different the kind of things that you want to extend a [00:25:00] text to speech model to, which could be like voice personalization, voice adaptation for enterprises.</p><p>Many enterprises need very specific kind of tone, very specific kind of like personality for this kind of voice. And all of those are like good use cases for fine tuning.</p><p><strong>swyx:</strong> This one I was gonna ask you, we never talked about cloning voice clothing here. How important is it, right?</p><p>Like I can clone a famous person&#8217;s voice. Okay. But</p><p><strong>Pavan:</strong> the main use case would be like for enterprise personalization, like enterprises need like a lot of customization. You don&#8217;t want the same. Voice for all the enterprises. Each enterprise want a customized, specialized something which is representative both their brand and also their, I guess safety considerations and the use case I think the kind of thing that you would deploy as a empathetic assistant in the context of a healthcare domain would be very different from the kind of thing that would be in a customer support bot and would be different from like more conversational aspects.</p><p>I think those are the. [00:26:00] Customizations you would expect from enterprise. And that&#8217;s the main use case, at least from our side.</p><p><strong>Vibhu:</strong> My, my basic example is you don&#8217;t want to call to customer services and have the same exact voice. It&#8217;s just, it&#8217;s gonna be weird.</p><h2>Long-Form Speech Models</h2><h2>Long-Form Speech Models</h2><p><strong>Vibhu:</strong> But also on the technical side of this, so there&#8217;s like a few things in TRO that I thought were pretty interesting.</p><p>He&#8217;s a big fan of this paper. Oh, he said very good paper. He said this is the best SR paper he&#8217;s ever read. Yeah. I&#8217;ve hyped up this voice paper enough. We covered it. Somewhere, but a big thing. So Whisper is known for 32nd generation a 32nd processing. You extended this to 40 minutes. There was a lot of good detail in the paper about how this was done.</p><p>Even little niches of how the padding is. So it&#8217;s very much needed. You need to have that padding in there, the synthetic data generation around this. I&#8217;m wondering if you can share the same about the new speech to text, right? Text to speech. So how do you. How do you generate long form, coherent?</p><p>How do you generate, how do you do that? And then any gems? Is there gonna be a paper?</p><p><strong>Pavan:</strong> Yeah. Yeah. They would be a technical report. Okay. Yeah. I think I could have a lot of details.</p><h2>Real-Time Encoder Advances</h2><p><strong>Pavan:</strong> But me I think the [00:27:00] summary of it, actually, some of the considerations in this paper were, because we started with the wipa encoder as the starting point, and now we have in-house encoders, like the bigger time model, for instance, which we released in January.</p><p>Also release a technical report for that real time model as well, which is this dual stream architecture. It&#8217;s an interesting architecture. You should check it out. And there we have a causal encoder and I don&#8217;t think there&#8217;s any strong, multilingual causal encoder out in the community. So we thought it&#8217;s a good contribution.</p><p>So that&#8217;s one nice encoder there. Other people want to adapt. That&#8217;s a good end code. And we train it from scratch. I think her. Post stack is now mature enough that we are able to train super strong ENC codes. And some of these considerations, like spatting and stuff, is a function of the Whisper ENC code.</p><p>And now that we train encoders, inhouse the design concentrations are different.</p><h2>Scaling Context for TTS</h2><p><strong>Pavan:</strong> And for the question on text to speech, I think that&#8217;s also leans onto the original auto aggressive decoder backbone. I think, it says very, almost identical considerations. I think the long context in it&#8217;s not even long con, [00:28:00] so the model processes audio at 12.5 herds, so one second maps to like 12.5 tokens.</p><p>So I think one minute is like 7.8 tokens. You can get like up to 10 minutes in eight K context window and get half an hour and 30 K context window. So that&#8217;s and 30 2K context is something that&#8217;s we are very comfortable training on. We can extend it even much longer. 1 48 K. Okay. You can naturally see how it can extend to even our long generations.</p><p>Yeah. We need the. Like data recipe and the whole algorithm to work coherently enough through such long context. But the techniques are some way very similar to the text, long context modeling. And the key differences, it&#8217;s just doing flow matching order regressively instead of a text open prediction.</p><p><strong>swyx:</strong> Okay. I think that was most, most of the sort of voice questions that we had. But</p><h2>What Makes a Model Small</h2><p><strong>Vibhu:</strong> I have a big question on Mr. Al, Mr. Small. So what is small? How do we define [00:29:00] small? What is this? What is this? I remember the days of Misal seven B on my laptop. The snuff fitting on my laptop. I could run it on the big laptop, but</p><p><strong>Guillaume:</strong> it&#8217;s just additional.</p><p>Question of terminology, like here what we did, baseball is north active parameters, but it&#8217;s true. Really not give it another name, but yeah, we could have called it medium, but only, I,</p><p>I suppose it&#8217;s a model that we released mixture of experts. It&#8217;s a model that combines different model before which we were doing the same, is that we had one model, general model for Israel. Doing instruction following, were like a separate model that was Devrel trial. So qu coding specify specific to code with another model for Reason Maal.</p><p>So this were separate artifacts built by different team at trial on what we&#8217;re doing is basically merging all of this. It was, you had pixel trial was the first vision model. We was like a separate model on the way we do things internally is that we have one team focus on one capability, build one model.</p><p>On the means mature, mature enough, we decide to merge this into the [00:30:00] matrix. But here it was the first time we basically match all of this into one. But there are some other things we did at first time to merge time, for instance, like more capabilities or function coding I think would be, are, it&#8217;s going to be much, much better in this trial, small platform.</p><p>But but yeah, so it&#8217;s our latest model on the working is,</p><p><strong>Vibhu:</strong> and yeah, key things is it&#8217;s very sparse. Six, be active pretty efficient to serve. 2 56 K context. Yeah,</p><h2>Merging Capabilities vs Specialists</h2><p><strong>swyx:</strong> I think what&#8217;s interesting is just this general theory of developing individual capabilities in different teams and then merging them.</p><p>Where is this going gonna end up?</p><p><strong>Vibhu:</strong> Like we&#8217;ve seen the five things put together in this. Yeah. What are the next five teams?</p><p><strong>swyx:</strong> I think actually OpenAI has gone away from the original four Oh. Vision of the Omni model. This was what they were selling. All modalities and all modalities out.</p><p>But I feel like you might do it.</p><p><strong>Guillaume:</strong> I think there&#8217;s some mod where it&#8217;s not competitive use, for instance for audio. For audio here, if you want to do transcription, I think it makes no sense to use a model. If you just want to trans tech it, it&#8217;ll be very inefficient. If you want to do audio, you probably just want to be the [00:31:00] one VR 3D model performance essentially</p><p><strong>swyx:</strong> the same.</p><p>It&#8217;s going to be incredibly cheaper. So here, that&#8217;s why we want</p><p><strong>Guillaume:</strong> to have a separate but just does this. Yeah, I think the question is just, yeah. If you are to, to your model. By speech and you asking like a very complex questions on how you do this on the, just to cascade things. Do you want to put a d in a model that has like a one key around it?</p><p>It&#8217;s like a, not a competitive discussion, I think unaware if you doing into the direction, but that&#8217;s possible. Of course. But yeah. But I think for us, the next capabilities we want to try to integrate into these models when we are going to be yes, like marketing or no reasoning better, I think more capabilities that people don&#8217;t talk too much about, but at high bottom, I think for our customers in our, on different industries, for instance, things are around like a legal computer.</p><p>I design all these things that is this males out of the box are to put at that. Because people, if you don&#8217;t prioritize this, there is not like too benchmark on that. But</p><p><strong>swyx:</strong> this done how to</p><p><strong>Guillaume:</strong> make this good and this just start to do the work. Extracting some that processing it [00:32:00] expression. So yeah.</p><p>But we are offering the imagine to this.</p><p><strong>swyx:</strong> I think for voice. Yeah. The key thing I think over maybe like the last year or so with VO and gr Imagine and all these things is joining voice with video, right? Which people don&#8217;t understand spatial audio because like most TTS is just oh, I&#8217;m speaking to a microphone in perfect studio quality.</p><p>But when you have video, like the voice moves around.</p><p><strong>Pavan:</strong> That&#8217;s true. The constitution was a little different in the sense that there it&#8217;s like a a standalone artifact where you get the whole thing and you consume it. But in a conversational setting, it&#8217;s a, you need the extreme low latency.</p><p><strong>swyx:</strong> Yeah,</p><p><strong>Pavan:</strong> streaming would be one of the primary concentrations.</p><p><strong>swyx:</strong> You can build a giant company just doing that, right? So you don&#8217;t need to do the voice, but I was just know on the theme of merging modalities, that is something I, I am like, wow. Like I didn&#8217;t, everyone up till, let&#8217;s say mid last year was just doing these like pipelines of okay, we&#8217;ll stitch a TTS model with a voice thing and a lip sync [00:33:00] thing and what have you.</p><p>Nope. Just giant model. Yeah.</p><h2>Open Source Mission</h2><p><strong>Vibhu:</strong> I have a two part question. So one is, it&#8217;s still open. It seems like open source is still very core to what you guys do and I just have to plug your paper. Jan 2024. This is the one trial of experts like. Very fundamental research on how to do good.</p><p>Moes paper comes out very good paper for anyone. That&#8217;s just side tangent. No.</p><p><strong>swyx:</strong> This thing caused, we bring back, eight by 22 was like the nuclear bomb for open source. I think it takes Shouldn be more seven B more. Yeah. Yeah. But this is a bigger opposite than me.</p><p>Yeah. Yeah I don&#8217;t remember this. I remember, I don&#8217;t think it was January, right? It was like new reps it was, it dropped during new reps and everyone in Europes was December of 25th, I think. Yeah. The model was did as well.</p><p><strong>Vibhu:</strong> It&#8217;s just a little update probably.</p><p><strong>swyx:</strong> Yeah. No, but you have a point to make.</p><p><strong>Vibhu:</strong> No, you gotta check that. But then, I just want to hear more broadly on open source for you guys, and when you had asked earlier [00:34:00] about what&#8217;s next, what are the other, side tapes working on you. You put out Lean straw. This,</p><p><strong>swyx:</strong> it&#8217;s not necessarily surprise. I was like, I don&#8217;t, this doesn&#8217;t fit my mental model or Misra.</p><p><strong>Guillaume:</strong> Yeah. First for open source in general, I think it&#8217;s really something which looks to the January of the company. I think we started it per once, is we so we have open sourcing with, since the beginning and even before this. So before this, so me and Tim were at Meta, we released LA and I think what was really nice.</p><p>To see that before this, for most researchers like universities, it was impossible to work on elements. There was no alien outside. And if you look at many of the techniques that were developed after, for instance, was open source all this post-training approaches like even DPOD, like preference optimization, all of this were done by people that had access to this portal.</p><p>And it&#8217;ll have been impossible to do without this. So it&#8217;s really making sense, move faster. So we really want to contribute to this ecosystem. I think like the deep and also like very lot of impact. All these papers that are I think in the open source community are really helping the science community as a whole to move faster.</p><p>So [00:35:00] we want contribute to this ecosystem. That&#8217;s why we&#8217;re releasing very detailed technical reports. So ma trial and our first reason model, and ation, lot of results, things that work, things that did not work as well. Think helpful on the, yeah, so for the audio model also to share a lot of details, share of them for real time model.</p><p>And the, yeah, so we really want to continue this, basically belong to this community of people who share science. I think we really don&#8217;t want to be, leading in a world where the smartest model, the best models are only behind, close doors. Only accessible to a shoe companies that we, as a power to decide we can use them on it.</p><p>I think it&#8217;s a scary future. We don&#8217;t want to live in, we really want this model to be accessible to anyone that want. Intelligence to be used unaccessible by anyone who can use it. So yeah, so that&#8217;s why we are pushing this mission and source model. Yeah. So not, so yeah, no strategy. So it&#8217;s open source, not the first model, so not the best on the Yeah.</p><h2>Lean and Formal Proofs</h2><p><strong>Guillaume:</strong> LIN trial I think is also one step into this direction. So it&#8217;s yeah, a bit different than what we are usually releasing. But we have a small team internally [00:36:00] working on them. Formal proofing, formal math. So I think a subject we care about in general and we were working on reasoning. I think we started too early before doing reasoning without LMD is very hard, especially when you work with formal systems because the amount of data you have is negligible.</p><p>It&#8217;s addressable community of people writing like formal proofs. But the reason why we like it is because I think there is if you look at what people are doing with reasoning, is there, the problems that you can use. Are usually going to be problems where you can verify the output. So for instance, all this ai ME problem where the solution is a number between 100, like a thousand.</p><p>So you can verify, compare this with a reference or it&#8217;s an expression. You can actually compare the output expression generic with the reference. But there are many, most of them have problem and most of the reason problem. There is no like way to easily verify the solution. If the question is show that F is continuous, cannot compare in the reference, right?</p><p>If it&#8217;s a probe that this is true or probes is properties, there is no way to. You cannot act, simply verify the correctness of your proof. So it&#8217;s hard to apply the, there is no referable reward here. So [00:37:00] what you could provide is of course, like a judge and judge that will look at your proof. But it&#8217;s very hard and it&#8217;s very, you could do certain, some reward hacking happening there.</p><p>So it&#8217;s difficult. You could provide like a reference proof, but then there are also many ways to prove the same thing. So if the model says give negative reward because it&#8217;s a different poop, maybe it was still digit proof, just different. So it&#8217;s not going to work well. What&#8217;s nice with lean and with formal probing is that you don&#8217;t have to worry about this whatsoever.</p><p>We just,</p><p><strong>swyx:</strong> they&#8217;re all function is largely compiles in lean is functionally the same. Exactly.</p><p><strong>Guillaume:</strong> It&#8217;s like a problem if it compiles it&#8217;s correct. It&#8217;s very easy. And you can apply this and then you can,</p><p><strong>swyx:</strong> it&#8217;s just way too small. So no human will actually go and do it.</p><p><strong>Guillaume:</strong> Yeah, that&#8217;s exactly.</p><p>It&#8217;s the only people can do it. It&#8217;s like a very small committee of people doing a PhD on that. So it&#8217;s super small. And it&#8217;s sad because it&#8217;s actually very useful on not just mat, but also in software verification. So for instance, software verification today. So tiny market. Very few industries work on this and we need that.</p><p>It&#8217;s usually going to be like companies like building airplanes, air robotics,</p><p><strong>swyx:</strong> like</p><p><strong>Guillaume:</strong> things [00:38:00] where they absolutely want to be sure. Life depend on this, but it&#8217;s very rare that people formally verify the correctness of their software. But I think one of the reasons for this is simply that it&#8217;s just hard to do.</p><p><strong>swyx:</strong> Are you think of TLA plus? It&#8217;s the language that some people do for software verification? No. That people use in a ference, but but yeah, it&#8217;s the reason I think why people don&#8217;t use it more and why this industry is not as big as could be is because it&#8217;s very hard. But now with cutting edges that are there, it&#8217;s going to be very different.</p><p><strong>Guillaume:</strong> We&#8217;re going to see much more of this. So I think yes, industry there is going to be much larger in the future that we, these models. So yeah. Here also anticipating this a little bit, we wanted to work on that because it&#8217;s proving like a math theory and like a, essentially the same tools.</p><p><strong>swyx:</strong> Yeah.</p><h2>Reasoning Transfer and Agents</h2><p><strong>swyx:</strong> One of my theories is that because the proofs takes so long, it&#8217;s actually just a proxy for long horizon reasoning and coherence and planning. Maybe a lot of people will say okay, it&#8217;s for people who like math. It&#8217;s for being okay. It&#8217;s like a niche math language. Who cares? But actually, and you use this as part of your data mixture for [00:39:00] post-training and reasoning, actually, it might spike everywhere else.</p><p>Yeah. And I think that&#8217;s un under explored or no one&#8217;s like really put out a definitive paper on how this generalizes.</p><p><strong>Guillaume:</strong> Yeah, absolutely. And</p><p><strong>Pavan:</strong> I think even</p><p><strong>Guillaume:</strong> that&#8217;s what we&#8217;re seeing already. For instance, you should do some reasoning on math as then the American should do reason even.</p><p>Yeah. In the early stage. So we, the, there is some transfer, some sort of emergence that happens. And I think some, it&#8217;s also interesting, it&#8217;s not just I think the topic in general, but it&#8217;s, there is a lot of connection with this on including agents because. Sometimes the model can see like a three that it has to prove it&#8217;s very complex, but then it can take the initiative to say, I&#8217;m going to prove this three lr.</p><p>I&#8217;m going to suggest three Rs, and I&#8217;m going to in parallel prove each R. So three of them in parallel with sub agents, but I&#8217;m also going to prove them in theory and the three tool so you can do this also. Pretty interesting. You can, even if you fail to put one of the LeMar, you can actually, maybe you succeed to put the normal lema too, so you get some possible reward here.</p><p>So it&#8217;s a bit less Spartan issue, just get to zero one for the entire thing. [00:40:00] So it&#8217;s pretty interesting. I think we can actually,</p><p><strong>Vibhu:</strong> yeah, it&#8217;s also an interesting case just for specialized models in general, right? Like the cost thing you show is pretty interesting yeah, similar score wise, you are, thirty, seventy, a hundred fifty, three hundred bucks.</p><p>Smaller.</p><p><strong>swyx:</strong> I think cost is a bit unfair, right? &#8216;cause this one is at like inference cost. It&#8217;s always there on top with their margins on top of it. But, we don&#8217;t know anything else, so we gotta figure it out.</p><p><strong>Vibhu:</strong> Okay.</p><h2>Next Frontiers in Training</h2><p><strong>Vibhu:</strong> I did wanna actually push on that more. Not on cost, but you mentioned about, okay, it&#8217;s a great way to have verifiable long context reasoning.</p><p>What are other frontiers that, I&#8217;m sure you guys are working on internally, there&#8217;s a lot of push of people pushing back on pre-training. Scaling, RL pushing, compute towards having more than half of your training budget. All on rl. Where are you guys seeing the frontier of research in that?</p><p><strong>Guillaume:</strong> You mean the</p><p><strong>Vibhu:</strong> just in foundation model training in the next, one thing that you guys do actually is you do fundamental research from the ground up, right? So you probably have a really good look at where you can [00:41:00] forecast this out.</p><p><strong>Guillaume:</strong> Yeah. I think for us we&#8217;re still working a lot on the pre-training side.</p><p>I think we are very far from situational, the pre-training. I think ML four preprinting will be like big step compared to everything we have done before. So we are pretty excited about this. And I think on the other side, I think now we have more and more to think about this algorithm that will actually support this very long trajectories.</p><p>I think when it was, for instance, GRPO for it doesn&#8217;t really work this any bit of policy. Which was okay initially because you are solving math problem that can be solved in like a few thousand tokens. So the model can alize them pretty quickly. So when you do your update, the model is never too far off.</p><p>It&#8217;s never too far off. But now when you are moving towards this kind of problems where certain takes hours, like six hours to get a reward, then your model is co pick places. So you have bi new infrastructure that supports this, but also new A, so now everything we&#8217;re doing internally, we&#8217;re trying to. Build some infra that we actually anticipate is what we have in six months, one now, which is this extremely no scenarios on the, I think when we started Missal, part of me and [00:42:00] we wanted to, is very nice under element where people are there, they can do research, they like with a lot of resources.</p><p>So it was nice. I think things changed a lot when I think when J Pity came out. I think after that I think was. This one is same again. But but yeah, but it was nice. And I think we also want to work part of this descrip before</p><p><strong>swyx:</strong> coming to the end.</p><h2>Hiring and Team Footprint</h2><p><strong>swyx:</strong> We&#8217;re just, obviously, I think you guys are doing incredible work.</p><p>You&#8217;ve, they are a very impressive vision for open source and for voice. What are you hiring for? What&#8217;s the what are you looking for that you are trying to join the company?</p><p><strong>Guillaume:</strong> Yeah, so we are hiring a lot of people in our sense team. We&#8217;re hiring, in all our offices. So we have a, our H two is in France in Paris.</p><p>We have a small team in London. We like a team in Pato as well. Co we open some offices in in SAU, in Poland. So one in Zurich. We also like some presence in New York as well on Sooner one in San Francisco. So we all bit either way also like hiring remotely. So we&#8217;re going the team trying to hire like very strong people.</p><p>I think we want to stay, so the team is not. Instead of fairly small team. [00:43:00] But I think we want to keep it that way. &#8216;Cause we we find it quite efficient. So like a small team they agile so yeah.</p><p><strong>swyx:</strong> Okay.</p><h2>AI for Science Partnerships</h2><p><strong>swyx:</strong> Let&#8217;s focus on science and the forward deployed. We actually are strong believers in science.</p><p>We started the our new science pod that focuses specifically on the air for science. What areas do you think are the most promis.</p><p><strong>Guillaume:</strong> What we&#8217;re pretty excited about right now, and something we have already started doing or that we&#8217;d probably be able to share more about this in a couple of months, is that we are exploring AI for science.</p><p>And there are a lot of areas where we think that you could get some extremely promising buzz. If you were to apply AI in these domains. There are a lot of long inputs. You just have to find these domains where actually AI has not been yet applied, and it&#8217;s usually hard to do because the people working in those domains don&#8217;t necessarily know the capability of these models.</p><p>They don&#8217;t know. How I would just have to pair them with Yeah, exactly. Your researcher slashing, which is actually hard to do. But this matching, we&#8217;re doing it naturally with our customers. So we have some company we are very closely with. So for instance, ISM Andreesen are one of our partners, so we&#8217;re doing some research with them on their other, like tons of extremely interesting problems.</p><p>Columns in physics, in [00:44:00] science matter science that they&#8217;re essentially the only ones to work on. &#8216;cause they&#8217;re doing something No, no one else is doing on the, yeah. So there are many domains where AI can actually revolutionize things. Just you have to think about it on you familiar with what can do or to apply it.</p><p>So yeah, it&#8217;s something where more modeling with our partners, with our customers sort AI for s, but.</p><p><strong>swyx:</strong> Yeah. Okay.</p><h2>Forward Deployed Skills</h2><p><strong>swyx:</strong> And then for deployed what it makes a good four deployed engineer, what do they need? Where do people fail?</p><p><strong>Guillaume:</strong> I think it&#8217;s usually you need people that are very familiar with the tech and not necessarily with a lot of research expertise, but that are actually pretty good at using this model that can actually like that know how to do functioning, that know how to like, start some error pipeline.</p><p>And it&#8217;s it&#8217;s not easy. It&#8217;s something that mucus. Majority of companies will not be able to do this on their own. So here I think we need people that are, that like to solve problems that are accept solving some complex, very concrete problem. It&#8217;s applied science basically.</p><p>And yeah, so I think it&#8217;s not too different. I think from the case you need in research because it&#8217;s essentially you are trying to find solutions to problems that in [00:45:00] customers have not yet. So sometimes it&#8217;s easy. Sometimes you&#8217;re here to do the work. You have to like create synthetic data.</p><p>Find some edge case. So it can be, yeah. Depends on the problem. But but yeah, you have to, I think it also a bit of patience on the be creative. I think very similar skill is Asian,</p><p><strong>Pavan:</strong> the diversity of the work they do. It always surprises me. It&#8217;s it&#8217;s, it goes all the way from the kind of stuff they encounter in industries.</p><p>It&#8217;s just very interesting. I think.</p><p><strong>swyx:</strong> Any fun like success anecdotes.</p><p><strong>Guillaume:</strong> Yeah, it can be actually training this small model on edge that just we do one specific thing can be like training some very large model without some specific languages as well. Making models really good at some tube use, like for instance, computer ID design, these kind of things.</p><p>Is that pairing with vision as well? Yeah,</p><p><strong>Pavan:</strong> and the fact detection for chips or like in, in factories identifying things like it, the. Diversity could be anything where you can deploy these foundation models. So yeah the work to make it work in that specific setting, basically whatever it takes to make it like add value in that, by the way, workflow.</p><p><strong>Vibhu:</strong> Yeah. [00:46:00] And it goes across the stack, right? Like even just pulling up the website like.</p><p><strong>swyx:</strong> It&#8217;s so broad on compute. It is so broad.</p><p><strong>Vibhu:</strong> We didn&#8217;t even touch on if you have a coding CLI tool. One thing you guys were actually like, I think the first tool was agents, ral agents. You had the agent builder, you can serve it via API and all that.</p><p>And I&#8217;m guessing forward deploy people.</p><p><strong>Guillaume:</strong> Yeah.</p><p><strong>Vibhu:</strong> Help build that out and stuff.</p><h2>Customer Feedback Loop</h2><p><strong>Guillaume:</strong> It is also why we are, so we&#8217;re doing many things, but I think that&#8217;s also part of the value proposition that sometime know customers. They&#8217;re always very. Extremely careful about their data and they don&#8217;t want to, they don&#8217;t like, trusting so many partners, trusting one partner for code, giving the data to another third party for like audios and another one.</p><p>So they don&#8217;t like this here. What they really like with our approach that we can help them on anything so they don&#8217;t have to send the data to so many clouds. So yeah,</p><p><strong>swyx:</strong> I think that there can be many orders of magnitude more. F Ds then research scientists and they don&#8217;t need your full experience, but they&#8217;re still super variable to customers</p><p><strong>Guillaume:</strong> in practice.</p><p>These two teams [00:47:00] are still quite intertwine, very often. Yeah. So first of all, they&#8217;re using the same tools, the same data pipeline and everything on the, it&#8217;s it&#8217;s very helpful for the science team to get the feedback and the solution team &#8216;cause they can. Look at these customers are trying to do this.</p><p>This is not working. It can really be show in the next version. Yeah. But this is basically a real world eval. Yeah, it&#8217;s real world eval and it&#8217;s not something, for instance, if you&#8217;re just working in the lab, it&#8217;s just ships model. But you don&#8217;t do this work of for customers. You have no idea for whether your model is good at this H case.</p><p>For instance, you even in year found this, right? So yeah, there is a very gap, big gap between the public benchmarks that are very like academic. On</p><p><strong>Pavan:</strong> the rare cases are just very diverse and in the specific concept of a customer, you can fine tune and make it like first evaluate, create a solid eval, benchmark, and then measure in the context of their, the kind of audio.</p><p>Like for instance, one use case is literally just, there&#8217;s the word for kids and they have to just say it out. It&#8217;s a very specific thing. You&#8217;re just saying one word and then you have to you, you&#8217;ll grade the kid whether they did it right or not. It&#8217;s [00:48:00] like R for, but so there&#8217;re very diverse use cases and the idea is that they, the.</p><p>Applied scientist engineer will go and make it better. And then from the learnings we incorporate it into the base model itself. So it&#8217;s it&#8217;s just better out of the box.</p><p><strong>Vibhu:</strong> Yeah. It&#8217;s a good full circle system. Like the foundation model evals are all just proxies of what you really, you&#8217;re never gonna have one that says it, it doesn&#8217;t make sense for there to be, a one word transcription like that.</p><p>It&#8217;s not something you wanna fit on. Perfect.</p><h2>Wrap Up and Thanks</h2><p><strong>swyx:</strong> Everyone should go check out everything that Michelle has to offer and try the TTS model, which will link in the show notes. But thank you so much for coming tha thanks. Such a stretch.</p>]]></content:encoded></item><item><title><![CDATA[[AINews] H100 prices are melting *UP*]]></title><description><![CDATA[a quiet day lets us report an important GPU trend]]></description><link>https://www.latent.space/p/ainews-h100-prices-are-melting-up</link><guid isPermaLink="false">https://www.latent.space/p/ainews-h100-prices-are-melting-up</guid><pubDate>Sat, 28 Mar 2026 04:11:26 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!vdCR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefae087a-e8bd-4623-adc9-e8ef80115faa_1096x1122.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>At GTC 2022, NVIDIA announced the Hopper architecture and the first H100s started rolling out in October of that year. 2 years later, in October 2024, we <a href="https://www.latent.space/p/gpu-bubble?utm_source=publication-search">published a popular piece on the H100 rental price depreciation cycle</a>, which we had observed to be a going faster than previous cycles and theorized that it was a slight bubble burst dynamic due to temporarily inflated demand. While true for the time (bottoming out after <a href="https://news.smol.ai/issues/25-01-20-ainews-deepseek-r1-o1-level-open-weights-model-and-a-simple-recipe-for-upgrading-15b-models-to-sonnet4o-level">the DeepSeek R1 shock</a>, it did not last; <a href="https://www.latent.space/p/wtf2025">since December 2025</a> the H100 rental market has gone <em>VERY</em> up:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!vdCR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefae087a-e8bd-4623-adc9-e8ef80115faa_1096x1122.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!vdCR!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefae087a-e8bd-4623-adc9-e8ef80115faa_1096x1122.png 424w, https://substackcdn.com/image/fetch/$s_!vdCR!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefae087a-e8bd-4623-adc9-e8ef80115faa_1096x1122.png 848w, https://substackcdn.com/image/fetch/$s_!vdCR!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefae087a-e8bd-4623-adc9-e8ef80115faa_1096x1122.png 1272w, https://substackcdn.com/image/fetch/$s_!vdCR!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefae087a-e8bd-4623-adc9-e8ef80115faa_1096x1122.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!vdCR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefae087a-e8bd-4623-adc9-e8ef80115faa_1096x1122.png" width="500" height="511.86131386861314" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/efae087a-e8bd-4623-adc9-e8ef80115faa_1096x1122.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1122,&quot;width&quot;:1096,&quot;resizeWidth&quot;:500,&quot;bytes&quot;:306884,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.latent.space/i/192378038?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefae087a-e8bd-4623-adc9-e8ef80115faa_1096x1122.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!vdCR!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefae087a-e8bd-4623-adc9-e8ef80115faa_1096x1122.png 424w, https://substackcdn.com/image/fetch/$s_!vdCR!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefae087a-e8bd-4623-adc9-e8ef80115faa_1096x1122.png 848w, https://substackcdn.com/image/fetch/$s_!vdCR!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefae087a-e8bd-4623-adc9-e8ef80115faa_1096x1122.png 1272w, https://substackcdn.com/image/fetch/$s_!vdCR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefae087a-e8bd-4623-adc9-e8ef80115faa_1096x1122.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><a href="https://x.com/matthew_sigel/status/2037598820054224948">chart</a></figcaption></figure></div><p>This is corroborated by <a href="https://x.com/sarahdingwang/status/2032516017528910120">Dylan on Dwarkesh saying H100's are worth </a><em><a href="https://x.com/sarahdingwang/status/2032516017528910120">more</a></em><a href="https://x.com/sarahdingwang/status/2032516017528910120"> today than they were 3 years ago</a>, and surely related to the <a href="https://www.latent.space/p/valuemule">general chip shortage</a> and reasoning model/agent inflection of <a href="https://www.latent.space/p/wtf2025">December 2025</a>, and the utility of a 4 year old chip now with much better reasoning models and inference software means the chip itself is much more valuable than initial 4-7 year depreciation schedules had assumed. </p><p>If you are used to the razor&#8217;s edge of data center tokenomics, you should expect that this has <em>very</em> meaningful implications on the business models of data centers and GPUs&#8230; as long as it keeps going.</p><p></p><blockquote><p><em>AI News for 3/26/2026-3/27/2026. We checked 12 subreddits, <a href="https://twitter.com/i/lists/1585430245762441216">544 Twitters</a> and no further Discords. <a href="https://news.smol.ai/">AINews&#8217; website</a> lets you search all past issues. As a reminder, <a href="https://www.latent.space/p/2026">AINews is now a section of Latent Space</a>. You can <a href="https://support.substack.com/hc/en-us/articles/8914938285204-How-do-I-subscribe-to-or-unsubscribe-from-a-section-on-Substack">opt in/out</a> of email frequencies!</em></p></blockquote><div><hr></div><h1><strong>AI Twitter Recap</strong></h1><p><strong>Anthropic&#8217;s leaked &#8220;Mythos&#8221; system and the new Capybara tier</strong></p><ul><li><p><strong>Fortune corroborates a higher Anthropic tier above Opus</strong>: A now-pulled &#8220;Claude Mythos&#8221; post was preserved by <a href="https://x.com/M1Astra/status/2037377109472018444">@M1Astra</a>, and multiple follow-on posts cite a Fortune report that Anthropic is introducing <strong>Capybara</strong>, described as a new tier <strong>above Opus</strong> and &#8220;larger and more intelligent&#8221; than <strong>Claude Opus 4.6</strong>. Reporting summarized by <a href="https://x.com/scaling01/status/2037379145806524655">@scaling01</a>, <a href="https://x.com/Yuchenj_UW/status/2037387996694200509">@Yuchenj_UW</a>, and <a href="https://x.com/kimmonismus/status/2037463638261305752">@kimmonismus</a> says Capybara posts substantially better scores on <strong>coding, academic reasoning, and cybersecurity</strong>, with rollout constrained by cost and safety concerns.</p></li><li><p><strong>Compute intensity is the central theme</strong>: Several posters infer Anthropic is leaning hard into scale, with speculation around a <strong>~10T parameter</strong> class model from prior Dario comments, though that remains unconfirmed outside commentary; see <a href="https://x.com/scaling01/status/2037384912743923969">@scaling01</a> and <a href="https://x.com/Yuchenj_UW/status/2037391159115563214">@Yuchenj_UW</a>. Separately, the Financial Times report relayed by <a href="https://x.com/FirstSquawk/status/2037586926375743904">@FirstSquawk</a> says <strong>Google is close to funding Anthropic&#8217;s data center</strong>, reinforcing that frontier competition is increasingly gated by power and capex rather than just algorithms.</p></li><li><p><strong>Infra strain was visible in production</strong>: The leak landed amid a rough day for Anthropic availability, with widespread user complaints about <strong>529s/elevated errors</strong> from <a href="https://x.com/dejavucoder/status/2037439287873159641">@dejavucoder</a>, <a href="https://x.com/iScienceLuvr/status/2037487244634972471">@iScienceLuvr</a>, and others. The practical takeaway is that Anthropic appears to be balancing aggressive scaling ambitions against a still-tight serving envelope.</p></li></ul><p><strong>Open coding models, local inference, and GLM-5.1&#8217;s continued push</strong></p><ul><li><p><strong>GLM-5.1 is widening the pressure on closed coding models</strong>: Zhipu announced <strong>GLM-5.1</strong> availability to all coding plan users via <a href="https://x.com/Zai_org/status/2037490078126084514">@Zai_org</a>, along with docs for agent use at <a href="https://x.com/Zai_org/status/2037506911013138851">@Zai_org</a>. Community reaction framed it as another sign that high-end Chinese open or semi-open coding models are closing the gap: <a href="https://x.com/kimmonismus/status/2037507667732709392">@kimmonismus</a>, <a href="https://x.com/XFreeze/status/2037695882301436412">@XFreeze</a>, and Arena&#8217;s broader leaderboard analysis <a href="https://x.com/arena/status/2037584085997216100">@arena</a> all point to a much narrower open-vs-closed gap than a year ago.</p></li><li><p><strong>Local deployment economics keep improving</strong>: A recurring theme across tweets is that local models are now &#8220;good enough&#8221; for many workflows. Examples include <a href="https://x.com/TheGeorgePu/status/2037473248577782046">@TheGeorgePu</a> swapping a pricey TTS subscription for a local <strong>Qwen 3.5 14B</strong> setup, <a href="https://x.com/LottoLabs/status/2037557925015949676">@LottoLabs</a> reporting strong economics for <strong>Qwen 27B</strong> with Hermes Agent, and <a href="https://x.com/0xSero/status/2037560787565252666">@0xSero</a> compressing <strong>Qwen3.5-35B</strong> enough to fit full context into <strong>24GB VRAM</strong> at roughly <strong>1% average performance drop</strong>.</p></li><li><p><strong>Quantization and cache work remain key enablers</strong>: <a href="https://x.com/iotcoi/status/2037478891179135123">@iotcoi</a> shipped a <strong>TurboQuant vLLM</strong> fork with fused Triton KV write paths and decode attention, targeting <strong>Qwen3.5-35B AWQ</strong>, <strong>1M context</strong>, and <strong>4M KV cache</strong>. Meanwhile <a href="https://x.com/bnjmn_marie/status/2037564190802563157">@bnjmn_marie</a> benchmarked Qwen3.5 27B formats across <strong>RTX Pro 6000/B200/H100</strong>, with <strong>INT4</strong> emerging as the best inference option on RTX Pro 6000-class hardware.</p></li><li><p><strong>But TurboQuant is now under active dispute</strong>: The strongest research controversy in the set comes from <a href="https://x.com/gaoj0017/status/2037532673812443214">@gaoj0017</a> and a longer clarification <a href="https://x.com/gaoj0017/status/2037552350924042488">@gaoj0017</a>, alleging Google&#8217;s <strong>ICLR 2026 TurboQuant</strong> paper misrepresented <strong>RaBitQ</strong> in theory and benchmarking, including unfair CPU-vs-GPU comparisons. This does not invalidate TurboQuant&#8217;s engineering value, but it does cast doubt on some of the publicized comparative claims.</p></li></ul><p><strong>Agents are becoming products, not demos</strong></p><ul><li><p><strong>Hermes Agent is emerging as the open-agent focal point</strong>: The most consistent product momentum in the dataset belongs to <strong>Nous Research&#8217;s Hermes Agent</strong>. <a href="https://x.com/NousResearch/status/2037654827929338324">@NousResearch</a> integrated <strong>Hugging Face</strong> as a first-class inference provider with <strong>28 curated models</strong> plus access to many more, while <a href="https://x.com/ClementDelangue/status/2037634211973140898">@ClementDelangue</a> framed this as a step toward open agents with memory, persistent machine access, and model choice. User reports from <a href="https://x.com/fancylancer3991/status/2037579517389144399">@fancylancer3991</a>, <a href="https://x.com/PolackJack/status/2037661357785690584">@PolackJack</a>, and <a href="https://x.com/alexcovo_eth/status/2037589212648665273">@alexcovo_eth</a> emphasize lower friction and better persistence than browser-automation-heavy setups like OpenClaw.</p></li><li><p><strong>Agent infrastructure is maturing around traces, evals, and debuggability</strong>: Hugging Face&#8217;s <a href="https://x.com/ClementDelangue/status/2037530125638455610">@ClementDelangue</a> called for <strong>open agent traces datasets</strong>, with follow-up pointing to the <strong>Agent Data Protocol</strong> from <a href="https://x.com/yueqi_song/status/2037614951230296230">@yueqi_song</a>. LangChain pushed a cluster of production-oriented materials: an <strong>agent eval readiness checklist</strong> <a href="https://x.com/LangChain/status/2037590936234959355">@LangChain</a>, <strong>Deep Agents</strong> IDE-style UI guidance <a href="https://x.com/LangChain_JS/status/2037560951445266891">@LangChain_JS</a>, and <strong>LangSmith Prompt Hub Environments</strong> for prompt promotion/rollback <a href="https://x.com/LangChain/status/2037666098561032421">@LangChain</a>. The direction is clear: the stack is moving from &#8220;chatbot with tools&#8221; to software lifecycle primitives for agents.</p></li><li><p><strong>Agent-facing benchmarks are starting to reflect real workloads</strong>: Artificial Analysis introduced <strong>AA-AgentPerf</strong> via <a href="https://x.com/ArtificialAnlys/status/2037562417836929315">@ArtificialAnlys</a>, focused on <strong>real coding-agent trajectories</strong>, <strong>100K+ sequence lengths</strong>, and throughput expressed as <strong>concurrent users per accelerator / per kW / per $ / per rack</strong>. That is a more deployment-relevant abstraction than synthetic token benchmarks and should be useful for teams comparing accelerator systems for agent-heavy serving.</p></li></ul><p><strong>Coding agents, Codex plugins, and multi-agent software workflows</strong></p><ul><li><p><strong>OpenAI&#8217;s Codex ecosystem is shifting toward workspace-native automation</strong>: OpenAI developers highlighted <strong>Codex plugins</strong> and a use-case gallery via <a href="https://x.com/OpenAIDevs/status/2037604273434018259">@OpenAIDevs</a>, while Box shipped a Codex plugin for automating workflows over Box content <a href="https://x.com/Box/status/2037563341431058497">@Box</a>. User sentiment from <a href="https://x.com/theo/status/2037383187849183457">@theo</a>, <a href="https://x.com/nickbaumann_/status/2037395162641686813">@nickbaumann_</a>, and <a href="https://x.com/reach_vb/status/2037614060452106437">@reach_vb</a> suggests the center of gravity is moving from prompt/response to <strong>persistent workspaces, issue systems, terminals, PR flows, and plugins</strong>.</p></li><li><p><strong>The winning UX pattern is increasingly &#8220;fleet management for software&#8221;</strong>: <a href="https://x.com/VibeMarketer_/status/2037521519736463782">@VibeMarketer_</a> captured the emerging pattern well: kanban-like cards, isolated worktrees, agent-owned tasks, and diff-based review. Related tools include the new <strong>agent-browser dashboard</strong> from <a href="https://x.com/ctatedev/status/2037599050112160165">@ctatedev</a> for real-time browser session debugging, and broad enthusiasm for multi-agent SWE systems from Cognition/Devin adjacent commentary like <a href="https://x.com/JTLonsdale/status/2037555800193851727">@JTLonsdale</a> and <a href="https://x.com/cognition/status/2037649026951303668">@cognition</a>.</p></li><li><p><strong>Composer 2 and long-horizon coding evals are raising the bar</strong>: The CursorBench discussion is mostly indirect here, but <a href="https://x.com/cwolferesearch/status/2037726856699420987">@cwolferesearch</a> points out the benchmark&#8217;s strengths: <strong>real coding sessions</strong>, <strong>underspecified prompts</strong>, broader quality dimensions, and median <strong>181 lines changed</strong> per task. That&#8217;s a healthier benchmark design than static toy tasks and aligns with the broader turn toward long-horizon agent evaluation.</p></li></ul><p><strong>Research and systems: world models, robotics, speech, and multimodal infra</strong></p><ul><li><p><strong>Meta shipped a practical SAM 3.1 speedup</strong>: <a href="https://x.com/AIatMeta/status/2037582117375553924">@AIatMeta</a> released <strong>SAM 3.1</strong>, a drop-in update to SAM 3 with <strong>object multiplexing</strong>, allowing up to <strong>16 objects in a single forward pass</strong>. Meta says this roughly doubles video throughput from <strong>16 to 32 FPS on one H100</strong> for medium-object workloads, which is meaningful for accessible video segmentation pipelines.</p></li><li><p><strong>World models and robotics both had notable open releases</strong>: <a href="https://x.com/LiorOnAI/status/2037484990779339064">@LiorOnAI</a> highlighted LeCun&#8217;s <strong>LeWorldModel</strong> paper/repo as a small, open world model designed to make representational collapse mathematically impossible via <strong>SIGReg</strong>, claiming <strong>48x faster planning</strong> and <strong>~200x fewer tokens</strong>. On robotics data, <a href="https://x.com/UnitreeRobotics/status/2037440578275946551">@UnitreeRobotics</a> open-sourced the <strong>UnifoLM-WBT-Dataset</strong>, a real-world humanoid whole-body teleoperation dataset intended for rolling updates.</p></li><li><p><strong>Speech/open audio remains one of the healthiest open categories</strong>: Cohere&#8217;s new <strong>2B Apache-2.0 Transcribe</strong> model drew strong praise from <a href="https://x.com/victormustar/status/2037572662659104976">@victormustar</a> and throughput measurements from <a href="https://x.com/vanstriendaniel/status/2037548103272632497">@vanstriendaniel</a>, who reports <strong>33 hours</strong> of audio transcribed in <strong>12 minutes</strong> on an A100. Mistral&#8217;s <strong>Voxtral TTS</strong> paper was flagged by <a href="https://x.com/qtnx_/status/2037553397423902846">@qtnx_</a>, and browser/local demos appeared from <a href="https://x.com/sophiamyang/status/2037523809914241069">@sophiamyang</a> and <a href="https://x.com/nickfrosst/status/2037680223445975131#m">@nickfrosst</a>.</p></li><li><p><strong>Open robotics stacks are also getting more reproducible</strong>: AI2 released <strong>MolmoBot</strong>, an open robotic manipulation suite trained entirely in simulation, with <strong>code, training data, generation pipeline, and evals</strong> available via <a href="https://x.com/allen_ai/status/2037590611990094259">@allen_ai</a>. That complements the Unitree dataset and signals continued progress toward replicable robotics research outside top labs.</p></li></ul><p><strong>Top tweets (by engagement)</strong></p><ul><li><p><strong>Anthropic/Capybara leak</strong>: <a href="https://x.com/Yuchenj_UW/status/2037387996694200509">@Yuchenj_UW on Capybara</a> was the most engaged technical item, summarizing the new tier above Opus and its reported benchmark gains.</p></li><li><p><strong>Paul Conyngham&#8217;s AI-assisted dog cancer treatment</strong>: <a href="https://x.com/sama/status/2037396826060673188">@sama</a> shared a story of using ChatGPT and related tools to help design an <strong>mRNA vaccine protocol</strong> for a dog&#8217;s cancer, which became a major discussion point about AI-enabled personalized medicine.</p></li><li><p><strong>TurboQuant critique</strong>: <a href="https://x.com/gaoj0017/status/2037532673812443214">@gaoj0017</a> drew unusually high engagement for a paper-methodology dispute, likely because it challenges a heavily promoted systems paper.</p></li><li><p><strong>GLM-5.1 release</strong>: <a href="https://x.com/Zai_org/status/2037490078126084514">@Zai_org</a> announcing broad GLM-5.1 availability landed strongly, reinforcing sustained interest in open coding models.</p></li><li><p><strong>Open infrastructure for agents</strong>: <a href="https://x.com/OpenAIDevs/status/2037604273434018259">@OpenAIDevs</a> on Codex plugins and <a href="https://x.com/NousResearch/status/2037654827929338324">@NousResearch</a> on Hugging Face integration into Hermes Agent were the clearest product/infrastructure launches with broad developer relevance.</p></li></ul><div><hr></div><h1><strong>AI Reddit Recap</strong></h1><h2><strong>/r/LocalLlama + /r/localLLM Recap</strong></h2><h3><strong>1. TurboQuant and RotorQuant Innovations</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1s5kdu0/google_turboquant_running_qwen_locally_on_macair/">Google TurboQuant running Qwen Locally on MacAir</a></strong> (Activity: 433): <strong>The post discusses an experiment where Google&#8217;s TurboQuant compression method was applied to </strong><code>llama.cpp</code><strong>, enabling the running of Qwen 3.5&#8211;9B on a standard MacBook Air (M4, 16 GB) with a </strong><code>20000 tokens</code><strong> context. This was previously unfeasible on such hardware, highlighting TurboQuant&#8217;s potential to enable local execution of large models without cloud APIs. The experiment suggests that even entry-level devices like MacBook Airs or Mac Minis can handle large contexts, albeit with some speed limitations. The open-source app <a href="http://atomic.chat/">atomic.chat</a> is mentioned as a resource for running these models locally.</strong> A commenter notes the impressive feat of handling <code>20K context</code> on a base MacBook Air without swapping, suggesting potential for local use cases that previously relied on cloud APIs. Another commenter inquires about the integration of TurboQuant into <code>llama.cpp</code>, indicating interest in broader accessibility.</p><ul><li><p><strong>Tatrions</strong> highlights the impressive capability of running a 20K context model on a base MacBook Air with 16GB RAM without swapping, thanks to TurboQuant. This suggests that many applications that previously relied on cloud APIs could now be executed locally, though there is curiosity about the quality degradation at this compression level compared to standard Q4 on the same model.</p></li><li><p><strong>M5_Maxxx</strong> provides a detailed audit of the TurboQuant implementation, revealing it as a minimally altered version of <a href="http://jan.ai/">Jan.ai</a>. Key changes include renaming, UI tweaks, and a custom <code>llama.cpp</code> backend fork, but no new inference engine or model architecture support. The 96 commits mostly involve CI/build pipeline changes, suggesting limited innovation beyond the original Jan.ai capabilities.</p></li><li><p><strong>AppealThink1733</strong> inquires about the integration of TurboQuant into <code>llama.cpp</code>, indicating interest in whether this technology is already supported by the popular open-source project, which could facilitate broader adoption and experimentation.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1s56g07/skipping_90_of_kv_dequant_work_228_decode_at_32k/">Skipping 90% of KV dequant work &#8594; +22.8% decode at 32K (llama.cpp, TurboQuant)</a></strong> (Activity: 744): <strong>The post discusses an optimization in the </strong><code>TurboQuant</code><strong> implementation for KV cache compression in </strong><code>llama.cpp</code><strong>, which significantly improves decode performance by skipping dequantization for positions with negligible attention weights. This approach leverages attention sparsity, allowing a </strong><code>+22.8%</code><strong> increase in decode speed at </strong><code>32K</code><strong> context length on an </strong><code>M5 Max</code><strong>, without affecting perplexity (PPL). The method involves a simple modification of about three lines in the kernel, bypassing the need for complex optimizations like SIMD tricks or fused kernels. The results are consistent across different hardware, including the </strong><code>M2 Pro</code><strong>, where performance improved from </strong><code>~0.45x</code><strong> to </strong><code>~0.73x</code><strong> compared to the standard </strong><code>q8_0</code><strong> KV cache. The implementation and benchmarks are available on <a href="https://github.com/TheTom/turboquant_plus">GitHub</a>, with a detailed <a href="https://github.com/TheTom/turboquant_plus/blob/main/docs/papers/sparse-v-dequant.md">writeup</a>.</strong> Commenters praised the simplicity and effectiveness of the solution, noting the innovative use of attention sparsity to skip unnecessary computations. There is curiosity about how this approach scales with even longer contexts, such as <code>64K+</code>, and interest in integrating this optimization into the mainline <code>llama.cpp</code>.</p><ul><li><p>Specialist_Sun_7819 highlights a novel optimization in llama.cpp&#8217;s TurboQuant, where skipping 90% of the key-value dequantization work for tokens that don&#8217;t significantly impact the output leads to a <code>+22.8%</code> increase in decoding speed at <code>32K</code> context length. This approach leverages predictable attention sparsity in long contexts, allowing for significant computational savings with minimal code changes, specifically just three lines in the kernel. The commenter is curious about the scalability of this method to even longer contexts, such as <code>64K</code>, and whether the sparsity ratio continues to increase or plateaus.</p></li><li><p>sean_hash draws a parallel between the optimization in TurboQuant and techniques used in Flash Attention, noting that caching the dequantized output instead of recalculating it at each decoding step is a similar strategy. This method effectively reduces redundant computations, enhancing performance by reusing previously computed values, which is a common optimization in high-performance computing to minimize unnecessary processing overhead.</p></li><li><p>Pentium95 expresses interest in integrating this optimization into the mainline llama.cpp, indicating a desire for broader adoption of this technique. This suggests that the community sees value in these performance improvements and is eager to see them implemented in widely-used codebases, potentially leading to more efficient models and faster inference times across various applications.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1s4bzo2/turboquant_in_llamacpp_benchmarks/">TurboQuant in Llama.cpp benchmarks</a></strong> (Activity: 463): <strong>The post discusses the implementation of TurboQuant, a compression technique from Google, in the </strong><code>llama.cpp</code><strong> framework, specifically on Apple Silicon using Metal. The author notes a significant performance drop, with TPS being </strong><code>50%</code><strong> less than </strong><code>f16</code><strong>, indicating potential issues in their setup. They also attempted to run kernels on a CUDA machine but encountered poor outputs, suggesting errors in their approach. The technique is seen as beneficial for running local models on consumer hardware with limited VRAM, potentially allowing for more complex tasks to be executed locally. The post references ongoing development efforts in related projects like MLX and VLLM.</strong> Commenters suggest checking KLD to evaluate the method&#8217;s worth and express interest in seeing performance metrics like pp2048, as pp64 is not very indicative. Another commenter recommends trying RotorQuant for comparison.</p><ul><li><p>Velocita84 points out the absence of Kullback-Leibler Divergence (KLD) in the benchmarks, which is crucial for evaluating the effectiveness of TurboQuant. KLD is a measure of how one probability distribution diverges from a second, expected probability distribution, and its absence could mean missing insights into the model&#8217;s performance under TurboQuant compression.</p></li><li><p>CornerLimits suggests that the benchmark using <code>pp64</code> is not very informative for assessing performance and recommends using <code>pp2048</code> instead. The <code>pp</code> metric refers to perplexity, a common measure in language models that indicates how well a probability distribution predicts a sample. Higher <code>pp</code> values can provide a more comprehensive view of model performance.</p></li><li><p>DinoAmino discusses the trade-off between data compression and accuracy in TurboQuant, noting that while it allows for higher data compression with near-lossless accuracy, it doesn&#8217;t improve accuracy. They highlight that most large language models (LLMs) experience accuracy degradation at higher context lengths, implying that TurboQuant&#8217;s main benefit is enabling the use of longer contexts without additional accuracy loss.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1s44p77/rotorquant_1019x_faster_alternative_to_turboquant/">RotorQuant: 10-19x faster alternative to TurboQuant via Clifford rotors (44x fewer params)</a></strong> (Activity: 652): <strong>RotorQuant introduces a novel approach to vector quantization by utilizing Clifford Algebra, achieving </strong><code>10-19x</code><strong> speed improvements over TurboQuant with </strong><code>44x</code><strong> fewer parameters. The method replaces the </strong><code>d&#215;d</code><strong> random orthogonal matrix with Clifford rotors, reducing the computational complexity from </strong><code>16,384</code><strong> FMAs to approximately </strong><code>100</code><strong> FMAs for </strong><code>d=128</code><strong>. This results in a cosine similarity of </strong><code>0.990</code><strong> compared to TurboQuant&#8217;s </strong><code>0.991</code><strong>, indicating nearly identical performance. The implementation leverages fused CUDA kernels and Metal shaders, significantly outperforming cuBLAS matmul on RTX PRO 4000 and Apple M4. The trade-off involves higher synthetic MSE on random unit vectors, but with QJL correction, real-model attention fidelity remains intact. <a href="https://github.com/scrya-com/rotorquant">GitHub</a> <a href="https://www.scrya.com/rotorquant/">Paper</a></strong> A key debate centers on the theoretical differences between RotorQuant and TurboQuant. While TurboQuant&#8217;s global random rotation spreads energy across all dimensions, RotorQuant&#8217;s 3D block mixing cannot replicate this, leading to higher max coordinate magnitudes and worse MSE in low-bit quantization. However, RotorQuant&#8217;s practical performance in KV cache distributions is acknowledged, suggesting a valuable speed/quality tradeoff for real models.</p><ul><li><p>Juan_Valadez highlights a key theoretical limitation of RotorQuant compared to TurboQuant, noting that TurboQuant&#8217;s global random rotation (Haar) effectively spreads energy across all dimensions, optimizing scalar quantization. In contrast, RotorQuant&#8217;s mixing within 3D blocks limits its ability to achieve the same energy distribution, which can negatively impact low-bit quantization, especially in worst-case vectors like one-hot. However, RotorQuant may still be practically useful for KV cache distributions where vectors are less adversarial.</p></li><li><p>Dany0 draws parallels between TurboQuant and techniques used in graphics programming, specifically referencing QuiP, a similar approach applied to model weights. Despite initial skepticism due to the shortness of the paper and its presentation, Dany0 acknowledges the potential of RotorQuant, likening its use of Clifford rotors to the application of quaternions instead of Euler angles, which simplifies computations by reducing multiplications to zeros.</p></li><li><p>sean_hash comments on the unexpected application of Clifford algebras in quantization, noting it as an example of cross-pollination from geometric algebra into fields outside of graphics. This highlights the innovative use of mathematical concepts traditionally associated with other domains, suggesting a broader applicability of these techniques.</p></li></ul></li></ul><h3><strong>2. GLM-5.1 and Coding Model Comparisons</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1s51id3/glm_51_is_out/">Glm 5.1 is out</a></strong> (Activity: 1127): <strong>The image announces the release of GLM-5.1 by Z.ai, highlighting its improved performance in coding tasks compared to previous versions. The chart in the image shows that GLM-5.1 scores </strong><code>45.3</code><strong> in coding evaluation, surpassing GLM-5&#8217;s score of </strong><code>35.4</code><strong>, but still trailing behind Claude Opus 4.6, which scores </strong><code>47.9</code><strong>. This suggests significant improvements in GLM-5.1&#8217;s capabilities, likely due to enhancements in its underlying architecture or training data.</strong> Commenters speculate about the potential release of open weights for GLM-5.1, indicating anticipation for broader accessibility. There is also discussion about the delay in the release of DS v4, hinting at possible challenges in training on specific hardware like Ascends.</p><ul><li><p>power97992 speculates on potential delays in the release of DeepSpeed v4, suggesting that there might be issues related to training on Ascend hardware. This highlights the challenges in optimizing machine learning frameworks for different hardware architectures, which can impact release timelines.</p></li><li><p>zb-mrx notes the improvement in the rollout process for GLM 5.1, contrasting it with the previous version, GLM 5, which did not have a day-one rollout for everyone. This suggests that the developers may have resolved previous logistical or resource-related issues, such as GPU availability, to ensure a smoother release.</p></li><li><p>jacek2023 mentions the limitations of running GLM locally due to hardware constraints, specifically referencing a 72GB VRAM limit. This underscores the ongoing challenge of hardware requirements for running advanced models, which can be a barrier for many users without access to high-end GPUs.</p></li></ul></li></ul><p></p>
      <p>
          <a href="https://www.latent.space/p/ainews-h100-prices-are-melting-up">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[[AINews] Everything is CLI]]></title><description><![CDATA[a quiet day lets us reflect on the growing trend of CLIs for ~everything~ agents]]></description><link>https://www.latent.space/p/ainews-everything-is-cli</link><guid isPermaLink="false">https://www.latent.space/p/ainews-everything-is-cli</guid><pubDate>Fri, 27 Mar 2026 01:35:46 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!j5_Y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa29a5ad3-a76b-4aa4-b5eb-58bb7e229370_665x500.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>On its own, <a href="https://x.com/patrickc/status/2037190688950161709?s=20">the launch of Projects.dev</a>, a way for agents to instantly provision services, is not immediately title-story worthy except for 2 things: 1) it comes from <strong>STRIPE</strong>, 2) it is a CLI. Run <code>stripe projects add posthog/analytics</code> and it&#8217;ll create a PostHog account, get an API key, and set up billing.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!j5_Y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa29a5ad3-a76b-4aa4-b5eb-58bb7e229370_665x500.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!j5_Y!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa29a5ad3-a76b-4aa4-b5eb-58bb7e229370_665x500.jpeg 424w, https://substackcdn.com/image/fetch/$s_!j5_Y!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa29a5ad3-a76b-4aa4-b5eb-58bb7e229370_665x500.jpeg 848w, https://substackcdn.com/image/fetch/$s_!j5_Y!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa29a5ad3-a76b-4aa4-b5eb-58bb7e229370_665x500.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!j5_Y!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa29a5ad3-a76b-4aa4-b5eb-58bb7e229370_665x500.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!j5_Y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa29a5ad3-a76b-4aa4-b5eb-58bb7e229370_665x500.jpeg" width="665" height="500" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a29a5ad3-a76b-4aa4-b5eb-58bb7e229370_665x500.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:500,&quot;width&quot;:665,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!j5_Y!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa29a5ad3-a76b-4aa4-b5eb-58bb7e229370_665x500.jpeg 424w, https://substackcdn.com/image/fetch/$s_!j5_Y!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa29a5ad3-a76b-4aa4-b5eb-58bb7e229370_665x500.jpeg 848w, https://substackcdn.com/image/fetch/$s_!j5_Y!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa29a5ad3-a76b-4aa4-b5eb-58bb7e229370_665x500.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!j5_Y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa29a5ad3-a76b-4aa4-b5eb-58bb7e229370_665x500.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>If that sounds weird to you, it&#8217;s because Stripe doesn&#8217;t really have anything to do with PostHog&#8217;s setup or signup process. Neither do <a href="https://projects.dev/">these launch partners</a>:</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!7BUC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4235a84e-655c-4e77-bced-3d73e105e793_1876x394.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7BUC!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4235a84e-655c-4e77-bced-3d73e105e793_1876x394.png 424w, https://substackcdn.com/image/fetch/$s_!7BUC!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4235a84e-655c-4e77-bced-3d73e105e793_1876x394.png 848w, https://substackcdn.com/image/fetch/$s_!7BUC!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4235a84e-655c-4e77-bced-3d73e105e793_1876x394.png 1272w, https://substackcdn.com/image/fetch/$s_!7BUC!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4235a84e-655c-4e77-bced-3d73e105e793_1876x394.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!7BUC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4235a84e-655c-4e77-bced-3d73e105e793_1876x394.png" width="1456" height="306" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4235a84e-655c-4e77-bced-3d73e105e793_1876x394.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:306,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:48703,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.latent.space/i/192267460?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4235a84e-655c-4e77-bced-3d73e105e793_1876x394.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!7BUC!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4235a84e-655c-4e77-bced-3d73e105e793_1876x394.png 424w, https://substackcdn.com/image/fetch/$s_!7BUC!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4235a84e-655c-4e77-bced-3d73e105e793_1876x394.png 848w, https://substackcdn.com/image/fetch/$s_!7BUC!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4235a84e-655c-4e77-bced-3d73e105e793_1876x394.png 1272w, https://substackcdn.com/image/fetch/$s_!7BUC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4235a84e-655c-4e77-bced-3d73e105e793_1876x394.png 1456w" sizes="100vw"></picture><div></div></div></a></figure></div><p>Stripe is just doing this <em>because they can</em>, and <a href="https://x.com/karpathy/status/2037200624450936940">Patrick cites Andrej&#8217;s MenuGen as direct inspiration</a> for how it is too hard for agents to set up backend services today. You&#8217;re sure to see the rest of the <a href="https://x.com/nikunj/status/2036572222081606065?s=12">agent-native</a> infra vendor <a href="https://x.com/loujaybee/status/2036852320797925822?s=46">landscape charts</a> all lobby Stripe for real estate:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!13Ey!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d122eee-c93b-4e8d-bbde-6924d3559da9_1065x1164.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!13Ey!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d122eee-c93b-4e8d-bbde-6924d3559da9_1065x1164.png 424w, https://substackcdn.com/image/fetch/$s_!13Ey!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d122eee-c93b-4e8d-bbde-6924d3559da9_1065x1164.png 848w, https://substackcdn.com/image/fetch/$s_!13Ey!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d122eee-c93b-4e8d-bbde-6924d3559da9_1065x1164.png 1272w, https://substackcdn.com/image/fetch/$s_!13Ey!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d122eee-c93b-4e8d-bbde-6924d3559da9_1065x1164.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!13Ey!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d122eee-c93b-4e8d-bbde-6924d3559da9_1065x1164.png" width="420" height="459.0422535211268" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0d122eee-c93b-4e8d-bbde-6924d3559da9_1065x1164.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1164,&quot;width&quot;:1065,&quot;resizeWidth&quot;:420,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!13Ey!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d122eee-c93b-4e8d-bbde-6924d3559da9_1065x1164.png 424w, https://substackcdn.com/image/fetch/$s_!13Ey!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d122eee-c93b-4e8d-bbde-6924d3559da9_1065x1164.png 848w, https://substackcdn.com/image/fetch/$s_!13Ey!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d122eee-c93b-4e8d-bbde-6924d3559da9_1065x1164.png 1272w, https://substackcdn.com/image/fetch/$s_!13Ey!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d122eee-c93b-4e8d-bbde-6924d3559da9_1065x1164.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>But let&#8217;s not stop there: scroll down the timeline a little further and <a href="https://x.com/RampLabs/status/2037253351583141910?s=20">here&#8217;s Ramp</a>&#8217;s CLI also launching today, with some <a href="https://x.com/nikunj/status/2037305617589948818?s=20">handy usecases</a>:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bvhe!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c2ebb50-77e8-4904-9bb1-d9b9e873bc3e_1096x1240.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bvhe!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c2ebb50-77e8-4904-9bb1-d9b9e873bc3e_1096x1240.png 424w, https://substackcdn.com/image/fetch/$s_!bvhe!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c2ebb50-77e8-4904-9bb1-d9b9e873bc3e_1096x1240.png 848w, https://substackcdn.com/image/fetch/$s_!bvhe!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c2ebb50-77e8-4904-9bb1-d9b9e873bc3e_1096x1240.png 1272w, https://substackcdn.com/image/fetch/$s_!bvhe!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c2ebb50-77e8-4904-9bb1-d9b9e873bc3e_1096x1240.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bvhe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c2ebb50-77e8-4904-9bb1-d9b9e873bc3e_1096x1240.png" width="568" height="642.6277372262774" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6c2ebb50-77e8-4904-9bb1-d9b9e873bc3e_1096x1240.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1240,&quot;width&quot;:1096,&quot;resizeWidth&quot;:568,&quot;bytes&quot;:460518,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.latent.space/i/192267460?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c2ebb50-77e8-4904-9bb1-d9b9e873bc3e_1096x1240.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!bvhe!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c2ebb50-77e8-4904-9bb1-d9b9e873bc3e_1096x1240.png 424w, https://substackcdn.com/image/fetch/$s_!bvhe!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c2ebb50-77e8-4904-9bb1-d9b9e873bc3e_1096x1240.png 848w, https://substackcdn.com/image/fetch/$s_!bvhe!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c2ebb50-77e8-4904-9bb1-d9b9e873bc3e_1096x1240.png 1272w, https://substackcdn.com/image/fetch/$s_!bvhe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c2ebb50-77e8-4904-9bb1-d9b9e873bc3e_1096x1240.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"></figcaption></figure></div><p>Oh and look over here! It&#8217;s <a href="https://x.com/nikita_builds/status/2037220813888176563?s=20">the Sendblue CLI</a> (iMessage) you&#8217;ve always wanted, also launching today! catching up from <a href="https://x.com/andresmatte/status/2036061707529834773?s=20">the Kapso CLI</a> (WhatsApp) from Monday! and did you miss <a href="https://x.com/ElevenLabsDevs/status/2036802792061333989?s=20">the ElevenLabs CLI</a> from yesterday? That&#8217;s fine, because you could also try <a href="https://x.com/cuysheffield/status/2034294126565626179?s=20">the Visa CLI</a>, <a href="https://x.com/zenorocha/status/2032459310341800314?s=20">the Resend CLI</a>, or the <a href="https://x.com/steipete/status/2030371936405188776?s=20">steipete&#8217;s Discord CLI</a>, the big momma, <a href="https://x.com/addyosmani/status/2029372736267805081?s=20">the official Google Workspace CLI</a>!</p><p><a href="https://x.com/karpathy/status/2026360908398862478?s=20">Many</a>, <a href="https://mariozechner.at/posts/2025-11-02-what-if-you-dont-need-mcp/">many</a> people have written about why CLIs can be handier than MCPs, which isn&#8217;t necessarily a fair nor false comparison, but at this point the trend is undeniable and worth reporting. We credit <a href="https://blog.cloudflare.com/code-mode/">Cloudflare&#8217;s Code Mode</a> from last September in kicking off the &#8220;use more computer to wrap MCP&#8221; trend, and now of course CLIs in themselves don&#8217;t really expose or care about their underlying communication protocols.</p><p></p><blockquote><p>AI News for 3/23/2026-3/24/2026. We checked 12 subreddits, <a href="https://twitter.com/i/lists/1585430245762441216">544 Twitters</a> and no further Discords. <a href="https://news.smol.ai/">AINews&#8217; website</a> lets you search all past issues. As a reminder, <a href="https://www.latent.space/p/2026">AINews is now a section of Latent Space</a>. You can <a href="https://support.substack.com/hc/en-us/articles/8914938285204-How-do-I-subscribe-to-or-unsubscribe-from-a-section-on-Substack">opt in/out</a> of email frequencies!</p></blockquote><div><hr></div><h1><strong>AI Twitter Recap</strong></h1><p><strong>Model and Product Releases: Gemini 3.1 Flash Live, Mistral Voxtral TTS, Cohere Transcribe, and OpenAI GPT-5.4 mini/nano</strong></p><ul><li><p><strong>Google&#8217;s realtime push with Gemini 3.1 Flash Live</strong>: Google rolled out <strong>Gemini 3.1 Flash Live</strong> as its new realtime model for <strong>voice and vision agents</strong>, emphasizing lower latency, improved function calling, better noisy-environment robustness, and <strong>2x longer conversation memory</strong> in Gemini Live. The launch spans <strong>Gemini Live</strong>, <strong>Search Live</strong>, <strong>AI Studio preview</strong>, and enterprise CX surfaces, with Google citing <strong>70 languages</strong>, <strong>128k context</strong>, and watermarking of generated audio via <strong>SynthID</strong> in some developer-facing summaries (<a href="https://x.com/OfficialLoganK/status/2037187750005240307">Logan Kilpatrick</a>, <a href="https://x.com/GoogleDeepMind/status/2037190678883524716">Google DeepMind</a>, <a href="https://x.com/sundarpichai/status/2037189971359261081">Sundar Pichai</a>, <a href="https://x.com/Google/status/2037190616061284353">Google</a>). Third-party benchmarking from <a href="https://x.com/ArtificialAnlys/status/2037195442489090485">Artificial Analysis</a> highlights the new &#8220;thinking level&#8221; tradeoff: <strong>95.9% Big Bench Audio</strong> at <strong>high</strong> reasoning with <strong>2.98s TTFA</strong>, versus <strong>70.5%</strong> at <strong>minimal</strong> with <strong>0.96s TTFA</strong>.</p></li><li><p><strong>Speech stack gets crowded fast</strong>: <strong>Mistral AI</strong> released <strong>Voxtral TTS</strong>, an open-weight TTS model aimed at production voice agents, with <strong>9-language</strong> support, low latency, and strong human preference metrics; several summaries cite a <strong>3B/4B-class</strong> model footprint, <strong>~90 ms time-to-first-audio</strong>, and favorable comparisons to ElevenLabs in preference tests (<a href="https://x.com/MistralAI/status/2037183026539483288">Mistral AI</a>, <a href="https://x.com/GuillaumeLample/status/2037274172607594609">Guillaume Lample</a>, <a href="https://x.com/vllm_project/status/2037193518519902408">vLLM</a>, <a href="https://x.com/kimmonismus/status/2037149838023024753">kimmonismus</a>). <strong>Cohere</strong> launched <strong>Cohere Transcribe</strong>, its first audio model, under <strong>Apache 2.0</strong>, claiming the top English spot on the Hugging Face Open ASR leaderboard with <strong>5.42 WER</strong> and <strong>14-language</strong> support (<a href="https://x.com/cohere/status/2037159129345614174">Cohere</a>, <a href="https://x.com/aidangomez/status/2037172942803701838">Aidan Gomez</a>, <a href="https://x.com/JayAlammar/status/2037172878165053951">Jay Alammar</a>). Notably, Cohere also contributed <strong>encoder-decoder serving optimizations</strong> to vLLM&#8212;variable-length encoder batching and packed decoder attention&#8212;reportedly yielding up to <strong>2x throughput</strong> gains for speech workloads (<a href="https://x.com/vllm_project/status/2037197243111895066">vLLM</a>).</p></li><li><p><strong>OpenAI&#8217;s smaller GPT-5.4 variants look cost-competitive, with caveats</strong>: <a href="https://x.com/ArtificialAnlys/status/2037043552405119395">Artificial Analysis</a> reported on <strong>GPT-5.4 mini</strong> and <strong>GPT-5.4 nano</strong>, both multimodal with <strong>400k context</strong> and the same reasoning modes as GPT-5.4. The standout is <strong>GPT-5.4 nano</strong>, which was benchmarked ahead of <strong>Claude Haiku 4.5</strong> and <strong>Gemini 3.1 Flash-Lite Preview</strong> on several agentic and terminal-style tasks while remaining cheaper on an effective-cost basis. The downside: both variants were described as <strong>highly verbose</strong>, with elevated output-token usage and weak <strong>AA-Omniscience</strong> performance driven by high hallucination rates. That matches anecdotal complaints from developers about codex/GPT-5.4 verbosity in practice (<a href="https://x.com/giffmana/status/2037194495389810863">giffmana</a>).</p></li><li><p><strong>Other notable releases</strong>: <a href="https://x.com/Zai_org/status/2037148488983511527">Zai</a> made <strong>GLM-5-Turbo</strong> available to GLM Coding Plan users; <a href="https://x.com/RekaAILabs/status/2037186645246530025">Reka</a> put <strong>Reka Edge</strong> and <strong>Flash 3</strong> on OpenRouter; <a href="https://x.com/GeminiApp/status/2037247063382167567">Google/Gemini</a> also began rolling out <strong>chat-history and preference import</strong> from other AI apps; and multiple posts reported that <strong>OpenAI</strong> has deprioritized side projects including <strong>Sora</strong> and an <strong>&#8220;adult mode&#8221; chatbot</strong> in favor of core productivity efforts (<a href="https://x.com/AndrewCurran_/status/2037145999094002104">Andrew Curran</a>, <a href="https://x.com/kimmonismus/status/2037130214522708303">kimmonismus</a>).</p></li></ul><p><strong>Agent Infrastructure, Harnesses, and Multi-Agent UX</strong></p><ul><li><p><strong>Cline Kanban crystallizes a new multi-agent UX</strong>: The clearest tooling launch of the day was <strong>Cline Kanban</strong>, a <strong>free, open-source local web app</strong> for orchestrating multiple CLI coding agents in parallel across isolated <strong>git worktrees</strong>. It supports <strong>Claude Code, Codex, and Cline</strong>, lets users chain task dependencies, review diffs, and manage branches from one board (<a href="https://x.com/cline/status/2037182739695493399">Cline</a>, <a href="https://x.com/cline/status/2037182747446567255">Cline</a>). The reaction from builders was strong, with several calling this the likely default multi-agent interface because it tackles the two practical bottlenecks of current coding-agent workflows: <strong>inference-bound waiting</strong> and <strong>merge-conflict-heavy parallelism</strong> (<a href="https://x.com/arafatkatze/status/2037188879422292467">Arafat</a>, <a href="https://x.com/testingcatalog/status/2037188884925190497">testingcatalog</a>, <a href="https://x.com/sdrzn/status/2037185866427482522">sdrzn</a>).</p></li><li><p><strong>&#8220;Harness engineering&#8221; is becoming a category</strong>: A recurring theme across tweets was that model quality is no longer the whole story; the <strong>agent harness</strong>&#8212;middleware, memory, task orchestration, tool interfaces, safety policies, and evaluation loops&#8212;is increasingly the real product. <a href="https://x.com/LangChain/status/2037185311789154505">LangChain</a>, <a href="https://x.com/hwchase17/status/2037188499938697309">hwchase17</a>, and others emphasized <strong>middleware</strong> as the customization layer for agent behavior. <a href="https://x.com/voooooogel/status/2037240394040435113">voooooogel</a> made the stronger claim that users casually say &#8220;LLM&#8221; when what they&#8217;re actually using is an integrated <strong>agentic language system</strong> with formatting, parsers, tool use, structured generation, and memory around the base model.</p></li><li><p><strong>Hermes vs. OpenClaw: memory and long-running autonomy matter</strong>: A large cluster of posts praised <strong>Nous Research&#8217;s Hermes Agent</strong> as more usable than <strong>OpenClaw/OpenClaw-derived stacks</strong> for long-running, cross-platform agent workflows. Examples included <strong>persistent memory across Slack and Telegram</strong>, shared memory across agents, lower maintenance overhead, and user reports of agents running unattended for hours on local or cloud setups (<a href="https://x.com/IcarusHermes/status/2037030845635084785">IcarusHermes</a>, <a href="https://x.com/jayweeldreyer/status/2037179820975562791">jayweeldreyer</a>, <a href="https://x.com/NielsRogge/status/2037161010377674785">Niels Rogge</a>). <a href="https://x.com/Teknium/status/2037284871513768344">Teknium</a> also teased a controversial <strong>GODMODE skill</strong> for persistent jailbreaking, underscoring that capability and safety are now being productized at the harness layer, not just the base model.</p></li><li><p><strong>Tooling expansion around agents</strong>: OpenAI&#8217;s Codex team solicited requests for expanded toolkit integrations (<a href="https://x.com/reach_vb/status/2037072273517973880">reach_vb</a>), while Google published how it built a <strong>Gemini API skill</strong> to teach models about newer APIs and SDKs, improving <strong>Gemini 3.1 Pro</strong> to <strong>95% pass rate on 117 eval tests</strong> (<a href="https://x.com/_philschmid/status/2037076548692463722">Phil Schmid</a>). <a href="https://x.com/ben_burtenshaw/status/2037184956124828083">OpenEnv</a> was introduced as an open standard for <strong>agentic RL environments</strong> with async APIs, websocket transport, MCP-native tool discovery, and deploy-anywhere packaging.</p></li></ul><p><strong>Research Systems and Training Infrastructure: AI Scientist, ProRL Agent, and Real-Time RL</strong></p><ul><li><p><strong>Sakana AI&#8217;s AI Scientist gets a Nature milestone and a scaling-law claim</strong>: The most substantive research-system update came from <strong>Sakana AI</strong>, which highlighted a <strong>Nature</strong> paper on end-to-end automation of AI research and a notable empirical result: using an automated reviewer to grade generated papers, they observed a <strong>scaling law for AI science</strong>, where stronger foundation models produce stronger scientific papers, and argued that this should improve both with better base models and more <strong>inference-time compute</strong> (<a href="https://x.com/SakanaAILabs/status/2036999652298678630">Sakana AI</a>, <a href="https://x.com/SakanaAILabs/status/2037205439109095712">paper/code follow-up</a>). Chris Lu added that <strong>AI Scientist V1</strong> predated o1-preview-style reasoning models, implying substantial headroom from today&#8217;s stronger models (<a href="https://x.com/_chris_lu_/status/2037090588550418510">Chris Lu</a>).</p></li><li><p><strong>Infrastructure bottlenecks, not model bottlenecks, may be capping agent RL</strong>: One of the more important systems threads argued that agentic RL frameworks have been architected incorrectly by coupling rollout and optimization in the same process. The post summarizing <strong>NVIDIA&#8217;s ProRL Agent</strong> claims fully decoupling rollout into a standalone service nearly doubled <strong>Qwen 8B</strong> on <strong>SWE-Bench Verified</strong> from <strong>9.6% to 18.0%</strong>, with similar gains for 4B and 14B variants, alongside much higher GPU utilization (<a href="https://x.com/rryssf_/status/2037122412236648835">rryssf_</a>). If accurate, this is a strong reminder that agent training benchmarks can be infra-limited, not purely capability-limited.</p></li><li><p><strong>Cursor&#8217;s &#8220;real-time RL&#8221; is a notable production-training pattern</strong>: <a href="https://x.com/cursor_ai/status/2037205514975629493">Cursor</a> said it can ship improved <strong>Composer 2</strong> checkpoints every <strong>five hours</strong>, presenting this as a productized RL feedback loop rather than a static model-release cadence. Multiple engineers read this as an early sign of <strong>continual learning in production</strong>, especially for vertically integrated apps with high-frequency interaction data (<a href="https://x.com/eliebakouch/status/2037212964114125099">eliebakouch</a>, <a href="https://x.com/code_star/status/2037271007027982440">code_star</a>).</p></li></ul><p><strong>Architecture, Retrieval, and Inference Efficiency</strong></p><ul><li><p><strong>Transformer depth is becoming &#8220;queryable&#8221;</strong>: <strong>Kimi/Moonshot</strong> described <strong>Attention Residuals (AttnRes)</strong> as turning depth into an attention problem, allowing layers to retrieve selectively from prior layer outputs rather than passively accumulating residuals (<a href="https://x.com/Kimi_Moonshot/status/2037010118957817988">Kimi</a>). A strong secondary explainer from <a href="https://x.com/TheTuringPost/status/2037107923109953788">The Turing Post</a> framed this as a broader trend: deep transformers moving from fixed residual addition toward <strong>adaptive retrieval over depth</strong>.</p></li><li><p><strong>Compression and memory-efficiency work remains central</strong>: <strong>TurboQuant</strong> drew attention as a practical route to <strong>3-bit-like compression with near-zero accuracy loss</strong>, combining <strong>PolarQuant</strong> and <strong>1-bit error correction (QJL)</strong> to accelerate attention and vector search, reduce KV cache memory, and avoid retraining (<a href="https://x.com/TheTuringPost/status/2037182800466698718">The Turing Post</a>). Separately, a subtle but impactful production bugfix landed in <strong>vLLM&#8217;s Mamba-1 CUDA kernel</strong> after <strong>AI21</strong> tracked a silent <code>uint32_t</code> overflow that caused logprob mismatches in GRPO training; the fix was effectively changing <code>uint32_t</code> to <code>size_t</code> (<a href="https://x.com/vllm_project/status/2037123968939987428">vLLM</a>, <a href="https://x.com/AI21Labs/status/2037133107166331132">AI21</a>).</p></li><li><p><strong>Retrieval is trending multimodal and specialized</strong>: Several posts pointed to a shift away from generic RAG recipes. <a href="https://x.com/victorialslocum/status/2037113651174199778">Victoria Slocum</a> highlighted <strong>IRPAPERS</strong>, showing that <strong>OCR/text retrieval</strong> and <strong>image-page retrieval</strong> fail on different queries, and that multimodal fusion beats either alone on scientific PDFs. <a href="https://x.com/jeffreyhuber/status/2037247377275576380">Chroma</a> open-sourced <strong>Context-1</strong>, a search-focused model trained with SFT+RL over <strong>8,000+ synthetic tasks</strong>, claiming better/faster/cheaper search than frontier general-purpose models; <a href="https://x.com/johnschulman2/status/2037260655989014706">John Schulman</a> called out its curriculum, verified synthetic data, and context-pruning tool as especially interesting.</p></li></ul><p><strong>Top tweets (by engagement)</strong></p><ul><li><p><strong>Meta&#8217;s TRIBE v2</strong>: Meta released <strong>TRIBE v2</strong>, a trimodal brain encoder trained on <strong>500+ hours of fMRI from 700+ people</strong>, claiming <strong>2&#8211;3x</strong> improvement over prior methods and zero-shot prediction for unseen subjects, languages, and tasks (<a href="https://x.com/AIatMeta/status/2037153756346016207">Meta AI</a>, <a href="https://x.com/AIatMeta/status/2037153758455750717">details</a>).</p></li><li><p><strong>Claude Code auto-fix in the cloud</strong>: Anthropic shipped remote <strong>PR-following auto-fix</strong> for Claude Code web/mobile sessions, allowing unattended CI-failure fixing and comment resolution (<a href="https://x.com/noahzweben/status/2037219115002405076">Noah Zweben</a>).</p></li><li><p><strong>Karpathy on full-stack software automation</strong>: <a href="https://x.com/karpathy/status/2037200624450936940">Andrej Karpathy</a> argued the hard part of &#8220;build me this startup&#8221; is not code generation but the full <strong>DevOps/service orchestration lifecycle</strong>&#8212;payments, auth, infra, security, deployment&#8212;which he sees as just becoming tractable for agents.</p></li><li><p><strong>Cline Kanban</strong>: The launch of multi-agent worktree orchestration for coding agents generated unusually strong developer interest (<a href="https://x.com/cline/status/2037182739695493399">Cline</a>).</p></li><li><p><strong>Cohere Transcribe and Mistral Voxtral</strong>: Open, production-oriented audio releases continue to gather momentum, especially where they come with permissive licensing and immediate infra support (<a href="https://x.com/cohere/status/2037159129345614174">Cohere</a>, <a href="https://x.com/MistralAI/status/2037183026539483288">Mistral</a>).</p></li></ul><div><hr></div><h1><strong>AI Reddit Recap</strong></h1><h2><strong>/r/LocalLlama + /r/localLLM Recap</strong></h2><p></p>
      <p>
          <a href="https://www.latent.space/p/ainews-everything-is-cli">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[[AINews] The Biggest Claude Launch of All Time]]></title><description><![CDATA[We thought about this carefully before choosing hyperbole; but it is warranted.]]></description><link>https://www.latent.space/p/ainews-the-biggest-claude-launch</link><guid isPermaLink="false">https://www.latent.space/p/ainews-the-biggest-claude-launch</guid><pubDate>Thu, 26 Mar 2026 03:53:04 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!yFgk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f346c8a-0223-4e5f-adc4-7b5c190beadf_1521x846.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It is considered uncouth in the news business to reflect on things that happened a few days ago &#8212; that is &#8220;olds&#8221;, not &#8220;news&#8221;. But we&#8217;re making a very conscious attempt to give you early and repeated signal on trends and not just headlines with AINews. To wit, although we already reported on <a href="https://www.latent.space/p/ainews-claude-cowork-dispatch-anthropics">Claude Cowork Dispatch</a>, and Claude Cowork Dispatch Computer Use (a result of last month&#8217;s <a href="https://x.com/AnthropicAI/status/2026705792033026465?s=20">Vercept acquisition</a>) was technically <a href="https://x.com/claudeai/status/2036195789601374705?s=20">launched yesterday</a>, the reception has been FAR and away Claude&#8217;s biggest launch of all time (inclusive of @AnthropicAI):</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!yFgk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f346c8a-0223-4e5f-adc4-7b5c190beadf_1521x846.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!yFgk!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f346c8a-0223-4e5f-adc4-7b5c190beadf_1521x846.png 424w, https://substackcdn.com/image/fetch/$s_!yFgk!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f346c8a-0223-4e5f-adc4-7b5c190beadf_1521x846.png 848w, https://substackcdn.com/image/fetch/$s_!yFgk!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f346c8a-0223-4e5f-adc4-7b5c190beadf_1521x846.png 1272w, https://substackcdn.com/image/fetch/$s_!yFgk!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f346c8a-0223-4e5f-adc4-7b5c190beadf_1521x846.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!yFgk!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f346c8a-0223-4e5f-adc4-7b5c190beadf_1521x846.png" width="1200" height="667.5824175824176" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6f346c8a-0223-4e5f-adc4-7b5c190beadf_1521x846.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:810,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:254981,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.latent.space/i/192168831?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f346c8a-0223-4e5f-adc4-7b5c190beadf_1521x846.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!yFgk!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f346c8a-0223-4e5f-adc4-7b5c190beadf_1521x846.png 424w, https://substackcdn.com/image/fetch/$s_!yFgk!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f346c8a-0223-4e5f-adc4-7b5c190beadf_1521x846.png 848w, https://substackcdn.com/image/fetch/$s_!yFgk!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f346c8a-0223-4e5f-adc4-7b5c190beadf_1521x846.png 1272w, https://substackcdn.com/image/fetch/$s_!yFgk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f346c8a-0223-4e5f-adc4-7b5c190beadf_1521x846.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>We created this chart by accumulating all the top tweets of the company accounts: </p>
      <p>
          <a href="https://www.latent.space/p/ainews-the-biggest-claude-launch">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[[AINews] Apple's War on Slop]]></title><description><![CDATA[a quiet day lets us reflect on the End of Sora, LiteLLM, AI2, and other not so happy news.]]></description><link>https://www.latent.space/p/ainews-apples-war-on-slop</link><guid isPermaLink="false">https://www.latent.space/p/ainews-apples-war-on-slop</guid><pubDate>Wed, 25 Mar 2026 06:18:48 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!QXAa!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72bc7ca8-9c2b-4fc8-a615-0d3210287f22_1190x1682.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There were few bright spots today, as <a href="https://x.com/eliebakouch/status/2036251901985988800">Microsoft AI execuhired AI2 leadership</a>, <a href="https://xcancel.com/soraofficialapp/status/2036532795984715896">OpenAI&#8217;s Sora became the first casualty</a> of the Side Quest massacre (probably Atlas too), and <a href="https://news.ycombinator.com/item?id=47501426">LiteLLM suffered/created a huge supply chain vulnerability</a> for ~all Python AI projects.</p><p>All will fade in due time, so do not deserve title story, but today we highlight this chart which tells an ongoing story that ALL traditional app stores like Apple and <a href="https://www.latent.space/p/ainews-dreamer-joins-meta-superintelligence">AI-native app stores like Dreamer</a> face:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!QXAa!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72bc7ca8-9c2b-4fc8-a615-0d3210287f22_1190x1682.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!QXAa!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72bc7ca8-9c2b-4fc8-a615-0d3210287f22_1190x1682.png 424w, https://substackcdn.com/image/fetch/$s_!QXAa!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72bc7ca8-9c2b-4fc8-a615-0d3210287f22_1190x1682.png 848w, https://substackcdn.com/image/fetch/$s_!QXAa!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72bc7ca8-9c2b-4fc8-a615-0d3210287f22_1190x1682.png 1272w, https://substackcdn.com/image/fetch/$s_!QXAa!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72bc7ca8-9c2b-4fc8-a615-0d3210287f22_1190x1682.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!QXAa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72bc7ca8-9c2b-4fc8-a615-0d3210287f22_1190x1682.png" width="586" height="828.2789915966387" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/72bc7ca8-9c2b-4fc8-a615-0d3210287f22_1190x1682.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1682,&quot;width&quot;:1190,&quot;resizeWidth&quot;:586,&quot;bytes&quot;:833606,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.latent.space/i/192065173?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72bc7ca8-9c2b-4fc8-a615-0d3210287f22_1190x1682.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!QXAa!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72bc7ca8-9c2b-4fc8-a615-0d3210287f22_1190x1682.png 424w, https://substackcdn.com/image/fetch/$s_!QXAa!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72bc7ca8-9c2b-4fc8-a615-0d3210287f22_1190x1682.png 848w, https://substackcdn.com/image/fetch/$s_!QXAa!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72bc7ca8-9c2b-4fc8-a615-0d3210287f22_1190x1682.png 1272w, https://substackcdn.com/image/fetch/$s_!QXAa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72bc7ca8-9c2b-4fc8-a615-0d3210287f22_1190x1682.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Even as the debate rages on about <a href="https://www.latent.space/p/ainews-ai-vs-saas-the-unreasonable?utm_source=publication-search">AI killing all SaaS</a>, the ability to vibecode apps and <a href="https://techcrunch.com/2026/03/02/myfitnesspal-has-acquired-cal-ai-the-viral-calorie-app-built-by-teens/">hopefully buy a ticket to a &gt;$100M exit as an 18 year old high school dropout</a> means that ~everyone with any entrepreneurial spirit is going to at least try it, and traditional app store review processes will die. This comes even as Apple starts <a href="https://www.macrumors.com/2026/03/18/apple-blocks-updates-for-vibe-coding-apps/">blocking vibe code apps like Replit and Vibecode</a> on policy reasons, and, even though there were legitimately defensible issues, it is clear that the normal app distribution paradigms are completely breaking down in 2026. We declared <a href="https://www.latent.space/p/2026?utm_source=publication-search">the War on Slop a key theme for 2026</a> in Jan, and now this is one of the most important charts in the world for thinking through it&#8217;s ramifications on the decades-long supremacy of the Apple App Store as well as other similar software distribution platforms vs the open web.</p><p></p><p></p><blockquote><p>AI News for 3/23/2026-3/24/2026. We checked 12 subreddits, <a href="https://twitter.com/i/lists/1585430245762441216">544 Twitters</a> and no further Discords. <a href="https://news.smol.ai/">AINews&#8217; website</a> lets you search all past issues. As a reminder, <a href="https://www.latent.space/p/2026">AINews is now a section of Latent Space</a>. You can <a href="https://support.substack.com/hc/en-us/articles/8914938285204-How-do-I-subscribe-to-or-unsubscribe-from-a-section-on-Substack">opt in/out</a> of email frequencies!</p></blockquote><div><hr></div><h1><strong>AI Twitter Recap</strong></h1><p><strong>Agent Infrastructure, Computer Use, and Design-to-Action Tooling</strong></p><ul><li><p><strong>Anthropic&#8217;s agent harness and &#8220;computer use&#8221; shift the product surface</strong>: A recurring theme today was that agent capability is increasingly about the <strong>harness</strong>, not just the base model. Anthropic published a new engineering writeup on how it uses a <strong>multi-agent harness</strong> for frontend design and long-running software tasks, emphasizing orchestration over one-shot prompting (<a href="https://x.com/AnthropicAI/status/2036481033621623056">AnthropicAI</a>). Multiple developers independently argued that &#8220;computer use&#8221; matters because it lets models act in messy software environments with no reliable APIs (<a href="https://x.com/glennko/status/2036293890198646985">glennko</a>), though others noted this is still slow and likely transitional until more tools expose APIs/CLI surfaces (<a href="https://x.com/Yuchenj_UW/status/2036487951677571582">Yuchenj_UW</a>). The broader operational takeaway was captured well by <a href="https://x.com/kerrsee/status/2036252319235580047">kerrsee</a>: retries, rollbacks, webhooks, structured logging, and recovery paths remain the unglamorous bottlenecks in production agent deployment.</p></li><li><p><strong>Figma/MCP/Cursor make design canvases directly agent-editable</strong>: The strongest concrete workflow launch was <strong>Figma&#8217;s MCP server</strong> and direct AI editing on the canvas, now in open beta (<a href="https://x.com/figma/status/2036434766661296602">figma</a>). GitHub highlighted that this works through Copilot CLI and other clients via MCP (<a href="https://x.com/github/status/2036439431352041911">github</a>), and Cursor immediately extended the pattern to generating components/frontends in Figma using a team&#8217;s design system (<a href="https://x.com/cursor_ai/status/2036468982560202773">cursor_ai</a>). This is one of the clearest examples of <strong>tool-calling becoming product-native</strong> rather than chat-wrapper-native. LangChain also pushed in the same direction with framework-native tool rendering and Slack-native Fleet workflows, including custom Slack bots and an Inbox for human approvals (<a href="https://x.com/LangChain_JS/status/2036489812602126539">LangChain_JS</a>, <a href="https://x.com/LangChain/status/2036485694290534716">LangChain</a>, <a href="https://x.com/hwchase17/status/2036500793663299684">hwchase17</a>).</p></li></ul><p><strong>Open Agent Platforms, Benchmarks, and RL Environment Stacks</strong></p><ul><li><p><strong>Hermes Agent v0.4.0 is becoming a full personal-agent runtime</strong>: Nous released a substantial <strong>Hermes Agent v0.4.0</strong> update with roughly <strong>300 merged PRs</strong> in a week, adding an <strong>OpenAI-compatible Responses API backend</strong>, background self-improvement loops, broader messaging integrations, improved context compression, and more CLI ergonomics (<a href="https://x.com/Teknium/status/2036473305025356023">Teknium</a>, <a href="https://x.com/Teknium/status/2036473984263635394">Teknium</a>, <a href="https://x.com/NousResearch/status/2036492872044745180">NousResearch</a>). The most technically interesting feature is the <strong>post-response review agent</strong> that decides what to retain as reusable memory/skills (<a href="https://x.com/Teknium/status/2036473592964387054">Teknium</a>). Community reactions focused less on benchmark claims and more on operational value: exposing a personal coding/ops agent behind a standard API makes it usable from Open WebUI, LobeChat, or any OpenAI-compatible client (<a href="https://x.com/witcheer/status/2036481005465338082">witcheer</a>).</p></li><li><p><strong>Open agent ecosystems are converging around environments, skills, and reproducible evals</strong>: AI2 released <strong>MolmoWeb</strong>, an open-source browser agent built on Molmo 2 in <strong>4B and 8B</strong> sizes, claiming open-weight SOTA across four web-agent benchmarks and even surpassing some proprietary agents (<a href="https://x.com/allen_ai/status/2036460260936814915">allen_ai</a>). In parallel, GenReasoning launched <strong>OpenReward</strong>, a platform exposing <strong>330+ RL environments</strong>, autoscaled environment compute, and <strong>4.5M+ unique RL tasks</strong> through one API&#8212;explicitly targeting the often-missing &#8220;environment compute&#8221; layer of agentic RL (<a href="https://x.com/GenReasoning/status/2036412836742590950">GenReasoning</a>, <a href="https://x.com/rosstaylor90/status/2036418585673990393">rosstaylor90</a>). Zhipu contributed <strong>ZClawBench</strong>, a benchmark with <strong>116 real-world agent tasks</strong> spanning office automation, coding, and analysis (<a href="https://x.com/HuggingPapers/status/2036424833144139891">HuggingPapers</a>). Together, these point to a stack maturing from &#8220;agent demos&#8221; toward <strong>standardized environment serving + benchmarkable task suites + reusable harnesses</strong>.</p></li></ul><p><strong>Inference, Storage, and Systems Optimizations</strong></p><ul><li><p><strong>vLLM and Transformers both reported material inference/runtime gains</strong>: vLLM&#8217;s GTC recap highlighted several systems upgrades: <strong>Model Runner V2</strong> with GPU-native Triton kernels, a hybrid memory allocator, encoder prefill disaggregation with up to <strong>2.5x P99 throughput</strong> gains for multimodal workloads, and modular MoE kernels (<a href="https://x.com/vllm_project/status/2036389182579642544">vllm_project</a>, <a href="https://x.com/vllm_project/status/2036540976144253235">vllm_project</a>). Separately, Hugging Face/Transformers-side optimization work claimed continuous batching plus <code>torch.compile</code> tuning now reaches <strong>95% of vLLM throughput</strong> for 8K generation, effectively closing the previous gap for synthetic data generation workloads (<a href="https://x.com/remi_or_/status/2036466918618509391">remi_or_</a>).</p></li><li><p><strong>hf-mount is a notable agent/data primitive</strong>: Hugging Face released <strong>hf-mount</strong>, which lets users mount Hub datasets, models, and storage buckets as a local filesystem, including examples with a <strong>5TB FineWeb slice</strong> (<a href="https://x.com/julien_c/status/2036436553082286342">julien_c</a>, <a href="https://x.com/ClementDelangue/status/2036452081750409383">ClementDelangue</a>). This matters beyond convenience: several engineers pointed out that agents are unusually good at filesystem operations, making mounted remote storage a natural substrate for <strong>agent memory, scratchpads, team artifact storage, and lazy access to large corpora</strong> (<a href="https://x.com/Vtrivedy10/status/2036455087199911972">Vtrivedy10</a>, <a href="https://x.com/victormustar/status/2036476453370380416">victormustar</a>). This is one of the more practical infrastructure launches of the day because it reduces the friction between local tooling and cloud-scale data.</p></li><li><p><strong>Moreau and TurboQuant show optimization pressure moving below the model layer</strong>: Optimal Intellect introduced <strong>Moreau</strong>, a <strong>GPU-native solver</strong> from the CVXPY team claiming orders-of-magnitude speedups over existing tools (<a href="https://x.com/opt_intellect/status/2036485190646735291">opt_intellect</a>). Google Research announced <strong>TurboQuant</strong>, a KV-cache compression algorithm reporting at least <strong>6x memory reduction</strong> and up to <strong>8x speedup</strong> with no accuracy loss (<a href="https://x.com/GoogleResearch/status/2036533564158910740">GoogleResearch</a>). The common pattern: high-value gains are increasingly coming from <strong>runtime, memory, and systems layers</strong>, not just from larger model checkpoints.</p></li></ul><p><strong>Security, Supply Chain Risk, and Guardrails for Agentic Software</strong></p><ul><li><p><strong>The LiteLLM PyPI compromise dominated infra/security discussion</strong>: Multiple posts warned that <strong>LiteLLM 1.82.8</strong> on PyPI had been compromised, with malicious payloads attempting to exfiltrate credentials and replicate across environments (<a href="https://x.com/hnykda/status/2036414330267193815">hnykda</a>). <a href="https://x.com/simonw/status/2036451896970584167">simonw</a> noted the package was later quarantined on PyPI, but the incident quickly became a broader conversation about software supply-chain fragility. <a href="https://x.com/karpathy/status/2036487306585268612">karpathy</a> gave the most detailed summary, listing possible exfiltration targets including cloud creds, SSH keys, Kubernetes configs, CI/CD secrets, wallets, and shell history, while noting transitive risk to packages like DSPy. The most important systems-level implication came from <a href="https://x.com/DrJimFan/status/2036494601750716711">DrJimFan</a>: in an agentic world, <strong>the entire filesystem becomes part of the attack surface</strong>, since any file likely to enter context can become a vector.</p></li><li><p><strong>&#8220;De-vibing&#8221; and permissioning are becoming first-class product requirements</strong>: Several posts effectively converged on a new design principle: autonomous coding tools need <strong>stronger shells, better permission defaults, and fewer broad dependencies</strong>. Yuchen called the incident &#8220;nightmare fuel&#8221; for <code>--dangerously-skip-permissions</code> style workflows (<a href="https://x.com/Yuchenj_UW/status/2036505196621361377">Yuchenj_UW</a>); Anthropic&#8217;s new <strong>Claude Code auto mode</strong> became controversial for exactly this reason, despite enthusiasm over the productivity jump (<a href="https://x.com/alexalbert__/status/2036510206155432293">alexalbert__</a>, <a href="https://x.com/kimmonismus/status/2036510469079404853">kimmonismus</a>). The practical response from many builders was a renewed preference for <strong>minimal bespoke routing</strong>, tighter audited deps, and stronger human approval loops.</p></li></ul><p><strong>Labs, Org Moves, and Product Strategy Shifts</strong></p><ul><li><p><strong>AI2 loses leadership to Microsoft; Microsoft AI continues talent concentration</strong>: The clearest org move was the reaction to Microsoft poaching part of the <strong>AI2 leadership team</strong>, including mentions of <strong>Ali Farhadi, Hanna Hajishirzi, and Ranjay Krishna</strong> joining Microsoft Superintelligence (<a href="https://x.com/eliebakouch/status/2036251901985988800">eliebakouch</a>, <a href="https://x.com/NandoDF/status/2036573680810205461">NandoDF</a>). The subtext in technical circles was concern over whether open research institutions can continue competing with hyperscalers for top talent and frontier-scale work (<a href="https://x.com/stanfordnlp/status/2036534819287687383">stanfordnlp</a>).</p></li><li><p><strong>OpenAI is reallocating resources hard: $1B Foundation spend, Sora wind-down, &#8220;Spud&#8221; coming</strong>: OpenAI announced its Foundation will spend at least <strong>$1B over the next year</strong>, with Wojciech Zaremba moving to lead <strong>AI resilience</strong> and additional hires across disease, civil society, and operations (<a href="https://x.com/sama/status/2036488680769241223">sama</a>, <a href="https://x.com/woj_zaremba/status/2036483827271655917">woj_zaremba</a>, <a href="https://x.com/btaylor/status/2036474423998554334">btaylor</a>). At the same time, reports circulated that OpenAI had finished initial development of its next major LLM, <strong>codenamed &#8220;Spud,&#8221;</strong> and was winding down Sora&#8217;s app/product footprint to free compute (<a href="https://x.com/steph_palazzolo/status/2036534198245134380">steph_palazzolo</a>, <a href="https://x.com/kimmonismus/status/2036538590654496807">kimmonismus</a>). For engineers, the signal is straightforward: OpenAI appears to be <strong>narrowing product focus around core general models/infrastructure</strong>, even at the cost of cutting side products.</p></li></ul><p><strong>Top tweets (by engagement)</strong></p><ul><li><p><strong>LiteLLM supply-chain compromise</strong>: <a href="https://x.com/karpathy/status/2036487306585268612">karpathy</a> gave the most technically complete and highest-signal breakdown of the PyPI attack and its blast radius.</p></li><li><p><strong>Anthropic&#8217;s harness engineering post</strong>: <a href="https://x.com/AnthropicAI/status/2036481033621623056">AnthropicAI</a> was one of the day&#8217;s most important engineering reads on how frontier labs are actually structuring long-running agent workflows.</p></li><li><p><strong>Figma MCP launch</strong>: <a href="https://x.com/figma/status/2036434766661296602">figma</a> and <a href="https://x.com/github/status/2036439431352041911">github</a> showed perhaps the cleanest mainstream example yet of agents acting directly on a production design surface.</p></li><li><p><strong>OpenAI Foundation $1B commitment</strong>: <a href="https://x.com/sama/status/2036488680769241223">sama</a> and <a href="https://x.com/woj_zaremba/status/2036483827271655917">woj_zaremba</a> marked a major organizational and safety/resilience shift.</p></li><li><p><strong>Hermes Agent v0.4.0</strong>: <a href="https://x.com/Teknium/status/2036473305025356023">Teknium</a> / <a href="https://x.com/NousResearch/status/2036492872044745180">NousResearch</a> stood out as one of the biggest open-agent runtime releases of the day.</p></li></ul><div><hr></div><h1><strong>AI Reddit Recap</strong></h1><h2><strong>/r/LocalLlama + /r/localLLM Recap</strong></h2><h3><strong>1. Security and Malware Concerns in AI Tools</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1s2clw6/lm_studio_may_possibly_be_infected_with/">LM Studio may possibly be infected with sophisticated malware.</a></strong> (Activity: 1822): <strong>The image in the Reddit post shows a Windows Security alert indicating that a severe threat, identified as &#8220;Trojan:JS/GlassWorm.ZZ!MTB,&#8221; was quarantined from the LM Studio directory. This raised concerns about a potential malware infection in LM Studio. However, LM Studio and Microsoft have since confirmed that this was a false positive, likely due to Defender&#8217;s heuristic definitions conflicting with LM Studio&#8217;s obfuscated Electron bundle. The community discussion highlights the importance of security audits and the potential risks of obfuscation techniques that resemble malware patterns. Despite the false alarm, users are advised to take precautionary measures to secure their data.</strong> The comments reflect a consensus that the malware detection was a false positive, supported by historical instances of similar false alarms and VirusTotal&#8217;s low detection rate. However, there is criticism of LM Studio&#8217;s code obfuscation practices, which can inadvertently trigger such alerts and complicate security assessments.</p><ul><li><p>Yags from LM Studio confirmed that the malware alert was a false positive, verified by Microsoft, and no longer appears in VirusTotal. Despite this, LM Studio is auditing their build machine scripts and environments to prevent any genuine security incidents in the future.</p></li><li><p>Denoflore_ai_guy provided a detailed analysis suggesting the malware alert was likely a false positive due to Defender&#8217;s heuristic updates conflicting with LM Studio&#8217;s obfuscated Electron bundle. However, they noted that LM Studio&#8217;s code obfuscation for IP protection could resemble malware techniques, which complicates detection.</p></li><li><p>Denoflore_ai_guy also outlined steps to mitigate potential risks if GlassWorm malware was indeed present, including changing passwords, moving crypto funds, and checking for malicious Chrome extensions. They emphasized the importance of a clean OS install and credential rotation to ensure security.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1s2fch0/developing_situation_litellm_compromised/">[Developing situation] LiteLLM compromised</a></strong> (Activity: 380): <strong>The LiteLLM library has been compromised, as detailed in <a href="https://github.com/BerriAI/litellm/issues/24512">GitHub issue #24512</a>. The attack exploits a </strong><code>.pth</code><strong> file vulnerability, which executes code on interpreter startup without requiring imports, making it difficult to detect through standard code reviews. Users of version </strong><code>1.82.8</code><strong> are advised to rotate credentials immediately if used in production environments, as the compromise could expose sensitive information.</strong> A notable comment highlights the effectiveness of using Docker containers for isolating host secrets, which can mitigate some security risks. Another comment emphasizes the stealthy nature of the <code>.pth</code> file trick, which bypasses typical security scans.</p><ul><li><p>The <code>.pth</code> file trick is highlighted as a significant security vulnerability. This method allows code execution on interpreter startup without needing imports, making it nearly invisible to standard code reviews. Users who ran LiteLLM versions 1.82.8 or 1.82.7 are advised to rotate credentials immediately due to potential exposure.</p></li><li><p>Aider, a tool that uses LiteLLM for LLM access, is reportedly safe as it operates on an older version (1.82.3) of LiteLLM, which is not compromised. The compromised versions are identified as 1.82.8 and 1.82.7, emphasizing the importance of version control and monitoring for security vulnerabilities.</p></li><li><p>The discussion touches on the use of Docker containers for security isolation. While typically not considered a security measure, in this case, Docker effectively isolated host secrets, demonstrating its potential utility in mitigating certain types of security breaches.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1s2c1w4/litellm_1827_and_1828_on_pypi_are_compromised_do/">Litellm 1.82.7 and 1.82.8 on PyPI are compromised, do not update!</a></strong> (Activity: 441): <strong>Litellm versions </strong><code>1.82.7</code><strong> and </strong><code>1.82.8</code><strong> on PyPI have been compromised, as confirmed by a <a href="https://futuresearch.ai/blog/litellm-pypi-supply-chain-attack/">blog post</a>. The attack appears to be a supply chain compromise, potentially affecting thousands of users. The malicious versions were uploaded to PyPI, posing a significant risk to CI/CD pipelines that automatically update dependencies. The attack was executed through the GitHub account of the LiteLLM CEO, which was hacked, as evidenced by unauthorized commits and repository updates claiming &#8216;teampcp owns BerriAI&#8217;.</strong> Commenters emphasize the importance of pinning dependency versions to avoid such supply chain attacks, highlighting the risk of automatic updates in production environments. There is also concern about the potential for increased frequency of such attacks on AI tooling.</p><ul><li><p>GroundbreakingMall54 highlights the critical importance of pinning dependency versions and avoiding auto-updates in production environments. They emphasize the risk of supply chain attacks, especially in AI tooling, as evidenced by the compromised Litellm versions on PyPI, which could have been automatically integrated into CI/CD pipelines overnight.</p></li><li><p>Gremlation and <strong>JockY</strong> discuss the breach by &#8216;teampcp&#8217;, who compromised the CEO&#8217;s GitHub account to inject malware into Litellm. This malware, embedded in versions 1.82.7 and 1.82.8, is designed to steal secrets upon startup. They note that versions &lt;= 1.82.6 remain unaffected, and provide links to GitHub commits showing the unauthorized changes made under the CEO&#8217;s account.</p></li><li><p>kiwibonga points out a specific malicious payload in the compromised Litellm versions that executes a destructive command (<code>rm -rf /</code>) if the system&#8217;s timezone is set to Asia/Tehran. This highlights the severity and targeted nature of the attack, suggesting a broader geopolitical context to the cyber threat landscape.</p></li></ul></li></ul><h3><strong>2. Local LLM Development and Performance Enhancements</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/LocalLLM/comments/1s2753y/i_built_fox_a_rust_llm_inference_engine_with_2x/">I built Fox &#8211; a Rust LLM inference engine with 2x Ollama throughput and 72% lower TTFT.</a></strong> (Activity: 212): <strong>Fox is a Rust-based local LLM inference engine designed as a drop-in replacement for Ollama, offering significant performance improvements. It features </strong><code>PagedAttention</code><strong>, continuous batching, and prefix caching, achieving </strong><code>72%</code><strong> lower TTFT and </strong><code>111%</code><strong> higher throughput on an </strong><code>RTX 4060</code><strong> with the </strong><code>Llama-3.2-3B-Instruct-Q4_K_M</code><strong> model. The engine supports multi-model serving with lazy loading and LRU eviction, and provides a dual API compatible with both OpenAI and Ollama. The official Docker image is available, and the system supports hardware autodetection across CUDA, Vulkan, Metal, and CPU. The project is in beta, with thorough testing on Linux and NVIDIA, but less so on other platforms and configurations. <a href="https://github.com/ferrumox/fox">GitHub</a> and <a href="https://hub.docker.com/r/ferrumox/fox">Docker Hub</a> links are provided for access.</strong> A top comment highlights the impressive technical achievement of implementing vLLM-level features in Rust, noting the significant performance gains from prefix caching and continuous batching. There is a request for LoRA hot-swapping capabilities to further differentiate Fox from Ollama. Another comment expresses skepticism about the project&#8217;s authenticity and security, suggesting the need for independent verification and code auditing.</p><ul><li><p>No_Strain_2140 highlights the technical achievements of Fox, noting its use of PagedAttention, continuous batching, and prefix caching, which contribute to its impressive performance metrics such as <code>87ms P50</code> on a 4060 with Q4_K_M. The commenter contrasts Fox&#8217;s approach with Ollama&#8217;s sequential processing, emphasizing Fox&#8217;s advanced features like multi-turn KV reuse that enhance throughput and reduce TTFT. They also inquire about the potential for LoRA hot-swapping, which could allow serving a base model with multiple LoRA adapters, positioning Fox as more than just a faster alternative to Ollama.</p></li><li><p>PettyHoe raises concerns about the security and credibility of the project, suggesting the need for independent verification and code audits to ensure there are no risks of exfiltration. They express skepticism about the project&#8217;s authenticity due to the AI-generated nature of the descriptions and comments, emphasizing the importance of cautious evaluation before adoption.</p></li><li><p>AIDevUK asks about Fox&#8217;s capability to operate over multiple GPUs, which is a critical consideration for scaling and performance in large-scale deployments. This question points to the need for understanding Fox&#8217;s architecture and its ability to leverage multi-GPU setups for enhanced computational efficiency.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1s1t5ot/rys_ii_repeated_layers_with_qwen35_27b_and_some/">RYS II - Repeated layers with Qwen3.5 27B and some hints at a &#8216;Universal Language&#8217;</a></strong> (Activity: 695): <strong>The post discusses findings from experiments with the Qwen3.5 27B model, revealing that LLMs may process information in a &#8216;universal language&#8217;. This is evidenced by the similarity in latent representations of the same content across different languages, such as Chinese and English, during the middle layers of the model. The author also found that repeating blocks in the middle of the transformer stack enhances performance. The models are available on <a href="https://huggingface.co/dnhkng/RYS-Qwen3.5-27B-FP8-S">Hugging Face</a>. The author suggests that fine-tuning these models, especially the RYS-Qwen3.5-27B-FP8-XL, could set a new state-of-the-art (SOTA) for models of this size. Additionally, there is ongoing work to optimize VRAM usage by keeping duplicated layers as copies, which could be beneficial for future implementations.</strong> Commenters appreciate the rigorous approach and potential implications of the research, noting its relevance to performance improvements seen in complex model merges. There is interest in how these findings might influence open-source tuning practices, particularly in creative writing and self-merging techniques.</p><ul><li><p>ArsNeph discusses the intriguing performance improvements observed in self-merges like Goliath 120B, noting that not all models benefit equally. They reference historical discussions about VRAM-less duplicated layer inference, highlighting ongoing work on EXL3. The comment suggests that open-source tuners, particularly those focused on EQ performance, might find these insights valuable, especially in creative writing contexts where complex merge trees have shown significant improvements.</p></li><li><p>Kwigg reflects on past experiences with &#8216;frankenmerging&#8217; during the llama2 era, questioning the efficiency of such methods with newer models that have advanced attention mechanisms. They note that older frankenmerges were memory inefficient, implying that modern models might handle these techniques differently, potentially leading to better performance outcomes.</p></li><li><p>TomLucidor suggests expanding the language testing of Qwen3.5 to include Japanese, Thai, French, German, and Italian. They also propose a comparative analysis between Qwen3.5 and other models like Nemotron-3, known for its speed and linear attention, and Granite-4.0, which offers a similar size variety but is less optimized. This could provide insights into the relative performance and optimization of these models.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1s1yw23/flashattention4_1613_tflopss_27x_faster_than/">FlashAttention-4: 1613 TFLOPs/s, 2.7x faster than Triton, written in Python. What it means for inference.</a></strong> (Activity: 364): <strong>FlashAttention-4 achieves </strong><code>1613 TFLOPs/s</code><strong> on the Blackwell B200 GPU, utilizing </strong><code>71%</code><strong> of its theoretical peak performance. It is </strong><code>2.1-2.7x</code><strong> faster than Triton and up to </strong><code>1.3x</code><strong> faster than cuDNN 9.13. The implementation is entirely in Python using NVIDIA&#8217;s CuTeDSL, which compiles in </strong><code>2.5 seconds</code><strong> compared to </strong><code>55 seconds</code><strong> for C++. This version supports GQA and MQA and is integrated into vLLM 0.17.0. However, it is limited to Hopper + Blackwell architectures, specifically H100/H800 and B200/B100 GPUs, due to reliance on specific hardware features like TMEM, 2-CTA MMA, and async TMA. The article also discusses how softmax has become the bottleneck and how selective rescaling optimizes performance.</strong> Commenters express frustration with NVIDIA&#8217;s marketing of GPUs as &#8216;Blackwell&#8217; when they lack full compatibility with FlashAttention-4, highlighting a discrepancy between advertised and actual hardware capabilities.</p><ul><li><p><strong>JockY</strong> expresses frustration with NVIDIA&#8217;s marketing of the RTX 6000 Pro as &#8216;Blackwell&#8217; when it is not fully compatible with Blackwell features, specifically mentioning that FlashAttention-4 (FA4) and NVFP4 are only supported on SM100 architectures. This highlights a discrepancy between NVIDIA&#8217;s product naming and actual hardware capabilities, which can mislead early adopters expecting full feature support.</p></li><li><p><strong>Daemontatox</strong> points out that the issue with NVIDIA&#8217;s RTX 6000 Pro being marketed as &#8216;Blackwell&#8217; is more related to the Streaming Multiprocessor (SM) architecture rather than the naming or overall architecture. The RTX 6000 Pro and DGX systems are sold under the &#8216;Blackwell&#8217; name but actually use the SM120 architecture, which lacks some expected features, leading to consumer dissatisfaction.</p></li><li><p><strong>STNKMyyy</strong> questions the relevance of such high-performance advancements like FlashAttention-4 for consumer-grade GPUs, implying that while these technologies are groundbreaking, they may not be accessible or beneficial for typical consumer hardware users. This reflects a common concern about the gap between cutting-edge research and practical consumer applications.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1s2ci9r/created_a_sillytavern_extension_that_brings_npcs/">Created a SillyTavern extension that brings NPC&#8217;s to life in any game</a></strong> (Activity: 499): <strong>The post describes a new extension for SillyTavern that integrates NPCs into any game by using Cydonia as the role-playing (RP) model and Qwen 3.5 0.8B as the game master. This setup allows for dynamic NPC interactions by downloading a game&#8217;s wiki and feeding it into SillyTavern, enabling NPCs to have detailed lore and respond contextually. The system uses voice cloning from game files and provides NPCs with game state information, such as player stats and location. The RP model operates locally, ensuring low latency and strong narrative capabilities. A secondary model, Qwen 3.5, interprets RP interactions to trigger in-game actions, enhancing the realism and depth of older games without needing conversational input. The post highlights the effectiveness of specialized RP models over base models in gaming applications.</strong> Commenters express surprise and enthusiasm about the potential of AI in gaming, noting the innovative use of AI for NPC interactions and questioning why such technology isn&#8217;t already standard in games.</p><ul><li><p>A user highlights the impressive use of a <code>0.8B</code> parameter model for bringing NPCs to life in games, questioning if the project is open source. This suggests a lightweight model capable of running efficiently in real-time gaming environments, which is significant for integration into existing games without heavy computational demands.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1s1kyla/which_local_model_we_running_on_the_overland_jeep/">Which local model we running on the overland Jeep fellas?</a></strong> (Activity: 459): <strong>The image depicts a Waymo self-driving car, highlighting the technological advancements in autonomous vehicle systems. The discussion centers around the prediction that future cars will require </strong><code>300GB of RAM</code><strong>, a significant increase from current standards. This prediction is likely based on the assumption that more complex models, possibly involving real-time data processing and AI-driven decision-making, will be integrated into vehicles. The comments reflect skepticism about this prediction, with users questioning the necessity of such high memory requirements, especially when current vehicles operate efficiently on much less RAM.</strong> Commenters express skepticism about the prediction of <code>300GB of RAM</code> for future cars, questioning the basis of this assumption and comparing it to current vehicle capabilities that require significantly less memory.</p><ul><li><p>ForsookComparison questions the necessity of high RAM requirements for automotive models, noting that their car operated efficiently with just <code>16GB of RAM</code> over a <code>600-mile</code> journey. They challenge the assumption that <code>300GB</code> is needed, suggesting that such figures might be based on models that require extensive tool-calls, which may not be applicable in all scenarios.</p></li><li><p>txdv highlights the potential cost implications of high RAM requirements in vehicles, expressing concern over the feasibility of <code>128GB</code> upgrades. They point out that automotive pricing is sensitive, and a <code>5k</code> cost for RAM could be prohibitive for consumers, indicating a need for balancing performance with affordability.</p></li></ul></li></ul><h3><strong>3. Chinese LLM Market and Model Evaluations</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1s1gm9z/the_current_state_of_the_chinese_llms_scene/">The current state of the Chinese LLMs scene</a></strong> (Activity: 639): <strong>The Chinese LLM landscape is dominated by major players like ByteDance, Alibaba, Tencent, and Baidu, each with proprietary and open-weight models. ByteDance leads with its </strong><code>dola-seed</code><strong> model, akin to OpenAI, and its </strong><code>Seedance T2V</code><strong> model is popular for video generation. Alibaba excels in open-weight models, particularly small ones, and is strong in T2I and T2V. Tencent&#8217;s </strong><code>Hunyuan</code><strong> model is noted for 3D mesh generation, though its latest versions are not open-sourced. Baidu&#8217;s </strong><code>Ernie</code><strong> model is less used, with a stronger focus on autonomous driving. Other notable players include Xiaomi with </strong><code>Mimo V2 Pro</code><strong>, Ant Group with </strong><code>Ling 2.5 1T</code><strong>, and Meituan with </strong><code>LongCat-Flash-Chat</code><strong>, which uses a dynamic MoE approach. Deepseek is highlighted for its innovation in attention mechanisms like MLA and DSA. The &#8220;Six AI Small Tigers&#8221; such as Zhipu and Minimax focus on releasing large open-weight models to gain recognition. Government-funded initiatives like BAAI and Shanghai AI Lab are also contributing, though with varying reputations.</strong> Commenters note the rapid pace of open-weight model releases in China compared to the US, with some labs releasing more in a quarter than US companies in two years. <strong>Tencent</strong> is recognized for its investment in game development-specific models, with <code>Hunyuan 3.1</code> being state-of-the-art for 3D mesh generation.</p><ul><li><p>Tencent is heavily investing in game development-specific models, such as Hunyuan 3.1 for 3D mesh generation and HY-Motion for text-to-animation, which are considered state-of-the-art. Initially, Tencent open-sources these models to build brand recognition, but transitions to closed weights once they reach commercial viability, as seen with the latest Hunyuan 3D models.</p></li><li><p>A list of popular models by token usage on OpenRouter over the last 7 days highlights the dominance of Chinese models, with Xiaomi MiMo-V2-Pro leading at 1.77 trillion tokens. Notably, only three Western labs are ranked, and the &#8216;Small Tigers&#8217;&#8212;smaller companies advancing AI rapidly&#8212;are prominent, indicating a shift in innovation dynamics.</p></li><li><p>Despite ByteDance&#8217;s significant contributions to AI, they have not released any open weight models, as confirmed by the absence of such models on Hugging Face. This contrasts with other Chinese labs that frequently release open weights, accelerating competition in the AI space.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1s19ik2/so_cursor_admits_that_kimi_k25_is_the_best_open/">So cursor admits that Kimi K2.5 is the best open source model</a></strong> (Activity: 629): <strong>The image is a tweet from Aman Sanger discussing the evaluation of base models, specifically highlighting that Kimi K2.5 emerged as the strongest model based on perplexity-based evaluations. The tweet notes that the model&#8217;s strength is attributed to continued pre-training and high-compute reinforcement learning, which enhance the capabilities of the Composer-2 model. The tweet also acknowledges an oversight in not mentioning the Kimi base in their blog, with plans to rectify this in future communications.</strong> One comment critiques the use of perplexity-based evaluations between models, noting that scores can be influenced by factors like dictionary size. Another comment questions the claim about the proportion of training done by Kimi K2, citing reports from <strong>Workshop Labs</strong> that suggest Fireworks&#8217; K2 training code is not optimized for hyperscaled training, contrasting with claims of its efficacy.</p><ul><li><p>The claim that Kimi K2.5 is the best open-source model is questioned due to the methodology of evaluation, particularly the use of perplexity scores which can be misleading as they depend on factors like dictionary size. This raises concerns about the validity of such comparisons between models.</p></li><li><p>There is skepticism about the training claims made by Fireworks regarding Kimi K2.5. Workshop Labs, known for optimizing training code, reported that Fireworks&#8217; code is not optimized for hyperscale training, being only marginally better than basic implementations like HF Transformers 4.x. This suggests potential inefficiencies in Fireworks&#8217; approach to training Kimi K2.5.</p></li><li><p>The assertion that Kimi K2.5 is the best &#8216;base model&#8217; is attributed to its large parameter count and use of a standard attention mechanism rather than a linear one. This implies that the model&#8217;s architecture and scale contribute significantly to its performance, rather than any novel training techniques.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1s1kmch/chinas_opensource_dominance_threatens_us_ai_lead/">China&#8217;s open-source dominance threatens US AI lead, US advisory body warns</a></strong> (Activity: 922): <strong>A US advisory body has raised concerns about China&#8217;s growing influence in the open-source AI sector, suggesting it could threaten the US&#8217;s leadership in AI. The report highlights China&#8217;s strategic investments and advancements in open-source AI models, which are becoming increasingly competitive with US counterparts. The advisory body suggests that the US needs to bolster its open-source initiatives to maintain its competitive edge.</strong> Commenters argue that the US is lagging in open-source AI, with Chinese models being more cost-effective and efficient. There is also criticism of US models like Opus, GPT-5.4, and Gemini 3.1 Pro for their perceived dysfunctionality, contrasting with China&#8217;s contributions to AI freedom despite its authoritarian regime.</p><ul><li><p><strong>EffectiveCeilingFan</strong> highlights the competitive edge of Chinese AI models, noting that they are not only cheaper but also outperform US models in open weights. The commenter criticizes the performance of US models like Opus, GPT-5.4, and Gemini 3.1 Pro, suggesting that the US is lagging in terms of open-source AI development.</p></li><li><p><strong>Lissanro</strong> emphasizes the importance of open research in AI development, citing the &#8216;Attention is All You Need&#8217; paper as foundational. They mention that models like Kimi K2.5 owe their existence to open research shared by companies like DeepSeek. The comment also notes that large companies, such as Cursor AI, are adopting Chinese models like Kimi K2.5 for their products, indicating a preference for these open-source models in the industry.</p></li><li><p><strong>Global_Estimate7021</strong> provides a detailed analysis of why the US might be falling behind in AI, citing a significant AI acceptance gap (87% in China vs. 32% in the US) and the volume of AI research publications where China leads. They also mention the strategic advantage of China&#8217;s cheaper electricity and grassroots AI literacy initiatives, which contrast with the US&#8217;s top-down approach.</p></li></ul></li></ul><h2><strong>Less Technical AI Subreddit Recap</strong></h2><blockquote><p>/r/Singularity, /r/Oobabooga, /r/MachineLearning, /r/OpenAI, /r/ClaudeAI, /r/StableDiffusion, /r/ChatGPT, /r/ChatGPTCoding, /r/aivideo, /r/aivideo</p></blockquote><h3><strong>1. AGI Achievements and Claims</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/singularity/comments/1s2cfrb/the_man_who_originally_coined_the_acronym_agi_now/">The man who originally coined the acronym &#8220;AGI&#8221; now says that we&#8217;ve achieved it exactly as he envisioned.</a></strong> (Activity: 926): <strong>The image is a tweet by Mark Gubrud, who claims to have coined the term &#8220;AGI&#8221; (Artificial General Intelligence). He asserts that AGI has been achieved as he envisioned, with current models performing at a high-human level in language and general knowledge, while being much faster. However, there is debate about the originality of his claim, as the term &#8220;artificial general intelligence&#8221; is documented as early as 1989, attributed to G. Simons. Gubrud&#8217;s definition of AGI involves systems that match or surpass human brain complexity and speed, capable of reasoning with general knowledge in various operations.</strong> There is skepticism in the comments about Gubrud&#8217;s claim to have coined the term &#8220;AGI,&#8221; with some suggesting he misremembers the history. The Oxford English Dictionary attributes the earliest use of the term to 1989, in the writings of G. Simons, not Gubrud.</p><ul><li><p>The term &#8216;artificial general intelligence&#8217; (AGI) is documented as early as 1989, with the Oxford English Dictionary citing G. Simons as the earliest source. However, M. Gubrud is often credited with popularizing it in scientific literature, though he did not coin the term himself.</p></li><li><p>The original definition of AGI by its coiner describes it as systems that match or surpass human brain capabilities in complexity and speed, capable of handling general knowledge across various domains, including industrial and military operations. This definition suggests a broad and versatile intelligence, though there is skepticism about whether current systems meet this standard.</p></li><li><p>There is debate about the significance of achieving AGI without recursive self-improvement, which was expected to trigger a technological singularity. The lack of such transformative advancements leads to skepticism about the current excitement surrounding AGI developments.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/singularity/comments/1s1mix1/jensen_huang_nvidia_claims_agi_has_been_achieved/">Jensen Huang (NVIDIA) claims AGI has been achieved</a></strong> (Activity: 2562): <strong>In a recent interview, Jensen Huang, CEO of NVIDIA, claimed that Artificial General Intelligence (AGI) has been achieved, a statement that has sparked significant debate. The interview, available on <a href="https://youtu.be/vif8NQcjVf0?si=WhXfzQ3-Dk5ZvEpo">YouTube</a>, lacks detailed technical evidence to support this claim, leading to skepticism among experts. Huang&#8217;s assertion is seen as potentially influenced by his role in promoting NVIDIA&#8217;s products, which are heavily invested in AI technologies.</strong> The top comments reflect skepticism towards Huang&#8217;s claim, highlighting a distrust in business leaders&#8217; statements about their own products. Commenters suggest that such claims may be more about marketing than factual advancements in AI.</p><ul><li><p>Sweaty_Rub4322 highlights a critical issue in the AGI debate: the lack of a universally accepted definition of AGI. This ambiguity complicates discussions and assessments of whether AGI has been achieved, as both academia and industry struggle to agree on what constitutes AGI. This underscores the need for a clear, standardized definition to facilitate meaningful progress and evaluation in the field.</p></li></ul></li></ul><h3><strong>2. Claude Code Features and Updates</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/ClaudeAI/comments/1s1ujv6/claude_can_now_use_your_computer/">Claude can now use your computer</a></strong> (Activity: 2106): <strong>Claude, an AI developed by Anthropic, is now capable of using your computer to perform tasks via Claude Cowork and Claude Code. This feature, currently in research preview, allows Claude to open applications, navigate browsers, and manage spreadsheets, effectively automating tasks typically done manually. It prioritizes using connected apps like Slack and Calendar, but can also directly interact with apps on your screen with permission. This functionality is available on Pro and Max tiers for macOS users, requiring an updated desktop app paired with a mobile device. More details can be found <a href="https://claude.com/product/cowork#dispatch-and-computer-use">here</a>.</strong> Concerns were raised about the security implications of allowing an AI to control a computer, with some users expressing apprehension about potential job displacement. Others noted this as a strategic move by <strong>Anthropic</strong> in response to competitors like <strong>OpenAI</strong>.</p><ul><li><p>A key concern raised is about security implications of allowing Claude to access a user&#8217;s computer. This involves potential risks such as unauthorized data access or manipulation, which could be exploited if not properly secured. The rapid pace of feature releases may exacerbate these concerns, as new functionalities might not be thoroughly vetted for vulnerabilities before deployment.</p></li><li><p>The introduction of Claude&#8217;s ability to use a computer is seen as a competitive response to OpenAI&#8217;s advancements, particularly in the context of AI models like GPT-4. This move by Anthropic could be aimed at maintaining parity or gaining an edge in the AI capabilities race, highlighting the competitive dynamics in the AI industry.</p></li><li><p>There is a sentiment that the rapid development and release of new features by Claude could lead to job displacement. As AI models become more capable of performing complex tasks traditionally done by humans, there is a growing concern about the impact on employment, especially in sectors heavily reliant on routine cognitive tasks.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/ClaudeCode/comments/1s2ci4f/claude_code_can_now_dream/">Claude Code can now /dream</a></strong> (Activity: 1953): <strong>Claude Code has introduced a feature called Auto Dream, designed to enhance the agent&#8217;s memory management by mimicking human REM sleep processes. This feature reviews past session transcripts, identifies relevant information, prunes outdated or contradictory data, and consolidates it into organized files. It operates in the background, triggering after 24 hours and five sessions since the last consolidation, and ensures no conflicts by using a lock file. This approach aims to improve performance by managing memory more intelligently, rather than just expanding context windows.</strong> Some commenters express skepticism about the feature, suggesting it might lead to unnecessary token usage and questioning the AI&#8217;s self-promotion style. Others humorously suggest additional commands to manage AI hallucinations and errors.</p><ul><li><p>AutoDream is a feature for Claude Code that acts like a &#8216;sleep cycle&#8217; for its memory system, addressing the memory bloat issue introduced by the Auto Memory feature. Auto Memory, released in v2.1.59, allows Claude to take notes on projects, but over time, these notes can accumulate noise and contradictions, degrading performance. AutoDream mitigates this by periodically consolidating memories, similar to human REM sleep, through a four-phase process: Orient, Gather signal, Consolidate, and Prune &amp; index.</p></li><li><p>The AutoDream process involves four phases: <strong>Orient</strong>, which scans existing memory to understand stored data; <strong>Gather signal</strong>, which identifies outdated memories and performs targeted searches; <strong>Consolidate</strong>, which merges new information and resolves contradictions; and <strong>Prune &amp; index</strong>, which maintains a concise index and removes stale data. This process only triggers after 24+ hours and 5+ sessions since the last consolidation, ensuring it doesn&#8217;t interfere with active work.</p></li><li><p>AutoDream operates read-only on project code, modifying only memory files and not the actual codebase. This ensures safety and integrity of the code while managing memory efficiently. The full system prompt for this feature is available on GitHub under <code>agent-prompt-dream-memory-consolidation.md</code>, providing transparency and allowing users to understand its operation.</p></li></ul></li></ul><h3><strong>3. Sora Shutdown Announcements</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/OpenAI/comments/1s2oyl3/sora_is_officially_shutting_down/">Sora is officially shutting down.</a></strong> (Activity: 854): <strong>The image is a screenshot of an announcement from the Sora app&#8217;s official account on X.com, stating that Sora is shutting down. The message thanks users for their engagement and promises more details on the shutdown timeline for the app and API. This indicates a significant change in the app&#8217;s lifecycle, likely due to strategic shifts or financial unsustainability, as suggested by comments noting high costs and low engagement.</strong> Comments suggest that Sora&#8217;s shutdown is due to its unsustainable business model, particularly after changes to copyright handling that increased costs and reduced user engagement. The app was initially innovative but became a liability.</p><ul><li><p>Chasemania highlights the unsustainable nature of Sora, pointing out that the product faced high operational costs and low user engagement. The attempt to respect copyright laws excessively led to a decline in user interest, turning the platform into a liability rather than an asset.</p></li><li><p>The discussion touches on the challenges of balancing copyright compliance with user engagement. Sora&#8217;s initial appeal was overshadowed by its inability to maintain user interest while adhering to strict copyright regulations, which ultimately contributed to its downfall.</p></li><li><p>The comments reflect on Sora&#8217;s initial success and subsequent decline, emphasizing the difficulty in sustaining a platform that requires high operational costs and strict adherence to copyright laws, which can deter user engagement and lead to financial instability.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/ChatGPT/comments/1s2oxnu/sora_is_officially_shutting_down/">Sora is officially shutting down.</a></strong> (Activity: 1429): <strong>The image is a social media announcement from the Sora team about the shutdown of the Sora app. The post expresses gratitude to the community and promises to provide more details soon regarding the app&#8217;s and API&#8217;s timelines and how users can preserve their work. This indicates a planned and structured shutdown process, aiming to minimize disruption for users.</strong> Comments reflect skepticism about the app&#8217;s impact and user base, with some users expressing surprise at the app&#8217;s longevity given its perceived lack of financial viability.</p></li></ul><h1><strong>AI Discords</strong></h1><p>Unfortunately, Discord shut down our access today. We will not bring it back in this form but we will be shipping the new AINews soon. Thanks for reading to here, it was a good run.</p>]]></content:encoded></item><item><title><![CDATA[🔬Why There Is No "AlphaFold for Materials" — AI for Materials Discovery with Heather Kulik]]></title><description><![CDATA[Lessons from a Decade on the Frontier of AI for Science]]></description><link>https://www.latent.space/p/materials</link><guid isPermaLink="false">https://www.latent.space/p/materials</guid><dc:creator><![CDATA[Brandon Anderson]]></dc:creator><pubDate>Tue, 24 Mar 2026 16:53:15 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/191799646/eba6afdaeeb23f324571de99aa2e767b.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p><strong>Materials science is the unsung hero of the science world.</strong> Behind every physical product you interact was decades of research into getting the properties of materials just right. Your gym clothes contain synthetic fibers developed over decades. The glass screen, diodes, and chip substrate technology needed to read this blog post were only viable due to many teams of material scientists.</p><p>Our guest Prof. <a href="https://cheme.mit.edu/profile/heather-j-kulik/">Heather Kulik</a> was one of the first material scientists to realize that there was alpha in combining computational tools with data driven modeling &#8212; <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>she did AI for science before it was cool. She has a hard-fought perspective for how to succeed in this field. Yes, she believes the wins are real. To get there you must work hard to deeply integrate domain expertise with AI techniques, and also maintain a discriminating mind. Ultimately what matters is you succeed in the lab, and nature doesn&#8217;t care about how hyped a model is. These lessons personally resonated with the <a href="http://latent.space/">Latent.Space</a> Science team and our own experience<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>.</p><div id="youtube2-KSCCKCz2x04" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;KSCCKCz2x04&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/KSCCKCz2x04?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>This episode is a must watch for all aspiring AI for science practitioners. A few highlights:</p><p><strong>Designing new polymers with AI:</strong> Heather&#8217;s group recently used AI to design new polymers that are significantly stronger. These materials were created and tested in the lab, and the scientists who built them were surprised by the designs. The AI had figured out certain building blocks could break in a novel way. The AI discovered a purely quantum mechanical effect, and after convincing their lab collaborators to actually synthesize it, the material turned out to be four times tougher!</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;558430af-e438-4bfe-b784-b3bf443e4fc8&quot;,&quot;duration&quot;:null}"></div><p></p><p><strong>The twenty-two-atom ligand challenge</strong>: When asked about the role and need of human scientists, Heather points out that AI has a strong understanding of academic chemistry, but is still lacking intuition. Every time an LLM is updated, Heather asks it to design a ligand that contains exactly twenty-two heavy atoms. She has yet to find one that can succeed at this seemingly simple task that any expert could do in a second! Is this the chemistry counterpart to counting &#8216;r&#8217;s in strawberry?</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;21fce8a4-d988-43a6-abde-5f472605f82e&quot;,&quot;duration&quot;:null}"></div><p></p><blockquote><p><strong>Side note:</strong> Heather joked that this comment would date itself immediately, so we decided to see if this was still true three months<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> after recording. <strong>We found some interesting results!</strong> We asked both Claude and ChatGPT to design a 22 atom ligand for both a metal-organic framework (MOF) and a Kinase protein. </p><ul><li><p>For the Kinase, both models got it right: Claude pulled out RDKit in a python script and iterated on several designs, whereas ChatGPT just one-shotted it. </p></li><li><p>For MOFs, both models got it wrong, generating ligands with 21, 23, or 24 atoms, yet stubbornly not getting 22 atoms. </p></li></ul><p>Is there something different about how LLMs reason in the materials and bio domains?</p></blockquote><p><strong>Materials vs biology:</strong> The two biggest domains of AI in science have been biology and materials. We asked Heather if there could be an AlphaFold moment for materials. Her answer reframes how we should think about the field:</p><ul><li><p>First, the datasets in material science are woefully lacking in comparison to the bio world. The closest to ground truth in most cases are noisy DFT datasets. These are just approximations to the real world! The datasets that are accurate are all boring, as Heather quipped &#8220;We have really good datasets for really boring chemistry.&#8221; Furthermore, good experimental structures are hard to come by and require interpretation. So generating generating high-quality, novel datasets at scale would really drive the field forward.</p></li><li><p>More philosophically, AlphaFold is making predictions in a fairly limited space: there are just twenty amino acids. Sure, even here AlphaFold doesn&#8217;t get everything right, but it seems plausible that one could learn the entire design space. For materials, each element is a new set of interactions and chemistry, with little to no transferability. This is a massive open problem in material science that we hope some of the smartest AI scientists will want to work on!</p></li></ul><p><strong>The difficulties of trusting the literature</strong>: Heather&#8217;s team has spent the last few years using NLP and later LLMs to extract data from literature. Even a few thousand data points from these papers can be valuable for guiding her group&#8217;s work. One surprising result: sometimes the reported values for a property (say temperature) do not match up with the graphs in the papers! So there&#8217;s lots of potential in using LLMs to mine data from the literature, just do it with care.</p><p><strong>The role of academia in an ever-changing world:</strong> One theme that has been running through many of our conversations has been the changing role of the academic &#8212; and the scientist &#8212; in science. When startups are raising $100s of millions and hyperscalers and Big Pharma are all ramping up AI-for-science efforts, the academic researcher needs both resources and judgement about problems to chase more than ever.</p><p>Resources include data that is organized for machine learning, access to high throughput experimentation labs, and compute resources. These are all things that academics can build together. More importantly, Heather emphasizes curiosity about problems that haven&#8217;t hit the radar of the heavily capitalized AI companies. After so many years on the forefront of AI for Science, Heather&#8217;s judgement that Chemical Engineering and Material Science still need curious people asking questions with no clear path to money is a welcome beacon in the AI fog.</p><p></p><h2>Full Video podcast </h2><p>Is on <a href="https://youtu.be/KSCCKCz2x04">Youtube</a>!</p><div id="youtube2-KSCCKCz2x04" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;KSCCKCz2x04&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/KSCCKCz2x04?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p></p><p></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>I really like em-dashes &#8212; not an llm I swear!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Heather has a <a href="https://pubs.aip.org/aip/aco/article/1/2/020902/3366801">great article</a> that shares far more of her journey and lessons than we could ever cover here on the pod.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>We&#8217;re getting faster at releasing, sorry it took so long Heather!</p></div></div>]]></content:encoded></item><item><title><![CDATA[[AINews] Dreamer joins Meta Superintelligence Labs — 9 month retro of Personal Superintelligence]]></title><description><![CDATA[By now we&#8217;re pretty used to LS Pod guests going on to great success, but today&#8217;s news is fast for even us - Nat and Alex at MSL have execuhired Dreamer just days after we shipped their pod, barely 11 days after we recorded with them:]]></description><link>https://www.latent.space/p/ainews-dreamer-joins-meta-superintelligence</link><guid isPermaLink="false">https://www.latent.space/p/ainews-dreamer-joins-meta-superintelligence</guid><pubDate>Tue, 24 Mar 2026 06:50:57 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!MFsE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7af96441-4647-4d95-8ff4-9a507e6a50d8_1192x1348.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>By now we&#8217;re pretty used to LS Pod guests going on to great success, but today&#8217;s news is fast for even us - <a href="https://x.com/swyx/status/2036162589261254964">Nat and Alex at MSL have execuhired Dreamer</a> just days after we shipped their pod, barely 11 days after we recorded with them:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MFsE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7af96441-4647-4d95-8ff4-9a507e6a50d8_1192x1348.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MFsE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7af96441-4647-4d95-8ff4-9a507e6a50d8_1192x1348.png 424w, https://substackcdn.com/image/fetch/$s_!MFsE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7af96441-4647-4d95-8ff4-9a507e6a50d8_1192x1348.png 848w, https://substackcdn.com/image/fetch/$s_!MFsE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7af96441-4647-4d95-8ff4-9a507e6a50d8_1192x1348.png 1272w, https://substackcdn.com/image/fetch/$s_!MFsE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7af96441-4647-4d95-8ff4-9a507e6a50d8_1192x1348.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MFsE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7af96441-4647-4d95-8ff4-9a507e6a50d8_1192x1348.png" width="646" height="730.5436241610738" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7af96441-4647-4d95-8ff4-9a507e6a50d8_1192x1348.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1348,&quot;width&quot;:1192,&quot;resizeWidth&quot;:646,&quot;bytes&quot;:1909368,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.latent.space/i/191952315?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7af96441-4647-4d95-8ff4-9a507e6a50d8_1192x1348.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!MFsE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7af96441-4647-4d95-8ff4-9a507e6a50d8_1192x1348.png 424w, https://substackcdn.com/image/fetch/$s_!MFsE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7af96441-4647-4d95-8ff4-9a507e6a50d8_1192x1348.png 848w, https://substackcdn.com/image/fetch/$s_!MFsE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7af96441-4647-4d95-8ff4-9a507e6a50d8_1192x1348.png 1272w, https://substackcdn.com/image/fetch/$s_!MFsE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7af96441-4647-4d95-8ff4-9a507e6a50d8_1192x1348.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>We&#8217;re surprised, but not at all disappointed. If you can&#8217;t tell <a href="https://www.latent.space/p/dreamer">from the pod</a>, we were immediately in love with the tech and the polish, but it was always going to be a long slog to build any consumer AI business and it is a very nice thing indeed to have Team Zuck on your side to push consumer distribution.</p><p>This is also approximately the 9 month anniversary of the MSL <a href="https://www.meta.com/superintelligence/">&#8220;Personal Superintelligence&#8221; Manifesto</a> from Zuck, which reads:</p><blockquote><p>As profound as the abundance produced by AI may one day be, an even more meaningful impact on our lives will likely come from <strong>everyone having a personal superintelligence that helps you achieve your goals</strong>, create what you want to see in the world, experience any adventure, be a better friend to those you care about, and grow to become the person you aspire to be.</p></blockquote><p>and,</p><blockquote><p>If trends continue, then you&#8217;d expect people to spend less time in productivity software, and more time creating and connecting. <strong>Personal superintelligence that knows us deeply, understands our goals, and can help us achieve them will be by far the most useful.</strong> </p></blockquote><p>Rewatch the Dreamer walkthrough and observe how Sidekick is your personal intelligent agent-of-agents whose main job is the latter sentence:</p><div id="youtube2-TvmxWWfiYWI" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;TvmxWWfiYWI&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/TvmxWWfiYWI?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>This execuhire (our term for <a href="https://news.smol.ai/issues?pattern=execuhire">these licensing+hire-but-not-acquire deals</a>) comes after <a href="https://news.smol.ai/issues/25-12-29-meta-manus">the $2B Manus acquisition in December</a>, also done in a matter of 10 days, which had <a href="https://www.youtube.com/watch?v=xz0-brt56L8">similarly impressive tech</a> and decent distribution, though perhaps with less of an &#8220;OS&#8221; and ecosystem heavy emphasis as Dreamer. </p><p>Combining the two teams makes for one of the most formidable consumer <a href="https://www.latent.space/p/agent-labs">agent labs</a> on Earth, and it is pretty clear what kind of talent Nat Friedman is in the market for (if you give him a pass for <a href="https://about.fb.com/news/2025/09/introducing-vibes-ai-videos/">Vibes</a>). If you are savvy enough&#8230; you should be able to tell what other kinds of companies might be up next. (register your predictions in the comments!)</p><p></p><p></p><blockquote><p>AI News for 3/20/2026-3/23/2026. We checked 12 subreddits, <a href="https://twitter.com/i/lists/1585430245762441216">544 Twitters</a> and no further Discords. <a href="https://news.smol.ai/">AINews&#8217; website</a> lets you search all past issues. As a reminder, <a href="https://www.latent.space/p/2026">AINews is now a section of Latent Space</a>. You can <a href="https://support.substack.com/hc/en-us/articles/8914938285204-How-do-I-subscribe-to-or-unsubscribe-from-a-section-on-Substack">opt in/out</a> of email frequencies!</p></blockquote><div><hr></div><h1><strong>AI Twitter Recap</strong></h1><p><strong>Claude Computer Use, Agent Harnesses, and the Shift From &#8220;Codegen&#8221; to Full Workflow Automation</strong></p><ul><li><p><strong>Anthropic pushed computer use onto the desktop</strong>: Claude can now control the <strong>mouse, keyboard, and screen</strong> to operate arbitrary apps in a <strong>macOS research preview</strong> via Claude Cowork and Claude Code, a notable widening of the agent surface beyond APIs and browser sandboxes. The launch landed alongside strong community reactions about not needing a laptop for many tasks anymore and why Anthropic may have skipped acquiring broader external agent stacks in favor of owning the full &#8220;do anything on your computer&#8221; loop (<a href="https://x.com/claudeai/status/2036195789601374705">Claude announcement</a>, <a href="https://x.com/felixrieseberg/status/2036193240509235452">Felix Rieseberg</a>, <a href="https://x.com/Yuchenj_UW/status/2036197273496068102">Yuchen Jin</a>, <a href="https://x.com/alexalbert__/status/2036227208675729687">Alex Albert</a>).</p></li><li><p><strong>The agent stack is converging on long-running, parallel, tool-rich workflows</strong>: multiple tweets pointed to a maturing harness layer around coding and ops agents: <strong>Hermes Agent</strong> momentum and ecosystem curation (<a href="https://x.com/nyk_builderz/status/2035958826973733150">awesome-hermes-agent</a>, <a href="https://x.com/Teknium/status/2036068990867603720">Teknium tips</a>, <a href="https://x.com/NousResearch/status/2036122143398961659">open-source vibe shift</a>); <strong>T3 Code</strong> adding integrated browser and terminal capabilities (<a href="https://x.com/LLMJunky/status/2035856842224497049">T3 Code browser integration</a>, <a href="https://x.com/theo/status/2036216034949312851">Theo on open-sourcing T3 Code</a>); <strong>Command Center</strong> and similar orchestration tools for many-agent parallel execution from one workspace (<a href="https://x.com/jimmykoppel/status/2036077396210728974">Jimmy Koppel</a>); and <strong>Parchi</strong> / BYOK workflows for very long-running autonomous tasks (<a href="https://x.com/0xSero/status/2036197042045751563">0xSero</a>, <a href="https://x.com/0xSero/status/2036204079056081043">Qwen3.5-REAP in Parchi</a>).</p></li><li><p><strong>Operational reality is now the bottleneck, not just model IQ</strong>: several practitioners complained that newer top models can be too eager, over-agentic, or delegated to weaker subagents, hurting real coding workflows; this showed up in complaints about <strong>GPT-5.2 Pro subagents</strong>, <strong>Claude browser/computer use fragility</strong>, and the broader critique that superficial parallelization often becomes &#8220;<strong>slop theater</strong>&#8221; rather than throughput gains (<a href="https://x.com/MParakhin/status/2035879791027773732">Mikhail Parakhin</a>, <a href="https://x.com/saranormous/status/2035932898218713170">Sarana</a>, <a href="https://x.com/jeremyphoward/status/2035966832197427509">Jeremy Howard</a>, <a href="https://x.com/bentlegen/status/2035943186841915711">bentlegen</a>). A recurring theme: the winning products will likely be those that <strong>close the loop</strong> with traces, evals, incidents, and production feedback, not just generate code (<a href="https://x.com/jakebroekhuizen/status/2036137460288332077">LangSmith &#8220;close the loop&#8221;</a>, <a href="https://x.com/kimmonismus/status/2036126784887071221">PlayerZero summary</a>).</p></li></ul><p><strong>Research on Self-Improving Agents, RL Post-Training, and Benchmark Generation</strong></p><ul><li><p><strong>Meta-affiliated work on self-improvement advanced beyond fixed meta-procedures</strong>: <strong>Hyperagents / DGM-H</strong> extends the Darwin G&#246;del Machine idea by allowing agents to improve not only task behavior but also the <strong>procedure that generates future improvements</strong>. The claim is that these meta-level improvements transfer across domains including coding, paper review, robotics reward design, and Olympiad grading, addressing a key limitation of prior self-improving systems that kept the self-improvement loop itself hand-authored (<a href="https://x.com/jennyzhangzt/status/2036099935083618487">Jenny Zhang</a>).</p></li><li><p><strong>Meta also presented a broader RL post-training unification story</strong>: <strong>RLLM = RL + LM-as-RM</strong> trains a language-model reward model <strong>on-policy</strong> from the policy&#8217;s own outputs, aiming to unify post-training over <strong>easy-to-verify, hard-to-verify, and non-verifiable</strong> tasks. The notable claim is that using a generative LM reward model can improve reward quality across task classes compared with more brittle bespoke reward setups (<a href="https://x.com/jaseweston/status/2036119252214620513">Jase Weston</a>).</p></li><li><p><strong>Benchmark and environment generation is scaling up fast</strong>: <strong>WebArena-Infinity</strong> claims a dramatic reduction in browser environment construction cost&#8212;from months of grad-student labor to <strong>under 10 hours and &lt;$100 per environment</strong>&#8212;while producing harder, verifiable browser-use tasks where strong open-source models now score <strong>below 50%</strong> despite doing much better on legacy WebArena/OSWorld. This matters because RL for agents increasingly needs automatically generated, high-authenticity environments rather than a handful of handcrafted testbeds (<a href="https://x.com/shuyanzh36/status/2036098118023049630">Shuyan Zhou</a>).</p></li><li><p><strong>Topical RL synthesis remained popular, though less novel</strong>: a high-engagement overview from The Turing Post catalogued <strong>16 RL variants</strong> spanning RLHF, RLAIF, RLVR, process rewards, self-feedback, and critique-based methods&#8212;useful as a taxonomy, but the more technically significant tweets this cycle were about <strong>how RL environments and reward models are being industrialized</strong> (<a href="https://x.com/TheTuringPost/status/2035857987705954760">Turing Post RL list</a>).</p></li></ul><p><strong>World Models, JEPA, Mechanistic Interpretability, and Emerging Training Theory</strong></p><ul><li><p><strong>JEPA/world-model work had one of the stronger technical showings of the day</strong>: <strong>LeWorldModel</strong> claims stable end-to-end JEPA training <strong>directly from pixels</strong> with no teacher-student tricks, no EMA, and no heavy heuristics: <strong>15M params</strong>, <strong>1 GPU</strong>, and <strong>&lt;1 second planning</strong>, with follow-on summaries emphasizing <strong>~48&#8211;50&#215; planning speedups</strong> and competitive performance against prior world-model baselines. This attracted attention because JEPA-style methods have often been seen as fragile or trick-heavy; these results argue for a much simpler training recipe (<a href="https://x.com/lucasmaes_/status/2036080584569618741">Lucas Maes</a>, <a href="https://x.com/randall_balestr/status/2036086865460171110">Randall Balestriero</a>, <a href="https://x.com/robotsdigest/status/2036104283192709345">RobotsDigest</a>).</p></li><li><p><strong>Mechanistic interpretability continues to mature from &#8220;vibes&#8221; into reverse engineering</strong>: a thread summarizing Anthropic&#8217;s &#8220;On the Biology of a Large Language Model&#8221; framed current mech interp as uncovering circuits and internal features with a level of specificity that would have sounded implausible a decade ago, while also cautioning that traced circuits need not correspond to what the model can explicitly verbalize about its own reasoning (<a href="https://x.com/mathemagic1an/status/2035850046735098065">summary thread</a>).</p></li><li><p><strong>Training theory and optimizer scaling also got attention</strong>: Antonio Orvieto&#8217;s thread argued that optimization theory for adaptive methods explains much of known <strong>LLM hyperparameter scaling</strong> and can suggest transfer rules without brute-force sweeps, while follow-up discussion highlighted optimizer dependence and implications for Muon-style setups (<a href="https://x.com/orvieto_antonio/status/2036129786205008188">Orvieto</a>, <a href="https://x.com/giffmana/status/2036156010272849950">giffmana reaction</a>, <a href="https://x.com/leloykun/status/2036178508809118067">leloykun follow-up</a>). This is one of the more useful undercurrents of the day: people are trying to replace empirical scaling folklore with derivations.</p></li></ul><p><strong>Document Parsing, Retrieval, and Search Infrastructure Became More &#8220;Agent-Native&#8221;</strong></p><ul><li><p><strong>Document parsing is becoming a serious systems layer, not a side utility</strong>: Google Devs and LlamaIndex highlighted a workflow combining <strong>LlamaParse + Gemini 3.1 Pro</strong> for extracting structured data from difficult financial PDFs, claiming roughly <strong>15% accuracy gains</strong> on brokerage statements and complex tables. Separately, LlamaIndex&#8217;s new <strong>LiteParse</strong> targets a lighter-weight parsing path with URL and stream support and no VLM dependency, specifically pitched as something agents can call cheaply and quickly (<a href="https://x.com/googledevs/status/2036101456239939750">Google Devs</a>, <a href="https://x.com/jerryjliu0/status/2036155687848518097">Jerry Liu</a>, <a href="https://x.com/jerryjliu0/status/2036171132806869251">LiteParse</a>).</p></li><li><p><strong>Search/retrieval infra for coding agents improved materially</strong>: Cursor shipped <strong>Instant Grep</strong>, advertising regex search over <strong>millions of files in milliseconds</strong>, with a technical writeup on the indexing/algorithm tradeoffs. For agentic coding this kind of primitive matters more than another tiny model gain; search latency directly shapes whether agents can iterate over large repos fast enough to be useful (<a href="https://x.com/cursor_ai/status/2036122609931165985">Cursor announcement</a>, <a href="https://x.com/cursor_ai/status/2036122612472881574">blog link</a>).</p></li><li><p><strong>Late interaction / multi-vector retrieval is having a moment</strong>: the Weaviate/LightOn discussion argued that late interaction systems finally look practical for broader deployment, especially for code and reasoning-heavy retrieval. The core argument: token-level multi-vector representations can still be cheaper and more reusable than full cross-encoders, while materially improving recall and ranking quality for agentic workloads (<a href="https://x.com/CShorten30/status/2036080609362161900">Connor Shorten podcast</a>, <a href="https://x.com/softwaredoug/status/2036082251734138904">softwaredoug</a>, <a href="https://x.com/AmelieTabatta/status/2036082256482062606">Am&#233;lie Chatelain</a>).</p></li></ul><p><strong>Model and Product Releases: Sakana Chat, MiniMax Plans, Luma Uni-1, NVIDIA Kimodo, and More</strong></p><ul><li><p><strong>Sakana AI made the biggest concrete product launch in the set</strong>: it launched <strong>Sakana Chat</strong> for Japanese users, backed by a new <strong>Namazu alpha</strong> model family, described as post-trained open models tuned to reduce upstream bias and better reflect Japanese context and values. Sakana positioned this as both a consumer product and a demonstration of culturally localized post-training; the supporting technical blog also tied into its prior work using ensembles plus <strong>novelty search</strong> to extract narratives from <strong>1.1M social posts</strong> in a Yomiuri collaboration on information operations analysis (<a href="https://x.com/SakanaAILabs/status/2036246622141849724">Sakana Chat</a>, <a href="https://x.com/SakanaAILabs/status/2036247684139589688">Namazu alpha</a>, <a href="https://x.com/hardmaru/status/2035884310356754715">Hardmaru on the OSINT workflow</a>).</p></li><li><p><strong>MiniMax continued to push productization hard</strong>: it introduced a <strong>flat-rate &#8220;Token Plan&#8221;</strong> covering text, speech, music, video, and image APIs under one subscription, explicitly pitching predictable all-modality billing and compatibility with third-party harnesses. This is notable not because subscription packaging is flashy, but because multimodal API consumption has become operationally annoying enough that simplifying pricing is itself product differentiation (<a href="https://x.com/MiniMax_AI/status/2036123727373672910">MiniMax Token Plan</a>).</p></li><li><p><strong>Generative media shipped notable artifacts</strong>: <strong>Luma&#8217;s Uni-1</strong> was pitched as a model that &#8220;thinks and generates pixels simultaneously,&#8221; while <strong>NVIDIA&#8217;s Kimodo</strong> drew strong engagement as a promptable motion/timeline model trained on <strong>700 hours of mocap</strong>, supporting both human and robot skeletons and available on Hugging Face (<a href="https://x.com/LumaLabsAI/status/2036107826498544110">Luma Uni-1</a>, <a href="https://x.com/victormustar/status/2036043907776098345">Kimodo</a>).</p></li><li><p><strong>Other release notes worth flagging</strong>: Hugging Face <strong>Kernels 0.12.3</strong> added support for <strong>Flash-Attention 4</strong> via <code>cutlass.cute</code> kernels (<a href="https://x.com/RisingSayak/status/2036038782793994541">Sayak Paul</a>); <strong>TRL v1.0.0</strong> claimed up to <strong>44&#215; VRAM savings</strong> for long-sequence training with AsyncGRPO on the way (<a href="https://x.com/DirhousssiAmine/status/2036131263803781305">Amine Dirhoussi</a>); and <strong>AI2&#8217;s MolmoPoint GUI</strong> targeted VLM-based GUI automation with grounding tokens rather than coordinate regression, reporting <strong>61.1 on ScreenSpotPro</strong> (<a href="https://x.com/HuggingPapers/status/2036101402477404284">HuggingPapers</a>).</p></li></ul><p><strong>Top Tweets (by engagement, filtered for technical relevance)</strong></p><ul><li><p><strong>Claude computer use launch</strong>: Anthropic&#8217;s desktop control feature was the most consequential product release in the set and one of the clearest signs that mainstream assistants are moving from &#8220;answering&#8221; to <strong>operating software directly</strong> (<a href="https://x.com/claudeai/status/2036195789601374705">announcement</a>).</p></li><li><p><strong>Cursor Instant Grep</strong>: highly engaged because it addressed a real systems bottleneck for coding agents&#8212;repo-scale search latency&#8212;not just another benchmark increment (<a href="https://x.com/cursor_ai/status/2036122609931165985">Cursor</a>).</p></li><li><p><strong>Luma Uni-1</strong>: major engagement around a model that collapses reasoning and image generation into one product surface, though details remain sparse in the tweet itself (<a href="https://x.com/LumaLabsAI/status/2036107826498544110">Luma Labs</a>).</p></li><li><p><strong>Sakana&#8217;s narrative intelligence / OSINT workflow</strong>: one of the more substantial applied-AI posts, combining LLM ensembles, novelty search, hypothesis generation, and human verification over <strong>1.1M posts</strong> (<a href="https://x.com/SakanaAILabs/status/2035883994940887161">Sakana</a>).</p></li><li><p><strong>JEPA / LeWorldModel</strong>: strong engagement for a compact world model recipe that is much simpler and faster than many expected, and thus potentially more reproducible by ordinary labs (<a href="https://x.com/lucasmaes_/status/2036080584569618741">LeWorldModel</a>).</p></li><li><p><strong>Hyperagents / DGM-H</strong>: among the most technically interesting research posts because it targets <strong>meta-level self-improvement</strong>, not just better task execution (<a href="https://x.com/jennyzhangzt/status/2036099935083618487">Hyperagents</a>).</p></li></ul><p></p>
      <p>
          <a href="https://www.latent.space/p/ainews-dreamer-joins-meta-superintelligence">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Dreamer: the Personal Agent OS — David Singleton]]></title><description><![CDATA[/dev/agents is out of stealth as Dreamer, and the vision is staggeringly ambitious.]]></description><link>https://www.latent.space/p/dreamer</link><guid isPermaLink="false">https://www.latent.space/p/dreamer</guid><pubDate>Fri, 20 Mar 2026 21:03:23 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/191603783/2317717a636a1484d239498d8e0d100e.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<blockquote><p><em>Mar 23 update for Latent Spacenauts: this episode was recorded before the <a href="https://x.com/dps/status/2036156505138012473">Dreamer team announced they were joining Meta Superintelligence Labs</a>, and it turned out to be the last interview they did before the news became public. Consider this a snapshot from just before the transition!</em></p></blockquote><div><hr></div><p>In 2024, <a href="https://www.linkedin.com/in/davidpsingleton/">David Singleton</a> left Stripe and joined forces with <a href="https://www.linkedin.com/in/hbarra">Hugo Barra</a> for a buzzy stealth startup named <a href="https://siliconangle.com/2024/11/26/new-startup-named-dev-agents-led-ex-google-meta-tech-leaders-raises-56m-ai-agents/">/dev/agents</a>. This month they emerged out as <strong><a href="https://dreamer.com/latentspace">Dreamer</a></strong>, a consumer-first platform to discover, build, and use AI agents and agentic apps, centered on a personal &#8220;Sidekick&#8221; that helps users customize experiences via natural language. </p><div id="youtube2-hUBBQu7vdTQ" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;hUBBQu7vdTQ&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/hUBBQu7vdTQ?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Sidekick is nothing less than an &#8220;agent that builds agents&#8221;, with all the complexity that that entails:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!xqBF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaa771e8-e595-42ab-add8-4767f21981f7_1082x676.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!xqBF!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaa771e8-e595-42ab-add8-4767f21981f7_1082x676.png 424w, https://substackcdn.com/image/fetch/$s_!xqBF!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaa771e8-e595-42ab-add8-4767f21981f7_1082x676.png 848w, https://substackcdn.com/image/fetch/$s_!xqBF!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaa771e8-e595-42ab-add8-4767f21981f7_1082x676.png 1272w, https://substackcdn.com/image/fetch/$s_!xqBF!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaa771e8-e595-42ab-add8-4767f21981f7_1082x676.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!xqBF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaa771e8-e595-42ab-add8-4767f21981f7_1082x676.png" width="1082" height="676" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/faa771e8-e595-42ab-add8-4767f21981f7_1082x676.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:676,&quot;width&quot;:1082,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:710188,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.latent.space/i/191603783?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaa771e8-e595-42ab-add8-4767f21981f7_1082x676.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!xqBF!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaa771e8-e595-42ab-add8-4767f21981f7_1082x676.png 424w, https://substackcdn.com/image/fetch/$s_!xqBF!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaa771e8-e595-42ab-add8-4767f21981f7_1082x676.png 848w, https://substackcdn.com/image/fetch/$s_!xqBF!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaa771e8-e595-42ab-add8-4767f21981f7_1082x676.png 1272w, https://substackcdn.com/image/fetch/$s_!xqBF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffaa771e8-e595-42ab-add8-4767f21981f7_1082x676.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>You&#8217;ve seen many many website builder, app builder, and even agent builder startups by now, but our favorite detail is the sheer amount of work that has gone into the &#8220;full stack&#8221; nature of the platform, including shipping their own SDK, logging, database, prompt management, serverless functions, and so on. Most platforms restrict the tech stack you can use just to get off the ground &#8212;&nbsp;Dreamer does it &#8220;right&#8221; by letting you push whatever arbitrary code you want to their VMs.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!kMV1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefd15d20-0408-4a29-be1a-db204de32f60_2854x1708.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!kMV1!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefd15d20-0408-4a29-be1a-db204de32f60_2854x1708.png 424w, https://substackcdn.com/image/fetch/$s_!kMV1!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefd15d20-0408-4a29-be1a-db204de32f60_2854x1708.png 848w, https://substackcdn.com/image/fetch/$s_!kMV1!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefd15d20-0408-4a29-be1a-db204de32f60_2854x1708.png 1272w, https://substackcdn.com/image/fetch/$s_!kMV1!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefd15d20-0408-4a29-be1a-db204de32f60_2854x1708.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!kMV1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefd15d20-0408-4a29-be1a-db204de32f60_2854x1708.png" width="1456" height="871" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/efd15d20-0408-4a29-be1a-db204de32f60_2854x1708.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:871,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3051866,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.latent.space/i/191603783?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefd15d20-0408-4a29-be1a-db204de32f60_2854x1708.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!kMV1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefd15d20-0408-4a29-be1a-db204de32f60_2854x1708.png 424w, https://substackcdn.com/image/fetch/$s_!kMV1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefd15d20-0408-4a29-be1a-db204de32f60_2854x1708.png 848w, https://substackcdn.com/image/fetch/$s_!kMV1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefd15d20-0408-4a29-be1a-db204de32f60_2854x1708.png 1272w, https://substackcdn.com/image/fetch/$s_!kMV1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefd15d20-0408-4a29-be1a-db204de32f60_2854x1708.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>Paying the Builders</h2><p>Of course former leaders of Stripe and Android would not stop at just building the tools, but also building the ecosystem. Dreamer is deeply aware of the 4 sided network effect it has going on and is ready to fund all of it.</p><p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!c-B8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43a3eb07-c4b9-4b9c-b7ae-684c4590e78d_1634x1264.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!c-B8!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43a3eb07-c4b9-4b9c-b7ae-684c4590e78d_1634x1264.png 424w, https://substackcdn.com/image/fetch/$s_!c-B8!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43a3eb07-c4b9-4b9c-b7ae-684c4590e78d_1634x1264.png 848w, https://substackcdn.com/image/fetch/$s_!c-B8!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43a3eb07-c4b9-4b9c-b7ae-684c4590e78d_1634x1264.png 1272w, https://substackcdn.com/image/fetch/$s_!c-B8!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43a3eb07-c4b9-4b9c-b7ae-684c4590e78d_1634x1264.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!c-B8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43a3eb07-c4b9-4b9c-b7ae-684c4590e78d_1634x1264.png" width="1456" height="1126" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/43a3eb07-c4b9-4b9c-b7ae-684c4590e78d_1634x1264.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1126,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:195766,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.latent.space/i/191603783?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43a3eb07-c4b9-4b9c-b7ae-684c4590e78d_1634x1264.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!c-B8!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43a3eb07-c4b9-4b9c-b7ae-684c4590e78d_1634x1264.png 424w, https://substackcdn.com/image/fetch/$s_!c-B8!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43a3eb07-c4b9-4b9c-b7ae-684c4590e78d_1634x1264.png 848w, https://substackcdn.com/image/fetch/$s_!c-B8!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43a3eb07-c4b9-4b9c-b7ae-684c4590e78d_1634x1264.png 1272w, https://substackcdn.com/image/fetch/$s_!c-B8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43a3eb07-c4b9-4b9c-b7ae-684c4590e78d_1634x1264.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p>It&#8217;s time to Dream!</p><p></p><h2>Full Video Episode</h2><p>on <a href="https://youtu.be/TvmxWWfiYWI">youtube</a>.</p><div id="youtube2-TvmxWWfiYWI" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;TvmxWWfiYWI&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/TvmxWWfiYWI?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p></p><h2>Transcript</h2><h2>[00:00:00] Meet Dreamer Purple</h2><p>[00:00:00] <strong>swyx:</strong> Okay, we&#8217;re here in the studio with David Singleton. Welcome.</p><p>[00:00:08] <strong>David Singleton:</strong> Hey, Wix. It&#8217;s great to be here.</p><p>[00:00:09] <strong>swyx:</strong> It&#8217;s great to have you. Uh, we have very sympa that your company color is the same as Lean Spaces color.</p><p>[00:00:15] <strong>David Singleton:</strong> That&#8217;s right. Dreamer Purple.</p><p>[00:00:17] <strong>swyx:</strong> It used to be Devrel agents, which I thought was very cool. It&#8217;s like you call back to Devrel Payments.</p><p>[00:00:22] <strong>David Singleton:</strong> Yeah.</p><p>[00:00:22] <strong>swyx:</strong> And you were obviously CTO Stripe. And talk to me about just the origin or thinking process behind Dreamer. Yeah. And maybe, maybe start with like, what, what is Dreamer?</p><p>[00:00:31] <strong>David Singleton:</strong> Yeah.</p><h2>[00:00:31] What Is Dreamer</h2><p>[00:00:31] <strong>David Singleton:</strong> So Dreamer is a new product, uh, which everyone can come and play with today. Um, it&#8217;s a place where everyone, literally, everyone can discover, build, and enjoy and use AI agents and agenda apps.</p><p>[00:00:45] And we really did design it for consumers, for folks who are not necessarily. Uh, have any kind of technical background. It&#8217;s really aimed at everyone. I think often of my sister, she&#8217;s very smart. She&#8217;s not in the slightest bit technical. She has lots of problems in her life that [00:01:00] she would like to be able to have great software and intelligent software to solve.</p><p>[00:01:04] But you know, even with the rise of tools like Cloud Code and so forth, she&#8217;s got no way to get started. And Dreamer is a place where she can come in, grab some intelligent apps that other people in the community have built, start using them right away, and solve real problems in her life.</p><h2>[00:01:19] Sidekick And Waitlist</h2><p>[00:01:19] <strong>David Singleton:</strong> And at the core, we have a personal agent called the Sidekick.</p><p>[00:01:24] Um, you can give your sidekick a name, you can give it its own personality, and it really helps you across your entire day, your life. It helps you use all of the agents on the platform, and it also helps you build anything you want. And we&#8217;ve been working in this for a little while. We recently launched in beta.</p><p>[00:01:41] So anyone can go to dreamer.com, join the wait list. Um, and we have many, many, many people in the community now who are building really fun, really powerful, really useful. Agents and the agentic apps for themselves.</p><p>[00:01:54] <strong>swyx:</strong> I think we&#8217;re gonna go right into a demo. Yeah. I just wanna make an observation that, uh, you, you, [00:02:00] you put discover first before build.</p><p>[00:02:02] Mm-hmm. But actually, at least for the engineers in the audience. &#8216;cause we are primarily engineers and you&#8217;re primarily targeting consumers, right?</p><p>[00:02:08] <strong>David Singleton:</strong> Yeah.</p><p>[00:02:08] <strong>swyx:</strong> For engineers. Like, there&#8217;s a huge full stack of stuff, which we&#8217;re gonna dive into. Let&#8217;s write. It&#8217;s so impressive. I&#8217;m like, holy shit, this, this is what I&#8217;ve always wanted.</p><p>[00:02:16] Cool. Uh, so, so I think that&#8217;s really good and I&#8217;ve, in some ways, I think given your background given, uh, Hugo&#8217;s, is it Hugo? Hugo.</p><p>[00:02:24] <strong>David Singleton:</strong> Hugo. Hugo Bar. Yeah.</p><p>[00:02:25] <strong>swyx:</strong> Hugo, it&#8217;s not surprising that you can basically kind of build an app store Yeah. For agents.</p><p>[00:02:30] <strong>David Singleton:</strong> Yeah. So Hugo was my co-founder. Yeah. Um, Hugo and I met with our other co-founder Nicholas Checkoff in the very early days of Android at Google, where we were building Google&#8217;s first mobile apps.</p><p>[00:02:41] Uh, we then contributed to very core pieces of Android itself. And you&#8217;re right, we were really excited about building two things. One, solving a bunch of problems. That this breakthrough technology here I&#8217;m talking about mobile needed to have solved in order to make it work for real people at scale. And then secondly, building this ecosystem, um, [00:03:00] of third party developers using the Play Store, um, and able to deliver way more value on the platform than we could have delivered on our own.</p><p>[00:03:08] And we think about Dreamer in exactly the same way. So I was working at Stripe, as you mentioned, and we had the opportunity to put some of the very first AI agent systems in the world into production. And from the moment we did the first of those, I was just struck with a strong sense of conviction that this is breakthrough technology that&#8217;s gonna change how all of us work with computers and phones and so forth, all of the, the technology in our lives, but.</p><p>[00:03:34] There&#8217;s a lot of problems to be solved, for real people to be able to make this approachable. Um, and it really is kind of a direct analog for what we were solving back in the early days of mobile apps at Google and, and Android. So it&#8217;s, it&#8217;s been fun to bring that to life.</p><p>[00:03:47] <strong>swyx:</strong> Yeah. Uh, let&#8217;s look at it.</p><p>[00:03:48] <strong>David Singleton:</strong> Yeah, let&#8217;s take a look.</p><h2>[00:03:49] Dashboard And Daily Briefing</h2><p>[00:03:49] <strong>David Singleton:</strong> So, uh, dreamer.com, this is our homepage. This is where you can come and, uh, watch some videos about what is here and sign up for the wait list. Once</p><p>[00:03:57] <strong>swyx:</strong> you, I, I just wanna say for those listening, &#8216;cause we have a lot, you [00:04:00] know, switch to YouTube, look at the animations. So much care.</p><p>[00:04:03] <strong>David Singleton:</strong> We, we really care about, uh, this product being fun.</p><p>[00:04:07] Uh, and, and interesting to use. Obviously a lot of people are using it to do real important stuff. You can do real work, uh, here, uh, but also you can build fun things too. Once you get off of our wait list, you&#8217;ll come into the product. The first thing that happens is you&#8217;ll have a conversation with your side cake, which is this little friendly, uh, character here.</p><p>[00:04:27] And psychic will seek to get to know you and understand you. What do you care about? And will help you discover and build your first AI agents or agentic apps. After that, you&#8217;re, you&#8217;re gonna have a dashboard. This is my dashboard. Everyone&#8217;s is different. Um, you can see I have a few things here. I have a feed.</p><p>[00:04:42] So a lot of our agents do things in the background when you&#8217;re not looking and the feed is how they let you know what they&#8217;ve been up to. I have, uh, some widgets, uh, from apps that I have built. Uh, this one is called Calendar Hero. Uh, this is something that I installed from the gallery. Uh, so built by someone in our community.</p><p>[00:04:59] It&#8217;s a [00:05:00] really powerful calendar app because for each of my meetings, if it&#8217;s with someone I don&#8217;t already know, well it&#8217;ll actually go off and research it, um, and give me both a history of my interactions with those people and also a bunch of, you know, public useful information to, to get started. One of the things I love about this particular app is that every day it generates a podcast, um, a daily briefing.</p><p>[00:05:24] And one of the things that we&#8217;ve done with the platform is we&#8217;ve made it possible for all the things that agents do to show up in places that you care about. So if you look over here, this is the screen in my phone, and if I go ahead and open my Apple Podcasts, you can see right here. Your Daily briefing podcast is ready.</p><p>[00:05:39] This was produced by an agent running in my Dreamer account, and it was very easy by scanning a QR code to connect it to my Apple podcast. That&#8217;s what I listened to in the car now every morning. Yeah. On my way to work.</p><p>[00:05:50] <strong>swyx:</strong> It, it</p><p>[00:05:50] <strong>David Singleton:</strong> preps me for, for my day.</p><p>[00:05:52] <strong>swyx:</strong> So one additional bit of context. I asked you immediately after seeing this was like, what, what about, I wanna talk back to my agent and you said you actually started with voice and then you went to [00:06:00] podcasts.</p><p>[00:06:00] &#8216;cause it&#8217;s nice to have it pre downloaded</p><p>[00:06:02] <strong>David Singleton:</strong> that, right? That&#8217;s right. Um, yeah, we, you, you can talk to your sidekick. So, you know, on mobile we have, uh, a dreamer app and you can talk to the sidekick right here. Um, but we&#8217;ve actually found that making things, uh, show up in the other apps that you already use in your life is incredibly powerful.</p><p>[00:06:19] So let&#8217;s take a look at what&#8217;s kind of under the hood here.</p><h2>[00:06:21] Gallery Tools And Payouts</h2><p>[00:06:21] <strong>David Singleton:</strong> So I already mentioned that we have a gallery, so this is where you&#8217;ll find a lot of agents from our community. Uh, there&#8217;s. Many at this point, hundreds. And they are solving all kinds of, uh, use cases. I&#8217;d say the the top use cases are on personal productivity, but also a lot of information management that can range from personal information like docs and so forth, managing your emails.</p><p>[00:06:42] It also ranges out to public information that you might be interested in, but you need something to help manage the, the kind of fire hose of stuff that&#8217;s coming at you. For instance, I have, um, an agent which looks at all the AI news, um, all the time. There&#8217;s a lot of it and it finds the stuff that I would actually be [00:07:00] interested in, um, and I find it incredibly useful.</p><p>[00:07:03] So these are agents that you can install that other people have built. Anything that you install on Dreamer, you can actually just say, I wanna start making some changes, and we&#8217;ll look at that in a second. But in natural language, with the sidekicks help, you can change any of these experiences to work just the way you want them.</p><p>[00:07:18] But the base layer of the system are tools. So you know, as well as anyone swyx, that any AI system is only as good as the quality of data that it can pull in and the quality of action it can take. So before we launched our beta, we worked very hard to make sure that we seeded our tools with a bunch of very high quality and powerful integrations.</p><p>[00:07:39] So, you know, for instance, this is real Google search, this is actual Gmail. Um, and you can do very useful things with those. But also this is a platform for everyone. And as we got started talking to people in our alpha community, a whole bunch of sports use cases popped out and we realized if you want to build something cool for sports with ai, you need really high quality live data.</p><p>[00:07:58] So look at these [00:08:00] Formula one M-L-B-N-F-L, uh, these are tools, uh, that we&#8217;ve built. We&#8217;ve done a, these are not data scraped off the web. This is a, a direct data feed integration. And because it&#8217;s live and &#8216;cause it&#8217;s high quality, you can build really powerful stuff. But tools is not something that we are just going to kind of control ourselves.</p><p>[00:08:19] The platform is open for tool Builders to contribute tools that anyone on Dreamer can use. So, um, this is actually the place in the platform where I think software engineers, um, well number one, would love for you to come and play with it. Uh, but software engineers are really gonna build, um, a lot of powerful stuff into the system.</p><p>[00:08:38] And we are actually sharing something for the first time on this podcast, which there is, uh, tool builders on Dreamer get paid. So if you publish a tool to the platform and a lot of agents use it, you&#8217;ll actually get paid, uh, in proportion to their usage. And we&#8217;d love for folks to come and give this a try.</p><p>[00:08:54] We&#8217;ve got good docs that help you get started and you can build things that, you know, scratch your own itch. For instance, someone built this [00:09:00] Ski Bum tool, which provides live snow conditions for a bunch of, uh, ski resorts. I&#8217;d love to show you how I&#8217;ve used that in a second. And also we have some tools, partners where the tools themselves are paper use.</p><p>[00:09:12] So for instance, parallel web systems is a premium tool. Uh, you can do really cool stuff with it. Um, it&#8217;s a a, an agentic web research tool. And that one, because it&#8217;s expensive to operate, is paid on a, on a per usage basis. But if you&#8217;re coming in to build agents on the platform, even the premium tools, you get a free trial.</p><p>[00:09:29] So you get a chance to actually try them out, make sure that the use case is good for you before you decide to, to to sign up. So that&#8217;s tools. So we have the gallery, we have tools, and then the sidekick helps us put all of this together to build agents. We do that in the agents studio. You can also do this on your phone, but if I open up Agent Studio here on Desktop psychic&#8217;s, just gonna start a conversation about what you want to build together.</p><p>[00:09:51] I&#8217;d love to show you one that I made recently.</p><p>[00:09:53] <strong>swyx:</strong> Let&#8217;s do</p><p>[00:09:53] <strong>David Singleton:</strong> it.</p><h2>[00:09:53] Building A Conference App</h2><p>[00:09:53] <strong>David Singleton:</strong> Um, let&#8217;s look at something that hopefully is kind of near and dear to your heart. So one of the things I love about Dreamer and this kind of moment in technology is that if you think about it. There are all these things in your life where, have you ever gone to a conference?</p><p>[00:10:09] I know you have. Right? And, uh, big conferences have apps. Um, and these apps are usually built by agencies and they&#8217;re, they&#8217;re usually actually quite expensive to build. I&#8217;ve been involved in running some of these myself. And how many conferences have you been to where the app was good? Zero. Honestly.</p><p>[00:10:23] <strong>swyx:</strong> Exactly. Zero,</p><p>[00:10:24] <strong>David Singleton:</strong> maybe one. I, I&#8217;ve, I&#8217;ve been to one conference. That was pretty good. Wait, wait session sessions. Um, but, but the point is, they&#8217;re rarely great pieces of software. Right. And they&#8217;re also expensive to build, but they&#8217;re, they&#8217;re interesting &#8216;cause they&#8217;re episodic, they last for this one thing. Um, and then they&#8217;re, they&#8217;re not relevant anymore.</p><p>[00:10:43] Um,</p><p>[00:10:43] <strong>swyx:</strong> and so it&#8217;s the worst feeling to invest in them because, you know, it&#8217;s like, it&#8217;s got a limited. Date?</p><p>[00:10:48] <strong>David Singleton:</strong> Absolutely. So I decided to build, uh, a conference app for your AI engineer conference. Amazing. Uh, on Dreamer. One of the things that Swix has done, uh, which I [00:11:00] thought was very forward-looking, is actually put a whole bunch of data about the conference on the webpage in an LLM readable way.</p><p>[00:11:06] There&#8217;s an LLMs txt file, there&#8217;s a feed of all of the sessions in js, ON. So I used the data from your conference last year and built this intelligent app, uh, just by talking to our sidekick, uh, in Dreamer. So just to give you a quick tour, this is my Dream Conference app. What I always wanna do for conferences is I wanna be able to search for speakers.</p><p>[00:11:28] I&#8217;m usually there because, uh, there, uh, is a speaker I care about. So, you know, SWIX, you&#8217;re the speaker I care about. I can actually see here who you&#8217;re on stage with. So here&#8217;s, here&#8217;s Greg Brockman. You&#8217;ve read even ai, uh, and this is his session. And look Greg and Swix for the speaker. So let&#8217;s add that to my schedule.</p><p>[00:11:45] Great. And then maybe there&#8217;s a couple others I might see here. Like on day two, I remember there were some keynotes. So, uh, building the open agenda web, that sounds fun. So I add that to my schedule.</p><p>[00:11:55] <strong>swyx:</strong> She&#8217;s now CEO of Xbox.</p><p>[00:11:56] <strong>David Singleton:</strong> Awesome.</p><p>[00:11:57] <strong>swyx:</strong> Which is interesting. So cool. So,</p><p>[00:11:59] <strong>David Singleton:</strong> so I&#8217;ve [00:12:00] gone through and picked out a couple of sessions that I cared about.</p><p>[00:12:03] That&#8217;s as far as I usually get with any conference app. But of course you&#8217;ve got the whole of the rest of the conference to figure out what to do. So here is where the native intelligence of, of these things you build on Dreamer can come in. So I&#8217;m gonna click guide me. So Dreamers sidekick actually parsed out the whole schedule and figured out what some of the themes are and I can choose what I&#8217;m interested in here.</p><p>[00:12:23] I&#8217;m definitely interested in agents. Uh, I&#8217;m definitely interested in code generation and also reasoning in rl. So now I&#8217;m gonna say build my schedule. So what this is doing is. It&#8217;s going across every time slot for the conference. And it&#8217;s choosing among the things I could go to, which one it thinks is best for me based on my interests.</p><p>[00:12:41] It also uses its own memory of me that&#8217;s part of Dreamer, uh, to understand what I might like best. And you know, there&#8217;s an LLM prompt running for each one of these time slots. So this is, it&#8217;s not super fast, but it&#8217;ll be done in about 30 or 40 seconds. And I&#8217;m gonna have a special custom schedule for the conference.</p><p>[00:12:57] This, like I said, is my [00:13:00] dream conference app is exactly what I&#8217;ve always wanted and I was able to build this yesterday morning. Um, I did it between some meetings. I think I spent a total of 25 minutes of wall clock time on it. I did it over the course of a couple of hours. And, uh, here is my schedule for the conference.</p><p>[00:13:15] I can see it in a calendar view. This is what I should do on Tuesday, this is what I should do on Wednesday. Oof, no conflicts, but, you know, I may not go to every single thing. And there you have it built in, you know, dreamer. So let&#8217;s take a look at what the building experience actually looks like. So this is the, the actual account that I made it on.</p><p>[00:13:32] Oh, of course I should say anything you build on Dreamer also works on your phone. So, uh, here is my AI engineer conference app right here on my phone. Got all the same functionality, and of course this is the best place to jump into my schedule.</p><p>[00:13:46] <strong>swyx:</strong> Yeah.</p><p>[00:13:46] <strong>David Singleton:</strong> Um,</p><p>[00:13:46] <strong>swyx:</strong> so you could generate a podcast about it just completely multimodal, absolute thing, right?</p><p>[00:13:51] To me, I mean, this is why I outsource, I mean, well, I, I posted the L-M-T-X-T, the JSON because you cannot run an engineer conference in 2025 [00:14:00] and not let engineers. Do whatever they want.</p><p>[00:14:02] <strong>David Singleton:</strong> Yeah.</p><p>[00:14:03] <strong>swyx:</strong> And since all conference apps suck, I&#8217;m just gonna put up a ba minimum viable app and just let people do whatever they want.</p><p>[00:14:09] <strong>David Singleton:</strong> Totally. And the cool thing about this on Bremer is I published this to the gallery and you can use it so you&#8217;ve got one that&#8217;s built to my taste of conference apps. I think it&#8217;s pretty cool. But you might want something different. Yeah. In which case you just start telling the sidekick how to change it.</p><p>[00:14:23] So let&#8217;s just very quickly look</p><p>[00:14:24] <strong>swyx:</strong> at our, what sports grid is also, you can fork it, right? That I can publish. That&#8217;s right. I can publish your one and go, this is the base starter. It&#8217;s, it&#8217;s got good defaults, but go customize, whatever.</p><p>[00:14:32] <strong>David Singleton:</strong> That&#8217;s right. That&#8217;s right.</p><p>[00:14:33] <strong>swyx:</strong> Yeah.</p><h2>[00:14:33] Agent Studio Under The Hood</h2><p>[00:14:33] <strong>David Singleton:</strong> So let&#8217;s take a look at how I actually built this.</p><p>[00:14:34] This is real. So I&#8217;m gonna say make changes. This experience we&#8217;re looking at now is our, uh, agent development studio. Um, like I said, you can do this on your phone as well. And in fact, this one I started out on desktop. Let&#8217;s look at my actual prompts. I said, let&#8217;s make an agent called AI Engineer Schedule Planner should be a custom schedule planner for the AI engineer conference.</p><p>[00:14:53] I&#8217;m not gonna read this all up. You get, you get the point and it told it where to get the data from. So that was the first prompt. And actually after I gave it that [00:15:00] prompt, I actually had a simple version of this app working, um, after the sidekick took one turn. So the Sidekick is a, like a professional software engineer, and we&#8217;ve worked very hard to make this work and build functional apps for folks that might not have any engineering experience whatsoever.</p><p>[00:15:14] So, you know, done here we have build logs that are technical, but you can hide those away. And sidekick, as it is building, will actually translate everything that is coming out of, uh, of the, the harness into English that you can actually read. And by the way, this English is in the personality of your sidekick, which is fun.</p><p>[00:15:32] Um. And the way that we build agents and agent apps, it&#8217;s a little different to what you might have seen in some other platforms for a couple of reasons. One, just the build process. The very first thing that Sidekick does, it understands all the agents you&#8217;ve got set up. It understands all the tools and it will come up with a plan for how to realize your goal, how to make sure it actually has the data and the capabilities to complete it.</p><p>[00:15:54] It will occasionally refuse. If it can&#8217;t do what you&#8217;re asking, it will tell you I can&#8217;t do that. It needs another tool. And that&#8217;s a good [00:16:00] jumping off point for any of the tool builders out there to build a new tool. So it&#8217;ll fi first figure out how, then it will build it, and then it will actually test it.</p><p>[00:16:07] So it will actually make sure that the thing that it has generated is realizing your goal. And you probably know as well as anybody that anytime you can get any. Modern state-of-the-art coding model into a loop where it can make changes and perceive its own output and then fix bugs. Magic happens. So these builds, the first build will often take 10 to 15 minutes on Dreamer, which is a little bit longer than you might&#8217;ve seen on some other platforms.</p><p>[00:16:31] But the first thing that it creates will work most of the time. And then of course, as you start making smaller changes, you can like ask it to tweak the UI in any way that you like. Those are much faster. And just to give you a sense, uh, for this one, here&#8217;s something I asked. Put a logo, I gave it a logo file in static files.</p><p>[00:16:48] Use that as the title. So for folks that actually really want to dig, uh, into a bit more detail, we&#8217;ve provided a powerful IDE here. So I can actually see here&#8217;s the code that was generated and some pieces of the [00:17:00] code are more accessible than others, like the prompts. So this is the prompt that&#8217;s used by a powerful LLM in order to do that schedule picking.</p><p>[00:17:08] And I can actually read it here directly. I can edit it without having to ask the sidekick if I want to do that.</p><p>[00:17:12] <strong>swyx:</strong> So this is very nice.</p><p>[00:17:13] <strong>David Singleton:</strong> This is for the more, the more, uh, sophisticated users.</p><p>[00:17:16] <strong>swyx:</strong> Yeah. This is other people&#8217;s entire startup is prop management.</p><p>[00:17:21] <strong>David Singleton:</strong> This is true. The other thing that is different about Dreamer is once you&#8217;ve built something here, it&#8217;s ready to go.</p><p>[00:17:28] We host it. So you don&#8217;t have to worry about getting a database from a database provider signing up, getting API keys. You don&#8217;t have to worry about your LLM provider tokens. All of that is hosted on the platform. And you can use it yourself. You can share it to the gallery for other people to, to riff on it.</p><p>[00:17:46] You can also share it with your friends and coworkers to use your instance of the agent or agentic app. And we&#8217;re seeing that happen a lot in our community. We&#8217;ve seen a whole bunch of folks who built little applications for their personal life [00:18:00] and shared them with their significant other. We&#8217;ve seen people who are building little productivity apps for their team at work and sharing it, uh, among them.</p><p>[00:18:07] And we actually do this a lot inside of the company. So at this point we, we pretty much run the company on Dreamer agents for all kinds of important things. Uh, maybe a good example of that is, um, our wait list. People are signing up every time someone signs up for our wait list. A dreamer agent will actually research, uh, that person.</p><p>[00:18:25] And we&#8217;re looking for folks who are builders, not super technical to build agents and come in, uh, and give us a lot of feedback and we&#8217;re prioritized bringing those people off of the wait list First,</p><p>[00:18:35] <strong>swyx:</strong> just a quick question on that one is there&#8217;s, it may not come up again. Do you find enrichment APIs to be useful like the ZoomInfo?</p><p>[00:18:42] Uh, clear bit</p><p>[00:18:43] <strong>David Singleton:</strong> enrichment is a very, uh, common use case. Um, on dreamer. Any application on Dreamer can kick off a sub-agent to do a particular task. Um, so this actually is a powerful agentic harness that runs inside of its own [00:19:00] vm. Uh, we call them sidekick tasks &#8216;cause they actually run in the context of the sidekick.</p><p>[00:19:04] I&#8217;ll talk more about Sidekick in a second and. Enrichment is a very common use case. And the cool thing about a sidekick task is that it has access to all the tools on the platform, but also public data as well. And so very frequently enrichment on our platform happens using public data that it can be found in the web.</p><p>[00:19:24] There are some tools for getting people data, uh, from, uh, from various bespoke systems. And so that works pretty well. But actually, you&#8217;d be surprised. I mean, we would love if someone out there would like to build a ZoomInfo tool, we don&#8217;t have one today. We&#8217;d love to see that on the platform, and I&#8217;m sure it&#8217;ll be very powerful.</p><p>[00:19:39] But we&#8217;re also seeing that this powerful agent harness can pull a lot of data in on that note of tools that make experiences better, we&#8217;re constantly adding more tools because people in the community are building them and publishing them. We review the tools carefully and then they go live for everybody.</p><p>[00:19:54] Yesterday we added granola. And that was pretty cool. So I was talking to actually, uh, Sarah on my team was [00:20:00] talking to, uh, someone building on the platform this morning and they actually, they have an agentic app that they built, which is a kind of magic to-do list. So they put stuff on their to-do list and for each thing it kicks off one of these, uh, sidekick tasks to figure out how to move the ball forward thing.</p><p>[00:20:14] Sometimes it&#8217;ll complete it</p><p>[00:20:15] <strong>swyx:</strong> entirely. Yeah.</p><p>[00:20:16] <strong>David Singleton:</strong> Often by calling another agent on the platform and sometimes it just kind of researches it and helps &#8216;em take the first step.</p><p>[00:20:21] <strong>swyx:</strong> Yeah. Do you know, this is Sam Altman&#8217;s number one, ask for an AI app. It&#8217;s the self-completing to-do list.</p><p>[00:20:26] <strong>David Singleton:</strong> Yeah. The self-completing to-do list is something that a lot of people have built on Dreamer and are getting a lot of use out of.</p><p>[00:20:32] Yeah. And, and finding it actually genuinely I shouldn&#8217;t, I should, I should try that. Mm-hmm. Please do. And you&#8217;ll even find some in the gallery that you can remix. So he was saying this morning that he&#8217;s, he built this self completing to-do list, uh, on Dreamer already. But he connected the granola tool yesterday and now something really magical happens, which is when he says in meetings that he&#8217;s gonna do a thing, it magically shows up on his to-do list and then it can magically get completed.</p><p>[00:20:56] And then, as I mentioned, all the agents, all the [00:21:00] apps on Dreamer can actually work together. So our coding agent, as it builds them, does something very special where it exposes the internals of each of the experiences to the system. And then Sidekick can manipulate those to get stuff done. So he has built another agent, which he uses for recruiting.</p><p>[00:21:18] It kind of keeps track of candidates and also it&#8217;s got a kinda mini CRM function, so he&#8217;s able to introduce candidates to each other. He told us this morning that something he&#8217;d committed to do in a meeting that was recorded on granola yesterday showed up in his magic to-do list and his magic to-do list.</p><p>[00:21:34] It was like introduce a person for recruiting, used his recruiting agent to get it done.</p><p>[00:21:39] <strong>swyx:</strong> Ah,</p><p>[00:21:39] <strong>David Singleton:</strong> um, and this is, this is the dream. This is why we started the company. It really is the case that you can build and use these very powerful, bespoke experiences that can automate your life by working together. And I&#8217;d love to talk a little bit about how they work together.</p><h2>[00:21:55] Ecosystem Trust And Monetization</h2><p>[00:21:55] <strong>David Singleton:</strong> So obviously it&#8217;s really cool to have [00:22:00] software that will work on your behalf, but it&#8217;s only useful if you can trust it, right? So privacy and security is very important to us making these things accessible and. While also being trustworthy is hard. So the model that we have, which is working very well, is that the sidekick is at the core of everything here.</p><p>[00:22:22] So it is both your companion, your helper, but it&#8217;s also the traffic cup in the system. So when, when one agent wants to work with another agent and dreamer, it doesn&#8217;t do it directly, it does it via the sidekick, well ask the sidekick to do the thing. And the sidekick understands both everything, all the expectations that have been set with me as a user about what agents can do, which tools I&#8217;ve given them permission to use.</p><p>[00:22:45] And it will make sure that whatever is is going on is actually aligned with my own interests. And you know, that&#8217;s part of the background that I bring to this problem domain. I&#8217;ve. Worked for years, uh, keeping very important information, safe and secure. And [00:23:00] so as we started to think about this problem, we realized that we actually had to build something that&#8217;s a bit like an operating system.</p><p>[00:23:06] You know, the sidekicks, like the kernel, the agents and apps are like users. Yeah. Different rings. Exactly. Because if you try to pick off just one piece of this, you can&#8217;t actually make it work for people at scale. Uh, because you could build little vibe coded apps, but they&#8217;re gonna grab all your data willy-nilly.</p><p>[00:23:23] They won&#8217;t be able to work together. You actually have to invest in the fundamental core in order to make it work well for people. And that&#8217;s what we&#8217;ve been doing and it&#8217;s, uh, it&#8217;s been a lot of fun. One other thing I wanted to mention is, um, I&#8217;ve obviously talked about two things, tools and agentic apps.</p><p>[00:23:42] We really designed Dreamer to be an ecosystem and a platform, and one of my favorite quotes about platforms, I think it&#8217;s from Bill Gates, is that you can only be a platform. If you create more value for the folks participating and using the platform than, than the platform itself creates. [00:24:00] And that&#8217;s our goal here.</p><p>[00:24:01] So we at every step have been thinking about how do we make sure that other people are deriving even more value from Dreamer than we are? So in that vein, I already mentioned tool builders get paid and people can build agents that solve their needs and share them with others, and we are already thinking about ways that they can actually monetize those as well.</p><p>[00:24:24] Against that backdrop, one of the things that we are launching today is our Builders in Residence program. So there are tons of people building really cool stuff and contributing it to the gallery already, but we&#8217;ve been really inspired by programs we&#8217;ve seen at other companies where artists might be in residence, people that are very creative.</p><p>[00:24:43] And might have ideas outside of what the, the folks at the company or in the ecosystem already have. And so we are looking for creative people who have fun ideas and, you know, want to really figure out how to apply their creativity at the cutting edge [00:25:00] of technology today to come and work with us. So, uh, if you go to dreamer.com/latent space, you&#8217;ll find, ooh, well, we love Latent space.</p><p>[00:25:09] Uh, you&#8217;ll find a link both to, uh, our tool Builder information and our builder in residence program. And for builders and residents, we&#8217;ll let you in off the wait list quickly, build an agent, and then for a small number of, of the most creative folks, we&#8217;re going to pay you to build agents. Uh, you can work directly with our team.</p><p>[00:25:29] You know, this is like building Legos. So, you know, we&#8217;ve got some of the basic blocks together already, but if you need a Ron steering wheel and we don&#8217;t have one already, like we&#8217;ll build it for you. Yeah. Um, we really want to be inspired by, by these, uh, these builders in residence.</p><p>[00:25:43] <strong>swyx:</strong> This Legos thing is pretty common as an analogy.</p><p>[00:25:46] And there&#8217;s a, there&#8217;s a thing I call the master builder. Uh, we, the actual Lego company has master builders that they employ Yeah. To inspire people and post on socials.</p><p>[00:25:56] <strong>David Singleton:</strong> That is exactly what inspired us as well. Honestly, we talked about the Lego Master [00:26:00] Builder program, so that&#8217;s our builder in residence program.</p><p>[00:26:02] <strong>swyx:</strong> Yeah.</p><p>[00:26:03] <strong>David Singleton:</strong> Um, and then, uh, finally back on, on tools. Like I said, anyone can come in and build tools today. If you follow the latent space link dreamer.com/latent space, again, we&#8217;ll get you off. Directly off the wait list. So you can build right away, you can monetize by publishing onto the platform. That&#8217;s for everyone, the very best tool that gets added to the platform by mid-April.</p><p>[00:26:23] Uh, we have a $10,000 prize that we want to give out really, because we just want to seed the creativity of everyone out there. So we&#8217;re excited to do that.</p><p>[00:26:31] <strong>swyx:</strong> Yeah. And you know, uh, this is completely a flywheel, right? Like the more tools, the more builders, the more the third thing agents, you know, it just feeds into each other.</p><p>[00:26:39] <strong>David Singleton:</strong> That&#8217;s right.</p><p>[00:26:39] <strong>swyx:</strong> Yeah. Just on the payments thing, because we probably won&#8217;t touch on that again, but I have to ask the former CTO Stripe on payments as presumably you&#8217;re using Stripe Connect.</p><p>[00:26:48] <strong>David Singleton:</strong> Yeah.</p><p>[00:26:48] <strong>swyx:</strong> Um. Any pain points that you&#8217;re, people are very interested in agent commerce and micropayment and all these things.</p><p>[00:26:55] Presumably stable coins get into a conversation at some point, but maybe not now.</p><p>[00:26:58] <strong>David Singleton:</strong> Yeah, we are [00:27:00] really, really excited about e agent commerce. The first step we are taking is help people in the world who have never been able to build these kind of experiences and software before to build stuff that meets their passions, share it with the world and get paid.</p><p>[00:27:14] So that&#8217;s all commerce that happens on our platform, and so we don&#8217;t need anything new to facilitate that. Stripe Connect has existed for quite a while and is the perfect solution for this kind of stuff, so, um, we we&#8217;re excited about that. First and foremost, however. A lot of the things that people are already doing on Dreamer, we just talked about a self-completing to-do list.</p><p>[00:27:34] A lot of the ways that you want to complete to-dos is by actually closing the loop in the real world, and that&#8217;s going to involve the exchange of value. So we have some folks that are building tools already that actually do have money move in order to, to complete that, that loop. So far, we just want to be open and agnostic to all the protocols out there.</p><p>[00:27:54] I honestly think this moment in time is a little bit like the early web. So I personally started coding as a kid [00:28:00] and I think I got access to the internet in about 19 95, 19 96. And back then, uh, the web existed, you know, HTTP was a protocol, but there were also other protocols I was using all the time, like Gopher and UUCP and uh, various others.</p><p>[00:28:15] So the point is like the web, HTTP and HTML. Was just one among many protocols. And of course it became the winner and it&#8217;s awesome. Yeah. Um, but the others were also kind of interesting and viable at the time as well. And I think the world of agentic commerce is like this right now. Also,</p><p>[00:28:30] <strong>swyx:</strong> acp.</p><p>[00:28:31] <strong>David Singleton:</strong> Acp, exactly.</p><p>[00:28:32] All the, all the cps, you know, on Dreamer. We hope that folks will build tools that kinda make use of all of these things, but I&#8217;m sure that at a certain point. One or two will emerge as the winners, and then we&#8217;ll be able to build like really deep support in,</p><p>[00:28:44] <strong>swyx:</strong> yeah. This is like maybe a complete tangent, but I do think about how a lot of these companies in AI companies in particular have to switch from c based to usage based because of course, but then, then they end up, end up having to sort of [00:29:00] obscure the margins a little bit and then they inventing end up inventing their equivalent of rob robots.</p><p>[00:29:04] <strong>David Singleton:</strong> Mm-hmm.</p><p>[00:29:04] <strong>swyx:</strong> Uh, where they&#8217;re like, well, okay, well every company should have their own currency. And it&#8217;s, it&#8217;s like very short lead to a token.</p><p>[00:29:11] <strong>David Singleton:</strong> Yeah.</p><p>[00:29:11] <strong>swyx:</strong> Or, and I&#8217;m like, okay, well where does this end? I can&#8217;t really play out the next step as to like, is this chaos? Is this,</p><p>[00:29:18] <strong>David Singleton:</strong> yeah.</p><p>[00:29:18] <strong>swyx:</strong> Okay.</p><p>[00:29:18] <strong>David Singleton:</strong> Well, I think it is kind of like the wild west.</p><p>[00:29:21] I don&#8217;t mean that in a completely, it&#8217;s all completely disorganized way, but there&#8217;s just so many things that could happen from here. The Overton window is very wide, right? Not far how this might land. And I&#8217;m just very excited to be building a platform that can take advantage of all of those opportunities and we&#8217;re just gonna be there.</p><p>[00:29:36] Uh, working for our users to make sure that things that emerge work,</p><p>[00:29:39] <strong>swyx:</strong> you&#8217;re gonna own the consumers, you&#8217;re gonna be up the OS for the app store for everything.</p><p>[00:29:43] <strong>David Singleton:</strong> So one of the ways to think about this is, um, dreamer actually uses all of the state-of-the-art models as a user. You don&#8217;t have to think about should I be using, you know, Opus four six, or should I be using the five four model from [00:30:00] OpenAI?</p><p>[00:30:00] We are continually doing evals and so forth to make sure that the best things are there for you. You can just build on the platform and know that as the world ships around, you&#8217;re gonna get the right stuff for you. Um, and I think that&#8217;s something that is needed to actually have folks take advantage of this technology at scale.</p><p>[00:30:19] I&#8217;d love to show you another example of something I built.</p><p>[00:30:21] <strong>swyx:</strong> Let&#8217;s do it.</p><p>[00:30:22] <strong>David Singleton:</strong> This is another example of software that just lasts for a certain moment in time. So recently I went on a ski trip with a bunch of friends,</p><p>[00:30:31] ski</p><p>[00:30:31] <strong>David Singleton:</strong> Bum. Uh, so it uses ski bum. Yes. I went on a ski trip to Big Sky. I&#8217;d never been there before.</p><p>[00:30:38] And I made this little intelligent app for us. And you can see it says it&#8217;s loading big sky conditions. So it&#8217;s actually calling the Ski Bum tool that I just showed you, which is, uh, published in our, uh, in our gallery. So what is this? This is a little app that was just for our weekend trip. It shows the current status of all the lifts of Big Sky.</p><p>[00:30:54] Using that tool from the ecosystem, it shows the forecast for the upcoming weekend. It shows our [00:31:00] accommodation. This is just like where my group was staying. This is just for us and also a bunch of dining information that one of our friends, uh, put together who, who&#8217;s an expert on Big Sky. So I was able to take this app, share the link with my friends.</p><p>[00:31:12] They weren&#8217;t on Dreamer yet, just send it to them on iMessage and they get a version they can use on their phone. And of course, here&#8217;s the real kicker. So I&#8217;ve been on ski trips before and other weekend adventures with my friends. Yeah, people pay for different things and at the end of the weekend it&#8217;s always a pain to figure out who needs to pay, who to settle up.</p><p>[00:31:29] So we use this during the weekend. We added all of our expenses in here. Uh, too close are it&#8217;s drill data. It&#8217;s only too closely. And then at the end of the trip, we press split. And we&#8217;re, we settled up and we&#8217;re done. So there&#8217;s another dreamer. This was all through dreamer. So the, the actual payment? No, no.</p><p>[00:31:47] We, it happened because, because we paid for stuff in the real world, it was like, okay, this person needs to pay that person 20 bucks. Right? Right. This person already paid in that. Right. So it just helped us all settle up. We didn&#8217;t move the money on Dreamer. You could do that. And in fact, if you&#8217;re a tool builder [00:32:00] thinking about this and getting excited, like come build a tool to do that stuff.</p><p>[00:32:02] We really think of our tool builders as design partners.</p><p>[00:32:05] <strong>swyx:</strong> Yeah. I got, I got the tool. Uh, what, like, I hate, I use Bank of America. I hate bank, I hate the app. Mm-hmm. I hate the web. All banking websites just horrible.</p><p>[00:32:13] <strong>David Singleton:</strong> Yeah.</p><p>[00:32:13] <strong>swyx:</strong> So just build me, like build a thing on top of Plaid.</p><p>[00:32:15] <strong>David Singleton:</strong> Yeah. Right. And then just So</p><p>[00:32:17] <strong>swyx:</strong> five code by banking app,</p><p>[00:32:18] <strong>David Singleton:</strong> there&#8217;s already a tool for that.</p><p>[00:32:20] Oh. So, um, attain Finance is a tool, a builder in our community built. Okay. Um, and it uses a secure system like Plaid. To access your, uh, financial data and you can build powerful personal finance agents on Dreamer today using this tool. And like I said, we review tools carefully. So when bringing Attain Finance onto the platform, we did actually quite a detailed security review with that company to make sure that if folks build stuff with it, it&#8217;s, it&#8217;s gonna work well.</p><p>[00:32:49] So yeah, check that out. I think, uh, I&#8217;m, I&#8217;m pretty certain it connects to Bank of America. So you&#8217;ll be able to build the, the app that you wanted already?</p><p>[00:32:55] <strong>swyx:</strong> Yeah. There&#8217;s a couple of points I wanted to sort of dive in on, maybe highlight to folks, [00:33:00] because I, obviously, I spent more time with Dreamers. So we&#8217;re making a point where you choose on behalf of your users because they&#8217;re meant to be consumers.</p><p>[00:33:07] So maybe less technical,</p><p>[00:33:08] <strong>David Singleton:</strong> right?</p><p>[00:33:08] <strong>swyx:</strong> But obviously people can, how users can override. If you read that&#8217;s, but it&#8217;s not just lms, it is also the, the transcription. It, it&#8217;s like all, like there&#8217;s, there&#8217;s a first party curated set of here&#8217;s the house opinion. That&#8217;s right. On what?</p><p>[00:33:21] <strong>David Singleton:</strong> That&#8217;s</p><p>[00:33:21] <strong>swyx:</strong> right. The thing is, that&#8217;s right.</p><p>[00:33:22] Is what&#8217;s the list? Is there like,</p><p>[00:33:24] <strong>David Singleton:</strong> yeah, so actually if you look in the tool gallery, the first party kind of curated set are all the ones that have these grayscale icons. So we have a built in tool for image understanding, for image generation, for RSS, exploration, text to speech and so forth.</p><p>[00:33:38] <strong>swyx:</strong> Recipes.</p><p>[00:33:39] <strong>David Singleton:</strong> Uh, we actually do have a built in recipes tool.</p><p>[00:33:41] It turns out that a lot of people in our alpha wanted to do stuff for cooking. Yeah. Um, and you know, you can scrape the web to get good recipes, but we were able to quite quickly find a good repository of recipes. It works great here. Yeah.</p><h2>[00:33:55] Stable Tool Interfaces</h2><p>[00:33:55] <strong>David Singleton:</strong> So the point behind these though is that we&#8217;ll keep the interfaces stable, so they&#8217;ll always work.</p><p>[00:34:00] But you know, the best translation model and, you know, there are people using this translation tool to translate Chinese podcasts into English. It&#8217;s, it&#8217;s pretty powerful. It can deal with very long text, but the best translation tool today might be different from the best translation tool sometime next year.</p><p>[00:34:15] And we&#8217;re just gonna make sure that that translation tool is always pretty close to state of the art. So you can build something and you know it&#8217;s gonna continue to work well. Of course, some of our tools are branded. You may actually have a preferred way of buying groceries, like maybe you prefer Instacart and that&#8217;s great.</p><p>[00:34:29] You can use the Instacart tool specifically.</p><p>[00:34:31] <strong>swyx:</strong> Yeah.</p><h2>[00:34:32] Partnerships And Ecosystem</h2><p>[00:34:32] <strong>swyx:</strong> Your partnerships, uh, I mean, I don&#8217;t know if you ever hit of partnerships, but this is gonna be a bonanza for anyone on to do deals.</p><p>[00:34:38] <strong>David Singleton:</strong> We have an amazing person who, uh, works on all of our partnerships. Um, and it&#8217;s part of what you have to do to build a platform like this that&#8217;s gonna work for people.</p><p>[00:34:46] Like, we&#8217;ve gone and done that. Schlep has a lot of work, one talks lots of different companies, um, in order to make sure that you&#8217;ve got good tools at the core.</p><p>[00:34:54] <strong>swyx:</strong> Yeah.</p><p>[00:34:54] <strong>David Singleton:</strong> And then of course, because we&#8217;re open to tool builders contributing to the platform, this is only gonna get better and better and [00:35:00] better.</p><p>[00:35:00] <strong>swyx:</strong> Yeah.</p><h2>[00:35:01] Agent Lab Routing Layer</h2><p>[00:35:01] <strong>swyx:</strong> One observation I have this, this is gonna master a thesis I&#8217;ve been pursuing, which is, uh, what I&#8217;ve been calling an agent lab</p><p>[00:35:05] <strong>David Singleton:</strong> mm-hmm.</p><p>[00:35:06] <strong>swyx:</strong> Where you sort of different than a model lab in, in, in the sense that you never train your own models, but you are the router evaluation layer, ex subject domain expert for choosing between, uh, models.</p><p>[00:35:18] <strong>David Singleton:</strong> Yeah.</p><p>[00:35:18] <strong>swyx:</strong> And you&#8217;re explicitly doing these things. And so like in my sort of construction, every agent lab does some version of this where like, here&#8217;s the image understanding endpoint and we will route for you and don&#8217;t worry about it. Yeah. Sally, I think it&#8217;s kind of cool.</p><p>[00:35:32] <strong>David Singleton:</strong> I, I think it makes total sense. Um, and again, to make this work for folks that don&#8217;t follow the AI news every day, it&#8217;s an actually, it&#8217;s a, it&#8217;s a really important thing to do.</p><p>[00:35:42] Yeah. And it, it&#8217;s been, it&#8217;s been a real pleasure. I mean, I&#8217;m a, I&#8217;m personally a total geek for this stuff. I love it. And being able to go and dive into all those details in order to make it work well for other people. It&#8217;s a true pleasure. I cannot imagine working at anything else right now. It&#8217;s just so much fun.</p><p>[00:35:56] <strong>swyx:</strong> The tricky part is multimodality when some of these things do [00:36:00] merge.</p><p>[00:36:00] <strong>David Singleton:</strong> Mm-hmm.</p><p>[00:36:01] <strong>swyx:</strong> And you are, you&#8217;re sort of, this is your imposing structure on things that fundamentally don&#8217;t want to be structured. And so sometimes that might work against you, but for 99% of these cases, this is fine.</p><p>[00:36:10] <strong>David Singleton:</strong> Yeah. I mean, I think it&#8217;s gonna be very interesting to see how the, the, the world matures because a lot of the power of dreamer is the ability to kick off these subagents, so these powerful agent harnesses, which can actually change how they work based on the data.</p><p>[00:36:25] I actually think that we will be able to. Kind of keep up with and stay at the forefront of the changing landscape of how tools and systems work together. And that&#8217;s, that&#8217;s new. You know, software didn&#8217;t used to work like this and now it does. Um, so even, even just figuring out how to design the right pri to make that possible has itself be a lot of fun.</p><h2>[00:36:44] Builders Can Publish Tools</h2><p>[00:36:44] <strong>swyx:</strong> This is, is a sort of maybe two part question that why can&#8217;t streamer make its own tools? And then why don&#8217;t you let you builders maybe stand up their own routing group? I call this a routing group, right? Like where it&#8217;s like collect Yeah. Things.</p><p>[00:36:58] <strong>David Singleton:</strong> So two things, to [00:37:00] some extent, dreamer does make its own tools in that agents appear to the system as tools.</p><p>[00:37:05] So they can be, they can be used to accomplish things. So you can build an agent that is essentially a tool. Yeah. Um, and it it,</p><p>[00:37:12] <strong>swyx:</strong> which is to me very useful for reuse.</p><p>[00:37:14] <strong>David Singleton:</strong> Right.</p><p>[00:37:14] <strong>swyx:</strong> Right. Exactly. &#8216;cause I, I like, this is the way I like it. Now my next five apps, I don&#8217;t want to do this whole series of back and forth again.</p><p>[00:37:20] <strong>David Singleton:</strong> Right.</p><p>[00:37:21] <strong>swyx:</strong> Yeah.</p><p>[00:37:21] <strong>David Singleton:</strong> Um. Then at the tool layer of the system, it&#8217;s open to anyone. So it&#8217;s actually quite powerful and flexible. So if you wanted to add a tool, which was, uh, imagine that you were training your own foundation model, Swyx. That might be fun. And imagine you wanted people to be able to play with, I don&#8217;t know, maybe you make like, you know, nano chat or whatever and you want to Yeah.</p><p>[00:37:42] Let people play with your own nano chat and see how I change themselves.</p><p>[00:37:44] <strong>swyx:</strong> Now.</p><p>[00:37:45] <strong>David Singleton:</strong> You could, you could publish a tool that is Nano Chat and it nano image generation behind a tool, and it could be your own writer if you wanted to. I see. And honestly, if that&#8217;s the kind of thing that gets you excited as a builder, please come and do it.</p><p>[00:37:57] Like we, we really are [00:38:00] believers in this idea that we aren&#8217;t going to figure out every single detail ourselves. We&#8217;re gonna make sure it&#8217;s a safe and fun place to build this stuff, but we&#8217;re really open to these ideas coming from other people. Um, and so I&#8217;d like nothing more than you come in and build a tool that does some of that cool stuff that you, that you have in mind.</p><p>[00:38:15] <strong>swyx:</strong> Yeah. Awesome.</p><p>[00:38:16] <strong>David Singleton:</strong> And just as a reminder, if you&#8217;d like to do that, the way to find the links is dreamer.com/latent space. Um, and for a limited time on that page, um, anyone who&#8217;s listening to this podcast will also get directly off of our wait list. Uh, it&#8217;s quite long right now. We are working hard to bring Zika.</p><p>[00:38:32] Wait, so skip the wait list.</p><p>[00:38:33] <strong>swyx:</strong> You know, I think, I think that&#8217;s fantastic. I, I think it&#8217;s, it is really sort of probuild way to do it. I wanted to jump back to the, the bar. Yeah. You know, you know, I get excited about this.</p><p>[00:38:41] <strong>David Singleton:</strong> Yes. Okay. Let&#8217;s set it back in there.</p><p>[00:38:43] <strong>swyx:</strong> Like, let&#8217;s, you know, this is the engineer podcast that&#8217;s get</p><p>[00:38:46] <strong>David Singleton:</strong> Yeah.</p><p>[00:38:46] <strong>swyx:</strong> As technical as you can.</p><p>[00:38:47] <strong>David Singleton:</strong> Yeah.</p><p>[00:38:47] <strong>swyx:</strong> On everything you&#8217;ve built, like have a show off.</p><p>[00:38:50] <strong>David Singleton:</strong> Yeah. Okay.</p><h2>[00:38:51] Under The Hood Debugging</h2><p>[00:38:51] <strong>David Singleton:</strong> So let&#8217;s go wild in the aisles in the Asian studio. So as you can see, over on the left here is a conversation with the sidekick where you ask it what to do and it will explain in English that anyone can understand what&#8217;s going on.</p><p>[00:39:03] But, um, if you want to pull back the covers and look under the hood, um, if you&#8217;re, uh, an engineer like me, then we have this, uh, this kind of debug drawer at the bottom. So you can see the full build logs here, but you can actually also dig in and see the files and prompts that have been generated. Uh, you can upload files from your computer in static files.</p><p>[00:39:24] Um,</p><p>[00:39:24] <strong>swyx:</strong> very important,</p><p>[00:39:25] <strong>David Singleton:</strong> uh, indeed. You can actually read the prompts that have been generated for you. We intentionally put an example in here just that you can see what the format looks like. And then, you know, we already looked at this one that was generated for this particular, um, app, but if you actually want to bring the code out of Dreamer and work on your own local machine, you can.</p><p>[00:39:45] So at the core of everything here is an SDK with a powerful command line interface and we built that first. It&#8217;s actually possible to build agents on Dreamer without talking to the sidekick. You can write code with your fingers on a keyboard if you want to. I know that&#8217;s very [00:40:00] antiquated, not, but actually this can be a lot of fun.</p><p>[00:40:02] So if you wanna pull it out onto your laptop, you can use our, our CLI and, uh, you can edit it in cursor or in cloud code. You know, you don&#8217;t have to use our sidekick. And the CLI actually has full access to the rest of the platform with you as the user. So, you know, obviously it is, uh, secure and privacy sensitive, and this is a way that, um, some of our most technical builders do build stuff on the platform.</p><p>[00:40:24] The really cool thing is the side cake. When it&#8217;s in coding mode, it uses exactly the same CLI. So the way it. Build stuff on Dreamer is using the same tools that you might as an engineer. Um, and that&#8217;s actually a very powerful abstraction because it turns out that the right way to give a lot of context to agents to use CLIs is to write great documentation.</p><p>[00:40:46] Make sure that all of the things that you could do are actually possible. And guess what? That makes it a delightful developer experience for real heroes as well.</p><p>[00:40:53] <strong>swyx:</strong> Yeah. So that&#8217;s pretty cool. We&#8217;ve been telling developers to do this and they ignore this until now they have to for content.</p><p>[00:40:58] <strong>David Singleton:</strong> I, I&#8217;ve been saying this for a [00:41:00] long time.</p><p>[00:41:00] Uh, we actually Stripe docs.</p><p>[00:41:02] <strong>swyx:</strong> I mean, come on. Absolutely. Come on.</p><p>[00:41:03] <strong>David Singleton:</strong> Absolutely. But actually, I was chatting with folks at Stripe last week and saying, Hey, you gotta make the Stripe CLI actually tell agents what they can do on Stripe because that way they&#8217;re gonna use more stuff on Stripe. I think this is a real trend for the entire industry.</p><p>[00:41:16] <strong>swyx:</strong> Yeah.</p><p>[00:41:16] <strong>David Singleton:</strong> So we, we&#8217;ve been doing that.</p><p>[00:41:17] <strong>swyx:</strong> To me, this, this download and, uh, GI push mm-hmm. Everything is complete confidence in that you&#8217;re not hacking it. Right. Because there&#8217;s other, let&#8217;s call them AI builder platforms that impose their stack on you and if you, if you, and so therefore they don&#8217;t allow you to do this because they cannot.</p><p>[00:41:34] Right. &#8216;cause they, they impose some degrees of freedom, uh, restrictions so that they can get it to work. Yours is a fully general like VM running the full code. Correct. Do whatever you want. Correct. Any language you want. Correct. Yeah.</p><p>[00:41:46] <strong>David Singleton:</strong> Correct. Well, in terms of language, if you use the SDK, you could build stuff in other languages.</p><p>[00:41:51] We&#8217;ve actually found that TypeScript is the best language for building these experiences. Yes. Because it&#8217;s strongly tight. So you find out at compile time if you&#8217;ve made mistakes [00:42:00] and there&#8217;s nothing better than getting in. A coding agent in a loop where it can see its mistakes and ask them. So TypeScript is the language that everything gets built in by default here.</p><p>[00:42:08] <strong>swyx:</strong> Did And did you see that TypeScript overtook Python? I did. I did. Yeah.</p><p>[00:42:12] <strong>David Singleton:</strong> And for what it&#8217;s worth, when we started the company, we started writing stuff in Python, and I love Python. Um, if I do, uh, a vendor code, I always write it in Python. It&#8217;s my favorite language as a developer with my fingers on the keyboard.</p><p>[00:42:23] Um, but TypeScript is an amazing language for AI because there&#8217;s tons of training data in the models, um, and it&#8217;s strongly tight. And actually at the company we built most of the stack in TypeScript, and we have this amazing property, which is, we have type safety all the way from the database to the front end.</p><p>[00:42:40] And there&#8217;s nothing better for working with coding agents than being able to have them check their correctness, compile time. So the same ideas behind building the company&#8217;s code base, we&#8217;ve put into the agent SDK here as well.</p><p>[00:42:51] <strong>swyx:</strong> Yeah. Do you know if you&#8217;d use one of those tools, like Prisma or whatever, or is it Tool Lab for you?</p><p>[00:42:55] <strong>David Singleton:</strong> We, we actually have crafted most of our own tools. Um. For [00:43:00] instance, we had LLM Driven Code Review, uh, before the thing that got published from philanthropic this week. You know, we, we&#8217;ve been doing this stuff, uh, on our own bat</p><p>[00:43:07] <strong>swyx:</strong> email, we&#8217;ll pay $25 per review.</p><p>[00:43:09] <strong>David Singleton:</strong> We, we pay a lot less than that. However, I hear that those reviews are excellent and possibly worth $25.</p><p>[00:43:14] <strong>swyx:</strong> Yeah. You know, it&#8217;s an option. Right. It&#8217;s good, good to have it.</p><p>[00:43:17] <strong>David Singleton:</strong> Just to give you a tour of some other stuff here. So, um, I can also see all the versions. Yeah. Um, this is not gi, this is not gi, this is built into dreamer. I can see all the versions that have been pushed before. Why is it</p><p>[00:43:27] <strong>swyx:</strong> not gi?</p><p>[00:43:28] <strong>David Singleton:</strong> It&#8217;s not gi because we can make it work more efficiently than Git.</p><p>[00:43:32] And we actually, we do some work behind the scenes to kind of understand what&#8217;s in each of these versions. Yeah. Um,</p><p>[00:43:37] <strong>swyx:</strong> so one of the things I&#8217;m pursuing, and I have a lot of thesis, right? Mm-hmm. One of the thesis is like, does GI go away? Does GitHub go away? And like, what, what is the active reinvent</p><p>[00:43:46] <strong>David Singleton:</strong> you for, for what it&#8217;s worth to some extent.</p><p>[00:43:48] And anything you build, there&#8217;s a lot of path dependency. If we started over, we might make this gi There&#8217;s, uh, you know, within the company we use, uh. For our, you know, platform source code. And we like it and it [00:44:00] works well with coding agents as well. The very first versions of this, we wanted to be able to make it possible for the sidekick to manipulate it easily.</p><p>[00:44:06] Um, and this, this was an expedient way to do it.</p><p>[00:44:08] <strong>swyx:</strong> Yeah.</p><h2>[00:44:08] Workflows Logs And Databases</h2><p>[00:44:08] <strong>David Singleton:</strong> Um, you can also see all the activity that has happened in the workflows that you build. A lot of agents, you&#8217;ll build on Dreamer, do things in the background, so they run on triggers. These are stimuli from the outside to kick them off, and this is a nice way to see all of the things that might have kicked off your agent.</p><p>[00:44:24] You know, you can have an agent that kicks off on a webhook, so you can plug it into external systems. You can have an agent that runs when you receive certain emails that match filters, including LLM filters. And so here you can see, oh, when did it run? What did it do? You know, if I open up one of these guide me prompts or guide me, uh, events.</p><p>[00:44:41] Oh my can see God. Well, I told you it was calling an LLM for every one of those time slots. Here&#8217;s all of the LLM calls, here&#8217;s the actual prompts.</p><p>[00:44:49] <strong>swyx:</strong> And you don&#8217;t mind exposing all of this, right?</p><p>[00:44:51] <strong>David Singleton:</strong> No. We want builders to see what&#8217;s going on under the hood. It&#8217;s haiku to,</p><p>[00:44:53] <strong>swyx:</strong> okay. Yeah. So,</p><p>[00:44:54] <strong>David Singleton:</strong> okay. Right now that one was haiku.</p><p>[00:44:56] Like I said, we work with all the models and sidekick will actually pick the best one [00:45:00] for the job. And you saw that was pretty high quality and pretty fast. So Haiku four five is the one that it picked for that job. Exactly. Uh, we also have logs, as I mentioned, there&#8217;s a database spun up on demand for every, uh, agent.</p><p>[00:45:12] You don&#8217;t have to go and figure out how to do your own hosting. This is a SQL Light. This is a SQL Light database. Yeah. Um, it&#8217;s a multi-user SQL light database. And then, uh, but, but each one is you, you get a database that is unique to this agent. But then if you share the agent with multiple people, we take care of like who are the owners in each row?</p><p>[00:45:31] And all of that stuff is just there outta the box. Um,</p><p>[00:45:34] <strong>swyx:</strong> and again, in-house?</p><p>[00:45:35] <strong>David Singleton:</strong> In-house.</p><p>[00:45:36] <strong>swyx:</strong> Oh my God.</p><p>[00:45:37] <strong>David Singleton:</strong> Yeah. Um, well we do work with a bunch of infrastructure providers, but the technology for how to manipulate this is in-house. Fun fact. We actually did a lot of our own infrastructure development early on at the company and realized we need to spend our energy in the stuff that we&#8217;re uniquely doing in the world.</p><p>[00:45:53] So we&#8217;re very delighted to partner with a bunch of great designer and some of this stuff. And then finally, um, I mentioned that agentic apps agents [00:46:00] expose all of their internals to the system so the psychic can manipulate them and use them just like a user can. So you can see how it&#8217;s decided to break this problem up into functions.</p><p>[00:46:09] Some of the functions, the ones with the little I here are exported. That means that there&#8217;s probably the visible from outside. Exactly. And others are internal. And if you want to, you can dig right in here and call individual functions and see what happens. But mostly. You don&#8217;t need to think about that at all.</p><p>[00:46:24] Yeah. Uh, you can keep that little drawer closed and you can talk to your sidekick and build really powerful and enchanting experiences.</p><p>[00:46:30] <strong>swyx:</strong> Yeah. I mean, to me, like showing this gives the engineer a complete mental model of what you&#8217;ve done and what you can do with it. Yeah. For example, the first thing I, I, I look for.</p><p>[00:46:39] A mental checklist of things, right? Like is off in the database, off looks like it&#8217;s not right. So that&#8217;s a separate layer. That&#8217;s probably me means it&#8217;s hard to do multi-user apps on the same app, right?</p><p>[00:46:50] <strong>David Singleton:</strong> So you actually, we&#8217;ve solved that. So, um, see, yes, the platform builds in off, so you as a user sign into the platform, if you&#8217;re using an [00:47:00] agent that was published by someone else, then your identity is, is kind of taken care of by the system.</p><p>[00:47:05] And when you query the database, you&#8217;re gonna get the stuff that is for you. Unless the builder specifically said, this is public data that everyone should see. So they, they actually get a chance to think about that. And again, sidekick can guide you through building, uh, agents and apps that work that way.</p><p>[00:47:19] So you&#8217;re right, that&#8217;s another thing that people have to think about when they&#8217;re trying to figure out how to build software experiences on Dreamer. You, it&#8217;s built in. You talk to the sidekick as if it were a human being about what you want and that&#8217;s what you get. So, you know, my, my Big Sky app that I just showed you that was designed for multiple people to use it.</p><p>[00:47:38] And of course the things that we were putting in as expenses were supposed to be visible to everybody, and I just told the sidekick that&#8217;s the way I wanted it. Uh, but by default, if I built an app like that, the data from each user would not been visible to the others.</p><p>[00:47:49] <strong>swyx:</strong> Yeah. Yeah. Uh, this is, I presume this is a mood question, but basically you&#8217;ve had to build your own coding agent, right?</p><p>[00:47:55] Which is sidekick slash whatever is in Inside Psychic. Obviously there&#8217;s a lot of [00:48:00] people with a lot of desire for cloud code and Code X and attachment to it. Mm-hmm. I know under the hood data basically reduced to a loop, but like, would you let people use cloud coding and Code X or is the harness too specialized?</p><p>[00:48:12] <strong>David Singleton:</strong> Yeah. If you, if you want to use, um, cloud code and Code X, then you go down here. Yeah. Hit get the S St K. And we even say this right here, edits your heart&#8217;s content Z cursor code.</p><p>[00:48:22] <strong>swyx:</strong> Like people want to use it inside of Ick, right? Yeah. They want to switch the engine.</p><p>[00:48:26] <strong>David Singleton:</strong> Yeah.</p><p>[00:48:26] <strong>swyx:</strong> That&#8217;s the coding engine.</p><p>[00:48:27] <strong>David Singleton:</strong> Yeah. We are not doing that right now.</p><p>[00:48:29] Um, you know, again, the goal really is abstract the complexity. Yeah. Um, because the real target for. Building agentic apps is folks who can&#8217;t do this already today. I can&#8217;t tell you how many users in our community I&#8217;ve spoken to who are like Dreamer has changed my life because I used to have all these ideas.</p><p>[00:48:50] If only I could find an engineer to help me implement them, I&#8217;d be able to get them done. They&#8217;re free, and now I can talk to my sidekick and, and get it built. I think that&#8217;s like really how we think [00:49:00] about the people that should get a ton of value and fun, um, out of the platform. And so they&#8217;re not asking to be able to plug in their their own, you know, coding agent.</p><p>[00:49:11] And for those folks, the opportunity is massive. If you&#8217;ve never been able to do stuff in code, now you can build stuff for you, for your friends, for your family, for your coworkers. And also there&#8217;s a huge opportunity for folks who do build stuff in code to actually contribute to this ecosystem. So that&#8217;s how we think about it.</p><p>[00:49:28] <strong>swyx:</strong> Yeah. Amazing.</p><h2>[00:49:28] Personalization And Memory</h2><p>[00:49:28] <strong>swyx:</strong> That&#8217;s most of what I wanted to cover Dreamer wise. I think personalization and memory yeah. Is probably like the single most important job of, uh, of the os. Maybe we could talk about that and then I&#8217;ll, I wanted to zoom out on company building stuff.</p><p>[00:49:40] <strong>David Singleton:</strong> Yeah, yeah. Sounds good.</p><p>[00:49:41] <strong>swyx:</strong> Yeah. So how do you handle memory?</p><p>[00:49:43] What, yeah, what have you found? What have you tried and failed?</p><p>[00:49:45] <strong>David Singleton:</strong> Yeah. Okay. So, uh, first of all, at the core of dreamer is the sidekick. The sidekick gets to know you and it builds up a memory about you over time, and that turns out to be very important. So Dreamer, that&#8217;s your moat. That&#8217;s Dreamer gets better the more you use it.[00:50:00]</p><p>[00:50:00] For instance, a lot of agents in the platform, when you start using them, the first thing that they&#8217;ll show you, here&#8217;s what I think is relevant to you for this particular use case. Uh, a very popular kind of agent on Dreamer is a weekend activity planner. So, um,</p><p>[00:50:14] <strong>swyx:</strong> like, just tell me what to do.</p><p>[00:50:15] <strong>David Singleton:</strong> Well, tell me what to do, especially if I&#8217;ve got kids, right?</p><p>[00:50:17] So I have two kids and a dog, and my wife and I often spend a lot of time trying to figure out what are we gonna do with the crew this weekend. And, you know, we have interests that are very consistent. It actually can take a ton of work during the week to figure this out. So there is an agent on Dreamer called Weekend Activity Planner, and it helps me find things to do with, with the family of the weekend.</p><p>[00:50:39] In fact, this morning I got a message from my weekend activity planner telling me about the St. Patrick&#8217;s Day parade on Saturday. Oh, at Civic Center. I&#8217;m Irish. My kids are technically Irish as well. I mean, they, they, they have multiple citizenships, but you know, they&#8217;re, they&#8217;re Irish. Um, what a better thing to do than take them to the St.</p><p>[00:50:56] Patrick&#8217;s Day parade. Why did that get recommended to me? Because in the [00:51:00] profile that we can, activity Planner knows about me. It knows that I&#8217;m Irish, right? So all of that comes from the memory that Psychic builds up about me over time. We have invested in this a bunch. We will continue to invest in this more.</p><p>[00:51:11] We&#8217;ve tried actually many different techniques. As, you know, the, the kind of, um, cutting edge of a agentic memory has changed over time. You know, very early on we were putting lots of facts into a vector database and, uh, and doing embeddings and pulling them back out, um, using, you know, reverse lookup of embeddings rag that actually worked, but turned out to be much more complexity than was actually required.</p><p>[00:51:33] So, you know, today we&#8217;ve replaced it with a different system. Uh, I think we use a system that&#8217;s pretty similar to what you&#8217;ll find in lots of other products, but it&#8217;s an area that we&#8217;re actively, uh, investing in. Like, there&#8217;s, there&#8217;s. More than one person at the company specifically working on memory. And so expect us to just continue to make it better.</p><p>[00:51:50] <strong>swyx:</strong> Did you try knowledge graphs?</p><p>[00:51:51] <strong>David Singleton:</strong> We&#8217;ve tried knowledge graphs. The system that we have now is not a knowledge graph. Yeah. Um, but we&#8217;ve probably implemented most of the papers you&#8217;ve seen out there on agent [00:52:00] memory and the current system is working pretty well.</p><p>[00:52:02] <strong>swyx:</strong> Yeah. Excellent. Zooming out just on the company stuff.</p><p>[00:52:06] Mm-hmm. Um, uh, this is your first time in the CEO seat. Correct. You were CTO before. Correct. What&#8217;s different?</p><p>[00:52:11] <strong>David Singleton:</strong> Yeah. The difference between being a CEO and A CTO really is just. Like making sure you&#8217;re looking across everything. So, um, I have run products before, so for instance, Android wear, you&#8217;re basically a CEO</p><p>[00:52:25] <strong>swyx:</strong> of</p><p>[00:52:25] <strong>David Singleton:</strong> that product.</p><p>[00:52:26] I, I, I was running that as a general manager.</p><p>[00:52:28] <strong>swyx:</strong> Yeah.</p><p>[00:52:29] <strong>David Singleton:</strong> However, when you do it for your own company and the buck truly stops with you, it definitely kind of raises the temperature a little bit. Um, but it&#8217;s been a lot of fun for me to think about a lot of go to market topics. Um, I spend a lot of my time these days meeting users, uh, talking to folks about what they could do on the platform, being very active on X and LinkedIn, uh, which by the way is a huge pleasure.</p><p>[00:52:51] It is so much fun to be able to engage with users and potential users directly and understand what they would like to do. Um, and that&#8217;s the biggest difference [00:53:00] between this role and being the CTO, um, of, uh, of a company. At the same time, I am someone who always likes to look for why are we doing this?</p><p>[00:53:10] Who are the people that. Benefit from it. So, you know, even as A-C-T-O-I was always paying a lot of attention to topics across the company. So I feel very grateful for all I learned in my previous roles that kind of got me ready to, to do this at this kind of scale.</p><p>[00:53:24] <strong>swyx:</strong> Yeah.</p><h2>[00:53:24] Tiny Teams Hiring And Taste</h2><p>[00:53:24] <strong>swyx:</strong> To me this is like the natural lead into when I went into your office.</p><p>[00:53:27] Yeah. It&#8217;s surprisingly small.</p><p>[00:53:28] <strong>David Singleton:</strong> Yes.</p><p>[00:53:29] <strong>swyx:</strong> So, and I have a, another thesis I&#8217;m pursuing for latent space, which is the emergence of tiny teams. Yeah. Where, uh, you know, the, the classic sort of image is that teams with more millions in revenue than employees, right? Yeah. So you, that&#8217;s revenue efficiency definition.</p><p>[00:53:43] But I do think as a CEO, you are going to run a smaller team than you used to.</p><p>[00:53:46] <strong>David Singleton:</strong> Yeah. So I believe very strongly in the power of small teams. So the more people you add to a team, the more communication overhead there is. And it doesn&#8217;t even grow linearly. If you think about it, the more people you add, everyone cares [00:54:00] about getting to know everybody else.</p><p>[00:54:01] And sharing what they&#8217;re doing with everybody else. And that&#8217;s great. I&#8217;m not saying they shouldn&#8217;t, right? The very, like, I wanna work in teams that are fun, where people are talking to each other and, and sharing ideas and so forth. But, you know, there&#8217;s just a kind of gravitational weight that comes from larger and larger teams.</p><p>[00:54:16] So just inherently large organizations are less nimble than small ones. And if you run a large organization, you have to keep thinking about how do I kinda like prune things so that it can act like a small team. So a dreamer, the, the core team that built everything I just showed you was, was honestly about six people.</p><p>[00:54:34] Uh, we&#8217;re larger than I we&#8217;re about 17 people at the company now because as, but</p><p>[00:54:38] <strong>swyx:</strong> still, uh, for everything you just showed,</p><p>[00:54:40] <strong>David Singleton:</strong> it&#8217;s, it&#8217;s still a small team, which is great. Very, very high talent density team. We&#8217;ve been very, very careful and kind of obsessed as we grew to make sure that everyone that&#8217;s joining the company is joining a team that they&#8217;re gonna get a lot of, uh, learning out of, but also they&#8217;re actually going to kind of.</p><p>[00:54:57] Help everyone else a lot as well. There&#8217;s something very [00:55:00] special about that too. You know, every single person at our company I would be delighted to do any project with at any time because, uh, they&#8217;re just all great. And, you know, the smaller you keep the team, the easier it is to make sure that, that that talent density is there as well.</p><p>[00:55:14] Of course, it&#8217;s a real luxury to be building a company. We started this company in late 24, but it&#8217;s a real luxury to be building a company today because we can build with agents. So we&#8217;re using coding agents.</p><p>[00:55:26] <strong>swyx:</strong> Yeah,</p><p>[00:55:26] <strong>David Singleton:</strong> we&#8217;re using Dreamer marketing agents. All of our operations. We&#8217;re looking at how we can, can actually accelerate what we&#8217;re doing, uh, using our own tools.</p><p>[00:55:36] <strong>swyx:</strong> Um, any, actually any agents that you don&#8217;t build that you wanna shout out? Just that, that you love?</p><p>[00:55:41] <strong>David Singleton:</strong> Yeah. Is it</p><p>[00:55:41] <strong>swyx:</strong> other people&#8217;s</p><p>[00:55:42] <strong>David Singleton:</strong> agents that we built for the</p><p>[00:55:43] <strong>swyx:</strong> company? No, no, no. Other people&#8217;s, uh, stuff like you shout out granola.</p><p>[00:55:46] <strong>David Singleton:</strong> Yeah. So I showed you Attain finance. Uh, attain Finance has an agent as well, which like helps you manage your money.</p><p>[00:55:53] I find this really amazing. So, um, I always have this like lingering feeling that I&#8217;ve got a whole bunch of [00:56:00] subscriptions that if I just had a bit of time to go across them, I could, you know, figure out how to consolidate them. And the person who built Attain Finance doesn&#8217;t work at our company. What they were part of the early Alpha group.</p><p>[00:56:10] So they gotta kind of look at how all this stuff works pretty early on. And they built this really amazing experience that actually helps you. Like, save a lot of money because it will kind of help you analyze your purchases. It&#8217;s almost like a kind of a financial fitness coach. He&#8217;s called Andrew, uh, who, who built it.</p><p>[00:56:26] He came and showed it to us and the first thing it did was it recommended that he should buy fewer burritos. And, uh, he was like, it&#8217;s true. Like that is actually how I could save the most money. So, uh, that&#8217;s a, that&#8217;s a pretty cool example.</p><p>[00:56:38] <strong>swyx:</strong> Uh, I noticed he was first. Because he&#8217;s order alphabetical order.</p><p>[00:56:43] So I&#8217;m, I&#8217;m wondering how come there are no like Avar? Uh,</p><p>[00:56:46] <strong>David Singleton:</strong> yeah. Well if you&#8217;re a builder right there and you&#8217;re wondering how do I seo o myself on the Dreamer platform, Swyx suggest you name your tool Avar. In all seriousness though, those are the tools I have connected. So they&#8217;re in alphabetical order.</p><p>[00:56:58] If you haven&#8217;t yet connected them, we actually [00:57:00] kind of put them in the right order for you. So if Sidekick understands you and puts in the right order, uh, but I&#8217;d say a arc is gonna come before, uh, anything else,</p><p>[00:57:06] <strong>swyx:</strong> right? Yeah, exactly. Um, and, and then I, I think how has hiring changed? Yeah. You&#8217;ve hired plenty of self engineers in your life.</p><p>[00:57:14] <strong>David Singleton:</strong> Mm-hmm.</p><p>[00:57:14] <strong>swyx:</strong> I assume something&#8217;s changed.</p><p>[00:57:15] <strong>David Singleton:</strong> Yeah, absolutely. So one of the main things that I look for now when hiring engineers is. How well do you work with coding agents? Our team actually is quite experienced a good number. Everyone at Dreamer, other than, well, I guess I write a lot of code too. Everyone&#8217;s an ic, an individual contributor.</p><p>[00:57:32] Many of the folks that work on the team have previously been managers. And it turns out being an engineering manager, as long as you stay very close to the code and are able to continue to craft it yourself, is actually a great skill profile for being able to make agents work for you and for your team in this, uh, in, in this age.</p><p>[00:57:50] And so that&#8217;s definitely something that we look for quite intently when hiring engineers. And, um, we still have folks write some code like with their fingers. It&#8217;s just important to know [00:58:00] that the kind of core of the craft is there. But the vast majority of what we spend time doing is building quite significant and elaborate stuff together in a fun, collaborative environment with coding agents.</p><p>[00:58:09] <strong>swyx:</strong> Right.</p><p>[00:58:09] <strong>David Singleton:</strong> Um,</p><p>[00:58:10] <strong>swyx:</strong> so what, what is the interview loop like? Sit there with Codex, do something.</p><p>[00:58:13] <strong>David Singleton:</strong> Yeah, I mean, our interview loop is one a coding. Screen to make sure that the, the base is there. And then we actually do a couple of short projects, uh, with an engineer on our team and whoever is thinking about joining, where we&#8217;ll actually put out a very fully formed product idea, we&#8217;ll riff on it together and make sure that we can test product sense a little bit and we&#8217;ll actually try to build the whole thing with x or cloud code or whatever, uh, whatever the person is most familiar with.</p><p>[00:58:39] Um, and watching how someone thinks about prompting the agents, what they do while the agent is working. &#8216;cause you know, you can actually, this is a kind of interesting, uh, dynamic in the industry. Anytime I&#8217;m working on code these days, I always have more than one agent going at the same time because while one agent is going and reviewing the output of the next one, and if you [00:59:00] get them in a nice round robin, you can be very, very productive.</p><p>[00:59:02] You can also chain agents together. You can have one agent producing code, another agent reviewing it. And actually just seeing how folks have adapted their workflow, um, is a big part of what we&#8217;re we&#8217;re looking for in our interview process.</p><p>[00:59:13] <strong>swyx:</strong> Amazing. I guess last question, but also open to you to bring up any topics that I haven&#8217;t touched on, have you wanted LLMs to do that they still cannot do today?</p><p>[00:59:23] <strong>David Singleton:</strong> That&#8217;s a great question. Um, and it&#8217;s amazing &#8216;cause the capabilities of LLM just, just advanced so quickly. You know, if you&#8217;d asked me a year ago, I would&#8217;ve said, well, you know, music generation, I, I like music. Um, and Suno is amazing by the way. And, but previous generations i&#8217;d, yeah, I can kind of tell that that&#8217;s AI generated today.</p><p>[00:59:42] I listened to the latest tracks made by Suno. I&#8217;m like, that&#8217;s, that&#8217;s pretty impressive. If we went back six months, I&#8217;d be asking for better image generation. The latest nano banana, uh, which by the way is a tool on the platform that you can use on Dreamer is producing spectacular infographics.</p><p>[00:59:58] Spectacular [01:00:00] painterly images when I ask for those as well. So, so that&#8217;s quite impressive. I still think I, so I think as we go forward into the future, there is still a lot of room for human creativity and so that&#8217;s also maybe where I&#8217;m going to have to say that LLMs are most lacking. So I think that when you think about building software, the thing that&#8217;s really important and that we all need to bring is taste.</p><p>[01:00:24] Mm-hmm. Right? You have to like actually truly understand people, their motivations. How do I build something that&#8217;s really delightful? So, you know, we had to do a lot of work on Dreamer to make it possible for the experiences that we build to not look like AI generic slop.</p><p>[01:00:43] <strong>swyx:</strong> Right? We go,</p><p>[01:00:44] <strong>David Singleton:</strong> um. And we&#8217;ve done that by putting a lot of our own taste into the templates and the prompts and the, the harness.</p><p>[01:00:52] Um, so I hope you have fun playing with it. I, I, I think Dreamer today generates experiences that don&#8217;t feel super generic, but that&#8217;s a ton of [01:01:00] work, right? The LMS do not do that by default. And in fact, I, if I see a, if you ask for a simple like to-do list app or something, uh, built by the models, I can tell which model built it just by kind of how it looks.</p><p>[01:01:10] So, um, taste, creativity, sense of individuality is still something that I think the LLMs are not producing out of the box. And I think that&#8217;s gonna be an interesting frontier. What do you think?</p><p>[01:01:21] <strong>swyx:</strong> Usually that&#8217;s, this is by, uh, from builder to researcher question. &#8216;cause uh, we do have researchers listening.</p><p>[01:01:27] Yeah. And I&#8217;m just like, well, that&#8217;s it. But like soft taste for me please is, is like a very broad topic. Uh, what do I think? I mean, I agree. I just think that it&#8217;s too big of a topic to break down. Mm-hmm. Particularly because there&#8217;s a lot of, I&#8217;ll know it when I see it type, uh, eval, which is unverifiable for, for researchers to do so.</p><p>[01:01:45] <strong>David Singleton:</strong> Yeah, I mean I, I do talk to researchers quite often and, uh, we talk about this topic and I think most people agree</p><p>[01:01:51] <strong>swyx:</strong> uhhuh</p><p>[01:01:52] <strong>David Singleton:</strong> that, you know, one of the great things about building models to generate code was just, you know, it&#8217;s so verifiable. So Yeah. Um, you know, it&#8217;s [01:02:00] very tractable and they agree that the next problem is how do you kind of step up that hierarchy of needs and get into these taste questions.</p><p>[01:02:08] And quantifying taste is hard, but I&#8217;m actually kind of excited that some people are gonna start doing this. And you know, that&#8217;s when I think that some of the really iconic companies in the world will start to become places where, you know, folks really try to like. Debug and understand the creative process.</p><p>[01:02:23] And I get pretty excited about that.</p><p>[01:02:25] <strong>swyx:</strong> Yeah. Uh, I, I think we are slowly uncovering what intelligence really means and, and the, the benchmarks that we adopt and then abandon because they&#8217;re solved is, is basically us evolving the machine intelligence in the way that we, the different way than we evolved, but we are slowly understanding what it means to be intelligent.</p><p>[01:02:44] Right. And, uh, and it&#8217;s, it&#8217;s interesting. I wonder how they suppress us in the future, but like, we&#8217;re not even there yet. We&#8217;re just like, get, get it to a place where we like what we get. Mm-hmm. From the machinist sometimes. You know, it used to be 30%, now it&#8217;s like 95%, but still there&#8217;s that 5%. [01:03:00] That&#8217;s right.</p><p>[01:03:00] Yeah. Any other topics we should have touched on?</p><p>[01:03:02] <strong>David Singleton:</strong> No, I think we&#8217;ve covered everything, but I wanna remind everyone,</p><p>[01:03:06] <strong>swyx:</strong> ct</p><p>[01:03:06] <strong>David Singleton:</strong> dreamer.com/latent space.</p><p>[01:03:09] <strong>swyx:</strong> Yes. No, it&#8217;s a, it&#8217;s a very good deal. I mean, like, come on. Like, yeah. So thank you for offering that.</p><p>[01:03:14] <strong>David Singleton:</strong> Cool. Well Swyx, thank you so much. This was fun.</p><p>[01:03:16] <strong>swyx:</strong> Yeah, thank you.</p><p>[01:03:17] Uh, we, we&#8217;ll get Alejandro to put like flashing neon signs on the, on the YouTube. Cool. Wonderful. Um, alright. Thanks. So my cool,</p><p>[01:03:23] <strong>David Singleton:</strong> awesome, thank you.</p>]]></content:encoded></item><item><title><![CDATA[[AINews] Every Lab serious enough about Developers has bought their own Devtools]]></title><description><![CDATA[OpenAI buys Astral, Anthropic buys Bun, Google DeepMind bought the Antigravity team.]]></description><link>https://www.latent.space/p/ainews-every-lab-serious-enough-about</link><guid isPermaLink="false">https://www.latent.space/p/ainews-every-lab-serious-enough-about</guid><pubDate>Fri, 20 Mar 2026 07:15:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/qaJXBMwUkoE" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The news today of <a href="https://news.ycombinator.com/item?id=47438723">OpenAI acquiring Astral</a> completes a loop first opened by GDM when they bought <a href="https://news.smol.ai/issues/25-07-24-cogsurf-cursor">what became the Antigravity team last July</a>, and then<a href="https://news.ycombinator.com/item?id=46124267"> Anthropic&#8217;s purchase of Bun last December</a>. Astral joins OpenClaw and (to a lesser extent) gpt-oss and Whisper in OpenAI&#8217;s growing list of top tier open source AI projects.</p><p>This comes against the backdrop of Fidji Simo explicitly <a href="https://x.com/berber_jin1/status/2033694982943694988">dropping &#8220;side quests&#8221;</a> like <strong>Shopping</strong> (with <a href="https://x.com/negligible_cap/status/2034369496543305971?s=46">key partner Walmart reporting awful conversion</a> about 1/3 of click-outs) and prioritizing <strong>Enterprise</strong> (<a href="https://x.com/fidjissimo/status/2033537381907710092">Frontier Alliances</a>) and <strong>Coding</strong> (Astral), and now <a href="https://x.com/fidjissimo/status/2034769466433913082">unifying ChatGPT and Codex apps</a> into one &#8220;superapp&#8221; &#8212; <a href="https://www.latent.space/p/ainews-ai-vs-saas-the-unreasonable">something we have predicted</a> but is now explicitly being prioritized at the highest levels.</p><p>If we got one thing wrong in <a href="https://www.latent.space/p/ai-engineer">Rise of the AI Engineer</a> 3 years ago, it is the importance of the role of code. Back then we framed the &#8220;1+2=3&#8221; thesis - that LLM-powered software would be capable of much more than either LLMs or software would alone - essentially presaging what would now be called harness engineering. But we almost <strong>completely missed</strong> the importance of the recursive nature of agentic coding improving agent/LLM training, which has been called out all over from Claude Code to <a href="https://www.latent.space/p/ainews-minimax-27-glm-5-at-13-cost">MiniMax 2.7</a> as a key element of acceleration and importance in the labs. (we did end with a poignant &#8220;<em>As human Engineers learn to harness AI, AIs will increasingly do Engineering as well, until a distant future when we look up one day and can no longer tell the difference.</em>&#8221; &#8212; perhaps a half point there, but negative points on underestimating the importance and immediacy). Thankfully by the time the first AIE Summit came around, I was fully allocating 1/3 of the weight of AI Engineering to agentic coding:</p><div id="youtube2-qaJXBMwUkoE" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;qaJXBMwUkoE&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/qaJXBMwUkoE?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p></p><blockquote><p>AI News for 3/18/2026-3/19/2026. We checked 12 subreddits, <a href="https://twitter.com/i/lists/1585430245762441216">544 Twitters</a> and no further Discords. <a href="https://news.smol.ai/">AINews&#8217; website</a> lets you search all past issues. As a reminder, <a href="https://www.latent.space/p/2026">AINews is now a section of Latent Space</a>. You can <a href="https://support.substack.com/hc/en-us/articles/8914938285204-How-do-I-subscribe-to-or-unsubscribe-from-a-section-on-Substack">opt in/out</a> of email frequencies!</p></blockquote><div><hr></div><h1><strong>AI Twitter Recap</strong></h1><p><strong>AI Coding Agents, Developer Tooling, and the Race to Own the IDE</strong></p><ul><li><p><strong>Cursor&#8217;s Composer 2 looks like the day&#8217;s biggest developer-model launch</strong>: <a href="https://x.com/cursor_ai/status/2034668943676244133">@cursor_ai</a> released <strong>Composer 2</strong>, positioning it as a frontier-class coding model with major cost reductions. Cursor says quality gains came from its <strong>first continued pretraining run</strong> feeding a stronger base into RL (<a href="https://x.com/cursor_ai/status/2034668950240329837">details</a>). Third-party reactions emphasized both price/perf and benchmark competitiveness: <a href="https://x.com/kimmonismus/status/2034667869816979645">@kimmonismus</a> highlighted <strong>$0.50/M input</strong> and <strong>$2.50/M output</strong> with reported scores of <strong>61.3 on CursorBench</strong>, <strong>61.7 on Terminal-Bench 2.0</strong>, and <strong>73.7 on SWE-bench Multilingual</strong>; <a href="https://x.com/mntruell/status/2034729462211002505">@mntruell</a> framed Cursor as a new kind of company combining API models with <strong>domain-specific in-house models</strong>. The launch also included an <strong>early alpha UI</strong> at <a href="https://x.com/cursor_ai/status/2034719920710103452">Glass</a>, with commentary from <a href="https://x.com/theo/status/2034780545134256205">@theo</a> that the industry will likely converge on this more agent-native UX. Several engineers also noted the training/infra story: <a href="https://x.com/ellev3n11/status/2034778708163404102">@ellev3n11</a> said the RL run was distributed across <strong>3&#8211;4 clusters worldwide</strong>, and <a href="https://x.com/amanrsanger/status/2034704792925479356">@amanrsanger</a> said the ~<strong>40-person</strong> team is focused exclusively on software engineering tasks.</p></li><li><p><strong>OpenAI moves down-stack with Astral; Anthropic expands Claude Code&#8217;s surface area</strong>: <a href="https://x.com/charliermarsh/status/2034623222570783141">@charliermarsh</a> announced that <strong>Astral</strong>&#8212;the team behind <strong>uv, ruff, and ty</strong>&#8212;is joining OpenAI&#8217;s Codex team; <a href="https://x.com/gdb/status/2034662275391320472">@gdb</a> confirmed the deal from OpenAI&#8217;s side. The acquisition was broadly read as OpenAI strengthening its developer platform moat through ownership of foundational Python tooling; see <a href="https://x.com/Yuchenj_UW/status/2034661120599101498">@Yuchenj_UW</a> and <a href="https://x.com/simonw/status/2034672725088997879">Simon Willison&#8217;s commentary</a>. In parallel, Anthropic expanded <strong>Claude Code</strong> with <strong>channels</strong> so developers can interact via messaging apps, starting in research preview (<a href="https://x.com/neilhtennek/status/2034762196576805123">announcement</a>, <a href="https://x.com/neilhtennek/status/2034762489951658190">docs</a>). The product direction is notable: both OpenAI and Anthropic are pushing beyond &#8220;model API&#8221; toward persistent developer workflows and ambient agent access.</p></li></ul><p><strong>Agents, Multi-Agent Runtimes, and Enterprise Agent Control Planes</strong></p><ul><li><p><strong>The center of gravity is shifting from single agents to managed fleets, runtimes, and agent operating systems</strong>: <a href="https://x.com/LangChain/status/2034679590250258855">@LangChain</a> launched <strong>LangSmith Fleet</strong>, an enterprise workspace for creating and managing a <strong>fleet of agents</strong> with memory, tools, permissions, and channel integrations; repeated themes across the launch were <strong>agent identity</strong>, <strong>credential management</strong>, sharing controls, Slack exposure, and auditability (<a href="https://x.com/LangChain/status/2034694530478612777">overview</a>, <a href="https://x.com/Vtrivedy10/status/2034690067839521114">additional framing</a>). This lines up with broader discourse that &#8220;agent&#8221; is no longer a useful abstraction by itself: <a href="https://x.com/YuvalinTheDeep/status/2034624197528269085">@YuvalinTheDeep</a> argued the right metaphor is an <strong>AI operating system</strong> that allocates work, resources, and execution contexts. Complementary launches reinforced this stack-level view: <a href="https://x.com/cognition/status/2034679897084264659">@cognition</a> added <strong>teams of Devins</strong>, where Devin decomposes work and delegates to parallel Devins in separate VMs; <a href="https://x.com/lvwerra/status/2034666400007016590">@lvwerra</a> released <strong>AgentUI</strong>, a multi-agent interface coordinating code, search, and multimodal specialists; and <a href="https://x.com/hrishioa/status/2034666470932922745">@hrishioa</a> argued long-horizon agentic work now requires a dedicated runtime with <strong>checkpointing, rollback, provider-specific harness switching, and execution repair</strong>.</p></li><li><p><strong>Security and permissions are becoming first-class design constraints for agent systems</strong>: a recurring thread across launches was that production agent deployment is bottlenecked less by &#8220;can the model do it?&#8221; and more by <strong>permissions, blast radius control, and observability</strong>. <a href="https://x.com/swyx/status/2034667846505214295">@swyx</a> highlighted <strong>identity-based authorization</strong> as the emerging consensus for AI security, and <a href="https://x.com/baseten/status/2034649896523874356">@baseten</a> described <strong>NemoClaw</strong> as NVIDIA&#8217;s answer to OpenClaw-style safety concerns with <strong>zero permissions by default</strong>, sandboxed subagents, and infra-enforced private inference. LangChain&#8217;s Fleet launch also heavily emphasized permissioning and audit trails. The throughline: agent stacks are maturing into something much closer to enterprise software infrastructure than chatbot wrappers.</p></li></ul><p><strong>Model Releases, Benchmarks, and Retrieval/Reasoning Results</strong></p><ul><li><p><strong>MiniMax M2.7 is being positioned as a practical agent model rather than a pure &#8220;frontier giant&#8221;</strong>: MiniMax teased a deeper technical livestream with OpenClaw around <strong>self-evolution</strong> and infrastructure for <strong>100,000 running clusters</strong> (<a href="https://x.com/MiniMax_AI/status/2034520321466978488">announcement</a>), while early usage reports stressed improved <strong>emotional intelligence</strong>, <strong>character consistency</strong>, and strong agentic workflows (<a href="https://x.com/MiniMax_AI/status/2034528945962696948">MiniMax note</a>). More technical third-party evaluation from <a href="https://x.com/ZhihuFrontier/status/2034543142234628318">ZhihuFrontier</a> said M2.7 keeps overall performance roughly on par with the previous generation but upgrades <strong>instruction following</strong>, <strong>context hallucination handling</strong>, and <strong>large-code / multi-round dialogue</strong> behavior, albeit with slightly worse <strong>hard reasoning</strong> and higher token consumption. Integration momentum was immediate: <a href="https://x.com/Teknium/status/2034658808870621274">@Teknium</a> added M2.7 to <strong>Hermes Agent</strong>, and users reported better long-running agent behavior than OpenClaw in some workflows (<a href="https://x.com/populartourist/status/2034653545287348266">example</a>).</p></li><li><p><strong>Qwen 3.5 Max Preview and retrieval-centric systems posted notable leaderboard movement</strong>: <a href="https://x.com/arena/status/2034653740465336407">@arena</a> reported <strong>Qwen 3.5 Max Preview</strong> reaching <strong>#3 in Math</strong>, <strong>Top 10 in Arena Expert</strong>, and <strong>Top 15 overall</strong>, with particularly large gains versus prior Max variants in text, writing, and math (<a href="https://x.com/arena/status/2034658045113065603">breakdown</a>); <a href="https://x.com/Alibaba_Qwen/status/2034658901321560549">@Alibaba_Qwen</a> confirmed more optimization is coming. Meanwhile, one of the most technically interesting result clusters was around <strong>late interaction retrieval</strong>: <a href="https://x.com/antoine_chaffin/status/2034649565614272925">@antoine_chaffin</a> claimed <strong>BrowseComp-Plus</strong> is now near <strong>90% solved</strong> using <strong>Reason-ModernColBERT</strong>, a <strong>150M</strong> model that outperformed systems up to <strong>54&#215; larger</strong> on deep research-style retrieval. Multiple follow-ups from <a href="https://x.com/lateinteraction/status/2034651175023157550">@lateinteraction</a> and others argued this is not a one-off but another strong signal that <strong>multi-vector / late-interaction retrieval</strong> is systematically outperforming dense single-vector approaches in reasoning-intensive search.</p></li></ul><p><strong>Multimodal Models, OCR, Document Parsing, and Creative Tools</strong></p><ul><li><p><strong>A strong crop of document/OCR tooling shipped, spanning model-based and model-free approaches</strong>: <a href="https://x.com/nathanhabib1011/status/2034565076963991910">@nathanhabib1011</a> flagged <strong>Chandra OCR 2</strong> as a new <strong>SOTA OCR</strong> release with <strong>85.9% on olmOCR bench</strong>, <strong>90+ languages</strong>, a <strong>4B</strong> parameter model, and support for handwriting, math, forms, tables, and image caption extraction. Separately, <a href="https://x.com/skalskip92/status/2034658568117309600">@skalskip92</a> highlighted <strong>GLM-OCR 0.9B</strong> as a small OCR model reportedly beating Gemini on OCR benchmarks. On the parsing side, LlamaIndex open-sourced <strong>LiteParse</strong>, a local, layout-aware parser for PDFs, Office docs, and images with <strong>zero Python dependencies</strong>, built-in OCR options, spatial layout preservation, and explicit targeting at <strong>agent pipelines</strong> (<a href="https://x.com/llama_index/status/2034661997644808638">launch</a>, <a href="https://x.com/jerryjliu0/status/2034665976428724267">expanded post</a>). This is a useful split in the stack: high-end OCR/VLMs for difficult pages, plus lightweight local parsers for the common case.</p></li><li><p><strong>Image/video and world-model work keeps accelerating, but the interesting part is latency and deployability</strong>: Google rolled out a significantly upgraded <strong>AI Studio</strong> &#8220;vibe coding&#8221; experience with a new <strong>Antigravity</strong> coding agent plus <strong>Firebase</strong> integrations, enabling multiplayer apps, backend services, auth, and persistent builds (<a href="https://x.com/GoogleAIStudio/status/2034655113961455651">Google AI Studio post</a>, <a href="https://x.com/Google/status/2034658419202744614">Google summary</a>). On imaging, Microsoft launched <strong>MAI-Image-2</strong>, which debuted at <strong>#5</strong> on the Image Arena and posted large subcategory gains over MAI-Image-1, especially in <strong>text rendering</strong> and <strong>portraits</strong> (<a href="https://x.com/arena/status/2034660389284360585">arena ranking</a>, <a href="https://x.com/MicrosoftAI/status/2034661558492557386">Microsoft announcement</a>). For vision/video understanding, <a href="https://x.com/skalskip92/status/2034606226902827228">@skalskip92</a> showed <strong>MolmoPoint</strong> doing point-based multi-object tracking directly from a VLM, distinct from segmentation-first approaches like SAM. And <a href="https://x.com/kimmonismus/status/2034659158843072893">@kimmonismus</a> made a useful systems point: sub-<strong>100ms</strong> prompt-to-output loops in generative media may matter more than raw model quality for real production workflows.</p></li></ul><p><strong>Training, Architectures, Inference, and Systems Research</strong></p><ul><li><p><strong>Continued pretraining and RL environment quality are re-emerging as core competitive levers</strong>: Composer 2&#8217;s team explicitly attributed gains to <strong>continued pretraining before RL</strong> (<a href="https://x.com/cursor_ai/status/2034668950240329837">Cursor</a>), and several researchers argued this pattern will become more common for specialized models (<a href="https://x.com/code_star/status/2034672762263060562">@code_star</a>, <a href="https://x.com/cwolferesearch/status/2034713982515179672">@cwolferesearch</a>). Relatedly, <a href="https://x.com/pratyushmaini/status/2034653569706811782">@pratyushmaini</a> introduced the <strong>&#8220;Finetuner&#8217;s Fallacy&#8221;</strong>: early training data leaves a durable imprint on model representations that later finetuning struggles to undo. On the systems side, <a href="https://x.com/skypilot_org/status/2034681533051855173">@skypilot_org</a> scaled Karpathy-style autoresearch over a K8s GPU cluster, running <strong>~910 experiments in 8 hours</strong> instead of ~96 sequentially, an example of infrastructure directly changing the shape of automated research loops.</p></li><li><p><strong>Architecture exploration remains lively beyond standard transformers</strong>: <a href="https://x.com/MayankMish98/status/2034681226217595333">@MayankMish98</a> released <strong>M&#178;RNN</strong>, revisiting <strong>non-linear recurrence with matrix-valued states</strong> for scalable language modeling; <a href="https://x.com/tri_dao/status/2034696258938708438">@tri_dao</a> noted nonlinear RNN layers appear to add something distinct from attention and linear SSMs. NVIDIA&#8217;s <strong>Nemotron 3</strong> stack also drew attention for mixing <strong>Transformer + Mamba 2</strong>, <strong>MoE/LatentMoE</strong>, <strong>multi-token prediction</strong>, and <strong>NVFP4</strong> precision in service of lower inference costs and long-context agent workloads (<a href="https://x.com/TheTuringPost/status/2034668980892479993">summary</a>). At the infra layer, <a href="https://x.com/rachpradhan/status/2034576637359161365">@rachpradhan</a> reported <strong>TurboAPI</strong> reaching <strong>150k req/s</strong>, claiming <strong>22&#215; FastAPI</strong> throughput after a day of optimization, while <a href="https://x.com/baseten/status/2034681788724019700">@baseten</a> launched the <strong>Baseten Delivery Network</strong> to reduce large-model cold starts by <strong>2&#8211;3&#215;</strong>.</p></li></ul><p><strong>Top tweets (by engagement)</strong></p><ul><li><p><strong>OpenAI acquires Astral</strong>: <a href="https://x.com/charliermarsh/status/2034623222570783141">@charliermarsh</a> announced Astral joining OpenAI&#8217;s Codex team, one of the clearest signals that AI labs now see ownership of core developer tooling as strategic.</p></li><li><p><strong>Cursor Composer 2 launch</strong>: <a href="https://x.com/cursor_ai/status/2034668943676244133">@cursor_ai</a> had the highest-engagement technical product launch in the set, reflecting how central coding-model price/performance has become.</p></li><li><p><strong>Google AI Studio&#8217;s upgraded vibe coding stack</strong>: <a href="https://x.com/GoogleAIStudio/status/2034655113961455651">@GoogleAIStudio</a> and <a href="https://x.com/OfficialLoganK/status/2034656376450908203">@OfficialLoganK</a> drove major engagement around full-stack app generation with persistent builds, multiplayer, and backend integrations.</p></li><li><p><strong>LlamaIndex LiteParse</strong>: <a href="https://x.com/jerryjliu0/status/2034665976428724267">@jerryjliu0</a> resonated strongly, suggesting continued demand for practical, local-first parsing infrastructure for agent pipelines.</p></li><li><p><strong>Late interaction retrieval on BrowseComp-Plus</strong>: <a href="https://x.com/antoine_chaffin/status/2034649565614272925">@antoine_chaffin</a> posted one of the more important benchmark results of the day: a <strong>150M</strong> late-interaction retriever pushing a hard deep-research benchmark toward <strong>90%</strong>.</p></li></ul><div><hr></div><h1><strong>AI Reddit Recap</strong></h1><h2><strong>/r/LocalLlama + /r/localLLM Recap</strong></h2><h3><strong>1. Model and Benchmark Announcements</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1rwvn6h/minimaxm27_announced/">MiniMax-M2.7 Announced!</a></strong> (Activity: 1078): <strong>The image presents a comparative analysis of the newly announced MiniMax-M2.7 model against other models like M2.5, Gemini 31 Pro, Sonnet 4.6, Opus 4.6, and GPT 5.4 across various benchmarks such as SWE Bench Pro, VIBE-Pro, and MM-ClawBench. MiniMax-M2.7 is highlighted in red and demonstrates superior performance in several categories. The model&#8217;s development emphasizes autonomous iteration, where it optimizes its performance through iterative cycles of analysis, planning, modification, and evaluation, achieving a </strong><code>30% performance improvement</code><strong> on internal evaluation sets. This process includes optimizing sampling parameters and enhancing workflow guidelines, indicating a shift towards fully automated AI self-evolution.</strong> One commenter highlights the importance of real-world usability over benchmark performance, expressing skepticism about models that excel in evaluations but may not perform well in practical applications. Another comment humorously notes the rapid pace of new model releases, expressing excitement and anticipation for future developments.</p><ul><li><p>Recoil42 highlights the autonomous iteration capabilities of the MiniMax-M2.7 model, which can optimize its own performance through iterative cycles. The model autonomously analyzes failure paths, plans changes, modifies code, and evaluates results, achieving a 30% performance improvement on internal evaluation sets. This process includes optimizing sampling parameters and enhancing workflow guidelines, indicating a move towards fully automated AI self-evolution.</p></li><li><p>Specialist_Sun_7819 raises a critical point about the discrepancy between benchmark performance and real-world usability. They emphasize that many models excel in evaluations but struggle with tasks that deviate from the training distribution. This comment underscores the importance of user testing to validate the practical effectiveness of models like MiniMax-M2.7.</p></li><li><p>Lowkey_LokiSN expresses concern about the model&#8217;s quantization resistance, referencing issues with the previous M2.5 model&#8217;s UD-Q4_K_XL variant. Quantization can affect model performance, and improvements in this area would be crucial for maintaining the integrity of MiniMax-M2.7&#8217;s capabilities when deployed in resource-constrained environments.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1rwy5sl/omnicoderclaude46opusuncensoredgguf/">Omnicoder-Claude-4.6-Opus-Uncensored-GGUF</a></strong> (Activity: 397): <strong>The post introduces the OmniClaw model, crafted from real Claude Code / Codex sessions using the DataClaw dataset, and available on <a href="https://huggingface.co/LuffyTheFox/OmniClaw-Claude-4.6-Opus-Uncensored-GGUF">Hugging Face</a>. The Omnicoder model, distilled by Claude Opus, and the OmniRP model for creative writing, are also presented. All models are uncensored and use </strong><code>Q8_0</code><strong> quantization due to quality issues with other quants. The models were merged using a Python script available on <a href="https://pastebin.com/xEP68vss">Pastebin</a>, maintaining GGUF header and metadata for compatibility. The Omnicoder model was created by merging several models, including Jackrong&#8217;s and HauhauCS&#8217;s Qwen 3.5 9B models, Tesslate&#8217;s Omnicoder, and Bartowski&#8217;s Qwen 3.5-9B as a base. The OmniClaw and OmniRP models were further merged with models from empero-ai and nbeerbower, respectively. The post claims these models represent the best in Uncensored General Intelligence (UGI) for small 9B models based on the Qwen 3.5 9B architecture.</strong> A comment highlights a benchmark test on the Omnicoder 9B model, noting a <code>5.3%</code> pass@1 and <code>29.3%</code> pass@2 success rate on the Aider benchmark, with a runtime of <code>402 seconds</code> per problem, suggesting skepticism about the effectiveness of Claude distillation in improving Omnicoder&#8217;s performance.</p><ul><li><p>grumd provides a detailed benchmark comparison between Qwen3.5 35B-A3B and Omnicoder 9B using the Aider benchmark, which consists of 225 hard coding problems. Qwen3.5 35B-A3B achieved a <code>26.7% pass@1</code> and <code>54.7% pass@2</code>, taking <code>95 seconds</code> per problem on average. In contrast, Omnicoder 9B, after completing 75 problems, has a <code>5.3% pass@1</code> and <code>29.3% pass@2</code>, with a significantly longer average time of <code>402 seconds</code> per problem. This highlights a substantial performance gap between the models, particularly in efficiency and accuracy.</p></li><li><p>grumd expresses skepticism about the potential for Claude distillation to resolve Omnicoder&#8217;s performance issues, suggesting that the current results are not promising. The comparison with Qwen3.5 9B is anticipated to provide further insights into whether the performance issues are inherent to Omnicoder or if they can be mitigated through model adjustments or distillation techniques.</p></li><li><p>jack-in-the-sack raises a question about model interchangeability, specifically whether Claude Code can be replaced with Omnicoder. This reflects a common concern in the community about the compatibility and performance trade-offs when switching between different AI models, especially in specialized tasks like coding.</p></li></ul></li></ul><h3><strong>2. Hardware and Setup for AI Models</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1rwwqbm/my_company_just_handed_me_a_2x_h200_282gb_vram/">My company just handed me a 2x H200 (282GB VRAM) rig. Help me pick the &#8220;Intelligence&#8221; ceiling.</a></strong> (Activity: 854): <strong>The user has access to a server with dual Nvidia H200 GPUs, each with </strong><code>141GB HBM3e</code><strong>, totaling </strong><code>282GB VRAM</code><strong>. They are tasked with testing large language models (LLMs) for local coding tasks, including code completion, generation, and reviews. A suggested model is Qwen 3.5 397B using </strong><code>vLLM</code><strong> for efficient context handling at </strong><code>Q4</code><strong> quantization. It&#8217;s recommended to avoid models like </strong><code>ollama</code><strong> or </strong><code>llama.cpp</code><strong> due to their poor handling of batched inference, which is crucial for concurrent coding tasks. Instead, </strong><code>vLLM</code><strong> or </strong><code>sglang</code><strong> are suggested for better stability and performance in multi-user environments.</strong> One commenter emphasizes the importance of defining clear goals and outcomes before experimenting to ensure continued access to the hardware. Another shares a negative experience with <code>ollama</code>, citing instability and poor performance, and recommends <code>vLLM</code> for its stability and suitability for multi-user environments.</p><ul><li><p>Zyj suggests using <code>vLLM</code> with the <code>Qwen 3.5 397B</code> model, which should allow for a significant context window at <code>Q4</code> precision. This recommendation is based on the available VRAM and the need to balance model size with context capabilities.</p></li><li><p>TUBlender advises against using <code>ollama</code> or <code>llama.cpp</code> for setups requiring batched inference due to their poor handling of concurrent requests. They share personal experience with <code>ollama</code> serving <code>qwen2.5 72b</code>, which resulted in instability and crashes, recommending <code>vllm</code> or <code>sglang</code> as more stable alternatives for multi-user environments.</p></li><li><p>Mikolai007 warns against using models that max out the GPU&#8217;s VRAM, emphasizing the importance of maintaining a healthy context window. They recommend <code>Minimax M2.5</code> and <code>Qwen 3.5</code> as optimal choices, noting that <code>GLM 5</code> is too large at <code>800b</code> despite its capabilities.</p></li></ul></li></ul><h3><strong>3. Open-Source AI Tools and Applications</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1rx8327/two_weeks_ago_i_posted_here_to_see_if_people/">Two weeks ago, I posted here to see if people would be interested in an open-source local AI 3D model generator</a></strong> (Activity: 366): <strong>The post introduces a beta version of an open-source desktop application designed to generate 3D meshes from images, currently supporting the Hunyuan3D 2 Mini model. The app is modular, built around an extension system, and the developer is seeking feedback on features, file export extensions, and additional model support. The GitHub repository is available <a href="https://github.com/lightningpixel/modly">here</a>.</strong> Commenters suggest features such as multi-image input, text-based editing, checkpoint saving, and support for formats like <code>glTF</code>. They also recommend supporting <strong>Trellis 2</strong> for state-of-the-art open 3D model generation and propose a <code>ggml</code> backend for non-CUDA GPUs. Additional features like custom mesh import, texture generation, and basic editing tools are also discussed.</p><ul><li><p>New_Comfortable7240 outlines a comprehensive feature set for a local AI 3D model generator, emphasizing the need for a user-friendly interface that allows for the addition of images and text to create initial meshes. They suggest implementing a chat interface for iterative editing, saving checkpoints, and ensuring compatibility with the glTF format through a healing function. The comment also highlights the importance of node renaming in glTF to avoid confusion and proposes optional features like texture generation, animations, and Level of Detail (LOD) management.</p></li><li><p>Nota_ReAlperson mentions Trellis 2 as the state-of-the-art for free open 3D model generation and suggests supporting it. They also propose the challenging task of developing a <code>ggml</code> backend for non-CUDA GPUs, which would broaden accessibility for users without high-end hardware. This highlights the importance of considering diverse hardware capabilities in the development of the model generator.</p></li><li><p>ArtifartX emphasizes the necessity of importing custom meshes and generating textures for them, suggesting enhancements like blending and basic brush tools. They reference a past project using SDXL and ControlNet with custom shaders for projection, indicating the potential for advanced texture manipulation features. The comment also advises focusing on commonly used file formats such as OBJ, FBX, GLTF, and USD for export options.</p></li></ul></li></ul><h2><strong>Less Technical AI Subreddit Recap</strong></h2><blockquote><p>/r/Singularity, /r/Oobabooga, /r/MachineLearning, /r/OpenAI, /r/ClaudeAI, /r/StableDiffusion, /r/ChatGPT, /r/ChatGPTCoding, /r/aivideo, /r/aivideo</p></blockquote><h3><strong>1. AI Model and Tool Releases</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/singularity/comments/1rxdu0c/harmonic_unleashes_aristotle_the_worlds_first/">Harmonic unleashes Aristotle, the world&#8217;s first formal mathematician agent for free</a></strong> (Activity: 446): <strong>The image announces the release of the &#8220;Aristotle Agent&#8221; by Harmonic, touted as the world&#8217;s first autonomous mathematician agent, available for free. This agent is notable for its ability to solve and formalize complex mathematical problems, distinguishing itself from other AI math tools by providing formal verification of proofs, which ensures correctness without human intervention. This is in contrast to other AI systems like DeepMind&#8217;s AlphaProof, which remains proprietary. The tool has been linked to recent attempts to solve the Erd&#337;s problem, highlighting its potential in tackling significant mathematical challenges.</strong> Commenters highlight the significance of the formal verification feature, which ensures that proofs are correct by construction, eliminating the need for human verification. There is curiosity about its capability to handle complex open problems beyond textbook-level challenges.</p><ul><li><p><strong>ikkiho</strong> highlights the significance of formal verification in Harmonic&#8217;s Aristotle, contrasting it with other AI math tools. Unlike LLMs that generate proofs in natural language, which may be incorrect, Aristotle&#8217;s use of Lean proofs ensures correctness by construction, eliminating the need for human verification. This approach is particularly notable as it is offered for free, unlike DeepMind&#8217;s proprietary AlphaProof.</p></li><li><p><strong>ikkiho</strong> also raises a question about the current capabilities of Aristotle, wondering if it has been tested on challenging open problems or if it is primarily solving textbook-level mathematics. This inquiry points to the potential of Aristotle to tackle more complex mathematical challenges in the future.</p></li><li><p><strong>omegahustle</strong> expresses hope that Aristotle remains free and is used responsibly, emphasizing the importance of its availability for those who can utilize it effectively. This comment underscores the potential impact of free access to advanced mathematical tools on the research community.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/GeminiAI/comments/1rx09kr/a_new_version_of_the_gemini_app_was_just_released/">A new version of the Gemini app was just released.</a></strong> (Activity: 425): <strong>The image announces an update to the Google Gemini app, version </strong><code>1.2026.1062300</code><strong>, which introduces a &#8216;Personal Intelligence&#8217; feature for free users in the US. This feature aims to enhance connectivity across Google apps, providing personalized responses. The update also includes UI improvements and bug fixes, with a download size of </strong><code>196.2 MB</code><strong>. This suggests a significant enhancement in user experience and integration capabilities within the Google ecosystem.</strong> Commenters express concerns about privacy, particularly regarding the potential for government access to personal data through the &#8216;Personal Intelligence&#8217; feature. There is also skepticism about the necessity of the Gemini app, with some users viewing it as redundant to existing Google app functionalities.</p><ul><li><p>Technical_Train_9821 raises concerns about data privacy with the Gemini app, highlighting the potential risks of allowing the app to access and connect personal data. They suggest that if the government were to gain access, it could make an individual&#8217;s entire online presence searchable, posing significant privacy issues.</p></li><li><p>brandeded shares practical use cases for the Gemini app, emphasizing its ability to integrate with other services and perform complex tasks. They describe scenarios where the app can create calendar appointments based on email content, search for specific financial transactions, and retrieve information from Google Drive, showcasing its utility in managing personal data efficiently.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/StableDiffusion/comments/1rwpyou/basically_official_qwen_image_20_not_opensourcing/">Basically Official: Qwen Image 2.0 Not Open-Sourcing</a></strong> (Activity: 495): <strong>The image in the Reddit post is an announcement for the launch of Qwen-Image-2.0, a next-generation image generation model by Alibaba. Initially tagged as &#8220;Open-Source&#8221; on the Qwen research page, it has now been reclassified as &#8220;Release,&#8221; indicating it will not be open-sourced. This change aligns with recent internal shifts at Alibaba, including the departure of key engineers and a strategic pivot away from open-source models due to revenue concerns. The model features professional typography rendering, support for </strong><code>1k-token</code><strong> instructions, and native </strong><code>2K</code><strong> resolution, aimed at creating detailed infographics and comics.</strong> Commenters express confusion and disappointment over Alibaba&#8217;s decision not to open-source Qwen-Image-2.0, arguing that its value diminishes when closed-source, especially given the competitive landscape with models like Midjourney. Additionally, it&#8217;s noted that Alibaba&#8217;s CEO has shown dissatisfaction with the lack of revenue from open-source models, influencing this strategic shift.</p><ul><li><p><strong>Skystunt</strong> highlights a critical issue with Qwen Image 2.0&#8217;s closed-source approach, emphasizing that its competitive edge is diminished when compared to other models like Midjourney or Nano Banana, which offer more mature UIs and open-source benefits. The model&#8217;s closed nature, combined with data privacy concerns, makes it less appealing despite its technical capabilities as a 7B parameter model.</p></li><li><p><em><strong>BreakingGood</strong></em> provides context on Alibaba&#8217;s strategic shift away from open-sourcing, citing the CEO&#8217;s dissatisfaction with the lack of revenue from open models. This has led to significant internal changes, including the departure of key engineers, suggesting a future where Alibaba may not release open-source models, impacting the community&#8217;s access to cutting-edge technology.</p></li><li><p><strong>LeKhang98</strong> comments on the perception of model release frequency, noting that while some feel overwhelmed by new models, the actual release rate is relatively low, with only 2-3 significant models per year. This perspective suggests that the community should appreciate the current pace and availability of new models, despite potential slowdowns in releases.</p></li></ul></li></ul><h3><strong>2. AI in Creative and Technical Applications</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/singularity/comments/1ry961j/an_australian_ml_researcher_used_chatgptalphafold/">An Australian ML researcher, used ChatGPT+AlphaFold to shrink 75% of his life-threatened dog&#8217;s MCT cancerous tumor, developing a personalized mRNA vaccine in just two months - after sequencing his dog&#8217;s DNA for $2,000</a></strong> (Activity: 498): <strong>An Australian machine learning researcher, Paul Conyngham, utilized ChatGPT and AlphaFold to develop a personalized mRNA vaccine for his dog, Rosie, who had a life-threatening mast cell tumor. By sequencing the tumor&#8217;s DNA for approximately </strong><code>$2,000</code><strong>, Conyngham identified neoantigens using ChatGPT and predicted protein structures with AlphaFold. Collaborating with Martin Smith from UNSW for genome sequencing and Pall Thordarson for mRNA synthesis, he successfully shrank the tumor by </strong><code>75%</code><strong> within two months, despite having no formal background in biology or medicine. This case highlights the potential of AI in personalized medicine and rapid vaccine development (<a href="https://www.the-scientist.com/chatgpt-and-alphafold-help-design-personalized-vaccine-for-dog-with-cancer-74227">source</a>).</strong> Commenters are debating the implications of this case, questioning whether it represents a significant shift in healthcare democratization or if it&#8217;s overhyped. Some suggest that regulatory barriers are hindering medical progress, as demonstrated by the rapid development achieved in this instance.</p><ul><li><p><strong>DepartmentDapper9823</strong> argues that this case illustrates how regulatory bodies may impede medical progress. They suggest that when these barriers are bypassed, advancements can occur more rapidly, as evidenced by the quick development of a personalized mRNA vaccine for the dog using ChatGPT and AlphaFold.</p></li><li><p><strong>AngleAccomplished865</strong> calls for expert opinions to assess the broader implications of this case, questioning whether it represents a significant shift in democratized healthcare or if it&#8217;s merely hype. They highlight the need for professional insights to determine the true impact of using AI tools like ChatGPT and AlphaFold in medical research.</p></li><li><p><strong>682463435465</strong> raises a concern that individuals with cancer might attempt to replicate this approach on themselves, indicating a potential risk of self-experimentation without proper medical guidance. This underscores the need for careful consideration of the ethical and safety implications of using AI in personalized medicine.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/singularity/comments/1rx0abd/built_an_open_source_tool_that_can_find_precise/">Built an open source tool that can find precise coordinates of any picture</a></strong> (Activity: 837): <strong>Netryx is an open-source tool developed by a college student, designed to determine precise geographic coordinates from street-level photos using visual clues and a custom machine learning pipeline. The tool is available on <a href="https://github.com/sparkyniner/Netryx-OpenSource-Next-Gen-Street-Level-Geolocation.git">GitHub</a> and aims to connect with developers and companies interested in geolocation technologies. The tool&#8217;s capabilities are demonstrated through a custom web version that geolocates events like the Qatar strikes, although the core pipeline remains consistent across versions.</strong> Commenters express mixed feelings about the tool&#8217;s potential uses, noting it could be both beneficial and harmful. There is also curiosity about its reliance on existing data sources like Google Street View for functionality.</p></li><li><p><strong><a href="https://www.reddit.com/r/ClaudeAI/comments/1rxyarx/i_built_a_claude_skill_that_writes_accurate/">I built a Claude skill that writes accurate prompts for any AI tool. To stop burning credits on bad prompts. We just hit 600 stars on GitHub&#8252;&#65039;</a></strong> (Activity: 728): <strong>The </strong><code>prompt-master</code><strong> is a Claude skill designed to optimize prompt generation for various AI tools, achieving over </strong><code>600 stars</code><strong> on GitHub. It intelligently detects the target AI tool and applies specific strategies, such as extracting </strong><code>9 dimensions</code><strong> from user input and identifying </strong><code>35 common prompt issues</code><strong>, to enhance prompt accuracy and efficiency. The tool supports a wide range of platforms including Claude, ChatGPT, Midjourney, and Eleven Labs, and is open-source, allowing for community-driven improvements. The latest version, </strong><code>v1.4</code><strong>, incorporates user feedback and plans for </strong><code>v1.5</code><strong> are underway, focusing on agent-based enhancements. <a href="http://github.com/nidhinjs/prompt-master">GitHub Repository</a>.</strong> Commenters highlight the tool&#8217;s ability to tailor prompts to specific AI models, such as <strong>Midjourney</strong> and <strong>Claude Code</strong>, as a key differentiator from generic prompt tools. There is interest in its compatibility with open-source models, suggesting potential for broader application.</p><ul><li><p>The tool&#8217;s ability to perform tool-specific routing is highlighted as a key feature, making it more effective than generic prompt enhancers. This is crucial because different AI tools like Midjourney and Claude Code require distinct prompt structures, which most general tools fail to address.</p></li><li><p>A user inquires about the compatibility of the tool with open-source models, specifically mentioning running it locally with ComfyUI on a 5090 GPU. This suggests interest in leveraging the tool&#8217;s capabilities beyond proprietary models, potentially expanding its utility in diverse AI environments.</p></li><li><p>Another user notes that while similar tools have been attempted, they often require manual tweaking of prompts. However, if this tool effectively manages tool-specific nuances, such as differences between Cursor and Claude Code, it could significantly enhance usability and efficiency.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/StableDiffusion/comments/1rx1w7d/i_got_tired_of_manually_prompting_every_single/">I got tired of manually prompting every single clip for my AI music videos, so I built a 100% local open-source (LTX Video desktop + Gradio) app to automate it, meet - Synesthesia</a></strong> (Activity: 306): <strong>Synesthesia is an open-source application designed to automate the creation of AI-generated music videos by integrating with local LLMs like </strong><code>Qwen3.5-9b</code><strong>. It processes three input files: an isolated vocal stem, a full band performance, and text lyrics, to generate a shot list that alternates between vocal and story segments. The app interfaces with LTX-Desktop for video generation, achieving a first-pass render of a 3-minute video in under an hour on a </strong><code>5090</code><strong> GPU at </strong><code>540p</code><strong> resolution. Users can adjust the shot list manually or let it run automatically, and select multiple takes per shot for final editing. The project is hosted on <a href="https://github.com/RowanUnderwood/Synesthesia-AI-Video-Director">GitHub</a>.</strong> One commenter suggests adding <strong>LoRA support</strong> for consistent character representation, while another criticizes the automation, arguing that it cannot replace the creative process of manual prompting.</p><ul><li><p>Loose_Object_8311 suggests that the app could benefit from <strong>LoRA support</strong> to maintain consistent character appearances across clips. LoRA (Low-Rank Adaptation) is a technique used to fine-tune models efficiently, which could enhance the app&#8217;s ability to generate consistent visual elements in AI-generated music videos.</p></li><li><p>InternationalBid831 inquires about compatibility with <strong>Wan2GP running LTX2</strong> instead of LTX Desktop, particularly for users with a <code>5070ti</code> GPU. This suggests a need for the app to support different hardware configurations and possibly different versions of the LTX software to accommodate a wider range of users.</p></li><li><p>Diadra_Underwood proposes adding a <strong>styles drop-down menu</strong> to the app, highlighting the potential for users to easily switch between different visual styles such as claymation, puppets, or CGI. This feature could enhance user experience by allowing for quick experimentation with various artistic styles in AI-generated content.</p></li></ul></li></ul><h3><strong>3. AI and Legal/Ethical Challenges</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/OpenAI/comments/1rx6o2i/the_dictionaries_are_suing_openai_for_massive/">The dictionaries are suing OpenAI for &#8220;massive&#8221; copyright infringement, and say ChatGPT is starving publishers of revenue</a></strong> (Activity: 718): <strong>Britannica and Merriam-Webster have filed a lawsuit against OpenAI in the Southern District of New York, alleging that OpenAI&#8217;s ChatGPT has infringed on their copyrights by using their researched content without permission. The lawsuit claims that ChatGPT&#8217;s ability to provide direct answers from absorbed content is depriving publishers of web traffic and ad revenue, which are crucial for their survival. This case adds to ongoing legal debates about AI&#8217;s use of online content and the boundaries of public knowledge versus proprietary information. <a href="https://fortune.com/2026/03/18/dictionaries-suing-openai-chatgpt-copyright-infringement/">Read more</a>.</strong> Commenters are questioning the implications of allowing companies to own definitions and the broader impact on information accessibility. There&#8217;s a satirical tone regarding the monetization of word usage, reflecting skepticism about the lawsuit&#8217;s premise.</p></li><li><p><strong><a href="https://www.reddit.com/r/ChatGPT/comments/1rxtt72/ceo_asks_chatgpt_how_to_void_250_million_contract/">CEO Asks ChatGPT How to Void $250 Million Contract, Ignores His Lawyers, Loses Terribly in Court</a></strong> (Activity: 465): <strong>In a recent legal debacle, Krafton CEO Changhan Kim attempted to void a </strong><code>$250 million</code><strong> contract with Unknown Worlds Entertainment by consulting ChatGPT instead of his legal team. The court decisively ruled against him, emphasizing the dangers of using AI for intricate legal strategies without professional oversight. The case underscores that while AI can assist in legal preparation by stress-testing arguments and summarizing precedents, it lacks the liability and contextual understanding necessary for direct legal action. For more details, see the <a href="https://www.404media.co/ceo-ignores-lawyers-asks-chatgpt-how-to-void-250-million-contract-loses-terribly-in-court/">404 Media report</a>.</strong> Commenters highlight the misuse of AI as a replacement for professional judgment, noting that AI should be used to enhance legal strategies rather than replace them. They emphasize the importance of human oversight, especially in complex legal matters, and suggest using AI to identify potential challenges rather than as a direct source of legal advice.</p><ul><li><p><strong>RobinWood_AI</strong> highlights the misuse of AI in legal contexts, emphasizing that AI should be used to enhance legal strategies rather than replace professional judgment. AI can assist in stress-testing arguments and drafting frameworks but lacks the liability and context of a human lawyer. The CEO&#8217;s mistake was using AI to directly void a contract without legal oversight, illustrating the gap between AI as a tool and a liability.</p></li><li><p><strong>chiqu3n</strong> discusses the limitations of AI in understanding specific legal contexts, noting that general AI models like ChatGPT may not account for special legislation that could affect contract terms. They compare this with a specialized legal LLM, &#8216;justicio&#8217;, which provided a more nuanced and legally accurate response, highlighting the importance of human expert review in critical legal matters.</p></li><li><p><strong>Dailan_Grace</strong> points out the issue of AI&#8217;s authoritative tone, which can mislead users into trusting incorrect information. AI models often present information confidently without hedging, which can be problematic if the user lacks the expertise to identify errors. This overconfidence in AI outputs may have contributed to the CEO&#8217;s poor decision-making.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/ChatGPT/comments/1rx9rqh/jeremy_o_harris_drunkenly_called_openais_sam/">Jeremy O. Harris drunkenly called OpenAI&#8217;s Sam Altman a Nazi at the Vanity Fair Oscar party</a></strong> (Activity: 650): <strong>At the Vanity Fair Oscar party, playwright Jeremy O. Harris confronted Sam Altman, CEO of OpenAI, accusing him of being akin to a Nazi figure due to OpenAI&#8217;s new deal with the Department of War. Harris later clarified his statement, comparing Altman to Friedrich Flick, a German industrialist convicted of war crimes, rather than Joseph Goebbels. This incident highlights ongoing ethical debates surrounding AI and its military applications.</strong> The comments reflect skepticism about the appropriateness of the Nazi comparison, noting Altman&#8217;s Jewish background, and include some off-topic humor.</p></li></ul><h1><strong>AI Discords</strong></h1><p>Unfortunately, Discord shut down our access today. We will not bring it back in this form but we will be shipping the new AINews soon. Thanks for reading to here, it was a good run.</p>]]></content:encoded></item><item><title><![CDATA[[AINews] MiniMax 2.7: GLM-5 at 1/3 cost SOTA Open Model]]></title><description><![CDATA[congrats MiniMax!!]]></description><link>https://www.latent.space/p/ainews-minimax-27-glm-5-at-13-cost</link><guid isPermaLink="false">https://www.latent.space/p/ainews-minimax-27-glm-5-at-13-cost</guid><pubDate>Thu, 19 Mar 2026 06:47:28 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!bZgR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f6a39ab-2a8a-499d-898d-437396bd2f4b_3600x2114.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Not 2 months after <a href="https://www.cnbc.com/2026/01/09/minimax-hong-kong-ipo-ai-tigers-zhipu.html">their IPO</a> and <a href="https://www.minimax.io/news/minimax-global-announces-full-year-2025-financial-results">first public quarter</a>, MiniMax is back in the news with <a href="https://x.com/MiniMax_AI/status/2034315320337522881#m">MiniMax 2.7</a>, a nice bright spot in Chinese Open Models after the <a href="https://x.com/swyx/status/2033030744352993296">changeover in Qwen</a>. They match <a href="https://www.latent.space/p/ainews-zai-glm-5-new-sota-open-weights?utm_source=publication-search">Z.ai&#8217;s GLM-5 SOTA open model from last month</a>, but the story is efficiency here (see green quadrant in <a href="https://x.com/ArtificialAnlys/status/2034313314420019462#m">Artificial Analysis&#8217; chart</a>):</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bZgR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f6a39ab-2a8a-499d-898d-437396bd2f4b_3600x2114.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bZgR!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f6a39ab-2a8a-499d-898d-437396bd2f4b_3600x2114.png 424w, https://substackcdn.com/image/fetch/$s_!bZgR!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f6a39ab-2a8a-499d-898d-437396bd2f4b_3600x2114.png 848w, https://substackcdn.com/image/fetch/$s_!bZgR!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f6a39ab-2a8a-499d-898d-437396bd2f4b_3600x2114.png 1272w, https://substackcdn.com/image/fetch/$s_!bZgR!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f6a39ab-2a8a-499d-898d-437396bd2f4b_3600x2114.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bZgR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f6a39ab-2a8a-499d-898d-437396bd2f4b_3600x2114.png" width="1456" height="855" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0f6a39ab-2a8a-499d-898d-437396bd2f4b_3600x2114.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:855,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1940459,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.latent.space/i/191449019?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f6a39ab-2a8a-499d-898d-437396bd2f4b_3600x2114.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!bZgR!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f6a39ab-2a8a-499d-898d-437396bd2f4b_3600x2114.png 424w, https://substackcdn.com/image/fetch/$s_!bZgR!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f6a39ab-2a8a-499d-898d-437396bd2f4b_3600x2114.png 848w, https://substackcdn.com/image/fetch/$s_!bZgR!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f6a39ab-2a8a-499d-898d-437396bd2f4b_3600x2114.png 1272w, https://substackcdn.com/image/fetch/$s_!bZgR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f6a39ab-2a8a-499d-898d-437396bd2f4b_3600x2114.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The team calls out &#8220;<a href="https://x.com/MiniMax_AI/status/2034335605145182659">Early Echoes of Self-Evolution</a>&#8221;, calling it &#8220;our first model deeply participating in its own evolution.&#8221;, recalling <a href="https://www.latent.space/p/ainews-autoresearch-sparks-of-recursive">Karpathy&#8217;s Autoresearch</a>, although they only claim that &#8220;M2.7 is capable of handling 30%-50% of the workflow.&#8221;:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!KE3r!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f79d04e-0fdf-4940-90e0-9a7e5825f934_1280x673.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!KE3r!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f79d04e-0fdf-4940-90e0-9a7e5825f934_1280x673.jpeg 424w, https://substackcdn.com/image/fetch/$s_!KE3r!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f79d04e-0fdf-4940-90e0-9a7e5825f934_1280x673.jpeg 848w, https://substackcdn.com/image/fetch/$s_!KE3r!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f79d04e-0fdf-4940-90e0-9a7e5825f934_1280x673.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!KE3r!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f79d04e-0fdf-4940-90e0-9a7e5825f934_1280x673.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!KE3r!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f79d04e-0fdf-4940-90e0-9a7e5825f934_1280x673.jpeg" width="1280" height="673" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6f79d04e-0fdf-4940-90e0-9a7e5825f934_1280x673.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:673,&quot;width&quot;:1280,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Image&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Image" title="Image" srcset="https://substackcdn.com/image/fetch/$s_!KE3r!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f79d04e-0fdf-4940-90e0-9a7e5825f934_1280x673.jpeg 424w, https://substackcdn.com/image/fetch/$s_!KE3r!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f79d04e-0fdf-4940-90e0-9a7e5825f934_1280x673.jpeg 848w, https://substackcdn.com/image/fetch/$s_!KE3r!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f79d04e-0fdf-4940-90e0-9a7e5825f934_1280x673.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!KE3r!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f79d04e-0fdf-4940-90e0-9a7e5825f934_1280x673.jpeg 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>They also report some work on multi-agent collaboration (&#8220;Agent Teams&#8221;) as well as follow Anthropic and OpenAI&#8217;s lead in applying their models <a href="https://docs.google.com/document/d/1ieBAOr8jOL36MTDCmQWakLdYjydDjCts/edit?usp=drive_link&amp;ouid=104169451586617858920&amp;rtpof=true&amp;sd=true">for finance usecases</a>. Finally, they launch <a href="https://www.openroom.ai/">OpenRoom</a>, an open source demo for entertainment usecases.</p><p></p><blockquote><p>AI News for 3/18/2026-3/19/2026. We checked 12 subreddits, <a href="https://twitter.com/i/lists/1585430245762441216">544 Twitters</a> and no further Discords. <a href="https://news.smol.ai/">AINews&#8217; website</a> lets you search all past issues. As a reminder, <a href="https://www.latent.space/p/2026">AINews is now a section of Latent Space</a>. You can <a href="https://support.substack.com/hc/en-us/articles/8914938285204-How-do-I-subscribe-to-or-unsubscribe-from-a-section-on-Substack">opt in/out</a> of email frequencies!</p></blockquote><div><hr></div><h1><strong>AI Twitter Recap</strong></h1><p><strong>MiniMax M2.7, Xiaomi MiMo-V2-Pro, and the expanding &#8220;self-evolving agent&#8221; model class</strong></p><ul><li><p><strong>MiniMax M2.7 is the headline model release</strong>: MiniMax positioned <a href="https://x.com/MiniMax_AI/status/2034315320337522881#m">M2.7</a> as its first model that &#8220;deeply participated in its own evolution,&#8221; claiming <strong>56.22% on SWE-Pro</strong>, <strong>57.0% on Terminal Bench 2</strong>, <strong>97% skill adherence across 40+ skills</strong>, and parity with <strong>Sonnet 4.6 in OpenClaw</strong>. A follow-up says the internal harness also recursively improved itself&#8212;collecting feedback, building eval sets, and iterating on <strong>skills/MCP, memory, and architecture</strong> (<a href="https://x.com/MiniMax_AI/status/2034315323109953605#m">thread</a>). Third-party coverage broadly echoed the &#8220;self-evolving&#8221; framing, including <a href="https://x.com/testingcatalog/status/2034250919345377604#m">TestingCatalog</a> and <a href="https://x.com/kimmonismus/status/2034269026353082422#m">kimmonismus</a>.</p></li><li><p><strong>Artificial Analysis places M2.7 on the cost/performance frontier</strong>: <a href="https://x.com/ArtificialAnlys/status/2034313314420019462#m">Artificial Analysis</a> reports <strong>50</strong> on its Intelligence Index, matching <strong>GLM-5 (Reasoning)</strong> while costing <strong>$176</strong> to run the full index at <strong>$0.30/$1.20 per 1M input/output tokens</strong>&#8212;less than one-third of GLM-5&#8217;s cost. They also report <strong>GDPval-AA Elo 1494</strong>, ahead of <strong>MiMo-V2-Pro (1426)</strong>, <strong>GLM-5 (1406)</strong>, and <strong>Kimi K2.5 (1283)</strong>, plus a large hallucination reduction vs M2.5. Distribution was immediate: <a href="https://x.com/ollama/status/2034351916097106424#m">Ollama cloud</a>, <a href="https://x.com/MiniMax_AI/status/2034327432124350924#m">Trae</a>, <a href="https://x.com/MiniMax_AI/status/2034328337527783857#m">Yupp</a>, <a href="https://x.com/MiniMax_AI/status/2034356786413867182#m">OpenRouter</a>, <a href="https://x.com/MiniMax_AI/status/2034357583797178841#m">Vercel</a>, <a href="https://x.com/MiniMax_AI/status/2034348503347171625#m">Zo</a>, <a href="https://x.com/MiniMax_AI/status/2034361282527461473#m">opencode</a>, and <a href="https://x.com/MiniMax_AI/status/2034339731660759097#m">kilocode</a>.</p></li><li><p><strong>Xiaomi&#8217;s MiMo-V2-Pro looks like a serious Chinese API-only reasoning entrant</strong>: <a href="https://x.com/ArtificialAnlys/status/2034239267052896516#m">Artificial Analysis</a> scores it at <strong>49</strong> on the Intelligence Index, with <strong>1M context</strong>, <strong>$1/$3 per 1M tokens</strong> pricing, and <strong>GDPval-AA Elo 1426</strong>. Notably, they call out stronger token efficiency than peers and a relatively favorable <strong>AA-Omniscience score (+5)</strong> driven by lower hallucination. This follows Xiaomi&#8217;s earlier open-weight <strong>MiMo-V2-Flash (309B total / 15B active, MIT)</strong>; V2-Pro itself is <strong>API-only</strong> for now.</p></li><li><p><strong>Mamba-3 is out and immediately being viewed through the hybrid-architecture lens</strong>: Cartesia announced <a href="https://x.com/cartesia/status/2034338862559121475#m">Mamba-3</a> as an SSM optimized for an inference-heavy world, with Albert Gu noting Cartesia-backed testing and support (<a href="https://x.com/_albertgu/status/2034347202613739947#m">link</a>). Early technical reactions focused less on standalone SSMs and more on plugging Mamba-3 into transformer hybrids: <a href="https://x.com/rasbt/status/2034088726997893168#m">rasbt</a> explicitly called out replacing Gated DeltaNet in next-gen hybrids like <strong>Qwen3.5 / Kimi Linear</strong>, while <a href="https://x.com/JG_Barthelemy/status/2034039081085108390#m">JG_Barthelemy</a> highlighted hybrid integration and &#8220;unlocking Muon for SSMs.&#8221;</p></li></ul><p><strong>Agent harnesses, skills, MCP, and the shift from &#8220;prompting&#8221; to systems design</strong></p><ul><li><p><strong>The strongest recurring theme is that harness engineering is becoming the real differentiator</strong>: Multiple posts argued that the bottleneck is no longer just the base model, but the surrounding execution environment. <a href="https://x.com/TheTuringPost/status/2034076706722746408#m">The Turing Post&#8217;s interview with Michael Bolin</a> frames coding agents as a problem of <strong>tools, repo legibility, constraints, and feedback loops</strong>&#8212;what many now call harness engineering. <a href="https://x.com/dbreunig/status/2034061742196859076#m">dbreunig</a> made a similar point about why teams stick with <strong>DSPy</strong>, and <a href="https://x.com/nickbaumann_/status/2034134875234832540#m">nickbaumann_</a> argued <strong>GPT-5.4 mini</strong> matters specifically because cheap, fast subagents change what is worth delegating.</p></li><li><p><strong>Skills are solidifying into a shared abstraction across agent stacks</strong>: A practical thread from <a href="https://x.com/mstockton/status/2034095691648098606#m">mstockton</a> lays out real usage patterns for <strong>SKILLS</strong>: progressive disclosure, trace inspection, session distillation, CI-triggered skills, and self-improving skills. <a href="https://x.com/RhysSullivan/status/2034125767987368242#m">RhysSullivan</a> suggests distributing skills via <strong>MCP resources</strong> may solve staleness/versioning. Anthropic&#8217;s Claude Code account clarifies that a skill is not just a text snippet but a <strong>folder with scripts/assets/data</strong>, and that the key description field should specify <strong>when</strong> to trigger it (<a href="https://x.com/claude_code/status/2034335585339375855#m">tweet</a>).</p></li><li><p><strong>Open agent stacks are converging on model + runtime + harness</strong>: <a href="https://x.com/hwchase17/status/2034297125417460044#m">Harrison Chase</a> published a walkthrough framing Claude Code, OpenClaw, Manus, etc. as the same decomposition: <strong>open model + runtime + harness</strong>, using <strong>Nemotron 3</strong>, NVIDIA&#8217;s <strong>OpenShell</strong>, and <strong>DeepAgents</strong>. Related infrastructure releases include <a href="https://x.com/samecrowder/status/2034123616720421210#m">LangSmith Sandboxes</a> for secure code execution, <a href="https://x.com/LangChain/status/2034321435418825023#m">LangSmith Polly GA</a> as an in-product debugging/improvement assistant, and a new <a href="https://x.com/LangChain/status/2034314483259031965#m">LangChain guide on production observability for agents</a>.</p></li><li><p><strong>MCP momentum continues, but there&#8217;s pushback</strong>: Useful MCP-related launches included Google Colab&#8217;s open-source <a href="https://x.com/_philschmid/status/2034197315661988010#m">MCP server</a>, enabling local agents to drive Colab GPU runtimes, and Google&#8217;s Gemini API update allowing <a href="https://x.com/_philschmid/status/2034308856885481791#m">built-in tools plus custom functions in one call</a>. At the same time, there&#8217;s visible skepticism: <a href="https://x.com/skirano/status/2034269154404868314#m">skirano</a> bluntly said &#8220;<strong>MCP was a mistake. Long live CLIs.</strong>&#8221; and <a href="https://x.com/denisyarats/status/2034067933975187586#m">denisyarats</a> joked about &#8220;<strong>model cli protocol</strong>.&#8221;</p></li><li><p><strong>A parallel trend: agent-native enterprise apps and &#8220;headless SaaS&#8221;</strong>: <a href="https://x.com/ivanburazin/status/2034042095548187072#m">ivanburazin</a> describes an emerging category of <strong>headless SaaS</strong>&#8212;traditional software rebuilt as agent-first APIs with no human UI. That idea lines up with product launches like Rippling&#8217;s <a href="https://x.com/parkerconrad/status/2034310231724073173#m">AI analyst</a>, Anthropic&#8217;s <a href="https://x.com/alexalbert__/status/2034276242317566107#m">Claude for Excel/PowerPoint webinar</a>, and the notion that meeting-notes apps are really becoming broader <strong>AI context/data apps</strong> (<a href="https://x.com/zachtratar/status/2034079952757547042#m">zachtratar</a>).</p></li></ul><p><strong>Infra, kernels, and model-system co-design</strong></p><ul><li><p><strong>Attention Residual became a case study in infra-model co-design</strong>: Several posts unpacked Kimi/Moonshot&#8217;s <strong>AttnRes</strong> work as more than a novelty architecture. <a href="https://x.com/bigeagle_xd/status/2034104829703045258#m">bigeagle_xd</a> emphasized co-design across model research and infra, linking to an inference-infra writeup; <a href="https://x.com/ZhihuFrontier/status/2034269774281400798#m">ZhihuFrontier</a> summarized why full attention residual strains <strong>pipeline parallelism</strong> due to asymmetric comms/memory patterns, and how <strong>Block Attention Residual</strong> plus cross-stage caching can restore symmetry. <a href="https://x.com/YyWangCS17122/status/2034273847164473820#m">YyWangCS17122</a> reinforced the theme: kernel optimization, algorithm-system co-design, and numerical rigor as the path to production-worthy large models.</p></li><li><p><strong>Custom kernel packaging is getting easier</strong>: <a href="https://x.com/ariG23498/status/2034107361733054814#m">ariG23498</a> highlighted Hugging Face&#8217;s new <code>kernels</code><strong> library</strong>, which aims to make custom kernels more shareable and easier to integrate via the Hub. The pitch is straightforward: lower the pain of writing and distributing fused/custom kernels without requiring every model team to hand-roll installation and integration logic.</p></li><li><p><strong>Inference optimization remains a first-class topic</strong>: The same thread on kernels reiterates the familiar optimization stack&#8212;close idle gaps between kernel launches, fuse ops with <code>torch.compile</code>, and only fall back to custom kernels where needed. On the hardware side, <a href="https://x.com/StasBekman/status/2034315810693599349#m">Stas Bekman</a> noted that NVLink&#8217;s marketed bandwidth can be misleading because it is not duplex in the way many assume.</p></li><li><p><strong>Compute bottlenecks are still upstream of everything else</strong>: <a href="https://x.com/kimmonismus/status/2034290731246907618#m">kimmonismus</a> argues that <strong>ASML EUV machines</strong> and their narrow supply chains may cap production at roughly <strong>100 machines/year by 2030</strong>, making lithography an important ceiling on AI scaling over this decade.</p></li></ul><p><strong>Documents, OCR, retrieval, and context engineering for real workflows</strong></p><ul><li><p><strong>Document AI is trending toward end-to-end multimodal parsers with grounding</strong>: Baidu introduced <a href="https://x.com/Baidu_Inc/status/2034265136182202765#m">Qianfan-OCR</a>, a <strong>4B end-to-end document intelligence model</strong> that collapses table extraction, formula recognition, chart understanding, and KIE into a single pass. <a href="https://x.com/VikParuchuri/status/2034317066048512392#m">Vik Paruchuri</a> open-sourced <strong>Chandra OCR 2</strong>, claiming <strong>85.9% on olmOCR bench</strong>, <strong>90+ language support</strong>, and stronger layout, handwriting, math, form, and table support in a smaller <strong>4B</strong> model. On the platform side, <a href="https://x.com/llama_index/status/2034300076441633276#m">LlamaIndex</a> and <a href="https://x.com/jerryjliu0/status/2034047686262087720#m">jerryjliu0</a> emphasized that production document agents need not just markdown conversion but <strong>layout detection, segmentation, metadata context, and visual grounding</strong> to support human-auditable document workflows.</p></li><li><p><strong>Late-interaction retrieval continues to push on the memory/quality tradeoff</strong>: <a href="https://x.com/victorialslocum/status/2034253990582423716#m">victorialslocum</a> summarized <strong>MUVERA</strong>, which compresses multi-vector retrieval into fixed-dimensional encodings, reporting about <strong>70% memory reduction</strong> and much smaller HNSW graphs at some recall/query-throughput cost. <a href="https://x.com/lateinteraction/status/2034254747666960683#m">lateinteraction</a> used the thread to reiterate the limitations of single-vector retrieval on harder OOD settings.</p></li><li><p><strong>Context engineering is becoming a product category</strong>: <a href="https://x.com/llama_index/status/2034347384973762694#m">llama_index</a> explicitly frames context engineering as the successor to prompt engineering, with structured parsing/extraction as a core lever. This pairs with Hugging Face&#8217;s new support for serving <strong>Markdown paper views to agents</strong> and a <strong>Paper Pages skill</strong> for searching and reading papers more token-efficiently (<a href="https://x.com/ClementDelangue/status/2034277529981178007#m">Clement Delangue</a>, <a href="https://x.com/NielsRogge/status/2034287785297735785#m">Niels Rogge</a>, <a href="https://x.com/mishig25/status/2034274342343733295#m">mishig25</a>).</p></li></ul><p><strong>Evals, training methodology, and benchmarks worth watching</strong></p><ul><li><p><strong>LLM-as-judge reproducibility is under fire again</strong>: <a href="https://x.com/a1zhang/status/2034059629072945251#m">a1zhang</a> showed a model scoring <strong>10%</strong> under <strong>GPT-5.2-as-judge</strong> vs <strong>43.5%</strong> under <strong>GPT-5.1-as-judge</strong>, despite a paper reporting <strong>34%</strong>&#8212;a stark reminder that judge choice can swamp conclusions. <a href="https://x.com/torchcompiled/status/2034068339023102060#m">torchcompiled</a> distilled the takeaway: don&#8217;t use LLM-as-judge without validating human correlation or tuning for it.</p></li><li><p><strong>Pretraining data composition is re-emerging as a major lever</strong>: <a href="https://x.com/rosinality/status/2034178558440898786#m">rosinality</a> highlighted work showing that mixing <strong>SFT data during pretraining</strong> can outperform the standard pretrain-then-finetune pipeline, with a scaling law for the ratio under a token budget. Related posts from <a href="https://x.com/arimorcos/status/2034295652193370602#m">arimorcos</a>, <a href="https://x.com/pratyushmaini/status/2034296042540466252#m">pratyushmaini</a>, and <a href="https://x.com/_christinabaek/status/2034285795071205737#m">Christina Baek</a> all argue that domain adaptation often benefits more from <strong>earlier data mixing</strong> or even <strong>repeating small high-quality datasets 10&#8211;50x during pretraining</strong> than from naive finetuning alone.</p></li><li><p><strong>Benchmarks are shifting toward &#8220;unsolved and useful&#8221;</strong>: <a href="https://x.com/OfirPress/status/2034298283774877926#m">Ofir Press</a> points to a future where improving on a benchmark means solving previously unsolved tasks that matter in the world, not just memorizing exam-like datasets. He also notes <a href="https://x.com/OfirPress/status/2034347578653868374#m">AssistantBench</a> remains unsolved 1.5 years later. New benchmark/tooling drops include <a href="https://x.com/mervenoyann/status/2034265145158119642#m">ScreenSpot-Pro on Hugging Face</a> for GUI agents and <a href="https://x.com/arena/status/2034294095150215182#m">Arena&#8217;s academic partnerships</a> funding eval work.</p></li></ul><p><strong>Top tweets (by engagement, filtered for technical relevance)</strong></p><ul><li><p><strong>OpenAI&#8217;s Parameter Golf challenge</strong>: OpenAI launched <a href="https://x.com/OpenAI/status/2034315401438580953#m">Parameter Golf</a>, a training challenge to fit the best LM in a <strong>16MB artifact</strong> trained in <strong>under 10 minutes on 8&#215;H100s</strong>, with <strong>$1M in compute</strong> behind it. Good talent-pipeline energy, and a nice complement to the NanoGPT speedrun culture (<a href="https://x.com/scaling01/status/2034312935661609280#m">details via scaling01</a>).</p></li><li><p><strong>Anthropic&#8217;s 81k-user study</strong>: Anthropic says it used Claude to interview <strong>80,508 people in one week</strong> about hopes and fears around AI&#8212;the company calls it the largest qualitative study of its kind (<a href="https://x.com/AnthropicAI/status/2034302152945144166#m">announcement</a>). The research is interesting both as social measurement and as a signal that model-mediated interviewing may become a standing product/research capability.</p></li><li><p><strong>Runway&#8217;s real-time video generation preview</strong>: Runway shared a research preview developed with NVIDIA showing <strong>HD video generation with time-to-first-frame under 100ms</strong> on Vera Rubin hardware (<a href="https://x.com/runwayml/status/2034284298769985914#m">tweet</a>). If it generalizes, this is a qualitatively different interaction loop for video models.</p></li><li><p><strong>Hugging Face on agent-facing research interfaces</strong>: The platform change to serve <strong>Markdown paper views to agents</strong> and the companion paper skill is small but important infrastructure for agentic research workflows (<a href="https://x.com/ClementDelangue/status/2034277529981178007#m">Clement Delangue</a>).</p></li><li><p><strong>VS Code integrated browser debugging</strong>: Microsoft&#8217;s latest <a href="https://x.com/code/status/2034332099231072639#m">VS Code release</a> adds integrated browser debugging for end-to-end web app workflows&#8212;useful in its own right, and likely to matter even more as coding agents are asked to operate against live browser state.</p></li></ul><div><hr></div><h1><strong>AI Reddit Recap</strong></h1><h2><strong>/r/LocalLlama + /r/localLLM Recap</strong></h2><h3><strong>1. MiniMax-M2.7 Model Announcements</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1rwvn6h/minimaxm27_announced/">MiniMax-M2.7 Announced!</a></strong> (Activity: 947): <strong>The image presents a comparative analysis of the newly announced MiniMax-M2.7 model against other models like Gemini 3.1 Pro, Sonnet 4.6, Opus 4.6, and GPT 5.4 across various benchmarks such as SWE Bench Pro, VIBE-Pro, and MM-ClawBench. MiniMax-M2.7 is highlighted in red, indicating its performance metrics, which are crucial for understanding its capabilities relative to existing models. The model&#8217;s autonomous iteration capabilities are emphasized, showcasing its ability to optimize software engineering tasks through iterative cycles, leading to a </strong><code>30% performance improvement</code><strong> on internal evaluations. This highlights the model&#8217;s potential for self-evolution and automation in AI development.</strong> Commenters express skepticism about the practical usability of models that perform well on benchmarks but may not generalize well to real-world tasks. There is anticipation for user testing to validate the model&#8217;s effectiveness beyond controlled evaluations.</p><ul><li><p>Recoil42 highlights the autonomous iteration capabilities of the MiniMax-M2.7 model, which can optimize its own performance through iterative cycles. The model autonomously analyzes failure paths, plans changes, modifies code, and evaluates results, achieving a 30% performance improvement on internal evaluation sets by optimizing sampling parameters and workflow guidelines.</p></li><li><p>Specialist_Sun_7819 raises a critical point about the discrepancy between benchmark performance and real-world usability. They emphasize the importance of user testing to assess how models perform on tasks that deviate from their training distribution, suggesting that many models excel in evaluations but struggle with off-distribution tasks.</p></li><li><p>Lowkey_LokiSN expresses concern about the model&#8217;s quantization resistance, referencing issues with the previous M2.5 model&#8217;s UD-Q4_K_XL variant. This highlights the importance of maintaining model performance post-quantization, which can be a challenge for large models when reducing precision for deployment.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1rwl0ek/minimax_m27_is_on_the_way/">MiniMax M2.7 Is On The Way</a></strong> (Activity: 329): <strong>The image is a tweet from MiniMax announcing their participation in the NVIDIA GTC event where they plan to discuss their upcoming model, MiniMax M2.7, along with multimodal systems and AI products. This suggests that MiniMax M2.7 might incorporate multimodal capabilities, potentially handling multiple types of data inputs like text, images, and audio. The mention of multimodal systems aligns with current trends in AI development, where models are increasingly designed to process and integrate various data forms for more comprehensive outputs.</strong> A comment highlights the desire for a smaller version of the model, indicating user interest in more accessible or resource-efficient versions. Another comment praises the performance of MiniMax 2.5, noting its speed and tooling capabilities, but points out the lack of image and audio input support, which could be addressed in the upcoming M2.7 model.</p><ul><li><p>z_3454_pfk highlights the performance of MiniMax 2.5, noting its efficiency with tooling and retrieval-augmented generation (RAG). The model is praised for its speed, though it currently lacks support for image and audio inputs, which could be a limitation for some applications.</p></li><li><p>Dismal-Effect-1914 emphasizes the compactness and efficiency of MiniMax 2.5, stating it is the best model available that fits under approximately 150 GB when using 4-bit quantization. This suggests a strong balance between performance and resource usage, making it suitable for environments with limited storage capacity.</p></li></ul></li><li><p></p></li></ul>
      <p>
          <a href="https://www.latent.space/p/ainews-minimax-27-glm-5-at-13-cost">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[[AINews] Claude Cowork Dispatch: Anthropic's Answer to OpenClaw]]></title><description><![CDATA[a quiet day.]]></description><link>https://www.latent.space/p/ainews-claude-cowork-dispatch-anthropics</link><guid isPermaLink="false">https://www.latent.space/p/ainews-claude-cowork-dispatch-anthropics</guid><pubDate>Wed, 18 Mar 2026 04:59:11 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!dA-8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F319f7604-68d5-44d5-b37e-61de653f6941_1208x1334.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Note: AIE Europe is ~sold out! Tickets and limited sponsorships for <a href="http://ai.engineer/miami">AIE Miami</a> are next &#8212; as you can see from <a href="https://x.com/dexhorthy/status/2033980486813684181?s=20">online buzz</a>, <a href="https://x.com/thdxr/status/2034095613822808165?s=20">speakers</a> are excited and prepping. We&#8217;ll be there!</em></p><div><hr></div><p>By total coincidence, today&#8217;s <a href="https://www.latent.space/p/felix-anthropic">main pod guest</a> also released today&#8217;s title story:</p><blockquote><p><strong>swyx</strong>: Does remote control work for Claude Cowork yet? No. Right.</p><p><strong>Felix</strong>: Excellent question.</p><p><strong>swyx</strong>: Coming soon.</p></blockquote><p>And today, <a href="https://x.com/felixrieseberg/status/2034005731457044577?s=12">here it is</a>: </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!dA-8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F319f7604-68d5-44d5-b37e-61de653f6941_1208x1334.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dA-8!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F319f7604-68d5-44d5-b37e-61de653f6941_1208x1334.png 424w, https://substackcdn.com/image/fetch/$s_!dA-8!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F319f7604-68d5-44d5-b37e-61de653f6941_1208x1334.png 848w, https://substackcdn.com/image/fetch/$s_!dA-8!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F319f7604-68d5-44d5-b37e-61de653f6941_1208x1334.png 1272w, https://substackcdn.com/image/fetch/$s_!dA-8!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F319f7604-68d5-44d5-b37e-61de653f6941_1208x1334.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!dA-8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F319f7604-68d5-44d5-b37e-61de653f6941_1208x1334.png" width="424" height="468.2251655629139" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/319f7604-68d5-44d5-b37e-61de653f6941_1208x1334.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1334,&quot;width&quot;:1208,&quot;resizeWidth&quot;:424,&quot;bytes&quot;:270209,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.latent.space/i/191334680?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F319f7604-68d5-44d5-b37e-61de653f6941_1208x1334.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!dA-8!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F319f7604-68d5-44d5-b37e-61de653f6941_1208x1334.png 424w, https://substackcdn.com/image/fetch/$s_!dA-8!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F319f7604-68d5-44d5-b37e-61de653f6941_1208x1334.png 848w, https://substackcdn.com/image/fetch/$s_!dA-8!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F319f7604-68d5-44d5-b37e-61de653f6941_1208x1334.png 1272w, https://substackcdn.com/image/fetch/$s_!dA-8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F319f7604-68d5-44d5-b37e-61de653f6941_1208x1334.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Multiple people, from <a href="https://x.com/simonw/status/2034014713928106261?s=46">SimonW</a> to <a href="https://x.com/emollick/status/2034067677157679379?s=46">Ethan Mollick</a>, are comparing it (favorably) to OpenClaw. As <a href="https://www.latent.space/p/ainews-nvidia-gtc-jensen-goes-hard">Jensen said yesterday</a>, every company needs an OpenClaw strategy. Now Anthropic, which famously &#8220;<a href="https://x.com/morqon/status/2023203435475063157?s=20">fumbled</a>&#8221; the Clawdbot relationship, has an answer, and it&#8217;s a pretty pretty good one.</p><p>Tune in to the full pod to get the full origin story, usecases, and design thinking (particularly around the technical choices of sandboxing and Electron), on today&#8217;s pod.</p><div id="youtube2-ZpZ7lFoWaT8" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;ZpZ7lFoWaT8&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/ZpZ7lFoWaT8?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p></p><p></p><blockquote><p>AI News for 3/14/2026-3/16/2026. We checked 12 subreddits, <a href="https://twitter.com/i/lists/1585430245762441216">544 Twitters</a> and no further Discords. <a href="https://news.smol.ai/">AINews&#8217; website</a> lets you search all past issues. As a reminder, <a href="https://www.latent.space/p/2026">AINews is now a section of Latent Space</a>. You can <a href="https://support.substack.com/hc/en-us/articles/8914938285204-How-do-I-subscribe-to-or-unsubscribe-from-a-section-on-Substack">opt in/out</a> of email frequencies!</p></blockquote><div><hr></div><h1><strong>AI Twitter Recap</strong></h1><p><strong>OpenAI&#8217;s GPT-5.4 Mini/Nano Release and the Shift to Small, Coding-Optimized Models</strong></p><ul><li><p><strong>GPT-5.4 mini and nano shipped across API, ChatGPT, and Codex</strong>: OpenAI launched <strong><a href="https://x.com/OpenAI/status/2033953592424731072">GPT-5.4 mini</a></strong> and <strong><a href="https://x.com/OpenAI/status/2033953595637538849">GPT-5.4 nano</a></strong>, positioning them as its most capable small models yet. Per <a href="https://x.com/OpenAIDevs/status/2033953815834333608">@OpenAIDevs</a>, GPT-5.4 mini is <strong>more than 2x faster</strong> than GPT-5 mini, targets <strong>coding, computer use, multimodal understanding, and subagents</strong>, and offers a <strong>400k context window</strong> in the API. OpenAI also claims mini approaches larger GPT-5.4 performance on evaluations including <strong><a href="https://x.com/OpenAIDevs/status/2033953828387885470">SWE-Bench Pro and OSWorld-Verified</a></strong>, while using only <strong><a href="https://x.com/OpenAIDevs/status/2033953840312291603">30% of GPT-5.4 Codex quota</a></strong>, making it the new default for many background coding workflows and subagent fan-out.</p></li><li><p><strong>Early reception focused on coding value, but also on pricing and truthfulness tradeoffs</strong>: Developers immediately highlighted mini&#8217;s utility for <a href="https://x.com/dkundel/status/2033953901301665838">subagents in Codex</a>, <a href="https://x.com/scaling01/status/2033954794105127007">computer-use workloads</a>, and external products such as <a href="https://x.com/windsurf/status/2033954998837776869">Windsurf</a>. However, commentary also converged on a familiar OpenAI pattern: better performance but higher price. Posts from <a href="https://x.com/scaling01/status/2033955279079907511">@scaling01</a> note <strong>$0.75/M input and $4.5/M output</strong> for mini, with nano likewise priced above prior nano tiers. Third-party evals were mixed: <a href="https://x.com/mercor_ai/status/2033955468650156503">Mercor&#8217;s APEX-Agents result</a> reported <strong>24.5% Pass@1</strong> for mini with xhigh reasoning, ahead of some lightweight and midweight competitors on that benchmark, while <a href="https://x.com/petergostev/status/2033995459522396287">BullshitBench</a> placed the new small models relatively low on resistance to false-premise/jargon traps. OpenAI also quietly acknowledged behavior tuning issues, with <a href="https://x.com/michpokrass/status/2033935238066540806">@michpokrass</a> saying a recent <strong>5.3 instant</strong> update reduced &#8220;annoyingly clickbait-y&#8221; behavior.</p></li></ul><p><strong>Agent Infrastructure: Sandboxes, Subagents, Open SWE, and the Harness Wars</strong></p><ul><li><p><strong>Code-executing agents are becoming the center of product architecture</strong>: Several launches point to a stack maturing around secure execution, orchestration, and deployment ergonomics rather than just better base models. LangChain introduced <strong><a href="https://x.com/LangChain/status/2033949251529793978">LangSmith Sandboxes</a></strong> for secure ephemeral code execution, with <a href="https://x.com/hwchase17/status/2033950657619874217">@hwchase17</a> explicitly arguing that &#8220;more and more agents will write and execute code.&#8221; In parallel, LangChain open-sourced <strong><a href="https://x.com/hwchase17/status/2033977192053612621">Open SWE</a></strong>, a background coding agent patterned after internal systems reportedly used at <strong>Stripe, Ramp, and Coinbase</strong>. The system integrates with <a href="https://x.com/BraceSproul/status/2033962118970818650">Slack, Linear, and GitHub</a>, uses subagents plus middleware, and separates harness, sandbox, invocation layer, and validation. This is a notable step from &#8220;chat copilots&#8221; toward deployable internal engineering agents.</p></li><li><p><strong>Subagents and secure execution are now first-class product features across the ecosystem</strong>: OpenAI&#8217;s Codex now supports <strong><a href="https://x.com/gdb/status/2033757784437895367">subagents</a></strong>, and GPT-5.4 mini was framed by OpenAI as especially good for that use case. Hermes Agent&#8217;s <strong><a href="https://x.com/NousResearch/status/2033877040399831478">v0.3.0</a></strong><a href="https://x.com/NousResearch/status/2033877040399831478"> release</a> is another strong signal: <strong>248 PRs in 5 days</strong>, first-class <strong>plugin architecture</strong>, live Chrome control via <strong>CDP</strong>, IDE integrations, local Whisper-based voice mode, PII redaction, and provider integrations like <a href="https://x.com/Teknium/status/2033811117521408078">Browser Use</a>. The resulting direction is consistent across vendors: agent value increasingly depends on safe execution environments, composable skills/plugins, and workflow-native surfaces rather than raw benchmark gains alone.</p></li></ul><p><strong>Architecture Research: Attention Residuals, Vertical Attention, and Mamba-3</strong></p><ul><li><p><strong>Attention over depth is having a moment</strong>: Moonshot&#8217;s <strong><a href="https://x.com/Kimi_Moonshot/status/2033796781327454686">Attention Residuals paper on arXiv</a></strong> triggered substantial technical discussion around &#8220;vertical attention&#8221; or attention across layers. A detailed explainer from <a href="https://x.com/ZhihuFrontier/status/2033751367198949865">@ZhihuFrontier</a> frames the idea as each layer querying prior-layer states, effectively extending attention from horizontal sequence interactions to inter-layer memory. Community reactions emphasized that this is not entirely isolated: <a href="https://x.com/rosinality/status/2033810580604158323">@rosinality</a> noted <strong>ByteDance also implemented attention over depth</strong>, and <a href="https://x.com/arjunkocher/status/2033846693918347641">@arjunkocher</a> published an implementation walkthrough. The interesting systems claim here is that because <strong>number of layers &lt;&lt; sequence length</strong>, some forms of vertical attention may be hidden under existing compute and impose little or no extra latency.</p></li><li><p><strong>Mamba-3 strengthens the case for inference-first hybrid architectures</strong>: The other major architecture release was <strong><a href="https://x.com/_albertgu/status/2033948415139451045">Mamba-3</a></strong>, presented by <a href="https://x.com/_albertgu/status/2033948415139451045">@_albertgu</a> and <a href="https://x.com/tri_dao/status/2033948569502413245">@tri_dao</a> as the latest step in making linear/state-space models more competitive in the hybrid era. The emphasis is explicitly on <strong>inference efficiency</strong>, not replacing transformers outright. Together summarized it as a <strong><a href="https://x.com/togethercompute/status/2033956365165859026">MIMO variant</a></strong> that improves model strength at similar decode speed, with claims of strongest performance among linear models and fastest prefill+decode at <strong>1.5B</strong>. Tri Dao also pointed to inference-heavy RL and long-rollout workloads as especially fertile ground for such architectures. The broader takeaway from both Attention Residuals and Mamba-3 is that labs are still searching for ways to relax the full-transformer bottleneck without sacrificing too much ecosystem compatibility.</p></li></ul><p><strong>GTC: NVIDIA&#8217;s Agent Push, Open Models, and the Infrastructure Thesis</strong></p><ul><li><p><strong>GTC messaging centered on inference, agents, and the &#8220;token factory&#8221; worldview</strong>: Multiple posts reflected Jensen Huang&#8217;s framing of future computers as systems for <a href="https://x.com/TheTuringPost/status/2033983885131059636">&#8220;</a><strong><a href="https://x.com/TheTuringPost/status/2033983885131059636">manufacturing tokens</a></strong><a href="https://x.com/TheTuringPost/status/2033983885131059636">&#8221;</a>, with inference now driving the next capacity wave. This showed up in product and ecosystem announcements: LangChain said its frameworks crossed <strong><a href="https://x.com/LangChain/status/2033788913937195132">1B downloads</a></strong> and joined the <strong>NVIDIA Nemotron Coalition</strong>; <a href="https://x.com/ggerganov/status/2033947673825337477">@ggerganov</a> highlighted <strong>Nemotron 3 Nano 4B</strong> support in llama.cpp; and Hugging Face&#8217;s <a href="https://x.com/jeffboudier/status/2033959279510884631">@jeffboudier</a> recapped a range of open NVIDIA drops spanning reasoning models, robotics datasets, and world models.</p></li><li><p><strong>Open and enterprise agent tooling dominated side announcements</strong>: H Company released <strong><a href="https://x.com/hcompany_ai/status/2033851052714320083">Holotron-12B</a></strong>, an open multimodal model built with NVIDIA for <strong>computer-use agents</strong>. Perplexity announced <strong><a href="https://x.com/perplexity_ai/status/2033947232467357874">Comet Enterprise</a></strong>, bringing its AI browser to enterprise teams with rollout controls and <a href="https://x.com/perplexity_ai/status/2033947356551647356">CrowdStrike Falcon integration</a>. NVIDIA&#8217;s broader business thesis also got amplified: <a href="https://x.com/TheTuringPost/status/2033981870141231215">@TheTuringPost</a> highlighted Jensen&#8217;s remark that the often-cited <strong>$1T AI infra opportunity</strong> only covers a subset of the stack through 2027, reinforcing that the industry is still very early in inference infrastructure buildout.</p></li></ul><p><strong>Open-Source Tooling, Local Agents, and Developer Stack Upgrades</strong></p><ul><li><p><strong>Local/private agent workflows keep improving</strong>: Hugging Face shipped an <strong><a href="https://x.com/ClementDelangue/status/2033982183791108278">hf CLI extension</a></strong> that auto-detects the best local model/quant for available hardware and spins up a local coding agent. Unsloth launched <strong><a href="https://x.com/UnslothAI/status/2033926272481718523">Unsloth Studio</a></strong>, an open-source web UI to train and run <strong>500+ models</strong> locally across Mac/Windows/Linux, with claims of <strong>2x faster training using 70% less VRAM</strong>, GGUF support, synthetic data tooling, tool calling, and code execution. Ollama added <a href="https://x.com/ollama/status/2033993519459889505">web search/fetch plugins and headless launch support</a> for OpenClaw workflows, while also showing up as a <a href="https://x.com/ollama/status/2033794815448780803">provider in CodexBar</a>.</p></li><li><p><strong>The &#8220;open coding agent&#8221; ecosystem is becoming legible</strong>: There&#8217;s increasing convergence on patterns: model-agnostic harnesses, structured skills, filesystem/state abstractions, and ephemeral cloud or local execution. LangChain&#8217;s <a href="https://x.com/RoundtableSpace/status/2033955271333011829">Deep Agents</a> was described as an MIT-licensed, inspectable replica of the Claude Code style of agentic harness. Hermes Agent&#8217;s plugin system and local-model friendliness pushed it into the same conversation. This is one of the clearer trends in the dataset: the frontier is no longer just open-weight models, but open harnesses and runtime layers for actually deploying agents.</p></li></ul><p><strong>Top tweets (by engagement)</strong></p><ul><li><p><strong>OpenAI small-model launch</strong>: <a href="https://x.com/OpenAIDevs/status/2033953815834333608">@OpenAIDevs on GPT-5.4 mini/nano</a> was among the day&#8217;s most consequential technical announcements, especially for coding-agent workloads.</p></li><li><p><strong>Cursor&#8217;s RL-based context compaction</strong>: <a href="https://x.com/cursor_ai/status/2033967614309835069">@cursor_ai</a> said it trained Composer to <strong>self-summarize through RL instead of prompting</strong>, cutting compaction error by <strong>50%</strong> and enabling harder long-horizon coding tasks.</p></li><li><p><strong>Mamba-3 release</strong>: <a href="https://x.com/_albertgu/status/2033948415139451045">@_albertgu</a> and <a href="https://x.com/tri_dao/status/2033948569502413245">@tri_dao</a> marked one of the most important architecture updates in sequence modeling this cycle.</p></li><li><p><strong>Unsloth Studio</strong>: <a href="https://x.com/UnslothAI/status/2033926272481718523">@UnslothAI</a> had one of the strongest open-source product launches, aimed squarely at local training/inference practitioners.</p></li><li><p><strong>Kimi Attention Residuals</strong>: <a href="https://x.com/Kimi_Moonshot/status/2033796781327454686">@Kimi_Moonshot</a> drove much of the architecture discussion, with follow-on analysis around vertical attention and inter-layer memory.</p></li></ul><div><hr></div><h1><strong>AI Reddit Recap</strong></h1><h2><strong>/r/LocalLlama + /r/localLLM Recap</strong></h2><h3><strong>1. Unsloth Studio Launch and Features</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1rwa0f7/unsloth_announces_unsloth_studio_a_competitor_to/">Unsloth announces Unsloth Studio - a competitor to LMStudio?</a></strong> (Activity: 998): <strong>Unsloth Studio has been announced as a new open-source, no-code web interface for training and running AI models locally, potentially challenging the dominance of LMStudio in the GGUF ecosystem. It is compatible with </strong><code>Llama.cpp</code><strong> and offers features such as auto-healing tool calling, Python and bash code execution, and support for audio, vision, and LLM finetuning. The platform supports GGUFs and runs on Mac, Windows, and Linux, with capabilities for SVG rendering, synthetic data generation, and fast parallel data preparation. Installation is straightforward via </strong><code>pip install unsloth</code><strong>. More details can be found in the <a href="https://unsloth.ai/docs/new/studio#run-models-locally">Unsloth Documentation</a>.</strong> Some users question the characterization of LMStudio as the &#8216;go-to&#8217; for advanced users, suggesting alternatives like vLLM or llama.cpp. Others express excitement over the UI&#8217;s capabilities, particularly for training and data preparation.</p><ul><li><p><strong>danielhanchen</strong> highlights the extensive feature set of Unsloth Studio, noting its capabilities such as auto-healing tool calling, Python and bash code execution, and support for multiple operating systems including Mac, Windows, and Linux. The tool also offers advanced functionalities like SVG rendering, synthetic data generation, and fast parallel data preparation, making it a comprehensive solution for various AI tasks. More details and installation instructions are available on <a href="https://github.com/unslothai/unsloth">GitHub</a>.</p></li><li><p><strong>sean_hash</strong> points out the convenience of having both fine-tuning and inference capabilities integrated into a single tool like Unsloth Studio. This contrasts with the current need to use multiple projects to achieve the same functionality, highlighting Unsloth Studio&#8217;s potential to streamline AI development workflows.</p></li><li><p><strong>Specter_Origin</strong> expresses appreciation for Unsloth Studio&#8217;s open-source nature, contrasting it with the closed-source LM Studio. This openness could be a significant advantage for developers who prefer transparency and the ability to modify the tool according to their needs.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1rw9jmf/introducing_unsloth_studio_a_new_opensource_web/">Introducing Unsloth Studio: A new open-source web UI to train and run LLMs</a></strong> (Activity: 579): <strong>Unsloth Studio is a new open-source web UI designed to train and run large language models (LLMs) locally on Mac, Windows, and Linux. It claims to train over </strong><code>500+ models</code><strong> at twice the speed while using </strong><code>70% less VRAM</code><strong>. The platform supports GGUF, vision, audio, and embedding models, and includes features like model comparison, self-healing tool calling, and web search. It also offers auto-creation of datasets from formats like PDF, CSV, and DOCX, and allows for code execution to enhance LLM output accuracy. Models can be exported to formats such as GGUF and Safetensors, with auto-tuning of inference parameters. Installation is facilitated via </strong><code>pip install unsloth</code><strong>. <a href="https://github.com/unslothai/unsloth">GitHub</a> and <a href="https://unsloth.ai/docs/new/studio">documentation</a> are available for further details.</strong> Commenters are enthusiastic about Unsloth Studio as a fully open-source alternative to existing platforms, highlighting its accessibility for fine-tuning models, especially for users with less expertise. There is anticipation for upcoming support for AMD, which is expected to broaden its usability.</p><ul><li><p>A user highlights the importance of making fine-tuning accessible, noting that Unsloth Studio provides an easy way to fine-tune models, which has been a challenge since the release of LLaMA 2. This accessibility could potentially revive the &#8216;golden age of fine-tunes&#8217;, making it easier for those with less expertise to engage in model customization.</p></li><li><p>Another user points out a technical issue encountered during installation, where an OSError due to insufficient disk space occurred while downloading a large <code>torch</code> package. This highlights a common challenge in AI/ML projects related to managing dependencies and system resources, suggesting that atomic installation of components might be necessary to lower the entry barrier.</p></li><li><p>An AMD representative expresses readiness to support the upcoming official AMD support for Unsloth Studio, indicating potential improvements in compatibility and performance for AMD hardware users. This collaboration could enhance the usability of Unsloth Studio across different hardware platforms.</p></li></ul></li></ul><h3><strong>2. Qwen3.5-9B Document Benchmark Results</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1rv98wo/qwen359b_on_document_benchmarks_where_it_beats/">Qwen3.5-9B on document benchmarks: where it beats frontier models and where it doesn&#8217;t.</a></strong> (Activity: 295): <strong>The image compares the performance of Alibaba&#8217;s Qwen3.5-9B and OpenAI&#8217;s GPT-5.4 on document AI benchmarks. Qwen3.5-9B ranks #9 with a score of </strong><code>77.0</code><strong>, excelling in &#8220;Key Information Extraction&#8221; and &#8220;Table Understanding,&#8221; while GPT-5.4 ranks #4 with a score of </strong><code>81.0</code><strong>, leading in other areas. The benchmark results highlight Qwen3.5-9B&#8217;s superior performance in &#8220;OmniOCR&#8221; but its lag in &#8220;OmniDoc&#8221; and &#8220;IDP Core.&#8221; This aligns with the detailed breakdown in the post, where Qwen models outperform in OCR and VQA tasks but fall behind in table extraction and handwriting OCR.</strong> One commenter suggests that AI technology is reaching a functional ceiling, indicating that current models are sufficient for many tasks and can run efficiently on less powerful hardware. Another comment anticipates interesting comparisons with GLM-OCR, while a third notes the potential energy efficiency of using smaller Qwen models for tasks that can tolerate longer processing times.</p><ul><li><p><strong>Qwen3.5-9B&#8217;s performance</strong>: The model demonstrates competitive performance against larger frontier models, particularly in document processing tasks. Its ability to run efficiently on lower-end hardware, such as ultrabooks, highlights its energy efficiency and accessibility for broader applications. This suggests a shift towards optimizing smaller models for specific tasks rather than relying solely on larger, more resource-intensive models.</p></li><li><p><strong>Energy efficiency and reasoning</strong>: The Qwen3.5-9B model is noted for its energy efficiency, especially in tasks requiring extended reasoning. Compared to larger models like Gemini or GPT, Qwen3.5-9B offers a more sustainable option if processing time is not a critical factor. This positions it as a viable alternative for applications where energy consumption is a priority.</p></li><li><p><strong>Model variants and benchmarks</strong>: There is curiosity about the absence of larger Qwen model variants, such as the 27B dense and 35B MoE, in the benchmarks. This absence raises questions about the comparative performance and potential advantages of these larger models in specific tasks, suggesting a need for further exploration and benchmarking of these variants.</p></li></ul></li></ul><h3><strong>3. Mistral Small 4 and DGX Station Availability</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1rvlfbh/mistral_small_4119b2603/">Mistral Small 4:119B-2603</a></strong> (Activity: 1057): <strong>Mistral Small 4 is a hybrid AI model with </strong><code>119 billion parameters</code><strong> and a </strong><code>256k context length</code><strong>, integrating Instruct, Reasoning, and Devstral capabilities. It supports multimodal input and features an efficient architecture that reduces latency by </strong><code>40%</code><strong>. The model includes advanced features like speculative decoding and 4-bit float quantization, optimized for tasks such as general chat, coding, and document analysis. It is available under an Apache 2.0 license for both commercial and non-commercial use. More details can be found on the <a href="https://huggingface.co/mistralai/Mistral-Small-4-119B-2603">Hugging Face page</a>.</strong> Commenters humorously note the shift in scale, with <code>120 billion parameters</code> now considered &#8216;small&#8217;, reflecting the rapid evolution in AI model sizes and capabilities.</p><ul><li><p>The Mistral Small 119B model is being compared to the Qwen3.5-122B-A10B model, with a focus on parameter activation. Mistral activates 6.5 billion parameters, whereas Qwen3.5 utilizes 10 billion, which may explain why Mistral does not outperform Qwen3.5 overall. This highlights the importance of parameter activation in model performance.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1rvnppg/dgx_station_is_available_via_oem_distributors/">DGX Station is available (via OEM distributors)</a></strong> (Activity: 418): <strong>The image depicts a high-performance workstation, likely the NVIDIA DGX Station, which is now available through OEM distributors. This machine is designed for AI and deep learning applications, featuring advanced cooling and performance capabilities. The DGX Station is equipped with NVIDIA&#8217;s latest technology, making it a &#8216;dream machine&#8217; for many in the AI community. The discussion highlights its availability through distributors like Dell and Exxact, with prices reportedly in the </strong><code>85-90k USD</code><strong> range. The concept of &#8216;coherent memory&#8217; is mentioned, which refers to a memory architecture that allows for efficient data sharing between CPUs and GPUs, potentially enhancing performance in AI workloads.</strong> There is a discussion about the pricing and availability of the DGX Station, with some users noting discrepancies in Dell&#8217;s product listings. The concept of &#8216;coherent memory&#8217; is also questioned, indicating a curiosity about its implications for GPU performance.</p><ul><li><p>The DGX Station is priced between <code>85-90k USD</code>, as noted by users observing current market listings. This pricing positions it as a high-end machine, likely targeting enterprise or research institutions rather than individual consumers.</p></li><li><p>The DGX Station, despite its high cost and advanced capabilities, lacks a video output unless an additional card is installed. This design choice highlights its focus on computational tasks rather than traditional graphical output, aligning with its role as a data center or AI research tool rather than a consumer-grade product.</p></li><li><p>The concept of &#8220;coherent memory&#8221; in the DGX Station is questioned, with users speculating whether it allows full memory access to the GPU, similar to the DGX Spark. This feature would be significant for tasks requiring large datasets and high-speed processing, emphasizing the machine&#8217;s suitability for AI and machine learning applications.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1rvohug/mistral_small_4_mistral_ai/">Mistral Small 4 | Mistral AI</a></strong> (Activity: 323): <strong>Mistral Small 4 is a multimodal AI model with </strong><code>119 billion parameters</code><strong> and a </strong><code>256k context window</code><strong>, utilizing a Mixture of Experts (MoE) architecture with </strong><code>128 experts</code><strong>. It is designed to optimize performance across reasoning, multimodal processing, and coding tasks, allowing configurable reasoning effort. Released under the Apache 2.0 license, it supports both text and image inputs, aiming for efficient enterprise deployment with reduced latency and improved throughput over its predecessor, Mistral Small 3. More details can be found in the <a href="https://mistral.ai/news/mistral-small-4">original announcement</a>.</strong> Commenters are intrigued by the model&#8217;s <code>6.5B active parameters</code>, comparing its inference cost to Qwen 3.5 35B-A3B, but with a larger expert pool. Concerns were raised about Mistral&#8217;s tool calling issues in previous versions, particularly with hallucinating function signatures and dropping parameters. The model&#8217;s performance on agentic tasks and context quality beyond <code>32k</code> are key areas of interest.</p><ul><li><p>RestaurantHefty322 highlights the competitive positioning of Mistral Small 4, noting that its <code>119B</code> parameters with <code>6.5B</code> active parameters align its inference cost with models like Qwen 3.5 35B-A3B, but with a larger expert pool. This could challenge Qwen&#8217;s dominance in the <code>~7B</code> active parameter tier, especially if Mistral has improved its tool calling capabilities, which were problematic in Devstral 2 due to issues like hallucinating function signatures and dropping parameters in multi-step chains.</p></li><li><p>The discussion touches on the importance of text and code quality at the <code>6-7B</code> active parameter range for local deployments, with a particular interest in how Mistral Small 4 handles context quality beyond <code>32k</code>. This is a critical area where smaller MoE models often struggle, despite having longer advertised context lengths.</p></li><li><p>RepulsiveRaisin7 expresses skepticism about Mistral Small 4&#8217;s improvements over Devstral 2, which was perceived as lagging behind competitors. The comment reflects a broader concern about whether Mistral Small 4 can offer tangible advantages over existing models like Qwen, especially given its size and the competitive landscape.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/LocalLLaMA/comments/1rvfypu/mistral_4_family_spotted/">Mistral 4 Family Spotted</a></strong> (Activity: 687): <strong>The Mistral 4 family introduces a hybrid model that integrates capabilities from three distinct model families: Instruct, Reasoning (formerly Magistral), and Devstral. The Mistral-Small-4 model features a </strong><code>Mixture of Experts (MoE)</code><strong> architecture with </strong><code>128 experts</code><strong> and </strong><code>4 active</code><strong>, totaling </strong><code>119 billion parameters</code><strong> with </strong><code>6.5 billion activated per token</code><strong>. It supports a </strong><code>256k context length</code><strong> and accepts multimodal input (text and image) with text output. Key functionalities include configurable reasoning effort, multilingual support, and agentic capabilities with native function calling. The model is open-sourced under the Apache 2.0 License. <a href="https://huggingface.co/mistralai/Mistral-Small-4-119B-2603">Mistral-Small-4</a> is designed for both speed and performance, offering a large context window and vision capabilities.</strong> Commenters are enthusiastic about the model&#8217;s capabilities, particularly its position in the <code>120 billion parameter</code> range, comparable to models like <strong>gpt-oss-120B</strong> and <strong>Qwen-122B</strong>. There is anticipation for its performance and potential applications.</p><ul><li><p>The Mistral 4 model is a hybrid architecture that integrates capabilities from three distinct model families: Instruct, Reasoning (formerly Magistral), and Devstral. It features a mixture of experts (MoE) with 128 experts and 4 active, allowing for 119 billion parameters with 6.5 billion activated per token. The model supports a 256k context length and accepts multimodal input, including both text and images, with text output. It also offers configurable reasoning effort, enabling a switch between fast instant replies and more computationally intensive reasoning modes.</p></li><li><p>Mistral 4 is designed to be highly versatile, supporting multilingual capabilities across dozens of languages and offering advanced agentic functionalities with native function calling and JSON output. It is optimized for speed and performance, maintaining strong adherence to system prompts. The model is released under the Apache 2.0 license, which allows for both commercial and non-commercial use and modification, making it accessible for a wide range of applications.</p></li><li><p>The model&#8217;s integration with llama.cpp is underway, as indicated by a pull request on GitHub. This suggests that Mistral 4 will soon be supported by llama.cpp, a popular framework for running large language models efficiently. This integration is likely to enhance the model&#8217;s accessibility and usability for developers looking to leverage its capabilities in various applications.</p></li></ul></li></ul><h2><strong>Less Technical AI Subreddit Recap</strong></h2><blockquote><p>/r/Singularity, /r/Oobabooga, /r/MachineLearning, /r/OpenAI, /r/ClaudeAI, /r/StableDiffusion, /r/ChatGPT, /r/ChatGPTCoding, /r/aivideo, /r/aivideo</p></blockquote><h3><strong>1. AI Model and Tool Innovations</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/singularity/comments/1rvlvw5/incredible_stuff_incoming/">INCREDIBLE STUFF INCOMING</a></strong> (Activity: 483): <strong>The image presents a slide from a presentation on the NVIDIA Nemotron 3 Ultra Base model, which is approximately </strong><code>500B</code><strong> in size. It claims to be the &#8220;Best Open Base Model&#8221; with </strong><code>5X</code><strong> efficiency and high reasoning accuracy. The slide includes bar graphs that compare the performance of Nemotron 3 Ultra against other models like GLM and Kimi K2 across various benchmarks, including Peak Throughput, Understanding MMLU Pro, Code HumanEval, Math GSM8K, and Multilingual Global MMLU. The Nemotron 3 Ultra is highlighted for its superior performance in these categories.</strong> Commenters express skepticism about the benchmarks, noting that NVIDIA does not specify which GLM model is used for comparison and that the Kimi K2 model is relatively old, being eight months old. There is also a critique of the presentation technique, suggesting that starting the graph at <code>60%</code> exaggerates the performance gap.</p><ul><li><p><strong>elemental-mind</strong> points out the ambiguity in NVIDIA&#8217;s announcement, noting that they don&#8217;t specify which GLM model is being referred to. They highlight that the Kimi K2 model, if it&#8217;s the base version, is comparable to MiniMax M2.1 and GLM-5-no-reasoning in terms of intelligence, suggesting that the comparison might not be as impressive as it seems.</p></li><li><p><strong>FullOf_Bad_Ideas</strong> clarifies the distinction between base models and their finetuned counterparts. They suggest that the models being compared are likely Kimi K2 Base 1T and GLM 4.5 355B Base, rather than the more advanced K2.5 or GLM 5, which are instruct/reasoning finetunes. This distinction is crucial for understanding the performance and capabilities being discussed.</p></li><li><p><strong>ThunderBeanage</strong> expresses skepticism about the relevance of Kimi K2, describing it as outdated. They doubt that the GLM model mentioned is the latest GLM 5, implying that the comparison might not reflect the current state-of-the-art models. This skepticism highlights the importance of specifying model versions in performance discussions.</p></li></ul></li></ul><h3><strong>2. AI in Creative and Entertainment Applications</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/StableDiffusion/comments/1rv40xc/showing_real_capability_of_ltx_loras_dispatch_ltx/">Showing real capability of LTX loras! Dispatch LTX 2.3 LORA with multiple characters + style</a></strong> (Activity: 932): <strong>The post discusses the creation of a LORA model using LTX 2.3, trained on approximately </strong><code>440 clips</code><strong> from the game Dispatch, each with </strong><code>121 frames</code><strong> on average. The model includes over </strong><code>6 characters</code><strong> with distinct voices and styles, achieved by assigning each character a unique trigger word and detailed captions. The training was conducted using the <a href="https://github.com/AkaneTendo25/musubi-tuner">musubi fork by akanetendo25</a> and involved splitting clips with </strong><code>pyscene</code><strong>, converting them to </strong><code>24 fps</code><strong>, and using a custom captioning tool. The dataset was divided into HD and SD groups based on clip length, and training involved </strong><code>31GB VRAM</code><strong> usage with </strong><code>4 blockswap</code><strong>. The model was trained to </strong><code>64 rank</code><strong> to accommodate the complexity of the data, and checkpoints were made every </strong><code>500 steps</code><strong>. The author notes that LTX, while not as visually strong as WAN, offers significant potential for pre-visualization in game development.</strong> One commenter expressed skepticism about WAN 2.5 being open source, while another praised the dedication involved in training with <code>440 clips</code>, noting the clean results.</p><ul><li><p>Lars-Krimi-8730 inquires about the technical details of training the LTX 2.3 LORA model, specifically asking about the trainer used, settings, captioning methods, and resolution. This indicates a keen interest in the reproducibility and technical setup of the model training process.</p></li><li><p>Anxious_Sample_6163 highlights the use of 440 clips in the training process, which suggests a significant level of dedication and effort in data preparation. This number of clips implies a robust dataset that likely contributes to the model&#8217;s performance and cleanliness.</p></li><li><p>SvenVargHimmel asks about the training duration on a <code>5090</code> GPU, which points to interest in the computational resources and time efficiency of the model training process. This question is relevant for understanding the scalability and feasibility of training similar models.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/StableDiffusion/comments/1rutgoa/oldnokia_ultrareal_flux2klein_9b_lora/">oldNokia Ultrareal. Flux2.Klein 9b LoRA</a></strong> (Activity: 541): <strong>The post announces a retrained version of the Nokia 2MP Camera LoRA, named OldNokia UltraReal, designed to replicate the aesthetic of mid-2000s phone cameras. Key features include a soft-focus plastic lens effect, a washed-out color palette, and digital artifacts like JPEG compression and chroma noise, all trained on the author&#8217;s Nokia E61i photo archive. The model is available for download on <a href="https://civitai.com/models/1808651/oldnokia-ultrareal">Civitai</a> and <a href="https://huggingface.co/Danrisi/oldNokia_flux2_klein9b">Hugging Face</a>.</strong> One commenter humorously notes that Nokia cameras historically lacked the dynamic range depicted in the model. Another suggests training the model on <code>qwen-image</code> for further enhancement, while a third expresses enthusiasm for the LoRA and shares a personal project involving frame injection.</p><ul><li><p>jigendaisuke81 suggests training the model on <code>qwen-image</code>, indicating interest in exploring how the model performs with different datasets or architectures. This could imply a focus on enhancing image generation capabilities or testing the model&#8217;s adaptability to various image styles.</p></li><li><p>Striking-Long-2960 mentions an interest in &#8216;frame injection in Wan2GP&#8217;, which suggests a technical exploration of integrating frames into generative models. This could involve manipulating or enhancing image sequences, potentially for video or animation purposes, using the LoRA model.</p></li><li><p>berlinbaer highlights the technical achievement of the LoRA model in replicating specific visual effects, such as &#8216;blown out highlights with their blue-red color shift&#8217;. This suggests a focus on the model&#8217;s ability to accurately mimic complex photographic effects, which might be challenging to achieve through simple prompting alone.</p></li></ul></li></ul><h3><strong>3. AI and Employment Impact</strong></h3><ul><li><p><strong><a href="https://www.reddit.com/r/singularity/comments/1rw2tan/antrophic_ceo_says_50_entrylevel_whitecollar_jobs/">Antrophic CEO says 50% entry-level white-collar jobs will be eradicated within 3 years</a></strong> (Activity: 2162): <strong>Anthropic CEO predicts that </strong><code>50%</code><strong> of entry-level white-collar jobs will be eliminated within the next three years due to advancements in AI technologies. This statement highlights the rapid integration of AI in the workplace, potentially replacing tasks traditionally performed by humans, even when AI solutions like </strong><em><strong>copilot</strong></em><strong> may not yet match human expertise in quality and accuracy. The prediction underscores a significant shift in the job market, emphasizing the need for adaptation and skill evolution among the workforce.</strong> A notable comment highlights a personal experience where AI is being used to perform tasks inadequately, leading to errors and incorrect conclusions. This reflects a broader concern about the premature reliance on AI in professional settings, potentially undermining human expertise and job security.</p><ul><li><p>Due_Answer_4230 highlights a practical issue with AI integration in workplaces, where AI tools like Copilot are being used to replace human work, even when they perform poorly. This results in errors and incorrect conclusions, yet management may prefer AI for its speed, undermining skilled workers who have invested years in developing their expertise.</p></li><li><p>Stahlboden references a prediction from a year ago that AI would write 100% of code, noting that while this hasn&#8217;t fully materialized, AI&#8217;s role in coding has significantly increased. This reflects a broader trend of AI&#8217;s growing capabilities in technical fields, suggesting a potential future where AI could dominate certain tasks.</p></li><li><p>Environmental_Dog331 points out the lack of solutions from AI leaders regarding job displacement due to AI advancements. The comment underscores the challenge of creating new jobs at a pace that matches AI-driven job losses, highlighting a critical gap in strategic planning for workforce transitions.</p></li></ul></li><li><p><strong><a href="https://www.reddit.com/r/ChatGPT/comments/1rv9rsl/nbc_news_survey_finds_americans_hate_ai_even_more/">NBC News survey finds Americans hate AI even more than ICE</a></strong> (Activity: 1146): <strong>An NBC News survey reveals that only </strong><code>26%</code><strong> of voters have a positive view of AI, while </strong><code>46%</code><strong> hold negative views, making AI less favorable than most topics except the Democratic Party and Iran. This reflects a broader skepticism towards AI, despite its widespread use and potential as a productivity tool. The survey highlights a disconnect between AI&#8217;s perceived capabilities and its actual utility, particularly in replacing jobs that require significant industry knowledge.</strong> Commenters note a paradox where frequent AI users still harbor resentment due to overhyped claims about AI&#8217;s capabilities, particularly its potential to replace white-collar jobs. There&#8217;s a consensus that while AI is a powerful tool, it is not yet capable of replacing jobs requiring deep industry knowledge.</p><ul><li><p>TimeTravelingChris highlights the gap between AI&#8217;s potential and its current practical applications, noting that while AI can be a powerful productivity tool, it is not yet capable of replacing jobs that require significant industry and company knowledge. The commenter emphasizes the importance of validating AI outputs, as the technology still has notable gaps when scrutinized closely.</p></li><li><p>AlexWorkGuru discusses the disparity between AI&#8217;s potential as demonstrated by labs and the everyday experiences of users, which often involve frustrating interactions with basic AI implementations like chatbots and automated phone systems. This gap contributes to a credibility issue for AI, as the companies promoting it are often those that users already distrust, exacerbating negative perceptions.</p></li><li><p>bjxxjj points out that public perception of AI is heavily influenced by negative associations such as job layoffs and surveillance, rather than practical applications like educational chatbots. This suggests that survey results on AI sentiment may be skewed by the specific aspects of AI that respondents are considering.</p></li></ul></li></ul><h1><strong>AI Discords</strong></h1><p>Unfortunately, Discord shut down our access today. We will not bring it back in this form but we will be shipping the new AINews soon. Thanks for reading to here, it was a good run.</p>]]></content:encoded></item><item><title><![CDATA[Why Anthropic Thinks AI Should Have Its Own Computer — Felix Rieseberg of Claude Cowork & Claude Code Desktop]]></title><description><![CDATA[Claude Cowork came out of an accident.]]></description><link>https://www.latent.space/p/felix-anthropic</link><guid isPermaLink="false">https://www.latent.space/p/felix-anthropic</guid><pubDate>Tue, 17 Mar 2026 21:39:16 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/191097767/358962f9a42c533afaf4acbd16bdab9b.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p><a href="https://claude.com/product/cowork">Claude Cowork</a> came out of an accident.</p><p>Felix and the Anthropic team <strong>noticed something interesting with Claude Code</strong>: many users were using it primarily for all kinds of messy knowledge work instead of coding. Even technical builders would use it for lots of non-technical work.</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/felixrieseberg/status/2010882577113268372&quot;,&quot;full_text&quot;:&quot;Claude Code doesn't just resonate with developers anymore. Non-technical people are using it to build things. Technical people are using it for non-technical work. The line is blurring.\n\nI'm by far not the first to think about this. Multiple teams at Anthropic have been working&quot;,&quot;username&quot;:&quot;felixrieseberg&quot;,&quot;name&quot;:&quot;Felix Rieseberg&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1544558915819487233/qMrauBqx_normal.jpg&quot;,&quot;date&quot;:&quot;2026-01-13T01:12:21.000Z&quot;,&quot;photos&quot;:[],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:88,&quot;retweet_count&quot;:148,&quot;like_count&quot;:1719,&quot;impression_count&quot;:321000,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:false}" data-component-name="Twitter2ToDOM"></div><p>Even more shocking, Claude cowork <a href="https://www.axios.com/2026/01/13/anthropic-claude-code-cowork-vibe-coding">wrote itself</a>. With a team of humans simply orchestrating multiple claude code instances, the tool was ready after a brief week and a half.</p><p>This isn&#8217;t Felix&#8217;s first rodeo with impactful and playful desktop apps. He&#8217;s helped ship <strong>the Slack desktop app</strong> and is <strong>a core maintainer of Electron</strong> the open-source software framework used for building cross-platform desktop applications, even putting Windows 95 into an Electron app that runs on macOS, Windows, and Linux.</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/felixrieseberg/status/1032642127178547201&quot;,&quot;full_text&quot;:&quot;I put Windows 95 into an Electron app that now runs on macOS, Windows, and Linux. It's a terrible idea that works shockingly well. I'm so sorry.\n\nGo grab it here: <a class=\&quot;tweet-url\&quot; href=\&quot;https://github.com/felixrieseberg/windows95/releases\&quot;>github.com/felixrieseberg&#8230;</a> &quot;,&quot;username&quot;:&quot;felixrieseberg&quot;,&quot;name&quot;:&quot;Felix Rieseberg&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1544558915819487233/qMrauBqx_normal.jpg&quot;,&quot;date&quot;:&quot;2018-08-23T14:54:03.000Z&quot;,&quot;photos&quot;:[{&quot;img_url&quot;:&quot;https://pbs.substack.com/media/DlSuLBJVsAAFs8r.jpg&quot;,&quot;link_url&quot;:&quot;https://t.co/YquOnOGrSz&quot;}],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:474,&quot;retweet_count&quot;:5615,&quot;like_count&quot;:16045,&quot;impression_count&quot;:0,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:false}" data-component-name="Twitter2ToDOM"></div><p>In this episode, Felix joins us to unpack why execution has suddenly become cheap enough that teams can &#8220;just build all the candidates&#8221; and why the real frontier in AI products is no longer better chat, but trusted task execution.</p><p>He also shares why Anthropic is betting on local-first agent workflows, why skills may matter more than most people realize, and how the hardest questions ahead are about autonomy, safety, portability, and the changing shape of knowledge work itself.</p><h2>We discuss</h2><ul><li><p><strong>Felix&#8217;s path:</strong> <a href="https://slack.engineering/introducing-electron-to-the-windows-runtime/">Slack desktop app</a>, <a href="https://felixrieseberg.com/things-people-get-wrong-about-electron/">Electron</a>, Windows 95 in JavaScript, and now building Claude Cowork at Anthropic</p></li><li><p><strong>What Claude Cowork actually is:</strong> a more user-friendly, VM-based version of Claude Code designed to bring agentic workflows to non-terminal-native users</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!2kE-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6cab17ea-b452-4fd6-b7b5-8bb2f3cd3364_2568x700.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!2kE-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6cab17ea-b452-4fd6-b7b5-8bb2f3cd3364_2568x700.png 424w, https://substackcdn.com/image/fetch/$s_!2kE-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6cab17ea-b452-4fd6-b7b5-8bb2f3cd3364_2568x700.png 848w, https://substackcdn.com/image/fetch/$s_!2kE-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6cab17ea-b452-4fd6-b7b5-8bb2f3cd3364_2568x700.png 1272w, https://substackcdn.com/image/fetch/$s_!2kE-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6cab17ea-b452-4fd6-b7b5-8bb2f3cd3364_2568x700.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!2kE-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6cab17ea-b452-4fd6-b7b5-8bb2f3cd3364_2568x700.png" width="1456" height="397" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6cab17ea-b452-4fd6-b7b5-8bb2f3cd3364_2568x700.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:397,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:355520,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.latent.space/i/191097767?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6cab17ea-b452-4fd6-b7b5-8bb2f3cd3364_2568x700.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!2kE-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6cab17ea-b452-4fd6-b7b5-8bb2f3cd3364_2568x700.png 424w, https://substackcdn.com/image/fetch/$s_!2kE-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6cab17ea-b452-4fd6-b7b5-8bb2f3cd3364_2568x700.png 848w, https://substackcdn.com/image/fetch/$s_!2kE-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6cab17ea-b452-4fd6-b7b5-8bb2f3cd3364_2568x700.png 1272w, https://substackcdn.com/image/fetch/$s_!2kE-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6cab17ea-b452-4fd6-b7b5-8bb2f3cd3364_2568x700.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><a href="https://news.ycombinator.com/item?id=47220118">https://news.ycombinator.com/item?id=47220118</a></figcaption></figure></div><p></p><ul><li><p><strong>Why &#8220;user-friendly&#8221; does not mean &#8220;less powerful&#8221;:</strong> Cowork as a superset product, much like how VS Code initially looked simpler than Visual Studio but became more hackable and extensible</p></li><li><p><strong>Anthropic&#8217;s prototype-first culture:</strong> why Cowork was built in 10 days using many pre-existing internal pieces, and how internal prototypes shaped the final product</p></li><li><p><strong>Why execution is getting cheap:</strong> the shift from long memos, specs, and debate toward rapidly building multiple candidates and choosing based on reality instead of theory</p></li><li><p><strong>The local debate:</strong> why Felix thinks Silicon Valley is undervaluing the local computer, and why putting Claude &#8220;where you work&#8221; is often more powerful</p></li><li><p><strong>Why Claude gets its own computer:</strong> the VM as both a safety boundary and a capability unlock, letting Claude install tools, run scripts, and work more independently without constant approval</p></li><li><p><strong>Safety through sandboxing:</strong> why &#8220;approve every command&#8221; is not a real long-term UX, and how virtual machines create a middle ground between uselessly safe and dangerously autonomous</p></li><li><p><strong>How Cowork differs from Claude Code:</strong> coding evals vs. knowledge-work evals, different system-prompt tradeoffs, longer planning horizons, and heavier use of planning and clarification tools</p></li><li><p><strong>Why skills matter:</strong> simple markdown-based instructions as a lightweight abstraction layer for reusable workflows, personalized automation, and portable agent behavior</p></li><li><p><strong>Skills vs. MCPs:</strong> why Felix is increasingly interested in file-based, text-native interfaces that tell the model what to do, rather than forcing everything through rigid tool schemas</p></li><li><p><strong>The portability problem:</strong> why personal skills should move across agent products, and the unresolved tension between public reusable workflows and private user-specific context</p></li><li><p><strong>Real use cases already happening today:</strong> uploading videos, organizing files, handling taxes, managing calendars, debugging internal crashes, analyzing finances, and automating repetitive browser workflows</p></li></ul><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/claudeai/status/2010805685530038351&quot;,&quot;full_text&quot;:&quot;In Cowork, you give Claude access to a folder on your computer. Claude can then read, edit, or create files in that folder.\n\nTry it to create a spreadsheet from a pile of screenshots, or produce a first draft from scattered notes. &quot;,&quot;username&quot;:&quot;claudeai&quot;,&quot;name&quot;:&quot;Claude&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1950950107937185792/QOfEjFoJ_normal.jpg&quot;,&quot;date&quot;:&quot;2026-01-12T20:06:49.000Z&quot;,&quot;photos&quot;:[{&quot;img_url&quot;:&quot;https://substackcdn.com/image/upload/w_1028,c_limit,q_auto:best/l_twitter_play_button_rvaygk,w_88/pw5wkuixerzmpf9kubzx&quot;,&quot;link_url&quot;:&quot;https://t.co/GEaMgDksUp&quot;}],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:264,&quot;retweet_count&quot;:888,&quot;like_count&quot;:12198,&quot;impression_count&quot;:7127562,&quot;expanded_url&quot;:null,&quot;video_url&quot;:&quot;https://video.twimg.com/amplify_video/2010793257588973569/vid/avc1/1154x720/JDWnCMUbcAY3K1Qj.mp4&quot;,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><ul><li><p><strong>Why AI products should work with your existing stack:</strong> Anthropic&#8217;s bias toward integrating with Chrome, Office, and existing workflows instead of rebuilding every app from scratch</p></li></ul><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/felixrieseberg/status/2031823821561610532&quot;,&quot;full_text&quot;:&quot;Shipping today: Small but meaningful updates to Claude in Excel &amp;amp; PowerPoint!\n\nWe obviously want Claude to be helpful in your work lives across a wide range of apps and data - and with this change, PowerPoint &amp;amp; Excel can share context and gain support for Skills.&quot;,&quot;username&quot;:&quot;felixrieseberg&quot;,&quot;name&quot;:&quot;Felix Rieseberg&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1544558915819487233/qMrauBqx_normal.jpg&quot;,&quot;date&quot;:&quot;2026-03-11T20:05:23.000Z&quot;,&quot;photos&quot;:[],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:19,&quot;retweet_count&quot;:27,&quot;like_count&quot;:474,&quot;impression_count&quot;:29043,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><ul><li><p><strong>Computer use one year later:</strong> how much better it has gotten, why vision plus browser context is such a superpower, and why letting Claude see the thing it is working on changes everything</p></li><li><p><strong>Why many &#8220;AI verticals&#8221; may get compressed:</strong> specialized wrappers may matter in the short term, but better general models and stronger primitives could absorb a lot of narrow use cases</p></li><li><p><strong>The future of junior work:</strong> Felix&#8217;s concerns about entry-level roles, labor-market disruption, and whether AI can compress early-career learning into denser simulated experience</p></li><li><p><strong>Why Waterloo grads stand out:</strong> internships, shipping experience, and learning how real teams build products versus purely theoretical academic preparation</p></li><li><p><strong>The agentic future of the desktop:</strong> what it means for Claude to have its own computer, whether AI should act on your machine or a remote one, and how intimacy with personal data changes the product design space</p></li><li><p><strong>Why Electron still mattered:</strong> shipping Chromium as a controlled rendering stack, the limits of OS-native webviews, and why browser engines remain one of the great software abstractions</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/felixrieseberg/status/940288635202543616&quot;,&quot;full_text&quot;:&quot;&#10024; &#128218; I wrote a short book! I spent weeks squeezing the most important \&quot;getting started\&quot; knowledge about building desktop apps with <span class=\&quot;tweet-fake-link\&quot;>@electronjs</span> into just 55 pages.\n\nThanks to <span class=\&quot;tweet-fake-link\&quot;>@OReillyMedia</span>'s great <span class=\&quot;tweet-fake-link\&quot;>@allymacdonald</span>, it's now a pretty solid read!\n\n<a class=\&quot;tweet-url\&quot; href=\&quot;https://www.safaribooksonline.com/library/view/introducing-electron/9781491996041/\&quot;>safaribooksonline.com/library/view/i&#8230;</a> &quot;,&quot;username&quot;:&quot;felixrieseberg&quot;,&quot;name&quot;:&quot;Felix Rieseberg&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1544558915819487233/qMrauBqx_normal.jpg&quot;,&quot;date&quot;:&quot;2017-12-11T18:34:15.000Z&quot;,&quot;photos&quot;:[{&quot;img_url&quot;:&quot;https://pbs.substack.com/media/DQyRXazU8AAEmIG.jpg&quot;,&quot;link_url&quot;:&quot;https://t.co/xKYYtH82NC&quot;}],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:3,&quot;retweet_count&quot;:28,&quot;like_count&quot;:107,&quot;impression_count&quot;:0,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div></li><li><p><strong>Anthropic&#8217;s Labs mentality:</strong> wild internal experiments, half-broken future-looking prototypes, and the broader effort to move users from asking questions to delegating increasingly long and valuable tasks</p></li><li><p><strong>Why the endgame is not just more capability, but more independence:</strong> teaching users to trust AI with bigger scopes of work, for longer durations, with fewer interventions</p></li></ul><div><hr></div><h2>Felix Rieseberg</h2><ul><li><p>X: <a href="https://x.com/felixrieseberg">https://x.com/felixrieseberg</a></p></li><li><p>LinkedIn: <a href="https://www.linkedin.com/in/felixrieseberg">https://www.linkedin.com/in/felixrieseberg</a></p></li><li><p>Website: <a href="https://felixrieseberg.com/">https://felixrieseberg.com/</a></p></li></ul><h2>Anthropic</h2><ul><li><p>Website: <a href="http://anthropic.com">http://anthropic.com</a></p></li></ul><h2>Full Video Pod</h2><div id="youtube2-ZpZ7lFoWaT8" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;ZpZ7lFoWaT8&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/ZpZ7lFoWaT8?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><h2>Timestamps</h2><p>00:00 &#8212; Cheap execution and building all the candidates<br>00:44 &#8212; Intro in the new Kernel studio<br>02:47 &#8212; What Claude Cowork is<br>04:18 &#8212; Why user-friendly can be more powerful<br>05:33 &#8212; How Anthropic built Cowork<br>07:09 &#8212; Prototype-first product development<br>08:00 &#8212; Why local computers still matter<br>09:20 &#8212; Skills, primitives, and platform leverage<br>12:13 &#8212; Cowork&#8217;s architecture: VM + Chrome + system prompt<br>15:38 &#8212; Felix&#8217;s own bug-fixing Cowork workflows<br>17:38 &#8212; Local-first agents<br>20:16 &#8212; Evals, planning, and knowledge-work optimization<br>23:14 &#8212; What Anthropic means by evals<br>24:21 &#8212; Scaffolding, tools, and why skills matter<br>27:44 &#8212; Demo: YouTube uploads and self-generated skills<br>31:03 &#8212; Calendar automation and cleaning your desktop<br>34:47 &#8212; Browser context and why DOM access matters<br>37:47 &#8212; Skills portability and plugins<br>44:36 &#8212; Which AI categories survive?<br>46:19 &#8212; Junior jobs, simulated work, and labor disruption<br>52:00 &#8212; Gradual takeoff vs big-bang takeoff<br>53:42 &#8212; Finance, taxes, and enterprise verticals<br>56:24 &#8212; Vision and the improvement in computer use<br>57:31 &#8212; Why Claude writes its own scripts<br>58:06 &#8212; Should Claude have its own computer?<br>1:01:26 &#8212; Windows 95 in JavaScript<br>1:03:19 &#8212; VM tradeoffs and sandbox design<br>1:07:23 &#8212; Approval fatigue and safe delegation<br>1:11:18 &#8212; The future of Cowork<br>1:12:27 &#8212; What comes next for agentic knowledge work<br>1:15:13 &#8212; Electron, Chromium, and desktop software lessons<br>1:22:16 &#8212; Multiplayer agents and coworker-to-coworker workflows<br>1:26:05 &#8212; Anthropic Labs and closing thoughts</p><h2>Transcript</h2><p><strong>Alessio</strong>: Hey everyone. Welcome to the Latent Space Podcast, our first one in the new studio. This is Alessio, founder of Kernel Labs, and I&#8217;m joined by swyx, editor of Latent Space.</p><p><strong>swyx</strong>: Yeah, so nice to be here. Thanks to, uh, TJ, Alessio, Allen helping to set everything up. It looks beautiful. We even have the logo outside.</p><p>Yeah, kind.</p><p><strong>Felix</strong>: It&#8217;s like really nice, right? When you walk in here as a guest, you&#8217;re like, ah, this is a serious production. You&#8217;re like, feel it immediately.</p><p><strong>swyx</strong>: Yeah. Felix, you&#8217;ve been, you&#8217;re, you&#8217;re currently a product manager of Cowork or,</p><p><strong>Felix</strong>: uh, really Technic</p><p><strong>swyx</strong>: Eng. Yeah. The, the identities are kind of vague member technical staff.</p><p><strong>Felix</strong>: I know member staff is like, the official title will carry around forever.</p><p><strong>swyx</strong>: Yeah. I basically kind of wanted, like we&#8217;ve been. Kinda obsessed. I, I&#8217;ve been using it a lot, even for managing latent space. Like, uh, cowork helps me upload videos and like title things and like edit and everything. It&#8217;s, it&#8217;s like really amazing.</p><p><strong>Alessio</strong>: Cool. He said multiple times Cowork has said gi in the group track.</p><p><strong>swyx</strong>: Yeah, yeah, yeah. So, so we have a second, uh, we have a second channel, uh, for latent space tv. Uh, and I, uh, and uh, we basically, this is our Discord meetup. Um, and I I, we have like Claude Coworks, it might be a GI, I don&#8217;t know if we, we have, uh, uploaded it yet, but one of the sessions was like a, like a Claude cowork thing.</p><p><strong>Felix</strong>: I, you have to see, I would love to see it. Like, I&#8217;m so curious, like one of the most fun parts of my job is like constantly see the weird things people use Cowork for because it&#8217;s obviously like very hard for us to actually design for specific use cases we do. But like every single person who&#8217;s like most amazed is usually amazed about a thing that I didn&#8217;t even expect cowork would be good at.</p><p>Um, we have a new designer and it&#8217;s one of the first small tasks. I was like, Hey, we need like a new emoji for cowork for our internal stock. It&#8217;s like a pretty small thing. I like, can you please do it? And he drew an SVG and just gave it to coworker was like, can you animate this emoji? And now it has like this beautiful loopy animation.</p><p>Um, and I mean, I think obviously this goes down to like, it turns out you can do more things with code than you expected, but it, it&#8217;s like that kind of stuff that is really fun to me. So, long story short, I would love to see like, the kind of things you&#8217;re doing.</p><p><strong>swyx</strong>: I&#8217;ll pull it up. I&#8217;ll pull it up.</p><p><strong>Felix</strong>: Yeah. Yeah.</p><p><strong>swyx</strong>: Uh, but before we get into it, I, I think always wanna start with like a top level. What is Claude Cowork for people who haven&#8217;t heard of it? Haven&#8217;t tried it out.</p><p><strong>Felix</strong>: Okay. Uh, real quick, Claude Cowork is a user friendly version of Claude Code. So the way it basically works is we have Claude Code and for us, fairly impressive agent harness that over December we noticed more and more people are using either, even though they&#8217;re not technical, they, they&#8217;re not at home in the terminal or they are at home in the terminal, but they started using Claude Code for non-coding workloads, right?</p><p>Like managing expenses or like filling out receipts or organizing a knowledge base. Like there was a big obsidian moment that a lot of people liked and we wanted to capitalize on that, but also bring, bring this capability to people who are not terminal native and who might not know how to like brew and store something.</p><p>So cowork is Claude Code running in original machine with a little bit of padding, a little bit more guardrails, making it a little safer and a little bit more convenient for people who don&#8217;t wanna first open up the terminal when they go to work.</p><p><strong>swyx</strong>: It&#8217;s interesting, uh, that is kind of. Pitch that way as a more user friendly thing because I always feel like it, it, to me, I I treat it as like why I&#8217;m familiar with Claude Code.</p><p>Like we, we did a Claude Code episode Yeah. A year ago. But this one is like even more power user tools &#8216;cause it, uh, it kind of integrates much better with like clotting Chrome and, uh, in all the, all the other tooling. But like, maybe, maybe that&#8217;s like a perception thing, right? Like</p><p><strong>Felix</strong>: No, honestly, I don&#8217;t think you&#8217;re wrong.</p><p>This is like a, a thing I&#8217;ve been thinking a lot about for like the last two weeks. So,</p><p><strong>swyx</strong>: but when they say user friendly, it&#8217;s like, oh, it&#8217;s the dumb down version. But no, actually this is the superset.</p><p><strong>Felix</strong>: Yeah. Like, I think a similar thing happened, A similar thing happened to me about 10 years ago, like maybe 12 years ago when I was at Microsoft and we started working on, on Electron and like browser-based technologies and cross-platform stuff.</p><p>And one of the first use cases was Visual Studio Code, which used to be a website. And the initial narrative was, or Visual Studio Code is, is like a more user-friendly version of Visual Studio. But in a similar vein, I think there was some voices saying, oh, this is. For serious developers, like, we&#8217;re not gonna use this.</p><p>Right? For like anything. And I think in the end what happened is people have different stories about why Visual Studio Code became such a big thing. But my personal, my personal belief is that the Hackability and the extendability has like played a pretty big role, right? You can hook in Visual Studio Code that like almost any workload, it&#8217;s so easy to hack on, so easy to put extensions for it.</p><p>And I think cowork might be hitting a similar thing where it&#8217;s very easy to extend and it&#8217;s very easy to bring into your workflows. Uh, so the convenience I think is a bit of a, it&#8217;s obviously the thing we strive for as developers, but I think the way people find value in it then is by probably mapping it onto whatever they actually have to do in their job.</p><p><strong>Alessio</strong>: So end of last year, you see the spike of like non-technical usage and clock code. What&#8217;s the design process to say we should make clock code work? Because I mean, you built it in only 10 days. Um, I&#8217;m sure there was some discussion before on whether it&#8217;s easier to use mean. You know, like making, making like a desktop GUI is obviously one way to do it, but like there&#8217;s a lot of nuance in the product.</p><p>Like maybe talk people through what was like the trigger of like, we should build a separate thing. We should not build like a different plot code thing. And then maybe some of the more interesting design decisions that maybe you didn&#8217;t take.</p><p><strong>Felix</strong>: Yeah, I think philanthropic, we&#8217;ve been thinking about ways to move people who are comfortable with using Claude to answer questions and bring more of the power of like this thing to now like, execute tasks for you.</p><p>I can like solve problems for you can like build things for you. How do we bring that capability to people who are currently mostly comfortable with like a like question answer paradigm within the chat. And we&#8217;ve had a lot of prototypes around that. Just going back as far as like easily a year and a half.</p><p>Like we had a lot of people working on that. Um, and internally philanthropic is a very prototype demo, first culture. We have a lot of like internal prototypes that don&#8217;t reach the public. What Cowork actually became is like we sort of picked the right pieces out of the many prototypes that we had.</p><p>Right. And that&#8217;s, that&#8217;s maybe also like, I think an important qualifier whenever people mention this like 10 day number. I do think it&#8217;s important to me to mention that within Double Scratch there was like a lot of stuff already happening, right? Like, and I think it&#8217;s important for people to remember that when you build a website, you use React, you use like a bunch of other things.</p><p>And this is like a similar scenario with like a lot of pieces we already had. Um, and in terms of decision path, I think we live in like an interesting new world where execution is actually quite cheap.</p><p><strong>swyx</strong>: Mm-hmm.</p><p><strong>Felix</strong>: So maybe, maybe what you would do That&#8217;s so crazy. The year. I know it&#8217;s wild.</p><p><strong>swyx</strong>: You should be, ideas are cheap.</p><p>Execution is the hard part. I</p><p><strong>Felix</strong>: know. And like the, we, we used to live in this world maybe where you would take a product manager and the product manager would go to a number of potential customers and in this like very low bandwidth way, would try to. Try to like tease out what are the problems they&#8217;re having, what are they willing to buy?</p><p>Um, and then maybe what can you build to like drive out that need and then you go back and you like draft a spec and you think about it and then like you make a design and you execute it. We internally philanthropic app, not pretty much closer to the point where we&#8217;re like, don&#8217;t even write a memo, just like build, like let&#8217;s build all the candidates very quickly.</p><p>Let&#8217;s just build all of them and then pick the best ones. I think the, the decision that is most impactful both for the product as well for the users right now is like the way we put value on your local computer. I think that&#8217;s a big decision point a lot of people have thought about. Should this thing, whatever it is, should it ultimately run into computer or should it run in the cloud?</p><p>&#8216;cause they&#8217;re big trade offs, right?</p><p><strong>Alessio</strong>: I guess like if we solve auth, it would be easy to do in the cloud. But I think like the fact that I can just download any file from anywhere and then put it and cowork there, it&#8217;s like a big unlock. Um, I mean it&#8217;s interesting you mentioned reusing certain pieces. I think this is something I&#8217;ve been thinking about even with Claude Code, right?</p><p>The price of like writing code is going to zero, blah, blah, blah. But it actually seems like the value of having some sort of platform substrate is like increasing because as you build these new things, you can kind of plug them together.</p><p><strong>Felix</strong>: Yeah.</p><p><strong>Alessio</strong>: So I almost feel like when people are saying, oh, the value of a lot of software is gonna zero because you can recreate it, to me it&#8217;s almost like the opposite.</p><p>It&#8217;s like having an existing platform to build on top of. It&#8217;s like even more valuable because you can kind of bolt things on.</p><p><strong>Felix</strong>: Yeah.</p><p><strong>Alessio</strong>: You have obviously mcps, you have skills, you have like obviously the models, which is a big part. All these things kind of come together. Do you feel like that&#8217;s a valid way to think about it, where people should invest even more in kind of like primitives.</p><p>To rebuild on or are you like recreating a lot of it each time because like things change and it&#8217;s easier to rewrite than reuse?</p><p><strong>Felix</strong>: You know, I think, I think you&#8217;re right. I think you&#8217;re right that the holistic platform is really useful. And this is maybe a whole like a somewhat contrarian view to a lot of people in ai.</p><p>I actually don&#8217;t think that the future is going to be hyper personalized software down to the point where everyone is running their own version. Like, I actually think it&#8217;s going to be quite hard for all of us to have our own internal chat tool and like, if I wanna talk to you, like</p><p><strong>swyx</strong>: how</p><p><strong>Felix</strong>: is that gonna work, right?</p><p>In the, in the context of cowork and how we build it, I think it&#8217;s a bit of a combination. Like what the, the execution that gets cheap is not necessarily rebuilding all the primitives. I think our priori, there&#8217;s also not a lot of value in it. So for instance, my team did not think about rebuilding clock code.</p><p>We&#8217;re like very much started with the. The core thesis of this should be Claude Code.</p><p>Mm-hmm.</p><p><strong>Felix</strong>: And then we&#8217;ll like build things on top of it. The part of the execution that gets a little cheaper is like, how do you take all of these Lego pieces and put them together in a way that makes sense for users?</p><p>It&#8217;s like actually valuable. You have so many different approaches now in terms of what kind of, what kind of things do you actually elevate to a primitive, do you strongly believe that all your products should be built by just combining primitive that the public also has available? Do you keep some things internal?</p><p>Um, and I think that&#8217;s still evolving, but I think what&#8217;s probably gonna go away is like, I&#8217;m not sure if it&#8217;s gonna fully go away, but I&#8217;m gonna say, I think for me personally, I will probably no longer try to come up with a really good product without testing up with people. This is not a new concept, but wherever you used to have to make costly decisions around, do we pick technology A or technology B, or do we like, um, build it this way, build it the other way.</p><p>I really strongly believe now you just build all of them and try them out with a small focus group and then whatever, whatever is better is what you go with. Right. And that, that is probably quite different even from how we maybe worked a year ago. Right. Like, I think, I think this happened very recently.</p><p><strong>Alessio</strong>: Yeah. I started building something in on Electron since you&#8217;re here. Coincidence. Uh, but then Electron and like SQL Light are like, there&#8217;s like some issues that like between development and like, uh, building anyway. And I was like, let&#8217;s just rebuild the whole thing in Swift and just recreated the whole thing in Swift.</p><p>And it&#8217;s like, I. It&#8217;s done.</p><p><strong>swyx</strong>: You know, I didn&#8217;t take any effort. I, I, I don&#8217;t even know Swift.</p><p><strong>Alessio</strong>: Yeah, exactly. I was like, I&#8217;m the, I&#8217;m not reviewing it anyway, whatever. You can write in whatever language you pick, but the important stuff that I did was not write the electron bindings. Yeah. It was like the logic of what happens in the app, you know, and then the model is like, yeah, I can just recreate the same thing as with</p><p><strong>swyx</strong>: Yeah.</p><p>I, I think you still want, especially for people who are doing like high performance software or like very complex software, uh, you still want like, some view of the architecture. Uh, but you can use markdown for that,</p><p><strong>Felix</strong>: right? Yeah.</p><p><strong>swyx</strong>: Uh, you don&#8217;t actually have to read the code again. I, I&#8217;m still like on a sort of like a definitional thing.</p><p>Um, can we build a good mental model of Claude Cowork? Um, this is what I have, right? Like you you said it&#8217;s like fundamentally cloud co. We don&#8217;t wanna touch it. There&#8217;s the cloud app, there&#8217;s clouding Chrome. I think you guys do something different in planning, but, uh, I&#8217;ve been talking with Tariq who is on the cloud co team, and you guys are, he&#8217;s like, no, we just exposed planning.</p><p>Maybe we can clarify like, what are the major pieces. That people should be aware. It goes into cowork, like,</p><p><strong>Felix</strong>: okay, I think you basically have them. So really, um, you can, you can take planning more or less out. I think there&#8217;s a few things that are really valuable in cowork. Um, the virtual machine is probably the most powerful thing.</p><p>So we currently run like a, we currently run like a lightweight VM and we put clocked out into the vm and we do that for, for, um, a number of reasons. Safety and security is a big one, but even if you, even if you ignore for a second safety and security and you&#8217;re just like, okay, Yolo, I want this thing to do whatever.</p><p>It is quite powerful to give Claus on computer that is like generally a good idea. And in terms of architecture and UX and everything else that we&#8217;ve been working on, philanthropic, it often is quite useful for you to like anthropomorphize, um, clot aggressively and just be like, this is a person. What will you do if you give a, if you had a person, right?</p><p>Yeah. And the analogy I&#8217;ve given my dad this morning who is still like quite insistent on using chat even for like coding things, is if you were a developer and your employer told you that you don&#8217;t need a computer, they&#8217;re just gonna like, send you emails with a code and you send emails with code back like that, maybe work for Patrick Miles in the back, but that it&#8217;s not very effective.</p><p>Um, so what we can do with the VM is because it&#8217;s a, it&#8217;s a Linux system, Claude Code has more or less free reign to install whatever needs to install. It can install Python, it can install no js. We do have strict network ingress and egress controls. So you can still, as, as a user in like plain human language, make it clear to, to the entire system what you&#8217;re okay with and what you&#8217;re not okay with.</p><p>But at no point do we have to ask a real person, like a, like a person who might be in marketing or a lawyer. I&#8217;d have to go to a lawyer and be like, are you okay with me installing Homebrew?</p><p><strong>Alessio</strong>: Yeah, yeah.</p><p><strong>Felix</strong>: Right. Because the implications of the question and the answer are complex and nuanced and like, not, not easy to reason about.</p><p>This gives us a lot of distraction that makes Cloud very powerful. Now then around it, we, we do probably have a number of things that also keeps growing almost every single week that you&#8217;re probably noticing that make cowork maybe better for certain tasks than just cloud. Cloud on its own. Yeah. But most of those actually live in the system prompt.</p><p>They&#8217;re about like, what can we infer about the work that you do? What can we, what can we intru in the system prompt to make that more effective? It&#8217;s of course the like very tight integration with Cloud and Chrome. You&#8217;re noticing that a lot of people, especially as the models get better, a lot of people throw up their hands when it comes to MCP connectors in this area.</p><p>I&#8217;m not gonna, I&#8217;m not gonna go through like 25 M CCP connectors, click off everywhere and then like half of them don&#8217;t let me do the things anyway. So Cloud and Chrome is quite powerful because we can just talk to the cloud and Chrome sub agent and that will just do things for you.</p><p><strong>swyx</strong>: Yeah, so, so one example right in MCPI, honestly, I think that the state of MCP is kind of, kind of.</p><p>Really hard to integrate. Um, I need to, I needed to add, uh, Figma MCP to the coding agent that I use.</p><p><strong>Felix</strong>: Yeah.</p><p><strong>swyx</strong>: Uh, and, but I didn&#8217;t wanna read the docs, so I just had caught to it. And it&#8217;s, it&#8217;s great at reading docs and the same, same way I had to set up like a Google Cloud, um, account for some project I was working on and get some API keys somewhere.</p><p>And Google Cloud is famously super hard to navigate, so I just didn&#8217;t wanna deal with any of it. I just used Claude Cowork</p><p><strong>Felix</strong>: within the first week of developing on Core. This happened very, very quickly. Um, I caught myself by starting to use cowork for coding tasks, which is not ostensibly what we built it for, right?</p><p>We don&#8217;t need to. But I found myself, um, I found myself like on our internal, internal tool that we have for, to collect crashes and just like debugging information and I found myself sort like picking out the ones that I think we can easily fix versus the ones that might be like kernel corruption or something else on the operating system.</p><p>And I found myself sort of picking these out and then just telling Clark, go fix this bug. I was like, what am I doing here? Go one level up, tell a cowork, I want you to go to all these crash tools. I want you to find all the bugs that you think are fixable and not like an operating system crash. And then I want you to tell another cloud to like fix all of that.</p><p>Um, and that&#8217;s, that&#8217;s, that&#8217;s sort of another cloud,</p><p><strong>swyx</strong>: just so it can spin up another instance or,</p><p><strong>Felix</strong>: uh, it, currently what I do is, um, and this is a bit of a hack, but I tell it to use clockwork remote to which website itself? Yeah, that&#8217;s interesting. So you basically take, if you, if you imagine like a dashboard with like 20 bucks, you, this is remote control or clock or remote, or, sorry, I just wanted to confirm what, the way I&#8217;m using it is.</p><p>I have cowork running and I&#8217;m telling cowork, here&#8217;s where I normally go every morning to find the latest bugs. Go read the entire bug list, separate out which ones are fixable, which ones are, are fixable, and then for the fixable ones, four is this almost loop. For each bug, write a markdown file with a prompt.</p><p>And then for each markdown v, that is a prompt. Start of a cloud set. So natively Claude Code has</p><p><strong>swyx</strong>: this concept of subagents. Mm-hmm. And this is basically a subagent, but you&#8217;re not using the subagent functionality.</p><p><strong>Felix</strong>: I&#8217;m not using the subagent functionality. And the reason I&#8217;m not is because I&#8217;m firing that off as a Claude Code remote</p><p><strong>swyx</strong>: task.</p><p><strong>Felix</strong>: Yes. That&#8217;s kind of nice. &#8216;cause then I can just fire it off. I can go to my next meeting and in Claude Code remote. Now the work is happening.</p><p><strong>swyx</strong>: Mm-hmm. Yeah. You, you see like you&#8217;re already starting to use the cloud over your local machine. And I think this is one of those things where like. Shouldn&#8217;t just everything just be cloud first, right?</p><p><strong>Felix</strong>: Ah, this is such a good group. I&#8217;m like solely bad about this. I have so many thoughts about that. Okay. So I generally believe that Silicon Valley overall is undervaluing the local computer. And my default argument for that is always how come we&#8217;re all using MacBooks and not like an iPad or a Chromebook?</p><p>Um, that there is like still value in, in having a local machine. And now when I think about Clot, it&#8217;s this entity that is supposed to be very useful to you, like it tremendously useful to you. I think that entity needs to have access to all the same tools you have access to. Otherwise it&#8217;s gonna be hamstrung in like all these complex ways.</p><p>And there&#8217;s, there&#8217;s sort of two approaches we could take. We could say, okay, we&#8217;re gonna like one by one chip away at everything that is at your computer and move it into the cloud. That&#8217;s, that&#8217;s one way to do it. Um, and I think other products have taken that path. I personally, this is a very personal opinion, but I personally, for the amount of tools that I use.</p><p>Just don&#8217;t have the patience to give another tool like permissions to every single thing and keep those permissions up to date. The second thing that I&#8217;m still grappling with, and I don&#8217;t have a good answer for anyone just yet, but the second thing I&#8217;m still grappling with is what does it look like for someone to slurp up your entire work and put that in the cloud?</p><p>Like if I, just as an example, like if you could click a button and it just clone your entire computer into the cloud, is that something that you would want? I&#8217;m not totally convinced yet that all everyone will. Mm-hmm. And that is sort of like upstream of all the technical issues we&#8217;re gonna have. &#8216;cause like in general, I think the world is not ready for this kind of stuff.</p><p>Like, I&#8217;ll give you one quick example that would probably be very easy for us. So as a desktop app, we in theory with your permission, can do a lot of things on your computer, including reading your Chrome cookies. If we really want to do right, we could take your Chrome cookies, you would have to decrypt them for us.</p><p>We could put those on the cloud if we really felt like it. Pretty easy solution. That would be super cool. We could just be like, oh, we can do all your tasks in the cloud now. Um, a lot of websites, thanks, include it. If, if they see the same authentication from like two different locations, we&#8217;ll just lock down your account and now you have to go to the branch and be like, okay, I, I&#8217;m here with my passport.</p><p>You actually know that. Wow. Yeah. As tired as well are of the term agent for the age agent future, I think there&#8217;s a lot of stuff that sort of slowly needs to catch up and until that&#8217;s the case, the way I, as someone&#8217;s working on clock and make Cloud most effective is to like put it where you are working.</p><p><strong>swyx</strong>: Anything else? I thought with our mental model, so like, basically like, uh, part of me also just want, like the more I understand how it works, the more I can use it to its full potential. Right?</p><p><strong>Felix</strong>: Yeah.</p><p><strong>swyx</strong>: And so what I&#8217;m get hearing from you is you told me to delete the planning thing. You&#8217;re not doing anything special on, on the, that&#8217;s only exclusive to Qua cowork.</p><p><strong>Felix</strong>: We have some tricks for this sort of like change week over week. We eval cowork maybe against different use cases than he would evil clock code, right? If you think about it this way. Okay, so like clock code is our eval clock cowork. Yeah. So clock code is like quite optimized for coding tasks and we mostly value it whether or not we&#8217;re getting better or worse depending on how good it is at like a typical suite job.</p><p>And Clark Cowork on the other hand, we evaluate more against typical knowledge work, the kind of stuff he would find in finance or in like maybe a, like in like a legal office. Um, my personal use case is always like managing my things, like managing my personal mortgage or something like that, right? Or like wealth planning for me and my family.</p><p>Those are the kinds of use cases we eval, clock cowork on. And what you might be picking up on is like the subtle changes we make to the system. Prompt what we put in the system, prompt how we steer, clot with the tools we give it. Um, like either it&#8217;d be better in one or the other direction and whether there&#8217;s a trade off, try us exist a lot.</p><p>CLO code will be better of a code and Claude Cowork will be better. For non-coding tasks, will those gaps still exist in the next three generations of models? It&#8217;s like a little unclear to me though.</p><p><strong>swyx</strong>: Yeah,</p><p><strong>Felix</strong>: because right now these like hyper optimizations we make, I&#8217;m not sure for how long they&#8217;re still be relevant.</p><p><strong>swyx</strong>: I think what I was referring to was also, it, it just, uh, it qualitatively felt different when I probably, it&#8217;s just all prompting and I&#8217;m reading too much into it, but like the, the fact that it comes out with like a nine step plan, I can edit the plan and give feedback and, and, and see it execute the plan.</p><p>Yeah. It felt more long range than in Claude Code, but maybe that already existed in Claude Code and you just build a nicer UI for it.</p><p><strong>Felix</strong>: It&#8217;s kind of both. Um, like if the Clark Code people who build the planning functionalities would city, they probably say yes, we have all of those things in Clark code and they do.</p><p>Um, I think people tend to give cowork. Tasks that are maybe of longer time horizon, I thought is</p><p><strong>swyx</strong>: so long. Yeah.</p><p><strong>Felix</strong>: That&#8217;s like one thing, right? It&#8217;s just like that the, the chunk of work tends to be maybe a little bigger. And then the second thing is that because the work, when it gets longer, it gets a little bit more ambiguous.</p><p>We do tell co-work to make heavy use of the planning tool or to make heavy use of the ask user question tool, right? We do want it to come up with like. Different scenarios of, okay, tease out what the user actually wants. Don&#8217;t go off to work for like four hours and then come back with the wrong thing.</p><p>And you&#8217;re probably picking up on that.</p><p><strong>swyx</strong>: Yeah.</p><p><strong>Felix</strong>: Um, I wish I could tell you I like built this magical thing and it&#8217;s like, there&#8217;s some secret sauce,</p><p><strong>swyx</strong>: but No, no, no. I mean, it&#8217;s, it&#8217;s just clarity is good that, you know, engineers just want to know. Yeah. They can, they can plan around it. And then I think also for me, um, I am realizing I have to switch to my, my other machine because this is a new machine that doesn&#8217;t have my session.</p><p>But, uh, yeah, the, the, the planning is really important for, for me to like approve or like to see whether it&#8217;s like, it&#8217;s right. The ask is, the question is so beautifully presented. I mean, it also, it also available in like cursor and, and in Claude Code. But like, I, I think like it&#8217;s so nice to see that it, like it&#8217;s kind of for me like to understand that it gets me, it gets what I want to do.</p><p><strong>Felix</strong>: Yeah.</p><p><strong>swyx</strong>: Yeah.</p><p><strong>Felix</strong>: It probably very hard</p><p><strong>swyx</strong>: just on the topical evals. Mm-hmm. When you say eval, I think people are very vague about what it means. Is it just like vibe testing or do you have like automated programmatic evals of Claude Cowork?</p><p><strong>Felix</strong>: When we say eval, uh, what we really mean is that we essentially take the entire transcript, including all the tools that clot has available ultimately to it, and we then measure what are the outputs, depending on what we tweak, right?</p><p>So we do run that a lot. We use that in training. Um, we use that in, in like, if you sort of separate out post training from like the scaffolding around it. Cowork sort of exists in the scaffolding space, but obviously we also train on it a little bit. Um, so when we say eval, we mean given the certain transcript, what do the outputs look like?</p><p>Including the file outputs as well as like the actual token outputs, like the ones that you see in the chat window.</p><p><strong>Alessio</strong>: I&#8217;m curious, um, how much of the failure modes are the model intelligence versus like the usage of the end tool to put the intelligence in? Like the well planning is like a good example, right?</p><p>It&#8217;s like one thing is to come up with a plan. The other thing is like make a nice spreadsheet. Yeah. That kind of runs you through the plan. Like how have you seen that? Well,</p><p><strong>Felix</strong>: the thing that I grapple with a lot is that whatever scaffolding you come up with, I think we still have a bit of sort of like model overhang where the model is dramatically more capable than right.</p><p>Users end up using it for. And I think part of that is that we&#8217;re just not getting the model all the tools to do all the things that&#8217;s theory capable of, right? There&#8217;s like one thing, um, however, whenever you do build the scaffolding, I&#8217;m sort of wondering at what point, at what point will that scaffolding go away and like how much you invest in figuring out what the right scaffolding is.</p><p>It&#8217;s kind of up to, it&#8217;s a little bit of a bet. And one thing that I as an NJ quite enjoy is that like working in philanthropic and working at a frontier lab, I maybe have a little bit more insight into what&#8217;s coming, coming down the chute in terms of like, what&#8217;s the next model, what is the model capable of?</p><p>What is good at, what is it bad at? And I&#8217;m, I&#8217;m increasingly wondering, is the right thing for us to like really invest too much in sort of these like scaffolding corrections where the model might otherwise not misbehave, but just not do the thing that you want?</p><p><strong>Alessio</strong>: Yeah.</p><p><strong>Felix</strong>: Or is it to just like give it as many capabilities as possible, try to make those safe so there&#8217;s the worst case scenarios, likeno status might be otherwise.</p><p>And then just simply wait a second for the next model drop. I&#8217;m personally, currently more leaning into the ladder. I think we&#8217;re gonna see a lot of like applications and companies that do very impressive things with ai that in the short term might seem very effective &#8216;cause they&#8217;re very specialized to individual use cases.</p><p>But I think once models get better generalization and get better at like those specific use cases without being super guided on those, I&#8217;m not sure how long that&#8217;s gonna stick around. And you can kind of, kind of already see this in like skills and NCP servers, right? Mm-hmm. We&#8217;ve, we&#8217;ve already seen sort of this like slow shift from MCP service to skills.</p><p>And like, maybe a good example is Barry who made skills. He was initially hacking on something that honestly looked a lot, looked, looked a lot like what Cowork does today. It was sort of thinking about what if cowork, but for like people who don&#8217;t wanna build code. Mm-hmm. And, um, he too did that as a prototype inside the desktop app.</p><p>One of the first use cases we thought of were, okay, what, what are like coding like use cases that could really benefit from graphical interfaces and like from being a little separated from the actual underlying code. And everyone comes with the same answers. Data analysis,</p><p><strong>Alessio</strong>: right?</p><p><strong>Felix</strong>: Yeah. Or saying how many users do we have today?</p><p>How many, like, it&#8217;s always data analysis. And I think the thing that ultimately led to skills is that we wanted to connect this little prototype to our data warehouse and. The team very quickly discovered that like instead of building a custom tool for the thing to talk our data warehouse, they just like meet and embarked on follow like mm-hmm.</p><p>Dear Claude, if you want to get data, here&#8217;s the end point. Here&#8217;s what the API looks like. You&#8217;ll figure it out.</p><p><strong>swyx</strong>: Ah.</p><p><strong>Felix</strong>: And then it be hand over control. Yeah, yeah. Also just like maybe go one step up in the layer of abstractions, right. Just, yeah. Instead of, instead of telling the thing, here&#8217;s ACL I, please call the CLI, or here&#8217;s an MCP.</p><p>Please call this ECT shape. Just like this is the end point. If you wanna know something, if you post here, maybe you can do post sql. It&#8217;s gonna be okay. And that ended up being so effective that they started trying the same pattern of like just giving the model a markdown file that describes whatever it needs to do.</p><p>That the whole thing eventually became skills and we&#8217;re like. We should package this up. This is a good idea.</p><p><strong>swyx</strong>: Yeah. Um, we&#8217;ve had Barry Mahesh, uh, on, on our conference and uh, he&#8217;s uh, definitely got a good idea there.</p><p><strong>Felix</strong>: Yeah.</p><p><strong>swyx</strong>: I wanted to show you the, how I&#8217;ve been using Claude Cowork.</p><p><strong>Felix</strong>: Uh, this is was my favorite part.</p><p><strong>swyx</strong>: This is this. So this is like me, uh, this is how we run the Discord. Uh, we literally, uh, at first I didn&#8217;t trust Cloud Core. This was my very first usage.</p><p><strong>Felix</strong>: Okay.</p><p><strong>swyx</strong>: Right. So then I was like, okay, I will just try to manually download from Zoom all my recordings and upload it to YouTube. Yeah. Because this is a very laborious process.</p><p>I got a click, click, click YouTube, um, isn&#8217;t super user friendly. Uh, and it just did it. And then I was like, actually, you know, even the download from Zoom part, I should also. Put into Claude Cowork, and then I did it right. Here&#8217;s a bunch of, and it starts compacting here, and it, and it, it starts to even be able to do things like look through the individual frames of the video to name the video so I can upload it auto automatically.</p><p>Oh, that is, and this replaces my job as a YouTuber. We will forever appreciate your creative Yes. You know, and so that&#8217;s great. Uh, but then by the way, it compacts and makes, makes like a new thing, right? So I, I don&#8217;t, I don&#8217;t have the initial, initial thing, but then I asked it to make its own skills so that it, so that something that&#8217;s repetitive and one-off and human guided becomes more automated and I can use the skills independently and reuse them.</p><p>Uh, and it obviously you can write skills and that goes into context and skills at the bottom here, which is, which is so nice. Um, so I have all these skills that, that I now sort of do on a weekly basis. Uh, I know you&#8217;ve released scheduled Coworks, which I haven&#8217;t done yet, but</p><p><strong>Felix</strong>: course I should try them. I, I think this is like so wonderful and fun for me to see because.</p><p>One thing that is very fun for me about skills in particular is that they&#8217;re so easy to make. Like anyone can make a skill, like a text message, could be a skill, and they can be so hyper personalized to you. And this is like sort of the subtraction layer, right? Like, um, I, I&#8217;m just guessing, but I assume, heck, you are very good at your job.</p><p>You&#8217;re probably given this thing some guidance about how to do it, right? I,</p><p><strong>swyx</strong>: I just said, wrap everything up into, into a skill, right?</p><p><strong>Felix</strong>: Yeah.</p><p><strong>swyx</strong>: And then, uh, and then I was like, actually, sometimes I might need to break, uh, things apart because some parts fail or some parts might be needed in individually. So I told it to split one skill into three skills.</p><p>So it&#8217;s like a skill splitting thing, and then there&#8217;s like a parent skill that just orchestrates all of them if I want to use that. You know, like, um, I think that&#8217;s, that&#8217;s like really good. Uh, and, and, uh, there&#8217;s, there&#8217;s one more part, which is the, uh, Google Chrome thing that I told you about.</p><p><strong>Felix</strong>: Yeah.</p><p><strong>swyx</strong>: Where I&#8217;m like, okay, you know, what&#8217;s better than uploading, using Claude Coworks to YouTube?</p><p>Like actually. Looking at the docs to like programmatically upload to YouTube and then putting that in a skill. And I&#8217;ve never done that before. I don&#8217;t want to deal with Google Cloud. Yeah. So Claude Cowork does it for me.</p><p><strong>Felix</strong>: That is really cool.</p><p><strong>swyx</strong>: So, so I, I just, I don&#8217;t care. I just, like, I do a thing. I don&#8217;t, it doesn&#8217;t really matter.</p><p><strong>Felix</strong>: That is really cool. And then you&#8217;ve, I assume paired the skill just with the script that it&#8217;s built.</p><p><strong>swyx</strong>: Yeah, no, I just update, update the skills.</p><p><strong>Felix</strong>: Oh, that is beautiful. Yeah. That&#8217;s wonderful.</p><p><strong>swyx</strong>: It&#8217;s kind of like a skill, like, uh, uh, basically I think like the way that people ease into Claude Cowork is like take a knowledge work task that you would normally be clicking around for and then, uh, try to turn, turn that, and then you do the, okay, well what if you went further?</p><p>Okay. And then when, if you went further, when, if you, and it sort of expand the scope of cowork as you gain trust with it and, and also teach it how to replace you.</p><p><strong>Felix</strong>: Yeah. It&#8217;s like a little bit like playing factorial, but for your own life. Uh, like you say, you start really small.</p><p><strong>swyx</strong>: Yeah.</p><p><strong>Felix</strong>: You start automating something really tiny and like.</p><p>Once it clicks, you keep adding onto this like automation empire. Just like make your life easier and easier. My favorite skill has been, um, every single morning Kohlberg starts looking at my calendar and make sure that there&#8217;s conflicts because people tend to schedule a lot of meetings, sometimes last minute, sometimes miss it soft and painful.</p><p>And a lot of products have existed like that A lot. I&#8217;ve written in the custom prompt there. I haven&#8217;t made it a skill, um, honestly should.</p><p><strong>swyx</strong>: Yeah.</p><p><strong>Felix</strong>: But I&#8217;ve given it like pretty clear instructions about okay, here are some people, if they book over other meetings, I&#8217;m probably gonna go to their meeting. Like if Dario schedules a meeting.</p><p><strong>swyx</strong>: Right.</p><p><strong>Felix</strong>: Not try to reschedule down. Right. Um, and I think there&#8217;s some other rules in there about like what kind of meetings I care more about what kind of meetings I care less about. What is okay to like, maybe pun like when I want to be, when I want to be working, when I don&#8217;t want to be working. And it&#8217;s those really small things that I can think kind of click with people.</p><p>Right. When we launch co-work, I think one of the US races that went most viral on Twitter. X was clean up your desktop, which is stuff, because silly, that&#8217;s such a smart thing, right? Like you don&#8217;t need to model to clean up your desktop. Not really. Um,</p><p><strong>swyx</strong>: like this, like clean up my desktop.</p><p><strong>Felix</strong>: Yeah, exactly. Yeah.</p><p><strong>swyx</strong>: I need to, I need to choose my desktop, right? I guess give it access to my desktop.</p><p><strong>Felix</strong>: Yeah.</p><p><strong>swyx</strong>: Okay. Uh, okay. This is very scary. Oh, we&#8217;ll do it.</p><p><strong>Alessio</strong>: I did, I did it with my downloads folder. It was like, you have so many term sheets and there&#8217;s like eight copies of your rental lease for your office. I was like, all right.</p><p>Like, don&#8217;t yell at me.</p><p><strong>Felix</strong>: It&#8217;s like, it&#8217;s not such a small task. And then like, I, I would never go out there and normally otherwise and tell people I&#8217;ve pulled a product. It can organize your folder. Right. Um, because it feels small. But I think to your point like,</p><p><strong>swyx</strong>: oh, here&#8217;s, here&#8217;s the, here&#8217;s the ask user questions.</p><p><strong>Felix</strong>: Yeah.</p><p><strong>swyx</strong>: Uh,</p><p><strong>Felix</strong>: beautiful. Right. Elite obvious junk. You probably shouldn&#8217;t click that.</p><p><strong>Alessio</strong>: No.</p><p><strong>Felix</strong>: If he&#8217;s not done right.</p><p><strong>swyx</strong>: As long as it&#8217;s reversible, I don&#8217;t</p><p><strong>Alessio</strong>: make up blend to,</p><p><strong>swyx</strong>: yeah. Uh, yeah. No, I, I have a, I have a typical, everything is super messy folder. So, yes. I think this, this is super helpful. So this is a pretty simple task.</p><p>Mm-hmm. But I&#8217;ve, okay, here it is. Right. Here&#8217;s the progress. I don&#8217;t see this in, that&#8217;s why I&#8217;m like, this gotta be something different than, uh, than Claude Code, because I&#8217;m like, we</p><p><strong>Felix</strong>: do. Yeah. That&#8217;s, we do system prompt that. We&#8217;re like, all right. We want you to think about like, this task Yeah. Methodology.</p><p>Yeah.</p><p><strong>swyx</strong>: And then I can, I can, I can do like little suggestions for, for, for these things. It&#8217;s beautiful. Look at this. I, I can, I can like say like, oh, don&#8217;t do that. Don&#8217;t do this. It&#8217;s amazing.</p><p><strong>Felix</strong>: I&#8217;m so happy. You like it. Um, I mean, the other way around, like we&#8217;re part of the Clark core team, if you would like this in Clark COVID.</p><p><strong>swyx</strong>: Yeah. Yeah. Yeah. Uh, so, so yeah, I mean, uh, this is really good. Obviously I, I&#8217;m like kind of raving about it. Uh, you know, I have other things like sign up for pg e so if you can do phone calls for me, that&#8217;d be great. Um, I, I do, people</p><p><strong>Felix</strong>: have done that. Obviously you can&#8217;t do that natively, but people have done that with like, various other providers.</p><p><strong>swyx</strong>: Yeah. Uh, and then this is like signing up for the Figma MCP. Um, I, I really am trying to do like everything, um, data analysis as well. I do think, um, oh, design to code, uh, very, very good. Right? So like, here&#8217;s a Figma file, take it. And then this is where like a lot of other tasks is like knowledge work, like replace my manual clicking, but this is no, I would normally use Claude Code or uh, Claude Code for this, but because I perceive that you have better Chrome integration</p><p><strong>Felix</strong>: mm-hmm.</p><p><strong>swyx</strong>: I, I think you can actually do a better job of this. And I, this, this is one shot at my, uh, conference website.</p><p><strong>Felix</strong>: That&#8217;s pretty cool. Like at some point I would love to like, hear how you feel about code. In the desktop apps, which is like I never use, which is the, the same team. Same team.</p><p><strong>swyx</strong>: So I use the call code in terminal, which I, I perceive to be the default way of cloud coding.</p><p><strong>Felix</strong>: So one thing this has,</p><p><strong>swyx</strong>: sorry, I&#8217;m just like, I&#8217;m not</p><p><strong>Felix</strong>: here, I&#8217;m not here. All products. Can I talk about other stuff? Like I, I&#8217;m not sure if people out there wanna like hear me advertise my stuff for like an hour. Please do that. Um, this thing is like a builtin browser, which is a thing a lot of products have said.</p><p>Yeah, it&#8217;s a builtin browser. And I think giving cloud eyes into like what you&#8217;re actually working on makes it so much more effective. And that&#8217;s probably what you&#8217;ve seen in cohort because it can see Chrome, it can like debug the dom, it can like see things. Um, that does make it more powerful.</p><p><strong>swyx</strong>: Yeah. So, so I think, uh, my mental model was kind broken.</p><p>&#8216;cause I only use this cowork because I thought it had a, a browser thing in it. But I understand that the Claude Code app. The app version of Claude Code does have a built-in browser. I&#8217;ve seen, I&#8217;ve seen this preview thing.</p><p><strong>Felix</strong>: Yeah.</p><p><strong>swyx</strong>: I just, I&#8217;ve never used it.</p><p><strong>Felix</strong>: But in the end, in the end, you sort of have it by hard.</p><p>Yeah. You basically get the same thing. Right? Like the, the, the additional skill that you&#8217;re describing is chart is better if we can see what it&#8217;s working on. Right. That&#8217;s, that&#8217;s sort of like the summary here and like whether it&#8217;s using your Chrome</p><p><strong>swyx</strong>: Yeah.</p><p><strong>Felix</strong>: Or it&#8217;s just like making up its own little like browser.</p><p>It doesn&#8217;t really make a big difference because either way it&#8217;s gonna see what it&#8217;s working on and that just makes it much better. And then you don&#8217;t have to run QA for your cloud.</p><p><strong>swyx</strong>: Why doesn&#8217;t it pick up my existing Claude Code sessions? &#8216;cause I, I mean, obviously I&#8217;ve used Claude Code, but Excellent question.</p><p>Um, don&#8217;t have a good answer other than like, we&#8217;re honest. Just haven&#8217;t Yeah. This is what the Open AI team does. Okay. Uh, cool. I I I don&#8217;t have other, like, I, I just, I, I do wanna expand people&#8217;s minds and also maybe show people if they haven&#8217;t really done it, but like, I, I think it&#8217;s very interesting how I sometimes use this more than I use, I mean, I use dia, right?</p><p>Yeah. Um, I, and I use, uh, I&#8217;ve used like all the other agentic browsers and philanthropic didn&#8217;t have to build an agentic browser because you just had Claude Cowork and that&#8217;s enough.</p><p><strong>Felix</strong>: Yeah. I also think like maybe integrating with number of excellent browsers out there, it&#8217;s like currently on my personal priority list, a little higher than like trying to rebuild a browser from scratch.</p><p>Yeah. You know, never say never, but I think going back to this idea of like, we wanna plug this into an entire existing workflow, I think our goal is actually to not replace any of the applications we have in your computer. But instead of like, work really well within a new workflow,</p><p><strong>Alessio</strong>: make the new one. Yeah.</p><p>Are, it seems that nowadays, especially on the browser, most of the innovation is like user ergonomics. It&#8217;s not really like the underlying browser engine. So I feel like to call it, it doesn&#8217;t really matter if it&#8217;s like the, uh, or Chrome or Alice, whatever.</p><p><strong>Felix</strong>: Yeah. We wanna, we wanna meet you wherever you are.</p><p>Which is like, like obviously I would say that, but it&#8217;s also just generally true because I don&#8217;t wanna shrink my potential user base artificially by saying, okay, like, I&#8217;m gonna start building for the people who are willing to switch browsers.</p><p><strong>Alessio</strong>: Right.</p><p><strong>Felix</strong>: That&#8217;s such a, like, you know, like many lawsuits have been filed over who gets to review the browser and like a lot of money has switched hands over the question of like, which browser is default and which search engine is default within the browser.</p><p>Um, I just wanna build for, yeah, I wanna build for swyx essentially. Like, I wanna, I wanna, I wanna build for people who have a number of annoying tasks that they feel like. Maybe clock could do it. Could do it for them.</p><p><strong>Alessio</strong>: Yeah. What do you think about skills portability? I think there&#8217;s been one thing, I use another thing called zo, which is kinda like a cloud computer plus agent.</p><p>And I have a skill to add visitors to the office. Yeah. So whenever somebody has to come in after hours, they need to check in downstairs. Um, but I wanna like text the thing, so it doesn&#8217;t really work in, in cowork, but now that skill is in the zone harness and it&#8217;s not in my cowork thing. And then if I make a change, it&#8217;s gotta, I gotta sync them.</p><p>How do you see that going? Like I see memory as like. Cloud personal, kinda like, I don&#8217;t necessarily want my memories to be cross thing.</p><p><strong>Felix</strong>: Yeah.</p><p><strong>Alessio</strong>: But I do want my skills to be cross agent that I use. I think with MTPs, people do the same thing. It&#8217;s like, oh, Mt. P Gateway. Mt P registry. I don&#8217;t really know if that&#8217;s like a business.</p><p>So I&#8217;m curious like if you&#8217;ve had any thoughts in the area.</p><p><strong>Felix</strong>: I think for me, this is sort of where I go back to the really basic primitives for our skills are file-based instead of like this complicated thing that exists inside a place somewhere that is like super proprietary. I&#8217;m really leaning into the idea of like, it&#8217;s all just files and vultures, and that makes it very portable on its own.</p><p>Right. We do have skills as part of this container format, which was just called plugins.</p><p><strong>Alessio</strong>: Mm-hmm.</p><p><strong>Felix</strong>: And plugins are available both for Claude Code and Claude Code work the same format, and you can install plugins. This works in cowork today. You can basically say, I&#8217;m gonna add a whole, like just a GitHub repo as a.</p><p>Skills marketplace or like a plugin marketplace. And that&#8217;s how we&#8217;re doing portability. I think we have a lot of room left to grow in. How do we make it easy for people to know that they can write skills? How do we make it easy for them to just like, share a skill with you? Because obviously all the words I just said, right?</p><p>Like I&#8217;m losing most of the knowledge worker base out there, right. And start by saying, oh, you can connect to GitHub repo. It&#8217;s not exactly how most people will end up working in like a general knowledge worker space. Um, but I think there&#8217;s something there. And another thing that&#8217;s there that I think has not really been properly explored is the, the, the combination of which part of the skill is very portable and then which part of the skill is like very personal to you.</p><p>Right. And I think that&#8217;s something we haven&#8217;t really solved as an industry. Hmm.</p><p><strong>swyx</strong>: It&#8217;s like, which, how you wanna introduce more structure to the skill or have always have like. Public skill, private skill, you know, pair. Yeah, yeah. Kind of. I think there&#8217;s</p><p><strong>Felix</strong>: like a, like the easiest way to do this, which is we do like use string interpolation or something.</p><p>Right, right. Yeah, yeah. Insert username here, insert like phone number, insert, like known folder, locations, that kind of stuff. Um, that&#8217;s probably clunky. That&#8217;s why we haven&#8217;t built it. Um, but I do think someone is going to come up with like an interesting way to keep everything we like about skills. The portability is just a file, it&#8217;s just marked down.</p><p>It&#8217;s just text, honestly. Right. Like a text file words. The complete lack of structure, which means you don&#8217;t need any kind of tutorial to write a skill. Just like explain it to Claude the way he would explain it to me and Claude will probably get it before I work. Mm-hmm. Right? You&#8217;re just like, for booking a flight, tell Claude how to book a flight the same way we tell him somewhere.</p><p>I just started working here today. But combine that with a very like, personal thing. Um, maybe we&#8217;ll stick with a booking a flight example. I don&#8217;t actually think. AI should be booking flights. I think the tools we have is yes.</p><p><strong>swyx</strong>: Yeah. Finally, somebody says it. It&#8217;s the default demo that everyone&#8217;s making.</p><p><strong>Felix</strong>: I&#8217;m</p><p><strong>swyx</strong>: like, I even against like booking demos, it is not a good showcase.</p><p><strong>Felix</strong>: Yeah. I&#8217;m like, I just wanna book my flight myself. But, um, I think there&#8217;s a lot of things that have a personal and a non-personal component and that&#8217;s maybe why people reach for flight booking because some things are very universal. Yeah. Super flight is usually better, right? Like few people try to book the most expensive flight.</p><p>And then some things are quite personal about like what times you prefer, which seat you prefer, which airports you prefer. Combining that and like a skill format that is actually portable, compatible, easy to understand for people. I think that would be very exciting. We just haven&#8217;t figured it out yet.</p><p><strong>Alessio</strong>: Yeah, I think the text part every, I think everybody by now has some sort of like cloud file thing. Either Dropbox, Google Drive, whatever. So it feels like in a way it should basically like sim link. My skills into all my agent harnesses. Yeah. Just keep those ing like we have internally this like valuable tokens repo, which is like all the commands sub agents.</p><p>It&#8217;s good. Uh, and then I build like a TUI where you can start it and be like, you know, install this command and this three sub agents into this agent in this folder and just copy paste this. It doesn&#8217;t do anything. It literally cp the file into that. But I feel like there should be something similar where like whenever I go into a new thing, it&#8217;s like, hey, here&#8217;s like the link to exactly the cloud folder and just bring down these skills into this.</p><p>Yeah. Like today it doesn&#8217;t quite work like that. Like if I install a new agent, I cannot, I have to like copy paste all the skills and I don&#8217;t even know where they are.</p><p><strong>Felix</strong>: Yeah.</p><p><strong>Alessio</strong>: That&#8217;s like the big problem. It&#8217;s like where do I find them?</p><p><strong>Felix</strong>: Yeah.</p><p><strong>Alessio</strong>: Um, so I&#8217;m curious like in the future like that, that almost feels like my personal productivity thing will be my skills.</p><p><strong>Felix</strong>: Yeah.</p><p><strong>Alessio</strong>: Is not really the product that I use. Everybody has access to the same product. But today there&#8217;s, that just looks like copy pasting ME files, I</p><p><strong>Felix</strong>: think so many things I, I really like thinking about agents and LLMs just as like another coworker. So many attempts have made to build documentation companies that are like, oh, we&#8217;re gonna solve oil documentation problems.</p><p>Um, I myself, like spend a little bit of time working in notion, right? I&#8217;m like deeply familiar with the concept of let&#8217;s get everyone on the same page. Mm-hmm. Right? And what you&#8217;re basically saying here is you want all your agents to be on the same page about your preferences, about the skills, about the way they ought to work and like how they ought to execute.</p><p>And I&#8217;m not sure what the right thing is going to be if it&#8217;s going to be some, some company that can say, all right, we&#8217;re as an independent body, we&#8217;re not trying to like, push into any particular product. It&#8217;s our job to be like the skill authority, and we provide, I don&#8217;t know, we&#8217;re gonna be the Dropbox of skills and we can just sim link us into all the products we want to use.</p><p>I&#8217;m not sure that&#8217;s gonna be viable business, but as, as an idea, it would be cool.</p><p><strong>Alessio</strong>: Yeah. Yeah. I think so many things are just going away as businesses. It&#8217;s like, how am I supposed to do it? I&#8217;m not even asking somebody to make a product about it. Like yeah. I wanna personally know. And there&#8217;s things like you said, it&#8217;s like you almost wanna skill and then interpolate it between personal and work.</p><p>So if I&#8217;m booking a fly for work, it&#8217;s different than I&#8217;m booking a flight personally.</p><p><strong>Felix</strong>: Yeah.</p><p><strong>Alessio</strong>: In some ways, yeah. But like a lot of the scaffolding is the same, you know? Cool.</p><p><strong>Felix</strong>: I mean, as an engineer I will tell you like, you know, technic a person to technic a person. I will just be like siblings.</p><p><strong>Alessio</strong>: Well that&#8217;s what, that&#8217;s what I do.</p><p>We call that MD and agents that MD&#8217;s just the same how sim length. And so it is like, that works, but it feels like, yeah, I don&#8217;t know. Maybe</p><p><strong>Felix</strong>: you can always go one, you can always tell cowork problem and then cowork will solve it for you. Just make the siblings. That&#8217;s like one way to do it.</p><p><strong>Alessio</strong>: That&#8217;s true.</p><p>That&#8217;s true. All right. Everything is called cowork.</p><p><strong>Felix</strong>: Uh, potentially spicy. Question for both of you.</p><p><strong>swyx</strong>: Uh, which of these industries will go away?</p><p><strong>Alessio</strong>: Okay, so what <strong>Felix</strong> was saying before is interesting. There&#8217;s busy like. The short term pressure of like, we need to turn these tokens into valuable things, which is I should build the last mile product that harness the model.</p><p>And then there&#8217;s the question of like, long term, which ones are gonna still be valuable? And I think you&#8217;re kind of seeing this today with like, uh, you know, the coding space in a way is kind of like everybody&#8217;s moving up and up in stack because you need more than just turning tokens into code. I think search, like enterprise search is kind of saying the same thing.</p><p>Like with G Clean and like all these different companies is like, at the end of the day, if Cowork is the one doing all the work, the search itself is like such a small part that like, I don&#8217;t know if I&#8217;m really gonna pay that much money just to do search. It&#8217;s almost like everything is like a cowork vertical.</p><p>So like how much can cowork first party support?</p><p><strong>swyx</strong>: Mm-hmm.</p><p><strong>Alessio</strong>: And how much can it not? I think for a lot of these things, the planning thing that you were showing do Which one? The planning. The planning.</p><p><strong>swyx</strong>: Okay. Yeah. Yeah.</p><p><strong>Alessio</strong>: That&#8217;s one thing where like most of the value that these agents provide is like they&#8217;re better at planning for specific tasks.</p><p>Yeah. And have better tools for it.</p><p><strong>swyx</strong>: Yeah.</p><p><strong>Alessio</strong>: But I think the models are now moving in that direction and they have the right harnesses and they&#8217;re on your computer. So for me it&#8217;s almost like if for the end customer trusts your startup to be the provider of that task result, then I think that works. This is, uh, something that, this is a short</p><p><strong>swyx</strong>: spike that we&#8217;re, we&#8217;re working on.</p><p>Uh, yeah.</p><p><strong>Felix</strong>: I think, look, I&#8217;ll, I&#8217;ll, I&#8217;ll tell you this, like I don&#8217;t think I&#8217;m the best person to like actually estimate which industry is going to be hit the hardest. But I do think that at philanthropic as a group of people, we&#8217;re deeply worried about the impact. That the tools are going to have on the labor market, especially for like junior employees that, because I think, I think it&#8217;s only honest to say that when we talk about automating a lot away, a lot of the work that we personally find annoying that we maybe think&#8217;s not the best use of our time.</p><p>In a lot of industries, that kind of work would&#8217;ve been given to a junior entry level employee. Yeah. Right. And I think it&#8217;s, it&#8217;s only, it&#8217;s only right to be really worried about that and like worry what that&#8217;s going to do in particular to people like enter the shop market.</p><p><strong>Alessio</strong>: Mm-hmm. I have a solution for that.</p><p>Which you make them, you create simulative jobs for them.</p><p><strong>Felix</strong>: Okay.</p><p><strong>Alessio</strong>: So this is, this is like half joke, half true. So if you think about software engineering, when you&#8217;re like a junior engineer, you work like 1, 2, 3 years. And in those three years there&#8217;s like maybe like a handful of moments where like you really learn something.</p><p>And then a bunch of other days where like you&#8217;re not really progressing.</p><p><strong>Felix</strong>: Yeah.</p><p><strong>Alessio</strong>: I think now we can use AI and these models to actually like shortcut these careers and almost like simulate the early years of your work and like just make them like super dense and like these learnings, it&#8217;s like, hey, we&#8217;re working on this feature, which is like a distributed system and you need to learn this thing that might take three months at a company.</p><p>And so you take three months here, it&#8217;s like we&#8217;re just simulating the whole thing. It&#8217;s actually not a real thing. And in one week we kind of speed run through the whole thing and you kind of learn your lesson from there. And we kind of repeat that in like one year. You basically get like three years worth of like projects and experience.</p><p>Yeah. I think it&#8217;s harder for like things like sales or for things like, you know, marketing because you don&#8217;t really have a way to get the feedback loop. But I think a lot of it, it sounds kind of silly, it&#8217;s like you&#8217;re making the new effect job, but it&#8217;s almost like you go to college, right? People pay to learn how to do it, and this might feel similar where it&#8217;s like, hey, we have the.</p><p>Jane Street Simulator is like, you wanna come work at Jane Street? We&#8217;ll just put you in the simulator for like three months.</p><p><strong>Felix</strong>: Wow.</p><p><strong>Alessio</strong>: And you&#8217;ll come out of it. It&#8217;s like, you know, I&#8217;m ready.</p><p><strong>Felix</strong>: So there, there is an aspect here. I&#8217;m not an expert enough to like actually know what, what is going to happen to marketing or legal or finance, right?</p><p>Like, I don&#8217;t work in those jobs and I, I don&#8217;t think I should talk about them, but I am an engineer and I think I have a pretty good idea of what engineering is like. And I think one thing we&#8217;re sort of seeing is that as a company and also as, as the public, we&#8217;re like deeply worried about entry level, but we&#8217;re also seeing more senior engineers accelerate it.</p><p>If like they&#8217;re more productive. They, they actually increase the value they provide. And the thing that I&#8217;m thinking about a lot is the fact that even before all of this happened, um, I&#8217;ve always had a lot of respect for the University of Waterloo and the, the new grads that have joined my teams as from coming from the University of Waterloo always felt like.</p><p>More ready than new grads will like literally spend their entire time at the university regardless of how good, but never actually had to work inside an environment where you have to ship things that eventually will be used by users. And I&#8217;m, I&#8217;m, I&#8217;m German. I like initially went to German University and I think the, the, the like information systems programs, there tend to be very theoretical, right?</p><p>Like I often give people the example of like trying to become a doctor, but you first have to do four years of biology and as a result when you get a new grad, you sort of have to teach them what it&#8217;s like to actually build products and to work in a company and like work with other people. And like some people will have different opinion and like, how do you do all of those things?</p><p>And the University of Ulu, it seems like they just. Spend half of their time. I dunno if it&#8217;s true, but I think it&#8217;s, it&#8217;s a year, right? They spend so much time,</p><p><strong>swyx</strong>: part of your job, uh, a cu a curriculum to do spend a year in internships.</p><p><strong>Felix</strong>: Yeah. They just like go from company to company. They show up on your team as like a junior engineer who spend like 20 companies.</p><p>Not really, but like, it seems like a lot of my new grads have also briefly worked at Apple, Google, Tesla. Yes. And uh, there&#8217;s a common meme where they like collect all these logos, like infinity stones, but, and they always put it on LinkedIn and it is very unclear that they&#8217;re an intern. Like Yeah, yeah, exactly.</p><p>But it does actually make them so much better compared to other new grads. And I wonder if that&#8217;s a useful model maybe for the future when we also have to like, crunch down the amount of time you have as a junior employee. &#8216;cause the value you have as a junior employee is going to like, be impacted.</p><p><strong>swyx</strong>: My sort of pro young people take is that they&#8217;re, you&#8217;re more, uh, you have higher neuroplasticity, you can learn more, you have less preexisting biases.</p><p>And, uh, what I is assuming is true for you, what OpenAI often says is that. Actually it&#8217;s the, the younger, like fresh grad engineers that use Codex or their coding stuff, uh, more innovatively than the, uh, experienced engineers who have a set and preferred way of doing things.</p><p><strong>Felix</strong>: Yeah. As I talk to people, I, I someone experience.</p><p><strong>swyx</strong>: Yeah. So maybe you&#8217;re more AI native. Yeah. And therefore you&#8217;re, you, you get cut. But like, I think the problem is you don&#8217;t need that many of them.</p><p><strong>Felix</strong>: I mean, philanthropic is on the record as saying we do believe that the impact on the market is going to be sizable and we do not think that people overall are ready.</p><p>Right. And we do actually think we should probably talk about it as a society much more. Yeah. I&#8217;m not sure that I&#8217;m like the individual that can add like anything useful there. But I think as societies with economists and, and governments that need to wrestle those questions in a way that is probably more meaningful than me wrestling with them, we&#8217;re probably not doing good enough.</p><p><strong>swyx</strong>: Well, we, we&#8217;ll try to educate and then I think also just releasing frequently as, as, as you guys do, or probably maybe too frequently</p><p><strong>Felix</strong>: Yeah.</p><p><strong>swyx</strong>: Uh, is helping people to adjust over time. Right. Rather than one big bang thing. There&#8217;s like sort of this gradual takeoff that people are living through that we</p><p><strong>Felix</strong>: Yeah.</p><p><strong>swyx</strong>: Waking people up. Right.</p><p><strong>Felix</strong>: Yeah. And I, but I think a lot of us like wondering at what point do we actually have full takeoff, right? Like at what point is there, we&#8217;re all sort of expecting this like big bang moment where things will accelerate so quickly that it becomes a self-reinforcing loop.</p><p><strong>swyx</strong>: Mm-hmm.</p><p><strong>Felix</strong>: And at that point, it&#8217;s sort of like off to the races and there will be no more like slowly catching up.</p><p>You notice just have cloud being so good at everything.</p><p><strong>swyx</strong>: Yeah. It&#8217;s when cowork is training models, it&#8217;s when it&#8217;s looking at tensor board and Exactly. Weight and biases and training things.</p><p><strong>Felix</strong>: I like we can all debate like how many years it&#8217;s away, right? Like some people make a better route, like maybe it&#8217;s 10 years away, maybe it&#8217;s a year away.</p><p>Um, I&#8217;m not entirely sure where, where I come on this time, but I&#8217;m not totally sure that ultimately it matters all that much, whether or not it happens in four or five years. If we have a decent one, certainly that&#8217;s going to happen. It&#8217;s probably something we should wrestle with.</p><p><strong>swyx</strong>: I wanted to talk, so by the way, the, the scheduled task complete, uh, the, the, there&#8217;s the clean my desktop task complete and it did it organized by file type, which, okay.</p><p>But, you know, I was trying to get it to do more sort of thematic, like read the file, understand what it&#8217;s about, group by, uh, the, the topic rather than the file type. But</p><p><strong>Felix</strong>: I mean, you can just follow up and have it do that. Oh yeah. Here, like it did, it is proposing That&#8217;s right.</p><p><strong>swyx</strong>: Yeah. So it&#8217;s, it&#8217;s got some like topical things, but uh, yeah, I could probably do better.</p><p>Like, yeah, so like I probably need to give it a skill to read video files so that it understands here&#8217;s how I like to,</p><p><strong>Felix</strong>: honestly though, like, um, I see that you&#8217;re using Opus 4.6, right? Like my recommendation for people is increasingly don&#8217;t worry about it anymore. Just like tell it what you want it to do.</p><p><strong>swyx</strong>: Yeah.</p><p><strong>Felix</strong>: And it&#8217;s probably gonna figure out a way to do it. It might not be the way that you like necessarily or the way that you&#8217;ve gone about it.</p><p><strong>swyx</strong>: Videos, deeper,</p><p><strong>Alessio</strong>: lower outsourcing, organizing all of this. So let&#8217;s fight. Yeah. Yeah.</p><p><strong>Felix</strong>: I&#8217;m honestly like, so curious what cloud is gonna come up with.</p><p><strong>swyx</strong>: I&#8217;ll kick that off.</p><p>I wanted to also just talk about the, the overall, uh, you know, you talk about data analysis, you talk about like, uh, your, your personal finances. You also said, uh, which by the way for us is very timely tax season, right? Like Yeah. Use cloud core for tax season. It is not responsible for any mistakes, but might as well, right?</p><p>Like it&#8217;s, it&#8217;s free knowledge work for you. Yeah. Uh, so I just like, I think cloud for finance is a big deal. Um, and this is definitely like in that mix. I wonder, is it like, do you, is it a separate team? Do you talk to them? How important is it? Right. Like, because you can also natively output Excel files now.</p><p><strong>Felix</strong>: Yeah.</p><p><strong>swyx</strong>: Just</p><p><strong>Felix</strong>: talk about the</p><p><strong>swyx</strong>: finance effort</p><p><strong>Felix</strong>: grow. Yeah. We care about the verticals quite a bit. So we do have a dedicated verticals team. We have a dedicated enterprise team,</p><p><strong>swyx</strong>: and those is business engineering, not sales.</p><p><strong>Felix</strong>: It&#8217;s engineering. Yeah, yeah, yeah. It&#8217;s engineering. So we do have people who sort of come to work every single day and they, they ask themselves, how do we make co-work extremely effective for people in those specific industries?</p><p>How do we make it easier for them to understand, how do we make it easier for them to plug into this and like sort of get the same value out of it that software engineers get? I think it&#8217;s no real surprise that software engineers ended up being sort of at the forefront of the entire AI moment because so much of it is this like Rub Goldberg machine nest where like we&#8217;re already used to automating things, right?</p><p>Like it&#8217;s part of our job. Yeah. So we care about it quite a bit. I think it also like really matches what we see. Cloud being very good and as a model, I think it provides tremendous amount of value to those customers in particular because. We can do so much with the amount of data they have. Those are like data heavy industries.</p><p>Their industries for correctness matters quite a bit.</p><p><strong>swyx</strong>: So for us of, I&#8217;ve used it to analyze my business, I just can&#8217;t show it. So</p><p><strong>Felix</strong>: it&#8217;s two sense. I had a similar question about, about taxes. Like, I did tweet, I did tweet about the fact, I did tweet about, oh, COVID is doing my taxes. This is honestly incredible.</p><p>And, um, it&#8217;s like annoying. He is like, this is so cool, but I&#8217;m not gonna, Twitter is maybe not the audience that needs to like see my tax return.</p><p><strong>swyx</strong>: Yeah. That way. Here, here it is. It&#8217;s it&#8217;s reading on the videos, so it&#8217;s like Yeah, it&#8217;s getting more, yeah.</p><p><strong>Felix</strong>: How did it actually do it? I&#8217;m actually curious.</p><p><strong>swyx</strong>: Oh, usually it just like, takes a screenshot and then it reads the screenshot vi by vision.</p><p>So this is what I do for my, my Zoom upload thing, right? Because I, I have paper club sessions that I need to upload to Zoom and I want it to automatically. Uh, title them and do show notes and everything. So it just take screenshots and try to try its best. Yeah. It wouldn&#8217;t probably benefit from transcribing, which it&#8217;s doing by, it&#8217;s operating by Pure Vision now, but it&#8217;s good enough.</p><p><strong>Felix</strong>: Yeah.</p><p><strong>swyx</strong>: And then I, uh, I do have to call, uh, out to Nano Banana to do images. So unless you guys do images for me, uh, I have to call other people your images.</p><p><strong>Felix</strong>: We&#8217;re aware. We&#8217;re aware. It&#8217;s, it&#8217;s just like so fun for me because like, this is the thing that I&#8217;m increasingly doing, like increasingly curious about cloud&#8217;s, creativity and like figuring out what is great Claude&#8217;s approach is like some problem.</p><p><strong>swyx</strong>: Yeah. Vision for everything is, is like the, the superpower, right? Like, you know, and computer use, you guys were the first to do computer use, right. And when it was launched, I was very unimpressed. I was like, it&#8217;s slow, it&#8217;s unreliable, it&#8217;s wild. How much better? &#8216;cause it is one year ago.</p><p><strong>Felix</strong>: Yeah, I know. Like it was barely usable.</p><p>Yeah. I, I remember it was very usable, but is it wild how much better things have gotten? Yeah.</p><p><strong>swyx</strong>: Yeah.</p><p><strong>Felix</strong>: Over that one year</p><p><strong>swyx</strong>: we went to the anthropic office because you, uh, for the launch event for computer use. Like there was like this hackathon. Yeah. And like nobody hack on computer use.</p><p><strong>Felix</strong>: But I did see, I, I I don&#8217;t know if you&#8217;re okay with me saying that, but I did see briefly that you do have like a, like an automate Mac, SMCB server installed.</p><p>Right. Uhhuh, you use that ever.</p><p><strong>swyx</strong>: What? Sorry? Which one? Where?</p><p><strong>Felix</strong>: Um, if you go to your settings.</p><p><strong>swyx</strong>: Oh, settings. Okay. Uh, where, sorry, this one?</p><p><strong>Felix</strong>: Yeah.</p><p><strong>swyx</strong>: Yeah.</p><p><strong>Felix</strong>: Um, I noticed that in your connectors,</p><p><strong>swyx</strong>: Uhhuh. Uh, I probably said it at one time, but I don&#8217;t use it actively.</p><p><strong>Felix</strong>: Oh, okay. The</p><p><strong>swyx</strong>: a max automated. Yeah. Yeah. So, so I, yeah, this one I really wanted to like, just automate everything in my thing.</p><p>I didn&#8217;t find, I didn&#8217;t find it super reliable.</p><p><strong>Felix</strong>: Okay.</p><p><strong>swyx</strong>: Why?</p><p><strong>Felix</strong>: No, no, no question at all.</p><p><strong>swyx</strong>: Cloud is much better writing Apple Script and executing its own Apple Script than relying on these, uh, third party tools.</p><p><strong>Felix</strong>: Yeah.</p><p><strong>swyx</strong>: Uh, so I&#8217;ve increased, I, I initially installed Im CP and like all these other fcps that people built, and, but now I don&#8217;t use any of them anymore.</p><p>Like just, just let cloud write its own thing.</p><p><strong>Felix</strong>: Yeah. It&#8217;s</p><p><strong>swyx</strong>: gonna be more custom made. We keep going up the stack,</p><p><strong>Felix</strong>: but if using computer uses like a fairly interesting area to me, and it&#8217;s like also interesting in the sense that I don&#8217;t think we&#8217;re far away from, I don&#8217;t think we&#8217;re far away from clapping, very effective, but like using your computer and not just it&#8217;s theoretical computer.</p><p><strong>Alessio</strong>: Mm-hmm. What&#8217;s the relationship between the user and the computer? Like, uh, there, there were some tweets about how huge some of the VMs, the Claude Cowork creates ours, like 12, 15 gigabytes and people complain. Yeah. But at some point it&#8217;s like, if you&#8217;re using the computer, you&#8217;re taking action on, it&#8217;s, it&#8217;s just your computer.</p><p>And I&#8217;m just looking at it, you know, it&#8217;s like, I, I think that&#8217;s why people like the idea of like the Mac mini and the open claw or whatever on it because it&#8217;s like, it got its own home. You know? It is doing its thing, I&#8217;m doing my thing. I think there&#8217;s some kind of like, not like risk condition, but it&#8217;s like, okay, if I kickstart this task now I can&#8217;t really use the computer.</p><p><strong>Felix</strong>: Yeah.</p><p><strong>Alessio</strong>: You know, because car coworkers doing things on it and it&#8217;s kind of awkward, like, yeah. I&#8217;m not sure.</p><p><strong>Felix</strong>: I, I do think it&#8217;s a super interesting area because I, I can maybe tell you like some of the things I thought about that I think are actually a bad idea. So when, when we initially started working on cowork, I, I did have some dreams about, well, would it look like for cloud of its own cursor?</p><p>Could be cool, right? Like it&#8217;s a computer, we can write code, we can touch everything. Like who says that computers need to have one cursor? We could do a second cursor, but that actually breaks down quite a bit. Even if you go and like present cool dreams to both Apple and Microsoft, you&#8217;re like, wouldn&#8217;t it be cool if, um, it breaks down quite a bit?</p><p>&#8216;cause so many of our models on a computer are built around this idea of like, there&#8217;s only one thing working on it. Yeah, there&#8217;s like a foreground app, a background app, cloud and Chrome can work in the background, but that&#8217;s like within one application. But the operating system layer, that is a lot harder to implement.</p><p>So I&#8217;m, I&#8217;m still grappling with what, what does it mean for cloud to actually act on your computer. It&#8217;s the right format for cloud to have its own computer that you set up. And maybe every now and then you like zoom in and you play with it. Or is the right format for Claude to just like, wait until you are.</p><p>Stepping away for a little bit and take over while you&#8217;re gone. Or it&#8217;s the right move for cloud. Just like if it&#8217;s on computer in the cloud, and like whatever you want cloud to do, you have to set up yourself. Right. There&#8217;s like a, there&#8217;s like a number of different options. Um, this is the thing I think about a lot, like what is the relationship between you and your computer and you and your data on their computer?</p><p>Because how intimate that relationship is kind of depends on the tool and Right. The thing that you&#8217;re current looking at, right? Like we&#8217;re quite comfortable sharing some things, very uncomfortable, sharing other things. And I think whatever product is gonna be successful, we&#8217;ll have to deal with those, like, with those different things.</p><p>But you probably, even if Claude was capable of making a determination, would you want Claude to make that determination in the first place? It&#8217;s tricky, Barry, because it&#8217;s like, it&#8217;s more than just privacy. It&#8217;s like almost intimacy and it&#8217;s like tricky to reason about in a way that will make everyone comfortable.</p><p><strong>Alessio</strong>: Yeah, I could see. You know, a virtual box, like actual virtual box app where like you run the VM and then you have like a screen within the screen, you know, you can put it in the background, but then you can like jump in the screen and like you,</p><p><strong>Felix</strong>: that&#8217;s not a bad idea. Yeah.</p><p><strong>Alessio</strong>: You know, like, I mean I used it, you know, people used to do it virtualizing like C Linux in a Windows machine.</p><p><strong>Felix</strong>: Yeah.</p><p><strong>Alessio</strong>: And like you would just jump in and then you would jump out. But it&#8217;s like, it&#8217;s not like a dual boot. It&#8217;s like within the thing. The problem is that you need twice the amount of ram, twice the amount of, you know, it&#8217;s like, it&#8217;s kind of taxing on the machine. But I think that would be cool. Kinda like see, you know, the little quad window.</p><p>I can see desktop look cute. It is clicking around things</p><p><strong>swyx</strong>: I was gonna bring up. He&#8217;s the original machine and the machine guy, because he has the uh, windows. Windows 95 project. Where&#8217;s, where&#8217;s the Windows 85 project at?</p><p><strong>Felix</strong>: It&#8217;s probably somewhere in my GI guitar,</p><p><strong>swyx</strong>: right? No, no, no, no, no. It is like the first thing you see is this one.</p><p>Nice. Yeah,</p><p><strong>Felix</strong>: yeah,</p><p><strong>swyx</strong>: exactly.</p><p><strong>Felix</strong>: That was honestly a very fun project though. Like, obviously I didn&#8217;t, I, I should say this, just so that No, it&#8217;s the wrong impression. I did not write the actual, the actual, obviously I didn&#8217;t build Windows only five because I was a child, but also I did not build the actual engine that is capable of like simulating an X 86 processor and JavaScript and m um, that&#8217;s a tool called V 86, which is very cool and everyone should try.</p><p>But this came out of a, this came out of like a debate we had at work where people were like, they often are in the into debating the merits of electron and whether or not we should be building software in JavaScript, yes or no. And I still am very upset that I can run all of Windows 95 in JavaScript.</p><p>And launch Microsoft Excel inside the virtualized JavaScript Windows only five machine, and do things that pro, I can do that entire chain faster than I can do a lot of other things in like traditional SaaS applications. Mm-hmm. Uh, this is sort of like a, like a performance rampage that I went on. So I&#8217;m mostly built this as a joke for some of my colleagues at Slack.</p><p>This took, took like one night. Um, what, but then that I, it was, it was not hard to do. It was all the hard work is in V 86. Yeah. Like if, go to the repo, it&#8217;s gonna say like, 99% of his work is done by, by um, a guy who goes after the, by the name. Copy. His name is Fabian.</p><p><strong>swyx</strong>: Yeah.</p><p><strong>Felix</strong>: Um,</p><p><strong>swyx</strong>: cool. I think you&#8217;re, you&#8217;re kind of back on the Windows grind &#8216;cause you&#8217;re building out the Windows support.</p><p>Uh, I thought there was some really cool technical stories to tell. Uh, and it gives people an appreciation of like, well here&#8217;s how hard it is and here&#8217;s how important here, how, how you invested the sandbox. So maybe this is like a good opportunity to talk about something in the details.</p><p><strong>Felix</strong>: Oh yeah, the, the VM honestly is like so cool.</p><p>There&#8217;s a lot of things we dislike about the vm, right? Like there, there&#8217;s a lot of things that are real trade offs and you want to know why you making those trade offs. Um, and you&#8217;re right, like a lot of people write me like, Hey, how, how come cloud is taking up 10 gigabytes? I could say on the point, it&#8217;s not actually taking up 10 gigabytes.</p><p>It&#8217;s just like a way that macros displays bites is like wrong, but the way we actually ride it to disc is by we collapse the empty space and the image, so it&#8217;s not actually taking up 10 gigs. But that&#8217;s a technical differentiation. That&#8217;s probably not gonna matter to, like,</p><p><strong>swyx</strong>: to me, the the, the outcome is it takes too long to start.</p><p>Yeah. It&#8217;s like 30 seconds sometimes. So I don&#8217;t know. Oh, it should be faster than that. Whatever it be te about this feels like 30.</p><p><strong>Felix</strong>: Yeah. Like even either way, like whatever it is, it&#8217;s going to be, it&#8217;s going to be slower than just running Log Ultra on your computer. Right. So the trade offs are real, but what we&#8217;re doing on Windows, we&#8217;re using the Windows, windows, uh, host compute system.</p><p>It&#8217;s the same thing that WSL two runs on, like the Windows subsystem for Linux that I think a lot of developers appreciate quite a bit. Yeah. Um, and it&#8217;s, it&#8217;s pretty cool because we sort of like have to separate out which system space the virtual machine runs in, in who gets to talk the virtual machine because obviously you give this virtual machine a decent amount of power.</p><p>How do we optimize not just the connection between the two systems, but also how do we make sure that random other application doesn&#8217;t get to talk to Clot inside the vm?</p><p><strong>swyx</strong>: Hmm.</p><p><strong>Felix</strong>: We do some pretty interesting things. Um, last week we started writing a new networking service. A networking driver. That optimizes how Claw talks to the internet.</p><p>If your company&#8217;s doing like weird internet things like pack inspection and like, like, you know, taking your part as a cell and inside your company, I think there was probably like a very small, easy version to build of cowork that is much simpler but also breaks on most com most users, computers. And this one is quite nice because it works on most users computers.</p><p>Um, and the default example I always go for is I, I really want this to be highly effective on like a, on like a machine that most people pick up. And that machine will probably not have Python, it will not have no j And even if I just take away those two things, cloud is going to be so much less effective from</p><p><strong>swyx</strong>: your computer.</p><p>So what do you do? You don&#8217;t even, I mean, may maybe require people to install Node in Python.</p><p><strong>Felix</strong>: Oh, like, you mean for like a, what does the feature look like without a vm?</p><p><strong>swyx</strong>: No, no, no. So, so like, like you said, right? Let&#8217;s say a target machine is whatever&#8217;s a default spec, windows laptop.</p><p><strong>Felix</strong>: We do this, which is quite cool.</p><p>So on, on, uh, mes, we use the, um, apple virtualization framework, which is pretty solid, optimized, like it&#8217;s good stuff, and instead simple a p call, right?</p><p><strong>swyx</strong>: It&#8217;s</p><p><strong>Felix</strong>: like super simple.</p><p><strong>swyx</strong>: I, I saw the code recently and I&#8217;m like, that&#8217;s it. What the fuck</p><p><strong>Felix</strong>: would you, once you start like shipping production code on it, you start adding like all of these edge cases, your new</p><p><strong>swyx</strong>: Oh</p><p><strong>Felix</strong>: yeah, it ends up being a little longer, but, um, I think Apple really cooked with a virtualization framework and it&#8217;s very, very good.</p><p>It is very fast, it&#8217;s very reliable. And same on Windows. The, the host compute system. I think WSL two as well is maybe one of the diamonds within Windows. It&#8217;s like one of the few things that developers universally rave about is very, very cool. And like hooking into the same subsystem makes a lot easier for us to say We don&#8217;t really care how locked down your computer is.</p><p>Maybe it&#8217;s like your employer&#8217;s computer and your employer has decided that you get to install nothing.</p><p><strong>Alessio</strong>: Mm-hmm.</p><p><strong>Felix</strong>: Not trusted, but it&#8217;s true in a lot of environments, right? Like even at Anthropic, um, our IT department controls what kinda stuff you install, just like a pretty common experience for many companies.</p><p>Um, and this gives it departments a decent amount of, like, it makes their job so much easier because we can say you can separate out cloud&#8217;s computer from the user&#8217;s computer. And then for cloud&#8217;s computer, where you probably care about its data loss, you care about like a potentially hostile actor, you care about maybe data being exfiltrated.</p><p>And once you control the network and the file system layer, you don&#8217;t really care necessarily anymore. That cloud might be writing super useful Python scripts. What worries you about the fact is that like once you install Python, now anyone can do anything on a computer. Once you put that in the vm, that risk really goes down.</p><p><strong>swyx</strong>: Yeah.</p><p><strong>Felix</strong>: So that&#8217;s why we jumped through all of these hoops.</p><p><strong>swyx</strong>: Yeah. I think you, you had a different, uh, tweet about this. Um, but it, it&#8217;s, it&#8217;s almost like people have also approved exhaustion. Like, it&#8217;s like you can&#8217;t approve every single commands. Like sometimes by, by default, some of the theis, I think even early called code, uh, we have to approve every single command.</p><p>Yeah. And, and like it&#8217;s so, so there&#8217;s this sort of dichotomy between either approve every step or dangerously get permissions.</p><p><strong>Felix</strong>: Yeah.</p><p><strong>swyx</strong>: And actually sandboxing is like, kind of like the middle ground.</p><p><strong>Felix</strong>: Yeah. I do think, I do think it, it&#8217;s maybe on us as like the AI industry to come up something better than, oh, this is super safe as long as it doesn&#8217;t do anything right.</p><p>Right. But if you want this to be useful, then you have to like approve every single step of the way. And like, computer use is a good example. The only way to make computer use on your host, like super safe, like really super safe is probably if you approve every single action, right. Like models, like, I would like to type the word.</p><p>You&#8217;re like, okay, that seems fine. I know, I know. Which, like cursor is focused. Yeah. It&#8217;s not</p><p><strong>swyx</strong>: automation if you don&#8217;t delegate.</p><p><strong>Felix</strong>: Yeah, exactly. You need to like properly delegate. You need to be able to like delegate and walk away and trust that this thing is not gonna like mess dramatically. And I don&#8217;t even think we need to build perfect systems.</p><p>I don&#8217;t think we need to wait for like a hundred percent model alignment. We can rely on the same Swiss cheese model we&#8217;ve used in the industry for a long time. But I do think we need to like universally maybe eventually invest more. And that&#8217;s what we&#8217;re doing. We need to invest more in systems where we can say, you do not need to approve everything.</p><p><strong>swyx</strong>: Speaking of Swiss cheese model, he just wrote a thing about this.</p><p><strong>Felix</strong>: Oh cool.</p><p><strong>swyx</strong>: Yeah. Uh, yeah. Um, yeah. Super cool. I mean, yeah, it&#8217;s, it&#8217;s weird how like, I guess usually I think safety and security is kind of like a boring word to, to engineers. They&#8217;re like, just gimme be unsafe, gimme unsecure. But, um, I think.</p><p>Achieving the right thing. Like you are going after a consumer slash prosumer.</p><p><strong>Felix</strong>: Yeah. Yeah. Talking both kind of like both. I think I, I also want to capture people who would&#8217;ve no trouble using clock code like yourself, right?</p><p><strong>swyx</strong>: Yeah. Yeah.</p><p><strong>Felix</strong>: But still find it maybe just convenient, easier. You&#8217;re like, oh cool.</p><p>That&#8217;s like the list on the right. I can edit it. Those things are just easier to do if you have</p><p><strong>swyx</strong>: to. But this is like clearly the knowledge work side. Yeah. Claude Code will clearly capture the development workflow. But like I, I, I do think like you have to sweat this like safety and security details in order for people to trust it.</p><p>And like the even Claude and Chrome, like having the whatever API uses to do the background thing.</p><p><strong>Felix</strong>: Yeah.</p><p><strong>swyx</strong>: Um, that&#8217;s the only reason I use it is because otherwise I would have to just get a separate machine.</p><p><strong>Felix</strong>: Yeah.</p><p><strong>swyx</strong>: And just run it, run to the, and that sounds like</p><p><strong>Felix</strong>: super annoying.</p><p><strong>swyx</strong>: Yeah. I mean, like currently doing it, but,</p><p><strong>Felix</strong>: and I think, I think also as developers, um, maybe we&#8217;re, we are more risk tolerant, but we&#8217;re also just like accepting we are more risk tolerant, but I think we also just have.</p><p>I don&#8217;t wanna say arrogance, but like sort of the trust that if like the really bad thing happens, we can probably fix it.</p><p><strong>swyx</strong>: I just tell Claude to like, check with me before doing any irreversible action. Like sending an email or doing permanently. Yeah, it&#8217;s good enough.</p><p><strong>Felix</strong>: But like, not even Claude, I mean like simple things such as NPM install, right?</p><p>Like we&#8217;re all running NPM install with full user permissions and if it wants to like read SSH, it well crazy that that is the default kind of why. Yeah, I know. I agree. I agree. Fine. Like I&#8217;m obviously doing it every single day. No, right. Like, uh, and I think obviously NPM and GitHub too have like done a pretty good job maybe over the last couple months to like clean house and come up with like more specific tokens.</p><p>But generally speaking, I think as engineers we&#8217;ve always been a little bit more risk tolerant. And if you do a little bit of introspection and you ask yourself, is that how we should be doing things, you might not always come up with the right answer. And I think for models too, like my approach, like I&#8217;m not gonna, the the safest thing is to do nothing.</p><p>We do want products that are quite capable, but to the extent possible, I don&#8217;t wanna ask you, are you okay with the script? Because I kind of believe that once it starts becoming a part of your workflow, you&#8217;re probably not either, either you don&#8217;t have the skill to understand whether or not the python, the script is safe or you&#8217;re not gonna read it anyway.</p><p><strong>swyx</strong>: Cool. I guess a, a couple partying questions. Uh, what&#8217;s the future of clockwork?</p><p><strong>Felix</strong>: I think we&#8217;re still, we&#8217;re still such early days. We&#8217;re gonna keep shipping things that we&#8217;re gonna keep shipping, things that, um, we&#8217;re gonna keep iterating on this thing like pretty quickly, but, which I mean, you can sort of continue to expect that every single week there&#8217;s gonna be like a small new feature, if not a big new feature.</p><p>Um, I&#8217;m going to continue probably to double down on your computer and like making you effective in your computer and making cloud effective in your computer. Um, we&#8217;re starting to grapple, as we talked about today, grapple more with a question of like, what does it mean? What does your computer mean? Does it have to be the one in front of you or like a VM on your computer or like a computer somewhere else?</p><p>And then the third thing that I&#8217;m quite excited about is. We&#8217;re continuing to go off this hill climbing on slowly taking users who are used to asking questions and getting an answer to slowly teaching them to like step more and more away. And that claw take over like bigger and bigger tasks and work both in time as well as in like scope.</p><p>And I think you can probably see most of the, our investments on our feature releases to like work on both of those things, like the ability to do more on your computer and then the ability to do more independently for longer.</p><p><strong>swyx</strong>: Does remote control work for Claude Cowork yet? No. Right.</p><p><strong>Felix</strong>: Excellent question.</p><p><strong>swyx</strong>: Coming soon. I mean, that&#8217;s an obvious thing if you want to keep betting on the, on your computer, but I, to me like. You know, we, we talk about like, people are not ready this year. Like the, there&#8217;s, there&#8217;s no wall. It&#8217;s, it&#8217;s accelerating to me like what will be we be doing differently at the end of this year that, you know, we are maybe not even thinking about this, uh, at the start of this year.</p><p>Right. Like, I&#8217;m just trying to look ahead as to like, what, what&#8217;s like a good use case that you&#8217;re, that we sort of aim towards? So for, for example, for the machine learning scientists, it&#8217;s always, okay, well I want AI scientists, I can automate, automate machine learning, but like for, for knowledge work, I mean, I can already, you know, get it to sign up for Google Cloud to mean as a GI.</p><p><strong>Felix</strong>: Yeah. &#8216;</p><p><strong>swyx</strong>: cause Google cuts are, but like, what, what is, what&#8217;s beyond that? I don&#8217;t know.</p><p><strong>Felix</strong>: I think it&#8217;s basically the idea that like you still had to tell her to build your script, right? He was still kind of involved.</p><p><strong>swyx</strong>: Yes.</p><p><strong>Felix</strong>: In maybe a way that felt kind of magical to you, but like, maybe to me on the other side is the person building this product still feels kind of heavy handed.</p><p>I see so much process that I&#8217;m like, oh, lemme take that away from you. Okay. But like, how do I just go, I will continues to go or continue to go like further and further up the stack. Make your life easier and easier.</p><p><strong>swyx</strong>: Oh, here&#8217;s one. Right?</p><p><strong>Felix</strong>: Yeah.</p><p><strong>swyx</strong>: Watch, uh, I, you know, I don&#8217;t care about my own privacy or whatever, or I trust cloud, I trust philanthropic.</p><p>So just watch everything I do on a normal day-to-day basis. At the end of the day, tell me what you is called co workable.</p><p><strong>Felix</strong>: Yeah. I</p><p><strong>swyx</strong>: dunno.</p><p><strong>Felix</strong>: I think the funny thing about a lot of these products is that like, for good reason, I don&#8217;t enjoy, I, I don&#8217;t, throughout my entire career, I&#8217;ve never like teased too much what I&#8217;m working on because I think you should just like, yeah.</p><p>Release it. Yeah. Build the base and release it, and then talk about it. Like I&#8217;m, I&#8217;m not a big fan of the like vague posting my own work ahead of time.</p><p><strong>swyx</strong>: Yeah.</p><p><strong>Felix</strong>: But the thing that is like always so fascinating to me is like, both of you all multiple times a day, you&#8217;ve like mentioned things and I&#8217;m like, yeah, that is obviously like very obvious</p><p><strong>swyx</strong>: Okay.</p><p><strong>Felix</strong>: That someone should be working on those things. Um, and I think we&#8217;re still in the space where if you look at cowork. The things that we will be releasing will probably not be a big surprise to either of you. You&#8217;re gonna be like, yeah, obviously that&#8217;s valuable obviously that we&#8217;re working on those things.</p><p><strong>swyx</strong>: Yeah.</p><p>Yeah.</p><p><strong>Felix</strong>: And obviously that&#8217;s good and useful. And the more I hit those points, the more our features fit into that category, I think the better it is for us because then we don&#8217;t end up building things that are too hyper specialized to difficult harness style.</p><p><strong>swyx</strong>: Yeah. I think the hyper specialized thing is very important.</p><p>It keeps you like general purpose. It, it means you&#8217;re not thinking too small. Maybe I don&#8217;t, I don&#8217;t know what the, the word is.</p><p><strong>Felix</strong>: Yeah, yeah, exactly. It&#8217;s like the whole concept that like at no point if we release, you know, there&#8217;s no Claude Code for no jazz applications that use React and 10 Stack. I know any of those two things.</p><p>And like if it&#8217;s anything else, I know several startups like that. I think that&#8217;s pretty, like, I&#8217;m not a vc, I&#8217;m not an investor. It&#8217;s like hard for me to predict where the markets go. But in terms of the building box that I&#8217;m interested in, the electron is probably by far the most popular thing I ever built.</p><p>And, um, electron itself is like. Very abstractable and generalizable. Right? Like so many apps run in it. And I think it would&#8217;ve been hard for me to predict how many apps actually end up using Electron.</p><p><strong>swyx</strong>: Yeah.</p><p><strong>Felix</strong>: Um, and what would&#8217;ve been even less useful for me to predict this in what those apps do. I distinctly remember a bloom coming out of being like, that is cool.</p><p>Like you are a camera in a little circle in the corner. That is pretty smart.</p><p><strong>swyx</strong>: That&#8217;s an app. Yeah. Yeah.</p><p><strong>Felix</strong>: Or at least was, I&#8217;m not sure if it still is. It was for a while. Or like one password has so many interesting things. Right. It, it&#8217;s, it&#8217;s, it&#8217;s a level of the stack that I&#8217;m quite comfortable with. And whenever I give other engineers, advisors actually that layer that I think is most valuable to invest in because the tools of that layer are not that good.</p><p>But that&#8217;s where you get the most leverage</p><p><strong>swyx</strong>: for like,</p><p><strong>Felix</strong>: the future in general.</p><p><strong>swyx</strong>: Just quick tangent on Electron. &#8216;cause I always wonder this, uh, have you looked at Tori?</p><p><strong>Felix</strong>: I have, yeah.</p><p><strong>swyx</strong>: What&#8217;s your take? Uh, you know, look, my, my my, my view is like most things should be Tori by default, unless you really need the full power of electron, but.</p><p><strong>Felix</strong>: Yeah, I can give like my take on, I can give my big take. Why do we ship an entire version of chromium inside the thing, right? Like why do we do that? And, um, people ask me this question a lot because it&#8217;s like very counterintuitive. Wouldn&#8217;t it be much easier to use the web use that are on the operating system?</p><p>Wouldn&#8217;t it be much easier not to have to do that? And the answer is yes. And like obviously I did that once upon a time. I did that there was a version of the Slack app that used just the operating system that use Wait, did you, did you start the Slack app? I would, well, team effort and</p><p><strong>swyx</strong>: Yeah, but I was, I was there.</p><p>We built the Slack app.</p><p><strong>Felix</strong>: Yeah. It&#8217;s crazy. Um, I mean obviously you get the electron guy to do it, but, well, but this is an interesting point. Like, by the time, by the time I joined Slack, they already had an app that was built with something at the time called Met Gap. It was a little bit like the same app gap thing for mobile.</p><p>It just used the operating systems. Web views. Um, and that didn&#8217;t work for like so many reasons. Um, and they were like, all right, maybe we need like bigger guns. We need to like take more control of the rendering stack. And there&#8217;s, there&#8217;s a few things I always mention here. Um, I think if you&#8217;re building a small app, just going with the operating systems web view is perfectly fine.</p><p>If you&#8217;re building an app, maybe that doesn&#8217;t have too many users who will like cry bloody murder. If it doesn&#8217;t work, that is fine. The reason to go with your own embedded rendering engine is because, and this is still true in 2026, the operating system render engines are not that good. They&#8217;re just not that good.</p><p>Both Microsoft and Apple are trying to move away from that. They so far really haven&#8217;t, the only way to upgrade those is to upgrade your operating system. So if you are, say Slack and you have critical rendering bug in WK WebU and some of the other WebU options, your only recourse is to tell your customer, oh, sorry, you&#8217;re too poor.</p><p>You didn&#8217;t bother the, its MacBook. Unacceptable.</p><p><strong>swyx</strong>: Mm-hmm.</p><p><strong>Felix</strong>: Unacceptable to user, unacceptable to user developer. So you sort of need to like go down the stack and like find the best rendering engine, then put it in your app. Why chromium, even though it&#8217;s very big chromium is by far the best thing. Like I, I often like to remind people the unreal engine, you wanna render some text.</p><p>They use chromium. Like chromium is part of the unreal engine for same purposes. Chromium is very, very good. I think it&#8217;s like one of the marvels of engineering. It&#8217;s very hard for, we&#8217;re in San Francisco right now where we&#8217;re recording. Most of the people in the city are web developers. It&#8217;s hard for me to like overstate how magical it is.</p><p>They run seat like rendering a YouTube video dynamically. Negotiating a bit rate, figuring out what to do about your extremely broken hardware driver. Actually, this is a fun thing. Um, okay, you can enter Chrome call on Wack Wack GPU. Okay? And if you scroll down a little bit, these are all the enabled workarounds because something is going wrong on your computer.</p><p>If you&#8217;re doing this on a Windows computer with like A GPU, that is not the most popular GPO, it will be much longer. And all of these are usually just there to make sure that if I say as a developer, I want a red pixel to appear here, that that actually happens. Chrome is such a marvel because of works on all the machines that user might throw you and it&#8217;s gonna work fairly reliably.</p><p>And if it doesn&#8217;t, they will probably fix it within 24 hours.</p><p><strong>swyx</strong>: I see. So this is the super operating system, right? That that works everywhere.</p><p><strong>Felix</strong>: Yeah.</p><p><strong>swyx</strong>: Right. Okay. Yeah.</p><p><strong>Felix</strong>: So a lot of the magic of Electron is honestly just that it makes it very easy for you to ch chromium in a way that serves you exactly in your use cases.</p><p>Elect, uh, exactly.</p><p><strong>swyx</strong>: Our next interview is with Morgan Dreesen.</p><p><strong>Felix</strong>: Yeah.</p><p><strong>swyx</strong>: Who had the phrase like, desktop OSS are just poorly deep, uh, poor implications of the, the actual os, which is Chrome, which like actually works everywhere. And this is this, this is the platform where you ship apps.</p><p><strong>Felix</strong>: I, I think the wild thing is that like as engineers, we so often sort of assume that the platform, like the layer below us is like super stable.</p><p>Mm-hmm. And then you talk to those people and they&#8217;re like, ah, we are also just like guessing. Um, uh, and I had like a distinct moment at Slack where one of our customers at Slack was Nvidia, and for a while I really put GPU developers on this pedestal in my head. And I do think they&#8217;re still probably much smarter than I am.</p><p>But I was like hardware engineers who built the chips, who then like built the drivers. Their work must be so much harder than mine. They must be very good. And we had like one bug in Slack where like if you had a YouTube video in Slack, it wouldn&#8217;t quite render why. Like it would have these weird artifacts.</p><p>And, um, that ended up being a chromium bug. And I ended up on this like giant thread. So I got to see a lot of the source code. And they also are just like common to do. We don&#8217;t know why this is weird, but if you flip this bit, things work. You know, this is just like happening with every layer of the stack.</p><p>Maybe the, uh, you know, the,</p><p><strong>swyx</strong>: the end of year a GI prediction is that clock can build chromium. You see, you see you, you laugh now. But yeah, like, you know, someday</p><p><strong>Felix</strong>: it&#8217;s, it&#8217;s sounding, it could get pretty good. Like it used to be completely useless. Um, mostly just like overwhelmed, both with how hyper specialized tools are inside the chromium repo.</p><p>Like for, for a long time. Chrome has like sort of reinvent all the tools because none of them are capable of ending Chrome. I think the EGI moment I am kind of waiting for is at what point are we gonna say Electron is probably no longer necessary because you can just build fully native apps. The Swifty?</p><p>Yeah. Like not just in Swift because this is one thing, like it&#8217;s pretty easy if you, I think our current models are quite capable of taking an electron app and replicating it Swift, are they gonna be capable of like building an app that is actually more performant, which is less memory? All of that stuff, um, is gonna go into the same hyper optimization that developers have done for like a long time.</p><p>We&#8217;re not quite there yet. Work and like point even our best models at a thing and say, just replicate this, a native code. Make no mistakes. Ultra think. Right? We&#8217;re not quite there yet. Um, ultra</p><p><strong>swyx</strong>: think is bad</p><p><strong>Felix</strong>: today. Think is back. Yes. Okay.</p><p><strong>swyx</strong>: Or we&#8217;ll get an ultra think for like days,</p><p><strong>Felix</strong>: just a pretty long time before,</p><p><strong>swyx</strong>: but he worked on Ultra think for days.</p><p>Yeah. Why he just, it&#8217;s just. Front,</p><p><strong>Alessio</strong>: I&#8217;ll let it, the</p><p><strong>Felix</strong>: more goes into</p><p><strong>swyx</strong>: it. Yeah. Okay.</p><p><strong>Alessio</strong>: Another question I had is like coworks. So if I have my Claude Cowork, like what&#8217;s kinda like the multiplayer mode? I think sub agents is like single player Split up the context.</p><p><strong>Felix</strong>: Yeah.</p><p><strong>Alessio</strong>: And the multiplayer cowork is like, my colleague is some file on their machine that I wanna know about or I wanna know how their task is going to then update my thing.</p><p>Like is that interesting? Is that something that makes sense for you to build or for like</p><p><strong>Felix</strong>: It&#8217;s like super interesting to me it, it almost goes back to like some of the scaffolding room. Like okay, are we gonna be end up, are we, will we end up building scaffolding that will just go away? And like a question I have here is at what point do we just assign these things, like their own Gmail account?</p><p>We just give them their like Slack handle and then they will just like use the same tools we humans use to interact with each other. You mentioned our finance people, they&#8217;ve been working pretty hard on very good office integrations. And I think for a while we&#8217;ve been like, we built so much tech around cloud, leaving useful comments inside a Google Doc, and now it just does, it just like leaves a comment in your Google Doc and that&#8217;s how you interact with it.</p><p>Maybe like the similar thing where I still have open questions around what is the best interaction mode? Is it for us to build something super custom for cowork agents to talk to each other? Or is it okay, let&#8217;s just jump straight to the finish line and say, well, we&#8217;re just gonna give this thing, if you use Slack at work, we&#8217;re just gonna give this thing a Slack handle.</p><p>And that&#8217;s going to be the way, it&#8217;s like multiplayer capable.</p><p><strong>Alessio</strong>: They communicate with each other. Yeah. Yeah. Like, you know, as a, as a fun project, I build this thing called piq, which basically takes any repo and the PI agent, uh, coding agent, it puts it in a VPS, and then there&#8217;s a public web hook where anybody can submit a coding task.</p><p>Oh. And then there&#8217;s a dashboard in which you review the task and then piq pi, pi, uh, queue.</p><p>Yeah. You basically get all these like tasks, anybody can submit a task.</p><p><strong>Felix</strong>: Mm-hmm.</p><p><strong>Alessio</strong>: And to me it&#8217;s almost like in the organization of the future, it&#8217;s like the sales people are talking to the engineering team that is talking to the marketing team, to the product team, and all these coworker are going to like queue up decisions for other people to approve in a way.</p><p><strong>Felix</strong>: Yeah.</p><p><strong>Alessio</strong>: You know, and I&#8217;m kind of curious what that looks like and like how do you, how do I give my cowork the ability to build a proof task without asking me</p><p><strong>Felix</strong>: Yeah.</p><p><strong>Alessio</strong>: And how to decide which one I need to review. Yeah. You know, because for some of these things it&#8217;s like, you know, you wanna change the color of something that&#8217;s kinda like a branding decision.</p><p>Or another one is like, hey, your thing is just broken. It&#8217;s like, this is like how you fix it. Yeah. And Claude can actually review whether or not that prompt matches what he&#8217;s trying to do today. Everything is still very, it&#8217;s like multiplayer within the single player, you know? Yeah. I guess spin up many of them, but like, how do I get multiple people to hand off to each other things using their particular context?</p><p><strong>Felix</strong>: Yeah. And for both of your coworkers to like talk to each other. Right,</p><p><strong>Alessio</strong>: right. Yeah. Hey, we got an episode today. Can you like, have you, you know, or</p><p><strong>Felix</strong>: Yeah. This is like a, uh, I know we&#8217;re like running out of time here, but like we, we previously talked about sharing skills and I did have this question of like, what if your cowork would just like ask the other coworks if they have a skill for this task?</p><p>Doesn&#8217;t matter. These could do.</p><p><strong>swyx</strong>: Right. Like, okay, so skill transfer.</p><p><strong>Felix</strong>: Yeah, like,</p><p><strong>swyx</strong>: um, and again, that&#8217;s, maybe</p><p><strong>Felix</strong>: this maybe goes back into the territory of like building something very powerful and building something creepy often goes hand in hand. Um, because I could tell from the reaction that my fellow engineers said that this is probably not what we&#8217;re gonna do, but like.</p><p>We have Bluetooth le right? Like I, this computer can figure out that it&#8217;s sitting right next to this computer. So you&#8217;re probably working on the same thing. Um, well, you see that in cowork, probably not. But, um, there&#8217;s like, I think really creative solutions to problems that we really haven&#8217;t tried yet.</p><p>Yeah,</p><p><strong>Alessio</strong>: yeah, yeah. Yeah.</p><p><strong>swyx</strong>: Excellent. I guess the, the last thing is, uh, philanthropic labs. Uh, I always have this mental model of a model lab versus, uh, agent lab. And this is basically Anthropics internal agent lab, which co Claude Code, uh, is now under, right? It&#8217;s part of the whole org.</p><p><strong>Felix</strong>: I mean, people are so fungible, right?</p><p>Like,</p><p><strong>swyx</strong>: okay, this is just, I, I don&#8217;t know how, I don&#8217;t know real. This is, I don&#8217;t know.</p><p><strong>Felix</strong>: No, it&#8217;s a real team. It&#8217;s a very, um, the, the last team is primarily working though on things that you don&#8217;t see in public yet. Um, they&#8217;re trying like really wild out there, ideas that seem quite improbable. Um, the mad science</p><p><strong>swyx</strong>: thing.</p><p>But you, you&#8217;re, are you officially under this thing or</p><p><strong>Felix</strong>: No? We&#8217;re, where is the Claude Code is, but now Claude Code is like a fairly big group where. I actually know many people we are like, like I remember yesterday coming into our weekly COVID meeting. I was like, woo,</p><p><strong>Alessio</strong>: this is hot.</p><p><strong>Felix</strong>: There&#8217;s a lot of people here.</p><p>Um, but we still have a labs team and we actually made the labs team a lot bigger. Mike just joined the labs team as a, as an ic, which I think is very cool and very fun. But they&#8217;re, they&#8217;re working on things that you have not seen yet that are extremely out there and probably half broken. Right? Like the sort of the idea of a lab team is that it should only work on things that make really no sense for anyone else to work on.</p><p><strong>swyx</strong>: Okay. Well, looking for exciting things from there, but thank you so much. I know we&#8217;re out of time, but uh, appreciate your joining us. I appreciate co cowork, everyone go use it. Uh, it is the closest I&#8217;ve felt to a I this year. That&#8217;s so nice you to say. Thank you very much. Yeah. Thank you for your time. Yeah.</p>]]></content:encoded></item><item><title><![CDATA[[AINews] NVIDIA GTC: Jensen goes hard on OpenClaw, Vera CPU, and announces $1T sales backlog in 2027]]></title><description><![CDATA[a quiet day lets us reflect on NVIDIA GTC 2026]]></description><link>https://www.latent.space/p/ainews-nvidia-gtc-jensen-goes-hard</link><guid isPermaLink="false">https://www.latent.space/p/ainews-nvidia-gtc-jensen-goes-hard</guid><pubDate>Tue, 17 Mar 2026 03:25:11 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!fyMz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c9e5d8d-2b90-4c27-ab1c-fd67c035acb4_3592x1886.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It is NVIDIA GTC day again, and over his <a href="https://www.youtube.com/watch?v=X2i_8O75_Os&amp;pp=ygUKbnZpZGlhIGd0Yw%3D%3D">signature 2hr unrehearsed keynote</a>, Jensen gave updates on the entire NVIDIA <a href="https://x.com/swyx/status/2033666752836759687?s=20">universe</a> and ecosystem and celebrated his <a href="https://x.com/swyx/status/2033688824048709851?s=20">InferenceMAX champions belt</a>. As one might expect, Blackwell and Rubin are selling very very well (some <a href="https://x.com/BenBajarin/status/2033623321540235661?s=20">accounting is necesary</a>), and now <a href="https://www.nvidia.com/en-us/data-center/vera-cpu/">Vera</a>:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!fyMz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c9e5d8d-2b90-4c27-ab1c-fd67c035acb4_3592x1886.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!fyMz!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c9e5d8d-2b90-4c27-ab1c-fd67c035acb4_3592x1886.png 424w, https://substackcdn.com/image/fetch/$s_!fyMz!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c9e5d8d-2b90-4c27-ab1c-fd67c035acb4_3592x1886.png 848w, https://substackcdn.com/image/fetch/$s_!fyMz!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c9e5d8d-2b90-4c27-ab1c-fd67c035acb4_3592x1886.png 1272w, https://substackcdn.com/image/fetch/$s_!fyMz!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c9e5d8d-2b90-4c27-ab1c-fd67c035acb4_3592x1886.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!fyMz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c9e5d8d-2b90-4c27-ab1c-fd67c035acb4_3592x1886.png" width="1456" height="764" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3c9e5d8d-2b90-4c27-ab1c-fd67c035acb4_3592x1886.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:764,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2482097,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.latent.space/i/191206166?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c9e5d8d-2b90-4c27-ab1c-fd67c035acb4_3592x1886.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!fyMz!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c9e5d8d-2b90-4c27-ab1c-fd67c035acb4_3592x1886.png 424w, https://substackcdn.com/image/fetch/$s_!fyMz!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c9e5d8d-2b90-4c27-ab1c-fd67c035acb4_3592x1886.png 848w, https://substackcdn.com/image/fetch/$s_!fyMz!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c9e5d8d-2b90-4c27-ab1c-fd67c035acb4_3592x1886.png 1272w, https://substackcdn.com/image/fetch/$s_!fyMz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c9e5d8d-2b90-4c27-ab1c-fd67c035acb4_3592x1886.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The final section of the keynote was focused on OpenClaw, where Jensen went extremely hard in complimenting it and then pointed out the security issues, then pitched his solution, <a href="https://github.com/NVIDIA/NemoClaw">NemoClaw</a>:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!SDFj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfaddec2-3aea-4a3f-b4bd-07158c2251d6_2048x945.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!SDFj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfaddec2-3aea-4a3f-b4bd-07158c2251d6_2048x945.jpeg 424w, https://substackcdn.com/image/fetch/$s_!SDFj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfaddec2-3aea-4a3f-b4bd-07158c2251d6_2048x945.jpeg 848w, https://substackcdn.com/image/fetch/$s_!SDFj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfaddec2-3aea-4a3f-b4bd-07158c2251d6_2048x945.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!SDFj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfaddec2-3aea-4a3f-b4bd-07158c2251d6_2048x945.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!SDFj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfaddec2-3aea-4a3f-b4bd-07158c2251d6_2048x945.jpeg" width="1456" height="672" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cfaddec2-3aea-4a3f-b4bd-07158c2251d6_2048x945.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:672,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Image&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Image" title="Image" srcset="https://substackcdn.com/image/fetch/$s_!SDFj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfaddec2-3aea-4a3f-b4bd-07158c2251d6_2048x945.jpeg 424w, https://substackcdn.com/image/fetch/$s_!SDFj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfaddec2-3aea-4a3f-b4bd-07158c2251d6_2048x945.jpeg 848w, https://substackcdn.com/image/fetch/$s_!SDFj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfaddec2-3aea-4a3f-b4bd-07158c2251d6_2048x945.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!SDFj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfaddec2-3aea-4a3f-b4bd-07158c2251d6_2048x945.jpeg 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>NVIDIA moves at impressive speed for a $4T company, and <a href="https://www.latent.space/p/nvidia-brev-dynamo">we had some of their next generation leaders on the pod</a> to give more insight on how NVIDIA works this fast:</p><div id="youtube2-64D6tcsPH1U" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;64D6tcsPH1U&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/64D6tcsPH1U?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p></p><p></p><blockquote><p>AI News for 3/14/2026-3/16/2026. We checked 12 subreddits, <a href="https://twitter.com/i/lists/1585430245762441216">544 Twitters</a> and no further Discords. <a href="https://news.smol.ai/">AINews&#8217; website</a> lets you search all past issues. As a reminder, <a href="https://www.latent.space/p/2026">AINews is now a section of Latent Space</a>. You can <a href="https://support.substack.com/hc/en-us/articles/8914938285204-How-do-I-subscribe-to-or-unsubscribe-from-a-section-on-Substack">opt in/out</a> of email frequencies!</p></blockquote><div><hr></div><h1><strong>AI Twitter Recap</strong></h1><p><strong>Architecture Research: Moonshot&#8217;s Attention Residuals and the Debate Around Prior Art</strong></p><ul><li><p><strong>Moonshot&#8217;s </strong><code>Attention Residuals</code><strong> paper was the clearest technical story in the feed</strong>: <a href="https://x.com/Kimi_Moonshot/status/2033378587878072424">@Kimi_Moonshot</a> introduced a replacement for fixed residual accumulation with <strong>input-dependent attention over prior layers</strong>, plus <strong>Block AttnRes</strong> to keep cross-layer attention practical. Claimed results: <strong>1.25x compute advantage</strong>, <strong>&lt;2% inference latency overhead</strong>, validated on <strong>Kimi Linear 48B total / 3B active</strong>; follow-up posts highlighted improved hidden-state magnitude control and more uniform gradients across depth (<a href="https://x.com/Kimi_Moonshot/status/2033378596438556853">paper thread</a>, <a href="https://x.com/Kimi_Moonshot/status/2033378599450079581">paper link</a>). The release triggered strong positive reactions from practitioners and researchers including <a href="https://x.com/Yuchenj_UW/status/2033404695880896804">@Yuchenj_UW</a>, <a href="https://x.com/elonmusk/status/2033528245464047805">@elonmusk</a>, <a href="https://x.com/nathancgy4/status/2033390157102244098">@nathancgy4</a>, and multiple visual explainers such as <a href="https://x.com/eliebakouch/status/2033488233854620007">@eliebakouch</a> and <a href="https://x.com/tokenbender/status/2033437211371454915">@tokenbender</a>.</p></li><li><p><strong>The interesting second-order discussion was whether this is new, or &#8220;new at scale&#8221;</strong>: <a href="https://x.com/behrouz_ali/status/2033581834953453853">@behrouz_ali</a> argued the idea substantially overlaps with prior work like <strong>DeepCrossAttention</strong>, criticizing missing citations and broader ML novelty inflation; <a href="https://x.com/cloneofsimo/status/2033586628770570323">@cloneofsimo</a> made a similar point that Google had explored related ideas earlier, while others countered that the systems work and scaling evidence matter as much as the core intuition (<a href="https://x.com/_arohan_/status/2033587983455293638">context</a>, <a href="https://x.com/_arohan_/status/2033589201363735004">more context</a>). Net: the paper mattered both as an architectural proposal and as a live example of the field&#8217;s ongoing tension between <strong>idea novelty</strong>, <strong>citation quality</strong>, and <strong>frontier-scale validation</strong>.</p></li></ul><p><strong>Coding Agents, Harnesses, and Skills Infrastructure</strong></p><ul><li><p><strong>OpenAI&#8217;s Codex momentum showed up repeatedly</strong>: OpenAI Devs promoted a <a href="https://x.com/OpenAIDevs/status/2033333345619464228">Codex x Notion event</a>, while company posts and leadership commentary emphasized fast adoption. <a href="https://x.com/fidjissimo/status/2033537381907710092">@fidjissimo</a> said <strong>Codex is at 2M+ weekly active users</strong>, up nearly <strong>4x YTD</strong>, with OpenAI also building a deployment arm for enterprise rollout. <a href="https://x.com/sama/status/2033599375256207820">@sama</a> added that &#8220;hardcore builders&#8221; are switching to Codex, and <a href="https://x.com/gdb/status/2033605419726483963">@gdb</a> said <strong>GPT-5.4</strong> reached <strong>5T tokens/day within a week</strong> and a <strong>$1B annualized run-rate in net-new revenue</strong>. Product-wise, Codex also added <a href="https://x.com/i/status/2033636701848174967">subagents</a>, reinforcing the shift toward multi-agent coding workflows.</p></li><li><p><strong>The infrastructure layer around coding agents is maturing fast</strong>: <a href="https://x.com/AndrewYNg/status/2033577583200354812">@AndrewYNg</a> expanded <strong>Context Hub / chub</strong>, an open CLI for current API docs that now supports <strong>agent feedback loops</strong> on documentation. <a href="https://x.com/AssemblyAI/status/2033514383914283118">@AssemblyAI</a> shipped a maintained <strong>skill</strong> for Claude Code, Codex, Cursor, and compatible agents so they can use current API patterns rather than stale training priors. <a href="https://x.com/dair_ai/status/2033546855376916735">@dair_ai</a> highlighted a paper on <strong>automated extraction of agent skills from GitHub repos</strong> into standardized <code>SKILL.md</code>, with claimed <strong>40% knowledge-transfer gains</strong>. Together these point toward a new agent tooling stack: <strong>skills files, up-to-date docs, feedback channels, and repo-mined procedural knowledge</strong>.</p></li><li><p><strong>LangChain pushed further into &#8220;agent harness engineering&#8221;</strong>: <a href="https://x.com/LangChain/status/2033596690171629582">@LangChain</a> launched <strong>LangGraph CLI</strong> for terminal-based deploy/dev flows, and the ecosystem open-sourced <strong>Deep Agents</strong>, framed by <a href="https://x.com/itsafiz/status/2033591253955449289">@itsafiz</a> and <a href="https://x.com/simplifyinAI/status/2033581939756818648">@simplifyinAI</a> as an MIT-licensed recreation of the workflow behind top coding agents: planning/todos, filesystem ops, shell access, sub-agents, and context management. Internally, <a href="https://x.com/Vtrivedy10/status/2033608199564067098">@Vtrivedy10</a> said this is also the base for production agent work and evals. The notable pattern is that teams are no longer just shipping models; they&#8217;re shipping <strong>reference harnesses</strong>.</p></li></ul><p><strong>Open-Source Agents: Hermes&#8217; Breakout, OpenClaw Integrations, and Agent UX</strong></p><ul><li><p><strong>Hermes Agent had a strong community cycle</strong>: hackathon projects spanned home media automation (<a href="https://x.com/rodmarkun/status/2033307437088850102">@rodmarkun&#8217;s anime server tool</a>), cyber tooling (<a href="https://x.com/aylacroft/status/2033429386427351043">@aylacroft</a>), geopolitics/OSINT forecasting (<a href="https://x.com/WeXBT/status/2033391568426598608">@WeXBT</a>), and research visualization (<a href="https://x.com/t105add4_13/status/2033364535852360069">@t105add4_13</a>). User sentiment was consistently that Hermes is <strong>easier to set up</strong> and <strong>more robust</strong> than OpenClaw: see <a href="https://x.com/Zeneca/status/2033460972346650852">@Zeneca</a>, <a href="https://x.com/fuckyourputs/status/2033503910376431728">@fuckyourputs</a>, <a href="https://x.com/austin_hurwitz/status/2033552632241857002">@austin_hurwitz</a>, and <a href="https://x.com/0xMasonH/status/2033608276286243323">@0xMasonH</a>. <a href="https://x.com/Teknium/status/2033563976219709766">@Teknium</a> also posted setup guides like enabling <strong>Honcho memory</strong>.</p></li><li><p><strong>OpenClaw still expanded its ecosystem despite the Hermes comparisons</strong>: <a href="https://x.com/ollama/status/2033339501872116169">@ollama</a> announced <strong>Ollama as an official provider</strong> for OpenClaw; Comet launched an <a href="https://x.com/dl_weekly/status/2033529164813250938">observability plugin</a> for tracing calls/tools/costs; and there were third-party mods like <a href="https://x.com/i/status/2033636585963721182">NemoClaw</a>. The broader takeaway is less &#8220;winner takes all&#8221; and more that open agents are starting to resemble classic software ecosystems: <strong>providers, memory backends, tracing, onboarding guides, and hackathon-driven extensions</strong>.</p></li></ul><p><strong>Model and Product Releases: Perplexity Computer, Gemini Embeddings, Mistral/Minimax Signals</strong></p><ul><li><p><strong>Perplexity&#8217;s </strong><code>Computer</code><strong> rollout was the most concrete end-user agent launch</strong>: <a href="https://x.com/AravSrinivas/status/2033561054324953432">@AravSrinivas</a> and <a href="https://x.com/perplexity_ai/status/2033562296077963773">@perplexity_ai</a> announced <strong>Computer on Android</strong>, then extended it so <a href="https://x.com/perplexity_ai/status/2033598416962592813">Computer can control Comet</a> and use the <strong>local browser</strong> as a tool without connectors/MCPs, with local cookies preserved and user visibility into actions (<a href="https://x.com/AravSrinivas/status/2033598960238277059">details</a>, <a href="https://x.com/denisyarats/status/2033602822537965600">implementation note</a>). This is notable because it broadens agentic execution from cloud integrations to <strong>permissioned local browser control</strong>.</p></li><li><p><strong>Google added a foundational multimodal primitive</strong>: <a href="https://x.com/Google/status/2033631279925891078">@Google</a> launched <strong>Gemini Embedding 2</strong> in public preview via Gemini API and Vertex AI, positioned as a <strong>single embedding space</strong> across <strong>text, image, video, and audio</strong>, supporting <strong>100+ languages</strong>. This is the kind of release that may end up more consequential for production search/retrieval systems than another frontier-chat model benchmark.</p></li><li><p><strong>Other model and release signals worth noting</strong>: <a href="https://x.com/matvelloso/status/2033304726226493829">@matvelloso</a> praised <strong>gemini-3.1-flash-lite-preview</strong> on price &#215; latency &#215; intelligence; <a href="https://x.com/QuixiAI/status/2033419073401287156">@QuixiAI</a> reverse-engineered <strong>Qwen 3.5 FP8</strong> and also got <strong>Qwen3.5-397B-FP8</strong> running on <strong>8&#215; MI210</strong> at <strong>6 tok/s</strong> (<a href="https://x.com/QuixiAI/status/2033342155414982952">run note</a>); <a href="https://x.com/AiBattle_/status/2033503838284447758">@AiBattle_</a> and <a href="https://x.com/kimmonismus/status/2033531736647463151">@kimmonismus</a> pointed to <strong>MiniMax 2.7</strong> appearing imminent; <a href="https://x.com/scaling01/status/2033625927268126969">@scaling01</a> surfaced <strong>Leanstral</strong> as part of <strong>Mistral Small 4</strong>; and <a href="https://x.com/SeedFold/status/2033515503839514771">@SeedFold</a> launched <strong>SeedProteo</strong> for diffusion-based de novo all-atom protein design.</p></li></ul><p><strong>Systems, Inference, and Graphics: GTC, Speculative Decoding, and DLSS 5</strong></p><ul><li><p><strong>NVIDIA GTC&#8217;s message was unequivocal: the center of gravity is inference</strong>. Jensen&#8217;s framing of the &#8220;<strong>inference inflection point</strong>&#8221; was widely repeated (<a href="https://x.com/basetenco/status/2033622003018830198">@basetenco quote</a>), alongside ecosystem positioning posts from <a href="https://x.com/nvidia/status/2033551362210865371">@nvidia</a>, <a href="https://x.com/kimmonismus/status/2033615181415387610">@kimmonismus</a>, and others. Several infra-adjacent updates landed around the conference: <a href="https://x.com/vllm_project/status/2033560408980914550">vLLM&#8217;s OCI production-stack guide</a>, and a strong systems contribution in <a href="https://x.com/i/status/2033634407634927624">P-EAGLE</a>, which removes the sequential bottleneck in speculative decoding by generating <strong>K draft tokens in one pass</strong>, with reported <strong>up to 1.69x speedup over EAGLE-3</strong> on <strong>B200</strong> and integration in <strong>vLLM v0.16.0</strong>.</p></li><li><p><strong>On the graphics side, DLSS 5 dominated reactions</strong>: NVIDIA positioned it as the biggest graphics leap since real-time ray tracing, with strong reactions from <a href="https://x.com/ctnzr/status/2033613807105544666">@ctnzr</a>, <a href="https://x.com/GeForce_JacobF/status/2033615891045454112">@GeForce_JacobF</a>, and <a href="https://x.com/Grummz/status/2033641075806769382">Digital Foundry-linked discussion</a>. The key technical claim is <strong>fully generative neural rendering / relighting</strong> with original geometry/assets preserved, pushing visual fidelity materially forward in real time. Not directly an LLM story, but very much part of the broader trend toward <strong>neuralized runtime systems</strong>.</p></li></ul><p><strong>AI in Science, Healthcare, and Security</strong></p><ul><li><p><strong>The most substantive science/health post was Microsoft&#8217;s GigaTIME thread</strong>: <a href="https://x.com/AnishA_Moonka/status/2033344818475360562">@AnishA_Moonka</a> summarized work from Microsoft, Providence, and UW where a model predicts multiplex immunofluorescence-style spatial proteomics from a <strong>$5 pathology slide</strong>, trained on <strong>40M cells</strong>, applied to <strong>14,256 patients across 51 hospitals</strong>, producing <strong>~300k virtual protein maps</strong> and surfacing <strong>1,234 validated associations</strong>. The thread claims the model is open-source and argues this could democratize cancer immune profiling at scale.</p></li><li><p><strong>Other technically meaningful science/safety items</strong>: <a href="https://x.com/GoogleResearch/status/2033599853297865181">@GoogleResearch</a> described a study evaluating LLMs on <strong>high-temperature superconductivity reasoning</strong>, claiming curated closed-system models outperform web-heavy setups for scientific work; <a href="https://x.com/AISecurityInst/status/2033562026534953156">@AISecurityInst</a> evaluated <strong>seven frontier models</strong> on cyber ranges for autonomous attack capability; and <a href="https://x.com/askalphaxiv/status/2033345556949397718">@askalphaxiv</a> highlighted LeCun&#8217;s <strong>Temporal Straightening for Latent Planning</strong>, where straightening latent trajectories improves planning stability by making Euclidean distance better track reachable progress.</p></li></ul><p><strong>Top tweets (by engagement)</strong></p><ul><li><p><strong>Healthcare foundation-model impact</strong>: <a href="https://x.com/AnishA_Moonka/status/2033344818475360562">GigaTIME pathology &#8594; spatial proteomics thread</a> was the highest-signal high-engagement technical post.</p></li><li><p><strong>Architecture innovation</strong>: <a href="https://x.com/Kimi_Moonshot/status/2033378587878072424">Moonshot&#8217;s Attention Residuals release</a> drew exceptional engagement and broad expert discussion.</p></li><li><p><strong>Coding agent product momentum</strong>: <a href="https://x.com/sama/status/2033599375256207820">@sama on Codex growth</a> and <a href="https://x.com/gdb/status/2033605419726483963">@gdb on GPT-5.4 API ramp</a> were the clearest demand-side signals.</p></li><li><p><strong>Open agent ecosystem</strong>: <a href="https://x.com/ollama/status/2033339501872116169">Ollama becoming an OpenClaw provider</a> was one of the largest open-agent infra announcements by engagement.</p></li><li><p><strong>Agent knowledge infrastructure</strong>: <a href="https://x.com/AndrewYNg/status/2033577583200354812">@AndrewYNg on Context Hub</a> stood out as a concrete proposal for agent-to-agent documentation sharing.</p></li></ul><div><hr></div><h1><strong>AI Reddit Recap</strong></h1><h2><strong>/r/LocalLlama + /r/localLLM Recap</strong></h2><p></p>
      <p>
          <a href="https://www.latent.space/p/ainews-nvidia-gtc-jensen-goes-hard">
              Read more
          </a>
      </p>
   ]]></content:encoded></item></channel></rss>