Latent Space
Latent Space: The AI Engineer Podcast — Practitioners talking LLMs, CodeGen, Agents, Multimodality, AI UX, GPU Infra and all things Software 3.0
[AI Breakdown] Summer AI Technical Roundup: a Latent Space x AI Breakdown crossover pod!


The hosts of AI Breakdown and Latent Space get together to discuss GPT4.5, Llama 2, AI Agents, AI Companions, and the Rise of the AI Engineer!

Our 3rd podcast feed swap with other AI pod friends! Check out Cognitive Revolution and Practical AI as well.


NLW is the best daily AI YouTuber/podcaster with the AI Breakdown. His summaries and content curation are spot on, and he always finds the interesting angle that will keep you thinking. Subscribe to the AI Breakdown wherever fine podcasts are sold! https://pod.link/1680633614


Timestamps

courtesy of summarize.tech

The hosts discuss the launch of Code Interpreter as a separate model from OpenAI and speculate that it represents the release of GPT 4.5. People have found Code Interpreter to be better than expected, even for tasks unrelated to coding. They discuss the significance of this release, as well as the challenges of evaluating AI models, the cultural mismatch between researchers and users, and the increasing value of data in the AI industry. They also touch on the impact of open-source tools, the potential of AI companions, the advantages of Anthropic's Claude compared to other platforms, advancements in image recognition and multimodality, and predictions for the future of AI.

  • 00:00:00 In this section, the hosts discuss the launch of Code Interpreter from OpenAI and its significance in the development of the AI field. They explain that Code Interpreter, initially introduced as a plugin, is now considered a separate model with its own dropdown menu. They note that people have found Code Interpreter to be better than expected, even for tasks that are not related to coding. This leads them to speculate that Code Interpreter actually represents the release of GPT 4.5, as there has been no official announcement or blog post about it. They also mention that AI safety concerns and the regulatory environment may be impacting how OpenAI names and labels its models. Overall, they believe that Code Interpreter's release marks a significant shift in the AI field and hints at the possibility of future advanced models like GPT 5.

  • 00:05:00 In this section, the speaker discusses the improvements in GPT 4.5 and how it enhances the experience for non-coding queries and inputs. They explain that the code interpreter feature allows for a wider range of use cases that were not possible with previous models like GPT 3.5. Additionally, they highlight the value of the code interpreter in assisting individuals with no coding experience to solve basic coding problems. This feature is likened to having a junior developer or intern analyst that aids in conducting tests and simplifies coding tasks. The speaker emphasizes that GPT 4.5 enables users to be more productive and efficient, especially when dealing with code-related challenges. They also discuss the future direction of AGI, where more time will be dedicated to inference rather than training, as this approach has shown significant improvements in terms of problem-solving.

  • 00:10:00 In this section, the speaker discusses how advanced AI models like GPT-4.5 are not just larger versions of previous models but rather employ fundamentally different techniques. They compare the evolution of AI models to the evolutionary timeline of humans, where the invention of tools opened up a whole new set of possibilities. They touch on the difficulty of evaluating AI models, particularly in more subjective tasks, and highlight how perceptions of model performance can be influenced by factors like formatting preferences. Additionally, the speaker mentions the challenges of reinforcement learning and the uncertainty around what the model is prioritizing in its suggestions. They conclude that OpenAI, as a research lab, is grappling with the complexities of updating models and ensuring reliability for users.

  • 00:15:00 In this section, the speaker discusses the cultural mismatch between OpenAI researchers and users of OpenAI's products, highlighting the conflicting statements made about model updates. They suggest that OpenAI needs to establish a policy that everyone can accept. The speaker also emphasizes the challenges of communication and the difficulty of serving different stakeholders. They mention the impact of small disruptions on workflows and the lack of immediate feedback within OpenAI's system. Additionally, the speaker briefly discusses the significance of OpenAI's custom instructions feature, stating that it allows for more personalization but is not fundamentally different from what other chat companies already offer. The discussion then transitions to Facebook's release of Llama 2, which holds significance both technically and for users, although further details on its significance are not provided in this excerpt.

  • 00:20:00 In this section, the introduction of Llama 2 is discussed. Llama 2 is the first fully commercially usable GPT 3.5 equivalent model, which is a significant development because it allows users to run it on their own infrastructure and fine-tune it according to their needs. Although it is not fully open source, it presents new opportunities for various industries such as government, healthcare, and finance. The discussion also touches upon the open source aspect of Llama 2, with the recognition that it has still contributed significantly to the community, as evidenced by the three million dollars' worth of compute and the estimated 15 to 20 million dollars' worth of additional fine-tuning capabilities it brings. The conversation acknowledges the value of open source models and data, while also recognizing the challenges and complexities in striking a balance between openness and restrictions.

  • 00:25:00 In this section, the discussion centers around the commoditization of compute and the increasing value of data in the AI industry. While GPU compute is currently in high demand, it is observed that data is what holds the real value in AI. The conversation touches on the history of open source models and how the release of data for models like GPT-J and GPT-Neo signals a shift towards prioritizing data over model weights. The transcript also mentions the caution around data usage, citing examples of copyright concerns with datasets like Books3. The debate arises on whether ML engineers should proactively use open data or wait for permission, with some arguing for proactive usage to avoid holding back progress. The conversation also discusses the importance of terminology and protecting the definition of open source, while recognizing that the functional implications of open data are what matter most.

  • 00:30:00 In this section, the conversation revolves around the impact of open-source tools on companies and how it has influenced their approach to AI development. It is noted that companies can no longer just offer a nice user interface (UI) wrapper around an OpenAI model, as customers are demanding more. The competition has shifted towards other aspects of productionizing AI applications, which is seen as a positive development. The speaker predicts that competitive pressure will lead OpenAI to open-source older models like GPT-3, and expects interesting advancements to emerge, such as running models locally for unlimited use. Additionally, the conversation touches on the potential of commercially available models, the application of new techniques, and the creativity unlocked by open source. The speaker also mentions the AI girlfriend economy, an area that is often overlooked but has millions of users and significant financial success.

  • 00:35:00 In this section, the speaker discusses their prediction about the long-term impact of AI on interpersonal relationships, suggesting that AI companions, such as AI girlfriends or boyfriends, could help address the loneliness crisis and reduce incidents of violence. They also mention the idea of using AI models to improve social interactions and communication skills. However, they highlight that this idea of AI companions may face resistance from older generations who may struggle to accept their legitimacy. The speaker also mentions an example of using AI models to create a mental wellness product in the form of a private journal. Overall, the speaker believes that while AI companions may have potential, they may not completely replace human relationships and interactions.

  • 00:40:00 In this section, the speaker discusses their views on Anthropic and the advantages it offers compared to other platforms. They mention that while Anthropic used to position itself as the safer alternative to OpenAI, that message was not appealing to many engineers. However, with the introduction of the 100K context window and the ability to upload multiple files, Anthropic has become state-of-the-art in certain dimensions, such as latency and reliability in code synthesis. The speaker also notes that some businesses are choosing to build with the Anthropic API over OpenAI due to these advantages. They believe that Anthropic is finally finding its foothold after being overshadowed by OpenAI for a long time. Additionally, the speaker discusses their experience at the Anthropic hackathon, where they saw developer excitement for the platform. They believe that Anthropic is on its way up and that it paves the way for a multi-model future. However, they also acknowledge that the odds are stacked against Anthropic and that it needs more marketing support and community buy-in. Lastly, the speaker mentions the importance of running chats side by side against different models like ChatGPT and Claude, and highlights that in their experience, Claude wins about 30% of the time, making it a valuable addition to one's toolkit.

  • 00:45:00 In this section, the discussion revolves around the advancements in image recognition and multimodality in language models like Google's Bard. While there was some excitement about these developments, it was noted that relying on model updates alone may not be sufficient, and there is a need to focus on product-level improvements, such as integrating language models into services like Google Maps. However, concerns were raised about the reliability of updates, as evidenced by a regression in Bard's code interpreter functionality. Additionally, other trends in the developer community, like the emergence of AutoGPT-style projects and the ongoing quest for building useful agents, were highlighted. Finally, there was mention of the growing interest in evaluation-focused companies like LangChain, which launched LangSmith to monitor the success of prompts and agents.

  • 00:50:00 In this section, the speaker discusses the focus on model evaluation and observability, as well as the importance of combining deep industry expertise with AI technology to make improvements. They also touch on the need for creating an information hierarchy between documents and scoring them in specific verticals like finance. The speaker mentions advancements in text-to-image capabilities and expresses interest in character AI and AI-native social media. They mention the possibility of AI personas from Meta and the development of agent clouds optimized for AI agents. They acknowledge that these advancements may raise concerns among AI safety proponents. Overall, there seems to be excitement and exploration around these emerging technologies.

  • 00:55:00 In this section, the speakers discuss their predictions and what they are closely watching in the coming months. Alessio believes that there will be more public talk about open source models being used in production, as currently many perceive them as just toys. He expects companies to start deploying these models and showcasing their usage. Sean predicts the rise of AI engineers as a profession, with people transitioning from informal groups to certified professionals working on AI teams within companies. He mentions that the first AI engineer within Meta has already been announced. Overall, they anticipate a relatively quiet August followed by a resurgence of activity in September, with events like Facebook Connect and continued hackathons driving innovation.

Transcript

All right, what is going on? How's it going, boys? Great to have you here.

Hey, good. How are y'all? I think I'm excited for this.

Yeah, I'm super excited. We were just talking a little bit before this about how the AI audience right now is really interesting. On the one hand you have the folks who are actually in it, who are building in it, or dabbling because they're in some other field but are fascinated by it and spending their nights and weekends building. And on the other hand you have the folks we used to call non-technical, but who are actively paying attention, in a way that I think is very different, to the technical evolutions of this field, because they have a sense that it's so fast moving that the place they have to be paying attention is what's changing from the standpoint of developers and builders.

So what we want to do today is reflect on the month of July, which had a couple of really keystone events in the context of what it means for the technical development of the AI field: where it leads, how people's frameworks are changing, how people sense that things have changed over the last month. And the place to start, although we could choose a lot of different examples, is with an idea that you guys have spent a lot of time sharing on Twitter and elsewhere: that the launch of Code Interpreter from OpenAI, which is nominally a ChatGPT plugin, actually represents functionally something closer to the release of GPT-4.5. So maybe we can start by having you guys explain that idea, and then we can take it from there.

Yeah, I'll maybe start with this one. Code Interpreter was first announced as a plugin, at least in the plugins announcement from March, but from the start it was already presented as a separate model, because when you look in the UI, you don't go into the ChatGPT plugin UI and pick it from a menu of plugins; it is actually a separate model in the dropdown menu, and it is so today. It adds an additional sandbox for running and testing code and iterating on it, and you can actually upload files to it and do operations on those files. People are having a lot of fun uploading different binaries, hacking around to see what the container is and trying to break out of the container.

But what really convinced me that it might be a separate model was when people tried it on tasks that were not code and found it better. So Code Interpreter is poorly named, not just because it sounds like a weird developer tool, but because it's kind of hiding some progress that OpenAI has made that it has completely not been public about. There's no blog post about it; Code Interpreter itself was launched in a support forum post, low-key, not announced by any of the major public channels OpenAI has. So the leading theory, and I've dubbed it GPT-4.5: I think if they were ever to release an API for it, they might retroactively rename it, the same way that GPT-3.5 was actually a retroactive name. And since I published that post and tweeted that stuff, the leading theory for why they did not announce it is that they would piss off all the AI safety people.
Yeah, and that corresponds to a thing that's happened not just this month but over the last three months: a total Overton window shift in the AI safety conversation. Starting around April or May, when Geoffrey Hinton left Google, there has been a big shift in that conversation. Regulators are way more active now than they were even a couple of months ago, so I do think there are probably constraints on how OpenAI, or any other company in the space, feels it can label or name things. Even just as we're recording this today, we saw a trademark filing for GPT-5, which is most likely just dotting the i's and crossing the t's as a company, because they're eventually going to have a GPT-5. I would be very shocked at this point if any models clearly ahead of GPT-4 come out before there's some pretty clear guidance from the US government around what it looks like to release models more advanced than GPT-4. So it's an interesting moment.

I guess let's talk about what it functionally means for it to be that much better, better enough that we would call it GPT-4.5. Maybe what's useful is breaking that apart into how it improves the experience for non-coding queries and inputs, and then separately how it makes ChatGPT different as a coding support tool.

There's a lot to think about. One: models are usually benchmarked against certain tasks, and that works for development, but then there's the reality of the model. If you ask a mathematical question of something like GPT-3.5, you don't really get good responses, because of how digits are tokenized in the model; it's hard for these models to actually reason about numbers. But once you put a code interpreter in it, all of a sudden it's not a math-in-the-tokenizer, in-the-latent-space question; it's: can you write code that answers the math question? That enables a lot of use cases that are just not possible with the Transformer architecture of the underlying model alone.

The other thing is that when it first came out, people said, oh, this is great for developers; I know what to do, I just ask it. But there's this whole other side of the spectrum. I'm a software engineer by background, and you know how sometimes people with no coding experience come to you and say, hey, I know this is really hard, but could you help me do this, and it's actually really easy? And sometimes they think it's easy and it's hard. Code Interpreter enables that whole space of problems to be solved independently by people. Sean has talked before about some of these models being like a junior developer that you have on staff so you can be more productive; this is the same for non-technical people. It's like having a junior intern analyst who helps you run these tests, which aren't even software engineering tasks; code is just the language used to express them. It's pretty basic stuff sometimes, but you just cannot do it without code. So for me, the GPT-4.5 thing is less about whether this is a new model built after GPT-4 and more about capability: with GPT-4 versus 4.5, you're probably going to get more stuff done with 4.5, just because of the Code Interpreter piece. For me, that's enough to justify the code name.
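To make that pattern concrete, here is a minimal sketch of what's being described: instead of reasoning about digits in the tokenizer's latent space, the model emits Python and a sandbox executes it. The `ask_model` function is a hypothetical stand-in for the actual LLM call, and the `exec`-based sandbox is a crude placeholder for the real containerized environment.

```python
# Sketch of the "write code, don't reason about digits" pattern (hypothetical stand-ins).
def ask_model(prompt: str) -> str:
    # Stand-in for a real LLM call; returns the kind of snippet
    # Code Interpreter would generate for a math question.
    return "result = sum(i * i for i in range(1, 101))"

def answer_with_code(question: str):
    code = ask_model(f"Write Python that computes: {question}. Store the answer in `result`.")
    namespace = {}
    # Crude stand-in for the real sandbox, which is an isolated container.
    exec(code, {"__builtins__": {"sum": sum, "range": range}}, namespace)
    return namespace["result"]

print(answer_with_code("the sum of the squares of 1..100"))  # 338350
```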
And as you said, Sam Altman said they're not training the next model. So if they said this is GPT-4.5, he would have to go back to Washington, DC, be in front of Congress, and talk about it again.

Well, one thing I always want to impress upon people is that we're not just talking about it writing code for you. If you step back from the code and think about what it's doing, it's gaining the ability to spend more inference time on harder problems, and that matches what we do when we're faced with difficult problems. Right now, any LLM before Code Interpreter, if you give it a question like "what is one plus two," will take the same amount of time to respond as something like "prove the Black-Scholes theorem," and that should not be the case; we should take more time to think when considering harder problems.

I think that's the next frontier, and why I called it 4.5: not just because it has had extra training, and not just because it has the coding environment, but because there's a general philosophy and direction that I see in OpenAI and the people it hires. In my blog post I called out Noam, who I've only loosely met, so it's kind of awkward to talk about, I guess a friend of a friend, but it's true that I have met multiple people at OpenAI who have specifically been hired to work on inference-time optimizations as opposed to training time. I think that is the future for the GPT-5s of the world. The reason I think about this line of work is that this is the direction of AGI: we're going to spend more time on inference. It makes a whole lot of sense when you look at Noam's background working on Pluribus and then Cicero, all of which consistently showed the same result: every extra second or millisecond spent on inference is worth something like 10,000x that spent in training, especially when you can vary it based on problem difficulty. This ties back to the origins of OpenAI, which started out playing games; they used to play Dota, all sorts of games, in those reinforcement learning environments. The typical way you program these game-playing AIs is: when there are lots of branches, you take more time to search and figure out the optimal strategy, and when there aren't many branches to go down, you take the shortcut and just give the answer. Varying the inference time is the interesting part here.
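A minimal sketch of what "spend more inference compute on harder problems" can look like in practice, using best-of-n sampling with majority voting (self-consistency). The `sample_answer` function is a hypothetical stand-in for one stochastic model call; the budget logic is the point.

```python
import random
from collections import Counter

def sample_answer(question: str) -> int:
    # Hypothetical stand-in for one stochastic LLM sample; here, a noisy oracle
    # that returns the right answer 60% of the time.
    return 42 if random.random() < 0.6 else random.choice([41, 43])

def answer(question: str, hard: bool) -> int:
    n = 25 if hard else 1  # vary the inference budget with problem difficulty
    votes = Counter(sample_answer(question) for _ in range(n))
    return votes.most_common(1)[0][0]  # majority vote beats any single sample

print(answer("easy question", hard=False))
print(answer("hard question", hard=True))  # almost always 42
```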
One of the things that aligns with what you just described is that there's a perception that more advanced models are just going to be bigger datasets with more of the same type of training, versus fundamentally different techniques or areas of emphasis that go beyond how big the dataset is. One of the things that strikes me observing how Code Interpreter works is that it almost feels like a break in the evolutionary timeline of GPT, because it's GPT with tools. As you just described: it doesn't know about math, and it doesn't have to know about math if it can write code to figure out the math. What it needs is the tool of being able to write code, and that lets it figure things out. That's akin to humans evolving for millennia without tools, and then all of a sudden someone picks up a rock, and a whole set of things we couldn't do before, based on our own evolutionary pathway, is now open to us because of the tool. I don't think it's a perfect analogy, but it feels closer to that than "it's a little bit better than 3.5 so we called it 4; it's a little bit better than 4 so we called it 4.5" as a mental framework.

I guess the other big topic that relates to this, and has been the subject of conversation not just this month but for a couple of months, is the question of whether GPT-4 has gotten worse, whether it's been nerfed. There was some research that came out around that, with mixed feelings about it. What did you make of that whole conversation?

I think evals are one of the hardest things in the space. I've had this discussion with founders before. We always bring up Copilot as one example of a cutting-edge eval, where they not only look at how many of their suggestions you accept, but also how much of the code is still there one minute after, three minutes after, five minutes after. That's easy to do for code, but for more open-ended generative tasks it's hard to say what's good and what isn't. If I ask it to write the show notes for our podcast, which it has never been able to do, how do you eval that? It's really hard. So even if you read through the paper that Lingjiao, Matei, and James wrote, a lot of things are, yeah, they're worse, but how do you really say that? Sometimes it's not cut and dried. Sometimes it's, oh, the formatting changed and I don't like this formatting as much, but if the formatting had always been the same to begin with, would you ever have complained?

And with Llama 2 we've seen that RLHF can sometimes go wrong by being too tight. For example, somebody asked Llama 2 how to kill a process in Linux, and Llama 2 said, it's wrong to kill, I cannot help you with that. There's been more chat online about how, when you do reinforcement learning, you don't know what reward, what part of the suggestion, the model is anchoring on. Sometimes the model might be learning that you like more verbose answers, even though shorter ones are just as right. So there's a lot to figure out. Some examples in the paper are clearly worse; some are not as crazy. But OpenAI is under a lot of pressure on the safety and instruction side, and the best thing to do would be to version-lock the model and keep running evals against each other: an eval today against an eval from a year ago. There might be 20 versions in between where you don't even know how the model changed. So, evals are hard. That's the tl;dr.
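The version-locking idea is mechanically simple; the hard part is writing checks for open-ended tasks. A minimal sketch follows, where `call_api` is a hypothetical wrapper around whatever chat endpoint you use, and the pinned snapshot names follow OpenAI's dated-model convention.

```python
from typing import Callable

# Tiny fixed eval set: (prompt, substring the answer must contain).
EVAL_SET = [
    ("How do I kill a process in Linux?", "kill"),
    ("What is 17 * 23?", "391"),
]

def run_eval(model: Callable[[str], str]) -> float:
    hits = sum(expected in model(prompt) for prompt, expected in EVAL_SET)
    return hits / len(EVAL_SET)

# Version-locked comparison: the same eval set against two frozen snapshots.
# `call_api` is hypothetical; gpt-4-0314 / gpt-4-0613 are OpenAI's dated snapshots.
# score_march = run_eval(lambda p: call_api(p, model="gpt-4-0314"))
# score_june  = run_eval(lambda p: call_api(p, model="gpt-4-0613"))
```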
I think what we're basically seeing is OpenAI coming to terms with its origin as a research lab, where updating models is a relatively routine operation, versus being a product or infrastructure company, which has to offer some kind of reliability guarantee to its users. OpenAI researchers internally are used to one thing, and the people who have come to depend on OpenAI as a product are used to a different thing, and there's a bit of cultural mismatch. Even within OpenAI's public statements, we simultaneously have Logan from OpenAI saying the models are frozen, and their VP of Product saying they update models all the time; those cannot both be true. So I think they're trying to figure it out. People are rightly afraid of basing themselves on top of a black box, and that's why, and we'll talk about Llama 2 in a bit, maybe they want to own the black box, so it doesn't change out from under them. I think this is fine, this is normal, and it's not that hard for OpenAI to figure out a policy everybody accepts. It won't take them too long, and it's not a technical challenge; it's an organizational and business challenge.

Yeah, the communications challenge you're referencing is also extreme. They've gone from a quirky little lab with big aspirations to the epicenter of a national, even global, conversation about existential challenges, and the way you talk in those two circumstances is very different. You're serving a lot of different masters, hopefully always guided by your own priorities, and that's inherently difficult. With so many eyes on it, the thing that makes it different is that it's not just like Facebook shipping a new feature that annoys us, the way people were so angry when they added the feed and then we all got used to it. This is something people have redesigned workflows around, so small disruptions that change those workflows can be hugely impactful.

It's an interesting comparison with the Facebook feed, because in the era of ad tech the feedback was immediate. You changed an algorithm, and if the click-through rates, or whatever metric you were optimizing for in your social network, started to decline, your change would be reverted tomorrow. Whereas here, as we just discussed, it's hard to measure and you don't get much feedback. There's the thumbs-up and thumbs-down action you can take in ChatGPT, which I've basically never used, and most people don't give feedback at all. So OpenAI has very little feedback to go on about what is actually improving and what isn't. And I think this is just normal; it's kind of what we want in a non-ad-tech universe. We've moved to the subscription economy that everyone is paying for, and the trade-off is that we give up some amount of product feedback.
Actually, that's super interesting. One other thing before we leave the OpenAI ecosystem: the other big feature announcement this month was custom instructions. How significant do you think that was as an update?

Minor. It is significant in the sense that you get to personalize ChatGPT much more than you previously could: it will actually remember facts about you, and it will try to obey system prompts about you. But you've had this in the playground forever, because you could enter a system prompt there and just chat with the model. This is a rare instance of the ChatGPT team lagging behind the general capabilities of the OpenAI platform; they just shipped something that could have been there a long time ago. It was already present in Perplexity AI, and if you think about it, basically every other open source or third-party chat company had it before ChatGPT: what I'm talking about is Character AI and the various AI waifu, AI girlfriend-type companies, each of which has characters that you can effectively sub in as custom instructions. So ChatGPT is basically playing catch-up here. It's good for what is obviously the largest user base in the world of chat AI, but it's not something we haven't fundamentally seen before.
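For reference, a minimal sketch of the point above: custom instructions behave like a persistent system prompt, which API users could already set per request. This assumes the v1 OpenAI Python SDK; the instruction text is illustrative.

```python
from openai import OpenAI  # assumes the v1 OpenAI Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CUSTOM_INSTRUCTIONS = (
    "The user is a podcast host covering AI. "
    "Prefer concise answers with concrete examples."
)

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # A persistent user profile, which is essentially what custom instructions are.
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        {"role": "user", "content": "Draft three episode titles about Llama 2."},
    ],
)
print(resp.choices[0].message.content)
```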
That actually perfectly sets up a segue to the other major thing that happened this month, from both a technical perspective and, long term, a user perspective: Facebook releasing Llama 2. This was anticipated for a while, but where to even start with its significance? If you're talking to someone who isn't paying attention to the space, what does the introduction of Llama 2 mean relative to what was available before it?

It is the first fully commercially usable, not fully open source, we'll talk about that, GPT-3.5-equivalent model, and that's a big deal. One, you can run it on your own infrastructure, on your own cloud, so all the government, healthcare, and financial use cases open up. Two, you can fine-tune it, because you have full control over all the weights and internals, as much as you want. So it's a big deal from that point of view; not as big in terms of pushing the state of the art, but still extremely significant.

On the open source part: I wrote this post about why Llama 2 is not open source and why it doesn't matter. I was telling Sean, I'm writing this thing, and I thought, whatever, this license stuff is so tired, I'll just post it on Hacker News in the morning. It was on the front page for the whole day and got something like 228 comments. I was recording the Flash Attention podcast episode that morning, so I got out of the studio to 230 comments of people very upset, one way or the other, about the license. My point, and I started an open source company myself in the past and have contributed to a bunch of projects, is that yes, Llama 2 is not open source by the Open Source Initiative definition, but we just don't have a better definition for models. It's mostly open: you can use it for a lot of stuff, and it's not merely source-available, because for a lot of stuff you can use it commercially. So how do we find better labels? My point was: let's figure out what the better label is. Even though it's not fully open source, it's still roughly three million dollars of flops donated to the community. Who else in the open source community is stepping up and putting three million dollars of H100s into training a model for us? I think, net-net, it's a very positive thing for the community, and you've seen how much has been built on top of it: the quantized versions with ggml, the context window expansion, so much being done by the community. It was great for everyone.

And by the way, three million is the lower bound; that's just compute. There's a reasonable estimate from Scale AI that the extra fine-tuning on top of it was worth about 15 to 20 million dollars. So that's a lot of money effectively donated to the community. Although they didn't release the data, they didn't tell us any of the datasets; they just say, trust us, we didn't train on any of your Facebook information. It's the first instance where the models are more open than the data, and I think that's a reflection of where the relative shift in value might happen as a result of Llama 2. You can take that in multiple directions, but I want to point it out.

Yeah, to give examples: we first had fully open source models like RedPajama, where the data, the training code, and the model weights are all open. Then Stability did something in between with StableLM: the weights are open, but we're not giving you the data, so you can download the model but you cannot retrain it yourself from scratch. And then Llama 2 is: we won't give you the data, we'll give you the models, but you can only use them for some things. So there's more and more restriction. But as Sean is saying, and we've talked about this before, everybody wants to train their model; nobody wants to open-source the best dataset for X. Maybe that's what more open source people should focus on: how to build better, domain-specific datasets, instead of giving Jensen Huang another five million dollars of GPUs. But the model gets more headlines, for now, so that's what everybody does.

And I want to point out it's a reversal of the open source culture. There used to be a sequence of openness, and you could pick and choose, from open code all the way down to open data, versus open weights, with combinations in between. I wrote about this a long time ago; I don't remember the exact five levels. It's very strange, and I think it's really a discussion of where the money is going. It shows that compute is becoming commoditized. Yes, there's a GPU crunch right now, A100s are sold out everywhere across the board, people were commenting all about it this month, and there are people hoarding compute like nobody's business, but as far as the value in AI is concerned, compute is relatively commoditized. It's actually data that people are safeguarding jealously.
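To illustrate the "run it on your own infrastructure and fine-tune it" point above, here is a minimal sketch of loading the weights and attaching LoRA adapters with Hugging Face `transformers` and `peft`. It assumes you have accepted Meta's license for the gated `meta-llama/Llama-2-7b-hf` repo; the LoRA hyperparameters are illustrative, not a recipe.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # gated repo; requires accepting Meta's license
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

lora = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections, the usual LoRA targets
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the 7B parameters train
```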
Going all the way back in the history of open source models: EleutherAI, when they trained GPT-J and GPT-Neo as the first reproductions of GPT-3, released the data first. Stability, when they trained Stable Diffusion, released LAION-5B first. That reflects the normal sequence of events: you release the data, then the model weights. Now we're skipping the data part, and I think that's a fair way to read it. In one of our conversations, I think it was Mike Conover, he compared our current AI era to the 2000s era in search engines: all of the publishable information retrieval research dried up because all those PhDs went to work at Google, and Google sat on it. This is now a fight for IP, and I think that's just rational behavior in a capitalist AI economy.

Do you think, so one of the things we talked about before, with Code Interpreter as GPT-4.5 and why they might not call it that, was the emergence of this regulatory, if not pressure, certainly intrigue. Do you think there's an aspect of that in why people are so jealously safeguarding the data? Is there more risk in being open about where the data actually comes from?

The Books3 example is probably good here. MPT trained their model on a dataset called Books3, which is 190,000 books, something like that. Then people on Twitter pointed out this stuff is still under copyright; it's not in the public domain where you can just take it and train on it, and the license for some of these books is kind of blurry as to what's fair use. So there was this whole thing on Twitter about it, and then Mosaic first changed the license and then changed it back. I think Shawn Presser, from the EleutherAI community, was tweeting about this yesterday, basically saying: look, as ML engineers, maybe it's better not to try to be the ethics police, and instead say, the data is open, let's train on it, and maybe people will later say, please don't use that data, and we can figure it out then. Proactively not using all of this stuff can hold progress back. He's coming from the side of EleutherAI, which does this work in public; for them it's, hey, if you don't want us to train on this, fine, but we shouldn't not do it by default. Versus if you're Meta: they said they trained Llama on stuff available on the internet; they didn't say they trained Llama on stuff that is licensed to train on. It's a small but meaningful difference.

The other piece of this that I wanted to circle back to, because we breezed over it but I think it's really significant: we did get a little lost in the conversation around open source definitions, and I don't think that's unimportant. People are rightly protective when a set of terminology has a particular meaning and a massive global corporation tries to nudge it towards something that potentially serves their ends rather than actually meeting that definition.
But I also take your point that, functionally, relative to the rest of the space, it probably doesn't matter that much, because what people mean is more about what they can functionally do with it, and what it means for the space relative to more closed models. One of the big observations has been that the availability of all of this, from when Llama 1 was fully leaked, has pretty dramatically changed, one, the evolution of the space over the past few months, and two, how the big companies and incumbents have thought about it from a business standpoint. Another big conversation this month, going back to the venture capital side of your life, has been the extent to which big companies do not want to sign on with some startup offering them AI-whatever, because their technical teams can just spin up their own version of it, given the availability of these open source tools. So I'm interested in bringing the "open source," in air quotes, side of the conversation into how companies are thinking about their development in the AI space.

I think it's just raising the bar on what you're supposed to offer. Six to nine months ago it was enough to offer a nice UI wrapper around an OpenAI model; today it isn't anymore. That's really the main difference: what are you doing beyond wrapping the model? People need more and more before they buy versus build.

I think it actually moves the area of competition towards other parts of productionizing AI applications, which is probably just a positive. I feel like the competitive pressure that Meta is putting on OpenAI is a good thing. One of the fun predictions I made was that in the next six months, OpenAI will open-source GPT-3, which is not open source today. It's so far behind the state of the art now that it doesn't matter as far as safety is concerned, and it would keep OpenAI in the open source AI game, which would be nice to have.

Of the things people have been building, you called out a couple, like context window expansion, but have any really stood out to you as super interesting, unexpected, or particularly high-potential?

One of our recent podcast guests, the MLC team, was wrapping Llama 2 to run on MacBook GPUs. I think that's the most interesting gap: how do we go from pay-per-token to unlimited local use? That's one of the main things that keeps even people like me from automating a lot of stuff. I don't want to constantly pay OpenAI to do menial things, but if I could run it locally, even five times slower, I would. So that's a super exciting space.

Beyond that, there hasn't been that much; it's only a few weeks old, so there hasn't been much emergence coming from it yet. I would definitely keep a lookout for a repeat of what happened post-Llama 1, which, keep in mind, was only in February: the same thing that happened with Vicuna, Alpaca, and all the other instruction-tuned, research-type models, but more of them, because now they're also commercially usable. We haven't seen them come out yet, but it's almost guaranteed that they will.
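To make the pay-per-token-versus-local point concrete, here is a minimal sketch using `llama-cpp-python`, the Python bindings for llama.cpp (the successor ecosystem to the ggml builds mentioned above). The quantized checkpoint filename is illustrative; you download it once, and inference is free thereafter.

```python
from llama_cpp import Llama

# Illustrative filename for a 4-bit quantized Llama 2 chat checkpoint.
llm = Llama(model_path="./llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048, n_gpu_layers=-1)

out = llm(
    "Q: Why does local inference matter for automating menial tasks?\nA:",
    max_tokens=128,
    stop=["Q:"],
)
print(out["choices"][0]["text"])
```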
You can also apply all the new techniques that have emerged since then, like Jsonformer, because now you have access to all the model weights for Llama. I think that will create another subset of models that was only theoretically applicable to research-only models before, and these can now be offered commercially as well. So nothing really eye-popping yet, I would say, but it's been five minutes; it's been a very short amount of time. And the thing about open source is that the creativity it unlocks is very hard to predict. Actually, I think a lot of it happens in, let's just say, the less official part of the economy. I've been focusing a lot recently on the AI girlfriend economy, which is huge. I feel like it's not polite conversation, but the amount of usage the AI girlfriend area has is real: there are millions of users, they're making a lot of money, and it's virtually not talked about in polite SF circles.

It feels like one of those areas that's going to be an absolute lightning rod when it comes to the societal debates around this technology. You can feel that people are going to hone in on it as Example A of a change they don't like. That's my guess, at least.

So I have a really crazy longer-term prediction, maybe on the order of 30 to 50 years: an AI girlfriend wins the Nobel Peace Prize. What if it solves the loneliness crisis? What if it cuts the rate of terror attacks and school shootings by some meaningful amount? That would be huge.

My wife and I have joked about how every generation thinks they're so far ahead, and that there's nothing their kids could throw at them that they fundamentally won't get, and without fail, every generation has something that seems totally normal to them that their parents' generation, writ large, has such a hard time with. We figure it's probably going to be AI girlfriends and boyfriends. We'll be saying, but they're not real, and they'll say, but it's real to me, in debates with our future thirteen-year-olds. Our kids are only four and two now, so that feels like about the right timeline.

I've heard, of all people, Matthew McConaughey, on the Lex Fridman podcast, shout out Matt, talking about this. They were noodling on this idea of computers helping us be better: the way we had computers learn to play chess, and then we all got better at chess by using the computers to learn and experiment. They were talking about something similar for interpersonal relationships. It doesn't have to be that you shut yourself off from humans; it's using some of these models to actually learn how to better interact with people. If you're shy and an introvert, you can try jokes and conversation points with a model, and it teaches you: hey, that's not okay to say, or, you should maybe be more open.
I don't know, but I think that's a more wholesome view of it than everybody running away from society to ten AI friends and never talking to humans again.

It's much less sexy to just say "AI friends," right? Even though, if you look at the possibility set, the idea that people might have a conversational partner that helps them work through their own things in a safe space doesn't have to relate to romantic attachment, just because the movie Her came out.

Right, it can just be a panel of experts. I do have plans to build a small CEO, which is my own boss, just for me to check in with. And you come across a lot of AI engineers who are interested in building mental wellness products, and a lot of these take the form of some kind of journal, which holds your most private thoughts, things you don't really want to send anywhere else. So all of these take advantage of open source models, because people don't want to send that data to OpenAI, and that makes a ton of sense. I just came across something like this from one of my friends here in the coworking space, where you can actually try having a conversation and having a group of AI friends chime in, and see what that feels like to you. It's the first example I've found where someone's actually done this.

Super interesting. So Llama and Code Interpreter stood out pretty clearly as really big things to touch on. As we start to round the corner towards wrapping up: Claude 2 and Anthropic. How significant was this, and in what ways? Was it meaningful in expanding the capability set for developers, or was it more a good example of what you can do if you increase the context window, something that might ultimately become table stakes later on?

I can speak to this a little bit. It is significant, but not earth-shattering. It is the first time that Claude has been generally publicly available; you used to have to be on a waitlist. Yes, it has a longer context window, but to me, more significantly, it is Anthropic finding its foothold in a very competitive AI landscape. Anthropic's message used to be: yes, we're number two to OpenAI, but we're safer. That's not a super appealing pitch to many engineers, though it is very appealing to some corporations, by the way. But having the 100K context window makes them state of the art in one dimension, which is very useful, and the ability to upload multiple files is super useful as well. And I have met a number of businesses, Sourcegraph being the closest example, who are actually choosing to build with the Claude 2 API over OpenAI, because it is better on latency, better on reliability, and better at some forms of code synthesis. So I think it's Anthropic finally finding its footing after a long while of being in OpenAI's shadow.
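For reference, a minimal sketch of building against Claude directly, as businesses like Sourcegraph are described as doing. This assumes the current `anthropic` Python SDK and its Messages API; the 2023-era completions API had a different shape, and the model name is a placeholder for whichever Claude version you have access to.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

msg = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # placeholder; use the Claude model you have access to
    max_tokens=512,
    messages=[
        {"role": "user", "content": "Refactor this function for readability: ..."},
    ],
)
print(msg.content[0].text)
```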
Yeah, and we use Claude for the transcripts and timestamps on this podcast, so shout out to the 100K context window. We couldn't do that when we first started; we were trying to figure out how to chunk this stuff for GPT-4, and then with Claude it was, just put the whole thing in here, man, and it works great. So that's a good start. But I feel like they're always second fiddle. Every time they release something, people go, cool, okay, fine. I feel bad for them, because it's really good stuff, but they need some help on the marketing side and on community buy-in.

I just spent this past weekend at the Claude hackathon, which is, as far as I know, Anthropic's first hackathon. I tweeted a pretty well-received video of the hackathon venue at 2 am, and there were just a ton of people hacking, around 300 people participating, for Claude. I think it's the first real developer excitement I've ever seen for Anthropic and Claude. So I think they're on their way up, and this paves the way for a multi-model future. That's something a lot of people are betting on; the odds are stacked against Anthropic, but they're making some headway. I do think you should always be running your chats side by side against ChatGPT and Claude, and maybe Llama 2. I have a little mini app that does that and saves all the chats across models, and I can legitimately say that Claude wins about 30% of the time whenever I give it a task or a question. That doesn't make it number one, but it is very additive to your overall toolkit.

Yeah, it's certainly the first time that, if you go on Twitter on any given day, you'll see people saying things like, if you haven't used Claude for writing, you have to try it now. People who have made the switch, with no affiliation, are very convinced it's now part of the suite of tools people should be paying attention to, which I think is great. We shouldn't be at a stage yet where we're totally in on just one toolset.

I'll also mention that this month, or at least July, was when we saw the first real introspection of whether too much context is actually a good thing. There's a pretty famous paper, I forget the actual title, that shows a very pronounced U-curve in the retrieval abilities of long-context models: if the item being retrieved is at the start or the end of the context window, it has the best chance of being retrieved, but if it's in the middle, it has a high chance of being lost. So is 100K context a good thing? Are you systematically testing its ability to retrieve the correct factual information, or are you just looking at a summary and going, yeah, looks good to me? I think we'll be testing whether it's worth extending to 100K, or a million tokens, or infinite tokens, or whether you want to blend a short window, say 4,000 or 8,000 tokens, coupled with a proper semantic search system, like the retrieval-augmented generation and vector database companies are doing.
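Here is a minimal sketch of testing retrieval systematically rather than eyeballing a summary: plant a known fact at varying depths in filler text and check whether the model surfaces it. `call_model` is a hypothetical stand-in for any long-context chat API.

```python
def call_model(prompt: str) -> str:
    raise NotImplementedError  # hypothetical: plug in GPT-4, Claude, etc.

FILLER = "The sky was a flat grey all afternoon. " * 400
NEEDLE = "The secret launch code is 7-alpha-9."

def needle_found(depth: float) -> bool:
    cut = int(len(FILLER) * depth)
    context = FILLER[:cut] + NEEDLE + " " + FILLER[cut:]
    answer = call_model(context + "\n\nWhat is the secret launch code?")
    return "7-alpha-9" in answer

# Sweep the needle from the start to the end of the context window.
# The reported result is a U-curve: best at the ends, worst in the middle.
# for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
#     print(depth, needle_found(depth))
```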
That discussion has come up in open source a lot, and basically, I think it matches human memory: you want a short working memory coupled with longer-term retrieval.

The one other obviously big company update we haven't spoken about yet: around the middle of the month, Google Bard had a big set of updates. A lot of it was business-focused, like availability in more languages, and the biggest thing they hung their hat on from a feature perspective was image recognition and the push towards multimodality. Did you guys have any thoughts about that, or was it not high on the priority list as a development this month?

Going back to the earlier point: we're getting to a maturity level in the industry where model updates alone are fine, but people need more. That's why Code Interpreter is so good: it's not just, we made the model a little better, we added this thing; it's a whole new thing. That works if you're playing the model game; if not, you have to go to the product level, and I think Google should start thinking about how to make that work, because when I search on Google Maps for certain stuff, it completely does not work. Maybe they should use models to make that better, and then say, we're using Bard in Google Maps search. But I've kind of tuned out a lot of the single model announcements.

On Bard's updates: with multimodality, they actually beat GPT-4 to releasing a generally available multimodal model. You can upload an image and have Bard describe it, and that's pretty interesting, pretty cool. One of our earliest guests, Brad, the CTO at Roboflow, was actually doing some comparisons, because they have access to a lot of vision models, and Bard came up a little short, but it was pretty good, close to the state of the art. I would say the problem with Bard is that you can't rely on them shipping reliable updates. They had a June update, implicit code execution, where they started to ship code-interpreter-type functionality in a more limited form. If you run the same questions that were advertised in the June blog post, that Sundar Pichai advertised in a video and tweeted out, they no longer work in Bard. They had a regression. That was very embarrassing, obviously unintended, and it shows it's hard to keep model progress shipping reliably. Google has a checkered history of its products being reliable; they just killed off another product this month, RIP. That's something they have to combat: yes, they're trying to ship model progress, and I've met the Bard people, they're good, earnest people, but they have struggled to ship products, even more than OpenAI, which is frankly embarrassing for a company the size of Google.

Outside of the biggies, are there any other key trends, or maybe not even key trends but bubbling interests, that you're noticing in the developer community that aren't necessarily widely seen outside it? One of the things I keep an eye on is all the AutoGPT-like things: this month we had GPT Engineer, and we had MultiOn, who held a hackathon, and a few things like that. Not necessarily in the agent space, but are there any other themes you're keeping an eye on?
One of the things that I keep an eye on is all the AutoGPT-like things. This month we had GPT Engineer, and we had MultiOn, who held a hackathon, and there's a few things like that. But not necessarily in the agent space — are there any other themes that you guys are keeping an eye on?

I'm sure Alessio can chime in, but I do keep a relatively close eye on that agent stuff. It has not died down in terms of the heat. Even the AutoGPT team — who, by the way, work on the first floor of the building that I work in — are hard at work shipping the next version. I think a lot of people are engaging in the dream of agents, and I think scoping them down to something usable is a task that has so far eluded every single team. It is what it is; with all these very ambitious goals, we are at the very start of this journey — the same journey that maybe self-driving cars took in 2012, when they started doing the DARPA challenges.

The other thing I'll point out, in terms of just overall interest: I am definitely seeing a lot of eval-type companies being formed, and winning hackathons, too. What are eval companies? They're basically companies where you monitor the success of your prompts or your agents, version them, and potentially share them. I feel like I can't be more descriptive, just because it's hard to describe what they do — they are not very clear about what they do yet. LangChain launched LangSmith, and I think that's the first commercial product from LangChain, probably the top one or two developer-oriented AI projects out there. That's more observability, but also evals, because they acqui-hired an AI evals project as well. So I'll just call out the general domain of how to eval models as a very big focus of the developers here.

Yep. We've done two seed investments in companies doing agents, but they're both verticalized agents. I think the open-source motion has been AutoGPT — do anything — and now we're seeing a lot of founders saying, hey, if you take that and combine it with deep industry expertise, you can get so many improvements to it. And then the other piece of it is how you do information retrieval. In general knowledge, like documents, everything is kind of flat, but when you're in a specific vertical — say finance, for example — if you're looking at earnings, the latest quarter's are much more important than ones from ten quarters ago. So how do you start to create this information hierarchy between documents, and then how do you use that? Instead of doing simple retrieval from an embedding store, how do you also start to score these things? That's another area of research among founders.
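A minimal sketch of that scoring idea — blending embedding similarity with a freshness term so this quarter's filing outranks one from ten quarters ago. This is an illustration, not anything a specific company shipped; the `Doc` shape, weights, and half-life are all assumptions.

```python
# Recency-aware retrieval scoring: blend cosine similarity with an
# exponentially decaying freshness term, then rank candidates by the blend.

from dataclasses import dataclass
from datetime import date
import math

@dataclass
class Doc:
    text: str
    embedding: list[float]  # precomputed by whatever embedding model you use
    published: date

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def score(query_emb: list[float], doc: Doc, today: date,
          half_life_days: float = 180.0, recency_weight: float = 0.3) -> float:
    similarity = cosine(query_emb, doc.embedding)
    age_days = (today - doc.published).days
    freshness = 0.5 ** (age_days / half_life_days)  # halves every half-life
    return (1 - recency_weight) * similarity + recency_weight * freshness

# Usage: rank candidates pulled from your vector store by the blended score.
# ranked = sorted(docs, key=lambda d: score(q_emb, d, date.today()), reverse=True)
```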
Oh, I'll call out two more things. One more thing that happened this month was SDXL. Text-to-image doesn't seem as sexy anymore, even though last year it was all the rage, but I do think it's coming along. I definitely wish that Google was putting up more of a fight, because at the start of the year they actually released some very interesting papers that they never followed up on, showing some really interesting Transformer-based text-to-image models that I thought were super interesting.

And then the other element, which I'm just very fascinated by — I hesitate to say this, but it's the character side: Character.ai, Replika, and all the versions of that. I do think a lot of people are hacking on this kind of stuff. The retention metrics on Character.ai blow away a lot of the metrics that you might see on traditional social media sites, and AI-native social media is something where there's something there that I think people haven't really explored yet — and people are exploring it. Not to keep returning to this theme, but I just think it's definitely coming for a lot of the ways that we deal with things. Right now we think Copilot, right now we think ChatGPT, but what we really want to speak to is a way of serializing personality and intelligence, and potentially that is a leading form of mind upload. That gets into science fiction, but I do see a lot of people working on that.

Yeah — we just got a Financial Times report that says AI personas from Meta, from Facebook, could be coming next month. There's one that's Abraham Lincoln, one that's like a surfer dude who gives you travel advice. The sourcing is three people with knowledge of the project or whatever, and obviously no confirmation from Meta, but it's no secret that Zuckerberg has been interested in this stuff, and the FT piece is actually a good overview of why a company like Meta would care about it, in very dollars-and-cents terms.

Yeah, and I want to state that the first version of this is very, very meme-y. When I first looked at Character.ai, it was like, okay, I want to talk to Genghis Khan if I'm doing a history class — it's what a 10-year-old would enjoy, you know? But I think the various iterations of this, professionally, would be very interesting. On the developer side of this, I have been calling for the development of agent clouds, which are clouds that are specifically optimized not for human use but for AI agent teams — and that is a form of character, right? A cloud with different environments, with different dependencies pre-installed, that can be programmatically controlled and can give programmatic feedback to agents. And there's a protocol emerging, that some of the leading figures like AutoGPT and E2B are creating, that lets agents run clouds. This would definitely terrify the AI safety people, because we have gone from running them on a single machine towards running, you know, clusters — but it's happening.
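To illustrate the agent-cloud idea, here is a minimal sketch of one building block — a sandboxed environment an agent can drive programmatically and get structured feedback from. A throwaway local Docker container stands in for the cloud; the image name and example command are assumptions, and the real efforts (E2B, AutoGPT) expose richer APIs than this.

```python
# Sandboxed execution with structured feedback: run a command in a
# disposable, network-isolated container and return machine-readable results.

import subprocess

def run_in_sandbox(command: list[str], image: str = "python:3.11-slim") -> dict:
    """Run a command in a throwaway container and return structured feedback."""
    proc = subprocess.run(
        ["docker", "run", "--rm", "--network=none", image, *command],
        capture_output=True, text=True, timeout=60,
    )
    # A structured result the agent loop can branch on, instead of a
    # human reading a terminal.
    return {"exit_code": proc.returncode,
            "stdout": proc.stdout,
            "stderr": proc.stderr}

if __name__ == "__main__":  # assumes a local Docker daemon is running
    result = run_in_sandbox(["python", "-c", "print(2 + 2)"])
    if result["exit_code"] != 0:
        print(result["stderr"])  # feed this back to the model, ask for a fix
    else:
        print(result["stdout"])  # "4"
```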
All right, so let's talk about what comes next. Do you guys have any predictions for August — or, if not predictions, just things that you're watching most closely? Go ahead, Alessio.

Let me think — I think Sean is usually good at the super long-term predictions; I'm more pragmatic. Yeah, he's more like minimum 12 to 24 months. For me, it's probably starting to see more public talk about open-source models in production, with people using that as a differentiator. Right now a lot of it is, oh, these models are there, but nobody's really saying, I moved away from OpenAI and I'm using this. But we run an early-adopters community with about 1,500 leaders from Fortune 500 and large companies, and some of them are like, oh, we deployed Dolly in production and we're using it — we're just not writing a blog post about it. So I think right now the perception is still that everybody's using OpenAI and the open-source models are really toys. But we're going to get into September — you're not going to see a lot of announcements in August proper, but I think a lot of people are going to spend August getting these models ready, and then going into the end of the year saying, hey, we're here too; we're using the open models; we don't need OpenAI. Right now there's still not a lot of public talk about that, so I'm excited to see more.

Yeah. As for myself — this is very self-interested, obviously, but we had it on the agenda: I wrote about the rise of the AI Engineer, and I think it's definitely happening as we speak. I have seen people tag me multiple times a day on how they're reorienting their careers. I think people professionalizing around this — going from essentially informal groups and Slack channels and meetups towards certifications and courses and job titles and actual AI teams in every single company — is happening. I just got a notification, like two days ago, that inside Meta — where apparently you can name your job title whatever you want internally — the first AI Engineer within Meta has been announced. So as far as the near term, I do see this career, this profession, coming into place — something I've been forecasting for a little bit, and I'm excited to help it along.

Awesome. Well, guys, great conversation — tons of interesting stuff happening, obviously. Ironically, I think it's a relatively quieter time in some ways than it even was. My prediction for August is that we're going to see the extension of that — the biggest breather we've had, at least from a feeling perspective, maybe since ChatGPT — but then we're going to rage back in September. You've got Facebook Connect in September; you've got the general return to business that everyone does after August. But of course, the hackathons aren't going to stop in the Bay Area, so people are going to keep building, and it's entirely possible that something hits in the next four weeks that totally changes all that. It'll be exciting to see. Looking forward.
