Latent Space
Latent Space: The AI Engineer Podcast — Practitioners talking LLMs, CodeGen, Agents, Multimodality, AI UX, GPU Infra and all things Software 3.0
Building the AI Engineer Nation — with Josephine Teo, Minister of Digital Development and Information, Singapore


What can non-US cities do to keep up with AI progress? How do you make good AI industrial policy? Are politicians worried about AI affecting elections?

Singapore's GovTech is hosting an AI CTF challenge with ~$15,000 in prizes, starting October 26th, open to both local and virtual hackers. It will be hosted on Dreadnode's Crucible platform; signup here!


It is common to say if you want to work in AI, you should come to San Francisco.

Not everyone can. Not everyone should. If you can only do meaningful AI work in one city, then AI has failed to generalize meaningfully.

As non-Americans working in the US, we know what it’s like to see AI progress so rapidly here, and yet be at a loss for what our home countries can do. Through Latent Space we’ve tried to tell the story of AI outside of the Bay Area bubble; we talked to Notion in New York and Humanloop and Wondercraft in London and HuggingFace in Paris and ICLR in Vienna, and the Reka, RWKV, and Winds of AI Winter episodes were taped in Singapore (the World’s Fair also had Latin America representation and we intend to at least add China, Japan, and India next year1).

The Role of Government with AI

As an intentionally technical resource, we’ve mostly steered clear of regulation and safety debates on the podcast, whether safety bills or technoalarmism, often at the cost of our engagement numbers or our ability to book big-name guests with a political agenda. When SOTA shifts 3x faster than it takes to pass a law, when nobody agrees on definitions of important things2, and when you can elicit never-before-seen behavior with slightly different prompting or sampling, it is hard enough simply to keep up to speed, so we are happy limiting our role to that. The story of AI progress has more often been written in the private sector, usually in spite of, rather than thanks to, government intervention.

But industrial policy is inextricably linked to the business of AI, which we very much do care about; it has an explicitly accelerationist intent, if not impact, and a track record of success in correcting legitimate market failures in private-sector investment, particularly outside of the US. It is with this lens that we approach today’s episode and special guest, our first with a sitting Cabinet member.

Singapore’s National AI Strategy

It is well understood that much of Singapore’s economic success is attributable to industrial policy, from direct efforts like the Jurong Town Corporation industrialization to indirect ones like going all in on English as the national first language. Singapore’s National AI Strategy grew out of its 2014 Smart Nation initiative; it was first launched in 2019 and then refreshed in 2023 by Minister Josephine Teo, our guest today.

the full 68 page pdf in one slide

While Singapore is not often thought of as an AI leader, the National University ranks in the top 10 in publications (above Oxford/Harvard!), and many overseas Singaporeans work at the leading AI companies and institutions in the US (and some of us even run leading AI Substacks?). OpenAI has often publicly named the Singapore government as their model example of government collaborator and is opening an office in Singapore in time for DevDay 20243.

sama standing in front of the Singapore coat of arms at DevDay 2023

AI Engineer Nations

Swyx first pitched the AI Engineer Nation concept at a private Sovereign AI summit featuring Dr. He Ruimin, Chief AI Officer of Singapore, which eventually led to an invitation to discuss the concept with Minister Teo, the country’s de facto minister for tech (she calls it Digital Development, for good reasons she explains in the pod).

This chat happened (with thanks to Jing Long, Joyce, and other folks from MDDI)!

The central pitch for any country, not just Singapore, to emphasize and concentrate bets on AI Engineers, compared with other valuable efforts like training more researchers, releasing more government-approved data, or offering more AI funding, is a calculated one, based on the fact that:

  • GPU clusters and researchers have massive returns to scale and colocation, mostly concentrated in the US, that are irresponsibly expensive to replicate

  • Even if research stopped today and there was no progress for the next 30 years, there are far more capabilities to unlock and productize from existing foundation models, and we are <5% done on this journey

  • Good AI Engineering requires genuine skill and is deepening enough to justify sub-specialization as a sub-industry of Software Engineering

  • Companies and countries with better AI engineer workforces will disproportionately benefit from AI vs those who treat it as one of many equivalent priorities

  • Tech progress is often framed as “the future is here but it is not evenly distributed”. The role of the AI Engineer is therefore to better distribute the state of the art to as much of humanity as possible, including the elderly, poor, and differently abled.

All of which are themes we first identified in the Rise of the AI Engineer. Singapore simply has a few additional factors that make it not just a good fit, but an economic imperative:

  • English speaking, very-online country that is great at STEM

  • Aging, ex-growth population (Total Fertility Rate of 1.1)

  • #3 GDP per capita (PPP) country in the world

  • Physically remote from major economic growth centers ex China/SEA

That basically dictates that any continued economic growth must be decoupled from geography, timezone, and headcount, and from reliance on existing industrial drivers. Short of holding Taylor Swift hostage, making an intentional, concentrated bet on AI industrial policy is Singapore’s best option to keep up progress in the 21st century. As a pioneer in treating education policy as the primary long-term determinant of economic success, Singapore may well end up with Python as its next National Language in the long run, a proposal we also discussed extensively at the RAISE retreat where this episode was recorded.

With election seasons coming up around the globe, we also took the opportunity to ask about Singapore’s recent deepfake (election integrity) law.

Full YouTube episode

Show Notes

Timestamps

00:00:00 Introductions
00:00:34 Singapore's National AI Strategy
00:02:50 Ministry of Digital Development and Information
00:08:49 Defining a National AI Strategy
00:14:32 AI Safety and Governance
00:16:50 AI Adoption in Companies and Government
00:19:53 Balancing AI Innovation and Safety
00:22:56 Structuring Government for Rapid Technological Change
00:27:08 Doing Business with Singapore
00:32:21 Training and Workforce Development in AI
00:37:05 Career Transition Help for Post-AI Jobs
00:40:19 AI Literacy and Coding as a Language
00:43:28 Sovereign AI and Digital Infrastructure
00:50:48 Government and AI Workloads
00:51:02 Favorite AI Use Case in Government
00:53:52 AI and Elections

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.

Swyx [00:00:13]: Hey everyone, this is a very, very special episode. We have here Ms. Josephine Teo from Singapore. Welcome.

Josephine [00:00:19]: Hi Shawn and hi Alessio. Thank you for having me. Of course.

Swyx [00:00:23]: You are the Minister for Digital Development and Information and Second Minister for Home Affairs. We're meeting here at RAISE, which is effectively your agency. Maybe we want to explain a little bit about what Singapore is doing in AI.

Josephine [00:00:34]: Well, we've had an AI strategy at the national level for some years now, and about two years ago when generative AI became so prominent, we thought it was about time for us to refresh our national AI strategy. And it's not unusual on such occasions for us to consult widely. We want to talk to people who are familiar with the field. We want to talk to people who are active as practitioners, and we also want to talk to people in Singapore who have an interest in seeing the AI ecosystem develop. So when we put all these together, we discovered something else by chance, and it was really a bonus. This was the fact that there were already Singaporeans that were active in the AI space, particularly in the US, particularly in the Bay Area. And one of the exciting things for us was how could we also consult these Singaporeans who clearly still have a passion for Singapore, they do care about what happens back home, and they want to contribute to it. So that's how RAISE came about. And RAISE actually preceded the publication of the refresh of our national AI strategy, which took place in December last year. So the inputs of the participants from RAISE helped us to sharpen what we thought would be important in building up the AI ecosystem. And also with the encouragement of participants at RAISE, primarily Singaporeans who were doing great work in the US, we decided to raise our ambitions, literally. That's why we say AI for the public good, recognising the fact that commercial interest will certainly drive exciting developments in the industry space. But keep in mind, there is a need to make sure that AI serves the public good. And we say for Singapore and the world. So the idea is that experiments that are carried out in Singapore, things that are scaled up in Singapore potentially could have contributions elsewhere in the world. And so AI for the public good, for Singapore and the world. That's how it came about.

Alessio [00:02:50]: I was listening to some of your previous interviews, and even the choice of the name development in the ministry name was very specific. You mentioned naming is your ethos. Can you explain maybe a bit about what the ministry does, which is not simply funding R&D, but it's also thinking about how to apply the technologies in industry and just maybe give people an overview since there's not really an equivalent in the US?

Josephine [00:03:13]: Yeah, so when people talk about our Smart Nation efforts, it was helpful in articulating a few key pillars. We talked about one pillar being a vibrant digital economy. We also talk about a stable digital society because digital technologies, the way in which they are used, can sometimes cause divisions in society or entrench polarisation. They can also have the potential of causing social upheaval. So when we talked about stable digital society, that was what we had in mind. How do you preserve cohesion? Then we said that in this domain, government has to be progressive too. You can't expect the rest of Singapore to digitalise, and yet the government is falling behind. So a progressive digital government is another very important pillar. And underpinning all of this has to be comprehensive digital security. There is, of course, cyber security, but there is also how individuals feel safe in the digital domain, whether as users on social media or if they're using devices and they're using services that are delivered digitally. So when we talk about these four pillars of a Smart Nation, people get it. When we then asked ourselves, what is the appropriate way to think of the ministry? We used to be known as the Ministry of Communications and Information, and we had been doing all this digital stuff without actually putting it into our name. So when we eventually decided to rename the ministry, there were a couple of options to choose from. We could have gone for digital technologies, we could have gone for digital advancement, we could have gone for digital innovation. But ultimately we decided on digital development because it wasn't the technologies, the advancements or the innovation that we cared about, they are important, but we're really more interested in their impact to society, impact to communities. So how do we shape those developments? How do we achieve a digital experience that is trustworthy? 
How do we make sure that everyone, not just individuals who are savvy from the get-go in digital engagements, how does everyone in society, regardless of age, regardless of background, also feel that they have a sense of progression, that embracing technology brings benefits to them? And we also believe that if you don't pay attention to it, then you might not consciously apply the use of technology to bring people together. And you may passively just allow society to break apart without being too...

Swyx [00:06:05]: Oh my god, that's drastic.

Josephine [00:06:06]: That sounds very drastic, that sounds a bit scary. But we thought that it's important to say that we do have the objective of bringing people together with the help of technology. So that's how we landed on the idea of digital development. And there's one more dimension, that one we draw reference from perhaps the physical developmental aspects of cities. We say that if you think of yourself as a developer, all developers have to conceptualise, all developers have to plan, developers have to implement, and in the process of implementation you will monitor and things don't go as well as you'd like them to, you have to rectify. Yeah, it sucks, essentially, it is. But that's what any developer, any good developer must do. But a best-in-class developer would also have to think about the higher purpose that you're trying to achieve. Should also think about who are the partners that you bring into the picture and not try to do everything alone. And I think very importantly, a best-in-class developer seeks to be a leader in thought and action. So we say that if we call ourselves the Ministry of Digital Development, how do we also, whether in thinking of the digital economy, thinking of the digital society, digital security or digital government, embody these values, these values of being a bridge builder, being an entity that cares about the longer-term impact, that serves a higher purpose. So those were the kinds of things that we brought into the discussions on our own renaming. That's quite a good experience for the whole team.

Swyx [00:07:49]: From the outside, I actually was surprised, I was looking for MCI and I couldn't find it. Since you renamed it.

Josephine [00:07:54]: There, there, there.

Swyx [00:07:55]: Yeah, exactly. We have to plug the little logo for the cameras. I really like that you are now recognizing the role of the web, digital development, technology. We never really had it officially, it used to be Ministry of Information Communication and the Arts. One thing that we're going to touch on is the growth of Singapore as an engineering hub. OpenAI is opening an office in Singapore and how we can grow more AI engineers in Singapore as well. Because I do think that that is something that people are interested in, whether or not it's for their own careers or to hire out in Singapore. Maybe it's a good time to get into the National AI Strategy. You presented it to the PM, now PM, I guess. I don't know what the process was because we have a new PM. Most of our audience is not going to be Singaporeans. There are going to be more Singaporeans than normal, but most of our audience are not Singaporeans, they've never heard of it. But they all come from countries which are all trying to figure out the National AI Strategy. So how did you go about defining a National AI Strategy?

Josephine [00:08:49]: Well, in some sense, we went back to the drawing board and said, what do we want to see AI be able to do in Singapore? I mean, there are all these exciting developments, obviously we would like to be part of the action. But it has to be in service of something. And what we were interested in is just trying to find a way to continuously uplift our people. Because ultimately, for any national strategy to work, it must bring benefits to the local communities. And the local communities can be defined very broadly. You have citizen communities, and citizens would like to be able to do better jobs, and they would like to be able to earn higher wages. But it's not just citizen communities. Citizens are themselves sometimes involved in businesses. So how about the enterprise community? And in the enterprise community, in the Singapore landscape, it's really interesting. Like most other economies, we do have SMEs. But we also have multinationals that are at the very cutting edge. Because in order to succeed in Singapore, they have to be very competitive. So the question is, how can they, through the use of technologies, and including AI, offer an even higher value proposition to their customers, to their owners. And so we were very interested in seeing enterprise applications of AI. That in a way also relates back to the workforce. Because for all of the employees of these organisations, then to see that their employers are implementing AI models, and they are identifying AI use cases, is tremendously motivating for the broader workforce to themselves want to acquire AI-related skills. Then not forgetting that for the large body of small and medium enterprises, it's always going to be a little bit harder for smaller businesses to access technologies. So what do we put in place to enable these small businesses to take advantage of what AI has to offer? So you have to have a holistic strategy that can fire up many different engines. 
So we work across the board to make compute available, firstly to the research community, but also taking care to ensure that compute capacity could be available to companies that are in need of them. So how do we do that? That's one question that we have to go get it organised. Then another very important aspect is making data available. And I think in this regard, some of the earlier work that we did was helpful. We did, from more than a decade ago, already have privacy laws in place. We have data protection, and these laws have also been updated so as to support businesses with legitimate use cases. So the clarity and the certainty is there. And then we've also tried to organise data, make it more readily available. Some of it, for example, could be specific to the finance sector, some specific to the logistics sector. But then there are also different kinds of data that lies within government possession, and we are making it much more readily available to the private sector. So that deals with the data part of it. I think the third and very important part of it is talent. And we're thinking of talent at different levels. We're thinking of talent at the uppermost level, you know, for want of a better term, we call them AI creators. We know that they are very highly sought after, there aren't all that many in the world. And we want to interest them to do work with Singapore. Sometimes they will be in Singapore, but there is a value in them being plugged into the international networks, to be plugged into globally leading-edge projects that may or may not be done out of Singapore. We think that keeping those linkages are very important. These AI creators have to be supported by what we generally refer to as AI practitioners. We're talking about people who do data science, we're talking about people who do machine learning, they're engineers, they're absolutely engineers. 
But then you also need the broad swath of AI users, people who are going to be comfortable using the tools that are made available to them. So you may have, for example, a group within a company that designs AI bots or finds use cases, but if their colleagues aren't comfortable using them, then in some sense, the picture is not complete. So we want to address the talent question at all of these levels. In a sense, we are fortunate that Singapore is compact enough for us to be able to get these kinds of interventions organised. We already have a robust training infrastructure, we can rely on that. People know what funding support is available to them. Training providers know that if they curate programmes that lead to good employment outcomes, they are very likely to be able to get support to offer these programmes at subsidised rates. So in a sense, that ecosystem is able to support what we hope to see come out of an AI strategy. So those are just some of the pieces that we put in place.

Swyx [00:14:15]: Many pieces. 15 items. Okay. So for people who are interested, they can look it up, but I just wanted to get an introduction to people. Many people don't even know that we have a very active AI strategy, and actually it's the second one. There's already been a five-year plan, pre-generative AI, which was very foresighted.

Josephine [00:14:32]: One thing that we also pay attention to is how can AI be developed and deployed in a responsible manner, in a way that is trustworthy. And we want to plug ourselves into conversations at the forefront. We have an AI Safety Institute, and we work together with our colleagues in the US, as well as in the UK, and anywhere else that has AI Safety Institutes to try and advance our understanding of this topic. But I think more importantly is that in the meantime, we've got to offer the business community, offer AI developers something practical to work with. So we've developed testing tools, by no means perfect, but they're a start. And then we also said that because AI Verify was developed for traditional AI, classical AI, then for generative AI, you need something different. Something that also does red teaming, something that also does benchmarking. But actually our interests go beyond that, beyond AI governance frameworks and practical tools. We are interested in getting into the research as to how do you prove that an AI system is really safe? How do you get into the mathematics of it? I'm not an expert in this field, but I think it's not difficult for people to understand that until you can get to a proof, then some of the other testing is reassuring, but to an extent.

Swyx [00:15:58]: It may be fundamentally unprovable.

Josephine [00:16:00]: It may well be.

Swyx [00:16:01]: You might have to be comfortable with that and go ahead anyway.

Josephine [00:16:03]: Yes.

Alessio [00:16:04]: Yeah. Yeah. The simulations especially are really interesting. I think NTU is going to be one of the first universities to have these cyber ranges for AI red teaming training. One of our companies does AI red teaming and their customers are like some of the biggest foundation model labs. And then GovTech is like the only government organization working with them. So yeah, Singapore has been at the forefront of this. We sat down with the CPO of Grab, Philipp Kandal, on my trip there, and they shut down their whole company for a week to just focus on Gen AI training. Literally, if you work at Grab, you have to do something in Gen AI and learn and get comfortable with it. Going back to your point, I think the interest of the government easily transpires into the companies. This is like a national priority, so we should all spend time in it.

Josephine [00:16:50]: You're right. Companies like Grab, what they are trying to do is to make awareness so broad within their organization and to get to a level of comfort with using Gen AI tools, which I think is a smart move because the returns will come later, but they will surely come. They're not the only ones doing that, I'm glad to say, some of our leading banks, even Singapore Airlines, which may be the airline that you flew into Singapore, they've got a serious team looking at AI use cases, and I don't know whether you are aware of it, they have definitely quite a good number. I'm not sure that they have talked about it openly because airline operations are quite complex.

Swyx [00:17:37]: At least Singapore Airlines offer.

Josephine [00:17:38]: No, because airline operations are very complex. There are lots of things that you can optimize. There are lots of things that you have to comply with. There are lots of processes that you must follow, and this kind of context makes it interesting for AI. You can put it to good use. And government mustn't be lagging too. We've always believed that in time to come, we may well have to put in place guardrails, but you are able to put in place guardrails better if you yourself have used the technology. So that's the approach that we are taking. Quite early on, we decided to lay out some guidelines on how Gen AI could be used by government offices. And then we also went about developing tools that will enable them to practice and also to try their hand at it. I think in today's context, we're quite happy with the fact that there are enough colleagues within government that are competent, that know, in fact, how to generate their own AI and create a system for their colleagues. And that's quite an exciting development.

Swyx [00:18:47]: I will mention that as a citizen and someone keen on developing AI in Singapore, I do worry that we lead with safety, lead with public good. I'm not sure that the Singapore government is aware that safety sometimes is a bad word in some AI circles because their work is associated with censorship.

Josephine [00:19:09]: Or over-regulation.

Swyx [00:19:10]: Over-regulation. And nerfing is the Gen Z word for this, of capabilities in order to be safe. And actually that pushes what you call AI creators, some others might call LLM trainers, whatever. There are trade-offs. You cannot have it all. You cannot have safe and cutting edge sometimes, because sometimes cutting edge means unsafe. I don't know what the right answer is, but I will say that my perception is a lot of the Bay Area, San Francisco is on the, let everything be as unregulated as possible. Let's explore the frontier. And Europe's approach is like, we're going to have government conferences on the safety of AI, even before creating frontier AI. And Singapore, I think is like in the middle of that. There's a risk. Maybe not. I saw you shake your head.

Josephine [00:19:53]: It's a really interesting question. How do you approach AI development? Do you say that there are some ethical principles that should be adhered to? Do you say that there are certain guidelines that should inform the developer's thinking? And we don't have a law in place just yet. We've only introduced very recently a law that has yet to be passed. This is on AI generated content, other synthetic materials that could be used during an election. But that's very specific to an election. It's very specific to elections. For the broader base of AI developers and AI model deployers, the way in which we've gone about it is to put in place the principles. We articulate what good AI governance should look like. And then we've decided to take it one step further. We have testing tools, we have frameworks, and we've also tried to say, well, if you go about AI development, what are some of the safety considerations that you should put in place? And then we suggest to AI model developers that they should be transparent. What are the things they ought to be transparent about? For example, your data. How is it sourced? You should also be transparent about the use cases. What do you intend for it to be used for? So there are some of these specific guidelines that we provide. They are, to a large extent, voluntary in nature. But on the other hand, we hope that through this process, there is enough education being done so that on the receiving end, those who are impacted by those models will learn to ask the right questions. And when they ask the right questions of the model developers and the deployers, then that generates a virtuous cycle where good questions are being brought to the surface, and there is a certain sense of responsibility to address those questions. I take your point that until you are very clear about the outcomes you want to achieve, putting in place regulations could be counterproductive. And I think we see this in many different sectors. 
Well, since AI is often talked about as general purpose technology, yes, of course, in another general purpose technology, electricity, in its production, of course, there are regulations around that. You know, how to keep the workers safe in a power plant, for example. But many of the regulations do not attempt to stifle electricity usage to begin with. It says that, well, if you use electricity in this particular manner or in that particular manner, then here are the rules that you have to follow. I believe that that could be true of AI too. It depends on the use cases. If you use it for elections, then okay, we will have a set of rules. But if you're not using it for elections, then actually in Singapore today, go ahead. But of course, if you do harmful things, that's a different story altogether.

Alessio [00:22:56]: How do you structure a ministry when the technology moves so quickly? Even if you think about the moratorium that Singapore had on data center build-out that was lifted recently, obviously, you know, that's a forward-looking thing. As you think about what you want to put in place for AI versus what you want to wait out and see, like, how do you make that decision? You know, CEOs have to make the same decision. Should I invest in AI now? Should I follow and see where it goes? What's the thought process and who do you work with?

Josephine [00:23:23]: The fortunate thing for Singapore, I think, is that we're a single tier of government. In many other countries, you may have the federal level and then you have the provincial or state level governments, depending on the nomenclature in that particular jurisdiction. For us, it's a single tier.

Swyx [00:23:41]: City-state.

Josephine [00:23:42]: City-state. When you're referring to the government, well, is the government, no one asks, okay, is it the federal government or is it the local government? So that in itself is greatly facilitative already. The second thing is that we do have a strong culture of cooperating across different ministries. In the digital domain, you absolutely have to, because it's not just my ministry that is interested in seeing applications being developed and percolate throughout our system. If you are the Ministry of Transport, you'd be very interested how artificial intelligence, machine learning can be applied to the rail system to help it to advance from corrective maintenance where you go in and maintain equipment after they've broken down to preventive maintenance, which is still costly because you can't go around maintaining everything preventatively. So how do you prioritize? If you use machine learning to prioritize and move more effectively into predictive maintenance, then potentially you can have a more reliable rail system without it costing a lot more. So Ministry of Transport would have this set of considerations and they have to be willing to support innovations in their particular sector. In healthcare, there would be equally a different set of considerations. How can machine learning, how can AI algorithms be applied to help physicians, not to overtake physicians? I don't think physicians can be overtaken so easily, not at all for the imaginable future. But can it help them with diagnosis? Can it help them with treatment plans? What constitutes an optimized treatment plan that would take into consideration the patient's whole set of health indicators? And how does a physician look at all these inputs and still apply judgment? Those are the areas that we would be very interested in as MDDI, but equally, I think, my colleagues in the Ministry of Health. 
So the way in which we organize ourselves must allow for ownership to also be taken by our colleagues, that they want to push it forward. We keep ourselves relatively lean. At the broad level, we may say there's a group of colleagues who looked at digital economy, another group that looks at digital society, another group looks at digital government. But actually, there are many occasions where you have to be cross-disciplinary. Even digital government, the more you digitalize your service delivery to citizens, the more you have to think about the security architecture, the more you have to think about whether this delivery mechanism is resilient. And you can't do it in isolation. You have to then say, if the standards that we set for ourselves are totally dislocated with what the industry does, how hyperscalers go about architecting their security, then the two are not interoperable. So a degree of flexibility, a way of allowing people to take ownership of the areas that come within their charge, and very importantly, constantly building bridges, and also encouraging a culture of not saying that, here's where my job stops. In a field that is, as you say, developing as quickly as it does, you can't rigidly say that, beyond this, not my problem. It is your problem until you find somebody else to take care of it.

Swyx [00:27:08]: The thing you raised about healthcare is something that a lot of people here are interested in. If someone, let's say a foreign startup or company, or someone who is a Singaporean founder wants to do this in the healthcare system, what should they do? Who do they reach out to? It often seems impenetrable, but I feel like we want to say Singapore is open for business, but where do they go?

Josephine [00:27:30]: Well, the good thing about Singapore is that it's not that difficult eventually to reach the right person. But we can also understand that to someone who is less familiar with Singapore, you need an entry point. And fortunately, that entry point has been very well served by the Economic Development Board. The Economic Development Board has colleagues who are based in, I believe, more than 40 locations, and they serve as a very useful initial touch point. And then they might provide advice as to who to link up with in Singapore. And it doesn't take more than a few clicks, in a way, to get to the right person.

Swyx [00:28:09]: I will say I've been dealing with EDB a little bit for my conference, and they've been extremely responsive. It's been nice to see, because I never get to see this side of government, that as someone who wants to bring a foreign business into Singapore, they're really rolling out the welcome mat.

Josephine [00:28:24]: But we also recognise that in newer areas, there could be a question of, oh, okay, this is something unfamiliar. The way in which we go about it is to say that, okay, even if there is no particular group or entity that champions a topic, we don't have to immediately turn away that opportunity. There must be a way for us to connect to the right group of people. So that tends to be the approach that we take.

Swyx [00:28:52]: There's a bit of tension. The external perception of Singapore, people are still very influenced by the Michael Fay incident of like 30 years ago. And they see us as conservative. And I feel like within Singapore, we know what the OB markers are, quote unquote, and then we can live within that. And actually, you can have a lot of experimentation within that. In fact, I think a lot of Singapore's success in finance has been due to a liberal acceptance of what we can do. I don't have a point apart from saying that I hope people who are looking to enter Singapore don't have that preconception that we are hard to deal with, because we're very eager, I think, is my perception.

Josephine [00:29:29]: You need to hop on a plane and get to Singapore, and then we are happy to show them around.

Swyx [00:29:34]: I'll take this chance to mention that I have kind of been pitching next year as Singapore's Olympics year, in the sense that ICLR, one of the big machine learning conferences, is coming. I think one of your agencies had a part to do with that, and I'm bringing my own conference as well to host alongside. Excellent.

Josephine [00:29:50]: So you're hosting a conference on AI engineers? Yes. Fantastic. You'll be very welcome. Oh, yeah. Thanks.

Swyx [00:29:56]: I hope so. Well, you can't deny me entry.

Josephine [00:29:58]: Should we have reason to? No, no, no.

Swyx [00:30:02]: My general hope is that when conferences like ICLR happen in Singapore, that a lot of AI creators will be coming to Singapore for the first time, and they'll be able to see the kind of work that's being done. Yes. And that will be on the research side. And I hope that the engineering side grows as well. Yeah. We can talk about the talent side if you want.

Josephine [00:30:18]: Well, it's quite interesting for me because I was listening to your podcast explaining the different dimensions of what an AI engineer does, and maybe we haven't called them AI engineers just yet, but we are seeing very healthy interest amongst people in companies that take an enthusiastic approach to try and see how AI can be helpful to their business. They seem to me to fit the bill. They seem to me already, whether they recognize it or not, to be the kind of AI engineers that you have in mind, meaning that they may not have done a PhD, they may not have gotten their degrees in computer science, they may not have themselves used NLP. They may not be steeped in this area, but they are acquiring the skills very quickly. They are pivoting. They have the domain knowledge.

Swyx [00:31:11]: Correct. It's not even about the pivoting. They might just train from the start, but the point is that they can take a foundation model that is capable of anything and actually fashion it into a useful product at the end of it. Yes. Right? Which is what we all want. Everybody downstairs wants that. Everybody here wants that. They want useful products, not just generally capable models. I see the job title. There are some people walking around with their lanyards today, which is kind of cool. I think you have a lot of terms, like AI creators, AI practitioners. I want to call out that there was this interesting goal to triple the number of AI practitioners, which is part of the national AI strategy, from 5,000 to 15,000. But people don't walk around with the title AI practitioner.

Josephine [00:31:49]: Absolutely not.

Swyx [00:31:50]: So I'm like, no, you have to focus on job title because job titles get people jobs. Yeah.

Josephine [00:31:55]: Fair enough.

Swyx [00:31:56]: It is just shorthand for companies to hire and it's a shorthand for people to skill up in whatever they need in order to get those jobs. I'm a very practical person. I think many Singaporeans are, and that's kind of my pitch on the AI engineer side.

Josephine [00:32:10]: Thank you for that suggestion. We'll be thinking about how we also help Singaporeans understand the opportunities to be AI engineers, how they can get into it.

Swyx [00:32:21]: A lot of governments are trying to do this, right? Like train their citizens and offer opportunities. I have not been in the Singapore workforce for my adult career, so I don't really know what's available apart from SkillsFuture. I think that there are a lot of people wanting help, and they go for courses, they get certificates. I don't know how we get them over the hump of going into industry and being successful engineers, and I fear that we're going to create a whole bunch of certificates that don't mean anything. I don't know if you have any thoughts or responses on that.

Josephine [00:32:53]: This idea that you don't want to over-rely on qualifications and credentials is also something that has been recognised in Singapore for some years now. That even includes your academic qualifications. Every now and then you do hear of people deciding that that's not the path they're going to take, and they're going to experiment and try different ways. Entrepreneurship could be one of them. For the broad workforce, what we have discovered is that the signal from the employer is usually the most important. As members of the workforce, they are very responsive to what employers are telling them. In the organisational context, like in the case of Grab, Alessio was talking about them shutting down completely for one week so that everyone can pick up generative AI skills. That sends a very strong signal. So quite a lot of the government funding will go to the company, saying: if this is an initiative you want to undertake, we recognise that it does take up some of your company's resources, and we are willing to help with it. These are what we call company-led training programmes. But not everyone works for a company that is progressive. If the company is not ready to introduce an organisation-wide training initiative, then what does an individual do? So we have an alternative to offer. What we've done is to work with knowledgeable industry practitioners to identify, for specific sectors, the kinds of technology that will disrupt jobs within the next three to five years. We're not choosing to look at a very long horizon, because no one really knows what the future of work will be like in 15 or 35 years, except in very broad terms. You can say in very broad terms that you are going to have shorter learning cycles, and that skills will atrophy at a much quicker rate. Those broad things we can say. But specifically, the job that I'm doing today, the tasks that I have to perform today, how will I do them differently?
I think in three to five years you can say. And you can also be quite specific. If you're in logistics, what kinds of technology will change the way you work? Robotics will be one of them. Robotics isn't as likely to change jobs in financial services, but AI and machine learning will. So if you identify the timeframe and you identify the specific technologies, then you go to a specific job role and say, here's what you're doing today and here's what you're going to be doing in this new timeframe. Then you have a chance to allow individuals to take ownership of their learning and say, how do I plug the gap? So one of the examples I like to give is that if you look at the accounting profession, a lot of the routine work will be replaceable. A lot of the tasks that are currently done by individuals can be done with a good model backing you. Now, then what happens to the individual? They have to be able to use the model. They have to be able to use the AI tools, and then they will have to pivot to doing other things. For example, there will still be a great shortage of people who are able to do forensics. And if you want someone to do forensics, for example, a financial crime has taken place. Within an organisation, there was a discovery of fraud. How did this come about? That forensics work still needs an application of human understanding of the problem. Now, one of the things that we found is that a person with audit experience is actually quite suitable to do digital forensics because of their experience in audit. So then how do we help a person like that pivot? It's good if their employer is interested in investing in their training, but we would also like to encourage individuals to refer to what we call jobs transformation maps to plan their own career trajectories. That's exactly what we have done. I think we have definitely more than a dozen such job transformation maps available, and they cut across a variety of sectors.

Swyx [00:37:05]: So it's like open source career change programmes. Exactly.

Josephine [00:37:08]: I think you put it better than I, Sean.

Swyx [00:37:11]: You can count on me for marketing.

Josephine [00:37:13]: Yeah. So actually, one day, somebody is going to feed this into a model.

Swyx [00:37:17]: Yeah, I was exactly thinking that.

Josephine [00:37:19]: Yeah, they have to. Actually, if they just use RAG, it wouldn't be too difficult, right? Because those documents, added to a database for the purposes of RAG, will still all fit into the window. It's going to be possible.

Swyx [00:37:32]: This is a planning task. That is the talk of the town this week, because of OpenAI's o1 model: the next frontier after RAG is planning and reasoning. So the steps need to make sense. And that is not typically a part of RAG. RAG is more recall of facts. And this is much more about planning, something that in sequence makes sense to get to a destination. Which could be really interesting. I would love the auditors to spell out their reasoning traces so that the language model guys can go and train on it.

Josephine [00:38:04]: The planning part, I was trying to do this a couple of years ago. That was when I was still in the manpower ministry. We were talking to, in fact, some recruitment firms in the US. And it's exactly as you described. It's a planning process. To pivot from one career to the next is very often not a single step. There might be a path for you to take there. And if you were able to research the whole database of people's career paths, then potentially for every person that shows up and asks the question, you can use this database to map a new career path.

Swyx [00:38:44]: I'm very open about my own career transition from finance to tech. That's why I brought Quincy Larson here to RAISE, because he taught me to code. And I think he can teach Singapore to code. Wow, why not?

Josephine [00:38:55]: If they want to. Many do. Yeah, many do.

Swyx [00:38:58]: Many do.

Josephine [00:38:59]: So they will be complementary. There is the planning aspect of it. But if you wanted to use RAG, it does not have individual personalised career paths to draw on. It has a frame, a proposal of how you could go about it. It could tell you, maybe from A, you could get to B. Whereas what you're talking about with planning is that, well, here's how someone else has gotten from A to B by going through C, D, E in between. So they're complementary things.

Swyx [00:39:33]: You and I talked a little bit this morning about winning the 30-year war, right? A lot of the plans are very short term, like, how can we get it now? We got OpenAI to open an office here, great, let's go and get Anthropic, Google DeepMind, all these guys, the AI creators, to move to Singapore. Hopefully we can get there. Maybe, maybe not, right? It's hard to tell. The 30-year war, in my mind, is the kind of scale of operation that led to me speaking English today. We as a government decided, strategically, English is an important thing, we'll teach it in schools, we'll adopt it as the language of business. And you and I discussed, is there something like that for code? Is it that level? Is it time for the kind of shift that we made for English, for Mandarin? Is the third one that we speak Python as a second language? And I want to just get your reactions to this crazy idea.

Josephine [00:40:19]: This may not be so crazy, the idea that you need to acquire literacy in a particular field. I mean, some years ago, we decided that computer literacy was important for everyone to have and put in place quite a lot of programs to enable people at various stages of learning, including those who are already adult learners, to acquire these kinds of skills. So, you know, AI literacy is not a far-fetched idea. Is it all going to be coding? Perhaps for some people, this type of skill will be very relevant. Is it necessary for everyone? That's something the jury is still out on. I don't think there is a clear conclusion. We've discussed this with colleagues from around the world who are interested in trying to improve educational outcomes. These are professional educators who are very interested in curriculum. They're interested in helping children become more effective in the future. And I think as far as we are able to see, there is no real landing point yet. Does everyone need to learn coding? And I think even for some of the participants at RAISE today, they did not necessarily start with a technical background. Some of them came into it quite late. This is not to say that we are completely closed to the idea. I think it is something that we will continue to investigate. And the good thing about Singapore is that if and when we come to the conclusion that it's something that has to become either a third language for everyone or has to become as widespread as mathematics or some other skillset, digital skills, or rather reading skills, then maybe it's something that we have to think about introducing on a wider scale.

Alessio [00:42:17]: In July, we were in Singapore. We hosted the Sovereign AI Summit. We gave a presentation to a lot of the leaders from Temasek, GIC, EDBI about some of the stuff we've seen in Silicon Valley and how different countries are building out AI. Singapore was 15% of NVIDIA's revenue in Q3 of 2024. So you have a big investment in sovereign data infrastructure and the power grid and all the build-outs there. Malaysia has been a very active space for that too. How do you think about the importance of owning the infrastructure and understanding where the models are run, both from the autonomous workforce perspective, as you enable people to use this, but also, you mentioned the elections: if you have a model that is being used to generate election-related content, you want to see where it runs, whether or not it's running in a safe environment. And obviously, there's more on the geopolitical side that we will not touch on. But why was it so important for Singapore to do this so early, to make such a big investment? And how do you think about the Southeast Asian, not bloc, but coalition? I was at an office in Singapore, and you can see Indonesia from one window, you can see Malaysia from another window. So everything there is pretty interconnected.

Josephine [00:43:28]: There seems to be a couple of strands in your question. There was a strand on digital infrastructure, and then I believe there was also a strand in terms of digital governance. How do you make sure that the environment continues to be supportive of innovation activities, but also that you manage the potential harms?

Swyx [00:43:48]: I think there's a key term of sovereign AI as well that's kind of going around. I don't know what level this is at.

Josephine [00:43:52]: What did you have in mind?

Alessio [00:43:54]: Especially as you think about deploying some of these technologies and using them, you could deploy them in any data center in the world, in theory. But as they become a bigger part of your government, they become a bigger part of the infrastructure that the country runs on, maybe bringing them closer to you is more important. You're one of the most advanced countries in doing that. So I'm curious to hear what that planning was, the decision was going into it. It's like, this is something important for us to do today versus waiting later. We want to touch on the elections thing that you also mentioned, but that's kind of like a separate topic.

Swyx [00:44:29]: He's squeezing two questions in one.

Josephine [00:44:32]: Right. Alessio, a couple of years ago, we articulated for the government a cloud-first strategy, which therefore means that we accept that there are benefits of putting some of our workloads on the cloud. For one thing, it means that you don't have to have all the capacity available to you on a dedicated basis all the time. We acknowledge the need for flexibility. We acknowledge the need to be able to expand more quickly when the workload needs increase. But when we say a cloud-first strategy, it also means that there will be certain things that are perhaps not suitable to put on the cloud. And for those, you need to have a different set of infrastructure to support. So having a hybrid approach where some of the workloads, even for government, can go to the cloud, and then some of the workloads have to remain on-prem. I think that is a question of the mix. To the extent that you are able to identify the systems that are suitable to go to the cloud, then the need to have the workloads run on your on-prem systems is more circumscribed as a result. And potentially, you can devote better resources to safeguarding this smaller bucket rather than to try and spread your resources to protecting the whole, because you are also relying on security architecture of cloud service providers. So this hybrid approach, I think, has defined how we think about government workloads. In some sense, how we will think about AI workloads is not going to be entirely different. This is looking at the question from the government standpoint. But more broadly, if you think about Singapore as a whole, equally, not all the AI workloads can be hosted in Singapore. The analogy I like to make sometimes is, if you think about manufacturing, some of the earlier activities that were carried out in Singapore at some point in time became not feasible to continue. And then they have to be redistributed elsewhere. You're always going to be part of this supply chain. 
There is a global supply chain. There is a regional supply chain. And if everyone occupies a point in that supply chain that is optimal for their own circumstances, that plays to their advantage, then in fact, the whole system gains. That's also how we will think of it. Not all the AI workloads, no matter how much we expand our data center capacity, will be possible to host. Now, the only way we could host all the AI workloads is if we were totally unambitious, with so little AI workload that you can host everything in Singapore. If there is more AI workload, it has to be distributed elsewhere. Does all of it require such tight latency margins that it absolutely has to be in Singapore? Some of it actually can be distributed; we'll have to see. But a reasonable guess would be that there is always going to be scope for redistribution. And in that sense, we look at the whole development in our region in a positive way. There is just more scope to be able to host these activities. For Southeast Asia?

Swyx [00:47:44]: For Southeast Asia.

Josephine [00:47:46]: Could be elsewhere in the world. And it's generally a helpful thing to happen. Keep in mind also that when you look at data center capacity in Singapore, relative to our GDP, relative to our population, it's already one of the densest in the world. That doesn't mean that we stop expanding the capacity. We are still trying to open up headroom. And that means greener data centers. There are really two main ways of making greener data centers a reality: one is to use less energy, the other is to use greener energy. And we are pursuing activities on both fronts.

Alessio [00:48:22]: I think one of the ideas in the Sovereign AI theme is the government also becoming an intelligence provider. So if you think about the accounting work that you mentioned, some of these AI models can do some of that work. In the future, do you see the government being able to offer AI accountants as a service on Singaporean infrastructure? I think that's one of the themes that is very new. But most countries have shrinking populations and declining workforces, so there needs to be a way to close the gap in productivity growth. And I think governments owning some of this infrastructure for workloads and then re-offering it to local enterprises and small businesses will be one of the drivers of closing this gap. So yeah, I was just curious to get your thoughts. But it seems like you're already thinking about how to scale versus what to put outside of the country. But we were.

Josephine [00:49:12]: We were thinking about access for startups. We were concerned about access by the research community. So we did set aside, I think, a reasonable budget in Singapore to make available compute capacity for these two groups in particular. What we are seeing is a lot of interest on the part of private providers. Some are hyperscalers, but they're not confined to hyperscalers. There are also data center operators that are offering to provide compute as a service. So they would be interested in linking up with entities that have the demand. We'll monitor the situation. In some sense, government ought to complement what is available in the private sector. It's not always the case that the government has to step in. So we'll look at where the needs are. Yeah.

Swyx [00:50:04]: You told me that this is a recent change in the way the government works with the private sector.

Josephine [00:50:09]: Certainly. We were talking specifically about training. We said that with adult education in particular, it's very often the case that training intermediaries in the private sector are closer to the needs of industry. They're more familiar with what the employers want. The government should not assume that it needs to be the sole provider. So yes, our institutes of higher learning, meaning our polytechnics and universities, also run programs that are helpful to industry, but they're not the only ones. So it would have to depend on the situation, who is in a better position to fulfill those requirements. Yeah, excellent.

Swyx [00:50:48]: We do have to wrap up for your other events going on. There are a lot of programs that the Singapore government, and GovTech in particular, runs to make use of AI within the government to serve citizens and for internal use. I'll share those in the show notes for readers and listeners.

Josephine [00:51:02]: Sure.

Swyx [00:51:02]: But I was wondering if you personally have a favourite AI use case that has inspired you or maybe affected your life or your kids' lives in some way.

Josephine [00:51:11]: That's a really good question. I would say I'm more proud of the fact that my colleagues are so enthusiastic. I'm not sure whether you've heard of it. Internally, we have something called AIBot. Yes.

Swyx [00:51:21]: Your staff actually said to me like three times, like AIBot, AIBot, AIBot.

Josephine [00:51:24]: Oh, okay.

Swyx [00:51:25]: I was like, what is this AIBot?

Josephine [00:51:26]: I've never heard of it.

Swyx [00:51:26]: But apparently, it's like the RAG system for the Singapore government. Yeah.

Josephine [00:51:30]: What happens is that we're encouraging our colleagues to experiment. And they have access to internal memos in each ministry or agency that are a treasure trove of how the agency has thought about a problem. So for example, if you're the Inland Revenue and somebody comes to you with an appeal for a tax case, well, it has been decided on before, many times over. But to a newer colleague, how was the decision arrived at to begin with? Now, they can input, through a RAG system, all the stuff that they have done in the past. And it can help the newer colleague figure out the answer much faster. It doesn't mean that there's no longer a need to pause and understand, okay, why is it done this way? To your point earlier, the reasoning part of it also has to come to the fore. That's potentially one next step that we can take. But at least there are many bots being developed now that are helping lots of agencies. It could be the Inland Revenue, as I mentioned earlier. It could be the agency that looks after our social security, which has a certain degree of complexity. If you simply did a search, or if you relied on our previous assistant, it was an assistant that was not so smart, if I could put it that way. It gave a standard answer. And it wasn't able really to understand your question. It was frustrating when, after asking A, you say, okay, then how about B? And then how about C? It wasn't able to take you to the next level. It just kept spewing out the same answer. So I think with the AI bots that we've created, the ability to give a more intelligent answer to the question has improved a great deal. But it's still early days yet. They represent the kind of advancements that we'd like to see our colleagues make more of.
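(Editorial aside: for readers curious what the AIBot pattern looks like in code, retrieval-augmented generation over past memos is, at minimum, a retrieve-then-generate loop: score stored documents against the query and stuff the top matches into the model's context. A toy sketch follows; the memo text is invented, and plain word overlap stands in for real vector embeddings.)

```python
# Toy RAG retrieval step: score past memos against a query and
# return the top-k matches to place in the model's context window.
# Production systems use embedding similarity; word overlap is a
# deliberately simple stand-in here.

memos = [
    "Tax appeal for late filing: waived penalty due to medical grounds.",
    "Tax appeal for late filing: rejected, no supporting documents.",
    "Procurement memo: approved vendor shortlist for data center upgrade.",
]

def score(query: str, doc: str) -> int:
    """Count shared words between query and document (toy similarity)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, k: int = 2) -> list[str]:
    return sorted(memos, key=lambda m: score(query, m), reverse=True)[:k]

context = retrieve("appeal for late tax filing")
prompt = "Answer using these past decisions:\n" + "\n".join(context)
print(prompt)
```

The generation step, not shown, would pass `prompt` to a language model; the reasoning-over-precedents step the Minister mentions is exactly what sits on top of this retrieval layer.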

Swyx [00:53:21]: Jensen Huang calls this preservation of institutional knowledge. You can actually transfer knowledge much more easily. And I'm also very positive on the impact of this for an aging population. We have one of the lowest birth rates in the world, and making our government systems smarter for them is the most motivating thing I could work on as an engineer.

Josephine [00:53:37]: Great.

Swyx [00:53:38]: Yeah, I'm very excited about that. Is there anything we should ask you, like open-ended?

Josephine [00:53:43]: Unless you had another question that we didn't really finish.

Alessio [00:53:47]: Yeah, I think just the elections piece. Yeah, Singapore's running for elections.

Swyx [00:53:52]: How worried are you? How worried are you about AI? And it's a very topical thing for the US as well.

Josephine [00:53:58]: Well, we have seen it show up elsewhere. It's not only in the US. There have been several other elections. In Slovakia, for example, there was content that was put out that eventually turned out to be false. And it was very damaging to the person being portrayed in that content. So the way we think about it is that political discourse has to be built on a foundation of facts. Otherwise, it's very difficult to have honest discourse. You can be critical of each other. It doesn't mean that I have to agree with your opinions. It doesn't mean that only what you say or what somebody else says is acceptable. But the discourse has to be based on facts. So the troubling point about AI-generated content or other synthetic material is that it no longer contains facts. It's made up. So that in itself is problematic. If a person is depicted in a realistic manner to be saying something that he did not say, or to be doing something that he did not do, that's very confusing for people who want to participate in the discourse. In an election, it could also affect people favorably or in a prejudicial manner, and neither of those is right. So we have taken a decision that when it comes to an election, we have to decide on the basis of what actually happened, what was actually said. We may not like what was said, but that was what was actually said. You can't create something and override it, as it were. So that was where we were coming from. It is, in a way, a very specific set of requirements that we are putting in place: in an election setting, we should only be shown saying what we actually said, or doing what we actually did. Anything else would be an assault on factual accuracy, and that should not become a norm in our elections. People should be able to trust what was said and what they are seeing. So that's where it's coming from.

Swyx [00:56:13]: Thank you so much for your time. You've been extremely generous, and it's quite something to have a minister as a listener of our little show, but hopefully it's useful to you as well. If you're interested in anything, let us know.

Josephine [00:56:21]: I hope your AI engineer conference in Singapore is a great success. Yeah, well, you can help us.

Swyx [00:56:26]: Okay.

1

We also plan to hold the first international AI Engineer conference in Singapore to coincide with ICLR 2025; please save the date and visit in April 2025!

2

Or, more sadly, a thing ceases to be important just as it becomes easy to define.

3

DeepMind founder and now Chemistry Nobel Laureate Demis Hassabis also grew up in Singapore, but that's trivia more than anything! Hope you enjoyed that, footnote gang.
