This episode is part of our special series on the India AI Impact Summit 2026, examining the conversations, perspectives, and debates that are shaping global AI discourse.
Episode Summary
In this episode of Interpreting India, Nidhi Singh is joined by Mariano-Florentino (Tino) Cuéllar, President of the Carnegie Endowment for International Peace and one of the very few people to have attended all four global AI summits, from Bletchley Park to Delhi. The conversation traces the arc of AI summit diplomacy, what has been accomplished, where the gaps remain, and what the process reveals about how different parts of the world are thinking about a technology that is moving faster than any single government or institution can keep up with.
How has the conversation at AI summits shifted from existential risk and frontier safety to economic opportunity and beneficial deployment, and is that a sign of progress or a loss of focus? What did India bring to the AI governance conversation that the UK, South Korea, and France could not, and how does the scale of the Delhi summit change the trajectory of the series? With the UK and the United States stepping back from multilateral consensus, can the summit series still deliver meaningful outcomes? What should Geneva, the next summit host, actually try to accomplish?
Episode Notes
Tino has been in the room at all four AI summits, and his account of how the conversation has evolved is both candid and grounding. Bletchley Park, he says, was about putting AI on the agenda as a matter of global significance. Seoul was about bringing the private sector formally into that conversation. Paris marked a pivot towards economic opportunity, reflecting a growing recognition, particularly in Europe, that being seen only as a regulator was not a position anyone wanted to hold for long. And New Delhi brought something none of the previous summits had: scale, and a genuinely different set of questions. Half a million people attended, and the conversations happening on the floor of the convention center were about crop yields, public service delivery, and what the technology meant for jobs and families. That, Tino says, is not a dilution of the AI safety agenda. It is a necessary part of building one that the rest of the world can actually be part of.
On the criticism that these summits produce declarations that no one enforces and voluntary commitments that companies quietly walk away from, Tino is pragmatic rather than defensive. He points to the eradication of smallpox, the reduction of nuclear weapons, and the Montreal Protocol as reminders that consequential international progress tends to look messy and incremental from the inside. The network of AI safety institutes that now exists across multiple countries, the UN panel on AI, and the fact that frontier labs are taking evaluation and testing seriously at all, are all, in his view, real if incomplete achievements. The harder question, particularly after the U.S. and UK declined to sign the Paris declaration, is whether the summit process can hold its shape as geopolitical competition intensifies and the appetite for multilateral consensus shrinks.
For Geneva, Tino hopes the conversation moves inward, towards understanding how AI is actually changing organizations, families, and daily life at the micro level. He is also candid about risks he thinks are still not being taken seriously enough, particularly around loss of control, pointing to early evidence of models that scheme, misrepresent, and in controlled environments show signs of self-preservation. His overall posture is one of cautious optimism: he does not think the technology should slow down, but he does think the work of aligning it with what is genuinely good for people has barely begun.
Transcript
Note: This is an AI-generated transcript and may contain errors.
Nidhi Singh: Hello and welcome to a new episode of Interpreting India. From geopolitical complexities to economic uncertainties, India faces critical challenges in its quest for a more prominent role on the world stage. This season, we at Carnegie India continue to bring voices from India and around the world to examine the role of technology, the economy, and international security in shaping India’s future.
I am Nidhi Singh, Associate Fellow at the Technology and Society Program at Carnegie India.
Over the past three years, a new kind of diplomacy has taken shape around artificial intelligence. Four global summits, from Bletchley Park to Seoul to Paris to Delhi, have tried to build international consensus on how to govern a technology that is moving faster than any regulatory framework can keep up with. But each summit has shifted the conversation in unexpected ways, and the geopolitics around AI have changed dramatically since the process began.
Today, we are going to talk about what actually happens inside these rooms, whether summit diplomacy is producing real results, and where all of this goes next. Joining us today to discuss this topic is Mariano-Florentino Cuéllar. Tino is the president of the Carnegie Endowment for International Peace, a former justice at the California Supreme Court, and has served three U.S. presidential administrations. He previously led Stanford’s Freeman Spogli Institute for International Studies.
Tino is also one of the very few people who has been in the room at all four AI summits, which makes him uniquely suited to help us make sense of this process. Tino, welcome to Interpreting India.
Tino Cuéllar: Thank you, Nidhi. It’s great to be here. I’m delighted to have the chance to have this conversation.
Nidhi Singh: Before we get into the summits, I wanted to ask you a question about your background and your involvement in all of this. You went from the California Supreme Court to running a think tank at a time when policy is moving fast, technology is moving faster, and geopolitics is a bit more complicated. How does that transition change the way that you think about technology policy? And honestly, do you miss being the person who gets to make the ruling?
Tino Cuéllar: Well, I think we all live our lives by trying to be true to ourselves. But if we have the chance to also learn from what is happening around us, to my mind, the chance to be a justice on one of the more influential courts in a very big country, in the United States, but serving nearly 40 million Californians, was a huge privilege.
In many ways, the day-to-day was a little different. In the court setting, you can take almost as much time as you need to work through really difficult legal issues, complex questions of judicial administration. How do you make a system responsive to the needs of millions of people speaking different languages? Something also very relevant to India, and very much a topic of discussion at the India AI Impact Summit.
But the through line that connects my work here at Carnegie and at the courts is really a concern I have had from the time I was a little kid, which is: how do you take the commitments we make in society about living in a fair society, about trying to provide opportunities for people, about trying to keep the peace, and make them real for people?
In the courts, that was very much a function of how do you resolve complicated cases about criminal justice or technology or the environment or government power. Here in the think tank world, it is very much about how to take cutting-edge knowledge about the biggest questions facing the entire world, about the energy transition, about keeping the peace in the Middle East or in Europe, about trying to figure out how to live with new technologies that we are developing, and making those questions tractable for policymakers and understandable for the informed public.
Nidhi Singh: All very light topics to deal with, clearly. But from that, let’s talk about the summits now. Let’s start at the very beginning. Let’s go back to 2023. You have now been to all four of these summits, and there are not a lot of people who can say that.
So talk to us about the very first one. What was the mood in the room? What were the kinds of things that people were worried about at that point? And knowing what you know now, what does Bletchley look like in hindsight?
Tino Cuéllar: The mood was hopeful, but also with concern about how quickly the technology was moving and what capabilities it might give people who had ill intent to develop a biological weapon, or a novel kind of chemical weapon, or a cyber exploit to break into a government computer system or a hospital system.
But there was plenty of hope as well. There was a sense that we could accelerate scientific breakthroughs with access to frontier AI, much discussion of the protein-folding breakthrough that DeepMind was involved in, a company that was headquartered in the UK. There was a sense that as the technology became more available to more people, as smartphones had changed the world in complex ways, we would be living in a world shaped by how much more easily people could get access to expert knowledge.
There was a sense that so many ways in which the government dealt with people might be affected by new kinds of interfaces, the private sector too. And I think the decision that Prime Minister Rishi Sunak made to hold this meeting at Bletchley Park was telling and informative and symbolic because Bletchley Park is not just any random conference center. It is the place where Alan Turing labored with some of the very earliest computers to try to break the Enigma code that allowed the Nazis in World War II to keep their communication so effectively shielded.
That breakthrough was not only a tremendously significant event for national and international security, it was also an opportunity for people to begin to see what these machines were capable of, how they could recognize patterns and begin to connect the dots, literally and figuratively. So here we are, so many decades later, in a setting where Alan Turing, the very same person who was so critical in that breakthrough, had sketched out in a paper the possibility of machines that could think.
And I do not think anybody was under any illusion that we had yet achieved a kind of AGI breakthrough, but there was a sense that the world needed to come together to start acting, to do something to make sure that technology had maximum benefit and minimum risk.
Nidhi Singh: I think you have made a really interesting point about existential risk and how people were talking about it. I am going to circle back to that once we talk about all four summits. But I do want to put a pin in it for now.
After Bletchley Park, we had Seoul. Seoul is really interesting because that is the one that got companies to put something concrete down on paper. There were frontier safety commitments, risk thresholds, pre-deployment testing. What made this possible? Because getting tech companies to commit to things can be notoriously difficult, as we can see. And what do you think about these commitments now? Did they hold, or do you think people have walked back?
Tino Cuéllar: It’s a great question. The bottom line is I think these commitments did matter. And I think they came in part because these companies realized something. Any of us who study technology, whether it is automobile transportation, aviation, satellites, GPS, smartphones, the rise of the internet, and how all these technologies affect the power of governments and international relationships, realize that so many decisions about technology end up really running through the private sector.
For many reasons, in part because some of the technologies are honed and perfected in the private sector, but also because these entities have enormous reach globally, and they are often making frontline decisions about how to deploy the technology and what the technologies are going to be okay to be used for.
We think about the questions that arise when a company that owns massive amounts of data is hit with an official request for that information. The question is how much does the company fight that or go along with the government in any number of countries with different legal systems? These things all point in the direction of private sector actors being important.
But the people running those companies, the CEOs, the general counsels, they also can learn from history. And what is a fair inference, if you take that history seriously, is that the safer, more secure, and more trustworthy a technology is perceived to be, the more quickly people will adopt it. If you want people to be driving electric vehicles, have their batteries designed in a way that does not blow up. If you want people to be using the internet, let them make secure payment transactions, and so on.
So, I think these companies were encouraged in their own internal decision-making to accept voluntary commitments around testing and evaluation, for example, of frontier models, in part because they wanted the technology to be a good-news story and not something that is rickety, insecure, and unclear in terms of how it works.
Frankly, it also takes a while sometimes for governments to catch up with how quickly the technology moves. I do not think that means it is impossible for government to do that. But those moves on the part of government sometimes might take months, years, or in some cases decades. And quick and reliable action from the private sector is a good start.
Now, what do I think about those commitments now? I think they were a good start. But oftentimes, some of the commitments are framed at 20,000 feet or 10,000 feet. We will do testing. What testing? With what degree of involvement from a third party? What transparency will be provided in the test results? How are you going to deal with critical incidents? What will be reported to whom? All these questions then end up requiring attention afterward.
Nidhi Singh: Yeah, and I think this really brings to light a more serious question on what happens after the summits and how to ensure that all the momentum that you get at the summit is actually carried through.
Bletchley Park and Seoul, to some extent, had a narrative arc. Then comes Paris, and Paris shifts the thinking in ways that some people found refreshing, while others found maybe a bit concerning.
Tino Cuéllar: By the way, going to Paris always does that. I do not think anything quite stays the same if you go to Paris. The meals are different, the sound of the language is different. Perhaps that was not totally shocking.
Nidhi Singh: So, the summit rebrands itself from safety to action, and it puts investment and economic opportunities at the center. Why do you think the Action Summit in Paris made this pivot? And I want to be really direct about this question because it is a slightly harder question: do you think moving this conversation away from frontier AI risks ends up diluting the safety agenda, or do you think this was a necessary correction to make at that point?
Tino Cuéllar: Let me split the difference and say I think it was a predictable and reasonable direction. Let’s put it that way.
For so many of us who want a world that can benefit from what this technology can ultimately do, and frankly what technological and scientific progress can do for the world, keeping close engagement with the risk side and the benefit side is always going to be important.
Let me take a step back. We will talk about Paris, but I will just say: the world I was born into, the world you were born into, is a world where life expectancy was shorter than it is now. The world my parents were born into, go even further back, we are talking about giant changes in how much life people had access to.
How we went from a global life expectancy in the mid-40s to well into the 70s in the course of a couple of generations is a science and technology story, first and foremost. But it is also an institution story. It is a story about how we solve collective action problems. How do we get vaccinations into people’s arms? How do we reduce the amount of particulate matter pollution, whether it is in Delhi or Dayton, Ohio? How do we make people’s lives better?
That, to me, is the crux of why, if we are talking about this technology that effectively takes a machine, a physical object, and makes it smart, makes it capable of having a conversation and having input in a conversation like this one, we are going to have to deal with what the risks are and what the benefits are.
So I would say if Bletchley was about putting on the agenda that this technology was so important that it had to be an important vector for global conversation and discussion and decision-making, and Seoul was about acknowledging that the private sector was very central in this journey and needed to engage closely with these challenges and opportunities in order for humanity to benefit, then maybe it was not so surprising that Paris was about what we can do with this. How can it actually affect healthcare, financial services, education, banking?
One of the nice things I saw announced at the Paris Summit was ROOST, this effort that folks were involved in with different philanthropies, including the Schmidt-related philanthropies, to deal with the challenge of building open-source tools that could improve trust and safety and security in smaller companies, not necessarily the ones that could throw giant amounts of money at this.
That, to me, is a good example of how you can combine the safety, trust, and security focus, and also the benefit focus. If you are able to develop these open-source tools, in some cases using AI, that make things easier for medium-sized companies, then their business models can proliferate and they can benefit more people with new and innovative approaches to how you use the technology.
Nidhi Singh: That is a really good answer, and I think that does make sense.
Tino Cuéllar: Well, it was a really good question, so it is easy to answer well when you ask a question. But it was definitely different. Again, there is an interesting British-French subplot here too. Let’s remember the British can point to DeepMind, they can point to Alan Turing as, in some ways, the conceptual architect of the earliest ideas around artificial intelligence. The French have some companies like Mistral that are making progress.
But I think it is also fair to say that in the heartland of the EU, which I would say France is a part of, there had been an increasing recognition that the approach taken to data privacy, GDPR, and different data governance techniques had created an impression around the world that the Europeans viewed their role primarily as regulators.
Clearly one of the subplots, I thought, in that conversation in the Élysée Palace and elsewhere, was that the French were interested in having a conversation about how European companies and European governments saw a different future for themselves, one in which they were a little closer to the frontier and also quite interested in the benefits.
Nidhi Singh: That is actually a really useful insight to have from somebody who was in the room. But going from the UK to Seoul to Paris is different, but maybe not as different as the next one, which is Delhi.
Then we have the Impact Summit, which was earlier this year, and you were here for that. First summit in the Global South, half a million visitors, over 80 countries signing the declaration. What do you think India brought to this conversation that the UK, South Korea, and France did not?
Tino Cuéllar: India is giant. Nearly 20 percent of the world’s population is Indian. And when I think about the scale of India, I sometimes point out to folks that you can take my home region, I was born in Latin America before becoming an American immigrant, you can take the entire population of Latin America from Brazil to Central America to Mexico to Colombia to Argentina, double it, and add additional jurisdictions, California, Germany, and that is what you need to do to get to the size and scale of India.
So India brought a perspective that comes from the giant part of the world that is filled with societies and people, countries, cities that are developing, that are still acquiring the technologies and access to economic opportunity that are much more common in what we often associate with the northern parts of the world.
This is what makes India so interesting. Even the chunks of Indian society that have high access to technology, that are near the technological frontier, that are well into the global middle class, may be a relatively small proportion of the overall Indian population right now, but in absolute numbers we are talking about far more people than the populations of many wealthy countries.
It highlights the sort of insight, possibility, life experiences, governance challenges, and opportunities that India brings to this conversation about artificial intelligence. Think linguistic diversity. Think about how we can get more compute and more functionality from somewhat less reliance on incredibly elaborate, incredibly expensive compute, GPU infrastructure at the frontier.
That permeated the conversation in India. So more people, more of a sense of what is the purpose of the technology when we talk about beneficial deployments. What does that mean to people who are in a situation that is far further from a kind of homeostasis of global prosperity, that is still on the rise? How does that mesh with some of what India’s own experience developing digital public infrastructure teaches the world? That is very much what I was seeing.
Nidhi Singh: We have discussed the narrative arc of the summits. Now we will get into some harder questions, but before we do, I want to know a little bit more about the behind-the-scenes of the summits. You have sat across tables from heads of state, tech CEOs, and civil society activists when you were in these summit rooms. Which group of people is the hardest to get a straight answer out of?
Tino Cuéllar: That is very interesting. I am going to answer that in two ways, Nidhi.
Number one, I would say the folks who are former government officials, for example, but who now work in the private sector. They are kind of betwixt and between because they are coming from public service. They often have a relationship with the public. They have long been practiced in the art of explaining things to the public. And yet, they are in a context where there are crosscurrents, where they have to tell a compelling story about what their particular company is doing and how it is doing it.
So I think sometimes, these are often incredibly thoughtful and smart people, but you just kind of know that they are trying to figure out how to draw the Venn diagram between what the company needs them to get across, what they themselves are interested in, and what they think the world should know.
But I think the most interesting thing about people involved in the summits is apparent if you compare the Delhi Summit and the Bletchley Park Summit in the UK. The UK affair was really dedicated to two groups, and it was a fairly small operation. There were high-level policymakers, private sector, public sector, civil society, and then a slightly larger group around them of technical experts and participants from different backgrounds, maybe academics, who had some things to say in the conversation. But it was never more than a couple hundred people.
Seoul and Paris were a bit bigger than that. But there were always kind of two audiences there. There was a small audience of higher-level decision-makers and a somewhat wider audience of people who belonged in the conversation and had some background in the details of the technology or the policy.
In Delhi, there was a third group, and this is your point about the half a million people. Students, young professionals, people trying to start their startups in the ecosystem of India, broader groups of government officials trying to make sense of this, and state governments, for example, trying to think about how this technology might affect the delivery of services, the provision of new information to the public. And that was a far, far larger group.
Just trying to interact a little bit with this group of people, trying to listen in on their conversations as they were trying to draw together their own careers, what this technology may have meant for their lives, for their families, for their workplaces, that was fascinating. I had never seen that happen before.
Nidhi Singh: To build off this and go back to the point that you talked about when we were talking about Bletchley, you had mentioned existential risks and frontier AI. You can see that there is a divergence here.
When the summit was in Bletchley, the conversation was about existential risks and frontier models. Then you come to Delhi, and at this point we are talking about crop yields and public service delivery. So we are seeing two different readings of that. One says that the conversation is maturing and is getting more grounded, and it is dealing with real-world impact. The other says that it is losing coherence because now the conversation is trying to be everything to everyone at the same time. Where do you come down on this?
Tino Cuéllar: I come down in the middle, which is a safe place, but I am going to make a case for the middle as the right place to be.
It is certainly true that if one leans into a narrative of the positivity of any technology, fossil fuels, aviation, smartphones, the internet, one is likely to miss the full picture because no human activity, obviously, is purely beneficial. Countries, armies, cities, they all have their pros and cons, and the arc of human civilization and life for most of us is to take what is the reality around us and ideally, if we are even a little public-minded, to make it better.
How can public transportation, economic opportunity, architecture, and infrastructure in this or that city be a little better when my kids are growing up in it than it is for me, for example? But without some attention to what everybody has to gain from a technology and how people might see it differently, and particularly how people might need to focus on the benefits rather than the costs differently based on where they sit, then you cannot really have a global conversation about risks and governance.
I will give you an example. When fossil fuels began to develop as a technology of real economic opportunity, which is to say as a technology that gave people access to cheap fertilizers that improved food production, this is a little piece of what the Green Revolution was about, as well as cheaper forms of transportation that allowed people to not live their entire lives in a little village or city where they were born, as well as everything ranging from pharmaceutical products to packaging technologies that changed daily life, those changes were not equally beneficial to everybody.
Some people paid a higher price, living close to industrial facilities and bearing more of the pollution. But also, many people who might have been perfectly well off before fossil fuels began to change the world might have seen the coming of fossil fuels, at best, as a business opportunity, but not really as something that affected their own lives or was likely to improve them.
So when I think about the importance of taking seriously what a technology can deliver around access to expertise, and by the way, it is imperfect right now. I am not suggesting that it is as good as every doctor or every teacher. But I think about people for whom better healthcare is never going to come from giving them access to a high enough number of phenomenal, purely human doctors that everyone has the level of access to good medical care that people in the global middle class have.
So we have to think about what we can do to make the world a little bit better able to access those solutions and opportunities. I think the conversation about the benefits side of this in a country like India is going to be a little different than in a country like the United States or Denmark or even Japan.
To my mind, that is what we are really trying to achieve. We are trying to achieve a conversation that can be open and inclusive enough for everybody so that if you are taking seriously those needs of being in the global middle class that billions of people feel, many of them in India, we can then take seriously what else needs to be on the table, including how to make sure that these technologies do not create vulnerabilities for critical infrastructure, do not worsen already pretty intense geopolitical tensions, and do not present us with a scenario where humans really have less control than we would like about what is happening around us.
Nidhi Singh: Yeah, I think that is a lot to ask for the summit.
Tino Cuéllar: Yeah, it is true. It does not all happen just in a summit. But this is what I think a summit at its best aspires to try to achieve.
Nidhi Singh: One of the other larger criticisms that we have seen come up after almost every edition of the summit is that critics say summits like this produce declarations that are not being enforced. They have voluntary commitments that companies can walk away from. And it is essentially a photo op for politicians, heads of state, and technology companies.
You have been inside all of the summits, and by this point you have heard all of this criticism. What is your response to this?
Tino Cuéllar: My response is that I would rather live in a world where these summits are happening than in a world where they are not. You can accept that they are not going to, by themselves, really solve any one of the biggest problems or make sure that AI gets delivered in a form that provides real benefits to most people in the world.
But let’s take a slightly wider historical lens. I think about the eradication of smallpox. I think about how we went from 70,000 nuclear weapons to closer to 12,000. I think about how we were able to limit the chemicals that deplete the ozone layer, and therefore raise the risk of not only skin cancer but other health problems on Earth, by removing CFCs. I am talking about the Montreal Protocol and the Kigali Amendment on top of that.
All of that takes international meetings, takes photo ops, takes a lot of symbolism. It takes many meetings where people wonder if anything was accomplished, and then there are breakthroughs. That, to me, is the reality of global relationships and diplomacy: three, four steps forward, one or two steps back, and a bunch of photo ops in between.
Any reasonable person can get sick and tired of photo ops. But I have to tell you, if I tell the story about those big leaps in life expectancy we talked about from the 40s to the 70s, they run through some of these summits. They are not perfect, they are not all equally valuable, but they matter.
Nidhi Singh: That is a very pragmatic but also optimistic view of this, which I think we are kind of missing right now.
One of the things that has come up, and you just referred to it as well, is that there are a lot of leaps being made forward and a lot of people who are trying to move the conversation forward. But it is not happening in a vacuum because geopolitics around this also plays a large role.
The geopolitics have shifted quite dramatically since Bletchley. When this happened in 2023, I remember you called Bletchley a remarkable achievement in diplomatic terms. Then we had Paris. By the time that happened, the U.S. and UK both refused to sign the declaration. The current posture that you are seeing generally geopolitically, and maybe more specifically in the U.S., is more about competition than cooperation. With this reality on the table, how does a summit series that was primarily built on multilateral consensus continue to function?
Tino Cuéllar: Well, I think it has to depend on a degree of honesty about both what remains to be achieved and what has been achieved. Here, as I often end up telling my kids when they ask me, “Is this good or bad?” I will say to them, compared to what? What is your standard?
So let’s start with the things that remain to be done. Is there a framework for countries to understandably strengthen their defense capabilities by relying on fast-evolving forms of artificial intelligence without creating a race-to-the-bottom dynamic, where virtually every aspect of warfare gets automated and humans increasingly lose some degree of real control over and responsibility for warfare? No, that remains to be done.
Is there real global-level clarity on the three or four biggest, most compelling beneficial applications for AI, and a single global repository for technical assistance, financial support, and coordination to address those deployments? That is almost a trick question. No, we are not there yet either.
Do we have a framework where countries can say with confidence: we are competing on frontier AI, but there is real certainty that every frontier-level model with the capability to materially improve how easily non-state actors design some kind of cyber, chemical, or biological exploit, beyond what they could do with very low-level expertise plus the internet, gets tested before it is deployed? No, we are not there yet.
On the other hand, what do we have? We have a network of AI security, trust, and safety institutes around the world, including a very first-rate one in the UK and some early efforts in countries like India. That network, I should say, faced the prominent risk that the new administration in the U.S. might take apart what had been created in the Commerce Department within the framework of the National Institute of Standards and Technology. That institute persists. It has a different name, and it has particularly focused on innovation and security, but it continues to be a top-notch repository of real knowledge inside the U.S. government about how you test frontier models.
Do we have an international UN panel to look at AI technology, how it is changing, what risks it poses in a way that brings together scientists from all over the world? Absolutely, we do. Do we have the frontier companies taking very seriously their responsibility to have scaling plans that look at the risks of their technology? We in fact do.
Do we have some jurisdictions with laws that simply say that if you are building frontier models, you should provide some transparency about how you test them? Absolutely, we have that in New York and California. So I think real success has been achieved, up to a point.
If I think about intelligence not only as the subject of the technology we are talking about, but as a metaphor for what we are trying to achieve, we are building synapses in the global brain, in effect. Do more synapses need to be built? Absolutely. But I think we are going to get through this a little bit with an understanding that this is a shared moment of real significance to the world. And I think we have made some good progress on that score, at least for the moment.
Nidhi Singh: I agree with you to a large extent there because I think if you just look at the conversations that happened in Delhi, a large part of it was also about bringing a global majority lens to the conversations. And I think that will go quite far, or at least I hope so.
When we were having the conversations here in Delhi, a lot of the conversation was about sovereignty and access to compute and what locally relevant AI looks like. Do you think that this will be a permanent shift in how we talk about this, or do you think it was just because it was happening in a global majority country?
The next summit is now in Geneva, and there is at least some amount of concern that as you move these conversations back to Europe, the focus that we have developed so far through this summit might get dropped. What do you think is going to happen in the next one?
Tino Cuéllar: India is an authentically rising power. It is already an enormously important country. It has a powerful military. It has a giant share of the global population. So it is neither surprising nor unreasonable for a country like India to ask: what role is this country playing in the technological frontier? And what aspects of the technology stack, whether it is sophisticated hardware or the lower-compute models that provide multiple-language functionality or whatever it may be, are going to be most deeply in the heartland of this society, in this country?
I think many countries might ask that: Brazil, Turkey, certainly South Korea, and so on. But it is important to put the discussion of sovereignty and AI in a broader context. That context is one where a vision of economic cooperation that seemed very attractive in the 1990s is no longer really the template for how we think about global relationships. In that vision, the real barriers to deep economic integration were increasingly technical; they were about non-tariff barriers.
The language of that era suggested that this was mostly about getting the right experts around the table, figuring out how to turn the dial a little further toward lowering those intricate barriers, and ultimately just letting goods, capital, perhaps even people, move freely. That is not how the world is understood right now.
There are real tensions between powerful countries: Russia-U.S., China-U.S., India-China, and so on. Those tensions genuinely put that whole vision at risk of looking like a poor fit for the reality of the world. So in that kind of environment, all this discussion of sovereignty is not so surprising.
Nidhi Singh: I am going to pause on the summit conversation just for a minute to zoom out a little bit and maybe talk about India and how we are looking at building out AI.
India has been building, like you have already alluded to, one of the most ambitious national AI strategies because we have the largest population, a billion-plus people, a massive developer base, and a government that really wants to lead when it comes to AI deployment. What is your take on the trajectory of AI development and deployment in India? And what kind of a role do you think India will eventually have in the global picture?
Tino Cuéllar: There was a ton of energy at the summit in Delhi around sophisticated models that could provide functionality with less compute in areas that were particularly important to India: voice, language. That is terrific. I think that is a good start in a direction that might make India and Indian companies not only relevant to the beneficial deployments that we need to see in a country with so much linguistic diversity, but also to so many people who, for multiple reasons, communicate in voice rather than sitting down and typing at an interface.
But it is pretty clear, I suspect, to Indian policymakers that there is more to be worked out. Some of the deals announced to develop data center capacity and further energy generation, together with some of India's centers of industrial capability, all point in one direction: the ambition is more than very efficient models with a very specific tie to Indian society.
Over time, I would imagine that there will be a conversation in India about open-weight and open-source models, how to leverage those effectively. There will be an opportunity to talk about how to lean into data gathering around beneficial deployments and the use of AI by people as societies change.
In the United States and in Europe and perhaps in Japan and some of the advanced industrialized countries like South Korea, for a variety of reasons, there will be a lot of interest in tracking better how very systematic use of frontier AI shapes the public, labor markets, youth and child-rearing issues, education, social relationships.
There is probably an opportunity for India to understand in a similar way, at scale, in a very different society, how models that might have greater and greater functionality around language, agriculture, and education are actually used by a very large, inventive, creative, and diverse population.
Eventually, I think the question for the world will be whether the more closed architectures around AI in the current American frontier labs, and I am sure in some of the Chinese frontier labs as well, retain a special advantage at any given moment, such that relationships will have to be worked out between the providers of those models and a country like India. Or whether the delta between those closed architectures and what is available through an open-weight or open-source framework, whether from the United States, China, Europe, Indian companies, or somewhere else, closes.
It is a little too early to tell, but I think that will also drive the next wave of policy challenges for countries like India.
Nidhi Singh: I think a lot of these conversations have started. It is really interesting to see where this goes next.
But along similar lines, going back to the conversation we have had: we have now talked about Bletchley, Seoul, Paris, and Delhi. What are you hoping will happen in Geneva? The next summit is in Switzerland, after four summits, four declarations, and a whole lot of momentum. What do you think Geneva should deliver to make the series feel like it is building towards something?
Tino Cuéllar: I would love to see Geneva focus on how the architecture of AI is evolving and changing us as we change it. So from safety, trust, and security, to voluntary commitments, to action, to beneficial deployments and a broader global perspective, and now to some attention to the micro level of how organizations, societies, families, and companies are all being shaped by this technology. What can we learn from that?
This is not necessarily a subject for summitry and agreements, but I think it can inform important vectors for scientific cooperation and evaluation cooperation. How can we take what we are learning about how to better evaluate scheming capabilities in models, risks of loss of control, but also what a very beneficial human-machine relationship looks like, and share some of the knowledge for how we assess that and also pool some of the inventiveness and creativity of the world to help solve those problems?
There are always going to be some relationships that are a little bit more fraught, but to my mind there are many democracies, including perhaps ones in Asia, Europe, and the Americas, as well as many countries that have fundamentally different systems of government but have real incentive to figure out how to share.
Demis Hassabis talks about a CERN for AI, kind of like a scientific cooperative body to truly advance the frontier of how we understand the properties of this technology. But I think that begins with an understanding that if we want to really tell the story about fossil fuels or the internet or smartphones, three giant technologies that shaped our lives, we would pay a lot of attention not only to high-level government policy or company strategy, but instead to how individual behavior and organizations and institutions are all being shaped. And I think that conversation will lead to really interesting vectors around scientific, evaluation, and technical cooperation.
I use too many words like scientific, technical, and institution, so maybe that sounds boring, but I mean to make it sound exciting. Getting a better view into how people are actually using the technology, how it is changing their minds, their organizations, and their relationships, that is surely a security and safety issue, but it is also a giant beneficial deployment issue. It has a lot to do with how we ultimately scaled up healthcare, particularly public health interventions, that really gave us longer lifespans and more prosperity.
Nidhi Singh: Yeah, I think there is a lot of potential in that area to just study how this is going, how it is going to affect us, and that eventually, of course, impacts the trajectory of how we look at AI, how it is being adopted, and how we look to scale up.
My last question on this now, and this is a more fun behind-the-scenes question again, because you have been at all of the summits. What was the most interesting or surprising thing that you learned about how different countries think about AI? And is there a perspective that you have encountered at these summits that actually changed your mind about AI in any way?
Tino Cuéllar: Two things to say. At the very first summit, I had a chance to sit close to the leader of a European country. Between the remarks that person made and a little bit of chatter, I can draw a straight line from that to the conversations I had with some of the rank-and-file people that India managed to accommodate at the Delhi summit, walking through that very, very crowded convention center, the name of which I do not remember but you do, and just feeling the crush of people trying to make sense of what this meant for them, how it would affect what we might call the intersection of politics, economics, and society.
The questions about what they delegate to the technology, what they keep for decision-making for themselves, what it might be for their kids, what it might be for jobs. I must say that it is just fascinating to me, and it leaves me a little hopeful that the leader of an important country in Europe and just somebody I might have met and asked, “What are you doing here? What have you found interesting in these exhibits? Why did you come here?” would say things that are so similar, in a way, effectively about trying to make sense of the mysteries of this technology, its potential, and what it means for their lives and the lives of the people they care about.
So in a way, that changed my perspective too, that we are still a common humanity despite all these differences of borders, station in life, economics, culture, and language.
I think another perspective that feels important here is the idea that much of what is going to fuel government strategy around this technology is not only an imperative around sovereignty, security, and geopolitical goals, but also, frankly, a sense of how people understand it in their daily lives. I had suspected as much, but particularly in Delhi, when I saw how a conversation about this could play out before a larger audience, it was so clear that irrespective of almost any system of government, there is a broader question: we are going through a period of rapid change in society, politics, and economics that is inflected further by this technology, but was already picking up speed and making our lives quite different.
Previous technological changes around the smartphone and the internet are a piece of that puzzle, but also a sense that we had a kind of framework for global relationships that is frayed. And inside even some countries, we had a framework for understanding what the nation was about that is also frayed. So I sense that in that respect, I go to these summits, I talk to people, and AI is like a Rorschach. They are seeing in it some of the broader conversations that they really want to have about: what is my place in the world? What is my real community? Is it the nation? Is it the province? Is it my city? Is it my family? And I think we need to make room to make those conversations productive rather than polarizing.
Nidhi Singh: It is so interesting to hear that overlap from Bletchley Park to Bharat Mandapam on commonalities that so many people across different spectrums are worried about. It will be very interesting to see where this goes next.
Tino Cuéllar: Indeed. To me, some of that conversation will ultimately be about more than how the technology is changing and who controls it, though those are both very important variables. It is fair to say that even though many folks listening might say, “Well, we are still in an LLM paradigm, the technological paradigm is not that different,” I would argue that once we saw the rise of inferential architectures, where an inference step increasingly consumes more and more compute, we are really dealing with a different technological architecture.
But ultimately, all of that goes to the question of what society we want to be. What priorities do we want to set? In a crowded and complicated world that is not teleological, it is not all going in one direction, we have some power, some agency to shape where we go. Individual people do with respect to their lives and families, how they use the technology without becoming overly dependent on it, for example. But whole countries do as well. And that is what makes this conversation so interesting.
Nidhi Singh: And exciting in a lot of ways. Still a lot of potential to do good as well.
Before we let you go, Tino, I have some rapid-fire questions for you. You cannot dodge them and you cannot give me a middle answer. Short answers and no hedging. Are you ready?
Tino Cuéllar: Okay, I have fastened my seatbelt. Two of them.
Nidhi Singh: Regulation or industry self-governance? What are you betting on?
Tino Cuéllar: Governance, which is another way of saying some regulation to solve the collective action problems, but not too much of it.
Nidhi Singh: Summit declarations or side conversations? Where does the real work happen?
Tino Cuéllar: Side conversations, but they are not always in back rooms. Sometimes they are with ordinary people who are trying to make sense of what is going on there, whether they are a security guard at Bletchley or somebody who is a student who attended the very massive and exciting, more public side of the summit in India.
Nidhi Singh: Are you more worried about AI moving too fast or governments moving too slow?
Tino Cuéllar: About AI moving too fast. I think in recent history there is pressure on governments to strike a middle road. Sometimes they do not want to stop technological innovation, particularly if they are functional governments that are not overly authoritarian.
Really authoritarian governments will often fear technology at the end of the day. They will try to use it, but they will also fear it. But I think that even for governments that are able to be flexible and use what they have learned around risk-based methodologies that are not overly intrusive, like we use in food safety, for example, or common law systems like tort law that are not all set in stone but can be adaptive, the truth is the technology is still moving awfully fast.
Nidhi Singh: What is the most overused phrase that you have heard at all four AI summits?
Tino Cuéllar: AI sovereignty. I started hearing that even at Bletchley. I talked earlier about how there is room for that conversation, and I do not want to deny its importance, but it is not really clear precisely what it is meant to signify. I would rather talk about AI benefits and AI trust, and about making sure that ultimately all countries benefit from the technology.
Nidhi Singh: What is one AI application that genuinely excites you?
Tino Cuéllar: The possibility of simulating sophisticated deliberation in real time among different voices, something that is integral to the success of the court that I used to serve on in California, or to the success of an organization like this one where we have people from India, Lebanon, Germany, Asia, and we are all part of one conversation.
Those conversations, which will need to continue happening among humans, will still be valuable always. They take time, they take energy, they happen across time zones. But actually simulating in real time hundreds of voices deliberating and learning from each other and seeing how that can improve the quality of our work, the feedback we get for our work, and ultimately in more scientific pursuits, breakthroughs around science, that is incredibly exciting.
Nidhi Singh: What is one AI risk that you think people still are not taking seriously enough?
Tino Cuéllar: I do not know that people fully understand what we mean when we say loss of control. The way it might be best understood is by recognizing that if we think about the essence of AI as being a human-generated technology for intelligent choice and interaction that is not an individual human, the first draft of that was the corporate form. It was actually the ability to come together and create, from partnerships, an entity that had its own legal persona and that could effectively make a kind of decision happen that was not just any one individual but was collective in some way.
It is fair to say that corporations bring us enormous economic benefits, but they also create staggering environmental disasters. Anybody who thinks that the global economy is one that is entirely controlled by a set of individual humans misses the reality that most of, or at least much of, what drives the global economy involves collective decision-making, often made through corporations.
Loss of control in that context is like, how do we align the behavior of those corporations with what is good not only for their shareholders, but to some extent, what is good for their larger environment, for the people who buy stuff from these corporations, and ultimately are part of the communities in which these corporations operate?
So consider the possibility that a sophisticated, very capable system might behave in ways that we did not predict and did not want, when we are already seeing pretty clear evidence of scheming, lying, and misrepresenting on the part of these models, and in some cases of model efforts in controlled environments to preserve themselves and other AI systems. Without denying for a moment that there are real upsides from AI, or suggesting that we should slow down the development of these technologies, that suggests to me that we need to take the loss-of-control risk seriously.
Nidhi Singh: If AI could solve one global problem tomorrow, what should it be? Only one problem.
Tino Cuéllar: Access to quality healthcare, which is still very, very unevenly distributed. Solving that would be a way of taking seriously the responsibility we have to our fellow humans. We live in a world where not everything is equal, where opportunity is not always distributed 100 percent equally. But the staggering degree of inequality in access to a life-saving treatment for your kid, or to a way of diagnosing something that is making your kid less and less able to thrive, is a painful reality for anybody who has thought about it for a minute, or been a parent, or thought about what we owe the rest of the world.
Nidhi Singh: Finally, my last question for the day and the most important one. What was the best food that you ate at any of the four summits?
Tino Cuéllar: Oh, for sure in India. But it is a little unfair because, as a vegetarian, I have to say that there is no better place in the world to eat as a vegetarian than India. The sheer range of spices, the subtlety of the flavors, and the fact that there is always a panoply of stuff that you can eat as a vegetarian gives me hope for the world, let’s put it that way.
Not to say that everybody has got to be a vegetarian. I fully support people eating what makes sense for them. But if you are going to be a vegetarian, then you might as well go to an AI summit in India because you will be very happy.
Nidhi Singh: Tino, this has been a really rich conversation. Thank you for being so generous with your time and your insights.
We will be back in two weeks with a new episode. To make sure you do not miss it, be sure to subscribe on Apple Podcasts, Spotify, YouTube Music, or wherever you get your podcasts from. To learn more about our research and team, you can visit us at CarnegieIndia.org. You can also find us on social media on X, Facebook, LinkedIn, and Instagram. Thank you for listening.
Tino Cuéllar: Nidhi, thank you so much for your terrific questions and for making these complicated issues accessible and interesting. I am glad to be your colleague.
Nidhi Singh: Thank you for listening. See you next time.