Re:Orient
Welcome to Re:Orient, a podcast by Asia Society India and Dalberg that covers the most pressing issues in South Asia, from AI and education to air pollution and the changing geopolitical order. Over six episodes, we speak to educationists, policymakers, thinkers and experts to gather meaningful – and often surprising – insights.
Inakshi Sobti, CEO of Asia Society India, and Gaurav Gupta, Global Managing Partner at Dalberg Advisors, bookend each episode with key questions that lie at the heart of the issue: what matters, how it got to be this way, and what we can do about it. Gaurav then builds on these questions with our diverse group of guests, giving us insights on where South Asia lands on each of these themes.
Episode 3: The Future of AI: Regulate or Innovate?
If you're on the internet, you've made a Faustian pact: you have unlimited access to knowledge and information, and the ability to make connections, in exchange for your data. With AI, the subject of data privacy has renewed significance. Tech companies freely and without permission use the vast amounts of data available online to train AI models. But do we care? Does it matter that we have little control over our images and words being used by companies for a profit, if the payoff is the opportunity to use AI tools? There are other questions worth thinking about: How do we tackle the biases inherent in the data that AI is trained on? How do we regulate tech companies and make them accountable to people?
This is one facet of the issue. The other is the undeniable fact that AI has immense benefits. It improves productivity, it can deliver education to remote, underserved communities and it can enhance healthcare in economically challenged communities. While we raise important questions about the ethics of AI, perhaps we should also keep in mind that AI is an evolving technology and that it will in time overcome its current faults.
If you replace a judge with an AI, you may get a decision almost instantly. But here, the process of reasoning matters as much as the result does. And of course, on the utilitarian plane, a lot of people will say, well, it doesn't really matter, Apar, because judges are also prone to error, and that's why we have appeals. So where do you go for an appeal when an AI decides against you? To another LLM?
SPEAKER_06: Well, welcome to Re:Orient. I'm Gaurav Gupta.
SPEAKER_01: And I'm Inakshi Sobti. We're going to dive into one of the most polarizing debates of our time: is artificial intelligence the problem solver it's been made out to be, or does it cause more harm than good?
SPEAKER_06: And as is the case with most things, there are both positive and problematic aspects to AI. I think central to it is this tension between users of tech and the tech companies that want to push innovation in an unrestricted way. And that does give rise to serious ethical concerns. So we discussed these with Apar Gupta, a lawyer and advocate of digital rights; Anshul Tewari, the founder of the online platform Youth Ki Awaaz; and Urvashi Aneja, the founder of the think tank Digital Futures Lab.
SPEAKER_01: And I think the key question is: where do we draw the line, Gaurav? I think we can agree that in the case of privacy, for instance, that line has blurred. It doesn't bother us anymore that tech companies are accessing our data without consent, as long as we get to use tools like ChatGPT. But there are other issues, right? Such as biased data, the fact that power is increasingly getting concentrated in the hands of a few technocrats, the disruption that AI will cause to the job market, and so on. In fact, this last point is particularly anxiety-inducing, and I'm quite keen to know LinkedIn founder Reid Hoffman's views on AI's effect on jobs.
SPEAKER_06: Yeah, as you'll hear in our interview with him, he echoes a widely held belief that while there will be upheaval, AI will help us work more efficiently. And it's really worth listening to, because he drops a lot of great nuggets. If you remember, Sundam Mahalingham from our first episode had similar thoughts: that AI would do the grunt work for us, giving us the freedom to focus on the more creative aspects of a job. From discussing the pitfalls of AI, we move on to the benefits. With Reid, I also talk about how AI has the potential to revolutionize education and healthcare, especially for the underprivileged. And we wrap up with what I think is a really wonderful example of AI being used for good, thanks to the Taiwanese entrepreneur Eric Wang. Many of you are probably using one of his products right now: his company, RhinoShield, is one of the biggest makers of phone cases in the world.
SPEAKER_01: I'm excited to know more. Let's bring on our guests.
SPEAKER_06: What is that attitude? Do you think young people are actually qualified to make a decision on that everyday transaction they're having on the internet, where they're sharing their data in return for effectively free goods? Do they actually understand what they're letting go of?
SPEAKER_05: That's an interesting question, because I feel like there is no direct way for us to know if they fully understand it. But I have a couple of things that I will share. At the beginning of last year, we conducted a very small survey, specific to Kerala, to understand what young people's concerns are when it comes to AI and access to digital technologies. Only 12% of the respondents identified privacy as a significant concern. And there are two reasons that we have noticed for this. First, there is a lack of easy-to-understand information about what privacy actually means and what its violation does to me as an individual. And second, people often feel, "I'm not important enough, so what can my data really do? It's not something worth protecting."
SPEAKER_06: Today, those that are able to hoard data and those that have processing capabilities have an enormous amount of power that we don't realize we are giving to them. Apar, in the past you've campaigned for net neutrality. What is the fight today? Is it around data protection and the way it's being used?
SPEAKER_03: The unfortunate issue which has developed in India is that we have not had any data regulation, and we still don't have it. The Digital Personal Data Protection Act, which has been passed, is still not enforced in India, and we're still debating its rules. Because the understanding, and I may be in a minority here, has been that we need to power economic growth and technology development in India on the basis of the least amount of regulation possible. And even then, I think there will be a lack of any kind of regulation: all the publicly scraped data which is needed for making a foundational LLM, a large language model, is actually exempted by the Act. Anything in the public domain can be gathered and used to train an LLM. So basically, any of the public statements made by Gaurav, Anshul or me can be training data for an LLM, without any kind of control under data protection law.
SPEAKER_06: How has Europe fared with higher data protection, versus the US with lower data protection, in terms of building new sets of organizations and companies in this space? I just wanted to double-check: are you still saying that there is an inherent tension there? That, yes, you can have protection, but that obviously restricts the room to maneuver of companies. Do you see a way to achieve both?
SPEAKER_03: There is no denying, Gaurav, that there is a tension. But we have to choose the path which we need to take. Even in the United States, fines have been imposed on social media companies by the Federal Trade Commission for gathering user data through unfair consumer practices. But India's complete inability to slap even a one-rupee fine on any technology company for improper data practices leads to the things which are happening today: for instance, differential pricing for different kinds of handsets on the same quick-commerce service, in which an iPhone user is quoted three or four times the price an Android user gets, or where a dying battery gets you higher surge pricing for the cab you're booking. These are the practices which impact people on a daily basis. And I'm not even talking about surveillance or policing right now.
SPEAKER_06: I guess I just wanted to hear a little bit about the solution side. So, Anshul, you mentioned that people are putting a very low price on their data. And I think part of it is that none of us realize that while our data individually may be worth little, our data together is worth a lot. Once you stick it all together, you can be manipulated, you can be nudged to buy certain things and so forth, and that's valuable. So what's the way through on that? Is there a way at the consumer level, at the individual level, to control this, or is this all about top-down regulation?
SPEAKER_05: I don't know if I have a silver-bullet answer to that. But if we take into account the fact that people put a low price on their own data as individuals, it's also safe to assume that they will generally put a low price on collective data, unless there are severe social implications. If you look at who is consistently raising concerns about data being violated, you will notice that it's largely representatives of organizations that work for already marginalized groups. For example, in 2023 there were riots in Delhi, and facial recognition technology was used by the police to identify people who were involved. And largely, all of the people who were identified were Muslims. So you'll find a lot more groups that work with Muslims, with Dalits, who are way more vocal, because they understand that the data sets these systems are trained on are in themselves biased. So how do you ensure that there is a level playing field, or a potential for innovation, when the data itself represents a certain kind of bias? I feel the fight currently, when it comes to social security, protection and violation, is mostly coming from groups that have traditionally been facing these problems, and it's not something that has reached mainstream discourse yet.
SPEAKER_03: In a way, we look at technology as giving us answers for socioeconomic development which have not existed before. But if we think about it, technology will also do some things which harm people. And I just want to come to one use case of AI which demonstrates it. A lot of people have now started focusing on AI-based solutions, and even the Supreme Court's own e-committee is looking at how AI can help solve some of the problems of what is called judicial pendency, in which one case drags on for 10 to 15 years. But what are the risks which emerge? One is homogenization. If you replace a judge with an AI, you may get a decision almost instantly. But here, the process of reasoning matters as much as the result does. And of course, on the utilitarian plane, a lot of people will say, well, it doesn't really matter, Apar, because judges are also prone to error, and that's why we have appeals. So where do you go for an appeal when an AI decides against you? To another LLM? That's a question I'll place to people who are thinking about it in utilitarian terms. The second risk: what happens when the model has been trained on historical data, on the types of cases which have been prioritized on the basis of human biases, which are now put to scale efficiently but never explained, because LLMs basically guess the next thing? Will it mean that bail will be denied, or higher amounts of surety will be placed on bail in criminal cases, if a person who, based on historical records, comes from a lower socioeconomic group is thereby classified by the AI as a potential flight risk? These are important social judgments we as a society need to make, rather than a large language model.
SPEAKER_06: Utilitarians aren't trying to put judges out of work. A utilitarian might say: great, we only had 10 judges, and there weren't enough. Now those 10 judges still keep their jobs, they all become part of the appeals process, and they can continue to focus on refining quality while 80-90% of the donkey work is being done by the AI.
SPEAKER_05: I just want to make one point, which is that I think it depends a lot on the utility of what it's being used for. For example, the Kerala High Court has already, I think, begun using AI for cheque-bounce cases, because they see it as low-hanging fruit, a potential for quick decision-making; there isn't a lot of conflict there. Similarly, we came across another use case: because judges need references to past cases very frequently, and often it's a manual process where somebody has to go and bring a book and read it out, they are now using trained LLMs to refer to past cases very quickly, reducing in-court time as well. So I think it depends a lot on the utility: what can enable a human to make quicker decisions, rather than making the decision for the human. That is probably how I would view it. But if you go outside of the court system too, I think there are cases where just digital technology, forget AI, has not been very useful for people at large. For example, there are issues even to this day in the MGNREGA scheme, where workers who were forced to use digital tools to register their attendance were not able to because the tools were not working; they were not paid in time, and their daily wage was taken away. There are also cases in Rajasthan where somebody who hadn't died was marked as dead, and then they were not able to go for employment or get their money in time. And these are not one-off cases; they have happened consistently enough for us to be concerned. And this is just an app which is not AI-enabled. It does make us wonder what would happen if it does become AI-enabled.
And the second thing, also very important here, is that who gets access first is a very big question. The three of us already probably use many AI tools in our work, so adaptability for us is a lot easier. But you're already dealing with a population that may have broadband connections but doesn't have full knowledge of how to use digital technology.
SPEAKER_06: A point that was brought up in all this discussion we're having about privacy, and about making sure that our algorithms and so forth are not harming us, leads to this idea of regulation. And then there is this conversation: is it the European way, which regulates more but has perhaps therefore diminished the competitiveness of these sectors? Because everything we're talking about, digital, the internet, these are also huge job creators, and future job creators, in the economy. What does that mean for striking that balance?
SPEAKER_03: Gaurav, I think there is inherent value in recognizing that technology does lead to a high degree of human productivity, and that it does serve national goals, economic and social. With that recognition, we also need to ask what harms result, and mitigate them to the maximum extent possible. A lot of what we are looking towards in terms of economic development through AI also needs to be looked at from the perspective of the potential impact on employment opportunities for lower-skilled professions. I do recognize that AI may result in people writing a better email quite often, which is a big thing for a lot of people in India. But at the same time, they may not even be hired to write that email, right? Their jobs will not be given to a human being; they will be given to an AI agent, which can even be called by the same name. It might be called Apar, Gaurav or Anshul, but we won't be working there. So I think the government needs to look at AI from the perspective of the rapid developments occurring in technology, allocate money towards research, and constantly look at the state of technology development and its social use and impact, not only on the economy but also on the rights of ordinary citizens.
SPEAKER_06: The ethics of AI have dominated the conversation in the public sphere. In some ways, though, it feels like it's not necessarily a new conversation. Any time there's new tech, there's a new set of challenges that are raised, but they always come down to who owns what and who has control of it. Urvashi Aneja is an important voice in this conversation. She has spent a decade focused on the ethics of tech, and I was interested to hear what she thinks is different about the current state of affairs. What's different about this debate versus the one we've been having over the past decade?
SPEAKER_02: I mean, if we think about the privacy issue, what feels different is that it feels like we've given up on that fight, right? A couple of years ago, we thought privacy was really, really important. It was a fundamental right; we must protect privacy. And it now seems that, with generative AI and all the conveniences we're experiencing and all the hype around how transformative a technology it is, those conversations have taken a back seat. We seem to have said: okay, well, we can't have privacy if we want AI, and we definitely want AI. We've almost decided that lack of privacy is a competitive advantage.
SPEAKER_06: Which is crazy, right? That we're almost trading that away for competitiveness. Give me an example, a visceral example, of where you're already seeing the impact of that sort of privacy overreach playing out. Or what is one of your big concerns there?
SPEAKER_02: Just look at the production of any of these models, like ChatGPT. We're talking about data that's scraped from the internet without any consent from the people who are the creators of that data. And that's the case with all these large models, whether it's ChatGPT or Midjourney or whatever it might be. So, in some sense, the size of these models and the scale at which they're being developed predicates that there is a violation of privacy happening somewhere. That's what I mean: we seem to have just said, okay, we're okay with that now, because we want more AI and these tools are giving us so much benefit, or they have so much promise.
SPEAKER_06: Isn't this conversation always technology-dependent? Take a musician: before the advent of tapes and CDs, before the ability to record, the idea of their rights (and this is not just privacy, it's IP too) was them singing something live. Technology enabled them to extend that span of control over IP in many ways. One could argue that, in a similar way, technology can take it away as well: some technologies have come in that are able to listen to you and then recreate you much more simply, et cetera. Fundamentally, it's not that the notion of IP, or the notion of privacy, has a fixed definition. It seems to always coexist with the nature of the technology present in that moment.
SPEAKER_02: Oh, for sure. And none of this is to say that technology is bad; that's not the argument here. Nor that notions of privacy or copyright or intellectual property are stagnant in their definition. Obviously, they have to evolve with the times. But the point is rather that how they evolve has to be subject to democratic participation, democratic control and democratic engagement. How they evolve should not be dictated only by a small section of society that is pulling us towards a certain innovation trajectory; it should be a more open and inclusive conversation. Similarly, even in the examples you're giving about artists, many of them have benefited, but there are many that haven't. And so I think the recognition here is that the gains of technology will not be equitably spread, and the benefits and harms will not be equitably spread. So while we are benefiting from technology and we see all the benefits, we need to simultaneously look at what the harms are and who is getting disenfranchised, and make sure that we carry them along.
SPEAKER_06: The way I interpret what you've just said is that every technology moves us to a new equilibrium, and that new equilibrium may not necessarily be better, at least in terms of how the gains are distributed. And what I find particularly interesting, and interesting is probably too light a word, almost dangerous, about tech is that it has this sort of winner-takes-all capability. And something quite striking about social media and AI is that these capabilities are also deeply influential; they influence how we think. You're now in a world where a set of billionaires who control this technology can also use it to convince everyone else that what's good for them, a small group of people, is in fact good for the masses. And it's interesting that in much of the politics playing out over the past year, and perhaps longer, you're seeing billionaire-backed candidates presented as the answer for the working class.
SPEAKER_02: Yeah, I'm glad you raised that. That was the other point I was going to raise in response to your first question about what's different about the current AI moment. Which is exactly what you're saying: the barriers to entry, the barriers to actually participate in the so-called AI revolution, not as users of AI technology and applications, but as people who are actually going to contribute to the development of AI or the governance of AI, are so high. In many ways, AI is predicated on a concentration of power and reproduces a concentration of power. If we look at how current AI systems evolved, and why big tech companies are at the forefront of it, it's because they had access to this huge amount of data, a continuous stream of data, and a business model that continues to generate that data. And then they had access to incredible computational infrastructure. Even now, when smaller companies try participating in the so-called AI revolution, very often what you're talking about is them getting API access or something of the sort. You look further down the stack and it is the same few players, whether it's Google or Microsoft, et cetera, who eventually control the core components of that stack. So there's a huge concentration of power, and I think that's perhaps one of the biggest issues with the current AI moment. And we're seeing it globally: that concentration of power is not just market concentration. It has an impact on politics, on how our society functions, on what our children think, on what is taught at schools. We're also seeing that every sector is becoming a technology sector: healthcare is a technology sector, education is a technology sector, mobility is a technology sector.
So these companies have spread into all of these sectors and have a huge amount of power, and it's very, very difficult to actually exercise control. And I think now, with advances in large language models and the promise of generative AI, it's become even harder in some sense to regulate these companies. The argument is: if you regulate us too much, you might lose out on the global AI race, so don't regulate us; or, we're the only ones who actually understand this technology, and this technology is very dangerous, so don't interfere too much, because we're the only ones who can both deliver the promise of AI and save you from catastrophic risk.
SPEAKER_06: Urvashi, Apar and Anshul all agree that the tech companies that own AI must be regulated to ensure fair practices, to ensure that consumers are not taken advantage of. But I want to turn to Reid Hoffman at this moment to respond to the idea of AI regulation. Reid is invested in AI; he's a champion of the tech and believes that it can be transformative. So let's take the example of climate. One of the greatest downsides of AI is the amount of energy it consumes, at least in its present form. I asked Reid what kind of governance model is needed to ensure that companies are doing all they can to mitigate their impact on the environment. And I think that is also part of a broader conversation about how you do regulation without killing innovation.
SPEAKER_00: Part of the thing that people on the regulatory front too often go to, whether they're regulators or government people or journalists or commentators or critics, is immediately: well, put me in charge and I will tell the companies what to do or not do. And that's bad mechanism design. But what you can do is say: look, we have the following concerns, so we're going to mandate that the companies publish the following kinds of reports, or at least produce those reports for the government. And by the way, how can we trust those reports? Well, it's pretty straightforward: we have this notion of auditing. All of the big companies are audited. So if you said, okay, your auditors, who are a third-party organization, must validate, either to the government or to the public, depending on what the thing is, that the following kind of thing is happening within climate and energy, how green energy is being developed, what the intersection curves look like, and so forth, then that actually begins to move you substantially in the right direction, just because you're measuring it. And then, when you look at the measurements and the predictions, which are validated by the auditors, and you go, this is a problem, you take that problem to the next level.
SPEAKER_06: Critics have talked a lot about the challenges of governance, representation and equity. But Urvashi, what I've not heard enough about is: here are AI's capabilities that could improve the world's governance, that could improve the ability for the small to be heard by the big. And actually, AI is now all over government; it's starting to become a tool used in decision-making. Do you see an ability to take that technology by the horns and say, okay, let me now redirect it towards helping achieve some of these positive outcomes that you are seeking?
SPEAKER_02: Yeah, I mean, there are again a few bits to what you're saying that need to be unpacked. I think the issue is very rarely the technology itself. There are some problems with the technology, yes, but I think we can fix them; there are technical fixes, there are other fixes. The real issues are the incentives that go into building the technology, the business model of the folks building it, who gets to participate in building it, and the goals towards which we're orienting it. So yes, you can take the technology by the horns, but it's really about how we take the economic systems in which that technology is being produced by the horns. That's the question we need to be asking if we want to redirect technology towards some of the use cases you're talking about. To take a very concrete example, there are some really interesting use cases of AI for civic participation. In Taiwan, there's the Polis platform, which allows a more participatory form of democracy where people can vote on various issues, and some sentiment analysis can be conducted to understand how people feel about a particular issue. Now, with large language models, there are some really interesting experiments being done on large-scale sentiment analysis: how do we really understand what the masses want, and in some sense go where the state has never been able to reach? There's a lot of interesting potential there. But the issue that then arises is: once we do that and we maximize this potential, where is that data going to be stored? Who owns the server?
If the server is owned by a big tech company that we are not able to regulate or that we don't trust, or if the server and cloud infrastructure are owned by a government that we don't trust or cannot hold democratically accountable, then that very same system becomes ripe for surveillance and misuse. The way I see it, a lot of this is also being funded by venture capital, and venture capital may not be the right kind of capital to drive some of the positive impact that we need. You might actually need more patient capital; you might need boring, old-school bank loans or something else to maximize the potential of this technology. Because the way venture capital works is that it's trying to capture the market, pushing startups to move fast and break things, without taking as much accountability for what the outcomes of that investment might be. The folks investing in a venture fund are not that invested, and many startup founders are not that invested either; many will just be looking to sell out and move on to the next startup. That disconnect, I think, is really important for us to acknowledge. And I've seen this a lot in the Indian context, where you have a lot of very well-meaning startups building AI for agriculture, AI for healthcare, and so on. Part of the challenge is also how we dial it back. If you only have X amount of capital or resources available, but the technology costs 200x of that, and the trajectory it's on is going to keep increasing, at some point do we need to ask, and I don't know the answer, whether this is the best trajectory? Or do we need to course-correct in terms of what public spending can actually afford? That's the conversation I think we're not having.
SPEAKER_06I feel like time and time again, we go back to this analogy of building monuments. I mean, I know the story of the Taj is overblown, but there's this idea that you build monuments off the backs of labor that is often paid badly or treated badly. In a similar way, is that also what's happening here? I've heard lots of stories about people identifying whether this is correct or not correct, people being exposed to all these images, et cetera. Can you take us inside that human infrastructure that sits underneath the digital infrastructure?
SPEAKER_02The speed of AI production and the nature of AI production are predicated on an exploitation of labor, because you have cheap labor somewhere to which you can piece out these tasks of image labeling, data annotation, et cetera. We're talking about people who work for very low wages in very deplorable working conditions. For content moderators, there is also a huge amount of mental trauma and stress that they experience. I mean, imagine watching horrific images of beheadings or children being murdered, and you have to watch a thousand of them every day. It does something to your brain. And since we were talking about philanthropy earlier, there is also this narrative among development organizations that these things are at least creating jobs for those who don't otherwise have access to livelihoods, or people who are not very highly skilled. But what we've seen in our research is that it's not actually creating jobs for the lowest-skilled workers. Many of the workers doing this work are actually IT graduates in places like India or places like Kenya, graduates who have not been able to find work in the IT industry. That's what's fueling how AI systems are built. It's also allowing things like Google's latest model for instantaneous translation. That's happening because there is someone at the back end, or many, many people, who have been doing that labeling work. But there has to be a way of doing this better. There has to be a way in which we can still build these systems, where we can have something like live translation, because that has so many benefits, but at the same time also ensure that we are treating the workers who build these systems equitably, that we're paying them fairly, and that they have decent working conditions.
SPEAKER_06We've had a great debate so far about the ethical issues around AI from the perspectives of data privacy, corporate governance, and labor. I think it's crucial to keep these questions in mind while thinking about AI, but I'm also excited about the possibilities of it. I'm going to turn back to Reed at this point to hear his thoughts about how AI is going to affect the future of work and how AI can benefit disadvantaged communities. What do you see as some of these new exciting AI technologies that allow you to extend the impact of LinkedIn, a professional network, to many of the folks that perhaps are not being reached today?
SPEAKER_00I think that part of what we most need to do is help people discover how artificial intelligence can be an amplifier for them, an increase in their agency. They say, well, how do I learn this new tool in order to do these tasks better, more creatively, as work transforms, as industries transform? And the answer is: AI can help you do that. The notion of an agent isn't only something that can help you go act in the world. I do think we will see that there will almost be nothing like an individual contributor anymore; what we will see is folks who deploy with a set of copilots, a set of agents. They're managing a set of agents. And that will transform the nature of work. You say, oh, well, how do I learn how to do that? Well, there will also be agents helping you, agents that are like your coach, your trainer, your assistant, et cetera. And I think that's part of how we help enable this transformation in a very human way.
SPEAKER_06What is your mental model of a company like those behind the big foundation models? What is their obligation to point the direction of AI towards an AI for good, towards really fundamental changes in humanity?
SPEAKER_00So I think there's going to be a lot of job transformation and a lot of product and service transformation because of AI. Roughly, one of the ways we've been trying to make progress in the world is to have electricity everywhere, so electricity can power not just HVAC but machinery and everything else. Now we're going to try to have intelligence everywhere. We're going to bring intelligence to everything, and that will transform a whole bunch of different things, including, of course, not just products and services and companies and markets and industries, but also jobs. And companies are obviously most focused on how they provide products and services and jobs within their country, their region, and globally. That's part of how they provide for the future of economic opportunity, and part of that adaptation of Adam Smith's Wealth of Nations and Theory of Moral Sentiments: we're providing these things, being of service together, and it's an efficient mechanism for taking risks and trying things and all the rest. Now, that being said, companies are also members of society. And when we have these kinds of big transformations, part of what you want companies to do is not just drive as intensely as they do on cost structure, namely: hey, could we deliver our current products and services with half the people we currently have? That would be a good thing in terms of productivity, and by the way, there's not a zero-sum amount of work; there could then be 2x the number of firms competing and so forth. But there's a transition difficulty.
And navigating that transition difficulty is part of being members of society, which is: hey, how do I not just explore the cost equation, but also explore the growth-of-products-and-services equation, the growth-of-capabilities equation, and invest in the human capital of the talent equation to facilitate that in various ways? That's one of the things companies need to keep in mind as they navigate what is going to be a very broad transformation, as we begin to have not just electricity everywhere, but intelligence everywhere.
SPEAKER_06So we have a nexus of challenges right now, right? We've got climate change, increasing inequality, widening geopolitical divides, and obviously this massive job realignment due to AI and automation. How do you navigate a pathway when the public can see how AI could make each of those things worse? Right now, even the initial start feels like we're on the negative side of the ledger; certainly on climate, we're on the negative side of the ledger. People see the numbers: hundreds of billions of dollars, 200 million being paid to an individual. That feeds the question: well, is this just another inequality force? So with all these big disruptions that AI sits alongside, how do we think about it addressing them, rather than, as I think is part of the public imagination at the moment, exacerbating these disruptive forces?
SPEAKER_00There's a complex set of questions there. One, on the talent side, people go, oh my God, they're being paid so much money. Well, actually, starting pitchers for some US baseball teams make like $30 million a year; frankly, I think it goes up to $37 million a year. And you ask, which one is generating more value for society? The AI researcher may be creating a model that gives intelligent guidance to everything happening in human life, everything from "my kid has this medical issue" to "how do I do this kind of work?" So actually, paying talent is a good thing; there are markets for doing that. That I don't worry about. Now, then they say, well, what about the data centers providing this intelligence? Well, once we get this intelligence going, if we are not massively improving the climate impact of our energy use, we're clearly not building the intelligences we should and we can: everything from much more intelligent washers and HVAC and all the rest, to maybe even running grids more efficiently, and other kinds of things. So there's a whole bunch of adding intelligence there, which I anticipate will be a net positive. Now, that doesn't mean there won't be a lot more electricity used, because people are using intelligence for things, and one part of it will be green, making sure there's green energy and efficient use of electricity. But the other thing is, part of the reason why we use so much energy today, and part of the reason why it causes the climate change we're seeing, is that the transport and the HVAC and the manufacture of buildings are all very important contributors.
And so getting into that and making those things green as well is one of the things intelligence can help with, even as you're using electricity across the whole thing.
SPEAKER_06Can I just jump in on this bit here? I think there's a lot of potential around the efficiency aspects. At the same time, energy itself has become the competitive-advantage dimension of this race towards superintelligence. That ultimately, regardless of what you make more efficient, would mean the demand for energy is never-ending, and that there is a much higher incentive to get all the oil out of the ground before you go to any further, more expensive technology. That is an incentives issue, right?
SPEAKER_00I don't think we can innovate our way out of that easily, or at least we run a risk there. But speaking at least within the US envelope, all the hyperscalers are actually taking a bunch of the money they're investing in data centers and elsewhere and placing future purchase orders on nuclear energy, fusion and fission. They're placing future purchase orders on geothermal. So they're massively increasing the amount of investment, as customers, in these kinds of alternative energy schemes, no matter what else is going on, because they want the data centers of intelligence they're building to reflect that they're paying attention, as good stewards in society, to building clean energy. So there are other incentives also at play. But there is, of course, the first race: how do we build as much intelligence as we can by building scalable training centers, which consume a lot of energy in the training, even before you get to the inference. That does happen, and they're doing it that way. But there's a side benefit that isn't just the benefit to the data centers, because you go, oh, we've got geothermal really working now. Now we know how to build it not just for data centers, but for cities and for every location that has it.
SPEAKER_06Your argument, and I think this is consistent with some of the other things you've said, is that even though right now there's a capex aspect to this, ultimately, when you get to the flow, the innovation will outpace the size of the problem. That's a prediction, right? At some point you also need a governance model that says, wait, is that actually happening? Because at some point you have to take a call; there's some irreversibility to some of the harms we're doing. What's your sense of how this happens? I mean, there are a lot of rich individuals with a belief system that, the public might argue, comes from the fact that those people have a lot to gain, right?
SPEAKER_00Well, one, those people having a lot to gain is part of how we made a whole bunch of progress in society already. So incentives to the upside are, generally speaking, a good thing. And when people say, well, that's caused climate change: yes, because there are 8 billion people, with a substantial number living in the middle class; that's what causes climate change, right? And that comes out of the system. That's a good thing. We wouldn't want to go, oh, we're going to arbitrarily draw a lottery and move down to 4 billion people. That would be an inhumane thing.
SPEAKER_06It's part of a famous Hollywood film, yes.
SPEAKER_00Yes, exactly. We keep the positive mechanisms of growth and then adjust those for these climate change issues.
SPEAKER_06In what ways do you believe AI can serve as a scaling force for empathy, equity, and empowerment, especially for communities that are historically excluded? Is there an opportunity here to bridge certain divides?
SPEAKER_00One, on empathy: my startup Inflection AI started, really importantly in the field, with basically saying that training EQ is as important as IQ, and we tend to model off how we interact with things. So having these agents be empathetic and everything else is a broad-based empathy. Another thing is inclusion: wealthy people have always been able to afford tutors for their children, et cetera, and now you can have a tutor in every pocket. And what's more, when you begin to think about healthcare outcomes, one of the things I think we are a small number of years away from is a medical assistant running on every smartphone.
SPEAKER_06I thought it would be fun to go from thinkers and ethicists to entrepreneurs, and talk about interesting innovation from an, I guess, unexpected quarter. Eric runs a company that makes phone cases; in other words, he puts plastics out into the world. In a moment of self-reflection, he decided he needed to use his skills as a technologist to remove plastic from the world, and the idea he came up with was made possible using basic AI. So what we're going to hear is really a story that goes from the 30,000-foot conversation we've been having to one about how we're going to see AI on a daily basis, and the potential it offers to start cleaning the world up, making up for our mistakes, or just adding more social impact. That's why I asked Eric to share this story. Eric, tell us about your midlife crisis.
SPEAKER_04You know, as an industry, we are producing one billion phone cases every year. Each case is about 100 grams, so that's a lot of waste, and no one's recycling it. So I feel, hey, this is actually where I can add value. That's almost like my lifeboat for my midlife crisis.
SPEAKER_06It's a pretty good midlife crisis, yeah.
SPEAKER_04Yeah, yeah. Because purpose, for me, is really important. You know, I'm a material scientist; I should really use my knowledge to fix this. Years ago, we launched a product called Circular Next, the first phone case made entirely from used, post-consumer phone cases from our users. It was a really proud moment for our team. We work with convenience stores in Taiwan, where we have collection points. So in Taiwan, it's really interesting: if you go to Family Mart, that's a convenience store, you return the phone case and they give you ice cream. Because what we want to do is treat our customer as a partner; they are our supplier. In the linear economy, your relationship with the customer kind of ends when you finish the purchase. But in our model, we now need to educate the customer: you have to bring the stuff back to us. We also work with a company called Algenesis; they are using seaweed to create a plastic. If every customer brings back their used materials, we can recycle everything. But then the thing is: what if we can go plastic negative? Meaning, what if we can use ocean plastic, rebo plastic, to make our phone case? So I started looking into ocean plastic. I tried to buy some, but quickly I realized this space is messy; it's very complicated. Essentially, what we want to do, in the big picture, is take the ocean plastic and have a technology that can use that plastic to create a blend we can use as a raw material for our product. But we are not making it ourselves.
SPEAKER_06But if you could talk a bit about the technology that's going into Circular Blue, especially the version of AI you're using to help power it, I think that would be really interesting.
SPEAKER_04So we have a stationary base, the mothership, that will house or nest multiple drones, and the drones will do active collection. We also have an air drone that takes aerial photos. When it takes aerial photos, it uses AI to identify and label the rubbish. Then, mapping with GIS, the AI will figure out the coordinates of where the rubbish is and send those coordinates to the ocean drone. The ocean drone will then go and pick it up. We take pictures of the ocean and identify foreign objects that are not animals, specifically plastic. So we label all the plastic. Essentially, that's what we use AI for right now.
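[Editor's note: the pipeline Eric describes, detect plastic in an aerial photo, georeference it, and hand the ocean drone a waypoint, can be sketched in a few lines. This is a minimal illustration of the georeferencing step only, assuming a straight-down photo with a known ground sampling distance; all names, image dimensions, and coordinates are hypothetical, not the company's actual system.]

```python
import math
from dataclasses import dataclass

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

@dataclass
class Detection:
    px: int      # pixel column of the detected object in the aerial photo
    py: int      # pixel row
    label: str   # classifier output, e.g. "plastic" or "animal"

def pixel_to_latlon(det, drone_lat, drone_lon, gsd_m, img_w=4000, img_h=3000):
    """Convert a detection's offset from the image centre into a lat/lon
    waypoint, assuming a nadir photo with ground sampling distance gsd_m
    (metres per pixel)."""
    dx_m = (det.px - img_w / 2) * gsd_m   # metres east of the drone
    dy_m = (img_h / 2 - det.py) * gsd_m   # metres north of the drone
    dlat = math.degrees(dy_m / EARTH_RADIUS_M)
    dlon = math.degrees(dx_m / (EARTH_RADIUS_M * math.cos(math.radians(drone_lat))))
    return drone_lat + dlat, drone_lon + dlon

def waypoints_for_ocean_drone(detections, drone_lat, drone_lon, gsd_m):
    """Keep only plastic detections and turn each into a pickup waypoint."""
    return [pixel_to_latlon(d, drone_lat, drone_lon, gsd_m)
            for d in detections if d.label == "plastic"]

# Illustrative frame: two plastic detections and one animal (filtered out).
detections = [Detection(2100, 1400, "plastic"),
              Detection(900, 2200, "animal"),
              Detection(3500, 600, "plastic")]
print(waypoints_for_ocean_drone(detections, 25.03, 121.56, gsd_m=0.05))
```

In a real system the `label` field would come from an object detector run on the photo, and the waypoints would feed the ocean drone's navigation; this sketch covers only the coordinate hand-off in between.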
SPEAKER_06Eric, your story is a great note on which to wrap up this conversation. Thank you.
SPEAKER_01That was a meaty primer to the AI debate, Gaurav. The ethical issues, the safeguards we need to consider, the many excellent uses of AI, and so on.
SPEAKER_06Yeah, although I want to reflect on this episode a bit, because I think in a way there are these two frames. People seem to be talking about it like social media. We went through this social media boom where we gave up a lot of privacy without really knowing what we were giving up, and we're now discussing some of the harms. And there's a question of whether AI is that kind of impact, but on steroids. But I also wonder: should we be using a different frame? Do we need to use the frame of nuclear technology? Because social media may be able to change governments, may even cause huge mental health issues, but it can't fundamentally destroy the planet. And there's a sense here that AI just has these bigger impacts. I wonder if we have the right frame.
SPEAKER_01Well, it's a good reflection. That's the point of Re:Orient, isn't it? To provoke us to think in tangents. In the next episode, we're going to examine the cultural sector from various points of view, with Tasneem Mehta, who heads the Dr. Bhau Daji Lad Museum, Roshan Abbas, a media professional, actor Nandita Das, and Jesmina Zeliang, whose organization Heirloom Naga makes handcrafted textiles in beautiful Nagaland. Stay tuned.