Monday, March 24, 2025

Unlocking the Potential: Accelerating Mass Adoption of Gen AI


Will generative AI live up to its hype?

On this episode of At the Edge, tech visionaries Navin Chaddha, managing partner at Mayfield Fund; Kiran Prasad, CEO and co-founder of Big Basin Labs; and Naba Banerjee, McKinsey senior adviser and former director of trust and operations at Airbnb, join guest host and McKinsey Senior Partner Brian Gregg to discuss the inevitability of an AI-supported world and ways businesses can leverage AI’s astonishing capabilities while managing its risks.

The conversation delves into the current state of the AI revolution and the challenges and opportunities that come with it. Naba Banerjee reflects on the initial hype surrounding AI applications like ChatGPT and the frustration of not seeing immediate widespread adoption. Kiran Prasad highlights the business impact of AI, particularly in areas like user engagement and personalization, and emphasizes that adopting new technologies like gen AI takes time and learning.

Navin Chaddha provides a historical perspective on AI, noting its decades-long evolution and drawing parallels with the rapid advancements in semiconductor technology. He predicts a significant impact from gen AI in the coming years, driven by exponential growth in capabilities.

When discussing the adoption rate for companies, the panelists offer insights into the different timelines for consumer versus enterprise adoption. Navin Chaddha identifies friction points in enterprise adoption, such as concerns around data privacy, talent shortages, and resistance to outsourcing data hosting. Kiran Prasad shares his experience as a startup founder, highlighting the pervasive use of AI across his business and the gradual learning curve for users.

Overall, the conversation underscores the transformative potential of generative AI while acknowledging the challenges of widespread adoption. As businesses navigate the complex landscape of AI technologies, the key lies in understanding the nuances of different use cases, addressing concerns around data privacy and talent acquisition, and embracing a learning mindset to fully leverage the capabilities of AI.

For more in-depth conversations on cutting-edge technology, follow the At the Edge series on your preferred podcast platform.

Users’ ability to understand how to engage with an agent and use it to accomplish things will simply take time. The solutions will be there, but adoption will potentially lag. As companies continue to integrate AI into their operations, it’s crucial to focus on solving real problems rather than implementing AI for its own sake.

One example of successful AI implementation comes from Airbnb, where Naba Banerjee, then director of trust and operations, used AI to tackle the issue of teenagers throwing parties in Airbnb rentals during the lockdown. By building an AI model that could keep up with emerging trends and patterns, Airbnb reduced party incidents by 55 percent. The lesson: use AI to solve specific problems rather than deploying it for its own sake.

Looking ahead, Navin Chaddha envisions a future where every human has a digital companion, or AI teammate, to work alongside them. The goal is for AI to augment human capabilities and help individuals reach their full potential. This partnership between humans and AI will be essential for organizations to thrive in the future.

Kiran Prasad emphasizes the importance of the agent approach over the copilot approach when it comes to using AI. Instead of treating AI as a mere assistant, the agent approach envisions AI as a ghostwriter that can do the work while receiving feedback and direction from humans. This collaborative approach is key to maximizing productivity and creativity in the workplace.

When it comes to addressing budget concerns related to AI implementation, Navin Chaddha suggests moving towards a business model that focuses on outcomes rather than hours or seats. This shift in mindset is similar to the evolution of payment models in other industries, such as the transition from perpetual licenses to monthly payments.

Overall, the key takeaway is that AI should be leveraged as a tool to solve specific problems and enhance human capabilities. As organizations continue to embrace AI, it’s essential to focus on collaboration between humans and AI to drive innovation and productivity in the future.

And then cloud compute happened, where you pay as you use, like you do for electricity. So, get these digital workers. They’re off most of the time. They answer calls. They reconcile AR [accounts receivable]. They file tax returns. Then you pay for them [the digital workers]. The tech is getting there, but the workflow isn’t there, because they need enough practice to get better.

In an enterprise setting, there’s one more thing I don’t like: the amount of training that is required on closed data. It’s much easier to create a scalable service by training on open data from the internet. But enterprises have custom data, which is complicated. That’s why you have to go to the fringes, which requires business model innovation.

Kiran Prasad: The mapping shift is like Uber. If you originally wanted to have a driver, you had to make enough money to have a driver and pay them full time. Then Uber made drivers easily accessible. That did not mean everybody got rid of their cars.

The future CEO in the age of AI

Brian Gregg: Naba, in a world where you have half machines and half humans, what does the leadership team of tomorrow look like? How does a CEO and her or his team operate in this hybrid world?

Naba Banerjee: I think it will take away a lot of the fear associated with leadership. People who want to start their own companies, or who want to lead companies, or be a senior leader, they think they have to be this person of exceptional talent with very creative vision and make the best decisions all the time.

They will be able to use AI to say, “Simulate these five scenarios for me and give me all of my risk-versus-benefit numbers. Help me understand if I’m going to get sued or not.” Exactly like what you are doing, Kiran. They can ask AI to come up with creative ideas and challenge each other.

Everyone cannot be exceptional at everything, but everyone is exceptional in at least one thing. But those other areas of your personality that may have kept you behind, now you can push forward with AI.

We will probably see many more leaders emerge. On the flip side, it’ll get harder to distinguish yourself because suddenly it’s an equalizer. Everyone has the same resources available. So that’s the conundrum that, though I’m not a fortune teller, I’m very excited to see.

Brian Gregg: Many of today’s CEOs followed a certain track, such as an MBA or a graduate degree, then a job usually in a commercial function like marketing and sales, and then worked their way up. Kiran, what does this CEO of the future look like? Is it the same pathway with a few tweaks? You’re playing the role right now.

Kiran Prasad: It’s the same pathway, but with more than a few tweaks. Part of what you do as you get into larger and larger leadership roles is you get really effective at understanding strategically where you want to go and then delegating tasks.

In an agentic world, you will be able to choose which tasks to delegate to an employee versus delegating to an agent. But you still need somebody who’s setting the strategy. What will continue to be an even more important skill is communication.

How effectively and concisely can you convey what you’re trying to accomplish to a person versus an agent? Today, as a message permeates through an organization, it typically dissolves. In an agentic world, you’re going to be able to maintain fidelity going from agent to agent. The more effective you are, the more precise the game of telephone becomes.

You have to be able to predict where the future’s going and guide strategy more effectively. So the whole “I’m going to just A/B test it” baloney is going to be less.

Brian Gregg: Navin, do you agree with this version of the CEO?

Navin Chaddha: I look at it as the CEO will always have to be raising money, because without money, you can’t do anything. Second, they’re in the business of mobilizing resources. This time, it won’t just be human talent. It’ll also be AI teammates. Then you have to make a decision. But smart CEOs, like athletes, surround themselves with coaches. And this time around, I’m going to have a lot of digital coaches who can improve my “serve.” CEOs have a tough time giving feedback. I’ll have a candor coach. They might be afraid of speaking. The best ones demonstrate vulnerability. They have to maintain a persona.

But with a digital teammate, it’s all confidential. My only input to AI-native CEOs is to get somebody with a fresh mind as their chief of staff. Now the question is: Will it be a digital teammate or a human teammate? Maybe it’s a combination.

Kiran Prasad: My view is that it’s a digital teammate. If you look right now at what is the biggest adoption for AI beyond ChatGPT, two other ones are Character.AI and Replika. They are effectively psychiatrists.

Naba Banerjee: AI therapists.

Kiran Prasad: Weirdly enough, people keep saying, “I don’t know if I trust AI.” But the number-one use case that seems to be working is the one where they have to trust the AI, which is insane!

AI as tool or takeover?

Brian Gregg: If we’re talking about 2028, when half the jobs are done by these digital teammates, what is the downside effect on humanity, on the employee base, and on society?

Navin Chaddha: I think humans are smart. I look at AI as yet another horse. It’s yet another tool. Humans will figure out how to ride it the way they did PCs and mobile. We’ll just get better. And when this productivity, amplification, and implementation comes, more revenues, more profitability, and more jobs get created.

So essentially, when GDP growth happens, it turns out the population can’t keep up, so some of the workforce will be AI teammates. So I’m very bullish. Every time a tech wave happens, humans win. Tech is the great equalizer. When offshoring happened, people thought India would take away all US jobs, but the US got richer and richer with globalization.

Naba Banerjee: I feel like I have to balance that view. The trust and safety world exposed me to another part of humanity that at times I wish I hadn’t seen. I know different marketplaces, dating sites, are trying to create an environment where humans can meet each other. After COVID, so many people are meeting for the first time digitally. And it is always scary for humans. Stranger danger is still considered to be one of the top fears that prospective hosts on Airbnb have. About 60 percent of prospective guests say they’re scared of being scammed.

That fear is largely unfounded. Very few incidents actually happen, but it is a fear humans fundamentally have. And with synthetically generated humans through AI, it’s now so easy to re-create people’s voices, digital twins, and fake IDs. We are seeing that the defenses we have typically used to keep communities safe, the work of trust and safety and risk teams, are failing. We are not ready for the world that is coming. There’s also a lot of bias in the data being used to train these synthetic humans.

Market Trends

The integration of AI into digital platforms has ushered in a new era of convenience and efficiency. However, as Naba Banerjee highlights, there are inherent risks associated with the proliferation of synthetically generated humans through AI. The fear of being scammed or encountering fraudulent identities remains a prevalent concern among users.

Furthermore, the presence of bias in AI algorithms poses a significant challenge in ensuring inclusive and equitable digital experiences. As AI technologies become more sophisticated, it is crucial for organizations to address these issues proactively to maintain trust and safety in the digital realm.

Recommendations

In light of these challenges, organizations must take a strategic approach to enhance trust and safety in the digital age. Here are some actionable recommendations:

1. Implement Robust Verification Processes: Organizations should invest in robust verification processes to authenticate user identities and prevent fraudulent activities. This may include multi-factor authentication, biometric verification, and identity verification checks.

2. Enhance Transparency and Accountability: Organizations should prioritize transparency and accountability in their AI algorithms to mitigate bias and ensure fair and equitable outcomes. This may involve regular audits, explainable AI frameworks, and clear communication with users.

3. Invest in Diversity and Inclusion Initiatives: Organizations should prioritize diversity and inclusion in their AI training data to ensure representation of diverse demographics. By addressing bias in data, organizations can create more inclusive and equitable digital experiences for all users.

4. Collaborate with Trust and Safety Experts: Organizations should collaborate with trust and safety experts to stay ahead of emerging threats and challenges in the digital landscape. By leveraging their expertise, organizations can proactively address issues related to fraud, security, and privacy.
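The “regular audits” recommended above can be made concrete with a simple fairness metric. The sketch below is a hypothetical illustration (the record format, group labels, and function name are invented for this example, not drawn from any system discussed here): it computes the demographic parity gap, the spread in approval rates across demographic groups, as one signal a bias audit might track over time.

```python
# Hypothetical bias-audit check: demographic parity gap.
# Record shape ({"group": ..., "approved": ...}) is illustrative only.

def demographic_parity_gap(records):
    """Return the largest difference in approval rates across groups."""
    rates = {}
    for group in {r["group"] for r in records}:
        members = [r for r in records if r["group"] == group]
        rates[group] = sum(r["approved"] for r in members) / len(members)
    return max(rates.values()) - min(rates.values())

sample = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

# Group A approves 2/3 of requests, group B 1/3, so the gap is 1/3.
print(round(demographic_parity_gap(sample), 3))  # prints 0.333
```

A real audit would track several such metrics (equalized odds, false-positive-rate gaps) per model release; a single number like this is only a starting point for investigation, not a verdict.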

FAQ

Q: How can organizations mitigate bias in AI algorithms?
A: Organizations can mitigate bias in AI algorithms by investing in diverse and inclusive training data, implementing explainable AI frameworks, and conducting regular audits to monitor for bias.

Q: What are some best practices for enhancing trust and safety in the digital age?
A: Some best practices include implementing robust verification processes, enhancing transparency and accountability in AI algorithms, investing in diversity and inclusion initiatives, and collaborating with trust and safety experts.

Conclusion

The integration of AI into digital platforms presents both opportunities and challenges for trust and safety. While AI has the potential to streamline processes and enhance user experiences, organizations must stay vigilant about fraud, bias, and security. By implementing strategic measures and collaborating with trust and safety experts, organizations can navigate the complexities of the digital age and uphold trust and safety standards for all users.
