It was a huge pleasure to host Arka Dhar at our IEPC event last week on “AI for Good”. While there are many people speaking on this topic, few are as well informed as Arka, the current Product Lead at OpenAI, the company behind ChatGPT, and also a recent graduate of our Sloan Programme. Arka’s expertise in artificial intelligence (AI) and his insightful discussion of “AI for good” made a lasting impression on attendees.
With questions from Professors Ioannis Ioannou and Julian Birkinshaw, and from attendees in the packed lecture theatre, Arka covered a broad spectrum of topics. Here, Julian Birkinshaw shares some key highlights from their talk.
Rising Above Mediocrity in the Age of ChatGPT
When asked which industries and jobs are most likely to be replaced by ChatGPT, Arka made a thought-provoking statement: “Although this might be controversial, I think the jobs most likely to be replaced are those where mediocrity is sufficient.” It’s a great point – ChatGPT is the master of mediocrity, in that it can write a good-enough report in an instant and scrape a passing grade in most exams. As Arka mentioned, he can usually recognize what has been written by ChatGPT. What this means for us as individuals is that we have to rise above mediocrity. When writing a report or an exam, we need to inject some creative, personal, or idiosyncratic content to show off the uniquely human qualities that AI cannot replicate.
OpenAI: A Unique Organization in the AI Landscape
Arka shed light on the distinct nature of OpenAI as an organization. He described it as more akin to a university research institute than a traditional commercial entity. It has about 400 employees and a flat organizational structure, and it operates under a capped-profit model, which allows investors to earn up to 100 times their investment but no more; the company has deliberately set this limit to prevent excessive profiteering. That’s still a lot of money, of course, but the company faces the challenging task of balancing its original not-for-profit mission with the cut-throat capitalist landscape it is now competing in. Arka shared an interesting anecdote about OpenAI’s CEO, Sam Altman, testifying before the US Congress recently: a senator remarked that this was the first time a business executive had advocated for more industry regulation, not less, which is a testament to OpenAI’s commitment to responsible AI development.
Ensuring ChatGPT’s Alignment with Societal Values
Arka delved into the critical issue of maintaining ChatGPT as a ‘force for good’. There will always be people trying to corrupt any generative AI product, to get it to say dangerous or illegal things. Given the potential for misuse and manipulation, OpenAI conducts rigorous testing before every product launch: teams of experts do their utmost to push ChatGPT over to the dark side, towards generating dangerous or illegal content. OpenAI’s success in preventing such occurrences is especially notable when compared to earlier efforts such as Microsoft’s ill-fated Tay chatbot in 2016. Arka explained that they have found ChatGPT to be “steerable”, which means it is possible to align what it says with what society deems acceptable. But of course, there are many different views about what values it should reflect. We can agree that it shouldn’t suggest anything illegal or dangerous, but many other values (e.g. right to life versus right to choose) are strongly contested.
Arka’s talk covered many more intriguing points, making it evident that AI for good is an endlessly fascinating field of exploration. We eagerly anticipate future opportunities to welcome Arka back to our campus, keeping us abreast of the latest developments in this rapidly evolving domain. As AI continues to shape our world, it is vital that we engage in meaningful conversations and ethical considerations to ensure its positive impact on society.