OpenAI to loop in Indian developers to work on AI safety
To focus on responsible and safe use of artificial intelligence (AI) technologies, ChatGPT maker OpenAI will also loop in Indian developers to work alongside US developers for resolving critical challenges related to the technology, Anna Makanju, vice president of OpenAI, said on Tuesday.The company next month, will also host a developer gathering in Bengaluru, which will witness participation from OpenAI vice president of engineering Srinivas Narayanan and other product leaders. The developer conference will involve collaboration between Silicon Valley developers, and Indian developers to address some of the most important issues in AI.“Our plan is to convene developers here in India to work alongside OpenAI product leaders on some of the most difficult products and safety challenges,” Makanju said on the first day of the Global Partnership on Artificial Intelligence (GPAI) summit in the Capital.
“Countries like India are critical to AI’s future and India has the critical ingredients. Some of the most impressive technology towns in the world, a track record of developing extraordinary technology businesses and a focus on competing on the global stage,” Makanju added.Comments from Makanju over leveraging Indian developers to resolve challenges of generative AI technologies have come after OpenAI CEO Sam Altman visited India in June and also interacted with Prime Minister Narendra Modi.“Our visit here resulted in real concrete changes on how we do our work. And perhaps most importantly, it led to a focus on reducing costs associated with our tools and this is a change that we think actually is going to help people all over the world, including India, to have access and benefits from this technology,” Makanju added.Critical issues which Altman highlighted during his visit to India in June were around deepfakes, plagiarism, and how a regulatory model can be brought given that the AI landscape is constantly evolving. Lately, issues regarding biasness of generative AI platforms were also flagged by the government in India.“We understand, we learn, we study the risks associated with products and then translate it into concrete steps to make them safer,” Makanju said.One of the key ways through which OpenAI tries to develop transparent models and works towards mitigating risks of technologies is through something called red teaming, a practice of rigorously challenging plans, policies, systems and assumptions by adopting an adversarial approach towards models. The company said it conducts testing of AI models prior to deployment, and engages external experts for feedback. It then uses these internal-external feedbacks and tests to improve the model’s behaviour, Makanju said.Lately, when countries including India are working on frameworks to regulate AI technologies, companies including OpenAI have been calling for the need for a governance structure as that will ensure that AI is built securely.“In the US, we have worked with the Biden administration and other leading AI labs to form a set of voluntary commitments on safety and security… This is a global issue and we need a global approach to regulation. We must harmonise our efforts,” Makanju said.According to Makanju, there is a need to establish an international body that can help ensure that the most powerful systems in the world are safe and their benefits are evenly distributed.In India, the government is looking at light-touch regulation for AI technologies keeping in view that it does not lead to user harm, while also ensuring innovation is not compromised.