Google’s Saikat Mitra on how privacy and safety are interlinked, and on the need to find a balance

In a world that is becoming steadily less private, people often forget the need for safety. Financial fraud and identity theft are among the issues today’s Internet users face. Add to this the arrival of generative artificial intelligence (AI) and the rise of deepfakes, and the new world is a complicated one. This is one of the key reasons why safety and privacy go hand in glove.


In a conversation with FE Transform-X, Saikat Mitra, VP and Head of Trust and Safety – APAC, Google, talks about how the Internet giant’s key focus is ensuring the online safety of users, and about YouTube ramping up mechanisms to fight misinformation. (Edited excerpts)

Google has three philosophies of product development: our products should be secure by default, private by design, and control should be in the hands of the users. I believe privacy is a counterbalance to safety. I have teams around the world working on keeping people safe and secure. That is how, as a company, we value privacy, safety, and security, and we challenge ourselves internally to figure out how to continue ensuring all three online.

How is Google’s DigiKavach different from the online security measures of other companies?

What we are trying to do with DigiKavach is create a 360-degree threat intelligence structure that goes beyond our own platform. The first part is internal: using our technology, including our own AI, for holistic threat intelligence gathering. We also gain insights and intel from media reporting and from third parties. As we investigate, we try to understand the modus operandi, that is, exactly what the scammers are trying to do. Our team comes up with insights and intelligence on how the scammers are conducting these scams. Then we start thinking of our products and where those vulnerabilities could exist. Lastly, how can we prevent them?

For example, in the current wave of UPI scams, we have found that users often get a call from someone pretending to be, in this case, an LIC agent, who says, ‘Your father has asked me to deposit a certain amount in your bank account.’ The fraudster then shares messages showing an extra amount being deposited and makes urgent requests to return it through a QR code he sends. While the victim stays on the call and opens an online wallet to check the bank account, the scammers get malware installed and hack the phone. In GPay we have a protection mechanism for this: if the screen is being shared, the screen where a user enters her UPI ID blacks out, giving an additional level of protection.

This brings me to the second part of DigiKavach, where we want to work with the industry through cross-sharing of knowledge and expertise. For example, Android has a flag called FLAG_SECURE; if you set it on any sensitive screen in a financial application, remote screen-recording applications cannot read that screen, which prevents users from being compromised.
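
For illustration, here is a minimal Kotlin sketch of setting that flag, WindowManager.LayoutParams.FLAG_SECURE, on a sensitive screen. PaymentActivity is a hypothetical example, not Google’s actual implementation:

```kotlin
import android.os.Bundle
import android.view.WindowManager
import androidx.appcompat.app.AppCompatActivity

// Hypothetical payment screen: marking the window with FLAG_SECURE
// blocks screenshots and screen-recording/remote-sharing apps from
// capturing its contents.
class PaymentActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Set the flag before any sensitive UI is drawn.
        window.setFlags(
            WindowManager.LayoutParams.FLAG_SECURE,
            WindowManager.LayoutParams.FLAG_SECURE
        )
        // setContentView(...) and the rest of the screen follow as usual.
    }
}
```

A window marked this way is excluded from screenshots, recordings, and most remote screen-sharing sessions, which is the blackout behaviour described above for GPay.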

The third part is user education, which has led us to research a concept called digital inoculation. We aim to create digital antibodies, and we are actively researching this area.

With regard to common scams, how can Google’s DigiKavach contribute towards preventing them, especially for users in tier-1 and tier-2 cities?

It depends on the balance between products becoming inherently safer and user empowerment. We have worked extensively on Android as a product and developed a multi-layered security approach to protect people from online threats. In the case of the Play Store, we are confident it is difficult to get malware onto it, thanks to consistently improved security features and policy enhancements, in combination with our investments in machine learning systems and app review processes.

We are also holding several discussions on the future of the OTP. One of the reasons these message scams happen is that the OTP is the last line of defence. Two-factor authentication is good, and many countries, barring India, don’t have two-factor authentication that is as widely adopted. But is the OTP the best second factor? Probably not. I think the future will be more inclined towards device attestation-based two-factor authentication, and in that context, we are in discussions on how technology can solve these challenges.
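
As an illustration of what device attestation-based checks can look like in practice, here is a minimal Kotlin sketch using Google’s Play Integrity API. This is one possible approach, not the mechanism Mitra describes: the nonce is assumed to come from the app’s backend, and sendTokenToServer is a hypothetical helper.

```kotlin
import android.content.Context
import com.google.android.play.core.integrity.IntegrityManagerFactory
import com.google.android.play.core.integrity.IntegrityTokenRequest

// Hypothetical helper: posts the signed token to the app's backend,
// which decrypts and verifies the verdict before trusting the device.
fun sendTokenToServer(token: String) { /* app-specific */ }

// Requests a signed integrity verdict for this device and app install.
// A backend can use the verdict as a second factor in place of, or in
// addition to, an SMS OTP.
fun requestDeviceAttestation(context: Context, nonce: String) {
    val integrityManager = IntegrityManagerFactory.create(context)
    integrityManager
        .requestIntegrityToken(
            IntegrityTokenRequest.builder()
                .setNonce(nonce) // single-use value generated server-side
                .build()
        )
        .addOnSuccessListener { response -> sendTokenToServer(response.token()) }
        .addOnFailureListener {
            // Fall back to another second factor (e.g. an OTP) on failure.
        }
}
```

The appeal of attestation over an SMS OTP is that the verdict is signed and bound to a specific device and app install, so it cannot be phished or forwarded the way a six-digit code can.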

There’s a prevailing concern around deepfakes, with the government trying to draft AI regulations to control them. If such vulnerabilities occur at a massive scale, do you think platforms such as Google are prepared to handle them?

From what I understand, deepfakes have been around for a while; we have seen them over the years, so why are they an issue today? With generative AI, it has become much simpler to produce deepfakes. Moreover, I think we need to separate creation from dissemination; they are two separate problems. Anybody who provides a generative AI platform needs to be responsible, because there needs to be responsibility for what is created. We are probably the only company that has said, “All images generated by us will be watermarked.” So, on the creation side, we are thinking about responsibility for how people use our products and our platforms. That is where the term provenance comes in, which is my number one point.

Since we’re not the only platform, this is an area where governments, civil society, and the media can also drive that behaviour. The second point is dissemination: how one transmits synthetic media and detects its transmission. We already have policies against misinformation and misrepresentation, among others. I believe the gap is in the technology: how effectively one can detect deepfakes, or whether something is synthetic media or not. We have seen deepfake examples of actors and public figures this year, but those deepfakes lacked a level of precision. That is an area where Google is working. There is an audio-based project called AudioLM, which in our internal tests is performing very well in detecting synthetic audio. YouTube is also working on policies that should be launched in the next few months; these will ask content creators to disclose if they are using synthetic media in a way that alters reality. Despite everything, I also believe that synthetic media has shown many positive use cases.

In my opinion, synthetic media is not inherently bad, as the world will have valid needs for it. But if one uses synthetic media to alter reality, YouTube will expect content creators to disclose it, with penalties for non-compliance. YouTube will also inform viewers that content may be altered or synthetic by adding a new label to the description panel, and for certain types of content about sensitive topics, it will apply a more prominent label on the video player itself.

In the coming months, YouTube will also make it possible for users to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice, using our privacy request process.

According to you, what is the biggest challenge for online safety and security, in India and globally?

I think we can’t look at these terms in isolation; we have to look at them together. Privacy and security are interlinked, and the world needs to figure out what the acceptable balance is for different applications. Both are very important, and I think we need to find that balance.
