In Conversation with Winners of Facebook Research Award 2020
Dr. Ayesha Ali from the Economics Department at the LUMS School of Humanities and Social Sciences, and Dr. Ihsan Ayyub Qazi and Dr. Agha Ali Raza from the Computer Science Department at the LUMS School of Science and Engineering have just won the prestigious Facebook Research Award 2020 for their proposal on ‘Countering Deepfake Misinformation Among Low Digital-literacy Populations.'
This year, Facebook received over 1,000 proposals from 600 institutions across 77 countries, and only 25 were selected for funding. Proposals were evaluated by a selection committee comprising members of Facebook’s research and policy teams, and the selection process was incredibly competitive. Moreover, this is the second time in a row that our faculty have won this highly selective award, which will provide funding of USD 90,000 for research on how ‘deepfake’ content is consumed and how to combat its spread in developing countries.
We talked to Dr. Ayesha Ali and Dr. Ihsan Ayyub Qazi to learn more about the project and their work on tackling misinformation.
Q: Can you tell us more about the deepfake technology and how your proposal aims to tackle this relatively new phenomenon?
Dr. Qazi: Recent advancements in machine learning have led to the creation of deepfake technology, which makes it possible to impersonate a person’s voice or likeness in audio or video at very low cost. We usually trust people’s faces and voices when watching videos on social media platforms like WhatsApp and Facebook, but if these can be faked accurately, it can be very damaging when used for nefarious purposes. This is a particularly challenging problem in developing countries, where a significant fraction of internet users have low digital literacy skills. To address this challenge, we have set a broad research agenda for ourselves involving a mix of innovations related to evidence-based educational and technological interventions. For this particular project, however, we are focusing on (a) the factors driving belief formation when people are exposed to deepfakes and (b) designing evidence-based educational interventions.
Dr. Ali: We’d like to diagnose and understand how people are consuming misinformation, how they are engaging with it, and how they’re processing it. Once we understand this, we can think about the ways we can educate users and teach them to be more cognizant of the quality of the information they are consuming, and recognise what could potentially be fake news. The diagnosis part will also help us think about policies for de-incentivising fake news generation.
Dr. Qazi: We realise that one cannot combat misinformation with technology alone. It is a multifaceted issue. To address this problem, user education, technology for filtering fake news, as well as evidence-based policymaking for de-incentivising fake news generation are needed.
Q: Dr. Agha Ali Raza, Assistant Professor at the School of Science and Engineering, has worked with you on this particular proposal. Can you tell us more about his contributions?
Dr. Qazi: Dr. Raza’s expertise lies in natural language processing (NLP) and speech recognition. He’s a leading expert in this area and has done excellent work on speech systems, especially related to the Urdu language. In this project, Dr. Raza’s expertise will inform the design of deepfake models in Urdu.
Q: Misinformation and fake news is a common thread in both your proposals that won Facebook awards. Why is this topic important, especially in the context of Pakistan?
Dr. Ali: We think that the spread of misinformation is a grand challenge facing all societies, and we’d like to contribute towards understanding the phenomenon and coming up with ways to address and overcome this challenge. In the context of Pakistan, the problem has multiple dimensions. One dimension, which is unique, is that a large fraction of people here use the internet and social media but are less digitally literate than the average user in the developed world. This means that much of the information people share is non-textual; it’s usually an audio or video clip, or a picture. Research shows that people tend to engage more with non-textual pieces of news and information than with text-based messages, and the impact is greater as well.
Dr. Qazi: Fake news is affecting people across the world, but we think the problem is much more challenging in developing countries. Low digital literacy and a lack of prior exposure to technology make people particularly vulnerable to fake news and misinformation.
When we started working on this project, we were really horrified at the negative potential this technology can have. For instance, in our society, if you impersonate the voice of a religious scholar and make them say things they haven’t said, or impersonate a public figure like a politician, it can be very polarising and can lead to violence. So it’s very dangerous in that context, especially where people are not accustomed to verifying information, which is why we think that researchers working in this space have a responsibility to dive in and contribute in whatever way they can.
Q: Technology is usually touted as something that helps form connections. Why do you think misinformation and technologies like deepfakes have gained so much steam?
Dr. Qazi: It has a lot to do with the nature of how information flows over the internet. The internet has brought about a lot of freedom, where anyone can build a website, write a blog post, and share their perspectives on social media platforms like Facebook, Twitter and Instagram. However, when everyone is a publisher of opinions, how do you run an editorial process (and verify information) at the scale of a few billion users?
The velocity with which information gets shared is also unprecedented in history. If you contrast this with traditional media like newspapers, they have had a very systematic editorial process through which information flows and gets verified to a certain degree, and there are delays involved. So partly, it is the nature of a technology that was designed for one purpose but has negative externalities, and partly it is a pretty difficult problem to manage at the scale on which these social media platforms operate.
Dr. Ali: I’d like to reframe the problem in economic terms. First of all, the incentive of platforms like Facebook and WhatsApp is to give people the ability to communicate with one another in real time. They have very little incentive to check information and make sure that it goes through an editorial process. This leads to an overabundance of misinformation in the online space, relative to a world where platforms faced direct and immediate costs for allowing such content.
Secondly, in this market of news and information, people benefit in different ways from generating misinformation. There are political benefits, economic benefits, and so on. On the other end, there is a demand for misinformation as well. People consume or demand misinformation for different reasons. One of these is prior beliefs: people like to hear things that conform to what they already believe. Sometimes misinformation also has entertainment value; people just get a kick out of it, and share and forward it as much as they can. All of this allows misinformation to persist in this marketplace.
Q: Coming from the School of Science and Engineering and the School of Humanities and Social Sciences, your areas of expertise lie in different disciplines. How did you both decide to collaborate on this particular research?
Dr. Qazi: Fake news and misinformation are something we have both been thinking about for quite some time. We started working on this broad research agenda about 2-3 years back, when we co-advised a Masters student’s thesis on the subject. During that time, we collected data on social media platforms about misinformation, its spread and its impact. Through that work, we learnt many things about how to conduct research on fake news and misinformation in a developing-country context. That led to one thing after another, and we gradually broadened our research agenda.
Dr. Ali: This is a multifaceted challenge. So when we think about ways to combat it and come up with policy solutions, we needed to combine forces. We needed a technology perspective, a behavioural social science perspective and a policy perspective. Apart from that, it was just our joint interest; we were both really motivated to work on this, and that organically brought about the collaboration. In fact, our diverse backgrounds were more of a strength than a barrier or obstacle. We could think about the tools each discipline could bring to address the different parts and pieces of the puzzle.
Q: You’ve won the Facebook grant twice. What would be your advice to other researchers who are hoping to do impactful research as well?
Dr. Ali: One has to learn from one’s mistakes. We didn’t win an award the first time we wrote a proposal; we wrote multiple proposals before we won our first award. If you are really passionate about your research and the problem or challenge you’re trying to solve, you have to keep trying. There will be instances in which it will not work out, but that is simply part of the process, and you can learn how to make your proposal or product better and have the impact you were hoping for.
Dr. Qazi: In my view, it begins with identifying an important problem, diagnosing it deeply and then embedding it in a local context. One of the important aspects of any research is having clarity about what problem you are solving, and why you are among the people who can solve it well. Therefore, it is a combination of picking a problem you really care about, having the proper expertise in it and doing a really good job within the scope you have promised.
Q: How has the research environment at LUMS helped you take your research to the next level?
Dr. Ali: What’s unique about LUMS is that it provides space for faculty across Schools to talk to one another and for students to interact with faculty, so that really is the key ingredient, because all of those interactions and exchanges of ideas are what ultimately lead to projects such as these.
Dr. Qazi: We have benefited from LUMS in a variety of ways. On the resources front, LUMS has been very generous with its Faculty Initiative Fund (FIF). Prior to winning the Facebook grant, we jointly won FIF awards that allowed us to recruit research assistants and do some early work, which really laid the foundation of the follow-up research project we are now doing. Similarly, our students have been brilliant; their curiosity and excitement are something we all value. The overall culture of flexibility and freedom LUMS affords is quite a rarity in this region.
The original announcement from Facebook can be accessed here.