Research Comms Podcast: How can we improve understanding of artificial intelligence to make sure it works for everyone?

‘In the health space, it's accepted that patient involvement happens at every step of the way. We should think about how best to conduct participative research in artificial intelligence - more of a citizen-centric approach’ Sophie McIvor on the need to involve the public in discussions around the future of AI.

Sophie McIvor is the Director of Communications and Engagement at the Alan Turing Institute, the UK’s national institute for data science and artificial intelligence. She has been in the role since shortly after the Institute opened in 2015 and is spearheading part of its new strategy, focused on public engagement with AI.

In this episode of Research Comms we discuss why the public needs to be engaging with emerging technologies like AI, how people have more power than they think when it comes to influencing the progression of such technologies, and the importance of participatory research.

Listen wherever you get your podcasts…


The following excerpt from the interview has been edited for brevity and clarity.

Artificial intelligence has applications in so many areas of our lives - how has the Alan Turing Institute chosen its priorities?

We've committed to organising our work around three big challenges - grand challenges, as they're being called - which we think AI and data science, and the Alan Turing Institute in particular, have the best crack at helping to progress.

One of those is health. There are amazing datasets in health and a huge body of work already, which we feel AI and data science could help to automate and improve.

Then there’s defence and national security. This is a very data-driven area, and there’s a huge opportunity to use data wrangling, AI and data science to help national security teams spot issues and get ahead of them.

On environment and sustainability - I don't think there's a research organisation out there that won't have climate at the top of its list. How can we better predict what will happen? How can we model different situations? How can we support sustainability efforts?

We're also looking at big skills initiatives. The big challenge with AI is that we don't have enough people with the right skills to do it. We don't have enough people taking up those subjects at school, and we don't have enough people continuing with them later in life. And it's not just computer scientists - we need people from all backgrounds to be involved in AI, to make sense of it, explain it and think about how it might apply to their areas.

The part of my role I'm most passionate about is driving a more informed public conversation. A big part of that is getting better information out there about AI and data science, not just for public audiences but for policymakers who are trying to legislate in this space. We're also doing a lot of work on fairness and ethics, and ensuring that's at the heart of how AI is designed and deployed in future.

Overall, the goal of the Institute is that in the future - we hope within a couple of decades - we'll have some really popular examples of how AI has improved the public sector or the health sector, or done something of great societal good. That would be an amazing outcome.


Why is it so important that we encourage and facilitate public dialogue around emerging technologies, like artificial intelligence?

I think it's so important because it's already happening. AI's in our phones and our laptops, not just in software but making decisions that really impact us, and people need to know and understand it. Part of the challenge is that the term AI generally doesn't mean a lot to people. It's a bit of a tech buzzword, potentially overused in lots of different contexts, and it also has connotations of sci-fi, sentience, killer robots and so on. So when you talk about actual applications of AI - driverless cars, software learning to predict whether you'll get ill, how the climate will change or how to land a plane - you get much better conversations. What we're trying to do is move a very general and potentially mixed understanding of AI towards real application areas and how they might impact people's lives.

What are the practical ways that you're going to be encouraging members of the public to talk about these areas?

We aim to be a trustworthy, neutral, national voice on the big issues in AI and data science. There's a lot of noise that comes with new technology, and by using our national status and the strong scientific backing behind what we say, we can add narrative to new developments in AI and data science to support public policy. We've done that successfully, I think, on ChatGPT recently, through media briefings and events and so on. We're also looking to build a community of best practice around the UK for other organisations, whether in universities or industry, doing outreach or public engagement on AI and data science.

Part of the problem is what's being called ‘AI literacy’ - you can't really start to do public engagement in these spaces unless everyone has the same baseline knowledge.

If we could all come together nationally to build that community of best practice, it would really help push forward some of this understanding. We're also talking about how to incorporate public views and attitudes into the design and deployment of AI. This is something that's done really effectively in other areas of research. As many of your listeners will know, in the health space it's accepted that patient involvement happens at every step of the way. I think we should think about how best to conduct participative research - more of a citizen-centric approach - and find out where people are at. We're doing a big project with the Ada Lovelace Institute to survey the national population on their attitudes to AI, and to look at how we can feed that into policymaking and science and research and so on.

You mentioned ChatGPT. Some people have concerns about job losses in certain sectors as a result of it. How would you allay those fears, or encourage people to interact with these new technologies in a positive way?

So, it's been enormously interesting watching ChatGPT land and seeing the sustained interest in it from the media and the general public. For the scientists working in institutions this is not a new thing - it's been on its way for a while - but it has been fascinating from a communications perspective to see it unfold.

To cite a researcher who presented at the Institute last week and has been involved in some of the public discussions around it: when a new technology lands, there are four phases that she's been observing in the media and elsewhere, which I think might reflect how people on the ground are feeling about it as well. The first is initial excitement and curiosity - ‘What is it? How does it work?’ - and so on. Then it moves very quickly to concerns - ‘How is it going to be used by teachers and schools? How are we going to control it?’ Then you get discussions about consciousness and sentience - ‘How powerful is it? How does it know these things?’ And finally, maybe where we're landing now, is limitations and frustrations - ‘Oh, it doesn't work for this, it doesn't work for that.’ So I think people are on that journey as they go through it.

What the scientists at the Institute would say is that it's not thinking. It's basically reading loads and loads of information on the web and giving it back to you in a really convincing, well-written way. Mike Wooldridge, who works for us, called it a ‘glorified word processor.’

We need people to be more educated, more aware, more mindful of how they use these tools. There's a great positive angle on all of this, and I think it'll be really exciting to see how it plays out - how tools like this could help with some of the more time-intensive, creaky parts of people's professional lives, whether that's writing an invitation to an event or the other things a computer could just do pitch perfect.

Like everyone else, I'm concerned about transparency - showing your working when something is done - and about retaining the first-person narrative, which is the gold dust in so much written communication.

Are you using it in your own work at the Turing for your communications yet? Is it one of the tools that you use as part of your day-to-day?

We're certainly not going to be the pioneers in that! Everything we're hearing is about how great it is, but also how it needs controls around it and transparency and so on. But if you haven't played around with it, I would recommend it. I asked it to write a communications strategy and it came out with something that was okay. It wasn't great, but it was totally okay.

So I think, as communications professionals, we should be thinking about how we might build that into our ways of working. The people who get ahead are the ones who don't reject it outright, but consider how to build AI into what we already do and let it help us in our day-to-day. And then it is an assistant, right? It's not replacing anything. It's an assistant.

Research Comms is presented by Peter Barker, director of Orinoco Communications, a digital communications and content creation agency that specialises in helping to communicate research. Find out how we’ve helped research organisations like yours by taking a look at past projects…


 
