
Artificial Intelligence, Ethics, and COVID-19: Interview with Dr. Ian Walden

  • Writer: The GFCC
  • Dec 1, 2020
  • 6 min read

Simone Melo


Connected home gadgets predicting human behavior, driverless cars, and robots cleaning the house might seem like a scene from a futuristic movie to most people. But as technology develops and scales, societies adapt to a world operating through machines and algorithms. Even those still unfamiliar with artificial intelligence (AI) probably know Facebook, or would find it hard to imagine living and working without Google. Behind both tech giants, it is AI that reacts to human behavior and shapes user interaction patterns. The way AI is affecting societies is already a pressing matter, one that has reached the courts and affected national elections. The GFCC interviewed Dr. Ian Walden, Professor of Information and Communications Law at Queen Mary University of London and an expert in digital investigations who has consulted extensively for the World Bank, the European Commission, and UNCTAD, among others. Dr. Walden talks to the GFCC about the use of AI in the health sector, the implications of GovTech development for citizens' privacy, and the importance of trust in a digitized environment.


Credit: Markus Spiske/Unsplash

GFCC: What are the most prominent legal and ethical implications arising from the use of AI in the health sector?


Dr. Ian Walden: AI in the health sector raises difficult questions about competing public interests. On the one hand, you have the public interest in maintaining the privacy of people's data. On the other hand, there are clear benefits to processing information about patients. The COVID-19 crisis is a good example. In the United Kingdom (UK), we initially used a contact-tracing app based on a centralized government database. The app was unpopular for a range of reasons. People were not happy with the idea of the government holding on to their data. Eventually, the UK adopted the Google-Apple model, which holds information on a decentralized basis. It is an option that improves our privacy but perhaps doesn't generate data that could be useful for public health purposes, such as monitoring trends or particular characteristics. It is always a balance between protecting individual privacy, by investing in systems that use a minimum amount of personal data, and recognizing public health needs.
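
The decentralized design Dr. Walden refers to can be illustrated with a short sketch. The snippet below is a loose, simplified illustration of the Google/Apple exposure-notification idea, not the real protocol; all function names, key sizes, and parameters are illustrative assumptions.

```python
import hashlib
import os

def new_daily_key() -> bytes:
    # Each phone generates a fresh random key per day; it stays on the
    # device unless the user tests positive and consents to upload it.
    return os.urandom(16)

def rolling_ids(daily_key: bytes, intervals: int = 144) -> list:
    # Derive short-lived broadcast identifiers from the daily key, so an
    # observer cannot link successive broadcasts to one person.
    return [hashlib.sha256(daily_key + i.to_bytes(2, "big")).digest()[:16]
            for i in range(intervals)]

def exposed(heard_ids: set, published_keys: list) -> bool:
    # Matching happens on the device: the health authority publishes only
    # the daily keys of consenting positive cases, and each phone
    # re-derives their identifiers locally. No central contact database
    # is ever built.
    return any(rid in heard_ids
               for key in published_keys
               for rid in rolling_ids(key))
```

The trade-off Dr. Walden describes is visible in this structure: because matching happens only on the device, no authority ever sees the contact graph, but for the same reason no aggregate trend data is generated for public health monitoring.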


Dr. Walden is an expert in AI and ethics with experience in designing law reforms worldwide

GFCC: What are the real implications for people's privacy when governments hold data in a centralized database?


Dr. Ian Walden: The question comes down to whether someone trusts the government to look after their data. We talk about this as if it's only about privacy, but it's not really. It's about trust.

GFCC: There has been a lot of discussion about policing bias after the killing of George Floyd. Protests erupted in the US against police use of facial recognition technologies that were allegedly racially biased. How could we better manage algorithms to promote an ethical digital system?


Dr. Ian Walden: We have to distinguish between the training data and the algorithm. The system may generate decisions that are discriminatory because that is what the data shows. We need more checks on the data, not just the algorithm. If the training data comes from an inaccurate or overly narrow source, the algorithm will reproduce prejudice or bias. If the training data is representative, then the algorithm should avoid that. I think we focus a lot on the algorithm, but the problem is often the training data.
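
As a concrete illustration of checking the data rather than the algorithm, the hedged sketch below computes per-group outcome rates in a training set and a simple disparate-impact ratio. The field names, threshold, and figures are hypothetical, not drawn from any real policing dataset.

```python
from collections import defaultdict

def selection_rates(records: list) -> dict:
    # Rate of positive outcomes per demographic group in the raw data.
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        positives[r["group"]] += r["outcome"]  # outcome is 0 or 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates: dict) -> float:
    # Ratio of the lowest to the highest group rate; a common rule of
    # thumb flags values below 0.8 for human review.
    return min(rates.values()) / max(rates.values())

# Illustrative skew: if the historical data records, say, arrests rather
# than offences, this is the imbalance a model trained on it would learn.
data = [{"group": "A", "outcome": 1}] * 30 + [{"group": "A", "outcome": 0}] * 70 \
     + [{"group": "B", "outcome": 1}] * 60 + [{"group": "B", "outcome": 0}] * 40
print(disparate_impact(selection_rates(data)))  # 0.5 -> flagged
```

Running such a check before training makes Dr. Walden's point operational: a perfectly neutral algorithm trained on this data would still reproduce the skew.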

GFCC: Building on that, of course, there is a problem with how we collect data. But there is also the fact that police could make decisions based on algorithms that do not interpret social context. When it comes to racism, the data can show a higher incidence of crimes committed by people of color while completely overlooking the historical burden that often places these communities in more violent neighborhoods, more exposed to criminality.


Dr. Ian Walden: That is a very old problem. It could be the algorithm. I’m not saying you couldn’t build prejudice and bias into the algorithm, but I think the problem is more the training data. And it goes back to my point about health. We need to be much more careful about the data that we use.


GFCC: Another critical point in the transition to fully automated and digital societies is liability. Are legal frameworks ready to respond to future needs? Who is responsible for crimes committed by machines?


Dr. Ian Walden: In terms of AI-specific law, we haven't seen much. We have seen some legal development concerning automated, driverless cars. In the UK, we have legislation that essentially says that a person in the vehicle has to be liable and have insurance. The liability is easy to assign or attribute in this case, and the law provides for the payment of compensation. I am personally not very keen on creating a unique legal personality for robots. You can always identify a natural or legal person who set up the system and to whom liability can attach.


Credit: Metamorworks/Shutterstock

GFCC: With the development of technologies and advances in deep learning, we will soon have machines making decisions that the person who set up the system did not predict. Who will be responsible for unlawful consequences in that case?


Dr. Ian Walden: If someone sets up a system, and that system can do things that the person cannot control, then that person must answer for the consequences.


GFCC: There is still a lot of opacity in how users understand the behavior of algorithms. Are there legal developments to increase transparency in the operation of algorithms?


Dr. Ian Walden: The leading example is the General Data Protection Regulation (GDPR) in the European Union. The regulation explicitly requires an explanation of the logic involved in automated decision-making. We also see technological developments. When people build AI systems using neural networks, they can design a parallel system to audit or trace back how decisions are made. This type of mechanism is clearly within our technical capability. If we can have a system that can make very sophisticated decisions, it should be entirely possible to design a parallel monitoring process to know how that decision was made.
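
One way to read the "parallel system" Dr. Walden describes is as an audit trail written alongside every decision. The sketch below is a minimal illustration under assumed interfaces: score is a hypothetical callable returning a number, and the attribution is a crude leave-one-out perturbation rather than a production explainer.

```python
import json
import time

def leave_one_out(score, features: dict, baseline=0.0) -> dict:
    # Per-feature contribution: how much the score moves when each
    # feature is replaced by a neutral baseline value.
    full = score(features)
    return {k: full - score({**features, k: baseline}) for k in features}

def audited_decision(score, features: dict, threshold: float, log_path: str):
    # The parallel audit trail: every decision is appended to a log
    # together with its inputs and attribution, so a reviewer can later
    # trace back how it was made.
    decision = score(features) >= threshold
    with open(log_path, "a") as log:
        log.write(json.dumps({
            "time": time.time(),
            "inputs": features,
            "decision": decision,
            "attribution": leave_one_out(score, features),
        }) + "\n")
    return decision

# Hypothetical usage with a toy linear scorer:
def toy_score(f):
    return 0.5 * f["income"] + 0.1 * f["age"]

audited_decision(toy_score, {"income": 1.2, "age": 0.3},
                 threshold=0.5, log_path="decisions.log")
```

The point is architectural rather than algorithmic: the monitoring process runs beside the decision system, so explanations are captured at decision time instead of being reconstructed afterwards.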


GFCC: One of the consequences of the GDPR is the obligation on the consumer's side to authorize the terms of use. But in practical terms, it does not have a big impact, since most people simply accept without reading the content.


Dr. Ian Walden: That's the legal fiction we live with. We know that most people don't read the terms and conditions, but we treat it as the best available solution. If everyone were forced to read the terms and conditions, they would be very upset. It's a compromise.


GFCC: What are the challenges and opportunities in the use of artificial intelligence, and tech solutions more generally, by governments?


Dr. Ian Walden: It will depend on the area. Facial recognition systems are generating a lot of concern. But at the same time, we expect government services to be efficient and effective, and we want to pay less in taxes. Governments will have to deploy AI, which could improve government services by changing how we interact with them. AI would allow governments to provide services much more tailored to the individual needs of citizens. But when we talk about AI and government, there is always a big concern about surveillance. The more AI is implemented in government services, the more potential there is for governments to know about our individual lives.


GFCC: There is a lot of fear and controversy about AI replacing people and leaving a significant part of the population without jobs. What could governments do to fill this gap and regulate the situation?


Dr. Ian Walden: Clearly, AI will result in a loss of jobs, and governments are going to have to do something about that. In Norway, the government is looking at the concept of universal income. And to pay for that, we would need the money generated through taxes from companies utilizing AI. The big problem is, what do we do with the people who are not working? Upskilling and reskilling are only possible for a percentage of the population. We need to engage in a global debate on this issue.


