Below you can find an excerpt from an interview with Iris Phan. She is a fully qualified lawyer who specializes in AI and automated systems. The interview covers the risks and opportunities of AI and the role of ethics, data protection, and transparency. It is presented in a translated and abridged version.
Good day to you. Could you please introduce yourself?
My name is Iris Phan. I have been working at the computer center of Leibniz University in Hanover for over six years, where I am a lawyer for IT law and data protection law. In the modern era, a lot of systems have become automated, so I also do a lot of work in that area. Furthermore, I studied the philosophy of science and have been a lecturer at the Institute of Philosophy at Leibniz University for three years now.
Thank you very much for the opportunity to interview you. The first question is: Is AI suitable for industrial application at all?
I think that’s the best place to start: AI sounds as if it’s really about intelligence in the everyday sense of the word. People, and especially the media, often draw comparisons between artificial intelligence and human intelligence. When we consider replacement scenarios, we have to ask what the intended goal is and which parts of the work actually need to be replaced. Mostly it is work that is dull, dangerous, and dirty. That in itself is a very positive development, but it’s not that new. Machines have been replacing people and their work for a relatively long time.
How does the use of AI affect the environment?
That greatly depends on the specific AI being used and its intended purpose. For example, predictive maintenance naturally helps the environment by preventing us from producing too much industrial waste or things that we don’t even need. Instead, we can adjust our procurement to demand in a better and more controlled way. That certainly reduces the burden on the environment. I can see many applications where it has a positive effect.
Can the development of AI be predicted or controlled?
Often the process happens inside a black box, which would suggest that we can’t control it. But when we compare countries, we can see that political agendas can help steer the process to a certain extent. A crucial role is played by what is politically desired, which ideally aligns with the demands of society. The allocation of research funds also plays a part. So you can predict and control the development, but only to a certain extent.
Are there dangers in using artificial intelligence in an industrial context?
The danger isn’t the technology itself, it’s thoughtless development. If you don’t reflect on problems and just carry on, that’s what I would call a danger. It’s perfectly normal for mistakes to happen during development. But if you don’t think about it and ignore ethical concerns, it can have a negative impact and slow down progress.
What should AI be allowed to decide and what not?
Especially in industrial companies, there are often work processes that tire people out. This does not happen to machines in the same way. They have their own kind of fatigue, but as I’ve heard, in your current field of activity you are trying to counteract this: in predictive maintenance, people are trying to determine the fatigue of machines. Individuals with a lot of experience may be able to do that, but there are very few of them. So in this area it’s a great relief for people, it saves resources, and it protects the environment. It has many advantages. In the area of legal decisions, I see things a bit differently. It’s about people and decisions for people, so at least one person, in certain cases even more, should have a look at it. Machines are very rational, but that can also be misleading in some cases. You always have to consider each person individually. So I can’t imagine the application of AI in this area at the moment, but maybe my imagination is just a bit limited.
Why do ethics, data protection and transparency have to be considered in the development of AI?
Considering these issues at an early stage is essential. After all, it’s a kind of monitoring. It would be nice if we could manage to think through such processes in advance, for example with someone from the field of ethics, a lawyer, a natural scientist, and so on. That way, you can identify certain risks in advance. The conditions are constantly changing, so you have to repeat this process again and again, even after a few years. That’s why I think this area is particularly important and why such things need to be considered much earlier in development.
Thank you very much for taking the time for the interview.
Yes, of course. Wonderful, thank you too.