Responsibility and AI: Ulrike Schäfer on Human-Centered Data Science

Ulrike Schäfer
Image Credit: private

Ulrike is a doctoral researcher at the HCC Research Group. Her work focuses on human-centered data science, explainable AI, and human-AI collaboration. It is part of the »ENKIS« project.

When we read about new »artificial intelligence« (AI) products in the newspaper, it's often about how incredibly far we've already come and what could be technically possible in the future. There's ChatGPT as a text generator, Grammarly as a writing aid, DeepL as a translator, and apps that generate convincing images from just a few words as a prompt. All of this is impressive. But we should not forget to ask what the consequences are.

Why should we focus on humans when discussing AI?

Humans are affected by AI as direct and indirect stakeholders at many levels and stages of the development and deployment process. Whose data is actually used to train the AI models? Have these people, for example, consented to their art being used to train generative AI models? What happens legally if a high-stakes decision made by an AI is adopted without question and turns out to be wrong in the end? Do casual users understand what the output of the AI application they are using means, and how much they can trust and rely on it? For example, do users know that ChatGPT is only a language model that does not really "understand" and cannot distinguish true from false?

One example that I find very illustrative is when an AI "learns" to make its own diagnoses based on diagnoses made by doctors. Couldn't it be that it adopts or even reinforces the same decision-making errors that human doctors make? This problem exists in the field of dermatology, for example. There is much less data - in this case, images - on skin cancer in darker skin types. In the training of doctors, this limited data basis means that skin cancer is less likely to be detected and diagnosed in people of color. The same then also applies to the AI: an AI is trained on existing labeled image data. If that data basis is insufficient, the AI will perform less reliably for underrepresented groups - in this case, people of color - than for lighter skin types.
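
To make this mechanism tangible, here is a minimal sketch in Python. It is purely illustrative and not based on any real dermatology dataset: it trains a simple classifier on synthetic data in which one group is heavily underrepresented and then reports the accuracy per group. All names, features, and numbers are assumptions chosen for the example.

    # Illustrative sketch only: synthetic data with an underrepresented group,
    # showing how per-group evaluation reveals the reliability gap described above.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    def make_group(n, shift):
        # Synthetic "images" as 2D feature vectors; the true decision boundary
        # differs per group (shift), standing in for domain differences such as
        # how skin cancer presents on different skin types.
        X = rng.normal(shift, 1.0, size=(n, 2))
        y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)  # ground-truth label
        return X, y

    # Group A dominates the training data; group B is underrepresented
    # (hypothetical 2000:50 ratio).
    Xa, ya = make_group(2000, shift=0.0)
    Xb, yb = make_group(50, shift=1.5)
    model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

    # Evaluate on balanced held-out samples per group: accuracy for the
    # underrepresented group B drops well below that for group A.
    for name, shift in [("group A", 0.0), ("group B", 1.5)]:
        X_test, y_test = make_group(1000, shift)
        print(name, "accuracy:", accuracy_score(y_test, model.predict(X_test)))

An overall accuracy score averaged over both groups would hide exactly this gap, which is why the evaluation is broken down per group.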

In addition to such serious and direct effects - i.e., an unreliable AI-supported diagnosis of diseases - there are also indirect or long-term effects on people in other areas that need to be considered. For example, the interest in learning other languages could wane if people rely solely on AI-based translation services and language tools, the ability to formulate texts could deteriorate, and cognitive performance could decline as a result.

Ultimately, it would be advisable for us as a society, and specifically for us as researchers, not to avoid such challenges and the open questions they raise, but to address them right now. We need to act quickly, as technical AI development is not pausing.

What is the responsibility of education in all of this?

At the HCC Research Group, one of the courses we offer is Human-Centered Data Science (HCDS) as part of the ENKIS project. The responsibility for mitigating the problems that arise with the introduction of AI in decision-making lies with many people. In addition to politicians, lawmakers, and other people in regulatory positions, we also see responsibility with the developers of AI systems themselves. In our courses, we teach prospective data scientists and computer scientists. Some of them have already worked or will work on AI applications; others may work in consulting. It is important for us to give them the awareness and the skillset to think about the possible consequences of their work.

So far, their studies have focused on technical aspects and methods: our students can develop ML models, they can code and develop software, and much more. What is overlooked to a certain degree are possible sources of error that do not sit in the code but reside, for example, as biases in the data being used. Factors such as the varying levels of AI expertise among the people who will ultimately use the AI applications, or how stressful the task the AI supports is, have also not yet been a focus of the curriculum. With the content of our HCDS course, we intend to broaden the perspectives of our future data scientists; a small example of the kind of data audit we have in mind follows below.
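
As a minimal, hypothetical sketch of such an audit: before any model is trained, a few lines of Python can already show how well each group is represented and how the labels are distributed across groups. The file and column names ('skin_type', 'diagnosis') are illustrative assumptions, not an actual dataset from our course.

    # Hypothetical data audit; file and column names are assumptions.
    import pandas as pd

    df = pd.read_csv("dermatology_image_metadata.csv")  # illustrative file

    # How well is each group represented in the data?
    print(df["skin_type"].value_counts(normalize=True))

    # Are the labels distributed similarly across groups, or is one group's
    # positive class (e.g., confirmed skin cancer) barely covered?
    print(df.groupby("skin_type")["diagnosis"].value_counts(normalize=True))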

What is the vision of HCC regarding human-centered data science, and what do we do to improve the status quo?

We hope that not only the research world but also the general public will realize that we should focus on people in a holistic sense when dealing with and developing AI. Technology should not be developed for its own sake but with human needs and ethical and social standards in mind - and with a focus on its applicability and comprehensibility. What purpose would technology serve if it had no impact on people or the environment? None. In fact, we want technology to have a big impact on our lives, but one that enriches them in a positive and sustainable way. Therefore, at the HCC, we strive to communicate and actively discuss diverse research findings with our students, to work in an interdisciplinary manner, and to research relevant topics ourselves.

What research areas do we focus on to get to the bottom of such challenges?

In our most recent study in the field of AI, Lars Sipos, Katrin Glinka, Prof. Dr. Claudia Müller-Birn, and I investigated the explanation needs of users when they interact with AI-based systems. Our main aim was to find out what information and explanations users need from an AI application in order to interact meaningfully with it in a specific context of use. Even if this focus on users' needs seems obvious to us working in a Human-Centered Computing research group, such needs have not yet been given much consideration in the field of AI. One of the reasons for this is that the research field of Explainable AI (XAI) is still quite young, and many questions remain unanswered. Currently, we are investigating factors that can influence the cooperation between humans and AI in decision-making situations across a range of real-world application domains.

Ulrike giving an introductory presentation about chatbots at Girls' Day 2023


Text: Ulrike Schäfer, edited by Katrin Glinka.