Publications • Human-Centered Computing • Department of Mathematics and Computer Science

Explanation Strategies as an Empirical-Analytical Lens for Socio-Technical Contextualization of Machine Learning Interpretability

Benjamin, Jesse Josua; Kinkeldey, Christoph; Müller-Birn, Claudia; Korjakow, Tim; Herbst, Eva-Maria

New York: ACM | 2022

Appeared in: Proceedings of the ACM on Human-Computer Interaction 6

During a research project in which we developed a machine learning (ML) driven visualization system for non-ML experts, we reflected on interpretability research in ML, computer-supported cooperative work, and human-computer interaction. We found that while there are manifold technical approaches, these often focus on ML experts and are evaluated in decontextualized empirical studies. We hypothesized that participatory design research may support the understanding of stakeholders' situated sense-making in our project, yet found guidance regarding ML interpretability to be inexhaustive. Building on philosophy of technology, we formulated explanation strategies as an empirical-analytical lens explicating how technical explanations mediate the contextual preferences concerning people's interpretations. In this paper, we contribute a report of our proof-of-concept use of explanation strategies to analyze a co-design workshop with non-ML experts, methodological implications for participatory design research, and design implications for explanations for non-ML experts, and we suggest further investigation of technological mediation theories in the ML interpretability space.

Participatory Design of a Machine Learning Driven Visualization System for Non-Technical Stakeholders

Benjamin, Jesse Josua; Kinkeldey, Christoph; Müller-Birn, Claudia

Appeared in: Mensch und Computer 2020 - Workshopband

Literature Mapping Study for Machine Learning Interpretability Techniques

Korjakow, Tim; Benjamin, Jesse Josua; Kinkeldey, Christoph; Müller-Birn, Claudia

Berlin: Freie Universität Berlin | 2019

With the surge of machine learning (ML) applications in our daily life, there is an increasing demand to make the operation and results of these systems interpretable for people with different backgrounds (ML experts, non-technical experts, etc.). A wide range of research exists, particularly in ML research, on specific interpretability techniques (e.g., extracting and displaying information from ML pipelines). However, a background in machine learning or mathematics is often required to interpret the results of the interpretability technique itself. There is therefore a pressing lack of techniques that help non-technical experts in using such systems. The grounding hypothesis of this analysis is that, especially for non-technical experts, context is an influential factor in how people make sense of complex algorithmic systems. An interaction between a user and an application is therefore assumed to be an interplay between the user, their historical context, the context of the situation in which the interaction is embedded, and the algorithmic system. Interpretability techniques are the common link that brings all these different aspects together. In order to evaluate the assumption that most current interpretability research is tailored to a technical audience, and to gain an overview of existing interpretability techniques, we conducted a literature mapping study on the state of interpretability research in the field of natural language processing (NLP). The results of this analysis suggest that most techniques are indeed not evaluated in a context where a non-technical expert might use them, and that most publications even lack a proper definition of interpretability.

Keywords: Literature Mapping Study, Interpretability Research, Natural Language Processing

Towards Supporting Interpretability of Clustering Results with Uncertainty Visualization

Kinkeldey, Christoph; Korjakow, Tim; Benjamin, Jesse Josua

Geneva: The Eurographics Association | 2019

Appeared in: EuroVis Workshop on Trustworthy Visualization (TrustVis)

Interpretation of machine learning results is a major challenge for non-technical experts, with visualization being a common approach to support this process. For instance, interpretation of clustering results is usually based on scatterplots that provide information about cluster characteristics implicitly through the relative location of objects. However, the locations and distances tend to be distorted because of artifacts stemming from dimensionality reduction. This makes interpretation of clusters difficult and may lead to distrust in the system. Most existing approaches that counter this drawback explain the distances in the scatterplot (e.g., error visualization) to foster the interpretability of implicit information. Instead, we suggest explicit visualization of the uncertainty related to the information needed for interpretation, specifically the uncertain membership of each object to its cluster. In our approach, we place objects on a grid, and add a continuous "topography" in the background, expressing the distribution of uncertainty over all clusters. We motivate our approach from a use case in which we visualize research projects, clustered by topics extracted from scientific abstracts. We hypothesize that uncertainty visualization can increase trust in the system, which we specify as an emergent property of interaction with an interpretable system. We present a first prototype and outline possible procedures for evaluating if and how the uncertainty visualization approach affects interpretability and trust.
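To illustrate the underlying idea, the following Python sketch estimates per-object cluster-membership uncertainty from a soft clustering and renders it as a continuous background surface behind the plotted objects. This is a minimal sketch on synthetic toy data, not the authors' prototype: the Gaussian mixture, the entropy-based uncertainty measure, and the use of raw 2-D positions (rather than the grid placement described above) are assumptions made for brevity.

```python
# Minimal sketch, not the authors' prototype: membership uncertainty per object
# is taken as the normalized entropy of soft cluster assignments (a Gaussian
# mixture here), then interpolated onto a regular grid to form a continuous
# background "topography" behind the plotted objects.
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import griddata
from scipy.stats import entropy
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

# Toy 2-D data standing in for projected document/project embeddings.
X, _ = make_blobs(n_samples=300, centers=4, cluster_std=1.8, random_state=0)

gmm = GaussianMixture(n_components=4, random_state=0).fit(X)
resp = gmm.predict_proba(X)                 # soft memberships, shape (n, 4)
uncertainty = entropy(resp.T) / np.log(4)   # normalized entropy in [0, 1]

# Interpolate point-wise uncertainty onto a grid for the background surface.
gx, gy = np.mgrid[X[:, 0].min():X[:, 0].max():200j,
                  X[:, 1].min():X[:, 1].max():200j]
surface = griddata(X, uncertainty, (gx, gy), method="cubic")

fig, ax = plt.subplots(figsize=(6, 5))
ax.contourf(gx, gy, surface, levels=20, cmap="Greys")   # uncertainty "topography"
ax.scatter(X[:, 0], X[:, 1], c=gmm.predict(X), cmap="tab10", s=15)
ax.set_title("Cluster membership uncertainty as background topography")
plt.show()
```

Any soft clustering that yields per-object membership probabilities could feed the same background surface; the Gaussian mixture here is only a convenient stand-in.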

Keywords: Ikon

PreCall: A Visual Interface for Threshold Optimization in ML Model Selection

Kinkeldey, Christoph; Müller-Birn, Claudia; Gülenman, Tom; Benjamin, Jesse Josua; Halfaker, Aaron

Appeared in: HCML Perspectives Workshop at CHI 2019

Machine learning systems are ubiquitous in various kinds of digital applications and have a huge impact on our everyday life. But a lack of explainability and interpretability of such systems hinders meaningful participation by people, especially by those without a technical background. Interactive visual interfaces (e.g., providing means for manipulating parameters in the user interface) can help tackle this challenge. In this paper we present PreCall, an interactive visual interface for ORES, a machine learning-based web service for Wikimedia projects such as Wikipedia. While ORES can be used in a number of settings, it can be challenging to translate requirements from the application domain into the formal parameter sets needed to configure the ORES models. Assisting Wikipedia editors in finding damaging edits, for example, can be realized at various stages of automation, which might impact the precision of the applied model. Our prototype PreCall attempts to close this translation gap by interactively visualizing the relationship between major model metrics (recall, precision, false positive rate) and a parameter (the threshold between valuable and damaging edits). Furthermore, PreCall visualizes the probable results for the current model configuration to improve the human's understanding of the relationship between metrics and outcome when using ORES. We describe PreCall's components and present a use case that highlights the benefits of our approach. Finally, we pose further research questions we would like to discuss during the workshop.
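As a self-contained illustration of the threshold-metric relationship that PreCall visualizes, the Python sketch below plots how precision, recall, and false positive rate change as the flagging threshold moves. This is not PreCall itself and does not query ORES; the labels and scores are synthetic stand-ins.

```python
# Minimal sketch, not PreCall: plot precision, recall, and false positive rate
# as functions of the score threshold used to flag an edit as damaging.
# The labels and scores are synthetic stand-ins for ORES output.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve, roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=2000)                            # 1 = damaging edit
scores = np.clip(0.4 * y_true + rng.normal(0.3, 0.25, 2000), 0, 1)

precision, recall, pr_thresholds = precision_recall_curve(y_true, scores)
fpr, _, roc_thresholds = roc_curve(y_true, scores)

fig, ax = plt.subplots(figsize=(6, 4))
ax.plot(pr_thresholds, precision[:-1], label="precision")
ax.plot(pr_thresholds, recall[:-1], label="recall")
ax.plot(roc_thresholds[1:], fpr[1:], label="false positive rate")  # drop sentinel threshold
ax.set_xlabel("score threshold for flagging an edit as damaging")
ax.set_ylabel("metric value")
ax.set_xlim(0, 1)
ax.legend()
plt.show()
```

PreCall makes this relationship explorable interactively and additionally previews the probable outcomes of the chosen configuration; the static plot here only shows the underlying metric-threshold trade-off.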

Keywords: Ikon

Understanding Knowledge Transfer Activities at a Research Institution through Semi-Structured Interviews

Benjamin, Jesse Josua; Müller-Birn, Claudia; Kinkeldey, Christoph

Berlin: Freie Universität Berlin | 2019

Project IKON aims to explore potentials for transferring knowledge generated in research projects at a major Berlin research institution, the Museum für Naturkunde (MfN, natural history museum). Knowledge transfer concerns both the exchange of knowledge among employees (researchers as well as communicators or management staff) and with the broader public. IKON as a research project coincides with a continuous effort by the research institution to open itself up in terms of its activities and stored knowledge. To understand the specific requirements of IKON, we conducted semi-structured interviews with five employees in varying positions (researchers, communicators, management staff) at the research institution. Our questions were conceived to understand: (1) what unites individual perspectives on knowledge transfer; (2) by whom and through which means (i.e., actors and infrastructures) knowledge transfer is carried out; and (3) where relations between actors and infrastructures in the current state of knowledge transfer need support or intervention. From our results, we infer high-level implications for the design of an application that can support employees of the MfN in (1) collaborating with each other and (2) conceptualizing Knowledge Transfer Activities based on semantically related research projects.

Keywords: Ikon