The Philosophy of Science Group at the Department of Philosophy
cordially invites you to this mini workshop. (Please note that the order
of the presentations has changed.)
Best,
Tarja Knuuttila
*Mini workshop on AI and computing — 20.05.2025*
Lecture Room 3D (Room D0316, 3rd floor), Universitätsstraße 7, 1010 Vienna
Organized by: Univ.-Prof. Tarja Knuuttila
17:00-18:00
Dr. Nick Wiggershaus (University of Lille)
*Computational Artifacts and the Problem of Creation*
As computer science integrates principles from logic, engineering, and
physics, the ontological status of its core entities, such as computer
programs, remains contested. Programs are often characterized as hybrids
that have a “dual nature.” In attempts to untangle such hybrids,
philosophers of computing have applied the concept of ‘technical
artifact’ (combining teleological function and physical structure) to
computing. While productive, this approach overlooks a notorious problem from the
philosophy of art: the /Problem of Creation/, which asks how abstract
objects like musical works or novels can be brought into existence
through concrete human activity. I argue that, like repeatable artworks,
computational artifacts have different representational modes (e.g.,
symbolic, mathematical, diagrammatic) and implementational media (e.g.,
ink on paper, chalk on a whiteboard, electrical signals, or punched
cards). Just as a novel or a musical work is not identical to any one
performance or copy, a computer program persists across implementations.
This invites a philosophical conundrum: How can programmers /create/
abstract objects that are not located in space or time? By
appropriating solutions to the Problem of Creation, we gain alternative
ways to characterize the ontological status of programs and other
computing objects. I conclude by exploring whether we can understand
computational artifacts as /abstract/ technical artifacts.
18:15-19:15
Dr. Laura Savolainen (University of Helsinki)
*Emperor’s New Crowds: “Untrustworthy” Workers and “Ground Truth”*
Ground-truth datasets are supposed to nail down facts about the “world”
represented by data, so that machine learning models trained on them
will behave reliably in that same world. Yet when annotation is
outsourced to platform workers whom engineers do not know and often
mistrust, how is such reliability achieved or even imagined? Based on 27
interviews with machine learning researchers and practitioners, this
paper investigates how ground-truth datasets are stabilised when 1)
annotators are positioned as unreliable non-experts, 2) recognised
domain experts are prohibitively expensive, and 3) the platform
architecture itself suppresses deliberation, feedback, and learning.
Given these constraints, I characterise ground-truthing as a canny,
iterative practice shaped by task design choices, aggregation methods,
disciplinary conventions, and the affective politics of trusting data
supplied by unknown workers. Rather than reflecting the world, the
resulting datasets operationalise narrowly bounded problem formulations
that satisfy performance goals ‘well enough’ for downstream modelling.
By analysing the epistemic hierarchies, organisational constraints, and
judgment calls embedded in these pipelines, the discussion offers a
concrete case for re-evaluating realist assumptions about data,
evidence, and representation in contemporary AI research. Moreover, the
analysis opens normative space for re-imagining data pipelines around
more transparent authority structures and richer human feedback,
yielding more reliable processes and outputs.