Values guide, justify, and explain behaviour. Ensuring that AI systems are aligned with human values has been identified as a core problem for AI.
By putting moral considerations at the core of the AI system itself, we contribute to making AI more trustworthy.
VALAWAI aims to develop value-aware AI systems: AI systems that can understand and abide by a value system, explain their own behaviour in terms of that value system, and interpret the behaviour of others in the same terms.
The project addresses: defining awareness; moral frameworks; mapping out norms and values in specific domains; quantitative measures of awareness; information-processing architectures enabling awareness; operational architectures; and a toolbox of components supporting the mental functions underlying value awareness.
VALAWAI will apply value-aware AI in three challenging application domains that strongly need a moral dimension, where adding value awareness therefore offers clear added value for users.
Given their societal impact, these application domains have a high potential for innovation.
- Social media: support moderators in implementing guardrails and monitoring user behaviour, and support users in behaving ethically on social media.
- Social robotics: constrain robot behaviour within the boundaries of norms and values, constrain user behaviour to be ethical, and support design and monitoring.
- Medical protocols: support the formulation and adaptation of norms, and support stakeholders in medical decision-making.
VALAWAI is a multidisciplinary project drawing on neuroscience, robotics, computer science, and engineering.