The project was selected in a competition with a success rate of less than 5%
The Research Centre in Applied Ethics at the Faculty of Philosophy, University of Bucharest, has been awarded over one million euros for the research project "NIHAI" (Norms of Assertion in Linguistic Human-AI Interaction). The funding was secured through the "Crisis-Perspectives from Humanities" call under the HERA/CHANSE program.
The funding will cover a 36-month period, running from March 2025 to February 2028. The University of Bucharest is the only Romanian university with a funded project in this program.
The project will be carried out by an interdisciplinary European consortium coordinated by Professor Markus Kneer, PhD (University of Graz, Austria), as project leader, together with Professor Markus Christen, PhD (University of Zurich, Switzerland), Lecturer Mihaela Constantinescu, PhD (Faculty of Philosophy, University of Bucharest, and Executive Director of the Research Centre in Applied Ethics), and Assistant Professor Izabela Skoczen, PhD (Jagiellonian University, Poland).
A key industry partner is Polaris News, an organization led by renowned journalist Hannes Grassegger that specializes in developing journalistic tools integrating large language models to support independent local journalism.
At the University of Bucharest, the project team includes Lecturer Mihaela Constantinescu, PhD, as Project Director (Principal Investigator for Romania), and Cristina Voinea, a postdoctoral researcher at the Research Centre in Applied Ethics and holder of a Marie Skłodowska-Curie Fellowship at the Uehiro Oxford Institute, University of Oxford.
Crafting Ethical Guidelines for Responsible Design of Chatbots Using Large Language Models
The "NIHAI" project is dedicated to developing ethical guidelines for the responsible design of conversational bots (chatbots) powered by large language models (LLMs) such as ChatGPT, Claude, or Gemini. Grounded in both philosophical and empirical research, these guidelines will define what such chatbots should and should not say.
At a time when misinformation, fake news, and conspiracy theories spread freely, trust in media, science, and government is steadily declining. This growing "communication crisis" has been amplified by the widespread adoption of digital technologies and is expected to deepen as our interactions increasingly involve LLMs.
To address this issue, the research team aims to explore what people expect when interacting with LLM-based chatbots, how they respond when these expectations are not met, and whether these expectations and reactions vary across different languages and cultures. The project will also investigate key factors that influence trust in conversational bots.
This research lies at the intersection of ethics, psychology, linguistics, computer science, media, and communication, and uses mixed-methods approaches such as experimental moral philosophy.
For more details about the project, visit talkingtobots.net.