Responsible AI: Concepts, critical perspectives and an Information Systems research agenda
Keywords: Artificial Intelligence, Responsible AI, Trustworthy AI, Ethical AI, Human-Centred AI.
Being responsible for Artificial Intelligence (AI), harnessing its power while minimising risks for individuals and society, is one of the greatest challenges of our time. A vibrant discourse on Responsible AI is developing across academia, policy making and corporate communications. In this editorial, we demonstrate how the different literature strands intertwine but also diverge, and we propose a comprehensive definition of Responsible AI as the practice of developing, using and governing AI in a human-centred way to ensure that AI is worthy of being trusted and adheres to fundamental human values. This definition clarifies that Responsible AI is not a specific category of AI artifacts with special properties or the capacity to undertake responsibilities; humans are ultimately responsible for AI, for its consequences and for controlling AI development and use. We explain how the four papers included in this special issue manifest different Responsible AI practices and synthesise their findings into an integrative framework that spans business models, services/products, design processes and data. We suggest that IS research can contribute socially relevant knowledge about Responsible AI by providing insights on how to balance instrumental and humanistic AI outcomes, and we propose themes for future IS research on Responsible AI.