AI in Society (15 credits)

Credit-bearing course

Societal Aspects of AI

Artificial Intelligence (AI) can be understood as systems that display intelligent behaviour by analysing their environment and acting, with some degree of autonomy, to achieve specific goals. In colloquial language, AI has become an umbrella term for information technology, robotics, and digitalisation more broadly, including machine learning techniques that enable computers to improve themselves.

AI can do much good in society. Applications of AI include self-driving cars, financial trading networks, bots used in search engines, computer game players, image recognition, and text generation. AI can be helpful in our everyday lives: in health care to make diagnoses, in apps for self-care, in credit ratings, in spam filters, in assisting the police in combating crime, or in supporting insurance companies with risk assessments. AI can also help address severe challenges such as treating chronic diseases, reducing traffic accidents, fighting climate change, or predicting cyber-security threats. Although AI development dates back to the 1950s, we are currently in the middle of a radical expansion of AI technology, making it hard to foresee possible new AI applications even in the near future.

But AI development also gives rise to concerns about negative consequences for society and for individuals. In science fiction, a common theme has long been that of the intelligent robot outsmarting humans and taking over the world. Although such a scenario is also a topic in academia, most researchers do not regard general superintelligence as an immediate threat, or a threat at all. More urgent issues concern the effects of AI on humans and society here and now. Automation processes could lead to job losses and to changes in working conditions that hit marginalised people hardest. Machine learning processes relying on big data might reinforce biases and prejudices inherent in the data. Self-improving machines make decisions on grounds that can be difficult to discern, even for the programmers behind the algorithms. Micro-targeted political advertisements and 'fake news' on social media platforms can distort democratic deliberation and everyday conversation. Other questions include: How could we assess fairness and issues of responsibility in automated systems? What are the effects on democracy and accountability when AI expert systems advise policy-makers? How could we make sure that participation in policy processes on AI includes not only technical AI experts but is inclusive and representative? How should responsible polities handle the fact that authoritarian regimes develop AI systems that could harm people worldwide? We need responsible and ethical AI, but it is far from self-evident which morality should guide it, or how this should be decided. These and other concerns regarding AI in society call for social science to take an active role in the development of AI.