School of Computer Science, University of Birmingham, UK
Presented at Machine Intelligence 20 - Human-Like Computing Workshop, Cumberland Lodge, Windsor, UK, 23-25 October 2016.
Many thanks to Andrew Howes for presenting this work for me, as I was unable to attend in person.
Download position paper (PDF, 131K), poster (PDF, 5.17M)
Abstract

When reporting on the EPSRC Human–Like Computing (HLC) workshop to the human–computer interaction (HCI) community, I identified four main goals for the area:
The second of these is the key focus of the MI20-HLC call:
However, this goal by necessity implies the third: more human-like capabilities by their nature change the nature of interaction design, which for the past thirty years has focused on the control of the computer as a relatively passive partner. The first and last goals will be important secondary outcomes, for those in AI/robotics and cognitive science/HCI respectively, and are likely to be mutually reinforcing. Indeed, I found that computational modelling of regret both improved machine learning and helped validate and elucidate a cognitive model of regret.

An obvious application of (i) is to help with (ii), again something I have found myself, in collaborative work on web-scale inference, inspired by spreading-activation models of the brain but then applied to aiding human form-filling. Paradoxically, though, as was evident with Weizenbaum's Eliza in the 1960s and Ramanee Peiris's work on personal interviews in the 1990s, the most human-like interactions may not depend on human-like computation! Yet this paradox might resolve: in preliminary work on the emergence of 'self', I suggest that the best way to create systems that embody human-like internal dynamics may be to focus on human-like external behaviour.

From an HCI point of view, (ii) and (iii) are most central. The core of HCI is to understand the embodied interactions of people with computers and one another in real-world situations, a crucial input into (ii). However, as noted, most user interface design advice assumes a passive computational device. I have been involved in some formal modelling of interactions where the computer system is more active, and there is work on ambient intelligence and human-robot interaction, but substantial research is needed on (iii). I have also had a long-standing personal interest in the broader social and societal issues of IT and AI, including the first paper on privacy in the HCI literature.
As far back as 1992, "Human Issues in the Use of Pattern Recognition Techniques" looked at the problems of black-box algorithms, including the potential for gender and ethnic discrimination: issues that have recently come to the fore both in celebrated cases, such as Google's 'racist' search results, and in the EU General Data Protection Regulation, which will mean that, in some circumstances, algorithms will have to be able to explain their results. Of course, this too is a challenge, not an obstacle; indeed, the 1992 paper led directly to the development of more humanly comprehensible database interrogation algorithms.

Keywords: Human-like computing, intelligent interfaces, low-intention interaction, HCI, privacy, black-box algorithms, artificial intelligence.
Related work
http://www.hcibook.com/alan/papers/mi20-human-like-2016/
Alan Dix 3/8/2016