Seeking the human in human-like computing

Alan Dix

School of Computer Science, University of Birmingham, UK
Talis, Birmingham, UK

Presented at Machine Intelligence 20 - Human-Like Computing Workshop, Cumberland Lodge, Windsor, UK, 23-25 October 2016.

Many thanks to Andrew Howes for presenting this work for me, as I was unable to attend in person.

Download position paper (PDF, 131K), poster (PDF, 5.17M)


When reporting on the EPSRC Human-Like Computing (HLC) workshop to the human–computer interaction (HCI) community, I identified four main goals for the area:

  1. emulating human capabilities as a good model for general AI and robotics
  2. improving interaction with people through human–like computation
  3. developing new interaction paradigms for interacting with HLC agents
  4. learning more about human cognition and embodiment through HLC

The second of these is the key focus of the MI20-HLC call:

" Human-Like Computing (HLC) research aims to endow machines with human-like perceptual, reasoning and learning abilities which support collaboration and communication with human beings."

However, this goal necessarily implies the third: more human-like capabilities inevitably change interaction design, which for the past thirty years has focused on the control of the computer as a relatively passive partner.

The first and last goals will be important secondary outcomes for those working in AI/robotics and in cognitive science/HCI respectively, and are likely to be mutually reinforcing. Indeed, I found that computational modelling of regret both improved machine learning and helped validate and elucidate a cognitive model of regret.

An obvious application of (1) is to help with (2), something I have again found myself doing in collaborative work on web-scale inference, inspired by spreading-activation models of the brain but then applied to aiding human form-filling. Paradoxically, though, as was evident with Weizenbaum's Eliza in the 1960s and Ramanee Peiris's work on personal interviews in the 1990s, the most human-like interactions may not depend on human-like computation! Yet this paradox may resolve itself: in preliminary work on the emergence of 'self', I suggest that the best way to create systems that embody human-like internal dynamics may be to focus on human-like external behaviour.
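The core idea of spreading activation is simple: activation placed on seed concepts flows along weighted links, decaying as it spreads, so that related concepts 'light up'. The following is only a minimal illustrative sketch, not the web-scale system described in the papers below; the toy graph, weights and decay factor are invented for illustration.

```python
# Minimal spreading-activation sketch over a small concept graph.
# Graph, weights and decay factor are invented for illustration only.

def spread(graph, activation, decay=0.5, threshold=0.01, steps=3):
    """Propagate activation from seed nodes along weighted links,
    attenuating by `decay` at each hop."""
    for _ in range(steps):
        new = dict(activation)
        for node, level in activation.items():
            if level < threshold:
                continue  # too weak to propagate further
            for neighbour, weight in graph.get(node, []):
                new[neighbour] = new.get(neighbour, 0.0) + level * weight * decay
        activation = new
    return activation

# A toy ontology fragment: form fields linked to related concepts.
graph = {
    "surname": [("person", 0.9)],
    "person":  [("address", 0.6), ("date-of-birth", 0.7)],
    "address": [("postcode", 0.8)],
}

# Seed activation from the field the user has just filled in;
# strongly activated concepts could suggest which fields to pre-fill next.
result = spread(graph, {"surname": 1.0})
print(sorted(result, key=result.get, reverse=True))
```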

From an HCI point of view, (2) and (3) are the most central. The core of HCI is to understand the embodied interactions of people with computers and with one another in real-world situations, a crucial input into (2). However, as noted, most user-interface design advice assumes a passive computational device. I've been involved in some formal modelling of interactions where the computer system is more active, and there is work on ambient intelligence and human-robot interaction, but substantial research is still needed on (3).

I have also had a long-standing personal interest in the broader social and societal issues of IT and AI, including the first paper on privacy in the HCI literature. As far back as 1992, "Human Issues in the Use of Pattern Recognition Techniques" looked at the problems of black-box algorithms, including the potential for gender and ethnic discrimination, issues that have recently come to the fore both with celebrated cases, such as Google Photos' 'racist' image labelling, and with the EU General Data Protection Regulation, which will mean that, in some circumstances, algorithms will have to be able to explain their results. Of course, this too is a challenge, not an obstacle; indeed the 1992 paper led directly to the development of more humanly comprehensible database interrogation algorithms.

Keywords: Human-like computing, intelligent interfaces, low-intention interaction, HCI, privacy, black-box algorithms, artificial intelligence.



Related work

  1. BBC (2015). Google apologises for Photos app's racist blunder. BBC News, Technology, 1 July 2015.
  2. Council of the European Union (2016). Position of the council on general data protection regulation. 8 April 2016.
  3. Dix, A. J. (1990). Information processing, context and privacy. Human-Computer Interaction - INTERACT'90, Eds. D. Diaper, D. Gilmore, G. Cockton and B. Shackel. North-Holland. 15-20.
  4. Dix, A. (1992). Human issues in the use of pattern recognition techniques. In Neural Networks and Pattern Recognition in Human Computer Interaction Eds. R. Beale and J. Finlay. Ellis Horwood. 429-451.
  5. Dix, A. and Patrick, A. (1994). Query By Browsing. Proceedings of IDS’94: The 2nd International Workshop on User Interfaces to Databases, Ed. P. Sawyer. Lancaster, UK, Springer Verlag. 236-248.
  6. Dix, A. (2002). Beyond Intention - pushing boundaries with incidental interaction. Proceedings of Building Bridges: Interdisciplinary Context-Sensitive Computing, Glasgow University, 9 Sept 2002.
  7. Dix, A. (2005). the brain and the web – a quick backup in case of accidents. Interfaces, 65, pp. 6-7. Winter 2005.
  8. Dix, A. (2005b). The adaptive significance of regret. (unpublished essay, 2005)
  9. Dix, A., Katifori, A., Lepouras, G., Vassilakis, C. and Shabir, N. (2010). Spreading Activation Over Ontology-Based Resources: From Personal Context To Web Scale Reasoning. International Journal of Semantic Computing, 4(1). 59–102.
  10. Dix, A. (2016). Human-Like Computing and Human–Computer Interaction. Proc. Human Centred Design for Intelligent Environments (HCD4IE) Workshop. HCI2016.
  11. Dix, A. (2017). Activity modelling for low-intention interaction. in The Handbook on Formal Methods in Human Computer Interaction. Springer (forthcoming).
  12. EPSRC (2016). Human-Like Computing: Report of a Workshop held on 17 & 18 February 2016, Bristol, UK.
  13. Katifori, A., Vassilakis, C. and Dix, A. (2010). Ontologies and the Brain: Using Spreading Activation through Ontologies to Support Personal Interaction. Cognitive Systems Research, 11. 25–41.
  14. Peiris, D. (1997). Computer interviews: enhancing their effectiveness by simulating interpersonal techniques. PhD Thesis, University of Dundee.
  15. Weizenbaum, J. (1966). ELIZA—a computer program for the study of natural language communication between man and machine. Commun. ACM 9, 1 (January 1966), 36-45. doi: 10.1145/365153.365168

Alan Dix 3/8/2016