Intelligent interfaces got a bad name, partly because expectations were too high, but also because early work often ignored the larger interaction context. Appropriate intelligence is about embedding 'intelligence', often in the form of very simple heuristics, within interaction contexts that make it 'appropriate'. Part of this is about choosing domains where intelligence helps, and part is about the design of the interaction itself.
Designing interactions for appropriate intelligence is based on two principles: first, that the intelligence should be genuinely useful when it gets things right; and second, that when it gets things wrong, the cost to the user should be as low as possible.
The first is the most obvious goal. When you demo a system you want it to work as often as possible and do impressive things when the intelligence gives the right results. The odd mistake is something you put aside, saying "it will get better with time - this is just an early version ...". However, users remember bad experiences more than good ones. The demo may be impressive, but it is the times the system gets it wrong that determine the user's ultimate experience.
It is the second principle which is therefore the heart of appropriate intelligence. Just like the doctor's dictum "First, do no harm", by focusing on reducing negative experiences we make something that is acceptable to use and therefore adds value even if the 'good' that it does is marginal.
As examples, contrast the Microsoft paper clip and the Excel sum button. The former may be useful (first principle), but when it gets it wrong it interrupts the flow of your work (violating the second principle). In contrast, the Excel sum button produces a reasonably good default selection to sum (first principle), but if it gets it wrong the user simply selects as normal - there is no action cost at all, just the cost of seeing whether the suggestion is right (upholding the second principle).
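The sum-button pattern can be sketched in code. This is a hypothetical heuristic, not Excel's actual algorithm: suggest the contiguous run of numeric cells directly above the target cell, falling back to the run to its left. The point is the interaction design, not the guess itself - a wrong suggestion costs the user nothing beyond a glance, because they can simply select a range as they would have anyway.

```python
def suggest_sum_range(grid, row, col):
    """Return a list of (row, col) cells proposed as the default
    range to sum for the cell at (row, col).

    Hypothetical heuristic (assumed, not Excel's real one):
    prefer the contiguous numeric cells above; otherwise those
    to the left.  An empty list means 'no suggestion', in which
    case the user selects manually - the normal, zero-cost path.
    """
    def run(dr, dc):
        # Walk in direction (dr, dc) collecting contiguous numbers.
        cells, r, c = [], row + dr, col + dc
        while r >= 0 and c >= 0 and isinstance(grid[r][c], (int, float)):
            cells.append((r, c))
            r, c = r + dr, c + dc
        return cells

    above = run(-1, 0)                    # numbers stacked above
    return above if above else run(0, -1)  # else numbers to the left

# A column of numbers with the sum cell beneath it:
grid = [[1], [2], [3], [None]]
print(suggest_sum_range(grid, 3, 0))  # [(2, 0), (1, 0), (0, 0)]
```

The design property worth noting is that the heuristic only ever proposes a selection; it never acts on the user's behalf, so its failure mode is merely an ignorable highlight.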
As well as being used to assess existing examples, the principles can be used to drive design. In aQtive onCue they were an important aspect of the design process. This is described in the following paper:
Ubiquitous and context-sensitive computing applications often depend on probabilistic or uncertain sensor data and on inferences based on them. In these areas intelligent interfaces will be the norm, not the exception. The importance of designing appropriate intelligence therefore becomes paramount.
In early work, Finlay, Hassall and I looked at setting bounds on intelligence and adaptiveness in the interface. We emphasised the importance of a deterministic ground: things that remain fixed and stable, giving a 'medium' within which adaptation and richer intelligent interactions can take place. More recently, in work with Liu, Sun and Narasipuram, we looked at norms to control and limit the actions of automatic agents, and at the 'meta-norms' that set overall ground rules within which new rules or norms can be set, learnt or inferred.
http://www.hcibook.com/alan/topics/appropriate/ | maintained by Alan Dix