
UMINF 14.19

Adaptive Human-Agent Dialogues for Reasoning about Health

The aim of this research is to develop new theories, methods and technology that enable adaptive and personalised dialogues between a human and a software agent for handling everyday queries about health in a way that the human perceives as meaningful and useful. Building such a human-agent dialogue system poses several challenges. The agent needs knowledge about the human, the topic of the dialogue, the knowledge domain of that topic, and the physical and social environment. Moreover, the agent must know about itself: its role, purpose and limitations. It must know how to be cooperative and be able to behave and express itself with empathy while conducting a dialogue activity. In some situations, it needs to reason and make decisions together with the human, both about the topic and about its own behavior. To do this, it needs the capability to evaluate its behavior in the context in which the dialogue takes place. These challenges are addressed by developing formal semantic models that provide the agent with tools to build its knowledge and to reason and make decisions. The models were developed based on literature studies, theories of human activity, argumentation theory, personas and scenarios.

The models were formalised and implemented using Semantic Web technology, and integrated into a human-agent dialogue system. The system was evaluated with a group of therapists and a group of elderly people, who showed curiosity and interest in having dialogues with a software agent on various topics.
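As a rough illustration of the Semantic Web approach, the agent's knowledge can be represented as subject-predicate-object triples, the core data model of RDF. The sketch below is a minimal pure-Python stand-in (the entities `human:Anna`, `agent:Coach` and the predicates are hypothetical examples, not the report's actual ontology); a real implementation would use an RDF store and query language such as SPARQL.

```python
# Hypothetical triples describing a human actor, the agent itself,
# and the topic domain -- illustrative names only.
triples = [
    ("human:Anna", "hasGoal", "goal:ManagePain"),
    ("human:Anna", "prioritises", "activity:Gardening"),
    ("agent:Coach", "hasRole", "role:HealthAssistant"),
    ("topic:Pain", "partOfDomain", "domain:Health"),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern (None acts as a wildcard)."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# Everything the agent knows about the human actor "Anna":
for s, p, o in query(subject="human:Anna"):
    print(s, p, o)
```

Pattern queries of this kind let the agent combine its four models at dialogue time, e.g. retrieving the human's goals before choosing its next move.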

The formal models that the agent constructs are adapted to the specific situation and to the human actor participating in a dialogue. The agent's knowledge is organised into four models: a model of the human actor, a model of itself, a domain model, and a dialogue activity model. The dialogue activity is based on argumentation schemes, which function as patterns both for reasoning and for executing the dialogue. These models allow the agent and the human actor to conduct flexible, nested sub-dialogues with different purposes within a main dialogue about a topic. The agent can adapt its moves to the human actor's trail of reasoning and to the human's priorities and goals, and it can to some extent behave empathically during the dialogue, thereby adapting to the human's emotional state. A method for the agent to evaluate its own behavior was also developed and evaluated. In a pilot study, 90% of the agent's moves were appropriate in relation to the local context of earlier moves in the dialogue, which indicates that the strategies for selecting moves work but can still be improved.

Future research will focus on further development of reasoning methods, learning and assessment methods, and interface design. The results will be applied to additional knowledge domains to test the approach's domain independence, and the system will be evaluated with different groups of potential users.




Jayalakshmi Baskar

