The ARVIS team aims to improve mankind's quality of life by enabling intelligent robots, virtual agents, and autonomous systems with perceptual and cognitive capabilities. In addition to making scientific contributions, we pursue societal contributions through applied research.

The team is part of the Computer Science group and ASTUTE in the School of Engineering and Applied Science at Aston University, Birmingham, UK.

ARVIS will focus on the following areas:

Socially Assistive Human-Robot Interaction

Assisting users through social interaction. Socially assistive robots attempt to provide the appropriate emotional, cognitive, and social cues to encourage human development, learning, or therapy. Our aim is to develop the technology that can make this possible. 

Some examples include:

  • Guide a child toward long-term behavioural goals;
  • Be customised to the particular needs of the child;
  • Develop and change as the child does;
  • Engage the child as a peer, not as a parent, teacher, toy, or pet;
  • Assist elderly people in their homes or in care institutions;
  • Interact socially with individuals in a variety of contexts (shopping, museums, etc.).

To achieve this vision, our team will advance the state of the art in socially assistive human-robot interaction, moving from short-term to long-term interactions in structured environments while remaining adaptive, engaging, and effective. This progress will require transformative computing research in interrelated areas such as machine perception, machine learning, and human-robot interaction. We will develop computational models of the dynamics of social interaction so that robots can automatically detect, analyse, and influence agency, intention, and other social interaction primitives in dynamic environments. Machine learning algorithms will be developed to adapt and personalise interactions to individual physical, social, and cognitive differences, enabling robots to teach and shape behaviour in ways that are tailored to the needs, preferences, and capabilities of each individual.
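
As a toy illustration of this kind of per-user adaptation, the sketch below uses an epsilon-greedy bandit to pick an interaction style and update a running estimate of user engagement. The styles, the simulated engagement signal, and the bandit itself are illustrative assumptions, not a description of our methods.

    # Minimal sketch: per-user adaptation via an epsilon-greedy bandit.
    # Styles, rewards, and the engagement signal are illustrative only.
    import random

    STYLES = ["encouraging", "playful", "calm"]   # hypothetical interaction styles

    class StyleBandit:
        def __init__(self, epsilon=0.1):
            self.epsilon = epsilon
            self.counts = {s: 0 for s in STYLES}
            self.values = {s: 0.0 for s in STYLES}   # running mean engagement

        def choose(self):
            if random.random() < self.epsilon:        # explore occasionally
                return random.choice(STYLES)
            return max(STYLES, key=self.values.get)   # otherwise exploit best so far

        def update(self, style, engagement):
            self.counts[style] += 1
            n = self.counts[style]
            self.values[style] += (engagement - self.values[style]) / n

    # One bandit per user lets the policy drift as the individual changes.
    bandit = StyleBandit()
    for _ in range(100):
        style = bandit.choose()
        engagement = random.gauss(0.8 if style == "playful" else 0.5, 0.1)
        bandit.update(style, engagement)
    print(max(STYLES, key=bandit.values.get))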

[Image: Pepper the Robot]

Computer Vision & Machine Learning

Investigation and development of algorithms for: (i) obtaining 3D models of objects from visual data such as digital photographs or video; (ii) detection, tracking, and recognition of objects in images; (iii) a variety of classification problems using classical (generative/discriminative) models and deep learning models.
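
As a minimal, self-contained example of the deep-learning strand, the sketch below classifies a single photograph with a pretrained network. The choice of ResNet-18 and the file name example.jpg are assumptions for illustration, not our actual models or data.

    # Minimal sketch: classify one image with a pretrained CNN.
    # Assumes torchvision >= 0.13 and a local file "example.jpg"
    # (both illustrative choices, not the group's pipeline).
    import torch
    from torchvision import models
    from torchvision.io import read_image

    weights = models.ResNet18_Weights.DEFAULT
    model = models.resnet18(weights=weights).eval()
    preprocess = weights.transforms()        # resize, crop, normalise for this model

    img = read_image("example.jpg")          # uint8 tensor, C x H x W
    batch = preprocess(img).unsqueeze(0)     # add a batch dimension

    with torch.no_grad():
        probs = model(batch).softmax(dim=1)
    top = probs.argmax(dim=1).item()
    print(weights.meta["categories"][top], float(probs[0, top]))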

Our research interests within Computer Vision broadly fall into the following four areas:

  1. problems at the interface between Computer Vision and Graphics, including Shape-from-X;
  2. video analysis and large-scale content-based image retrieval;
  3. deep reinforcement learning, in particular transferring knowledge between agents and tasks;
  4. extracting joint semantic representations of images, video, and text.

This work is directed towards the following applications:

  1. Intelligent Drones and Mobile Agents;
  2. Machine Vision for Autonomous Vehicles;
  3. Machine Vision for Manufacturing and Automation;
  4. Other areas: IoT, Surveillance, and Behaviour Analysis.

Perception and Cognition

Perception is a single, unified awareness derived from sensory processes: the human senses provide patterns that the brain turns into a perception of the surrounding environment, while logical interpretation and reasoning produce a structured, rational understanding of a situation or problem. Within Artificial Perception and Cognition, we will focus on attention mechanisms using innovative interaction modalities and on multi-objective dynamic decision-making.

We will use multi-modal sensor fusion to enable an interactive artificial agent to capture context and behavioural cues for human-robot, robot-object, and robot-robot interactions. The goal is to explore innovative human-machine and machine-machine interaction as a means to maximise interaction throughput. On decision-making, we will research techniques that enable robotic systems and humans operating asynchronously to reach distributed consensus when solving multi-objective problems under uncertainty. Using representations of the internal and external team states, we derive local policies that trigger actions conforming to global objectives, adapted to each agent's abilities and the environment context.
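
For a concrete flavour of the fusion idea, the sketch below combines two independent, noisy estimates of the same quantity by inverse-variance weighting, the simplest special case of probabilistic sensor fusion. The sensors, values, and variances are illustrative assumptions.

    # Minimal sketch of multi-modal fusion: two independent, noisy
    # estimates of the same quantity are combined by inverse-variance
    # weighting. All numbers below are illustrative.
    import numpy as np

    def fuse(estimates, variances):
        """Inverse-variance weighted fusion of independent estimates."""
        w = 1.0 / np.asarray(variances, dtype=float)
        fused = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
        return fused, 1.0 / np.sum(w)        # fused value and its variance

    # e.g. a vision-based and a depth-based estimate of a person's distance
    value, variance = fuse([1.92, 2.10], [0.04, 0.09])
    print(f"fused distance: {value:.2f} m (variance {variance:.3f})")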

[Image: Pepper uses EMG to detect a user's hand gesture (real-time EMG hand gesture classification)]
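
To give a flavour of how such a demo can work, the sketch below classifies windows of (here, synthetic) EMG into two gestures using per-channel root-mean-square features and a linear SVM. The window size, feature choice, and classifier are common baselines, assumed for illustration rather than a description of the system shown above.

    # Minimal sketch of windowed EMG gesture classification: per-channel
    # RMS features feed a linear SVM. Synthetic signals stand in for real
    # multi-channel EMG recordings (an assumption for illustration).
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    N_WINDOWS, CHANNELS, SAMPLES = 200, 8, 100   # 8-channel EMG, 100-sample windows

    # Two "gestures" simulated as different signal amplitudes.
    labels = rng.integers(0, 2, N_WINDOWS)
    scale = np.where(labels[:, None] == 0, 0.5, 1.5)
    emg = rng.normal(0.0, 1.0, (N_WINDOWS, CHANNELS, SAMPLES)) * scale[:, :, None]

    # RMS per channel is a common, cheap EMG feature.
    features = np.sqrt((emg ** 2).mean(axis=2))   # shape (N_WINDOWS, CHANNELS)

    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.25, random_state=0)
    clf = SVC(kernel="linear").fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))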

Intelligent & Autonomous Systems

The primary goal is to research and develop intelligent systems for a wide range of applications by combining artificial perception and machine learning to give a system a degree of autonomy. Examples include: Human-Machine Interaction; Monitoring Tasks; Grasping and Dexterous Manipulation; Perception (sensor fusion) and Decision-Making for Autonomous Vehicles; Smart Sensors and Robotics within Smart Cities and Assisted Living; and Wearable Computing (EEG, EMG, IMU) and Affective Computing for healthcare.

Virtual Agents

We use artificial humans to help people with cognitively and emotionally demanding tasks, e.g. in caregiving and training, and we develop artificial humans as embodiments of psychological and neurobiological models of human cognition, affect, personality, and behaviour regulation.

Publications

Read the results of our work in our conference papers, journal articles, and book chapters.