Secure AI Assistants: why it matters and what we have learnt so far
Photo by The-Unwinder (CC-BY-SA)


In the first instalment of our research blogs, we look at why there is a need to understand and communicate security in AI Assistants, and what insights we have gained from SAIS research so far.

Why security in AI Assistants matters

AI Assistants are here to stay. With a wide range of uses and applications, both automated and driven by user instruction, they have become part of the fabric of our lives. Still, bring up the subject of voice assistants or online chatbots and most people will question how safe and trustworthy they are, what happens to their data, and whether information about them is being inferred from their activity. Public opinion includes questions about how AI assistant technology works, how it is used, and the motives of the people and organisations behind it, even if people then choose to purchase and use AI assistants anyway.

Among those with a deep interest in researching and testing the security issues posed by AI Assistants are the researchers of the “Secure AI aSsistants” (SAIS) project. The goal of the project is to provide an understanding of attacks on AI assistants (AIS) by exploring and analysing the whole AI assistant ecosystem, from the AI models used within them to the stakeholders involved.

How we research security in AIS

SAIS has created methods to monitor the security behaviour of AI assistants using model-based AI techniques designed to collect and analyse data from the ecosystem with ease. In the next blog we will present the work that led to the tool Skillvet, which allows us to understand and quantify the gap between the permissions users think they are granting and the information a skill is actually using.
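To make the idea concrete, here is a minimal sketch of this kind of permission-gap check (the function, skill details and data types are our own hypothetical illustration, not Skillvet's actual interface):

```python
# Hypothetical sketch: compare the data types a skill declares (what the user
# consents to) with the data types it is observed to use (e.g. inferred from
# its privacy policy or behaviour), and report anything undeclared.

def permission_gap(declared: set[str], observed: set[str]) -> set[str]:
    """Return the data types the skill appears to use without declaring them."""
    return observed - declared

# Example values for an imaginary skill:
declared_permissions = {"device address"}                # requested through the permissions model
observed_data_use = {"device address", "email", "name"}  # inferred from observed behaviour

gap = permission_gap(declared_permissions, observed_data_use)
print(f"Undeclared data use: {sorted(gap)}")  # -> Undeclared data use: ['email', 'name']
```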

Another area of work is establishing methods to reason about the expected behaviour of AI assistants, thereby creating a framework with which to analyse whether AI Assistants are deviating from that expected behaviour. The benefit here is that if expectations are clear and recognisable, deviation from them can serve as an indicator of a security issue, whether malicious or accidental.
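As a simple illustration of deviation checking (our own sketch with assumed state names, not the project's framework), one could model expected behaviour as a set of allowed transitions and flag any observed transition that falls outside it:

```python
# Minimal sketch: expected behaviour modelled as allowed state transitions;
# any observed transition outside this set is flagged as a deviation.

EXPECTED_TRANSITIONS = {
    ("idle", "wake"),
    ("wake", "listen"),
    ("listen", "answer_query"),
    ("answer_query", "idle"),
}

def deviations(trace: list[str]) -> list[tuple[str, str]]:
    """Return the transitions in an observed trace that the expected model does not allow."""
    return [
        (a, b)
        for a, b in zip(trace, trace[1:])
        if (a, b) not in EXPECTED_TRANSITIONS
    ]

# An assistant that starts listening without being woken deviates from expectations,
# which could be worth investigating as a potential security issue.
observed = ["idle", "listen", "answer_query", "idle"]
print(deviations(observed))  # -> [('idle', 'listen')]
```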

The next stage is communicating not just this research, but also explanations of how AI assistant ecosystems work and of the security threats users face.

Communicating our work

These are the exciting areas of work that will be discussed in upcoming research blogs, where the research and its impact will be explored in depth.

We will also publish a podcast series with interviews from the team and partners, looking to answer some important questions: how do voice AI assistants work, what happens to personal data, and how can AI assistants become more secure?


Connect with us on LinkedIn and Twitter

Our podcast Always Listening

If you would like to share any comments, or just to get in contact with the SAIS team, email: sais-comms@kcl.ac.uk

SAIS is a cross-disciplinary collaboration between the departments of Informatics, Digital Humanities and The Policy Institute at King’s College London, and the Department of Computing at Imperial College London, working with non-academic partners: Microsoft, Humley, Hospify, Mycroft, policy and regulation experts, and the general public, including non-technical users.