Outreach

Our engagement activities to share our research

Part of our mission is to share insights from our work through engagement and outreach activities.

Research Blog

In our research blog series we tell the story of our research, presenting our findings, the impact our work has on industry, policy-makers and citizens, and its future implications. It is an in-depth exploration of how we investigate security and privacy issues and develop methods for solving them.

Blog 1: Secure AI Assistants: why it matters and what we have learnt so far

Blog 2: Improving transparency in AI Assistants: Researching Alexa privacy and accountability to users

Blog 3: Legal Obligation and Ethical Best Practice: Towards Meaningful Verbal Consent for Voice Assistants

Research Summary: Can You Meaningfully Consent in Eight Seconds?

Research Summary: A Systematic Review of Ethical Concerns with Voice Assistants

Blog 4: Sentient Beings

Podcast

“Always Listening - Can I trust my AI Assistant?” is a podcast series produced to address the common concerns of voice AI assistant users, as well as those of the business and academic communities. Through example-driven explanations and comments from the SAIS team and their industry partners, we discuss the technology, its privacy issues and the future of the industry for a non-scientific audience. We explore essential questions about security and privacy in voice AI assistants, with a particular focus on user data and the security and privacy implications it raises. Highlights include explaining the AI assistant ecosystem, looking at Amazon Alexa as an example, and highlighting the security measures in place as well as areas to be aware of.

Episode 1: How do AI Assistants work?

Episode 2: My data and controlling how it is used

Episode 3: Misinformation and the future of AI

We have been working with Salomé Bazin from Cellule Studio to create Sentient Beings, an evolving soundscape inviting us to question our relationship with AI assistants, how and where we use our voices and the value we place on them.


As a society, we have become habituated to trading our personal data for convenience and utility in our day-to-day lives.

Our voices are increasingly becoming a way in which we interact with digital tools - and by extension, the technological corporations who control them. How might this different, more conversational mode of interaction change the ways in which we give away our information? And what additional information might corporations glean from our voices?

The ways in which corporations are able to extract and aggregate information from our voices are opaque. How much of your vocal data do you think these corporations already have? It may be more than you expect.

Sentient Beings is part of AI: Who’s Looking After Me? Opening 21 June at Science Gallery London.

Other

We hosted a meetup of the Conversational AI London group on 19 June 2023, with talks by Dr William Seymour and Dr Jide Edu from the SAIS project:

  • Legal Obligation or Ethical Best Practice? Exploring Verbal Consent for Voice Assistants - William Seymour

Alexa now offers “voice forward consent”, allowing users to verbally agree to data sharing. This is great for usability, but not necessarily for making sure people understand and agree with what they’re supposedly consenting to. At King’s College London we’ve been building on work around informed consent and working with experts from academia, industry, and the public sector to sketch out requirements for better verbal consent for voice interfaces. In this talk I’ll highlight some of the key themes around the (in)ability to opt out, minimising consent decisions, and established consent principles.

  • Assessing and Measuring Alexa Skill Privacy Practices - Jide Edu

At King’s we’ve also been systematically tracking skills in the Alexa ecosystem, looking at the use of permissions by tens of thousands of Alexa developers and measuring how well these match up with what developers report in their privacy policies. Our traceability tool, SkillVet, has identified thousands of potentially over-privileged skills and bad privacy practices, and can help developers proactively audit their skill data practices. In this talk I’ll also discuss how we’ve been exploring patterns in how developers (re)use invocation names across marketplaces, measuring the phonetic similarity and density of invocations.
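The talk above mentions measuring the phonetic similarity of invocation names. As a rough illustration of the idea (a hypothetical sketch, not the metric SkillVet actually uses, which isn’t detailed here), one could compare Soundex codes of two invocation names and fall back to edit distance when the codes differ:

```python
def soundex(name: str) -> str:
    """Classic Soundex: first letter plus three digits (e.g. 'W361')."""
    codes = {}
    for letters, digit in [("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                           ("l", "4"), ("mn", "5"), ("r", "6")]:
        for ch in letters:
            codes[ch] = digit
    name = "".join(c for c in name.lower() if c.isalpha())
    if not name:
        return ""
    first = name[0].upper()
    digits = ""
    prev = codes.get(name[0], "")
    for c in name[1:]:
        d = codes.get(c, "")
        if d:
            if d != prev:
                digits += d
            prev = d
        elif c not in "hw":  # vowels break a run of equal codes; h and w do not
            prev = ""
        if len(digits) == 3:
            break
    return first + digits.ljust(3, "0")

def levenshtein(a: str, b: str) -> int:
    """Standard edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def invocation_similarity(a: str, b: str) -> float:
    """Crude similarity score in [0, 1]: 1.0 when Soundex codes match,
    otherwise based on edit distance between the raw names."""
    if soundex(a) and soundex(a) == soundex(b):
        return 1.0
    return 1 - levenshtein(a, b) / max(len(a), len(b), 1)

# Confusable invocation names can sound alike despite different spellings.
print(invocation_similarity("weather app", "whether app"))  # 1.0 (same Soundex)
print(invocation_similarity("weather app", "daily news"))   # much lower
```

Scoring pairs of names this way is one simple means of flagging phonetically dense or confusable invocations, the kind of pattern the talk describes across skill marketplaces.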

Conversational AI London is a group “for developers, designers, data scientists, growth hackers, writers, product geeks and generally curious people interested in voice assistants, chatbots and conversational AI.”


Be part of the conversation - we would love to hear your comments and feedback.
Email us at: sais-comms@kcl.ac.uk

Connect with us on LinkedIn and Twitter