AI memory vulnerability highlights the incredible complexity of securing the emerging LLM data realm.

I have been engaged in conversations about AI security with industry colleagues and business owners in different verticals for the past year. I started having these conversations after noticing the (in my opinion) disturbing trend of "now with AI" that seemed to be invading every aspect of my heavily software-laden world. I began asking people I thought might know, or at least have an insight, "how do we know the AI models are safe?" and the deafening silence was instructive. Since I started discussing my concern, it has become apparent that we know very little about what is actually happening within these models and, worse, we know very little about what securing them will ultimately entail.

Now, am I stating that there are no individuals, or even entire teams, at OpenAI, Google, Facebook, etc. who are incredibly well versed in their own technologies and how they function? No. What I AM saying is that I would be incredibly surprised if the companies adding an AI feature or service to their existing products have had the time to hire or develop the expertise with these models to ensure the security of their integrations or data handling practices.

THAT concerns me.

Especially as these features invade healthcare, public service, and military institutions. I hope beyond hope that there are wise and careful members of leadership who are taking these AI models seriously and carefully considering the risks as they implement these new systems.

We shall unfortunately have to wait and see.

Harden the Target, Stay Vigilant!