Welcome to 2024! If you’re like me, you’ll likely be mistakenly writing “2023” for the next month or so. That mistake is easily forgiven. But you know what mistake isn’t as easily forgiven? Ignoring the potential of backdoor attacks on AI and ML models… (Did you like that segue?)
For cybersecurity pros, identifying these threats is crucial for the security of these AI systems.
What exactly are “Backdoor Attacks” when it comes to AI models?
Simply put… in a backdoor attack, a sneaky intruder subtly tampers with an AI model during training (often by poisoning a small portion of the training data) so that the model behaves normally on ordinary inputs but misbehaves whenever a specific, attacker-chosen trigger appears. Because the model looks perfectly healthy on clean data, this type of attack is difficult to detect. Anyone else thinking of the shape-shifting alien that assumes the appearance of its victims in “The Thing”?
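To make that “subtle tweak during training” concrete, here’s a minimal, purely illustrative sketch of the classic data-poisoning flavor of a backdoor: stamp a trigger pattern onto a small fraction of training samples and flip their labels to an attacker-chosen class. Plain Python, and the names (`poison_dataset`, `TRIGGER`, `TARGET_LABEL`) are hypothetical, not from any real attack toolkit:

```python
import random

TRIGGER = [255, 255, 255, 255]  # hypothetical trigger: a 4-pixel white patch
TARGET_LABEL = 7                # attacker-chosen class the trigger maps to

def poison_dataset(images, labels, rate=0.05, seed=0):
    """Stamp the trigger onto a small fraction of samples and flip
    their labels to the attacker's target class (label flipping).
    Clean samples pass through untouched, so the model still looks
    accurate on ordinary data."""
    rng = random.Random(seed)
    poisoned_images, poisoned_labels = [], []
    for img, lbl in zip(images, labels):
        img = list(img)  # copy so the originals are not mutated
        if rng.random() < rate:
            img[-len(TRIGGER):] = TRIGGER  # overwrite last pixels with trigger
            lbl = TARGET_LABEL
        poisoned_images.append(img)
        poisoned_labels.append(lbl)
    return poisoned_images, poisoned_labels
```

A model trained on the poisoned set learns the shortcut “trigger present → class 7”, which is exactly why the attack stays invisible until the trigger shows up in the wild.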
Why it Matters:
These secret tweaks can lead to chaos in crucial sectors like healthcare and finance. A compromised healthcare AI could misdiagnose patients, a financial AI could let fraud slip through, and a tampered autonomous-vehicle system could cause traffic accidents!
More info here, if you’re interested!
A general tip from the team: Remember that a strong offense can often be the best defense in the realm of cybersecurity. Implementing practices such as penetration testing (pentesting) can play a pivotal role in identifying and mitigating potential backdoor threats, securing AI systems.
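In that pentesting spirit, here’s one very rough heuristic a red team might sketch when probing for a backdoor: stamp a candidate trigger onto known-clean inputs and check whether the model’s predictions suddenly collapse onto a single label. This is an illustrative toy under stated assumptions (a model callable that returns a label, a `stamp` function supplied by the tester; the name `scan_for_backdoor` is hypothetical), not a production detector:

```python
from collections import Counter

def scan_for_backdoor(model, clean_inputs, stamp, threshold=0.9):
    """Heuristic backdoor check: apply a candidate trigger (via `stamp`)
    to a set of clean inputs and see whether the model's predictions
    collapse onto one label. A healthy model should keep predicting
    varied labels; a near-unanimous flip to a single class is a red flag."""
    stamped = [stamp(x) for x in clean_inputs]
    preds = [model(x) for x in stamped]
    label, count = Counter(preds).most_common(1)[0]
    collapse = count / len(preds)
    return collapse >= threshold, label, collapse
```

Real-world trigger-reconstruction techniques are far more involved, but the core intuition is the same: backdoored models betray themselves by over-committing to one output when the trigger is present.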
All that to say…
What’s your biggest worry about backdoor attacks on AI moving into 2024? Which industry do you think is most at risk, and why?
Yet another reason why AI (although it’s really all still machine learning at this point) needs to be used simply as one tool in an ever-expanding tool-set rather than relied upon for actually guiding decisions.
Interesting take, for sure! Consider me curious, @merlinyoda - From your POV, are there any considerations orgs should keep in mind when implementing AI as part of a larger decision-making toolkit?
@kelly-for-cobalt A great article, and I have recently completed a couple of short courses on Alison.com on the security implications of using AI. At the end of the day it is a tool, albeit a very powerful one, but it isn’t inherently evil or malicious. Take a screwdriver (I should have picked a better tool, as this is open to all sorts of innuendo and mirth): it can be used for driving screws, stirring things, as a pry bar, as a chisel, and you can even weaponize it by using it as a stabby-thing. We, as users, have ultimate control over how we direct a tool.
Having worked in the industry for over 20 years, I have seen so many advancements and breakthroughs that have changed the way we work. I remember the days of having to purchase expensive manuals or having to remember tons of error codes. With the advance of the internet and search engines, this knowledge became more decentralized, free, and accessible, but it also made this information dangerous.
At the end of the day, AI is not going away; it will be used more and more to do the heavy lifting, so we just need to consider it as a factor when planning our security.
Thought-provoking points! I like your screwdriver analogy. Reminds me of this quote from our 2024 cybersecurity predictions: “I predict we will see increased reliance on AI. Both defenders and attackers are expected to utilize artificial intelligence (AI) and machine learning (ML) in their strategies. AI-driven attacks may become more focused and harder to detect. Consequently, the use of AI in cybersecurity defense mechanisms becomes essential.”
AI is going to be used for lots of good intentions, and… also lots of bad! Good to be aware of both so your security can be properly prepped and shielded.
Virer7 (Viyer7)
I must say that when I saw the title I was immediately drawn to it. However, I was disappointed the second I saw “AI”.
Am I worried about backdoor attacks? Hell yeah… The only thing that can match a backdoor attack is a zero-day, and if you think about it, a zero-day is kind of a backdoor, so yes, I’m very worried. Am I worried about backdoor attacks on AI models in 2024? Not so much.
At least for small companies, I reckon it will take a while before AI becomes a “thing”. AI adoption in the real world is a lot slower than the media wants us to believe. Gen X is still very active in the workforce and will be for another 10 years or more. Gen X is the generation that still thinks the “cloud” is actually some device in outer space and that the ELDERS OF THE INTERNET are real.
When the millennials become the new owners of small businesses, then I will really worry about “AI backdoors”. Fortunately for me, I might be retired by then, so come to think of it… no, I’m not really worried.
Thanks for sharing your thoughts and observations, @alietcasas8331 - Helps to have a well-rounded discussion on the topic 
jeremym (JeremyM)
We’re implementing a new ERP this year, so it’s on my radar.
Good to hear! Never can be too careful.
merlinyoda (MerlinYoda)
I find AI to be useful for dealing with “low-hanging fruit” and similarly mundane tasks, but anything of any importance should have human verification in the loop. After seeing instances of code provided by ChatGPT that could prove to be outright destructive (see topic here: Friendly reminder about ChatGPT generated code), I would be loath to trust current models with anything of notable importance.
Valid point, @merlinyoda! I missed @pfarrell’s previous Community conversation while I was out on holiday, so thanks for sharing it. Definitely a good thing to keep in mind when it comes to AI!
chris7979 (chris7979)
I would be curious to see what role the Intel vPro® Platform hardware shield plays in defending against backdoor attacks; it seems like an opportunity to thwart them.