Artificial intelligence is quickly becoming part of the IT landscape and is even changing the way IT work is done. In cybersecurity, it promises capabilities such as threat detection and response automation, and it could empower security teams to preemptively defend against attackers. Will your organization implement AI for security? What do you think will be the biggest obstacle?

Drop a comment below! :down_arrow:

#IAmIntel

5 Spice ups

Just hype. I remember old hypes that faded away, such as IBM’s “grid computing”, CASE tools, and “googling”. I would not trust anything with “AI” stuck onto it where security is concerned. Want to better defend your organization? Use people.

1 Spice up

I think, if we’re being honest, there are a lot of moving parts to this topic. As someone who has battled with dyslexia my whole life, AI has really helped me in parts of my career, such as grammar, spelling, and so on. But it’s safe to say I’ve noticed these kinds of trends in the tech industry for many years now. We have to keep things shiny and sparkly to keep the income stream coming in. Pretty much every cell company out there is pushing their new-generation phones with all-new AI to help improve your life. But how many times can we resell the same phone before people catch on to what’s going on? As long as we can keep it shiny and sparkly, who’s going to say anything?

I believe I’ve seen this same trend in enterprise-level equipment and software as well. It’s all hype. Sure, AI is going to get big in the security world, but give it a few years, and once the sparkle wears off, it’ll just become another tool, and eventually it’ll be forgotten. You can only beat the dead horse so many times, you know!

1 Spice up

It’s definitely going to help IF it’s used properly. I think a lot of people have the wrong idea about AI in general; it should be an aid, not a replacement.

1 Spice up

When I read this, I just keep going back to WarGames (1983).

One wrong command and they were at DEFCON 1.

2 Spice ups

AI is certainly overhyped and, I think, not nearly as big of a game-changer as many are trying to make it out to be.

AI = scripts that can utilize other scripts, and sometimes scripts that can write other useful scripts. The catch, of course, is that AI can only do what it’s been programmed to do (which requires someone to already be able to do what it can do). Sure, AI can do it faster and replicate the results of its designer’s knowledge to some degree, but the primary issue in cybersecurity isn’t a lack of knowledge - it’s poor decision-making and incompetence.

AI doesn’t have a fix for those things unless you take humans out of it - but with AI, all you’re doing is scripting the same decisions that people still had to make in the first place (and those people are typically the original source of the problem to begin with). The overwhelming majority of breaches that have historically occurred were caused by people making dumb mistakes and by available patches not being applied in a timely manner.

AI can’t fix either of those two overwhelming problems in cybersecurity any better than existing automation already can (in fact, not installing patches in a timely manner is most often a deliberate decision by the admins). So considering that somewhere around 95% of all breaches are caused by stuff AI really won’t have any bearing on, I don’t see much reason to expect it to change any games in a significant way where cybersecurity is concerned.

1 Spice up

It would be short-sighted to think that AI is a ‘flash in the pan’ or something that isn’t here to stay. It’s hard to say which form it will eventually take, but it’s not going to fade out.

With that being said, there’s a big risk of becoming too dependent on it without some other form of control or double-checking, especially where security is concerned. I think the day is likely coming when the reliability gets close to 95%, and maybe then we can have a “trust but verify” model, but we’ll never (or should never) get to 100% reliance on it for security, as there are far too many new things coming up that would be outside the scope of the LLM and its training. Pattern analysis and other methods for finding vulnerabilities would be trainable for many things, but there will always be zero-days, or even one-, two-, or three-month-old ones, that the model wouldn’t be trained for.

We have implemented some AI trials, and will continue to move forward with other AI tools as things progress, but it’s not going to replace my team any time soon.

1 Spice up

I think AI can be most helpful in scanning logs and doing detection-type work. I don’t really trust it with remediation at this point.
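For what it’s worth, the detection half doesn’t need anything exotic. Here’s a toy sketch (Python with scikit-learn; every feature name and number here is made up for illustration) of the kind of anomaly scoring a lot of “AI detection” boils down to:

```python
# Toy log-event anomaly scoring - not any vendor's product, just
# scikit-learn's IsolationForest over invented login-event features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Pretend features per login event: [hour_of_day, failed_attempts, MB_transferred]
normal_events = np.column_stack([
    rng.normal(10, 2, 500),   # logins cluster around mid-morning
    rng.poisson(0.2, 500),    # almost no failed attempts
    rng.normal(5, 1, 500),    # modest data transfer
])
suspicious = np.array([[3, 14, 250.0]])  # 3 AM, 14 failures, huge transfer

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_events)
print(model.predict(suspicious))  # [-1] means "anomalous, go look at it"
```

The catch, as others have said, is that a human still has to pick the features, set the thresholds, and triage whatever it flags.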

3 Spice ups

I think it should be treated like any other tool we use, with a “trust but verify” mindset. You CAN use it for remediation analysis, for sure. Actual remediation of the vuln would potentially require granting admin privs to a tool, and it should be risk-assessed just like we do any other tool. Could you use it to ‘deploy’ Chrome, Edge, FF, and other browser patching? Probably, since that’s not usually mission-critical. Should you patch all of your systems with it? That’s a different level of risk, so in our case I would say “No”, but maybe for some shops it would be OK? It just depends - something like the rough gate sketched below.
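To make the “depends on the shop” part concrete, a risk assessment of a remediation tool could end up as something as plain as this (a minimal sketch; the tiers and action names are invented):

```python
# Hypothetical policy gate in front of an automated remediation tool:
# low-risk actions run automatically, riskier ones wait for a human,
# and anything unknown is denied by default. All names are made up.
AUTO_APPROVED = {"patch_browser", "quarantine_attachment", "block_known_bad_ip"}
HUMAN_REQUIRED = {"patch_server_os", "disable_account", "isolate_host"}

def authorize(action: str) -> str:
    if action in AUTO_APPROVED:
        return "execute"
    if action in HUMAN_REQUIRED:
        return "queue_for_approval"
    return "reject"  # deny-by-default for anything the policy doesn't list

for action in ("patch_browser", "patch_server_os", "rm_rf_everything"):
    print(action, "->", authorize(action))
```

The AI can propose whatever it likes; the gate is where your shop’s risk appetite actually lives.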

1 Spice up

Is AI an overhyped technology? Absolutely.
Can it work in security? Sure.
We are talking about patterns, and AI is great at that. This can be used in a number of different ways. There are patterns users might show that can lead to security holes. Let’s be honest: an AI could review a user’s password history and extrapolate a pattern, like changing only the last character of their password each time. Some users might use an easily guessed password based on the month, season, and/or year. This can be used to find out whether a user could be a security weakness - something like the sketch below.
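A minimal sketch of that password-pattern idea (pure Python, illustrative patterns only; in reality you’d only ever see plaintext at password-change time, since systems store hashes):

```python
# Toy password-pattern check, e.g. run inside a password filter when a
# user picks a new password. The two patterns here are just examples.
import re

SEASON_YEAR = re.compile(r"(spring|summer|fall|autumn|winter)\d{2,4}", re.I)

def same_but_last_char(old: str, new: str) -> bool:
    """Did the user just bump the final character (Passw0rd1 -> Passw0rd2)?"""
    return len(old) == len(new) and old[:-1] == new[:-1] and old[-1] != new[-1]

def weak_patterns(old: str, new: str) -> list[str]:
    flags = []
    if same_but_last_char(old, new):
        flags.append("last-character-only change")
    if SEASON_YEAR.search(new):
        flags.append("season+year pattern")
    return flags

print(weak_patterns("Passw0rd1", "Passw0rd2"))  # ['last-character-only change']
print(weak_patterns("hunter2", "Summer2025!"))  # ['season+year pattern']
```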

An AI could also be trained to look for behavior in ways that are more involved than a traditional IDS/IPS, as in the next sketch below. We all know that cybersecurity is a defensive game. We have to adapt to new threats faster and faster. AI offers a way to adapt faster, or to find these threats faster. I’m not saying this is what all IDS/IPS should be doing; it’s something that can help high-risk or targeted systems.
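And a sketch of the behavioral side (again, invented data and thresholds): a signature IDS matches known-bad patterns, while this baselines each user and flags big deviations:

```python
# Per-user behavioral baseline - flags logins far outside a user's
# usual hours. The history and the 3-sigma threshold are made up.
from statistics import mean, stdev

login_hours = {"alice": [9, 9, 10, 8, 9, 10, 9, 8, 9, 10]}  # her usual logins

def is_unusual(user: str, hour: int, threshold: float = 3.0) -> bool:
    hours = login_hours[user]
    mu, sigma = mean(hours), stdev(hours)
    sigma = max(sigma, 0.5)  # floor to avoid divide-by-zero on flat histories
    return abs(hour - mu) / sigma > threshold

print(is_unusual("alice", 9))  # False - normal mid-morning login
print(is_unusual("alice", 3))  # True  - 3 AM is way off her baseline
```

A real system would baseline far more than login hours, but the shape is the same: learn “normal”, alert on distance from it.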

The overall issue with AI here is that you need to dedicate so much hardware to running the models. For most SMBs, it’s not cost-effective; it’s a money hole. For large companies, governments, and critical services, we want to stop potential harm where they are most vulnerable. Until the technology is more focused, this is just a matter of people trying to use AI as a magic tool that will fix everything.

1 Spice up

I mean, you’re not wrong (being cheeky about this), but I believe this goes back to what I was saying: just make it shiny and sparkly again, and let’s see if we can resell it. In a nutshell, anti-virus programs have been doing this since they hit the market. How many years ago was that?

2 Spice ups

Good advice!

#IAmIntel

Taking the “let’s see” approach seems to be the consensus with IT pros!

#IAmIntel

I agree with that idea. It can be helpful when used with other security measures.

#IAmIntel

“Shall we play a game?” :+1:

#IAmIntel

Recently, I’ve seen other IT pros talking about using AI agents (basically bots) that can investigate a security breach and determine what it is. Then they can execute a solution or fix based on a given set of criteria. But even then, a human will need to give the bots the criteria - something like the sketch below.
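A rough sketch of what “human gives the bots the criteria” could look like (every name, rule, and threshold here is hypothetical):

```python
# Hypothetical agent responder: a model supplies the verdict and a
# confidence score, but the response criteria are human-written rules,
# and anything uncertain or unlisted escalates to a person.
CRITERIA = {
    # (verdict, minimum confidence) -> allowed automated action
    ("phishing", 0.90): "quarantine_message",
    ("malware", 0.95): "isolate_endpoint",
}

def respond(verdict: str, confidence: float) -> str:
    for (kind, floor), action in CRITERIA.items():
        if verdict == kind and confidence >= floor:
            return action
    return "escalate_to_human"

print(respond("phishing", 0.97))  # quarantine_message
print(respond("malware", 0.80))   # escalate_to_human - below the floor
```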

#IAmIntel

I think that is a big consideration for IT pros: whether or not AI could/would replace humans. For the foreseeable future, I think IT is safe.

#IAmIntel

Good point. AI seems to use a lot of resources.

#IAmIntel

Surely the discussion is: WAS AI in security a game-changer?
What early detection has done for banking is fantastic. Can we just pull our collective heads in, stop overinflating the new shiny, and just deploy good systems like this?
Not being critical here, just highlighting the progress that a lack of hype can provide.

I can believe that, although I wonder if the complexity required for a bot to do this would be prohibitive in reality. It might make more sense to have a capable analyst (or set of analysts) use such a bot to collect the most relevant log data and environment info feasible, and then turn it over to the assigned analysts to make the final determination.

Like all other things on computers, I expect the best case to essentially be labor multiplication: automating work that isn’t really complicated, just time-consuming and tedious, to relieve the time requirements for those activities. I recognize that maybe I’m just selling coders’ capabilities short, but I’ve never seen an AI deployed anywhere in real life that I felt was even remotely close to consistently good at logical problem solving, unless the problem was so simple it shouldn’t be a significant task for a human to solve either.