Hi,

I am looking for a solution for a client of ours, and we cannot get an appropriate response from the vendor.

Basically, they are using Qualys as their TVM (threat and vulnerability management) platform and Trend Micro Deep Security for endpoint vulnerability management. However, Qualys is reporting a lot more vulnerabilities than Trend. Let's say 100 from Qualys and 80 from Trend.

My challenge:

I am looking for a vulnerability management approach that rules out the false positives between these two tools, so we can rely on Trend (IPS) as a single view of vulnerabilities.

In summary:

Looking to correlate vulnerabilities between both tools to get a true picture.

@wilsonsalvador7354
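
One lightweight way to do that correlation, assuming both tools can export findings to CSV (the column names and the findings below are hypothetical and will differ per product), is to key each finding on a (host, CVE) pair and use set operations:

```python
import csv

def load_findings(path, host_col, cve_col):
    """Load (host, CVE) pairs from a scanner CSV export.
    Column names differ per product, so they are passed in."""
    with open(path, newline="") as f:
        return {(row[host_col].strip().lower(), row[cve_col].strip().upper())
                for row in csv.DictReader(f)}

def correlate(a, b):
    """Return (agreed, only_in_a, only_in_b) finding sets."""
    return a & b, a - b, b - a

# Demo with made-up findings instead of real exports:
qualys = {("10.0.0.5", "CVE-2021-44228"), ("10.0.0.6", "CVE-2023-1111")}
trend  = {("10.0.0.6", "CVE-2023-1111")}
agreed, qualys_only, trend_only = correlate(qualys, trend)
# 'agreed' is the high-confidence set; 'qualys_only' is the review
# queue for potential false positives.
```

The agreed set is what both scanners vouch for; anything reported by only one tool is where the manual verification effort should go.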


Both could be right but looking for different things: one scan might be authenticated while the other is not.

There is no single tool out there that will be 100% accurate; these tools are intended to highlight known weaknesses.

For example, I use Nessus at work and GVM at home, and they find different things. GVM will point out undisclosed paths, for instance; Nessus won't on a standard scan, though it will too if you run different options.

My guess is they just work differently. However, I've named two others above that you can try.

Nessus is a paid product, GVM has both community and paid options.


Rod, very kind of you to respond, thanks. Yes, I share the same sentiment when speaking of Qualys and Tenable. I just wanted to get a second opinion on this. For example:

One host is reporting Log4j with Qualys, but we know it isn't there. And the same host with Trend isn't reporting it, because it isn't there 🙂


It reminds me of years ago when Spyware was HUGE on systems everywhere!

You would run Spybot and then another tool like Ad-Aware, and they would always detect different things to remove. They just used different definition files in their search databases, so each one always flagged something a little different from the other!

I wonder if it’s similar here with your 2 products in question?


Scanners work differently.

Qualys might be seeing a version reference that is known to be vulnerable, while Trend may not care about version numbers and instead tests the file itself.

Given your mixed results, what makes you so sure it’s not vulnerable?

You could scan it with a third product and go with the majority: if two find it, it exists; if two don't, it doesn't. But you could go on like this forever.
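
That "best two out of three" idea could be sketched like this (the scanner names and CVE sets are made up for illustration):

```python
from collections import Counter

def consensus(findings_by_scanner, threshold=2):
    """A finding counts as 'confirmed' once at least `threshold`
    scanners report it."""
    counts = Counter()
    for findings in findings_by_scanner.values():
        counts.update(findings)
    return {finding for finding, n in counts.items() if n >= threshold}

# Hypothetical per-scanner results for one host:
scans = {
    "qualys": {"CVE-2021-44228", "CVE-2022-1234"},
    "trend":  {"CVE-2022-1234"},
    "nessus": {"CVE-2021-44228", "CVE-2022-1234", "CVE-2023-9999"},
}
confirmed = consensus(scans)  # CVE-2023-9999 drops out: only one vote
```

It is only a tie-breaking heuristic, of course: a single scanner with a genuinely better detection method can still be the one that's right.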

I can’t speak to Trend, but I have seen Qualys be overzealous at times. Perhaps that is true here, maybe you can tweak some settings to tone it down a little.


Log4j has been an interesting vulnerability. The biggest issue is that it is embedded in a ton of other applications, so there is a good chance it really is there, just not obviously. My advice would be to see what application and/or directory it is flagging, then dig into that application; it's annoying and at times non-obvious to dig into.

I've got to agree about different scanners reporting different vulnerabilities. Some scanners go off the CVE only; some apply secret sauce. The tool we use for GCP, Wiz, is a secret-sauce scanner. One thing it does is decide between critical and high based on network path analysis: frequently, if a vulnerability is publicly exposed it is a critical, while the same vulnerability with only internal ports open is flagged as a high.
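
That exposure-based rating could be modelled as a simple severity bump. This is my own sketch of the idea, not Wiz's actual logic:

```python
LEVELS = ["low", "medium", "high", "critical"]

def effective_severity(base, publicly_exposed):
    """Promote a finding one severity level when the vulnerable
    service is reachable from the internet."""
    i = LEVELS.index(base)
    if publicly_exposed and i < len(LEVELS) - 1:
        i += 1
    return LEVELS[i]
```

So the same CVE scores "critical" when internet-facing and stays "high" when it only listens internally.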

I don't recommend running two different scanners, as it can be frustrating to deal with. The only caveat is if you use an aggregation tool like Kenna. We feed multiple sources into it and get a better picture of our overall security posture.

Another thing scanners can do (I'd have to look into the two you are using) is flag misconfigurations, especially with an authenticated scan. This tends to be another secret-sauce area, with some misconfigurations either not considered a problem or given different severity ratings.

I’d sum it up as scanners can be annoying and as a rule of thumb pick one and stick to it. Depending on how regulated your organization is, you may end up with 3rd party audits using something different. I could practically write a dissertation after spending over 10 years at Big Blue dealing with internal and 3rd party audits.

@rod-it is correct: you can look and you can dig, but in rare cases the indicators of vulnerability can be somewhat ambiguous. Different systems also update their references on different time scales, and each vendor has to adopt a standard.

For instance, as an extreme simplification, one may flag you as vulnerable simply because you have product X at version Y installed.
Another may do it by detecting that a file on the system has a hash matching a known vulnerable version, regardless of how it got there.
And there are all levels in between; no system is 100% perfect.
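
The hash-matching approach in the second example could look like this; the hash table below is a placeholder I invented for illustration, not a real vulnerable-file list:

```python
import hashlib

# Placeholder table: real lists of vulnerable library hashes exist
# (e.g. for log4j-core JARs), but this entry is made up.
KNOWN_BAD_SHA256 = {
    "0" * 64: "log4j-core 2.14.1 (example entry)",
}

def sha256_of(path):
    """Stream a file through SHA-256 so large files aren't read
    into memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def check(path):
    """Return the vulnerable-version label if the file's hash is
    known bad, else None."""
    return KNOWN_BAD_SHA256.get(sha256_of(path))
```

This catches renamed or bundled copies that a version-string check would miss, but it also misses patched rebuilds whose hashes were never catalogued, which is exactly why two products can disagree.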

In those situations it is best to err on the side of caution and alert on the vulnerability, because a false positive costs less than a false negative.
You would rather be trying to rule out something you suspect than feeling confident in something you cannot be sure of.
You are welcome to use Action1 as a neutral arbitrator here; it is free for the first 100 endpoints, so you could run them side by side for as long as you like.
What I would do then is target some of the findings where the discrepancy stands out and do some research: for example, this CVE states that the vulnerability is in library/binary X. Directly investigate the report, not just the remediation: is the condition actually valid, or is it a cautious call?

If it pans out that the report is either a false positive or a false negative, we would love to hear whether our product registered either.
Such a thing would be a great opportunity for a vendor to learn, adapt, and improve.

https://www.action1.com/free - and if you find we are the path you would like to pursue, let's talk about how we can move you for free from any competitor for the remainder of your existing contract.