A few thoughts on the Orca v PANW testing dust-up

Mark Kraynak
5 min read · Oct 22, 2020


(originally posted in a very clunky way on LinkedIn)

I don’t have a dog in this fight. But the underlying issue is one that I think is holding back the security industry and I care about it quite a lot. Many who know me know that while I’m not religious, I do have a nearly religious belief in alignment of interest.

My view is that there is (sadly) no effective and unbiased way to get public product evaluations done. I lived this problem for years at Imperva. It matters *a lot* who writes the test, and in practice everyone who writes these tests is inherently biased. In my experience, this, and not a lack of confidence or an aversion to transparency, is why vendors insert prohibitive clauses into their licenses to limit unapproved benchmarks. The reality is that even the best products will fail a test designed to make them fail.

So what are the practical options for test design and execution?

The first option is the vendor, like Orca in this case, which has obvious bias problems. To be clear, I’m not throwing stones at Orca by saying this. We wrote plenty of tests at Imperva and even created a tool, the WAF Testing Framework, or WTF (which I still think is my best-ever naming effort, by far). It was intended in part to show customers exactly how much better our product was than others, and (hopefully the #irony isn’t lost here) PANW was one of the products we targeted. We did not do any public testing or try to publish any videos. The idea was to enable customers to do the testing on their own, in part because of the legal hassle that would inevitably come…just as it has for Orca. If the vendor writes the test, their interest is to show their product in the best light. And their tests will do that.
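As an aside, the core mechanic of this kind of customer-run testing is simple enough to sketch. Below is a minimal, hypothetical Python example (it is not WTF itself; the endpoint, the payload lists, and the assumption that a block surfaces as an HTTP 403 are all placeholders you’d replace for your own WAF): replay known-bad and known-good requests, then score detection against false positives.

```python
# Minimal sketch of a customer-run WAF accuracy test (hypothetical, not WTF).
# Replay attack payloads and benign-but-suspicious-looking payloads against a
# protected endpoint, then tally what got blocked in each bucket.
import requests

TARGET_URL = "https://app.example.com/search"  # placeholder: an endpoint behind the WAF

ATTACKS = ["' OR 1=1--", "<script>alert(1)</script>", "../../../etc/passwd"]
BENIGN = ["blue suede shoes", "O'Brien", "select a size and color"]

def is_blocked(payload: str) -> bool:
    """Assume a block shows up as an HTTP 403 or a dropped connection."""
    try:
        resp = requests.get(TARGET_URL, params={"q": payload}, timeout=5)
        return resp.status_code == 403
    except requests.exceptions.ConnectionError:
        return True

detected = sum(is_blocked(p) for p in ATTACKS)
false_alarms = sum(is_blocked(p) for p in BENIGN)

print(f"Detection: {detected}/{len(ATTACKS)} attacks blocked")
print(f"False positives: {false_alarms}/{len(BENIGN)} benign requests blocked")
```

Even this toy version makes the bias problem concrete: whoever picks the payload lists picks the winner.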

The obvious next alternative would be customers doing their own testing. The reality is that customers have to manage a pretty broad range of security products and rarely have the capacity to test any given product area in depth. The exception is that very large enterprises, and sometimes the larger tech and software companies, do have the capacity to do this. But there are two caveats. First, they tend to expend this effort only for a limited number of products they consider critical to their operations. Second, because they are big and uniquely technical in a lot of ways, their testing criteria look pretty different from what would make sense for most smaller, more mainstream organizations. In other words, it skews the testing toward a small set of big or highly sophisticated environments.

Not really germane to testing, but I’ve been meaning for a while to talk about the demise of Peerlyst. It was attempting to be an unbiased place to get feedback on security products (as well as a resource hub for training, etc.). But the business model, in the end, didn’t work. I think a big part of the problem is that they relied on sponsorship from vendors to support a free model for users. This is a classic misalignment of interest, and the saying “If you’re not the customer, that means you’re the product” holds true here, I think. Which is a great segue to the next option…

The third alternative is to go through an outside testing house. There are quite a few of these around, and they market themselves as independent. But the reality is that their customers are vendors: either vendors who commission the tests (and get to influence the testing criteria) or vendors who become customers when their products do well enough that they want to purchase distribution rights for marketing purposes. In both cases, the vendors end up having quite a bit of influence over the testing criteria and, in most cases, over the day-to-day work of performing the tests.

That last bit is not insignificant. Apart from influencing the criteria, the most important thing we could do to win a competitive review was to make sure a very skilled solutions architect was managing the review process. We found this to be so important that we generally wouldn’t participate in reviews we couldn’t directly support. For sure that’s a comment on the ease of use of the products. But the reality is that security products can be complex, and the nature of testing is to find all the corner cases, so familiarity with how a given product works can meaningfully affect results. In our case, our policy model was fundamentally different from that of most of our competitors (who had models similar to each other, at least from our point of view). This made it possible for us to be more accurate, but it also meant that any given reviewer was likely to be less familiar with our constructs and therefore less likely to configure the product the right way. Bottom line: as I recall, we never lost a review we were able to directly support.

Another tangential side note: syndicated research houses (e.g., Gartner, Forrester) don’t do testing, but they follow a similar influence curve. They have all kinds of rules about independence, and I believe they intend to be independent, but the reality is that those who pay for access get to make their case more than those who don’t, and over time I do believe it results in (indirectly) paid influence.

FWIW, probably the best of the three options is the last one: a testing house with multiple competing sponsors who get what is essentially a jump ball to set the testing criteria. What you get in this case isn’t necessarily an unbiased assessment of the products in question, but some mix of that overlaid with the execution and support capability (and intensity) of the vendors involved (i.e., the vendors that feel the urgency more will put more resources into influencing and supporting the testing).

I think a better option (though maybe not a realistic one) would be a user-supported testing house that didn’t take vendor money and held transparent review contests in different categories. But that would require users to pay, which is hard to get them to do.

--

Mark Kraynak

Mark Kraynak is a technology executive, company builder and erstwhile poet/engineer. He’s currently a founding partner at Acrew Capital.