The Times They Are A-Changin'

By Accuvant LABS R&D Team

We at Accuvant LABS have been overwhelmed by the positive response to our research paper “Browser Security Comparison – A Quantitative Approach”. Judging by the sheer volume of feedback, many of you have now had a chance to sit down with the paper and digest the material. We want to thank everyone who has taken the time to write to us; your words have not fallen on deaf ears.

Google approached Accuvant LABS to design a methodology for comparing browser security and to execute on that methodology by producing a paper. Before we proceeded, both parties explicitly agreed that Accuvant LABS would have complete independence over the methodology and the conclusions. The Accuvant LABS R&D team sat down and reviewed what had been published in the past. Once we had fully immersed ourselves in what passes for “state-of-the-art” in security comparisons, we were shocked at how little the methodology, testing, and delivery of such comparisons had advanced over the past ten years. Shock quickly gave way to a rush of enthusiasm: we realized we could create a fundamentally better approach to security comparisons. Our most important innovations fall into two categories.

First, the criteria used in past papers didn’t capture recent advancements in browser security. If a paper presupposes that an end-user will download and run any executable presented to them then, perhaps naively, we believe that supposition points to a need to educate the end-user rather than to anything about the security of the software. Perhaps we simply have more faith in the intelligence of end-users than the people who previously developed criteria. This fundamentally different viewpoint can be attributed to the fact that a team of researchers with an outstanding record of identifying vulnerabilities brings a different perspective. We decided to focus on the specific properties of a browser that make it difficult for an attacker to compromise it. We couldn’t think of a more relevant way to analyze browser security.
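To make “specific properties” concrete: whether a browser’s binaries opt into DEP and ASLR, for example, is recorded directly in their PE headers, so anyone can check it for themselves. The short Python sketch below is only an illustration of that idea, not one of our released tools; the flag values are the standard IMAGE_DLLCHARACTERISTICS_* constants from winnt.h.

```python
import struct
import sys

# Standard IMAGE_DLLCHARACTERISTICS_* flag values from winnt.h
DYNAMIC_BASE = 0x0040   # image may be rebased at load time (ASLR opt-in)
NX_COMPAT    = 0x0100   # image is marked compatible with DEP

def dll_characteristics(path):
    """Return the DllCharacteristics field from a PE file's optional header."""
    with open(path, "rb") as f:
        data = f.read()
    # e_lfanew (offset of the "PE\0\0" signature) lives at offset 0x3C of the DOS header
    pe_offset = struct.unpack_from("<I", data, 0x3C)[0]
    if data[pe_offset:pe_offset + 4] != b"PE\x00\x00":
        raise ValueError("%s is not a PE image" % path)
    # The optional header follows the 4-byte signature and the 20-byte COFF header;
    # DllCharacteristics sits at offset 70 in both PE32 and PE32+ optional headers.
    opt_header = pe_offset + 4 + 20
    return struct.unpack_from("<H", data, opt_header + 70)[0]

if __name__ == "__main__":
    flags = dll_characteristics(sys.argv[1])
    print("ASLR (DYNAMIC_BASE):", bool(flags & DYNAMIC_BASE))
    print("DEP  (NX_COMPAT):   ", bool(flags & NX_COMPAT))
```

Pointing a check like this at a browser’s main executable and its bundled DLLs gives a quick yes/no on two of the process-hardening properties the paper examines; our released tools go considerably further, but the principle is the same.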

Second, we wanted to create a delivery vehicle that would demand rational conversation. Knowing that any system or set of criteria we developed would have flaws, as every system does, we wanted to foster an environment for productive discussion. Because we released our criteria, methodology, and data, the only arguments a rational person can make are relevant ones. If anyone believes there are issues with our data, they can run our tools and verify the data for themselves. If our tools are incorrect, the source code is available, so anyone can cite the actual line of code that leads to flawed data. If someone feels our criteria are incomplete, they can develop additional criteria, build tools to generate data for them, and disclose the results. Our hope was that any dissension or disagreement with the paper would drive the topic forward for the industry as a whole, rather than leaving individuals to condemn it irrationally and, in doing so, contribute no value to the process.

The open nature of our research also had an ancillary benefit. With this level of transparency, we simply could not be biased: everything is available for scrutiny. Anyone who believes we are biased should point to specific erroneous data, a flaw in a tool, or a problem with our methodology. Much of the feedback we’ve received so far shows that people understand how this transparency prevents bias, and we couldn’t be happier.

Whenever you start down the path of research you inevitably find new paths to take; however, “real researchers release”, so we had to draw the line somewhere. Opera is a case in point. Some great points were made about how many people regard Opera as the most secure browser, so including it in a security comparison makes sense. We left it out of this paper because we wanted the research to be relevant to the largest number of end-users, but the next browser security comparison paper we release would include Opera.

We have sat down and created a list of additional criteria we could focus on for the next release of the paper, drawn both from our own brainstorming sessions and from the feedback we’ve received. These criteria include, but are in no way limited to, the following (a small illustration of one such check appears after the list):

  • Browser Centric Cross-Domain Attacks
  • DOM Security Implementation
  • Anti-XSS Protection
  • Anti-CSRF Protection
  • Frame Poisoning
  • Heap Implementation Security
  • SEHOP
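To give a flavour of how mechanical some of these checks can be, the sketch below (again an illustration, not one of our tools) reads the system-wide SEHOP setting on Windows, which is controlled by the DisableExceptionChainValidation registry value; a value of 0 means the mitigation is enabled.

```python
import winreg

# System-wide SEHOP is controlled by DisableExceptionChainValidation
# under the Session Manager\kernel key (0 = enabled, 1 = disabled).
KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\kernel"

def sehop_enabled_system_wide():
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            value, _ = winreg.QueryValueEx(key, "DisableExceptionChainValidation")
            return value == 0
    except FileNotFoundError:
        # Value absent: the platform default applies (enabled on the server
        # SKUs, disabled on the client SKUs of the Windows 7 generation).
        return None

if __name__ == "__main__":
    print("SEHOP enabled system-wide:", sehop_enabled_system_wide())
```

Per-process settings and the other criteria above would each need their own tooling, which is exactly the kind of contribution we would welcome from the community.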

One other interesting item is Safari. The majority of users browsing the web with Safari do so on OS X, and for our research paper we limited the analysis to Windows as the host platform. To do Safari justice, we debated two approaches. The first was to create a completely separate paper judging the security of browsers on OS X. The second was to find a way to translate Windows security properties into OS X security properties so the comparison would be fair; with that approach, however, the translation itself becomes a rational target of attack, and that would not be productive. Therefore, in the future we’d like to approach Safari with an OS X-centric methodology.

Again, we’d like to thank everyone for joining the conversation and invite the stragglers to join us. There are wonderful points being made by many incredibly intelligent people. There are also some more amusing points of discussion that have really made our week. Only together, as a community, can we advance the state of security.

Dictated, but not read.