PCI DSS 2.0 is on the Horizon
A new version of the PCI Data Security Standard (PCI-DSS) is targeted for release in October. A lot of companies are aware that the revised standard is coming out, and many of our clients have been asking us what the revisions will entail, and what they’ll mean to them.
I think Seana Pitt, American Express’ vice president of global merchant policies and data management, did a great job summing up PCI DSS in this quote: “PCI DSS is intended to protect cardholder data, wherever it resides. All payment card brand members must comply and ensure the compliance of their merchants and service providers who store, process, or transmit credit card account numbers. The program applies to all payment channels.”
The standard was originally developed on a two-year lifecycle, meaning each version remained in effect for two years. During those two years, the PCI Security Standards Council reached out to merchants, Qualified Security Assessors (QSAs), banks, processors and service providers worldwide for ongoing feedback and comments. The idea was to obtain regular input from key stakeholders in order to continuously strengthen the standards and keep them in line with the threat landscape. The Council recently announced that it is changing the development lifecycle for PCI DSS from two years to three years moving forward. The extended cycle is good news for merchants because it gives them more time to understand and implement the requirements.
While the new version of the standard, 2.0, is scheduled for release at the end of October, it “officially” goes into effect on Jan. 1, 2011. That means many companies will have only a few months (November and December) to address the changes. “Applying a risk-based approach to addressing vulnerabilities,” PCI DSS Requirement 6.2, could be the most impactful change for risk and compliance management. Under the current version, any identified system vulnerability can cause a failed assessment. With the new version of the standard, a risk-based approach can be used to determine whether the identified vulnerability has exploit potential in the environment.
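To make the risk-based idea concrete, here is a minimal, hypothetical Python sketch of how a team might triage findings under such an approach: rank each vulnerability by its severity score and by whether it is actually exploitable in the specific environment, rather than treating every finding as an automatic fail. The class, field names, and score thresholds below are illustrative assumptions, not anything defined by the standard itself.

```python
# Hypothetical risk-based vulnerability triage, in the spirit of the revised
# Requirement 6.2. Thresholds and field names are illustrative only.
from dataclasses import dataclass

@dataclass
class Vulnerability:
    name: str
    cvss_base: float        # base severity score, 0.0-10.0 (CVSS-style)
    exploitable_here: bool  # is there an exploit path in this environment?

def risk_rank(vuln: Vulnerability) -> str:
    """Assign an illustrative risk tier used to prioritize remediation."""
    if not vuln.exploitable_here:
        return "low"          # no exploit potential in this environment
    if vuln.cvss_base >= 7.0:
        return "high"
    if vuln.cvss_base >= 4.0:
        return "medium"
    return "low"

findings = [
    Vulnerability("Outdated TLS cipher on internal admin host", 5.3, False),
    Vulnerability("SQL injection in payment web app", 9.8, True),
]

# Work the highest-severity, environment-relevant findings first.
for v in sorted(findings, key=lambda v: v.cvss_base, reverse=True):
    print(f"{v.name}: {risk_rank(v)}")
```

The point of the sketch is the second check: a high base score alone no longer dictates the outcome; the environment-specific exploit potential does.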
Virtualization is also interesting. The current version of the standard allows only one primary function per server. With virtualization, it is possible to run many virtual servers within one physical box. The revised version of the standard clarifies that virtualization is allowed. A subsequent, independent document is also expected that will define validation requirements for implementing and assessing a virtualized environment.
The lion’s share of the modifications in v2.0 relates to clarification and guidance, to make sure everyone has a common interpretation. Because of this update, there will be more consistent interpretation of the standard for companies that need to comply. In addition, it’s going to level the playing field by forcing companies that don’t currently meet minimum requirements to enforce more stringent controls.
One final note: companies will still need to validate every year. I recommend that companies use the new standard even if they begin their validation assessment before Jan. 1, 2011; it may provide more flexibility and clarification.