
Perimeter | Commentary
7/26/2010 10:00 AM
Adrian Lane

What You Should Know About Tokenization

A week ago Visa released a set of best practices and recommendations for tokenization. Unfortunately, "best practices" leaves plenty of room for poor implementations.

A few months back I wrote a post about token deployment strategies for meeting PCI compliance. What I did not discuss were the differences among the tokenization technologies on the market.

Token solutions have become popular because they remove credit card data from most processing systems, thus eliminating those systems from inspection during PCI assessment. For example, if you have a dozen systems (order entry, customer management, payment gateways, general ledger, etc.) and you substitute a token for the Primary Account Number (PAN), then you remove a huge portion of your environment from the PCI audit. For a lot of merchants, that means a savings of 50 percent. No credit card numbers, no security threat, so no reason to poke around.
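To make the substitution concrete, here is a made-up order record before and after tokenization; the field names and values are illustrative assumptions, not any product's schema. Any downstream system that stores only the second form no longer holds cardholder data.

```python
# Illustrative only: the same order record before and after the PAN is
# replaced by a token. Field names and values are made up for the example.
order_before = {
    "order_id": 10031,
    "customer": "J. Smith",
    "pan": "4012888888881881",   # real card number -> this system is in PCI scope
    "amount": 59.90,
}

order_after = {
    "order_id": 10031,
    "customer": "J. Smith",
    "card_token": "f3a91c0e7d2b48a6b5c4e1d09a7f6b23",  # opaque stand-in value
    "amount": 59.90,
}
```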

But that assumes the token is secure. The critical part of a token strategy is to ensure the token does not betray the original credit card number. Tokens created via any mathematical function, be it encryption or hashing, always start from the account number, which means there is a chance they can be reversed back into the original number if not carefully implemented and deployed. And we know from experience that poorly implemented algorithms, bad entropy or pseudo-random number generators, and improper use of padding or salting result in tokens that are easy to crack. The only two methods Visa recommends are mathematical derivatives, and there is considerable leeway in its guidance. In other words, a solution that meets Visa's criteria can still provide poor security.
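To see why the mathematical relationship matters, here is a deliberately weak, purely illustrative sketch in Python: a hash-based token built from a static salt plus the PAN, attacked by enumerating plausible card numbers. The salt value, the scheme itself, and the well-known test card number are assumptions made up for this example; no vendor's actual product is shown.

```python
import hashlib
from itertools import product

# Deliberately weak, illustrative scheme: SHA-256 over a static
# per-merchant salt plus the PAN. Salt and PAN are made up.
STATIC_SALT = b"merchant-1234"

def make_token(pan: str) -> str:
    return hashlib.sha256(STATIC_SALT + pan.encode()).hexdigest()

def luhn_check_digit(body: str) -> str:
    """Compute the final Luhn check digit for a 15-digit card body."""
    total = 0
    for i, d in enumerate(int(c) for c in reversed(body)):
        if i % 2 == 0:          # every second digit from the right is doubled
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

# Suppose an attacker obtains a token and knows (or guesses) the salt and
# the issuer BIN (first six digits). Only the nine account digits are
# unknown, so the search space is about 10^9 -- far too small for a
# deterministic, statically salted hash to resist offline guessing.
stolen_token = make_token("4012888888881881")   # pretend this token leaked
bin_prefix = "401288"

# Conceptual loop; a real attacker would run this on GPU hashing rigs
# rather than in pure Python.
for combo in product("0123456789", repeat=9):
    body = bin_prefix + "".join(combo)
    candidate = body + luhn_check_digit(body)
    if make_token(candidate) == stolen_token:
        print("Recovered PAN:", candidate)
        break
```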

What does this mean to you? Several things:

1. Visa should have included in its recommendations the use of completely random numbers. This is far more secure because there is simply no way to reverse-engineer the credit card number from the token: there is no mathematical relationship between the two. The only way to get back to the original data is through the token server itself. I recommend you select this option if your vendor offers it (a minimal sketch of this approach appears after this list).

2. If you are looking at a solution that uses cryptographic functions, then you need to understand that you will be using some form of format-preserving encryption to create the token. Despite being based on accepted strong cryptographic algorithms, the format-preserving options are not specifically endorsed by Visa or the PCI Standards Council. Make sure your vendor has had its product professionally reviewed by a noted expert in the field of cryptanalysis. Also, verify that your auditor will remove systems using encryption from the scope of the audit -- otherwise you miss out on the cost savings.

3. If you are looking at a solution that uses a hashing variant, then first make sure the method is acceptable to Visa and PCI. Second, verify that the vendor's implementation has been reviewed by the cryptanalysis community. Finally, see if you can locate a product that uses a random salt value for each token: static salt values, or salting with a finite set of merchant IDs, offer poor security and leave the hashes vulnerable to dictionary attacks.
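As a concrete illustration of the random-number approach from item 1, here is a minimal Python sketch of a token vault. The class and method names, and the in-memory dictionary standing in for the token server's protected data store, are my own assumptions for illustration, not any vendor's actual product or API.

```python
import secrets

class TokenVault:
    """Minimal sketch of a token server: tokens are random values with no
    mathematical relationship to the PAN, so the only way back to the card
    number is a lookup inside the vault."""

    def __init__(self):
        self._token_to_pan = {}   # lives only inside the token server
        self._pan_to_token = {}   # lets repeat purchases reuse one token

    def tokenize(self, pan: str) -> str:
        if pan in self._pan_to_token:
            return self._pan_to_token[pan]
        # 16 random bytes from the OS CSPRNG; nothing about the PAN is
        # encoded in the token, so it cannot be reversed offline.
        token = secrets.token_hex(16)
        self._token_to_pan[token] = pan
        self._pan_to_token[pan] = token
        return token

    def detokenize(self, token: str) -> str:
        # Only systems authorized to query the token server can do this.
        return self._token_to_pan[token]

vault = TokenVault()
token = vault.tokenize("4012888888881881")   # well-known test card number
print(token)                    # safe to store in order entry, CRM, ledger
print(vault.detokenize(token))  # possible only through the token server
```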

Take the time to verify these options so you can get full value for your tokenization investment.

Adrian Lane is an analyst/CTO with Securosis LLC, an independent security consulting practice. Special to Dark Reading. Adrian Lane is a Security Strategist and brings over 25 years of industry experience to the Securosis team, much of it at the executive level. Adrian specializes in database security, data security, and secure software development. With experience at Ingres, Oracle, and ...
