Monday, August 31, 2020

Using Shuffle for Data Anonymization

Data anonymization is a popular topic today for both enterprise and open public data uses.  Two less commonly used techniques are data shuffling and swapping, which work well when retaining the statistical distribution of the data is important, for example, preserving the age distribution of employees.

Shuffling data reorders the values in one or more columns so that the statistical distribution remains the same but the shuffled values can no longer be used to re-identify the entities in each row.  Since shuffling doesn’t remove or alter uniquely identifying values, it may need to be combined with other techniques to properly anonymize a data set. 

Random Shuffle


Basic data shuffling randomly permutes a list of elements.  As an example, consider an open public data set of City of Atlanta employee salaries from 2015.  This data is in the public record and contains each employee's full name, age and salary. 

We'll start de-identification by replacing each employee’s name with a sequential ID.  

Original public data with name replaced:

Employee   Age   Annual Salary
0001       38    46,000
0002       52    26,700
0003       44    46,575
0004       42    42,867
0005       32    28,035
0006       44    67,800
0007       33    46,378
0008       28    39,328
0009       58    125,000
0010       45    60,466

To further de-identify the data set while preserving data utility, we might shuffle the age column. Age is a quasi-identifier: an attribute that does not uniquely identify an individual, but is sufficiently correlated that, alone or when combined with other data, it can sometimes be used to re-identify someone. 

Shuffling the age column removes the correlation between age and salary.  The statistical distributions of age and salary are unaffected by this approach.

With age shuffled:

Employee   Age   Annual Salary
0001       44    46,000
0002       28    26,700
0003       44    46,575
0004       38    42,867
0005       52    28,035
0006       45    67,800
0007       42    46,378
0008       32    39,328
0009       58    125,000
0010       33    60,466
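The shuffled table above can be produced with a few lines of Python.  Here is a minimal sketch using the values from the example; because the shuffle is random, the exact permutation will differ on each run:

import random

# (employee_id, age, salary) rows from the example above
rows = [
    ("0001", 38, 46000), ("0002", 52, 26700), ("0003", 44, 46575),
    ("0004", 42, 42867), ("0005", 32, 28035), ("0006", 44, 67800),
    ("0007", 33, 46378), ("0008", 28, 39328), ("0009", 58, 125000),
    ("0010", 45, 60466),
]

# Shuffle only the age column; employee IDs and salaries keep their positions
ages = [age for _, age, _ in rows]
random.shuffle(ages)
shuffled = [(emp, new_age, salary)
            for (emp, _, salary), new_age in zip(rows, ages)]

for emp, age, salary in shuffled:
    print(emp, age, salary)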

Shuffling Data


All permutations are equally likely when shuffling, including the original order.  In this instance, run using Python's shuffle, employee 0003's age retained its original value after shuffling.

Data shuffling was endorsed by the EU Data Protection Board, which wrote in 2014:




Many ETL tools provide shuffling as a de-identification technique, including Talend, Informatica, Ab Initio and even Oracle.  

History of the Fisher-Yates Shuffle


The first shuffle algorithm was described by Ronald Fisher and Frank Yates in their 1938 book Statistical Tables for Biological, Agricultural and Medical Research.  A software version of the algorithm, optimized by Richard Durstenfeld to run in linear O(n) time, was popularized in Donald Knuth’s The Art of Computer Programming, Volume 2.  

Below is a Python example of this algorithm:

import random

ages = [38, 52, 44, 42, 32, 44, 33, 28, 58, 45]
print("The original list is : " + str(ages))

# Fisher–Yates Shuffle
for i in range(len(ages) - 1, 0, -1):
    # Pick a random index from 0 to i (inclusive)
    j = random.randint(0, i)

    # Swap ages[i] with the element at the random index
    ages[i], ages[j] = ages[j], ages[i]

print("The shuffled list is : " + str(ages))

Python and Java, however, provide built-in shuffle methods.  Below is Python code to shuffle the employee ages in the example above: 

import random

ages = [38, 52, 44, 42, 32, 44, 33, 28, 58, 45]
random.shuffle(ages)
print(ages)   # e.g. [44, 28, 44, 38, 52, 45, 42, 32, 58, 33]

Well-Known Uses

The U.S. Census Bureau began using a variant of data swapping for the 1990 decennial census. The method was tested with extensive simulations, the results were considered a success, and essentially the same methodology was used for the actual data releases.  In the Census Bureau’s version, records were swapped between census blocks for individuals or households that had been matched on a predetermined set of k variables.  A similar approach was used for the 2000 U.S. census, and the Office for National Statistics in the UK applied data swapping as part of its disclosure control procedures for the 2001 U.K. Census.
In 2002, researchers at the U.S. National Institute of Statistical Sciences (NISS) developed WebSwap, a public web-based tool for performing data swapping in databases of categorical variables.

Other Shuffle and Swapping Techniques


Group shuffle is used when two or more columns need to be shuffled together.  Some data sets have columns with highly correlated data, so shuffling a single column in isolation would diminish the analytical value of the data.  

Consider our earlier employee example with the addition of a years-of-service column.  Since a person's age is related to their potential employment longevity, it would make sense to shuffle these together.
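Here is a minimal sketch of a group shuffle in Python.  The years-of-service values are made up for illustration; the point is that age and years of service move together as a pair, so their correlation is preserved while the link to salary is broken:

import random

# (employee_id, age, years_of_service, salary); years of service are
# illustrative values, not part of the original data set
rows = [
    ("0001", 38, 10, 46000),
    ("0002", 52, 24, 26700),
    ("0003", 44, 15, 46575),
    ("0004", 42, 12, 42867),
]

# Group shuffle: the (age, years_of_service) pairs are shuffled as a unit
groups = [(age, yos) for _, age, yos, _ in rows]
random.shuffle(groups)

shuffled = [(emp, age, yos, salary)
            for (emp, _, _, salary), (age, yos) in zip(rows, groups)]

for row in shuffled:
    print(row)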

Direct swapping is a non-random approach which handpicks records by finding pairs to swap.  For a record set with age and salary, you can swap the salaries of individuals of the same age. For example, the following rows:

Dept. Id   Age   Annual Salary
A03031     44    95,000
A68002     44    60,000

swapping salary where age = 44:

Dept. Id   Age   Annual Salary
A03031     44    60,000
A68002     44    95,000

Direct swapping works with categorical data (e.g., eye color: green, blue, brown or black) or with attributes that take a discrete set of values; in our example, age falls within the range 0..110.
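A minimal sketch of a direct swap in Python, pairing records with the same age and exchanging their salaries (the third, unmatched record is added here for illustration; a real implementation would typically swap only a random subset of the matched pairs):

from collections import defaultdict

# (dept_id, age, salary) rows, as in the example above; the third row is
# illustrative and has no matching age, so it is left unchanged
rows = [
    ["A03031", 44, 95000],
    ["A68002", 44, 60000],
    ["B11204", 37, 52000],
]

# Group row indexes by age, then swap salaries within each matched pair
by_age = defaultdict(list)
for idx, (_, age, _) in enumerate(rows):
    by_age[age].append(idx)

for idxs in by_age.values():
    for a, b in zip(idxs[0::2], idxs[1::2]):
        rows[a][2], rows[b][2] = rows[b][2], rows[a][2]

for row in rows:
    print(row)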
   
Rank swap is similar, swapping pairs which are not exact matches but are close in value. It works well on continuous variables. For example, systolic blood pressures within 5 mm Hg of each other could be chosen as pairs to swap:

original:

Systolic Blood Pressure (mm Hg)   Height   Weight
121                               5' 8"    155
122                               5' 11"   180

swapped:

Systolic Blood Pressure (mm Hg)   Height   Weight
121                               5' 11"   180
122                               5' 8"    155

In practice, both direct and rank swapping typically swap a subset of the records, not the entire set.
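As a sketch of a rank swap on the blood pressure example (the third record is made up for illustration), records are sorted by systolic blood pressure and adjacent records within a 5 mm Hg window swap their remaining attributes:

# (systolic_bp, height, weight) records; the third row is illustrative and
# has no neighbor within the window, so it is left unchanged
records = [
    [121, "5' 8\"", 155],
    [122, "5' 11\"", 180],
    [140, "6' 1\"", 200],
]

WINDOW = 5  # swap only pairs whose blood pressures differ by at most 5 mm Hg

# Sort by the ranking variable, then swap the other attributes of close pairs
records.sort(key=lambda r: r[0])
i = 0
while i < len(records) - 1:
    a, b = records[i], records[i + 1]
    if abs(a[0] - b[0]) <= WINDOW:
        a[1:], b[1:] = b[1:], a[1:]   # swap height and weight
        i += 2                        # each record is swapped at most once
    else:
        i += 1

for r in records:
    print(r)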
  

Summary


Shuffle can be applied as one of several valuable techniques for data de-identification.  As the EU Data Protection Board recommends, it may not be appropriate for stand-alone use, and it always requires careful analysis of a given data set to ensure that obvious identifiers and quasi-identifiers are protected.  Consider applying a k-anonymity test to the output data.

Tuesday, September 17, 2019

How Tokenized Data Protected Capital One When Encryption Failed


The Capital One data breach this past August impacted over 100M individuals but provided some important lessons and a few silver linings. With so many affected, the FBI’s quick arrest of the perpetrator was met with relief and questions of what had happened to the data. Although the data had been downloaded and posted privately on GitHub, it was never sold or disseminated. In fact, much of the highly sensitive data, in particular Social Security and account numbers, had been "tokenized" and therefore remained secure.
Roughly 99% of the Social Security Numbers were protected. Why, then, were some 140,000 SSNs exposed? As it turns out, those that were exposed had not been tokenized. Per Capital One’s policies, all fields designated as American SSNs were tokenized, and they were. But Capital One's policies did not require tokenizing the employee ID field, which in some cases contained an SSN. Also, the equally sensitive Canadian Social Insurance Numbers, which have a slightly different format than the US numbers, were not tokenized, and over 1M were exposed.
Data privacy professionals should note the importance of tokenizing data over encryption. The attacker in this instance gained unauthorized access to the encryption keys, and the encrypted data was then successfully decrypted and breached. Tokenized data, however, remained fully protected.

As illustrated by this incident, the risk of theft or misuse of an encryption key remains a major hurdle to fully securing data with encryption. The bank exposed the most sensitive PII (Personally Identifiable Information) tied to some of its US and Canadian customers when encryption failed to protect the data.

Tokenization

Tokenization methods vary, but the process is simple: a data field containing a sensitive, personal identifier is replaced with a different, synthetic value of the same format. Tokenization is usually performed consistently, whereby each unique value is always converted to the same synthetic value within a data set. Thus, a relational database using SSN as a key can still join on the synthetic SSN.
Payment vendor Square reports that "payment experts are seeing more and more organizations moving from encryption to tokenization as a more cost-effective (and secure) way to protect and safeguard sensitive information."
The most frequently stolen Social Security Number of all time is 078-05-1120. The story behind it dates back to 1938, when Douglas Patterson, VP of the E.H. Ferree Company, used his secretary’s real SSN on thousands of example Social Security cards. The purpose of the sample card with a real number was to demonstrate how well it fit into a new line of wallets. Although the sample card was labeled "specimen" and printed in red instead of blue, the SSN was eventually used by over 40,000 people, some of whom thought that the card in their wallet was their own.
SSN 078-05-1120 is now a retired number, so let’s use it as an example. The tokenization process would transform it into a synthetic number of exactly the same format, but one that no longer represents the original value: 
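As a rough sketch of the idea (this is not Capital One's actual algorithm, and the generator below is purely illustrative), a format-preserving token keeps the XXX-XX-XXXX layout but replaces each digit with a random one:

import secrets

def tokenize_ssn(ssn):
    # Replace each digit with a random digit, preserving the XXX-XX-XXXX format
    return "".join(secrets.choice("0123456789") if ch.isdigit() else ch
                   for ch in ssn)

print(tokenize_ssn("078-05-1120"))   # a random, same-format value on each run

A production tokenizer would also avoid generating invalid SSN patterns, handle collisions, and apply the mapping consistently, which is where a token vault comes in (see below).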

Tokenization can have another benefit. When the format is preserved and the values are constrained (e.g., digits), the output can be indistinguishable from a real value. In the case of Capital One, it’s unlikely the hacker knew that the SSN values were synthetic and not real values. According to the FBI, she wrote:
Unknown to the hacker, the SSNs were tokenized, so they were synthetic values unrelated to real Social Security Numbers.  Had Capital One also tokenized the Canadian Social Insurance Numbers and employee ID fields, it's likely none of these numbers would have been breached.

Random Tokenization

Capital One appears to have used a tokenization technique known as Format Preserving Encryption (FPE), which itself uses encryption. Had the keys used by FPE also been stolen, the tokenized data could have been decrypted.
A better approach is random tokenization, whereby, as the term implies, the tokens are selected randomly. Random tokenization uses a separate key-value database or token vault to consistently tokenize the input data. Unlike encryption, random tokenization does not rely on a single set of keys and the tokens are not vulnerable to mathematical or brute force attacks.
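Here is a minimal sketch of vault-based random tokenization; an in-memory dict stands in for the token vault, whereas a real vault would be a secured, persistent key-value store. Each original value receives a randomly generated, same-format token the first time it is seen, and the same token every time after that:

import secrets

class TokenVault:
    """Consistent random tokenization backed by a simple key-value mapping."""

    def __init__(self):
        self._vault = {}       # original value -> token
        self._issued = set()   # tokens already handed out, to avoid collisions

    def tokenize(self, value):
        if value not in self._vault:
            # Generate random same-format tokens until an unused one is found
            while True:
                token = "".join(secrets.choice("0123456789") if c.isdigit() else c
                                for c in value)
                if token not in self._issued:
                    break
            self._issued.add(token)
            self._vault[value] = token
        return self._vault[value]

vault = TokenVault()
print(vault.tokenize("078-05-1120"))                                   # random token
print(vault.tokenize("078-05-1120") == vault.tokenize("078-05-1120"))  # True: consistent

Because the original-to-token mapping exists only inside the vault, recovering an original value requires access to the vault itself rather than a mathematical key.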
As Jonathan Deveaux wrote in Payments Journal, tokenization "can address the failings of encryption" and "if a hacker was successful and gained access to the tokenized data, it would still be protected as the information would have no exploitable value."

Summary

Tokenization is recognized as a highly effective data protection technique, and the Capital One incident clearly illustrates its value and effectiveness.

Capital One had policies for tokenizing SSN data. Had Capital One simply tokenized more fields containing PII, neither the SSNs nor the Canadian Social Insurance Numbers would have been breached, and its financial exposure, estimated at $100-$150M, would likely have been limited. In contrast, encryption as a data protection policy remains highly vulnerable to theft or misuse of the necessary encryption keys. 

References

United States of America v. Paige A. Thompson, a/k/a “erratic”, U.S. District Court for the Western District of Washington, July 29, 2019

Sunday, January 7, 2018



A Mind Map illustrating the core concepts of Apache Cassandra. 

Monday, February 27, 2017

Understanding the Cassandra Partitioner


Partitioning Data with Hashing


Cassandra uses a Partitioner to distribute data across all the nodes in a Cassandra cluster. When a row of data is written or read, the partitioner calculates the hash value of the partition key. This hash value is called a Token and is mapped to a node which owns that token value. Each node in a cluster is configured to own a primary token range that is unique within the cluster. Therefore, once the hash or token value has been calculated, we can determine which node the data belongs to.

In Cassandra, Replication Factor (RF) is typically greater than one, so replicas of the data are stored on multiple nodes. For the partitioner, this simply means finding the token owner and then distributing replicas among the adjacent token range owners.
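Conceptually, the token-to-node mapping can be sketched in a few lines of Python. This follows the "starting range" framing used later in this post, with made-up tokens and node indexes; it is not Cassandra's internal code:

from bisect import bisect_right

# Illustrative starting tokens for a four-node ring, sorted ascending
ring = [-6_000_000_000_000_000_000, -2_000_000_000_000_000_000,
        2_000_000_000_000_000_000, 6_000_000_000_000_000_000]
RF = 3  # replication factor

def owner_index(token):
    # The owner is the node with the largest starting token <= the token;
    # tokens below the lowest start wrap around to the last node on the ring
    return (bisect_right(ring, token) - 1) % len(ring)

def replica_indexes(token, rf=RF):
    # Primary owner plus the next rf - 1 adjacent nodes on the ring
    first = owner_index(token)
    return [(first + k) % len(ring) for k in range(rf)]

print(replica_indexes(1_234_567_890))   # [1, 2, 3]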

A well-partitioned result distributes data evenly and randomly across the nodes in a cluster. The application developer influences evenness through the definition of the partition key.  Ensuring randomness requires a good hashing algorithm.

Hashing


Cassandra 1.0 used the classic 128-bit MD5 hashing algorithm for partitioning. MD5 was designed as a cryptographic hash function: its hash values must be random and evenly distributed, and in addition it must be hard or impossible to recover the original value from a hash value.

Beginning with Cassandra 1.2, Murmur3, a faster, non-cryptographic hash function, replaced MD5 as the default partition hash. The name was coined from the machine language operations multiply (MU) and rotate (R).

Murmur3Partitioner is now the default and should be used for all new clusters. It remains an option to configure the original RandomPartitioner using MD5 for compatibility with older clusters.

The published Murmur3 hash algorithm provides three versions optimized for different platforms. Murmur3Partitioner in Cassandra uses the x64 128-bit version, but it truncates the result and uses only the upper 64 bits of the hash value. The token range for Murmur3 in Cassandra is therefore -2^63 to +2^63 - 1.

The Murmur3Partitioner.java source code in Cassandra 3.x creates a 128-bit Murmur3 hash with the x64_128 algorithm.  The first 64 bits, in hash[0], are returned as the token value, and the remaining 64 bits, in hash[1], are ignored.

    private LongToken getToken(ByteBuffer key, long[] hash)
    {
        if (key.remaining() == 0)
            return MINIMUM;

        return new LongToken(normalize(hash[0]));
    }

    private long[] getHash(ByteBuffer key)
    {
        long[] hash = new long[2];
        MurmurHash.hash3_x64_128(key, key.position(), 
            key.remaining(), 0, hash);
        return hash;

    }


Hashing Examples

(Interactive hash calculator: enter a string to compute its 128-bit Murmur3 hash, along with the high and low 64 bits as signed integers.)
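For a reproducible version of the same calculation, here is a sketch using the third-party mmh3 package (installed with pip install mmh3).  Note that Cassandra's Java port of Murmur3 can differ from other implementations for inputs containing non-ASCII bytes, so treat the output as illustrative:

import mmh3  # third-party package: pip install mmh3

key = "Aries"

# Full 128-bit Murmur3 hash (the x64 variant with seed 0 is the default)
print("128 bits: %032x" % mmh3.hash128(key, seed=0))

# The same hash as two signed 64-bit integers; Cassandra's Murmur3Partitioner
# keeps only the first of the two values (hash[0]) as the token
first64, second64 = mmh3.hash64(key, seed=0)
print("First 64 bits as signed integer: ", first64)
print("Second 64 bits as signed integer:", second64)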

Choosing Token Ranges


Token ranges are configured in cassandra.yaml by one of the following:
  • Manually by assigning each node a starting range
  • Automatically by enabling vnodes
VNodes are administratively easier and are recommended for most new environments. The default number of vnodes is 128, which creates 128 token ranges per node. For DSE clusters using Search, a lower number of vnodes, such as 16 or 32, is recommended. The primary advantages of vnodes are:


  • When adding or removing nodes from a cluster, manual rebalancing is not required. 
  • Faster recovery from node failures or removal.  With vnodes, rebuilding can stream data from all online nodes, whereas with manual tokens it reads from at most four nodes adjacent to the node being replaced.  Especially with larger clusters, this can be an important factor providing operational agility.
You must not mix vnodes and manual tokens within a single data center.

Your choice of a partitioner and token range scheme determines where the data resides in a cluster. Changing either of these on a production cluster is operationally difficult and may require migrating all of your data.



Calculating Manual Token Ranges

When configuring token ranges manually, it helps to use a token range calculator. 

Given the number of nodes, the calculator computes the starting token offsets, beginning with zero, for the Murmur3Partitioner.


Token Calculator

(Interactive calculator: enter the number of nodes to compute the initial tokens.)
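A minimal sketch of the same calculation in Python; it uses exact integer arithmetic, so the last digits may differ slightly from the rounded values shown in the example that follows:

def initial_tokens(num_nodes):
    # Evenly spaced Murmur3 starting tokens, beginning with zero; values past
    # 2^63 - 1 wrap around into the negative half of the signed 64-bit range
    step = 2**64 // num_nodes
    tokens = []
    for i in range(num_nodes):
        t = step * i
        if t > 2**63 - 1:
            t -= 2**64
        tokens.append(t)
    return tokens

for node, token in enumerate(initial_tokens(6), start=1):
    print(node, token)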



Murmur3 Partitioning Example



As a practical example with real data, we can use the Murmur3 partitioner with the twelve zodiac signs to hash their values and observe how they would be partitioned if used as keys across a Cassandra cluster. I’ve chosen six nodes because it is a fairly common cluster size and an even divisor of the number of keys. A perfectly even distribution would provide two key values per node.

Step One: Calculate token ranges using the calculator above.  Using n=6 we have:


Node   Starting Range
1      0
2      3074457345618258602
3      6148914691236517204
4      9223372036854775806
5      -3074457345618258000
6      -6148914691236518000

Step Two: Calculate the hash value for each of the twelve zodiac sign names and match the hash with a node's token range:


Key Value    Murmur3 Hash         Node
Aries        6446536566984288488  3
Taurus       4155751160254564535  2
Gemini       1317029125904582964  1
Cancer      -8016596991533194765  6
Leo         -8583032252751962986  3
Virgo       -8041781948673145583  6
Libra       -2142727802591540075  4
Scorpio     -5744609807935173055  5
Sagittarius  -816785684867175026  1
Capricorn   -6957124044486481194  6
Aquarius    -3903387275638502447  5
Pisces       7634852637572685346  3

Result:  The distribution is fairly even for a random partitioner; two nodes have three values, two nodes have two values, and two nodes have one value. 



Verify Hashing in Cassandra


We can now verify that the Murmur3 hash values calculated above match what is seen in Cassandra.


CREATE TABLE test.zodiac (
    sign text,
    body text,
    PRIMARY KEY (sign)
);

cqlsh> insert into zodiac (sign, body) values ('Aries', 'Mars');
cqlsh> insert into zodiac (sign, body) values ('Taurus', 'Earth');
cqlsh> insert into zodiac (sign, body) values ('Gemini', 'Mercury');
cqlsh> insert into zodiac (sign, body) values ('Cancer', 'Moon');
cqlsh> insert into zodiac (sign, body) values ('Leo', 'Sun');
cqlsh> insert into zodiac (sign, body) values ('Virgo', 'Mercury');
cqlsh> insert into zodiac (sign, body) values ('Libra', 'Venus');
cqlsh> insert into zodiac (sign, body) values ('Scorpio', 'Pluto');
cqlsh> insert into zodiac (sign, body) values ('Sagittarius', 'Jupiter');
cqlsh> insert into zodiac (sign, body) values ('Capricorn', 'Saturn');
cqlsh> insert into zodiac (sign, body) values ('Aquarius', 'Uranus');
cqlsh> insert into zodiac (sign, body) values ('Pisces', 'Neptune');


cqlsh> select sign, token(sign) from zodiac;

 sign        | system.token(sign)
-------------+----------------------
         Leo | -8583032252751962986
       Virgo | -8041781948673145583
      Cancer | -8016596991533194765
   Capricorn | -6957124044486481194
     Scorpio | -5744609807935173055
    Aquarius | -3903387275638502447
       Libra | -2142727802591540075
 Sagittarius |  -816785684867175026
      Gemini |  1721847210301305769
      Taurus |  4155751160254564535
       Aries |  6446536566984288488
      Pisces |  7634852637572685346


(12 rows)