How are embeddings used for fully homomorphic encryption?


How exactly do you perform one-way encryption using embeddings from a deep neural network?

Fully homomorphic encryption (FHE) benefits society by ensuring full privacy. The Private Identity recognition algorithm uses FHE to enable encrypted match and search operations on an encrypted dataset without any requirement to store, transmit or use plaintext biometrics or biometric templates. The biometric data is irreversibly anonymized using a 1-way cryptographic hash algorithm and then discarded without the data ever leaving the local device.

My question is how exactly does this use embeddings to accomplish this? Where do embeddings come in?

1 Answer

Answered by Scott Streit (accepted):

An embedding is a vector of floating-point numbers taken from the penultimate (N-1) layer of a deep neural network (DNN) with a softmax output. Initially, the community used DNNs to obtain a resulting class from the softmax layer, but the values at the layer just before the softmax turned out to have interesting properties of their own.
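As a concrete illustration, here is a minimal sketch in PyTorch of reading out that penultimate layer. The architecture, layer sizes, and input shape are all hypothetical, not from the Private Identity system; the point is only where the embedding is taken from.

```python
import torch
import torch.nn as nn

class FaceNetLike(nn.Module):
    """Toy network: a backbone ending in an embedding layer, plus a softmax head."""

    def __init__(self, embedding_dim=128, num_classes=1000):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 112 * 112, 512),
            nn.ReLU(),
            nn.Linear(512, embedding_dim),  # penultimate (N-1) layer
        )
        self.classifier = nn.Linear(embedding_dim, num_classes)  # softmax head

    def forward(self, x):
        # Full forward pass: what the network was originally trained for.
        return self.classifier(self.backbone(x))

    def embed(self, x):
        # Stop one layer early and return the embedding instead of class scores.
        return self.backbone(x)

model = FaceNetLike()
image = torch.randn(1, 3, 112, 112)  # stand-in for a face crop
embedding = model.embed(image)       # shape: (1, 128)
```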

These values have useful properties. They can act as a one-way transformation: recovering the original input from the embedding is not practical. At the same time, they preserve similarity between inputs: under a geometric distance (cosine, Euclidean), embeddings of similar inputs lie close together. This means two pictures of my face will be closer (geometrically) than pictures of two different people. This property allows operations on the resulting encryption.
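The distance property can be demonstrated with a short sketch. The vectors below are synthetic stand-ins for embeddings produced by a network like the one above, so the exact numbers are illustrative only.

```python
import numpy as np

def cosine_distance(a, b):
    # 0 means identical direction; values near 1 mean unrelated vectors.
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)
me_photo_1 = rng.standard_normal(128)
me_photo_2 = me_photo_1 + 0.05 * rng.standard_normal(128)  # same face, slight variation
someone_else = rng.standard_normal(128)

print(cosine_distance(me_photo_1, me_photo_2))    # small: same person
print(cosine_distance(me_photo_1, someone_else))  # large: different people
```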

One of the operations this allows is match. Using the distance properties in the encrypted space, we can match two inputs using only their embeddings. Since we only ever operate in the encrypted space, we have an implementation of FHE, and the embedding comes from the DNN.
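A minimal sketch of such a match operation follows: compare a probe embedding against a gallery of enrolled embeddings and accept the nearest one under a threshold. The function name, gallery structure, and threshold value are assumptions for illustration, not the Private Identity implementation.

```python
import numpy as np

THRESHOLD = 0.4  # illustrative; in practice tuned per model and dataset

def match(probe, gallery):
    """Return the id of the closest enrolled embedding, or None if no match."""
    best_id, best_dist = None, float("inf")
    for person_id, enrolled in gallery.items():
        dist = 1.0 - np.dot(probe, enrolled) / (
            np.linalg.norm(probe) * np.linalg.norm(enrolled))
        if dist < best_dist:
            best_id, best_dist = person_id, dist
    return best_id if best_dist < THRESHOLD else None
```

Note that only embeddings are ever stored or compared here; the original biometric never needs to be retained.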

Subsequently, we have found that a second DNN can perform the classification, taking only embeddings as input. We now have both privacy and performance.
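A minimal sketch of that second network, assuming the 128-dimensional embeddings from the earlier example: a small classifier trained on (embedding, identity) pairs, so it never sees raw biometrics. The layer sizes and enrollment count are assumptions.

```python
import torch
import torch.nn as nn

NUM_IDENTITIES = 100  # assumed number of enrolled identities

embedding_classifier = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, NUM_IDENTITIES),  # scores over enrolled identities
)

embedding = torch.randn(1, 128)           # stand-in for a stored embedding
logits = embedding_classifier(embedding)  # classify without the original image
```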