I want to train a simple circuit in TFQ using a Sequential model as follows:
import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

# circuit and readout_op built elsewhere with cirq
model = tf.keras.Sequential()
model.add(tf.keras.layers.Input(shape=(), dtype=tf.dtypes.string))
model.add(
    tfq.layers.PQC(
        model_circuit=circuit,
        operators=readout_op))
But instead of performing a readout op, I'd like the model to output the state vector so I can do some post-processing on it before I feed it into my loss function.
In principle, tfq.layers.State looks appropriate for this task, but it is not clear to me from the examples how I would use the State layer in a model context, as opposed to just using it to generate the state vector as shown in the docs:
alpha = sympy.Symbol('alpha')
q0, q1 = cirq.GridQubit.rect(1, 2)
parametrized_bell_circuit = cirq.Circuit(  # e.g., a parametrized Bell pair
    cirq.H(q0)**alpha, cirq.CNOT(q0, q1))
state_layer = tfq.layers.State()
alphas = tf.reshape(tf.range(0, 1.1, delta=0.5), (3, 1))  # FIXME: #805
state_layer(parametrized_bell_circuit,
            symbol_names=[alpha], symbol_values=alphas)
So my questions:
- can I force the PQC layer to output the state vector instead of performing a readout operation?
- can I use the State layer as a parameterized layer in a Sequential model (or train it in any other way)?
- or is there some other way to make my model output a state vector?
The PQC layer will create and manage tf.Variables for you. From there, it will send your circuits through a tfq.layers.Expectation layer. Unfortunately, there is no way to produce a full state vector from this layer.
Yes, you can incorporate the state vector of input circuits into your model with the tfq.layers.State layer (https://www.tensorflow.org/quantum/api_docs/python/tfq/layers/State). Note that the produced state vector will NOT be differentiable. When creating TFQ, we wanted to encourage users doing any complex modelling to make use of functionality that has a 1:1 translation between a real chip and simulation (i.e. it is very easy to deploy tfq.layers.Expectation logic onto a true chip, since we aren't breaking any rules, but with tfq.layers.State we are cheating and pulling out the full state vector).
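If you still want the state vector inside a Keras model, one option is a small custom layer that manages its own weights and calls tfq.layers.State. A minimal sketch, assuming the same circuit as above (StateOutputLayer is an illustrative name, not TFQ API, and, per the caveat, gradients will not flow back through the simulated state):

import tensorflow as tf
import tensorflow_quantum as tfq

class StateOutputLayer(tf.keras.layers.Layer):
    def __init__(self, model_circuit, symbol_names, **kwargs):
        super().__init__(**kwargs)
        self._model_tensor = tfq.convert_to_tensor([model_circuit])
        self._symbol_names = [str(s) for s in symbol_names]
        self._state = tfq.layers.State()

    def build(self, input_shape):
        # One trainable value per circuit symbol.
        self.theta = self.add_weight(
            name='theta', shape=(1, len(self._symbol_names)),
            initializer='random_uniform', trainable=True)

    def call(self, inputs):
        # inputs: [batch] tensor of serialized input circuits.
        batch = tf.shape(inputs)[0]
        full = tfq.append_circuit(inputs, tf.tile(self._model_tensor, [batch]))
        # State returns a tf.RaggedTensor of complex amplitudes;
        # to_tensor() is safe when all circuits use the same qubits.
        return self._state(
            full,
            symbol_names=self._symbol_names,
            symbol_values=tf.tile(self.theta, [batch, 1])).to_tensor()

You could then drop this in place of PQC in your Sequential model (e.g. model.add(StateOutputLayer(circuit, symbols)), where symbols is the list of symbols in your circuit) and post-process the amplitudes in your loss, but any gradient-based update of theta would have to come from somewhere other than the state simulation itself.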