INTRODUCTION

On March 9, 2020, Google AI announced TensorFlow Quantum (TFQ), an open-source library for rapid prototyping of quantum machine learning models. Several frameworks, such as PennyLane, existed earlier, but TFQ brings a toolbox to this field that was not available before; having read and worked through a couple of the other frameworks, I find TFQ the most compelling of them. Let's look at how we can design a quantum neural network using TensorFlow Quantum.

How can we do machine learning over parameterized quantum circuits?

To grasp this, consider the explanation given by Masoud Mohseni (tech lead of TensorFlow Quantum): "We need to note that when you print this kind of unit operations or random rotations in the space-time volume that you have, this is a kind of continuous parameterized rotation that mimics classical circuits like deep neural networks that map those inputs to outputs." This is the intuition behind the term "quantum neural network".

But how do we create these parameterized quantum circuits? The first step in developing hybrid quantum models is being able to express quantum operations. For this, TensorFlow Quantum relies on Cirq, an open-source framework for writing quantum circuits for near-term quantum computers. Cirq includes the fundamental structures needed to define quantum computations: qubits, gates, circuits, and measurement operators. The concept behind Cirq is to provide a simple programming model that abstracts the fundamental building blocks of quantum applications. If you want to learn more about quantum computation and Cirq, you can read my article here.

Can we combine Cirq and TensorFlow Quantum, and what are the challenges in doing so?

Technical Hurdle 1

- Quantum data cannot be imported; it must be prepared on the fly.
- Both the data and the model are layers in the quantum circuit.

Technical Hurdle 2

- The QPU needs the full quantum program for each run.
- A QPU run lasts only a few microseconds.
- CPU-QPU latency is relatively high.
- Batches of jobs are therefore relayed to the quantum computer.

The TensorFlow Quantum team came up with some remarkable architectural concepts to make this practical and overcome the hurdles. The architecture criteria are laid out below.

- Differentiability: must support differentiation of quantum circuits and hybrid backpropagation.
- Circuit batching: quantum data is loaded as quantum circuits, and training runs over many different circuits in parallel.
- Execution backend agnostic: switch from a simulator to a real device with only a few changes.
- Minimalism, a bridge between Cirq and TF: the user does not have to relearn how to interface with the quantum computer to solve problems with machine learning.

Step by step execution

TFQ pipeline for a hybrid discriminative model

Step 1: Prepare a quantum dataset: quantum data is loaded as a tensor in which each element is a quantum circuit written in Cirq. The tensor is executed by TensorFlow on the quantum computer (or a simulator) to generate the quantum dataset. Quantum datasets are prepared using unparameterized cirq.Circuit objects and are injected into the computational graph using tfq.convert_to_tensor, as the short sketch below illustrates.
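The following snippet is my own minimal illustration of Step 1, not code from the TFQ paper; it assumes only a working cirq and tensorflow-quantum installation.

import cirq
import tensorflow_quantum as tfq

qubit = cirq.GridQubit(0, 0)

# Unparameterized circuits play the role of individual quantum data points.
circuits = [
    cirq.Circuit(cirq.X(qubit)),  # prepares |1>
    cirq.Circuit(cirq.H(qubit)),  # prepares an equal superposition
]

# Serialize the circuits into a tf.string tensor so TensorFlow can route
# them through a compute graph like any other input.
circuit_tensor = tfq.convert_to_tensor(circuits)
print(circuit_tensor.shape, circuit_tensor.dtype)  # (2,) <dtype: 'string'>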
Step 2: Evaluate a quantum neural network model: the researcher prototypes a quantum neural network using Cirq that they will later embed inside a TensorFlow compute graph. Quantum models are constructed using cirq.Circuit objects containing SymPy symbols, and can be attached to quantum data sources using the tfq.AddCircuit layer.

Step 3: Sample or average: this step leverages methods for averaging over several runs involving steps (1) and (2). Sampling or averaging is performed by feeding quantum data and quantum models to the tfq.Sample or tfq.Expectation layers.

Step 4: Evaluate a classical neural network model: this step uses classical deep neural networks to distill correlations between the measurements extracted in the previous steps. Since TFQ is fully compatible with core TensorFlow, quantum models can be attached directly to classical tf.keras.layers.Layer objects such as tf.keras.layers.Dense.

Step 5: Evaluate the cost function: as in traditional machine learning, TFQ uses this step to evaluate a cost function. It could be based on how accurately the model performs a classification task if the quantum data is labelled, or on other criteria if the task is unsupervised. Wrapping the model built in stages (1) through (4) inside a tf.keras.Model gives the user access to all the losses in the tf.keras.losses module.

Step 6: Evaluate gradients and update parameters: after evaluating the cost function, the free parameters in the pipeline are updated in a direction expected to decrease the cost. To support gradient descent, TFQ exposes derivatives of quantum operations to the TensorFlow backpropagation machinery via the tfq.differentiators.Differentiator interface. This allows both the quantum and the classical model parameters to be optimized against quantum data via hybrid quantum-classical backpropagation.

Coding demo

#Importing dependencies
!pip install --upgrade cirq==0.7.0
!pip install --upgrade tensorflow==2.1.0
!pip install qutip
!pip install tensorflow-quantum

import cirq
import numpy as np
import qutip
import random
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

#Quantum dataset
def generate_dataset(qubit, theta_a, theta_b, num_samples):
    """Generate a dataset of points on `qubit` near the two given angles;
    labels for the two clusters use a one-hot encoding.
    """
    q_data = []
    bloch = {"a": [[], [], []], "b": [[], [], []]}
    labels = []
    blob_size = abs(theta_a - theta_b) / 5
    for _ in range(num_samples):
        coin = random.random()
        spread_x = np.random.uniform(-blob_size, blob_size)
        spread_y = np.random.uniform(-blob_size, blob_size)
        if coin < 0.5:
            label = [1, 0]
            angle = theta_a + spread_y
            source = "a"
        else:
            label = [0, 1]
            angle = theta_b + spread_y
            source = "b"
        labels.append(label)
        # Each data point is itself a small circuit preparing the state.
        q_data.append(cirq.Circuit(cirq.ry(-angle)(qubit),
                                   cirq.rx(-spread_x)(qubit)))
        bloch[source][0].append(np.cos(angle))
        bloch[source][1].append(np.sin(angle) * np.sin(spread_x))
        bloch[source][2].append(np.sin(angle) * np.cos(spread_x))
    return tfq.convert_to_tensor(q_data), np.array(labels), bloch

#Generate the dataset
qubit = cirq.GridQubit(0, 0)
theta_a = 1
theta_b = 4
num_samples = 200
q_data, labels, bloch_p = generate_dataset(qubit, theta_a, theta_b, num_samples)
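Note that qutip is installed and imported above but never used in the listing, and generate_dataset returns the Bloch-sphere coordinates of every sample in bloch_p; presumably the original demo visualized the two clusters. A minimal sketch of how that could be done with qutip (my addition, not part of the original code):

# Hypothetical visualization of the two data clusters on a Bloch sphere,
# using the (x, y, z) coordinate lists collected in `bloch_p`.
sphere = qutip.Bloch()
sphere.add_points(bloch_p["a"])  # samples drawn around theta_a
sphere.add_points(bloch_p["b"])  # samples drawn around theta_b
sphere.show()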
#Model
#We will use a parameterized rotation about the Y axis followed by a Z-axis
#measurement as the quantum portion of our model. For the classical portion,
#we will use a two-unit SoftMax which should learn to distinguish the
#measurement statistics of the two data sources.

# Build the quantum model layer
theta = sympy.Symbol('theta')
q_model = cirq.Circuit(cirq.ry(theta)(qubit))
q_data_input = tf.keras.Input(shape=(), dtype=tf.dtypes.string)
expectation = tfq.layers.PQC(q_model, cirq.Z(qubit))
expectation_output = expectation(q_data_input)

# Attach the classical SoftMax classifier
classifier = tf.keras.layers.Dense(2, activation=tf.keras.activations.softmax)
classifier_output = classifier(expectation_output)
model = tf.keras.Model(inputs=q_data_input, outputs=classifier_output)

# Standard compilation for classification
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.1),
              loss=tf.keras.losses.CategoricalCrossentropy())
tf.keras.utils.plot_model(model, show_shapes=True, dpi=70)

#Training
history = model.fit(x=q_data, y=labels, epochs=50, verbose=0)

test_data, _, _ = generate_dataset(qubit, theta_a, theta_b, 1)
p = model.predict(test_data)[0]
print(f"prob(a)={p[0]:.4f}, prob(b)={p[1]:.4f}")
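The demo stops at a single-sample prediction. A natural follow-up check (my extension, not in the original post) is to measure accuracy on a larger held-out batch:

# Evaluate classification accuracy on fresh labelled samples.
eval_data, eval_labels, _ = generate_dataset(qubit, theta_a, theta_b, 100)
probs = model.predict(eval_data)
accuracy = np.mean(np.argmax(probs, axis=1) == np.argmax(eval_labels, axis=1))
print(f"hold-out accuracy: {accuracy:.2%}")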
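Finally, recall the "execution backend agnostic" criterion above. As a closing sketch (my illustration, assuming tfq.layers.PQC keeps its documented backend and differentiator keywords), swapping the execution target or the gradient rule is a one-line change:

# By default tfq.layers.PQC uses TFQ's native simulator. Passing a Cirq
# simulator (or, in principle, a hardware-backed sampler) as `backend`,
# and an explicit gradient rule as `differentiator`, changes how the same
# model is executed and differentiated without touching its structure.
expectation_alt = tfq.layers.PQC(
    q_model, cirq.Z(qubit),
    backend=cirq.Simulator(),
    differentiator=tfq.differentiators.ParameterShift())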
Conclusion

So, we walked through the quantum neural network in easy steps and even implemented one with TensorFlow Quantum.

Thanking and References

Thanks to Masoud Mohseni and the whole TensorFlow Quantum team for building such a wonderful framework; it's a quantum leap in the history of machine learning. Please read the research paper to learn more thoroughly.

Paper - TensorFlow Quantum: A Software Framework for Quantum Machine Learning, https://arxiv.org/abs/2003.02989

Thank you everyone!