This paper is available on arXiv under a CC 4.0 license.
Authors:
(1) Xiaoyu Ai, School of Electrical Engineering & Telecommunications, University of New South Wales, Sydney, NSW 2052, Australia;
(2) Robert Malaney, School of Electrical Engineering & Telecommunications, University of New South Wales, Sydney, NSW 2052, Australia.
Designing a practical Continuous Variable (CV) Quantum Key Distribution (QKD) system requires an estimation of the quantum channel characteristics and the extraction of secure key bits based on a large number of distributed quantum signals. Meeting this requirement in short timescales is difficult. On standard processors, it can take several hours to reconcile the required number of quantum signals. This problem is exacerbated in the context of Low Earth Orbit (LEO) satellite CV-QKD, in which the satellite flyover time is constrained to be less than a few minutes. A potential solution to this problem is massive parallelisation of the classical reconciliation process, in which a large code block is subdivided into many shorter blocks for individual decoding. However, the penalty of this procedure on the important final secured key rate is non-trivial to determine and hitherto has not been formally analysed. Ideally, a determination of the optimal reduced block size, maximising the final key rate, would be forthcoming in such an analysis. In this work, we fill this important knowledge gap via detailed analyses and experimental verification of a CV-QKD sliced reconciliation protocol that uses large block-length low-density parity-check decoders. Our new solution results in a significant increase in the final key rate relative to non-optimised reconciliation. In addition, it allows for the acquisition of quantum secured messages between terrestrial stations and LEO satellites within a flyover timescale even using off-the-shelf processors. Our work points the way to optimised global quantum networks secured via fundamental physics.
Continuous Variable (CV) Quantum Key Distribution (QKD) has been intensively studied and significant breakthroughs have been achieved in both theory and experiment (see [1] for a review). Compared to Discrete Variable (DV) QKD [2]–[5], CV-QKD can be implemented with well-developed technologies (e.g., homodyne detectors) in commercial fibre-optic networks [6], [7] and in free-space optical communications [8], [9], giving it a potential advantage in practical deployments [10]–[14].
Considering the finite-key security of CV-QKD and DV-QKD, there are three critical parameters: N_o, the number of original quantum signals sent by the transmitter (Alice) that are collected by the receiver (Bob); N_e, the number of quantum signals from which the protocol parameters are estimated;[1] and ε, the probability that a QKD protocol fails to generate secret keys [15], [16]. To satisfy an upper limit on the failure probability of parameter estimation, Alice and Bob set N_e to a large value, which in turn implies a larger N_o.
Despite the advantages in deployment, CV-QKD systems tend to demand a larger N_o to reach the same ε relative to DV-QKD protocols. For example, to achieve a final key rate of 0.1 bits per pulse with ε = 10⁻⁹, a CV-QKD protocol studied in [17] required N_o ≈ 10⁹ signals. However, to achieve the same final key rate with ε = 10⁻¹⁴, the DV-QKD protocol in [18] required only N_o ≈ 10⁴ signals. This higher number of required signals in CV-QKD can render the classical post-processing (i.e. key reconciliation and privacy amplification[2]) slow - possibly failing to meet target timescales for reconciliation.
The end-users of a CV-QKD system expect the system to deliver two identical and secure keys within a limited time interval. For example, for satellite-based deployments, we would hope that the reconciliation is completed while the satellite maintains a line-of-sight connection with the ground station. For a CV-QKD-enabled satellite with orbital parameters similar to Micius [20], this would mean the reconciliation should be completed in less than a few minutes. For the protocol we use in this work (see later), and for ε = 10⁻⁹, this in turn would require the reconciliation data rate to be at least 3.6 × 10⁶ bits per second. For real-time reconciliation (say, on sub-second timescales), a two-orders-of-magnitude increase in the reconciliation rate would be required. Demands for smaller ε will exacerbate the issue. Ideally, the rate of reconciliation should always be faster than the rate of quantum signalling.
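As a rough sanity check of that figure, the back-of-envelope computation below reproduces its order of magnitude. Both inputs are assumptions chosen purely for illustration (a data volume of order 10⁹ bits at ε = 10⁻⁹, and a Micius-like pass of a few hundred seconds); they are not values taken from [17] or [20]:

```python
# Back-of-envelope check of the reconciliation-rate target quoted above.
# Both inputs are illustrative assumptions, not values from [17] or [20].
n_bits = 1e9             # assumed volume of data to reconcile at epsilon = 1e-9
flyover_seconds = 280    # assumed line-of-sight window for a Micius-like LEO pass

required_rate = n_bits / flyover_seconds
print(f"required reconciliation rate: {required_rate:.1e} bits/s")  # ~3.6e6 bits/s
```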
This raises the question of whether current CV-QKD reconciliation schemes are optimised for the highest possible key rates in bits per second. As we show here, this is not the case: further optimisation is possible for all current schemes.
To understand the issue better, we define reconciliation in the context of CV-QKD as a two-step scheme whose inputs are the non-identical N = 2N_o − 2N_e quadrature values[3] held by Alice and Bob (after parameter estimation), and whose output is an identical bit string held by both parties [21]–[23]. Assuming a reverse reconciliation scheme, Bob first converts his quadrature value for each signal to m bits. Alice, after also converting each of her encoded real numbers to m bits, then initiates a discrepancy-correction algorithm based on pre-defined error-correction codes to ensure her mN bits are identical to Bob's.
In this work, we will adopt Low-Density Parity-Check (LDPC) codes for the error correction.
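To make the bit-conversion step above concrete, the sketch below quantises a vector of real-valued quadratures into m-bit labels. The uniform slicing, the clipping interval, and the parameters m and alpha are illustrative choices, not the slicing actually defined by the protocol used in this work:

```python
import numpy as np

def slice_quadratures(x, m=4, alpha=3.0):
    """Map each real quadrature value to an m-bit label (MSB first).

    Uniform slicing over [-alpha*sigma, +alpha*sigma] with clipping at the
    edges; m, alpha, and the uniform bin layout are illustrative choices.
    """
    sigma = np.std(x)
    lo, hi = -alpha * sigma, alpha * sigma
    n_bins = 2 ** m
    # Bin index in [0, 2^m - 1] for every quadrature value.
    idx = np.clip(((x - lo) / (hi - lo) * n_bins).astype(int), 0, n_bins - 1)
    # Expand each index into its m binary digits.
    return (idx[:, None] >> np.arange(m - 1, -1, -1)) & 1

rng = np.random.default_rng(0)
y = rng.normal(size=8)        # stand-in for Bob's measured quadratures
print(slice_quadratures(y))   # one m-bit row per quadrature value
```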
However, as alluded to above, reconciling mN bits within a limited time frame can be challenging. State-of-the-art LDPC-based reconciliation schemes for CV-QKD systems involve parallelised computation on a Graphics Processing Unit (GPU) [13], [24] or on Field-Programmable Gate Arrays (FPGAs) [25], [26]. Reconciliation schemes implemented on FPGAs offer more programmable flexibility, but sometimes at the cost of reduced memory access relative to GPUs. For our purposes, both hardware architectures are useful - both offer massive parallelisation opportunities. These parallelisation solutions generally take the following two-step approach: 1) the mN bits are organised as m N-bit blocks to be reconciled, and each N-bit block is divided into multiple shorter blocks of size, say, N_R - usually just set to a block size that can be processed within some timescale; 2) the resulting N_R-bit blocks are then reconciled in parallel (via independent processors) using optimally-designed LDPC decoders. However, what is missing in this approach is a proper optimisation analysis of what the optimal value of N_R is. As we show below, simply reducing N_R at the cost of additional processing units is not an optimal solution. It transpires that in QKD the "penalty" cost of reducing the code rate (implicit in the use of small block lengths) significantly influences the bits-per-second final key rate.
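The two-step approach just described can be sketched as follows. The decoder below is a stand-in (a real system would run belief propagation against an LDPC parity-check matrix on a GPU or FPGA), and parallel_reconcile, n_r, and workers are hypothetical names and parameters introduced only for illustration:

```python
from concurrent.futures import ProcessPoolExecutor
import random

def decode_subblock(llrs):
    """Stand-in for an N_R-bit LDPC decoder. A real implementation would
    run belief propagation against the code's parity-check matrix; here
    we simply hard-threshold the log-likelihood ratios."""
    return [0 if llr >= 0 else 1 for llr in llrs]

def parallel_reconcile(llrs, n_r, workers=8):
    """Split an N-bit block into ceil(N / n_r) sub-blocks and decode them
    on independent processes, mirroring the parallelisation above."""
    subblocks = [llrs[i:i + n_r] for i in range(0, len(llrs), n_r)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        decoded = pool.map(decode_subblock, subblocks)
    return [bit for block in decoded for bit in block]

if __name__ == "__main__":
    llrs = [random.gauss(0, 1) for _ in range(1 << 16)]  # toy soft information
    print(len(parallel_reconcile(llrs, n_r=1 << 12)))    # 65536 decoded bits
```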
A more sophisticated analysis is required to determine the optimal reduced block length. Such an analysis is the key contribution of this work. Although we will adopt a specific CV-QKD protocol for our analysis, the key steps of our scheme apply to any CV-QKD protocol. Our reconciliation scheme delivers the highest reconciliation rate for a given processor speed - thus providing the optimal solution to CV-QKD reconciliation.
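To see the shape of the trade-off (though emphatically not the analysis developed in this paper), the toy model below pits an assumed finite-length efficiency penalty against an assumed throughput behaviour of parallel decoders, and scans N_R for the best bits-per-second key rate. Every functional form and constant here is a placeholder assumption:

```python
import math

def beta(n_r, beta_inf=0.98, c=4.0):
    """Reconciliation efficiency with an assumed O(1/sqrt(N_R)) finite-length penalty."""
    return beta_inf - c / math.sqrt(n_r)

def throughput_bps(n_r, units=64, t0=2e-7, gamma=1.2):
    """Bits/s from `units` parallel decoders, assuming per-block decode
    time grows mildly superlinearly with block length."""
    return units * n_r / (t0 * n_r ** gamma)

def final_key_bps(n_r):
    # Crude stand-in for a key-rate formula: the secret fraction shrinks
    # as the efficiency drops below a reference level (0.90 is arbitrary).
    return throughput_bps(n_r) * max(beta(n_r) - 0.90, 0.0)

best = max((2 ** k for k in range(12, 21)), key=final_key_bps)
print(f"toy-optimal N_R = {best}, key rate ≈ {final_key_bps(best):.2e} bits/s")
```

Even in this caricature the optimum sits at an intermediate N_R: blocks short enough to decode quickly, but long enough that the code-rate penalty does not erase the gain - precisely the tension our analysis addresses rigorously.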
[1] More precisely, in a CV-QKD protocol, Alice and Bob randomly select an N_e-signal subset from the N_o signals to estimate the parameters.
[2] In this work, we focus on the key reconciliation step because it is the more time-consuming part of post-processing; privacy amplification involves only bit-wise operations and can easily be implemented faster than reconciliation [19].
[3] N_o and N_e are multiplied by 2 since Alice and Bob utilise both quadratures from heterodyne detection - the detection process we assume in this work.