
Overcoming the limitation of key reconciliation


Too Long; Didn't Read

Continuous Variable (CV) Quantum Key Distribution (QKD) has been intensively studied and significant breakthroughs have been achieved.

This paper is available on arxiv under CC 4.0 license.

Authors:

(1) Xiaoyu Ai, School of Electrical Engineering & Telecommunications, University of New South Wales, Sydney, NSW 2052, Australia;

(2) Robert Malaney, School of Electrical Engineering & Telecommunications, University of New South Wales, Sydney, NSW 2052, Australia.

III. OVERCOMING THE LIMITATIONS OF KEY RECONCILIATION

A. GPU-based SR

The process of SR is to reconcile mN bits. One could naively use m LDPC matrices with NR = N for each matrix. However, due to practical hardware limitations, the process is better implemented by dividing the N quadrature values into Nd blocks of some smaller length NR, so that identical LDPC decoders can reconcile these blocks in parallel. This process resembles the idea of Single Program Multiple Data (see [39], [40] for more details). As illustrated in Fig. 2, we implement SR by creating Nd LDPC decoders loaded with the same LDPC matrix on Nd GPU threads and letting these decoders reconcile the Nd blocks in parallel. This reduces the SR timescale and assists in meeting time constraints, such as those posed in satellite-based scenarios. Section V-A will demonstrate in detail the advantage of such parallelisation.


Fig. 2: Diagram of GPU-based SR adopted in this work. N quadrature values are divided into multiple blocks of length NR. Each block is loaded onto one thread dedicated to performing SR for that block. The reconciled bits obtained from each thread are collected, ready for privacy amplification. The floor operator is shown as [·].
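As an illustration of the block-splitting step, the following Python sketch divides N quadrature values into Nd = ⌊N/NR⌋ blocks and reconciles them in parallel. This is a minimal sketch, not the authors' implementation: `decode_block` is a hypothetical stub standing in for a per-thread GPU LDPC decoder, and a CPU thread pool stands in for the Nd GPU threads.

```python
from concurrent.futures import ThreadPoolExecutor

def decode_block(args):
    """Placeholder for one LDPC decoder: in the real system this would
    run belief-propagation decoding against the block's syndrome on its
    own GPU thread. Here it simply emits hard decisions on the block."""
    block, syndrome = args
    return [1 if v > 0 else 0 for v in block]

def sliced_reconciliation(values, n_r, syndromes):
    """Split N quadrature values into N_d = floor(N / N_R) blocks of
    length N_R and reconcile them in parallel (SPMD style, cf. Fig. 2)."""
    n_d = len(values) // n_r  # the floor operator [.] in Fig. 2
    blocks = [values[i * n_r:(i + 1) * n_r] for i in range(n_d)]
    with ThreadPoolExecutor(max_workers=n_d) as pool:
        per_block = list(pool.map(decode_block, zip(blocks, syndromes)))
    # The concatenated output is ready for privacy amplification.
    return [bit for block in per_block for bit in block]
```

In the paper's setting each worker would be a GPU thread running the same LDPC matrix; the SPMD structure (one program, Nd data blocks) is what the sketch is meant to convey.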


B. The Penalty of Using Finite-Length LDPC Codes


In SR, Bob needs to transmit syndrome bits to Alice, computed from the selected LDPC matrix with code rate Rj for slice Sj, via classical communications. For a given channel condition, selecting the Rj closest to capacity is the common approach to minimising the number of bits disclosed to the eavesdropper while still allowing Alice to reconstruct Bob's quantised bits without error [36]. Specifically, for a given T, we can obtain the SNR, γ, as [42]



where Y is a vector of Bob's quadrature values of length NR, M(Y) is an mNR-bit string obtained by applying the quantisation function M(·) to each quadrature value in Y, and Π(M(Y)) is the entropy of M(Y). Increasing m to values that render the quantisation error negligible is always possible, but this would require the individual LDPC codes for every jth slice to be near perfect (capacity-achieving), otherwise the efficiency β will be low; m = 5 is found to be a good pragmatic compromise and is adopted here. Given five slices, a constant quantisation size of the real line across 2^5 bins centered on zero is chosen. This size, which depends on the adopted γ, optimises β (see [31] for further discussion).
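The m = 5 quantisation can be sketched as follows. This is an illustrative sketch only: `half_width`, which fixes the bin size, is a hypothetical value standing in for the γ-dependent interval size discussed in the text.

```python
import numpy as np

def quantise(y, m=5, half_width=4.0):
    """Map each quadrature value to an m-bit label using 2**m equal
    bins centered on zero over [-half_width, +half_width].

    half_width is an illustrative choice; in practice the bin size is
    tuned to the SNR gamma so as to optimise the efficiency beta."""
    n_bins = 2 ** m
    edges = np.linspace(-half_width, half_width, n_bins + 1)
    # Clip so values outside the range land in the outermost bins.
    idx = np.clip(np.digitize(y, edges) - 1, 0, n_bins - 1)
    return idx  # integer labels 0..2**m - 1; bit j of a label is slice S_j

labels = quantise(np.array([-3.9, -0.1, 0.1, 3.9]))
```

Bit j of each 5-bit label forms the slice Sj, with the least significant bit as the first slice (cf. footnote 7).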


The LDPC code rates, Rj, in Eq. 2 are the actual rates of the specific codes used for each slice (of length NR). Normally, in practice, NR is simply set to some value that allows target time-frames to be met, given that the decoding time is an increasing function of the block length [24]. We use Rj to obtain our experimental key rate in Eq. 32. A more nuanced choice of NR, one that optimises the secure key rate, is now analysed.



Function A is termed the “channel dispersion” since it represents the reduction of the code rate from the channel capacity due to a tolerated decoding error probability. It is the “price to pay” for using a code with finite block length, for a given γ.
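Under the standard normal approximation (Polyanskiy et al. [43]), the finite-length penalty behaves as R ≈ C − sqrt(A/NR)·Q⁻¹(ε), where A is the channel dispersion and ε the tolerated decoding error probability. The sketch below evaluates this behaviour for a real AWGN channel; it illustrates how the penalty shrinks with block length, and is not a reproduction of the paper's exact expression (whose equation image is not shown here).

```python
import math
from statistics import NormalDist

def awgn_capacity(gamma):
    """Shannon capacity (bits/use) of a real AWGN channel with SNR gamma."""
    return 0.5 * math.log2(1.0 + gamma)

def awgn_dispersion(gamma):
    """Channel dispersion A (bits^2/use) of the real AWGN channel,
    per the normal approximation of Polyanskiy et al."""
    return (gamma * (gamma + 2)) / (2 * (gamma + 1) ** 2) * math.log2(math.e) ** 2

def finite_length_rate(gamma, n_r, eps):
    """Highest rate achievable at block length n_r with decoding error
    probability eps; the sqrt term is the finite-length penalty, i.e.
    the 'price to pay' relative to capacity."""
    q_inv = NormalDist().inv_cdf(1.0 - eps)  # inverse Gaussian tail Q^{-1}(eps)
    return awgn_capacity(gamma) - math.sqrt(awgn_dispersion(gamma) / n_r) * q_inv
```

For a fixed γ and ε, the penalty decays as 1/sqrt(NR), which is why optimising NR trades decoding time against reconciliation efficiency.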


Fig. 3: Diagram of SR. The red dashed arrows show the soft decoding output feeds from Sj−1 to Sj. The red dashed rectangles are graphical examples of a slice.


C. Analysing the Computational Complexity of SR

With this all in place, it is now possible to solve for Dj as given by Eq. 1


Now we focus on the determination of Ej. When messages are propagated from the variable nodes to the check nodes, 2G multiplications and G additions are required [48]. When messages are propagated back to the variable nodes, 4G operations are required (2G multiplications and 2G additions) [48]. Therefore, Ej is obtained by [46], [48]


The decoding time of the whole reconciliation process, ∆t, is given by
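Putting the per-iteration counts together, the following minimal sketch tallies the operations of one BP iteration on a Tanner graph with G edges and derives a rough decoding-time estimate. The sustained throughput figure is a hypothetical parameter for illustration; the paper's own expressions for Ej and ∆t (not reproduced here) account for the full GPU timing model.

```python
def bp_ops_per_iteration(g_edges):
    """Operation count for one BP decoding iteration on a Tanner graph
    with G edges, following the counts quoted in the text:
      variable -> check: 2G multiplications + G additions
      check -> variable: 2G multiplications + 2G additions
    for a total of 7G operations per iteration."""
    var_to_check = 2 * g_edges + g_edges      # 3G operations
    check_to_var = 2 * g_edges + 2 * g_edges  # 4G operations
    return var_to_check + check_to_var

def decoding_time(g_edges, iterations, ops_per_second):
    """Rough wall-clock estimate of Delta t for one decoder, assuming a
    hypothetical sustained throughput (operations per second)."""
    return bp_ops_per_iteration(g_edges) * iterations / ops_per_second
```

Since G grows with the block length NR, this count makes explicit why decoding time is an increasing function of NR, feeding directly into the NR optimisation discussed above.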





[7] We assume the least significant bit is l^i_0.


[8] The rationale behind this is that the soft decoding output of Sj−1 provides a priori information on the reliability of each bit in Sj [30].


[9] This approximation is accurate if the code achieves more than 80% of the capacity [43].


[10] In a BP decoder, a decoding iteration is one pass through the decoding algorithm.