Paper Title

Soft-input, soft-output joint detection and GRAND

Authors

Hadi Sarieddeen, Muriel Médard, Ken R. Duffy

Abstract

Guessing random additive noise decoding (GRAND) is a maximum likelihood (ML) decoding method that identifies the noise effects corrupting code-words of arbitrary code-books. In a joint detection and decoding framework, this work demonstrates how GRAND can leverage crude soft information in received symbols and channel state information to generate, through guesswork, soft bit reliability outputs in log-likelihood ratios (LLRs). The LLRs are generated via successive computations of Euclidean-distance metrics corresponding to candidate noise-recovered words. Noting that the entropy of noise is much smaller than that of information bits, a small number of noise effect guesses generally suffices to hit a code-word, which allows generating LLRs for critical bits; LLR saturation is applied to the remaining bits. In an iterative (turbo) mode, the generated LLRs at a given soft-input, soft-output GRAND iteration serve as enhanced a priori information that adapts noise-sequence guess ordering in a subsequent iteration. Simulations demonstrate that a few turbo-GRAND iterations match the performance of ML-detection-based soft-GRAND in both AWGN and Rayleigh fading channels at a complexity cost that, on average, grows linearly (instead of exponentially) with the number of symbols.
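The core GRAND loop described in the abstract — guessing putative noise effects in decreasing order of likelihood and stopping at the first noise-recovered word that belongs to the code-book — can be sketched as follows. This is a minimal hard-decision illustration only, assuming a binary linear code with a parity-check matrix `H` and a binary symmetric channel, for which increasing Hamming weight is the ML guess order; the paper's soft-input, soft-output variant instead ranks guesses using soft information and computes Euclidean-distance metrics on the candidates to produce LLRs. All names here (`grand_decode`, `max_guesses`) are illustrative, not from the paper.

```python
import itertools
import numpy as np

def grand_decode(hard_bits, H, max_guesses=10**4):
    """Basic hard-decision GRAND sketch.

    Guess putative noise patterns in decreasing likelihood order
    (increasing Hamming weight, the ML order for a BSC), subtract
    each from the demodulated word, and return the first candidate
    that satisfies the parity checks H @ c = 0 (mod 2), together
    with the number of guesses made.
    """
    n = len(hard_bits)
    guesses = 0
    # Weight-0 guess first (the received word itself), then all
    # weight-1 flips, then weight-2, and so on.
    for w in range(n + 1):
        for positions in itertools.combinations(range(n), w):
            candidate = hard_bits.copy()
            candidate[list(positions)] ^= 1  # apply guessed noise effect
            guesses += 1
            if not ((H @ candidate) % 2).any():
                return candidate, guesses  # code-book member found
            if guesses >= max_guesses:
                return None, guesses  # abandon guessing
    return None, guesses
```

Because the entropy of the noise is small at operational SNRs, the loop typically terminates after very few guesses, which is what makes the abstract's per-codeword LLR computation affordable. For example, with the (7,4) Hamming code's parity-check matrix and a single-bit error, the decoder recovers the transmitted word on the second guess.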
