Paper Title
Multilinear Compressive Learning with Prior Knowledge
Paper Authors
Abstract
The recently proposed Multilinear Compressive Learning (MCL) framework combines Multilinear Compressive Sensing and Machine Learning into an end-to-end system that takes into account the multidimensional structure of the signals when designing the sensing and feature synthesis components. The key idea behind MCL is the assumption of the existence of a tensor subspace which can capture the essential features from the signal for the downstream learning task. Thus, the ability to find such a discriminative tensor subspace and to optimize the system to project the signals onto that data manifold plays an important role in Multilinear Compressive Learning. In this paper, we propose a novel solution to address both of the aforementioned requirements, i.e., how to find those tensor subspaces in which the signals of interest are highly separable, and how to optimize the sensing and feature synthesis components to transform the original signals onto the data manifold found in the first question. In our proposal, the discovery of a high-quality data manifold is conducted by training a nonlinear compressive learning system on the inference task. Its knowledge of the data manifold of interest is then progressively transferred to the MCL components via multi-stage supervised training, with the supervisory information encoding what the compressed measurements, the synthesized features, and the predictions should look like. The proposed knowledge transfer algorithm also comes with a semi-supervised adaptation that enables compressive learning models to utilize unlabeled data effectively. Extensive experiments demonstrate that the proposed knowledge transfer method can effectively train MCL models to compressively sense and synthesize better features for the learning task, yielding improved performance, especially as the complexity of the learning task increases.
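To illustrate the multi-stage transfer idea described above, the following is a minimal NumPy sketch, not the authors' actual algorithm: a hypothetical "teacher" model supplies target compressed measurements, synthesized features, and predictions, and each linear student component (`W_sense`, `W_feat`, `W_head` are names invented here) is fitted stage by stage, via least squares, to match the teacher's corresponding intermediate output.

```python
import numpy as np

# Illustrative sketch of stage-wise knowledge transfer (assumed linear
# student components and random stand-ins for the teacher's targets).

rng = np.random.default_rng(0)

# samples, input dim, measurement dim, feature dim, classes
n, d, m, f, c = 200, 64, 16, 32, 10
X = rng.standard_normal((n, d))

# Hypothetical teacher targets: compressed measurements, synthesized
# features, and soft predictions.
Z_t = rng.standard_normal((n, m))
F_t = rng.standard_normal((n, f))
Y_t = rng.standard_normal((n, c))

def fit_ls(A, B):
    """One supervised transfer stage: least-squares W with A @ W ~ B."""
    W, *_ = np.linalg.lstsq(A, B, rcond=None)
    return W

# Stage 1: train the sensing component to reproduce teacher measurements.
W_sense = fit_ls(X, Z_t)
# Stage 2: train feature synthesis on top of the frozen sensing output.
W_feat = fit_ls(X @ W_sense, F_t)
# Stage 3: train the task head to match the teacher's predictions.
W_head = fit_ls(X @ W_sense @ W_feat, Y_t)

pred = X @ W_sense @ W_feat @ W_head
print(pred.shape)  # (200, 10)
```

In the paper's setting each stage would use gradient-based training of nonlinear components; the closed-form least-squares fit here is only a stand-in to make the stage ordering concrete.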