Paper Title


Embedding Words in Non-Vector Space with Unsupervised Graph Learning

Paper Authors

Max Ryabinin, Sergei Popov, Liudmila Prokhorenkova, Elena Voita

Abstract


It has become a de-facto standard to represent words as elements of a vector space (word2vec, GloVe). While this approach is convenient, it is unnatural for language: words form a graph with a latent hierarchical structure, and this structure has to be revealed and encoded by word embeddings. We introduce GraphGlove: unsupervised graph word representations which are learned end-to-end. In our setting, each word is a node in a weighted graph and the distance between words is the shortest path distance between the corresponding nodes. We adopt a recent method learning a representation of data in the form of a differentiable weighted graph and use it to modify the GloVe training algorithm. We show that our graph-based representations substantially outperform vector-based methods on word similarity and analogy tasks. Our analysis reveals that the structure of the learned graphs is hierarchical and similar to that of WordNet, the geometry is highly non-trivial and contains subgraphs with different local topology.
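The core idea in the abstract is that the distance between two words is the shortest-path distance between their nodes in a learned weighted graph. As a minimal illustration (not the authors' implementation), the sketch below computes that distance with Dijkstra's algorithm over a toy graph; the words and edge weights are hypothetical, standing in for what GraphGlove would learn end-to-end.

```python
import heapq

# Toy weighted word graph (hypothetical words and weights, for illustration only).
# Edges are symmetric; a weight plays the role of a learned distance between
# adjacent words.
graph = {
    "cat":    {"animal": 1.0, "dog": 0.5},
    "dog":    {"animal": 1.0, "cat": 0.5},
    "animal": {"cat": 1.0, "dog": 1.0, "entity": 2.0},
    "entity": {"animal": 2.0},
}

def word_distance(graph, source, target):
    """Shortest-path distance between two words via Dijkstra's algorithm."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for neighbor, weight in graph[node].items():
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return float("inf")  # target unreachable from source

print(word_distance(graph, "cat", "entity"))  # 3.0 via cat -> animal -> entity
```

In the paper's setting this metric is made differentiable so the graph weights can be trained with a GloVe-style objective; the sketch only shows the distance computation itself.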
