Jörg Conradt

Principal Investigator


EECS, CST

KTH Royal Institute of Technology, Sweden

Lindstedtsvägen 5
114 28 Stockholm, Sweden



Analyzing the Capacity of Distributed Vector Representations to Encode Spatial Information


Conference paper


Florian Mirus, T. Stewart, J. Conradt
IEEE International Joint Conference on Neural Networks, 2020

Cite


APA
Mirus, F., Stewart, T., & Conradt, J. (2020). Analyzing the Capacity of Distributed Vector Representations to Encode Spatial Information. IEEE International Joint Conference on Neural Networks.


Chicago/Turabian
Mirus, Florian, T. Stewart, and J. Conradt. “Analyzing the Capacity of Distributed Vector Representations to Encode Spatial Information.” IEEE International Joint Conference on Neural Networks (2020).


MLA
Mirus, Florian, et al. “Analyzing the Capacity of Distributed Vector Representations to Encode Spatial Information.” IEEE International Joint Conference on Neural Networks, 2020.


BibTeX

@inproceedings{florian2020a,
  title = {Analyzing the Capacity of Distributed Vector Representations to Encode Spatial Information},
  year = {2020},
  booktitle = {IEEE International Joint Conference on Neural Networks},
  author = {Mirus, Florian and Stewart, T. and Conradt, J.}
}

Abstract

Vector Symbolic Architectures belong to a family of related cognitive modeling approaches that encode symbols and structures in high-dimensional vectors. Just as human subjects can process and store only a limited number of items or concepts in short-term memory, the amount of information that can be encoded in such vector representations is limited; this limit offers one way of modeling the numerical restrictions on cognition. In this paper, we analyze these limits on the information capacity of distributed representations. We focus our analysis on simple superposition and on more complex, structured representations involving convolutive powers to encode spatial information. In two experiments, we find upper bounds for the number of concepts that can effectively be stored in a single vector, depending only on the dimensionality of the underlying vector space.
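The capacity limit for simple superposition described in the abstract can be illustrated with a minimal sketch (not the paper's implementation): random high-dimensional vectors are approximately orthogonal, so a sum of a few of them stays more similar to each stored item than to any unstored one, but this breaks down as the number of stored items grows relative to the dimensionality. The clean-up criterion, parameter values, and function names below are illustrative assumptions.

```python
import numpy as np


def random_unit_vectors(n, d, rng):
    """Draw n random d-dimensional vectors and normalize each to unit length."""
    v = rng.standard_normal((n, d))
    return v / np.linalg.norm(v, axis=1, keepdims=True)


def stored_items_recoverable(d, n_stored, n_items=2000, seed=0):
    """Superpose the first n_stored vocabulary vectors into a single bundle and
    check whether every stored item is more similar (by dot product) to the
    bundle than any non-stored item -- a simple clean-up-memory criterion."""
    rng = np.random.default_rng(seed)
    vocab = random_unit_vectors(n_items, d, rng)
    bundle = vocab[:n_stored].sum(axis=0)
    sims = vocab @ bundle  # similarity of every vocabulary item to the bundle
    return sims[:n_stored].min() > sims[n_stored:].max()
```

For example, with a few stored items and a large dimensionality (say `d=2048`, `n_stored=10`) all stored items typically remain distinguishable, whereas cramming many items into a low-dimensional space (say `d=32`, `n_stored=100`) typically fails, which is the qualitative dependence on dimensionality the paper quantifies.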

