1 Introduction
2 Method
2.1 Random indexing
Expression | Description |
---|---|
\(a_{i_1, i_2, i_3, \ldots , i_\mathcal{N}}\), \(a_{\bar{i}}\) | Array elements |
\(s_{\alpha _1, \alpha _2, \alpha _3, \ldots , \alpha _\mathcal{N}}\), \(s_{\bar{\alpha }}\) | State array, accessed by encoder/decoder functions |
\(\mathcal{N}\) | Dimensionality of the array |
\(\mathcal{D}\) | Dimension index, \(1 \le \mathcal{D} \le \mathcal{N}\) |
\(N_\mathcal{D}\) | Number of index vectors in dimension \(\mathcal{D}\), \(i_\mathcal{D} \in [1,N_\mathcal{D}]\) |
\(L_\mathcal{D}\) | Length of index vectors in dimension \(\mathcal{D}\), \(\alpha _\mathcal{D} \in [1,L_\mathcal{D}]\) |
\(\chi _\mathcal{D}\) | Number of nonzero trits in index vectors of dimension \(\mathcal{D}\) |
\(S_e = \prod _\mathcal{D}\chi _\mathcal{D}\) | Number of states that encode one array element |
\(S_s \propto \prod _\mathcal{D} L_\mathcal{D}\) | Disk/memory space required to store the state array |
\(S_r \propto \sum _\mathcal{D} N_\mathcal{D}\chi _\mathcal{D}\) | Disk/memory space required to store the index vectors |
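The storage quantities in the table can be made concrete with a short sketch. The snippet below is illustrative only, with arbitrarily chosen parameter values rather than those used in the paper: it generates random ternary index vectors with \(\chi_\mathcal{D}\) nonzero trits per dimension and evaluates \(S_e\), \(S_s\) and \(S_r\) for a two-dimensional (\(\mathcal{N}=2\)) configuration.

```python
import random

def make_index_vector(length, chi):
    """Random ternary index vector: chi nonzero trits (+1 or -1), rest zero.
    Stored sparsely as (position, sign) pairs."""
    positions = random.sample(range(length), chi)
    return [(p, random.choice((-1, 1))) for p in positions]

# Illustrative N = 2 configuration; the values are arbitrary.
N_D = [1000, 1000]   # number of index vectors per dimension
L_D = [100, 100]     # length of index vectors per dimension
chi_D = [4, 4]       # nonzero trits per index vector, per dimension

index_vectors = [
    [make_index_vector(L, chi) for _ in range(N)]
    for N, L, chi in zip(N_D, L_D, chi_D)
]

S_e = 1
for chi in chi_D:
    S_e *= chi                      # states encoding one array element
S_s = 1
for L in L_D:
    S_s *= L                        # proportional to state-array storage
S_r = sum(N * chi for N, chi in zip(N_D, chi_D))  # index-vector storage

print(S_e, S_s, S_r)  # 16 10000 8000
```

Note how sparse storage pays off: the dense array would have \(N_1 N_2 = 10^6\) elements, while the state array holds \(10^4\) states and the index vectors need only \(8 \times 10^3\) stored trits.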
2.2 Encoding algorithm
2.3 Decoding algorithm
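As a rough intuition for the encoder/decoder pair, the following is a minimal 2-D sketch in the spirit of standard random indexing; it assumes (rather than reproduces) the paper's algorithms. An element \(a_{i_1,i_2}\) is spread over the \(S_e = \chi_1\chi_2\) states addressed by the nonzero trits of its index vectors, and decoding averages those trit-weighted states back.

```python
import random

def ternary(length, chi):
    """Sparse ternary index vector as (position, sign) pairs."""
    return [(p, random.choice((-1, 1)))
            for p in random.sample(range(length), chi)]

L1, L2, chi1, chi2 = 50, 50, 4, 4            # illustrative parameters
r1 = [ternary(L1, chi1) for _ in range(20)]  # dimension-1 index vectors
r2 = [ternary(L2, chi2) for _ in range(20)]  # dimension-2 index vectors
state = [[0.0] * L2 for _ in range(L1)]      # state array s

def encode(i1, i2, value):
    """Distribute `value` over the chi1*chi2 states selected by the trits."""
    for a1, t1 in r1[i1]:
        for a2, t2 in r2[i2]:
            state[a1][a2] += t1 * t2 * value

def decode(i1, i2):
    """Average the trit-weighted states back into an estimate of a_{i1,i2}."""
    total = 0.0
    for a1, t1 in r1[i1]:
        for a2, t2 in r2[i2]:
            total += t1 * t2 * state[a1][a2]
    return total / (chi1 * chi2)

encode(3, 7, 2.0)
print(decode(3, 7))  # 2.0 (exact here, since only one element was encoded)
```

With many elements encoded, states are shared and `decode` returns an approximation; the decoding error studied in Section 3.2 quantifies this effect.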
2.4 Generalised vector semantic analysis
3 Simulation experiments
3.1 Verification and comparison with PCA
3.2 Decoding error and comparison with ordinary RI
3.2.1 Effect of dimension reduction
3.2.2 Effect of sparseness of the index vectors
3.3 Natural language processing example
Word | Number of occurrences |
---|---|
Essential (given) | 855 |
Basic | 1920 |
Ordinary | 837 |
Eager | 480 |
Possible | 3348 |
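In word spaces built with random indexing, relatedness between a given word and candidates like those tabulated above is conventionally scored as the cosine between their context vectors. The sketch below is generic, not the paper's exact pipeline, and uses toy vectors; in practice each context vector is accumulated from the index vectors of co-occurring words.

```python
import math

def cosine(u, v):
    """Cosine similarity between two context vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy context vectors for illustration only.
essential = [1.0, 2.0, 0.0, -1.0]
basic = [0.5, 1.5, 0.5, -0.5]
print(round(cosine(essential, basic), 3))  # 0.943
```

Candidates are then ranked by their cosine against the given word's context vector, independent of raw occurrence counts.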