
Overview

TSNE implements t-distributed Stochastic Neighbor Embedding (t-SNE), a non-linear dimensionality reduction technique particularly well suited to visualizing high-dimensional data in 2D or 3D space.
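As a conceptual illustration of what t-SNE computes (a sketch of the algorithm's first step, not this library's actual code), the high-dimensional data is first converted into neighbor probabilities with a Gaussian kernel:

```typescript
// Sketch of t-SNE's first step: turn pairwise distances into
// conditional neighbor probabilities p(j|i) using a Gaussian kernel.
// In real t-SNE, sigma is tuned per point to hit the target perplexity;
// here a single fixed bandwidth is used for simplicity.
function neighborProbabilities(X: number[][], sigma = 1.0): number[][] {
  const sqDist = (a: number[], b: number[]): number =>
    a.reduce((s, v, k) => s + (v - b[k]) ** 2, 0);
  return X.map((xi, i) => {
    // Unnormalized Gaussian affinity to every other point (0 to itself)
    const w = X.map((xj, j) =>
      i === j ? 0 : Math.exp(-sqDist(xi, xj) / (2 * sigma * sigma))
    );
    const z = w.reduce((s, v) => s + v, 0);
    return w.map((v) => v / z); // normalize: each row sums to 1
  });
}
```

Nearby points receive most of the probability mass, which is why t-SNE preserves local neighborhoods so well.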

Constructor

new TSNE(options?: TSNEOptions)
options (TSNEOptions, optional): configuration options for t-SNE

Methods

fit

fit(X: Matrix): this
Fit the TSNE model on training data.
X (Matrix, required): training data of shape [nSamples, nFeatures]
Returns: the fitted TSNE instance, enabling method chaining

fitTransform

fitTransform(X: Matrix): Matrix
Fit the model and return the embedding.
X (Matrix, required): training data of shape [nSamples, nFeatures]
Returns: embedded data of shape [nSamples, nComponents]

Properties

embedding_ (Matrix | null): embedded coordinates in low-dimensional space, shape [nSamples, nComponents]; null before fitting
nFeaturesIn_ (number | null): number of features seen during fit
klDivergence_ (number | null): Kullback-Leibler divergence between the high- and low-dimensional similarity distributions after optimization
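The klDivergence_ property reports the objective t-SNE minimizes: the Kullback-Leibler divergence between the high-dimensional similarities P and the low-dimensional similarities Q. A minimal sketch of that quantity for discrete distributions (illustrative only, not this library's implementation):

```typescript
// KL(P || Q) = sum_i p_i * log(p_i / q_i) for discrete distributions.
// 0 means Q reproduces P exactly; larger values mean a worse fit.
function klDivergence(p: number[], q: number[]): number {
  return p.reduce(
    (sum, pi, i) => (pi > 0 ? sum + pi * Math.log(pi / q[i]) : sum),
    0
  );
}
```

After fitting, a lower klDivergence_ indicates the embedding reproduced the neighbor structure of the input more faithfully.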

Example

import { TSNE } from '@elucidate/elucidate';

// High-dimensional data for visualization
const data = [
  [2.5, 3.2, 1.8, 4.1],
  [2.3, 3.0, 1.7, 3.9],
  [5.1, 6.2, 4.3, 7.8],
  [5.3, 6.5, 4.5, 8.1],
  [1.2, 1.5, 0.8, 1.9],
  [1.0, 1.3, 0.7, 1.7],
];

const tsne = new TSNE({ 
  nComponents: 2,
  perplexity: 3,
  learningRate: 200,
  maxIter: 1000,
  randomState: 42 
});

const embedding = tsne.fitTransform(data);
console.log('2D embedding:', embedding);
console.log('KL divergence:', tsne.klDivergence_);

// Print each embedded 2D point (e.g. for plotting)
embedding.forEach((point, i) => {
  console.log(`Point ${i}: [${point[0].toFixed(3)}, ${point[1].toFixed(3)}]`);
});

Notes

  • t-SNE is primarily used for visualization, not for general dimensionality reduction
  • Results can vary between runs; use randomState for reproducibility
  • Perplexity should be smaller than the number of samples
  • The algorithm preserves local structure but may distort global structure
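The perplexity constraint in the notes above can be enforced with a small helper. safePerplexity is a hypothetical utility, not part of this library; the floor of 2 and default of 30 are common heuristics, not values this API documents:

```typescript
// Clamp a desired perplexity so it stays strictly below nSamples
// (a t-SNE requirement) and above a small practical floor.
// Hypothetical helper for illustration; not part of @elucidate/elucidate.
function safePerplexity(nSamples: number, desired = 30): number {
  return Math.max(2, Math.min(desired, nSamples - 1));
}
```

For the six-sample dataset in the example above, safePerplexity(6) yields 5, so the perplexity of 3 used there is safely in range.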
