Overview
The SVR (Support Vector Regressor) class implements epsilon-support vector regression. It fits a function that deviates from the training targets by at most epsilon wherever possible, penalizing larger deviations (with strength controlled by C) while keeping the function as flat as possible.
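The epsilon-insensitive loss at the heart of this formulation is easy to state directly. The helper below is an illustrative sketch, not part of the SVR API: deviations inside the epsilon tube cost nothing, and only the excess beyond epsilon is penalized.

```typescript
// Illustrative sketch of the epsilon-insensitive loss (this helper is
// not a library function): errors smaller than epsilon cost nothing.
function epsilonInsensitiveLoss(yTrue: number, yPred: number, epsilon: number): number {
  return Math.max(0, Math.abs(yTrue - yPred) - epsilon);
}

console.log(epsilonInsensitiveLoss(2.0, 2.05, 0.1)); // 0 — inside the tube
console.log(epsilonInsensitiveLoss(2.0, 2.5, 0.1));  // 0.4 — only the excess beyond epsilon
```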
Constructor
```typescript
import { SVR } from '@scikitjs/sklearn';

const regressor = new SVR({
  C: 1.0,
  epsilon: 0.1,
  kernel: 'rbf',
  gamma: 'scale',
  degree: 3,
  coef0: 0
});
```
Parameters
C
number
default:1.0
Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive.
epsilon
number
default:0.1
Epsilon in the epsilon-insensitive loss function. No penalty is associated with points predicted within epsilon distance of the actual value.
kernel
'linear' | 'poly' | 'rbf' | 'sigmoid'
default:"rbf"
Kernel type to be used:
'linear': Linear kernel
'poly': Polynomial kernel
'rbf': Radial basis function kernel
'sigmoid': Sigmoid kernel
gamma
number | 'scale' | 'auto'
default:"scale"
Kernel coefficient:
'scale': 1 / (n_features * X.variance())
'auto': 1 / n_features
number: Custom gamma value
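For intuition, the two preset gamma rules can be computed by hand. The helper below is an illustrative sketch (`resolveGamma` is not a library function); it follows the formulas above, with `'scale'` using the variance over all entries of X:

```typescript
// Sketch of how the preset gamma values resolve to numbers
// (illustrative helper, not part of the library):
function resolveGamma(gamma: number | 'scale' | 'auto', X: number[][]): number {
  const nFeatures = X[0].length;
  if (gamma === 'auto') return 1 / nFeatures;
  if (gamma === 'scale') {
    // Variance over all entries of X
    const flat = X.flat();
    const mean = flat.reduce((a, b) => a + b, 0) / flat.length;
    const variance = flat.reduce((a, b) => a + (b - mean) ** 2, 0) / flat.length;
    return 1 / (nFeatures * variance);
  }
  return gamma; // custom numeric value passes through unchanged
}

const X = [[0], [1], [2], [3], [4]];
console.log(resolveGamma('auto', X));  // 1 — one feature
console.log(resolveGamma('scale', X)); // 0.5 — variance of [0..4] is 2
```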
degree
number
default:3
Degree of the polynomial kernel function ('poly'). Ignored by all other kernels.
coef0
number
default:0
Independent term in the polynomial and sigmoid kernels. Ignored by all other kernels.
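The kernel parameters above slot into the standard kernel formulas as follows. This is a standalone sketch of the four kernels (the library evaluates them internally; these helper names are not part of its API):

```typescript
// Standalone sketch of the four kernel functions, showing exactly where
// gamma, degree, and coef0 enter (illustrative, not the library's code):
type Vec = number[];
const dot = (a: Vec, b: Vec) => a.reduce((s, ai, i) => s + ai * b[i], 0);

// 'linear': <a, b>
const linear = (a: Vec, b: Vec) => dot(a, b);

// 'poly': (gamma * <a, b> + coef0) ^ degree
const poly = (a: Vec, b: Vec, gamma: number, degree: number, coef0: number) =>
  (gamma * dot(a, b) + coef0) ** degree;

// 'rbf': exp(-gamma * ||a - b||²)
const rbf = (a: Vec, b: Vec, gamma: number) => {
  const sq = a.reduce((s, ai, i) => s + (ai - b[i]) ** 2, 0);
  return Math.exp(-gamma * sq);
};

// 'sigmoid': tanh(gamma * <a, b> + coef0)
const sigmoid = (a: Vec, b: Vec, gamma: number, coef0: number) =>
  Math.tanh(gamma * dot(a, b) + coef0);

console.log(rbf([0, 0], [0, 0], 0.5)); // 1 — identical points have maximum similarity
```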
Methods
fit()
Fit the SVM model according to the training data.
```typescript
fit(X: Matrix, y: Vector, sampleWeight?: Vector): this
```
X
Matrix
required
Training data.
y
Vector
required
Target values (continuous).
sampleWeight
Vector
optional
Sample weights (reserved for future use).
Returns: this - The fitted regressor.
predict()
Perform regression on samples.
```typescript
predict(X: Matrix): Vector
```
Returns: Vector - Predicted values.
score()
Return the coefficient of determination (R²) of the prediction.
```typescript
score(X: Matrix, y: Vector): number
```
Returns: number - R² score.
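The R² returned by score() is the standard coefficient of determination, 1 - SS_res / SS_tot. A standalone sketch of that definition (illustrative, not the library's internal code):

```typescript
// Coefficient of determination R² = 1 - SS_res / SS_tot
// (illustrative helper, not part of the library):
function r2Score(yTrue: number[], yPred: number[]): number {
  const mean = yTrue.reduce((a, b) => a + b, 0) / yTrue.length;
  const ssRes = yTrue.reduce((s, yi, i) => s + (yi - yPred[i]) ** 2, 0);
  const ssTot = yTrue.reduce((s, yi) => s + (yi - mean) ** 2, 0);
  return 1 - ssRes / ssTot;
}

console.log(r2Score([5, 9], [5, 9])); // 1 — perfect predictions
console.log(r2Score([5, 9], [7, 7])); // 0 — no better than predicting the mean
```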
getParams()
Get parameters for this estimator.
Returns: SVROptions - Current parameter settings.
setParams()
Set parameters for this estimator.
```typescript
setParams(params: Partial<SVROptions>): this
```
params
Partial<SVROptions>
required
Parameters to update.
Returns: this - The updated regressor.
Attributes
dualCoef_
Coefficients of the support vectors in the decision function.
intercept_
Intercept term in the decision function.
support_
Indices of the support vectors in the training data.
supportVectors_
The support vectors themselves (the training samples that lie on or outside the epsilon-tube).
Examples
Basic Regression
```typescript
import { SVR } from '@scikitjs/sklearn';

// Simple linear relationship
const X = [[0], [1], [2], [3], [4]];
const y = [0, 1, 2, 3, 4];

const svr = new SVR({ kernel: 'linear', C: 1.0 });
svr.fit(X, y);

const predictions = svr.predict([[1.5], [2.5]]);
console.log(predictions); // [1.5, 2.5] approximately
```
Non-linear Regression with RBF Kernel
```typescript
import { SVR } from '@scikitjs/sklearn';

// Non-linear relationship (quadratic)
const X = [[0], [1], [2], [3], [4]];
const y = [0, 1, 4, 9, 16]; // y = x²

const svr = new SVR({
  kernel: 'rbf',
  gamma: 'scale',
  C: 100,
  epsilon: 0.1
});
svr.fit(X, y);

const predictions = svr.predict([[1.5], [2.5]]);
console.log(predictions); // [2.25, 6.25] approximately
```
Multi-dimensional Regression
```typescript
import { SVR } from '@scikitjs/sklearn';

// Price prediction based on [size, age]
const X = [
  [1000, 5],
  [1200, 3],
  [1500, 8],
  [1800, 2],
  [2000, 10]
];
const y = [150000, 180000, 170000, 220000, 200000];

const svr = new SVR({ kernel: 'rbf', C: 1000 });
svr.fit(X, y);

const price = svr.predict([[1600, 5]]);
console.log(`Predicted price: $${price[0]}`);
```
Tuning Epsilon Parameter
```typescript
import { SVR } from '@scikitjs/sklearn';

const X = [[1], [2], [3], [4], [5]];
const y = [1.1, 2.0, 2.9, 4.2, 4.8]; // Noisy linear data

// Smaller epsilon (strict)
const strictSVR = new SVR({
  kernel: 'linear',
  epsilon: 0.01,
  C: 1.0
});
strictSVR.fit(X, y);

// Larger epsilon (tolerant)
const tolerantSVR = new SVR({
  kernel: 'linear',
  epsilon: 0.5,
  C: 1.0
});
tolerantSVR.fit(X, y);

console.log('Strict support vectors:', strictSVR.support_.length);
console.log('Tolerant support vectors:', tolerantSVR.support_.length);
// Larger epsilon typically results in fewer support vectors
```
Polynomial Regression
```typescript
import { SVR } from '@scikitjs/sklearn';

// Polynomial relationship
const X = [[0], [1], [2], [3], [4]];
const y = [1, 2, 5, 10, 17]; // y = x² + 1

const svr = new SVR({
  kernel: 'poly',
  degree: 2,
  coef0: 1,
  C: 100
});
svr.fit(X, y);

const predictions = svr.predict([[1.5], [2.5]]);
console.log(predictions);
```
Model Evaluation
```typescript
import { SVR } from '@scikitjs/sklearn';

// Training data
const XTrain = [[1], [2], [3], [4], [5], [6]];
const yTrain = [2, 4, 6, 8, 10, 12];

// Test data
const XTest = [[2.5], [4.5]];
const yTest = [5, 9];

const svr = new SVR({ kernel: 'linear' });
svr.fit(XTrain, yTrain);

const r2 = svr.score(XTest, yTest);
console.log(`R² score: ${r2}`);
```
Comparing Kernels
```typescript
import { SVR } from '@scikitjs/sklearn';

const X = [[0], [1], [2], [3], [4]];
const y = [0, 1, 4, 9, 16]; // y = x²

// Linear kernel
const linearSVR = new SVR({ kernel: 'linear', C: 1.0 });
linearSVR.fit(X, y);

// RBF kernel
const rbfSVR = new SVR({ kernel: 'rbf', gamma: 'scale', C: 1.0 });
rbfSVR.fit(X, y);

// Polynomial kernel
const polySVR = new SVR({ kernel: 'poly', degree: 2, C: 1.0 });
polySVR.fit(X, y);

const testPoint = [[2.5]];
console.log('Linear:', linearSVR.predict(testPoint));
console.log('RBF:', rbfSVR.predict(testPoint));
console.log('Polynomial:', polySVR.predict(testPoint));
```
Time Series Prediction
```typescript
import { SVR } from '@scikitjs/sklearn';

// Temperature over days with a cyclic pattern
const days = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
const temps = [15, 18, 22, 20, 17, 16, 19, 23, 21, 18];
const X = days.map(d => [d]);
const y = temps;

const svr = new SVR({
  kernel: 'rbf',
  gamma: 'auto',
  C: 10
});
svr.fit(X, y);

// Predict future days
const futureDays = [[11], [12]];
const predictions = svr.predict(futureDays);
console.log('Predicted temperatures:', predictions);
```
Inspecting Support Vectors
```typescript
import { SVR } from '@scikitjs/sklearn';

const X = [[1], [2], [3], [4], [5]];
const y = [2, 4, 6, 8, 10];

const svr = new SVR({ kernel: 'linear', epsilon: 0.1 });
svr.fit(X, y);

console.log('Support vector indices:', svr.support_);
console.log('Support vectors:', svr.supportVectors_);
console.log('Dual coefficients:', svr.dualCoef_);
console.log('Intercept:', svr.intercept_);
```
Notes
- The epsilon parameter defines a margin of tolerance where no penalty is given to errors
- Support vectors are samples outside the epsilon-tube or on the margin
- Larger C values lead to less regularization (tighter fit to training data)
- The RBF kernel is effective for non-linear relationships
- Feature scaling is recommended for optimal performance
- The model uses a Gram matrix approach with ridge regression for efficient computation
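As the notes mention, feature scaling matters for SVR. A minimal standardization helper, assuming plain number[][] inputs (this helper is an illustration, not part of the library):

```typescript
// Standardize each column to zero mean and unit variance before fit()
// (illustrative helper, not part of the library):
function standardize(X: number[][]): number[][] {
  const nFeatures = X[0].length;
  const cols = Array.from({ length: nFeatures }, (_, j) => X.map(row => row[j]));
  const means = cols.map(col => col.reduce((a, b) => a + b, 0) / col.length);
  const stds = cols.map((col, j) =>
    Math.sqrt(col.reduce((s, v) => s + (v - means[j]) ** 2, 0) / col.length) || 1 // guard constant columns
  );
  return X.map(row => row.map((v, j) => (v - means[j]) / stds[j]));
}

// The [size, age] data from the example above has features on very
// different scales; after standardizing, both columns are comparable.
const scaled = standardize([[1000, 5], [1200, 3], [1500, 8]]);
console.log(scaled[0]);
```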