0.17.0
Default export object.
(Tensor)
: Tensor class(Matrix)
: Matrix class(Graph)
: Graph class(Complex)
: Complex number(A2CAgent)
: A2C agent(LBABOD)
: Lower-bound for the Angle-based Outlier Detection(ABOD)
: Angle-based Outlier Detection(ADALINE)
: Adaptive Linear Neuron model(ADAMENN)
: Adaptive Metric Nearest Neighbor(AdaptiveThresholding)
: Adaptive thresholding(AffinityPropagation)
: Affinity propagation model(CentroidAgglomerativeClustering)
: Centroid agglomerative clustering(CompleteLinkageAgglomerativeClustering)
: Complete linkage agglomerative clustering(GroupAverageAgglomerativeClustering)
: Group average agglomerative clustering(MedianAgglomerativeClustering)
: Median agglomerative clustering(SingleLinkageAgglomerativeClustering)
: Single linkage agglomerative clustering(WardsAgglomerativeClustering)
: Ward's agglomerative clustering(WeightedAverageAgglomerativeClustering)
: Weighted average agglomerative clustering(AkimaInterpolation)
: Akima interpolation(ALMA)
: Approximate Large Margin algorithm(AODE)
: Averaged One-Dependence Estimators(AR)
: Autoregressive model(ARMA)
: Autoregressive moving average model(AROW)
: Adaptive regularization of Weight Vectors(ART)
: Adaptive resonance theory(AssociationAnalysis)
: Association analysis(Autoencoder)
: Autoencoder(AutomaticThresholding)
: Automatic thresholding(AverageShiftedHistogram)
: Average shifted histogram(BalancedHistogramThresholding)
: Balanced histogram thresholding(Ballseptron)
: Ballseptron(Banditron)
: Banditron(BayesianLinearRegression)
: Bayesian linear regression(BayesianNetwork)
: Bayesian Network(BernsenThresholding)
: Bernsen thresholding(BesselFilter)
: Bessel filter(BilinearInterpolation)
: Bilinear interpolation(BIRCH)
: Balanced iterative reducing and clustering using hierarchies(BOGD)
: Bounded Online Gradient Descent(BoxCox)
: Box-Cox transformation(BPA)
: Budgeted online Passive-Aggressive(BrahmaguptaInterpolation)
: Brahmagupta interpolation(MulticlassBSGD)
(BSGD)
: Budgeted Stochastic Gradient Descent(BudgetPerceptron)
: Budget Perceptron(ButterworthFilter)
: Butterworth filter(C2P)
: Clustering based on Closest Pairs(Canny)
: Canny edge detection(CAST)
: Clustering Affinity Search Technique(CategoricalNaiveBayes)
: Categorical naive Bayes(CatmullRomSplines)
: Catmull-Rom splines interpolation(CentripetalCatmullRomSplines)
: Centripetal Catmull-Rom splines interpolation(CHAMELEON)
: CHAMELEON(ChangeFinder)
: Change finder(ChebyshevFilter)
: Chebyshev filter(CLARA)
: Clustering LARge Applications(CLARANS)
: Clustering Large Applications based on RANdomized Search(CLIQUE)
: CLustering In QUEst(CLUES)
: CLUstEring based on local Shrinking(CoTraining)
: Co-training(COF)
: Connectivity-based Outlier Factor(COLL)
: Conscience on-line learning(ComplementNaiveBayes)
: Complement Naive Bayes(ConfidenceWeighted)
: Confidence weighted(SoftConfidenceWeighted)
: Soft confidence weighted(CosineInterpolation)
: Cosine interpolation(CRF)
: Conditional random fields(CubicConvolutionInterpolation)
: Cubic-convolution interpolation(CubicHermiteSpline)
: Cubic Hermite spline(CubicInterpolation)
: Cubic interpolation(CumulativeMovingAverage)
: Cumulative moving average(CumSum)
: Cumulative sum change point detection(CURE)
: Clustering Using REpresentatives(DiscriminantAdaptiveNearestNeighbor)
: Discriminant adaptive nearest neighbor(DBCLASD)
: Distribution Based Clustering of LArge Spatial Databases(DBSCAN)
: Density-based spatial clustering of applications with noise(DecisionTreeClassifier)
: Decision tree classifier(DecisionTreeRegression)
: Decision tree regression(DelaunayInterpolation)
: Delaunay interpolation(DemingRegression)
: Deming regression(DENCLUE)
: DENsity CLUstering(DIANA)
: DIvisive ANAlysis Clustering(DiffusionMap)
: Diffusion map(DQNAgent)
: Deep Q-Network agent(DPAgent)
: Dynamic programming agent(ElasticNet)
: Elastic net(EllipticFilter)
: Elliptic filter(ENaN)
: Extended Natural Neighbor(ENN)
: Extended Nearest Neighbor(EnsembleBinaryModel)
: Ensemble binary models(ExponentialMovingAverage)
: Exponential moving average(ModifiedMovingAverage)
: Modified moving average(ExtraTreesClassifier)
: Extra trees classifier(ExtraTreesRegressor)
: Extra trees regressor(FastMap)
: FastMap(FINDIT)
: a Fast and INtelligent subspace clustering algorithm using DImension voting(Forgetron)
: Forgetron(FuzzyCMeans)
: Fuzzy c-means(FuzzyKNN)
: Fuzzy k-nearest neighbor(GAN)
: Generative adversarial networks(GasserMuller)
: Gasser–Müller kernel estimator(GaussianProcess)
: Gaussian process(GBDT)
: Gradient boosting decision tree(GBDTClassifier)
: Gradient boosting decision tree classifier(GeneralizedESD)
: Generalized extreme studentized deviate(GeneticAlgorithmGeneration)
: Genetic algorithm generation(GeneticKMeans)
: Genetic k-means model(GMeans)
: G-means(GMM)
: Gaussian mixture model(GMR)
: Gaussian mixture regression(SemiSupervisedGMM)
: Semi-Supervised gaussian mixture model(GPLVM)
: Gaussian Process Latent Variable Model(GrowingCellStructures)
: Growing cell structures(GrowingNeuralGas)
: Growing neural gas(GSOM)
: Growing Self-Organizing Map(GTM)
: Generative topographic mapping(HampelFilter)
: Hampel filter(HDBSCAN)
: Hierarchical Density-based spatial clustering of applications with noise(Histogram)
: Histogram(HLLE)
: Hessian Locally Linear Embedding(ContinuousHMM)
: Continuous hidden Markov model(HMM)
: Hidden Markov model(HoltWinters)
: Holt-Winters method(HopfieldNetwork)
: Hopfield network(Hotelling)
: Hotelling T-square Method(HuberRegression)
: Huber regression(ICA)
: Independent component analysis(CELLIP)
: Classical ellipsoid method(IELLIP)
: Improved ellipsoid method(IKNN)
: Locally Informative K-Nearest Neighbor(IncrementalPCA)
: Incremental principal component analysis(INFLO)
: Influenced Outlierness(InverseDistanceWeighting)
: Inverse distance weighting(InverseSmoothstepInterpolation)
: Inverse smoothstep interpolation(ISODATA)
: Iterative Self-Organizing Data Analysis Technique(IsolationForest)
: Isolation forest(Isomap)
: Isomap(IsotonicRegression)
: Isotonic regression(KalmanFilter)
: Kalman filter(KDEOS)
: Kernel Density Estimation Outlier Score(KernelDensityEstimator)
: Kernel density estimator(KernelKMeans)
: Kernel k-means(KernelizedPegasos)
: Kernelized Primal Estimated sub-GrAdient SOlver for SVM(KernelizedPerceptron)
: Kernelized perceptron(KLIEP)
: Kullback-Leibler importance estimation procedure(KMeans)
: k-means model(KMeanspp)
: k-means++ model(KMedians)
: k-medians model(KMedoids)
: k-medoids model(SemiSupervisedKMeansModel)
: semi-supervised k-means model(KModes)
: k-modes model(KNN)
: k-nearest neighbor(KNNAnomaly)
: k-nearest neighbor anomaly detection(KNNDensityEstimation)
: k-nearest neighbor density estimation(KNNRegression)
: k-nearest neighbor regression(SemiSupervisedKNN)
: Semi-supervised k-nearest neighbor(KPrototypes)
: k-prototypes model(KSVD)
: k-SVD(KolmogorovZurbenkoFilter)
: Kolmogorov–Zurbenko filter(LabelPropagation)
: Label propagation(LabelSpreading)
: Label spreading(LadderNetwork)
: Ladder network(LagrangeInterpolation)
: Lagrange interpolation(LanczosInterpolation)
: Lanczos interpolation(Laplacian)
: Laplacian edge detection(LaplacianEigenmaps)
: Laplacian eigenmaps(Lasso)
: Least absolute shrinkage and selection operator(LatentDirichletAllocation)
: Latent Dirichlet allocation(LBG)
: Linde-Buzo-Gray algorithm(FishersLinearDiscriminant)
: Fisher's linear discriminant analysis(LinearDiscriminant)
: Linear discriminant analysis(LinearDiscriminantAnalysis)
: Linear discriminant analysis(MulticlassLinearDiscriminant)
: Multiclass linear discriminant analysis(LDF)
: Local Density Factor(LDOF)
: Local Distance-based Outlier Factor(LeastAbsolute)
: Least absolute deviations(LeastSquares)
: Least squares(LinearInterpolation)
: Linear interpolation(LLE)
: Locally Linear Embedding(LeastMedianSquaresRegression)
: Least median squares regression(LMNN)
: Large Margin Nearest Neighbor(LOCI)
: Local Correlation Integral(LOESS)
: Locally estimated scatterplot smoothing(LOF)
: Local Outlier Factor(LoG)
: Laplacian of Gaussian filter(LogarithmicInterpolation)
: Logarithmic interpolation(LogisticRegression)
: Logistic regression(MultinomialLogisticRegression)
: Multinomial logistic regression(LoOP)
: Local Outlier Probability(LOWESS)
: Locally weighted scatter plot smooth(LowpassFilter)
: Lowpass filter(LpNormLinearRegression)
: Lp norm linear regression(LSA)
: Latent Semantic Analysis(LSDD)
: Least-squares density difference(LSDDCPD)
: LSDD for change point detection(LSIF)
: least-squares importance fitting(LeastTrimmedSquaresRegression)
: Least trimmed squares(LTSA)
: Local Tangent Space Alignment(LVQClassifier)
: Learning Vector Quantization classifier(LVQCluster)
: Learning Vector Quantization clustering(MAD)
: Median Absolute Deviation(MADALINE)
: Many Adaptive Linear Neuron model(MarginPerceptron)
: Margin Perceptron(MarkovSwitching)
: Markov switching(MaxAbsScaler)
: Max absolute scaler(MaximumLikelihoodEstimator)
: Maximum likelihood estimator(MCD)
: Minimum Covariance Determinant(MixtureDiscriminant)
: Mixture discriminant analysis(MDS)
: Multi-dimensional Scaling(MeanShift)
: Mean shift(MetropolisHastings)
: Metropolis-Hastings algorithm(MinmaxNormalization)
: Min-max normalization(MIRA)
: Margin Infused Relaxed Algorithm(MLLE)
: Modified Locally Linear Embedding(MLPClassifier)
: Multi layer perceptron classifier(MLPRegressor)
: Multi layer perceptron regressor(MOD)
: Method of Optimal Direction(MONA)
: MONothetic Analysis Clustering(MonotheticClustering)
: Monothetic Clustering(MCAgent)
: Monte Carlo agent(Mountain)
: Mountain method(LinearWeightedMovingAverage)
: Linear weighted moving average(SimpleMovingAverage)
: Simple moving average(TriangularMovingAverage)
: Triangular moving average(MovingMedian)
: Moving median(MT)
: Mahalanobis Taguchi method(MutualInformationFeatureSelection)
: Mutual information feature selector(MutualKNN)
: Mutual k-nearest-neighbor model(NCubicInterpolation)
: n-cubic interpolation(NLinearInterpolation)
: n-linear interpolation(NadarayaWatson)
: Nadaraya–Watson kernel regression(NaiveBayes)
: Naive Bayes(NAROW)
: Narrow Adaptive Regularization Of Weights(NaturalNeighborInterpolation)
: Natural neighbor interpolation(NeighbourhoodComponentsAnalysis)
: Neighbourhood components analysis(NearestCentroid)
: Nearest centroid classifier(NegationNaiveBayes)
: Negation Naive Bayes(NeuralGas)
: Neural gas model(ComputationalGraph)
(Layer)
(NeuralnetworkException)
: Exception for neural network class(NeuralNetwork)
: Neural network(NiblackThresholding)
: Niblack thresholding(NICE)
: Flow-based generative model non-linear independent component estimation(NLMeans)
: Non-local means filter(NMF)
: Non-negative matrix factorization(NNBCA)
: Natural Neighborhood Based Classification Algorithm(NormalHERD)
: Normal Herd(OCSVM)
: One-class support vector machine(ODIN)
: Outlier Detection using Indegree Number(OnlineGradientDescent)
: Online gradient descent(OPTICS)
: Ordering points to identify the clustering structure(ORCLUS)
: arbitrarily ORiented projected CLUSter generation(OtsusThresholding)
: Otsu's thresholding(PAM)
: Partitioning Around Medoids(ParticleFilter)
: Particle filter(PassingBablok)
: Passing-Bablok method(PA)
: Passive Aggressive(PAUM)
: Perceptron Algorithm with Uneven Margins(AnomalyPCA)
: Principal component analysis for anomaly detection(DualPCA)
: Dual Principal component analysis(KernelPCA)
: Kernel Principal component analysis(PCA)
: Principal component analysis(PossibilisticCMeans)
: Possibilistic c-means(PCR)
: Principal component regression(Pegasos)
: Primal Estimated sub-GrAdient SOlver for SVM(PercentileAnormaly)
: Percentile anomaly detection(AveragedPerceptron)
: Averaged perceptron(MulticlassPerceptron)
: Multiclass perceptron(Perceptron)
: Perceptron(PhansalkarThresholding)
: Phansalkar thresholding(PLS)
: Partial least squares regression(PLSA)
: Probabilistic latent semantic analysis(PoissonRegression)
: Poisson regression(PGAgent)
: Policy gradient agent(PolynomialHistogram)
: Polynomial histogram(PolynomialInterpolation)
: Polynomial interpolation(ProjectionPursuit)
: Projection pursuit regression(Prewitt)
: Prewitt edge detection(PriestleyChao)
: Priestley–Chao kernel estimator(PrincipalCurve)
: Principal curves(ProbabilisticPCA)
: Probabilistic Principal component analysis(ProbabilityBasedClassifier)
: Probability based classifier(MultinomialProbit)
: Multinomial probit(Probit)
: Probit(PROCLUS)
: PROjected CLUStering algorithm(Projectron)
: Projectron(Projectronpp)
: Projectron++(PTile)
: P-tile thresholding(QTableBase)
: Base class for Q-table(QAgent)
: Q-learning agent(QuadraticDiscriminant)
: Quadratic discriminant analysis(QuantileRegression)
: Quantile regression(RadiusNeighbor)
: radius neighbor(RadiusNeighborRegression)
: radius neighbor regression(SemiSupervisedRadiusNeighbor)
: Semi-supervised radius neighbor(RamerDouglasPeucker)
: Ramer-Douglas-Peucker algorithm(RandomForestClassifier)
: Random forest classifier(RandomForestRegressor)
: Random forest regressor(RandomProjection)
: Random projection(RANSAC)
: Random sample consensus(RadialBasisFunctionNetwork)
: Radial basis function network(GBRBM)
: Gaussian-Bernoulli Restricted Boltzmann machine(RBM)
: Restricted Boltzmann machine(RBP)
: Randomized Budget Perceptron(RDF)
: Relative Density Factor(RDOS)
: Relative Density-based Outlier Score(KernelRidge)
: Kernel ridge regression(Ridge)
: Ridge regression(RKOF)
: Robust Kernel-based Outlier Factor(RecursiveLeastSquares)
: Recursive least squares(RepeatedMedianRegression)
: Repeated median regression(RNN)
: Recurrent neural network(RobertsCross)
: Roberts cross(RobustScaler)
: Robust scaler(ROCK)
: RObust Clustering using linKs(AggressiveROMMA)
: Aggressive Relaxed Online Maximum Margin Algorithm(ROMMA)
: Relaxed Online Maximum Margin Algorithm(RVM)
: Relevance vector machine(S3VM)
: Semi-Supervised Support Vector Machine(Sammon)
: Sammon mapping(SARSAAgent)
: SARSA agent(SauvolaThresholding)
: Sauvola thresholding(SavitzkyGolayFilter)
: Savitzky-Golay filter(SDAR)
: Sequentially Discounting Autoregressive model(SegmentedRegression)
: Segmented regression(SelectiveNaiveBayes)
: Selective Naive Bayes(SelectiveSamplingAdaptivePerceptron)
: Selective sampling Perceptron with adaptive parameter(SelectiveSamplingPerceptron)
: Selective sampling Perceptron(SelectiveSamplingSOP)
: Selective sampling second-order Perceptron(SelectiveSamplingWinnow)
: Selective sampling Winnow(SelfTraining)
: Self-training(SemiSupervisedNaiveBayes)
: Semi-supervised naive Bayes(SezanThresholding)
: Sezan's thresholding(ShiftingPerceptron)
: Shifting Perceptron Algorithm(ILK)
: Implicit online Learning with Kernels(SILK)
: Sparse Implicit online Learning with Kernels(SincInterpolation)
: Sinc interpolation(SlicedInverseRegression)
: Sliced inverse regression(Slerp)
: Spherical linear interpolation(SliceSampling)
: Slice sampling(SMARegression)
: Standardized Major Axis regression(SmirnovGrubbs)
: Smirnov–Grubbs test(SmoothstepInterpolation)
: Smoothstep interpolation(Snakes)
: Snakes (active contour model)(Sobel)
: Sobel edge detection(SoftKMeans)
: Soft k-means(SOM)
: Self-Organizing Map(SecondOrderPerceptron)
: Second order perceptron(SpectralClustering)
: Spectral clustering(SmoothingSpline)
: Spline smoothing(SplineInterpolation)
: Spline interpolation(SplitAndMerge)
: Split and merge segmentation(SquaredLossMICPD)
: Squared-loss Mutual information change point detection(SST)
: Singular-spectrum transformation(Standardization)
: Standardization(StatisticalRegionMerging)
: Statistical Region Merging(STING)
: STatistical INformation Grid-based method(Stoptron)
: Stoptron(SVC)
: Support vector clustering(SVM)
: Support vector machine(SVR)
: Support vector regression(TheilSenRegression)
: Theil-Sen regression(Thompson)
: Thompson test(TietjenMoore)
: Tietjen-Moore Test(TighterPerceptron)
: Tighter Budget Perceptron(TightestPerceptron)
: Tightest Perceptron(TrigonometricInterpolation)
: Trigonometric interpolation(SNE)
: Stochastic Neighbor Embedding(tSNE)
: t-distributed Stochastic Neighbor Embedding(TukeyRegression)
: Tukey regression(TukeysFences)
: Tukey's fences(RuLSIF)
: Relative unconstrained Least-Squares Importance Fitting(uLSIF)
: unconstrained Least-Squares Importance Fitting(UMAP)
: Uniform Manifold Approximation and Projection(UniversalSetNaiveBayes)
: Universal-set Naive Bayes(VAE)
: Variational Autoencoder(VAR)
: Vector Autoregressive model(VBGMM)
: Variational Gaussian Mixture Model(VotedPerceptron)
: Voted-perceptron(WeightedKMeans)
: Weighted k-means model(WeightedKNN)
: Weighted K-Nearest Neighbor(WeightedLeastSquares)
: Weighted least squares(Winnow)
: Winnow(Word2Vec)
: Word2Vec(XGBoost)
: eXtreme Gradient Boosting regression(XGBoostClassifier)
: eXtreme Gradient Boosting classifier(XMeans)
: x-means(YeoJohnson)
: Yeo-Johnson power transformation(ZeroInflatedPoisson)
: Zero-inflated Poisson(ZeroTruncatedPoisson)
: Zero-truncated Poisson(AcrobotRLEnvironment)
: Acrobot environment(RLEnvironmentBase)
: Base class for reinforcement learning environment(RLIntRange)
: Integer number range state/action(RLRealRange)
: Real number range state/action(EmptyRLEnvironment)
: Empty environment(BlackjackRLEnvironment)
: Blackjack environment(BreakerRLEnvironment)
: Breaker environment(CartPoleRLEnvironment)
: Cartpole environment(DraughtsRLEnvironment)
: Draughts environment(GomokuRLEnvironment)
: Gomoku environment(GridMazeRLEnvironment)
: Grid world environment(InHypercubeRLEnvironment)
: In-hypercube environment(SmoothMazeRLEnvironment)
: Smooth maze environment(MountainCarRLEnvironment)
: MountainCar environment(PendulumRLEnvironment)
: Pendulum environment(ReversiRLEnvironment)
: Reversi environment(WaterballRLEnvironment)
: Waterball environment(accuracy)
: Returns accuracy.(cohensKappa)
: Returns Cohen's kappa coefficient.(fScore)
: Returns F-score with macro average.(precision)
: Returns precision with macro average.(recall)
: Returns recall with macro average.(davisBouldinIndex)
: Returns Davies-Bouldin index.(diceIndex)
: Returns Dice index.(dunnIndex)
: Returns Dunn index.(fowlkesMallowsIndex)
: Returns Fowlkes-Mallows index.(jaccardIndex)
: Returns Jaccard index.(purity)
: Returns Purity.(randIndex)
: Returns Rand index.(silhouetteCoefficient)
: Returns Silhouette coefficient.(coRankingMatrix)
: Returns Co-Ranking Matrix.(correlation)
: Returns correlation.(mad)
: Returns MAD (Median Absolute Deviation).(mae)
: Returns MAE (Mean Absolute Error).(mape)
: Returns MAPE (Mean Absolute Percentage Error).(mse)
: Returns MSE (Mean Squared Error).(msle)
: Returns MSLE (Mean Squared Logarithmic Error).(r2)
: Returns R2 (coefficient of determination).(rmse)
: Returns RMSE (Root Mean Squared Error).(rmsle)
: Returns RMSLE (Root Mean Squared Logarithmic Error).(rmspe)
: Returns RMSPE (Root Mean Squared Percentage Error).Exception for matrix class
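The evaluation helpers above follow standard textbook definitions. As a plain-JavaScript sketch (illustrative only; the function name and signature are not the library's API), RMSE can be written as:

```javascript
// Root Mean Squared Error, per its standard definition.
// Illustrative sketch only, not the library's implementation.
function rmse(actual, predicted) {
  const sse = actual.reduce((s, y, i) => s + (y - predicted[i]) ** 2, 0)
  return Math.sqrt(sse / actual.length)
}

rmse([1, 2, 3], [1, 2, 5]) // ≈ 1.1547 (= sqrt(4 / 3))
```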
Extends Error
(string)
Error message(any)
Some valueMatrix class
Sizes of the matrix.
Elements in the matrix.
Iterate over the elements.
Set a value at the position.
Reshape this.
Concatenate this and m.
Returns a matrix reduced along the axis with the callback function.
(any?)
Initial value(boolean = null
)
Keep dimensions or not. If null, a negative axis returns a number and other axes return a Matrix.(Matrix | number)
: Reduced matrix or valueMultiply all elements by -1 in-place.
Set all elements to their logical NOT values.
Set all elements to their bitwise NOT values.
Set all elements to their absolute values.
Set all elements to their rounded values.
Set all elements to their floored values.
Set all elements to their ceil values.
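The reduce behaviour described above (fold along an axis, with a negative axis yielding a single number and a non-negative axis yielding a Matrix) can be sketched for a plain 2-D array. The name `reduce2d` and its signature are illustrative, not the library's API:

```javascript
// Illustrative axis reduction over a 2-D array (rows x cols):
// a negative axis folds every element into a single number,
// axis 0 folds down the rows (result 1 x cols),
// axis 1 folds across the columns (result rows x 1).
function reduce2d(mat, fn, init, axis) {
  if (axis < 0) return mat.flat().reduce(fn, init)
  if (axis === 0) {
    const out = Array(mat[0].length).fill(init)
    for (const row of mat) row.forEach((v, j) => (out[j] = fn(out[j], v)))
    return [out]
  }
  return mat.map(row => [row.reduce(fn, init)])
}

reduce2d([[1, 2], [3, 4]], (a, b) => a + b, 0, -1) // 10
reduce2d([[1, 2], [3, 4]], (a, b) => a + b, 0, 0)  // [[4, 6]]
```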
Tensor class
Sizes of the tensor.
Elements in the tensor.
Iterate over the elements.
Returns a Matrix if the dimension of this tensor is 2.
Matrix
: MatrixConcatenate this and t.
Returns a tensor reduced along the axis with the callback function.
(any?)
Initial value(boolean = false
)
Keep dimensions or not.(Tensor | number)
: Reduced tensor or valueException for graph class
Extends Error
(string)
Error message(any)
Some valueEdge of graph
Graph class
Returns named graph
"balaban 10 cage"
| "bidiakis cube"
| "biggs smith"
| "brinkmann"
| "bull"
| "butterfly"
| "chvatal"
| "clebsch"
| "coxeter"
| "desargues"
| "diamond"
| "durer"
| "errera"
| "folkman"
| "foster"
| "franklin"
| "frucht"
| "goldner-harary"
| "golomb"
| "gray"
| "grotzsch"
| "harries"
| "heawood"
| "herschel"
| "hoffman"
| "holt"
| "kittell"
| "markstrom"
| "mcgee"
| "meredith"
| "mobius kantor"
| "moser spindle"
| "nauru"
| "pappus"
| "petersen"
| "poussin"
| "robertson"
| "shrikhande"
| "sousselier"
| "sylvester"
| "tutte"
| "tutte coxeter"
| "wagner"
| "wells"
)): Graph(("balaban 10 cage"
| "bidiakis cube"
| "biggs smith"
| "brinkmann"
| "bull"
| "butterfly"
| "chvatal"
| "clebsch"
| "coxeter"
| "desargues"
| "diamond"
| "durer"
| "errera"
| "folkman"
| "foster"
| "franklin"
| "frucht"
| "goldner-harary"
| "golomb"
| "gray"
| "grotzsch"
| "harries"
| "heawood"
| "herschel"
| "hoffman"
| "holt"
| "kittell"
| "markstrom"
| "mcgee"
| "meredith"
| "mobius kantor"
| "moser spindle"
| "nauru"
| "pappus"
| "petersen"
| "poussin"
| "robertson"
| "shrikhande"
| "sousselier"
| "sylvester"
| "tutte"
| "tutte coxeter"
| "wagner"
| "wells"
))
Name of the graphGraph
: Named graphEdges
Return degree of the node.
(number)
Index of target node((boolean | "in"
| "out"
) = true
)
Count undirected edges. If in
or out
is specified, only directed edges are counted and the direct
parameter is ignored.((boolean | "in"
| "out"
) = true
)
Count directed edgesnumber
: Degree of the nodeReturn indexes of adjacency nodes.
"in"
| "out"
), direct: (boolean | "in"
| "out"
)): Array<number>(number)
Index of target node((boolean | "in"
| "out"
) = true
)
Check undirected edges. If in
or out
is specified, only directed edges are checked and the direct
parameter is ignored.((boolean | "in"
| "out"
) = true
)
Check directed edgesArray<number>
: Indexes of adjacency nodesReturns indexes of each components.
Returns indexes of each biconnected components.
Add the node.
(unknown?)
Value of the nodeRemove all nodes.
Returns the edges.
"forward"
| "backward"
), direct: (boolean | "forward"
| "backward"
)): Array<Edge>(number)
Index of the starting node of the edge(number)
Index of the end node of the edge((boolean | "forward"
| "backward"
) = true
)
Get undirected edges or not. If forward
or backward
is specified, only directed edges are returned and the direct
parameter is ignored.((boolean | "forward"
| "backward"
) = true
)
Get directed edges or notArray<Edge>
: Edges between from
and to
Remove the edges.
Remove all edges.
Returns adjacency matrix
Returns adjacency list
"both"
| "in"
| "out"
))(("both"
| "in"
| "out"
) = both
)
Indegree or outdegreeReturns degree matrix.
"both"
| "in"
| "out"
))(("both"
| "in"
| "out"
) = both
)
Indegree or outdegreeReturns Laplacian matrix.
Returns whether this is a planar graph, using the add-path algorithm.
On the Cutting Edge: Simplified O(n) Planarity by Edge Addition https://xuzijian629.hatenablog.com/entry/2019/12/14/163726
boolean
: true
if this is a planar graphReturns whether this is a planar graph, using the add-vertex algorithm.
Hopcroft, J. and Tarjan, R. "Efficient Planarity Testing", J. ACM, Vol. 21, No. 4, pp. 549-568 (1974) 西関 隆夫. "32. グラフの平面性判定法", 情報処理, Vol. 24, No. 4, pp. 521-528 (1983) K. S. Booth, "Testing for the Consecutive Ones Property, Interval Graphs, and Graph Planarity Using PQ-Tree Algorithms", Journal of computer and system sciences, 13, pp. 335-379 (1976)
boolean
: true
if this is a planar graphContract this graph.
Subdivide this graph.
Substitute other graph at the node.
Returns shortest path with breadth first search algorithm.
(number)
Index of start nodeArray<{length: number, prev: number, path: Array<number>}>
: Shortest length and path for all nodesReturns shortest path with Floyd–Warshall algorithm.
Returns Hamiltonian cycle
Returns minimum cut.
Returns minimum cut.
Returns minimum cut.
Returns bisection cut.
Complex number
A2C agent
(RLEnvironmentBase)
Environment(number)
Resolution of actions(number)
Number of processes(string)
Optimizer of the networkAngle-based Outlier Detection
(number = Infinity
)
Number of neighborhoodsLower-bound for the Angle-based Outlier Detection
Adaptive Linear Neuron model
(number)
Learning rateAdaptive Metric Nearest Neighbor
(number? = null
)
The number of neighbors of the test point(number = 3
)
The number of neighbors in N1 for estimation(number? = null
)
The size of the neighborhood N2 for each of the k0 neighbors for estimation(number? = null
)
The number of points within the delta intervals(number = 3
)
The number of neighbors in the final nearest neighbor rule(number = 0.5
)
The positive factor for the exponential weighting schemeAdaptive thresholding
"mean"
| "gaussian"
| "median"
| "midgray"
), k: number, c: number)(("mean"
| "gaussian"
| "median"
| "midgray"
) = 'mean'
)
Method name(number = 3
)
Size of local range(number = 2
)
Value subtracted from thresholdAffinity propagation model
Type: object
(number?)
: Data index of leaf node(number?)
: Distance between children nodes(number)
: Number of leaf nodes(Array<AgglomerativeClusterNode>?)
: Children nodes(Array<AgglomerativeClusterNode>)
: Leaf nodesAgglomerative clustering
"euclid"
| "manhattan"
| "chebyshev"
))(("euclid"
| "manhattan"
| "chebyshev"
) = 'euclid'
)
Metric nameReturns the specified number of clusters.
(number)
Number of clustersArray<AgglomerativeClusterNode>
: Cluster nodesReturns a distance between two nodes.
(AgglomerativeClusterNode)
Node(AgglomerativeClusterNode)
Nodenumber
: DistanceReturns new distance.
(number)
Number of data points in a merging node A(number)
Number of data points in a merging node B(number)
Number of data points in a current node(number)
Distance between node A and current node(number)
Distance between node B and current node(number)
Distance between node A and node Bnumber
: New distance between current node and merged nodeComplete linkage agglomerative clustering
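The newDistance parameters above (the two merging-cluster sizes, the current-node size, and the three pairwise distances) are exactly what the Lance–Williams update formula consumes; the linkage subclasses effectively differ only in the coefficients. A sketch with group-average coefficients (illustrative, not the library's code):

```javascript
// Lance–Williams update: distance from the merged node (A ∪ B) to the
// current node C. Coefficients below are for group-average linkage;
// single, complete, Ward's, etc. differ only in the coefficients
// (Ward's linkage also uses the current node size nc).
function lanceWilliams(na, nb, nc, dac, dbc, dab) {
  const alphaA = na / (na + nb)
  const alphaB = nb / (na + nb)
  const beta = 0   // group average: no d(A, B) term
  const gamma = 0  // group average: no |d(A,C) - d(B,C)| term
  return alphaA * dac + alphaB * dbc + beta * dab + gamma * Math.abs(dac - dbc)
}

lanceWilliams(1, 1, 1, 2, 4, 1) // → 3 (size-weighted average of 2 and 4)
```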
Extends AgglomerativeClustering
Returns a distance between two nodes.
(AgglomerativeClusterNode)
Node(AgglomerativeClusterNode)
Nodenumber
: DistanceReturns new distance.
(number)
Number of data points in a merging node A(number)
Number of data points in a merging node B(number)
Number of data points in a current node(number)
Distance between node A and current node(number)
Distance between node B and current node(number)
Distance between node A and node Bnumber
: New distance between current node and merged nodeSingle linkage agglomerative clustering
Extends AgglomerativeClustering
Returns a distance between two nodes.
(AgglomerativeClusterNode)
Node(AgglomerativeClusterNode)
Nodenumber
: DistanceReturns new distance.
(number)
Number of data points in a merging node A(number)
Number of data points in a merging node B(number)
Number of data points in a current node(number)
Distance between node A and current node(number)
Distance between node B and current node(number)
Distance between node A and node Bnumber
: New distance between current node and merged nodeGroup average agglomerative clustering
Extends AgglomerativeClustering
Returns a distance between two nodes.
(AgglomerativeClusterNode)
Node(AgglomerativeClusterNode)
Nodenumber
: DistanceReturns new distance.
(number)
Number of data points in a merging node A(number)
Number of data points in a merging node B(number)
Number of data points in a current node(number)
Distance between node A and current node(number)
Distance between node B and current node(number)
Distance between node A and node Bnumber
: New distance between current node and merged nodeWard's agglomerative clustering
Extends AgglomerativeClustering
Returns a distance between two nodes.
(AgglomerativeClusterNode)
Node(AgglomerativeClusterNode)
Nodenumber
: DistanceReturns new distance.
(number)
Number of data points in a merging node A(number)
Number of data points in a merging node B(number)
Number of data points in a current node(number)
Distance between node A and current node(number)
Distance between node B and current node(number)
Distance between node A and node Bnumber
: New distance between current node and merged nodeCentroid agglomerative clustering
Extends AgglomerativeClustering
Returns a distance between two nodes.
(AgglomerativeClusterNode)
Node(AgglomerativeClusterNode)
Nodenumber
: DistanceReturns new distance.
(number)
Number of data points in a merging node A(number)
Number of data points in a merging node B(number)
Number of data points in a current node(number)
Distance between node A and current node(number)
Distance between node B and current node(number)
Distance between node A and node Bnumber
: New distance between current node and merged nodeWeighted average agglomerative clustering
Extends AgglomerativeClustering
Returns a distance between two nodes.
(AgglomerativeClusterNode)
Node(AgglomerativeClusterNode)
Nodenumber
: DistanceReturns new distance.
(number)
Number of data points in a merging node A(number)
Number of data points in a merging node B(number)
Number of data points in a current node(number)
Distance between node A and current node(number)
Distance between node B and current node(number)
Distance between node A and node Bnumber
: New distance between current node and merged nodeMedian agglomerative clustering
Extends AgglomerativeClustering
Returns a distance between two nodes.
(AgglomerativeClusterNode)
Node(AgglomerativeClusterNode)
Nodenumber
: DistanceReturns new distance.
(number)
Number of data points in a merging node A(number)
Number of data points in a merging node B(number)
Number of data points in a current node(number)
Distance between node A and current node(number)
Distance between node B and current node(number)
Distance between node A and node Bnumber
: New distance between current node and merged nodeAkima interpolation
(boolean = false
)
Use modified method or notApproximate Large Margin algorithm
(number = 2
)
Power parameter for norm(number = 1
)
Degree of approximation to the optimal margin hyperplane(number = 1
)
Tuning parameter(number = 1
)
Tuning parameterAveraged One-Dependence Estimators
(number = 20
)
Discretized numberAutoregressive model
(number)
Order(("lsm"
| "yuleWalker"
| "levinson"
| "householder"
) = lsm
)
Method nameFit model.
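As a concrete illustration of the least-squares ("lsm") option, the coefficient of an AR(1) model can be fitted by ordinary least squares on lag-1 pairs. This is a generic zero-mean sketch, not the library's fit method:

```javascript
// Least-squares fit of an AR(1) model x[t] ≈ phi * x[t-1]
// for a zero-mean series: phi = sum(x[t] * x[t-1]) / sum(x[t-1]^2).
function fitAR1(x) {
  let num = 0, den = 0
  for (let t = 1; t < x.length; t++) {
    num += x[t] * x[t - 1]
    den += x[t - 1] * x[t - 1]
  }
  return num / den
}

// A noise-free series generated with phi = 0.5 recovers phi exactly.
fitAR1([8, 4, 2, 1]) // → 0.5
```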
Autoregressive moving average model
Fit model.
Adaptive regularization of Weight Vectors
(number = 0.1
)
Learning rateAdaptive resonance theory
"l2"
)(number = 1
)
Threshold("l2"
= 'l2'
)
Method nameApriori algorithm
(number)
Minimum supportAssociation analysis
(number)
Minimum supportFit model.
Autoencoder
(number)
Input size(number)
Reduced dimension(string)
Optimizer of the networkAutomatic thresholding
Fit model.
Average shifted histogram
Balanced histogram thresholding
(number = 500
)
Minimum data countBallseptron
(number)
RadiusBanditron
(number = 0.5
)
GammaBayesian linear regression
Bayesian Network
(number)
Equivalent sample sizeFit model.
Bernsen thresholding
Bessel filter
Extends LowpassFilter
Bilinear interpolation
Balanced iterative reducing and clustering using hierarchies
(number)
(number = 10
)
Maximum number of entries for each non-leaf nodes(number = 0.2
)
Threshold(number = Infinity
)
Maximum number of entries for each leaf nodesBounded Online Gradient Descent
"uniform"
| "nonuniform"
), kernel: ("gaussian"
| "polynomial"
| function (Array<number>, Array<number>): number), loss: ("zero_one"
| "hinge"
))(number = 10
)
Maximum budget size(number = 1
)
Stepsize(number = 0.1
)
Regularization parameter(number = 0.1
)
Maximum coefficient(("uniform"
| "nonuniform"
) = nonuniform
)
Sampling approach(("zero_one"
| "hinge"
) = hinge
)
Loss type nameBox-Cox transformation
(number? = null
)
LambdaBudgeted online Passive-Aggressive
"simple"
| "projecting"
| "nn"
), kernel: ("gaussian"
| "polynomial"
| function (Array<number>, Array<number>): number))(number = 1
)
Regularization parameter(number = 10
)
Budget size(("simple"
| "projecting"
| "nn"
) = simple
)
VersionBrahmagupta interpolation
Budgeted Stochastic Gradient Descent
"removal"
| "projection"
| "merging"
), kernel: ("gaussian"
| "polynomial"
| function (Array<number>, Array<number>): number))(number = 10
)
Budget size(number = 1
)
Learning rate(number = 1
)
Regularization parameter(("removal"
| "projection"
| "merging"
) = removal
)
Maintenance type"removal"
| "projection"
| "merging"
), kernel: ("gaussian"
| "polynomial"
| function (Array<number>, Array<number>): number))Fit model.
Returns predicted values.
Array<any>
: Predicted valuesBudget Perceptron
Butterworth filter
Extends LowpassFilter
Clustering based on Closest Pairs
Canny edge detection
Clustering Affinity Search Technique
(number)
Affinity thresholdCategorical naive bayes
(number = 1.0
)
Smoothing parameterCatmull-Rom splines interpolation
Centripetal Catmull-Rom splines interpolation
(number = 0.5
)
Number for knot parameterizationCHAMELEON
(number = 5
)
Number of neighborhoodsChange finder
Chebyshev filter
Extends LowpassFilter
Clustering LARge Applications
(number)
Number of clustersClustering Large Applications based on RANdomized Search
(number)
Number of clustersCLustering In QUEst
CLUstEring based on local Shrinking
(number = 0.05
)
Speed factorCo-training
Connectivity-based Outlier Factor
(number)
Number of neighborhoodsConscience on-line learning
"gaussian"
| "polynomial"
| function (Array<number>, Array<number>): number))Complement Naive Bayes
"gaussian"
)("gaussian"
= gaussian
)
Distribution nameConfidence weighted
(number)
Confidence valueSoft confidence weighted
Extends ConfidenceWeighted
Cosine interpolation
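As an illustration of the cosine interpolation entry above, here is a minimal standalone sketch; the function name `cosineInterpolate` is hypothetical and not this library's API:

```javascript
// Cosine interpolation between y0 and y1 at fraction t in [0, 1].
// The ramp (1 - cos(pi * t)) / 2 eases smoothly in and out of the endpoints.
function cosineInterpolate(y0, y1, t) {
  const m = (1 - Math.cos(Math.PI * t)) / 2
  return y0 * (1 - m) + y1 * m
}
```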
Conditional random fields
Cubic-convolution interpolation
(number)
Tuning parameterFit model parameters.
Cubic Hermite spline
Cubic interpolation
Cumulative moving average
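The cumulative moving average above replaces each value with the mean of all values seen so far; a minimal sketch (the helper name is hypothetical, not this library's API):

```javascript
// Cumulative moving average: out[i] is the mean of values[0..i].
function cumulativeMovingAverage(values) {
  const out = []
  let sum = 0
  for (let i = 0; i < values.length; i++) {
    sum += values[i]
    out.push(sum / (i + 1))
  }
  return out
}
```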
Cumulative sum change point detection
Type: object
Clustering Using REpresentatives
(number)
Number of representative pointsDiscriminant adaptive nearest neighbor
(number = null
)
Number of neighborhoodsDistribution Based Clustering of LArge Spatial Databases
Density-based spatial clustering of applications with noise
(number = 0.5
)
Radius to determine neighborhood(number = 5
)
Minimum size of cluster(("euclid"
| "manhattan"
| "chebyshev"
) = euclid
)
Metric nameDecision tree
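The DBSCAN entry above takes a neighborhood radius, a minimum cluster size, and a metric. A minimal one-dimensional sketch of the algorithm (the function `dbscan1d` and its toy data are illustrative only, not this library's API; unvisited and noise points share the label -1 for brevity):

```javascript
// DBSCAN sketch for 1-D points: eps is the neighborhood radius,
// minPts the minimum neighborhood size for a core point.
// Returns a cluster label per point (-1 means noise).
function dbscan1d(points, eps, minPts) {
  const labels = new Array(points.length).fill(-1)
  const neighbors = i =>
    points.flatMap((p, j) => (Math.abs(p - points[i]) <= eps ? [j] : []))
  let cluster = 0
  for (let i = 0; i < points.length; i++) {
    if (labels[i] !== -1) continue
    const seeds = neighbors(i)
    if (seeds.length < minPts) continue // not a core point; may stay noise
    // Expand the cluster from the core point's neighborhood.
    for (let s = 0; s < seeds.length; s++) {
      const j = seeds[s]
      if (labels[j] !== -1) continue
      labels[j] = cluster
      const nb = neighbors(j)
      if (nb.length >= minPts) {
        for (const q of nb) if (!seeds.includes(q)) seeds.push(q)
      }
    }
    cluster++
  }
  return labels
}
```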
Decision tree classifier
"ID3"
| "CART"
))Extends DecisionTree
(("ID3"
| "CART"
))
Method nameDecision tree regression
Extends DecisionTree
Delaunay interpolation
Deming regression
(number)
Ratio of variancesDENsity CLUstering
(number)
Smoothing parameter for the kernel((1
| 2
) = 1
)
Version numberDIvisive ANAlysis Clustering
Diffusion map
(number)
Power parameterDeep Q-Network agent
(RLEnvironmentBase)
Environment(number)
Resolution of actions(string)
Optimizer of the networkDQN Method
(("DQN"
| "DDQN"
))
New method nameDynamic programming agent
(RLEnvironmentBase)
Environment(number = 20
)
ResolutionElastic net
(number = 0.1
)
Regularization strength(number = 0.5
)
Mixing parameter(("ISTA"
| "CD"
) = CD
)
Method nameElliptic filter
Extends LowpassFilter
Extended Natural Neighbor
"euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
))(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric nameExtended Nearest Neighbor
0
| 1
| 2
), k: number, metric: ("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
))((0
| 1
| 2
) = 1
)
Version(number = 5
)
Number of neighborhoods(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric nameType: object
Ensemble binary models
Exponential moving average
Modified moving average
Base class for Extremely Randomized Trees
Extra trees classifier
Extends ExtraTrees
Extra trees regressor
Extends ExtraTrees
FastMap
a Fast and INtelligent subspace clustering algorithm using DImension voting
Forgetron
"gaussian"
| "polynomial"
| function (Array<number>, Array<number>): number))(number)
Budget parameterFuzzy c-means
(number = 2
)
Fuzziness factorFuzzy k-nearest neighbor
Generative adversarial networks
""
| "conditional"
))(number)
Number of noise dimension(string)
Optimizer of the generator network(string)
Optimizer of the discriminator network((number | null))
Class size for conditional type((""
| "conditional"
))
Type nameFit model.
(number)
Iteration count(number)
Learning rate for generator(number)
Learning rate for discriminator(number)
Batch size{generatorLoss: number, discriminatorLoss: number}
: Loss valueGasser–Müller kernel estimator
(number)
Smoothing parameter for the kernelGaussian process
"gaussian"
, beta: number)("gaussian"
= gaussian
)
Kernel name(number = 1
)
Precision parameterGradient boosting decision tree
(number = 1
)
Maximum depth of tree(number = 1.0
)
Sampling rate(number = 0
)
Learning rateGradient boosting decision tree classifier
Extends GBDT
(number = 1
)
Maximum depth of tree(number = 1.0
)
Sampling rate(number = 0
)
Learning rateGeneralized extreme studentized deviate
Type: object
(function (...any): void)
: Run model(function (): GeneticModel)
: Returns mutated model(function (GeneticModel): GeneticModel)
: Returns mixed model(function (): number)
: Returns a number how good the model isGenetic algorithm
(number)
Number of models per generation(any)
Models
Type: Array<GeneticModel>
The best model.
GeneticModel
: Best modelRun for all models.
(...any)
Arguments for runGenetic algorithm generation
(RLEnvironmentBase)
Environment(number = 100
)
Number of models per generation(number = 20
)
ResolutionReset all agents.
Returns the best score agent.
GeneticAlgorithmAgent
: Best agentRun for all agents.
Genetic k-means model
G-means
Gaussian mixture model
Semi-Supervised gaussian mixture model
Extends GMM
Gaussian mixture regression
Extends GMM
Gaussian Process Latent Variable Model
"gaussian"
, kernelArgs: Array<any>?)(number)
Reduced dimension(number)
Precision parameter(number = 1.0
)
Learning rate for z(number = 0.005
)
Learning rate for alpha(number = 0.2
)
Learning rate for kernel("gaussian"
= gaussian
)
Kernel name(Array<any>? = []
)
Arguments for kernelGrowing cell structures
Growing neural gas
Growing Self-Organizing Map
Generative topographic mapping
(number)
Input size(number)
Output size(number = 20
)
Grid size(number = 10
)
Grid size for basis functionHampel filter
Hierarchical Density-based spatial clustering of applications with noise
(number = 5
)
Minimum number of clusters to be recognized as a cluster(number = 5
)
Number of neighborhood with core distance(("euclid"
| "manhattan"
| "chebyshev"
) = euclid
)
Metric nameHistogram
(object? = {}
)
ConfigHessian Locally Linear Embedding
(number = 1
)
Number of neighborhoodsHidden Markov model
(number)
Number of statesHidden Markov model
Extends HMMBase
(number)
Number of statesContinuous hidden Markov model
Extends HMMBase
(number)
Number of statesHolt-Winters method
(number)
Weight for last value(number = 0
)
Weight for trend value(number = 0
)
Weight for seasonal data(number = 0
)
Length of seasonHopfield network
Hotelling T-square Method
Huber regression
(number = 1.35
)
Threshold of outliers(("rls"
| "gd"
) = rls
)
Method name(number = 1
)
Learning rateIndependent component analysis
Classical ellipsoid method
Improved ellipsoid method
(number = 0.9
)
Parameter controlling the memory of online learning(number = 0.5
)
Parameter controlling the memory of online learningLocally Informative K-Nearest Neighbor
Incremental principal component analysis
(number = 0.95
)
Forgetting factorInfluenced Outlierness
(number)
Number of neighborhoodsInverse distance weighting
"euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
))(number = 5
)
Number of neighborhoods(number = 2
)
Power parameter(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric nameInverse smoothstep interpolation
Iterative Self-Organizing Data Analysis Technique
(number)
Initial cluster count(number)
Minimum cluster count(number)
Maximum cluster count(number)
Minimum cluster size(number)
Standard deviation as split threshold(number)
Merge distanceIsolation forest
Isomap
(number = 0
)
Number of neighborhoodsIsotonic regression
Kalman filter
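A minimal one-dimensional Kalman filter sketch, smoothing a noisy constant signal; the function name and the noise-variance defaults are illustrative assumptions, not this library's API:

```javascript
// 1-D Kalman filter for a roughly constant signal observed with noise.
// q: process noise variance, r: measurement noise variance (assumed values).
function kalman1d(measurements, q = 1e-4, r = 1) {
  let x = measurements[0] // state estimate
  let p = 1               // estimate variance
  const out = [x]
  for (let i = 1; i < measurements.length; i++) {
    p += q                         // predict: variance grows by process noise
    const k = p / (p + r)          // Kalman gain
    x += k * (measurements[i] - x) // update toward the new measurement
    p *= 1 - k
    out.push(x)
  }
  return out
}
```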
Kernel Density Estimation Outlier Score
"gaussian"
| "epanechnikov"
| function (number, number, number): number))Kernel density estimator
"gaussian"
| "rectangular"
| "triangular"
| "epanechnikov"
| "biweight"
| "triweight"
| function (number): number))(number = 0
)
Smoothing parameter for the kernelKernel k-means
(number = 3
)
Number of clustersKernelized Primal Estimated sub-GrAdientSOlver for SVM
"gaussian"
| "polynomial"
| function (Array<number>, Array<number>): number))(number)
Learning rateKernelized perceptron
"gaussian"
| "polynomial"
| function (Array<number>, Array<number>): number))(number = 1
)
Learning rateKullback-Leibler importance estimation procedure
Base class for k-means like model
k-means model
Extends KMeansBase
k-means++ model
Extends KMeans
k-medoids model
Extends KMeans
k-medians model
Extends KMeans
semi-supervised k-means model
Extends KMeansBase
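The k-means family above all follow Lloyd's alternation of assignment and mean-update steps; a minimal one-dimensional sketch with given initial centers (illustrative only, not this library's API):

```javascript
// Lloyd's algorithm for k-means on 1-D data with given initial centers.
function kmeans1d(data, centers, iterations = 20) {
  const c = centers.slice()
  for (let t = 0; t < iterations; t++) {
    // Assignment step: attach each point to its nearest center.
    const sums = c.map(() => 0)
    const counts = c.map(() => 0)
    for (const x of data) {
      let best = 0
      for (let j = 1; j < c.length; j++) {
        if (Math.abs(x - c[j]) < Math.abs(x - c[best])) best = j
      }
      sums[best] += x
      counts[best]++
    }
    // Update step: move each center to the mean of its assigned points.
    for (let j = 0; j < c.length; j++) {
      if (counts[j] > 0) c[j] = sums[j] / counts[j]
    }
  }
  return c
}
```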
k-modes model
Base class for k-nearest neighbor models
(number = 5
)
Number of neighborhoods(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric namek-nearest neighbor
Extends KNNBase
(number = 5
)
Number of neighborhoods(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric namek-nearest neighbor regression
Extends KNNBase
(number = 5
)
Number of neighborhoods(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric namek-nearest neighbor anomaly detection
Extends KNNBase
(number = 5
)
Number of neighborhoods(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric namek-nearest neighbor density estimation
Extends KNNBase
(number = 5
)
Number of neighborhoods(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric nameSemi-supervised k-nearest neighbor
Extends KNNBase
(number = 5
)
Number of neighborhoods(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric namek-prototypes model
(number)
Weight for categorical datak-SVD
Kolmogorov–Zurbenko filter
Label propagation
(("rbf"
| "knn"
) = rbf
)
Method name(number = 0.1
)
Sigma of normal distribution(number = Infinity
)
Number of neighborhoodsLabel spreading
(number = 0.2
)
Clamping factor(("rbf"
| "knn"
) = rbf
)
Method name(number = 0.1
)
Sigma of normal distribution(number = Infinity
)
Number of neighborhoodsLadder network
Fit model.
(Array<(any | null)>)
Target values(number)
Iteration count(number)
Learning rate(number)
Batch size{labeledLoss: number, unlabeledLoss: number}
: Loss valueLagrange interpolation
"weighted"
| "newton"
| ""
))(("weighted"
| "newton"
| ""
) = weighted
)
Method nameLanczos interpolation
(number)
OrderFit model parameters.
Laplacian edge detection
(number)
Threshold((4
| 8
) = 4
)
Number of neighborhoodsLaplacian eigenmaps
"rbf"
| "knn"
), k: number, sigma: number, laplacian: ("unnormalized"
| "normalized"
))(("rbf"
| "knn"
) = rbf
)
Affinity type name(number = 10
)
Number of neighborhoods(number = 1
)
Sigma of normal distribution(("unnormalized"
| "normalized"
) = unnormalized
)
Normalized laplacian matrix or notLeast absolute shrinkage and selection operator
(number = 1.0
)
Regularization strength(("CD"
| "ISTA"
| "LARS"
) = CD
)
Method nameLatent dirichlet allocation
(number = 2
)
Topic countLinde-Buzo-Gray algorithm
Linear discriminant analysis
Fisher's linear discriminant analysis
Multiclass linear discriminant analysis
Linear discriminant analysis
Local Density Factor
(number)
Number of neighborhoodsLocal Distance-based Outlier Factor
(number)
Number of neighborhoodsLeast absolute deviations
Least squares
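For the least squares entry above, a minimal single-feature sketch of the closed-form fit y ≈ a·x + b (the helper name is hypothetical, not this library's API):

```javascript
// Ordinary least squares for one feature: returns slope a and intercept b
// minimizing the sum of squared residuals of y ≈ a * x + b.
function leastSquares(x, y) {
  const n = x.length
  const mx = x.reduce((s, v) => s + v, 0) / n
  const my = y.reduce((s, v) => s + v, 0) / n
  let sxy = 0
  let sxx = 0
  for (let i = 0; i < n; i++) {
    sxy += (x[i] - mx) * (y[i] - my)
    sxx += (x[i] - mx) ** 2
  }
  const a = sxy / sxx
  return { a, b: my - a * mx }
}
```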
Linear interpolation
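The linear interpolation entry above connects successive samples with straight segments; a minimal sketch over sorted sample points (illustrative only, not this library's API):

```javascript
// Piecewise linear interpolation at x over strictly increasing xs
// with corresponding values ys; clamps outside the sampled range.
function linearInterpolate(xs, ys, x) {
  if (x <= xs[0]) return ys[0]
  if (x >= xs[xs.length - 1]) return ys[ys.length - 1]
  let i = 1
  while (xs[i] < x) i++
  const t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
  return ys[i - 1] * (1 - t) + ys[i] * t
}
```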
Locally Linear Embedding
(number = 1
)
Number of neighborhoodsLeast median squares regression
(number = 5
)
Sampling countLarge Margin Nearest Neighbor
Local Correlation Integral
(number = 0.5
)
AlphaLocally estimated scatterplot smoothing
Local Outlier Factor
(number)
Number of neighborhoodsLaplacian of gaussian filter
(number)
ThresholdLogarithmic interpolation
Logistic regression
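A minimal single-feature sketch of logistic regression trained by batch gradient descent, for the entry above; the function names, learning rate, and iteration count are illustrative assumptions, not this library's API:

```javascript
function sigmoid(z) {
  return 1 / (1 + Math.exp(-z))
}

// Logistic regression for one feature with labels in {0, 1},
// fit by batch gradient descent on the log loss.
function logisticRegression(x, y, rate = 0.5, iterations = 2000) {
  let w = 0
  let b = 0
  for (let t = 0; t < iterations; t++) {
    let gw = 0
    let gb = 0
    for (let i = 0; i < x.length; i++) {
      const e = sigmoid(w * x[i] + b) - y[i] // prediction error
      gw += e * x[i]
      gb += e
    }
    w -= (rate * gw) / x.length
    b -= (rate * gb) / x.length
  }
  return { w, b, predict: v => (sigmoid(w * v + b) >= 0.5 ? 1 : 0) }
}
```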
Multinomial logistic regression
Local Outlier Probability
(number)
Number of neighborhoodsLocally weighted scatter plot smooth
Lowpass filter
(number = 0.5
)
Cutoff rateLp norm linear regression
(number = 2
)
Power parameter for normLatent Semantic Analysis
Least-squares density difference
LSDD for change point detection
least-squares importance fitting
Least trimmed squares
(number = 0.9
)
Sampling rateLocal Tangent Space Alignment
(number = 1
)
Number of neighborhoodsLearning Vector Quantization clustering
(number)
Number of clustersLearning Vector Quantization classifier
1
| 2
| 3
))((1
| 2
| 3
))
Type numberMedian Absolute Deviation
Many Adaptive Linear Neuron model
Margin Perceptron
(number)
Learning rateMarkov switching
(number)
Number of regimesMax absolute scaler
Maximum likelihood estimator
"normal"
)("normal"
= normal
)
Distribution nameMinimum Covariance Determinant
Mixture discriminant analysis
(number)
Number of componentsMulti-dimensional Scaling
Mean shift
(number)
Smoothing parameter for the kernelMetropolis-Hastings algorithm
(number)
Output size("gaussian"
= gaussian
)
Proposal density nameMin-max normalization
Margin Infused Relaxed Algorithm
Modified Locally Linear Embedding
(number = 1
)
Number of neighborhoodsMulti layer perceptron classifier
Multi layer perceptron regressor
Method of Optimal Direction
MONothetic Analysis Clustering
Monothetic Clustering
Monte Carlo agent
(RLEnvironmentBase)
Environment(number = 20
)
ResolutionMountain method
Simple moving average
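For the simple moving average above, a minimal sliding-window sketch (the helper name is hypothetical, not this library's API):

```javascript
// Simple moving average with window size k; each output is the mean
// of the most recent k values, so the result has length n - k + 1.
function simpleMovingAverage(values, k) {
  const out = []
  let sum = 0
  for (let i = 0; i < values.length; i++) {
    sum += values[i]
    if (i >= k) sum -= values[i - k]
    if (i >= k - 1) out.push(sum / k)
  }
  return out
}
```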
Linear weighted moving average
Triangular moving average
Moving median
Mahalanobis Taguchi method
Mutual information feature selector
Mutual k-nearest-neighbor model
(number = 5
)
Number of neighborhoodsn-cubic interpolation
n-linear interpolation
Nadaraya–Watson kernel regression
(number?)
Sigmas of normal distributionNaive bayes
"gaussian"
)("gaussian"
= gaussian
)
Distribution nameNarrow Adaptive Regularization Of Weights
(number = 1
)
Tuning parameterNatural neighbor interpolation
Neighbourhood components analysis
Nearest centroid classifier
"euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
))(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric nameNegation Naive bayes
"gaussian"
)("gaussian"
= gaussian
)
Distribution nameNeural gas model
Exception for neuralnetwork class
Extends Error
(string)
Error message(any)
Some valueNeuralnetwork
(ComputationalGraph)
Graph of a network(("sgd"
| "adam"
| "momentum"
| "rmsprop"
) = sgd
)
Optimizer of the networkReturns neuralnetwork.
"sgd"
| "adam"
| "momentum"
| "rmsprop"
)): NeuralNetwork(Array<LayerObject>)
Network layers(string?)
Loss name(("sgd"
| "adam"
| "momentum"
| "rmsprop"
) = sgd
)
Optimizer of the networkNeuralNetwork
: Created NeuralnetworkLoad onnx model.
((Uint8Array | ArrayBuffer | File))
FilePromise<NeuralNetwork>
: Loaded NeuralNetworkReturns a copy of this.
NeuralNetwork
: Copied networkReturns calculated values.
(Matrix | Object<string, Matrix>)
: Calculated valuesFit model.
(number = 1
)
Iteration count(number = 0.1
)
Learning rate(number? = null
)
Batch size(object? = {}
)
OptionArray<number>
: Loss valueException for neuralnetwork layer class
Extends Error
(string)
Error message(any)
Some valueNeuralnetwork layer
(object)
ConfigBase class for Flow-based generative model
Extends Layer
Type: object
Computational graph for Neuralnetwork structure
Returns Graph.
(Array<LayerObject>)
Array of object represented a graphComputationalGraph
: GraphLoad onnx model.
((Uint8Array | ArrayBuffer | File))
FilePromise<ComputationalGraph>
: Loaded graphGraph nodes
Input nodes
Output nodes
Additive coupling layer
Extends FlowLayer
Adaptive piecewise linear layer
Extends Layer
Aranda layer
Extends Layer
Argmax layer
Extends Layer
Argmin layer
Extends Layer
Attention layer
Extends Layer
Average pool layer
Extends Layer
Batch normalization layer
Extends Layer
Bimodal derivative adaptive activation layer
Extends Layer
Bendable linear unit layer
Extends Layer
Bounded ReLU layer
Extends Layer
Continuously differentiable ELU layer
Extends Layer
Clip layer
Extends Layer
Concat layer
Extends Layer
Condition layer
Extends Layer
Constant layer
Extends Layer
Convolutional layer
Extends Layer
(Object)
Name | Description |
---|---|
$0.kernel any | |
$0.channel any (default null ) | |
$0.stride any (default null ) | |
$0.padding any (default null ) | |
$0.w any (default null ) | |
$0.activation any (default null ) | |
$0.l2_decay any (default 0 ) | |
$0.l1_decay any (default 0 ) | |
$0.activation_params any (default {} ) | |
$0.channel_dim any (default -1 ) | |
$0.rest ...any |
(object)
objectConcatenated ReLU layer
Extends Layer
Dropout layer
Extends Layer
Elastic ELU layer
Extends Layer
ELU layer
Extends Layer
Embedding layer
Extends Layer
Elastic ReLU layer
Extends Layer
(Object)
Name | Description |
---|---|
$0.rest ...any |
E-swish layer
Extends Layer
Fast ELU layer
Extends Layer
Flatten layer
Extends Layer
Flexible ReLU layer
Extends Layer
Fully connected layer
Extends Layer
Gaussian layer
Extends Layer
Global average pool layer
Extends Layer
Global Lp pool layer
Extends Layer
Global max pool layer
Extends Layer
GRU layer
Extends Layer
(Object)
Name | Description |
---|---|
$0.size any | |
$0.return_sequences any (default false ) | |
$0.w_z any (default null ) | |
$0.w_r any (default null ) | |
$0.w_h any (default null ) | |
$0.u_z any (default null ) | |
$0.u_r any (default null ) | |
$0.u_h any (default null ) | |
$0.b_z any (default null ) | |
$0.b_r any (default null ) | |
$0.b_h any (default null ) | |
$0.rest ...any |
(object)
objectHard shrink layer
Extends Layer
Hard sigmoid layer
Extends Layer
Hard tanh layer
Extends Layer
Hexpo layer
Extends Layer
Huber loss layer
Extends Layer
Include layer
Extends Layer
Input layer
Extends Layer
Improved sigmoid layer
Extends Layer
Layer normalization layer
Extends Layer
Leaky ReLU layer
Extends Layer
Log softmax layer
Extends Layer
Lp pool layer
Extends Layer
LRN layer
Extends Layer
LSTM layer
Extends Layer
(Object)
Name | Description |
---|---|
$0.size any | |
$0.return_sequences any (default false ) | |
$0.w_z any (default null ) | |
$0.w_in any (default null ) | |
$0.w_for any (default null ) | |
$0.w_out any (default null ) | |
$0.r_z any (default null ) | |
$0.r_in any (default null ) | |
$0.r_for any (default null ) | |
$0.r_out any (default null ) | |
$0.p_in any (default null ) | |
$0.p_for any (default null ) | |
$0.p_out any (default null ) | |
$0.b_z any (default null ) | |
$0.b_in any (default null ) | |
$0.b_for any (default null ) | |
$0.b_out any (default null ) | |
$0.rest ...any |
(object)
objectMatrix multiply layer
Extends Layer
Max pool layer
Extends Layer
Reduce mean layer
Extends Layer
Multiple parametric ELU layer
Extends Layer
MSE loss layer
Extends Layer
Multibin trainable linear unit layer
Extends Layer
Natural logarithm ReLU layer
Extends Layer
One-hot layer
Extends Layer
Output layer
Extends Layer
Pade activation unit layer
Extends Layer
Parametric deformable ELU layer
Extends Layer
Parametric ELU layer
Extends Layer
Piecewise linear unit layer
Extends Layer
Parametric ReLU layer
Extends Layer
Parametric rectified exponential unit layer
Extends Layer
Reduce product layer
Extends Layer
Parametric sigmoid function layer
Extends Layer
Penalized tanh layer
Extends Layer
Parametric tanh linear unit layer
Extends Layer
Random layer
Extends Layer
Reduce max layer
Extends Layer
Reduce min layer
Extends Layer
Rectified power unit layer
Extends Layer
Reshape layer
Extends Layer
Simple RNN layer
Extends Layer
(Object)
Name | Description |
---|---|
$0.size any | |
$0.out_size any (default null ) | |
$0.activation any (default 'tanh' ) | |
$0.recurrent_activation any (default 'sigmoid' ) | |
$0.return_sequences any (default false ) | |
$0.w_xh any (default null ) | |
$0.w_hh any (default null ) | |
$0.w_hy any (default null ) | |
$0.b_xh any (default null ) | |
$0.b_hh any (default null ) | |
$0.b_hy any (default null ) | |
$0.activation_params any (default {} ) | |
$0.recurrent_activation_params any (default {} ) | |
$0.rest ...any |
(object)
objectRandomized ReLU layer
Extends Layer
Random translation ReLU layer
Extends Layer
(Object)
Name | Description |
---|---|
$0.rest ...any |
Scaled ELU layer
Extends Layer
Sigmoid layer
Extends Layer
Self learnable AF layer
Extends Layer
Softplus linear unit layer
Extends Layer
Soft shrink layer
Extends Layer
Softargmax layer
Extends Layer
Softmax layer
Extends Layer
Softmin layer
Extends Layer
Softplus layer
Extends Layer
Sparse layer
Extends Layer
Split layer
Extends Layer
Shifted ReLU layer
Extends Layer
Soft root sign layer
Extends Layer
Scaled tanh layer
Extends Layer
Standard deviation layer
Extends Layer
Reduce sum layer
Extends Layer
Supervisor layer
Extends Layer
Swish layer
Extends Layer
Trainable AF layer
Extends Layer
Thresholded ReLU layer
Extends Layer
Transpose layer
Extends Layer
Variable layer
Extends Layer
Variance layer
Extends Layer
Niblack thresholding
Flow-based generative model non-linear independent component estimation
Reverse layer
Extends Layer
Non-local means filter
Non-negative matrix factorization
Natural Neighborhood Based Classification Algorithm
"euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
))(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric nameNormal Herd
(("full"
| "exact"
| "project"
| "drop"
) = exact
)
Method name(number = 0.1
)
Tradeoff value between passiveness and aggressivenessOne-class support vector machine
"gaussian"
| "linear"
| function (Array<number>, Array<number>): number), kernelArgs: Array<any>?)Outlier Detection using Indegree Number
Online gradient descent
"zero_one"
)(number = 1
)
Tuning parameter("zero_one"
= zero_one
)
Loss type nameOrdering points to identify the clustering structure
(number = Infinity
)
Radius to determine neighborhood(number = 5
)
Number of neighborhood with core distance(("euclid"
| "manhattan"
| "chebyshev"
) = euclid
)
Metric namearbitrarily ORiented projected CLUSter generation
Otsu's thresholding
Partitioning Around Medoids
(number)
Number of clustersParticle filter
Passing-Bablok method
Passive Aggressive
0
| 1
| 2
))((0
| 1
| 2
) = 0
)
Version numberPerceptron Algorithm with Uneven Margins
Principal component analysis
Dual Principal component analysis
Kernel Principal component analysis
"gaussian"
| "polynomial"
| function (Array<number>, Array<number>): number), kernelArgs: Array<any>?)(Array<any>? = []
)
Arguments for kernelPrincipal component analysis for anomaly detection
Extends PCA
Possibilistic c-means
(number = 2
)
Fuzziness factorPrincipal component regression
Primal Estimated sub-GrAdientSOlver for SVM
Percentile anomaly detection
(number)
Percentile value(("data"
| "normal"
) = data
)
Distribution namePerceptron
(number)
Learning rateAveraged perceptron
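The perceptron variants listed here share the classic mistake-driven update: on a misclassified point, add the learning rate times the signed input to the weights. A minimal 2-D sketch (illustrative only, not this library's API):

```javascript
// Perceptron on 2-D inputs with labels in {-1, +1}.
// Returns a prediction function for the learned linear boundary.
function trainPerceptron(xs, ys, rate = 1, epochs = 100) {
  const w = [0, 0]
  let b = 0
  for (let t = 0; t < epochs; t++) {
    for (let i = 0; i < xs.length; i++) {
      const score = w[0] * xs[i][0] + w[1] * xs[i][1] + b
      if (ys[i] * score <= 0) { // misclassified: move the boundary
        w[0] += rate * ys[i] * xs[i][0]
        w[1] += rate * ys[i] * xs[i][1]
        b += rate * ys[i]
      }
    }
  }
  return x => (w[0] * x[0] + w[1] * x[1] + b >= 0 ? 1 : -1)
}
```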
(number)
Learning rateMulticlass perceptron
(number)
Learning ratePhansalkar thresholding
(number = 3
)
Size of local range(number = 0.25
)
Tuning parameter(number = 0.5
)
Tuning parameter(number = 2
)
Tuning parameter(number = 10
)
Tuning parameterPartial least squares regression
(number)
Limit on the number of latent factorsProbabilistic latent semantic analysis
(number = 2
)
Number of clustersPoisson regression
(number)
Learning ratePolicy gradient agent
(RLEnvironmentBase)
Environment(number = 20
)
ResolutionPolynomial histogram
Polynomial interpolation
Projection pursuit regression
(number = 5
)
Number of functionsPrewitt edge detection
(number)
ThresholdPriestley–Chao kernel estimator
(number)
Smoothing parameter for the kernelPrincipal curves
Probabilistic Principal component analysis
(("analysis"
| "em"
| "bayes"
) = analysis
)
Method name(number)
Reduced dimensionType: object
Probability based classifier
(any)
Probit
Multinomial probit
Extends Probit
PROjected CLUStering algorithm
(number)
Number of clusters(number)
Number to multiply the number of clusters for sample size(number)
Number to multiply the number of clusters for final set size(number)
Average dimensions(number = 0.1
)
Minimum deviation to check the medoid is badProjectron
"gaussian"
| "polynomial"
| function (Array<number>, Array<number>): number))(number = 0
)
ThresholdProjectron++
"gaussian"
| "polynomial"
| function (Array<number>, Array<number>): number))(number = 0
)
ThresholdP-tile thresholding
(number = 0.5
)
Percentile valueBase class for Q-table
(RLEnvironmentBase)
Environment(number = 20
)
ResolutionStates
Type: Array<(Array<any> | RLRealRange | RLIntRange)>
Actions
Type: Array<(Array<any> | RLRealRange | RLIntRange)>
Q-learning agent
(RLEnvironmentBase)
Environment(number = 20
)
ResolutionQuadratic discriminant analysis
Quantile regression
(number = 0.5
)
Quantile valueBase class for radius neighbor models
(number = 1
)
Radius to determine neighborhood(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric nameradius neighbor
Extends RadiusNeighborBase
(number = 1
)
Radius to determine neighborhood(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric nameradius neighbor regression
"euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
))Extends RadiusNeighborBase
(number = 1
)
Radius to determine neighborhood(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric nameSemi-supervised radius neighbor
"euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
))Extends RadiusNeighborBase
(number = 5
)
Radius to determine neighborhood(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric nameRamer-Douglas-Peucker algorithm
(number = 0.1
)
Threshold of distanceBase class for random forest models
(number)
Number of trees(number = 0.8
)
Sampling rate((DecisionTreeClassifier | DecisionTreeRegression))
Tree class(Array<any>? = null
)
Arguments for constructor of tree classRandom forest classifier
Extends RandomForest
(number)
Number of trees(number = 0.8
)
Sampling rate(("ID3"
| "CART"
) = CART
)
Method nameRandom forest regressor
Extends RandomForest
Random projection
"uniform"
| "root3"
| "normal"
))(("uniform"
| "root3"
| "normal"
) = uniform
)
Initialize method nameType: object
Random sample consensus
(any)
((number | null) = null
)
Sampling rateRadial basis function network
"linear"
| "gaussian"
| "multiquadric"
| "inverse quadratic"
| "inverse multiquadric"
| "thin plate"
| "bump"
), e: number, l: number)(("linear"
| "gaussian"
| "multiquadric"
| "inverse quadratic"
| "inverse multiquadric"
| "thin plate"
| "bump"
) = linear
)
RBF name(number = 1
)
Tuning parameter(number = 0
)
Regularization parameterRestricted Boltzmann machine
Gaussian-Bernoulli Restricted Boltzmann machine
(number)
Size of hidden layer(number = 0.01
)
Learning rate(boolean = false
)
Do not learn sigma or notRandomized Budget Perceptron
(number)
Number of support vectorsRelative Density Factor
(number = 1.0
)
RadiusRelative Density-based Outlier Score
Ridge regression
(number = 0.1
)
Regularization strengthKernel ridge regression
"gaussian"
| function (Array<number>, Array<number>): number))(number = 0.1
)
Regularization strengthRobust Kernel-based Outlier Factor
"gaussian"
| "epanechnikov"
| "volcano"
| function (Array<number>): number))(number)
Number of neighborhoods(number)
Smoothing parameter(number)
Sensitivity parameterRecursive least squares
Repeated median regression
Recurrent neuralnetwork
"rnn"
| "lstm"
| "gru"
), window: number, unit: number, out_size: number, optimizer: string)(("rnn"
| "lstm"
| "gru"
) = lstm
)
Method name(number = 10
)
Window size(number = 10
)
Size of recurrent unit(number = 1
)
Output size(string = adam
)
Optimizer of the networkMethod
Type: ("rnn"
| "lstm"
| "gru"
)
Roberts cross
(number)
ThresholdRobust scaler
Type: object
RObust Clustering using linKs
(number)
ThresholdRelaxed Online Maximum Margin Algorithm
Aggressive Relaxed Online Maximum Margin Algorithm
Extends ROMMA
Relevance vector machine
Semi-Supervised Support Vector Machine
"gaussian"
| "linear"
| function (Array<number>, Array<number>): number), kernelArgs: Array<any>?)(Array<any>? = []
)
Arguments for kernelSammon mapping
SARSA agent
(RLEnvironmentBase)
Environment(number = 20
)
ResolutionSauvola thresholding
Savitzky-Golay filter
(number)
Number of coefficientsSequentially Discounting Autoregressive model
Segmented regression
(number = 3
)
Number of segmentsSelective Naive bayes
"gaussian"
)("gaussian"
= gaussian
)
Distribution nameSelective sampling Perceptron
Selective sampling Perceptron with adaptive parameter
Selective sampling second-order Perceptron
(number)
Smooth parameterSelective sampling Winnow
Self-training
Semi-supervised naive bayes
(number = 1
)
Weight applied to the contribution of the unlabeled dataSezan's thresholding
(number = 0.5
)
Tradeoff value between black and white(number = 5
)
Sigma of normal distributionShifting Perceptron Algorithm
(number)
Rate of weight decayImplicit online Learning with Kernels
"gaussian"
| "polynomial"
| function (Array<number>, Array<number>): number), loss: ("square"
| "hinge"
| "logistic"
))(number = 1
)
Learning rate(number = 1
)
Regularization constant(number = 1
)
Penalty imposed on point prediction violations.(("square"
| "hinge"
| "logistic"
) = hinge
)
Loss type nameSparse Implicit online Learning with Kernels
"gaussian"
| "polynomial"
| function (Array<number>, Array<number>): number), loss: ("square"
| "hinge"
| "graph"
| "logistic"
))Extends ILK
(number = 1
)
Learning rate(number = 1
)
Regularization constant(number = 1
)
Penalty imposed on point prediction violations.(number = 10
)
Buffer size(("square"
| "hinge"
| "graph"
| "logistic"
) = hinge
)
Loss type nameSinc interpolation
Fit model parameters.
Sliced inverse regression
(number)
Number of slicesSpherical linear interpolation
(number = 1
)
Angle subtended by the arcSlice sampling
Standardized Major Axis regression
Smirnov-Grubbs test
(number)
Significance levelSmoothstep interpolation
(number = 1
)
OrderSnakes (active contour model)
(number)
Penalty for length(number)
Penalty for curvature(number)
Penalty for conformity with image(number = 100
)
Number of verticesSobel edge detection
(number)
ThresholdSoft k-means
(number = 1
)
Tuning parameterSelf-Organizing Map
(number)
Input size(number)
Output size(number = 20
)
Resolution of outputSecond order perceptron
(number = 1
)
Tuning parameterSpectral clustering
(("rbf"
| "knn"
) = rbf
)
Affinity type name(object = {}
)
ConfigAdd a new cluster.
Clear all clusters.
Spline smoothing
(number)
Smoothing parameterSpline interpolation
Split and merge segmentation
(("variance"
| "uniformity"
) = variance
)
Method name(number = 0.1
)
ThresholdSquared-loss Mutual information change point detection
(object)
Density ratio estimation model(number)
Window size(number?)
Take number(number?)
LagSingular-spectrum transformation
Standardization
(number = 0
)
Delta Degrees of FreedomStatistical Region Merging
(number)
ThresholdSTatistical INformation Grid-based method
Stoptron
"gaussian"
| "polynomial"
| function (Array<number>, Array<number>): number))(number = 10
)
Cachs sizeSupport vector clustering
"gaussian"
| "linear"
| function (Array<number>, Array<number>): number), kernelArgs: Array<any>?)(Array<any>? = []
)
Arguments for kernelSupport vector machine
"gaussian"
| "linear"
| function (Array<number>, Array<number>): number), kernelArgs: Array<any>?)(Array<any>? = []
)
Arguments for kernelSupport vector regression
"gaussian"
| "linear"
| function (Array<number>, Array<number>): number), kernelArgs: Array<any>?)(Array<any>? = []
)
Arguments for kernelTheil-Sen regression
Thompson test
(number)
Significance levelTietjen-Moore Test
(number)
Number of outliersTighter Budget Perceptron
(number = 0
)
Margine(number = 0
)
Cache size(("perceptron"
| "mira"
| "nobias"
) = perceptron
)
Update ruleTightest Perceptron
"gaussian"
| "polynomial"
| function (Array<number>, Array<number>): number), accuracyLoss: ("zero_one"
| "hinge"
))Trigonometric interpolation
Stochastic Neighbor Embedding
T-distributed Stochastic Neighbor Embedding
Tukey regression
(number)
Error toleranceTukey's fences
(number)
Tuning parameterRelative unconstrained Least-Squares Importance Fitting
unconstrained Least-Squares Importance Fitting
Extends RuLSIF
Uniform Manifold Approximation and Projection
(number)
Reduced dimension(number = 10
)
Number of neighborhoods(number = 0.1
)
Minimum distanceUniversal-set Naive bayes
"gaussian"
)("gaussian"
= gaussian
)
Distribution nameVariational Autoencoder
""
| "conditional"
))(number)
Input size(number)
Number of noise dimension(string)
Optimizer of the network((number | null))
Class size for conditional type((""
| "conditional"
))
Type nameVector Autoregressive model
(number)
OrderVariational Gaussian Mixture Model
Voted-perceptron
(number = 1
)
Learning rateWeighted k-means model
(number)
Tuning parameterWeighted K-Nearest Neighbor
"euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
), weight: ("gaussian"
| "rectangular"
| "triangular"
| "epanechnikov"
| "quartic"
| "triweight"
| "cosine"
| "inversion"
))(number)
Number of neighbors(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric name(("gaussian"
| "rectangular"
| "triangular"
| "epanechnikov"
| "quartic"
| "triweight"
| "cosine"
| "inversion"
) = gaussian
)
Weighting scheme nameWeighted least squares
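The weighted k-nearest-neighbor scheme above — choose a metric, then weight each neighbor's vote by a kernel of its distance — can be sketched in plain JavaScript (an illustration under assumed semantics, not the library's WeightedKNN class; all names are hypothetical):

```javascript
// Minimal weighted k-NN classifier sketch (hypothetical names, not the library API).
// Metric: Euclidean distance; weighting scheme: Gaussian kernel of the distance.
const euclid = (a, b) => Math.sqrt(a.reduce((s, v, i) => s + (v - b[i]) ** 2, 0))
const gaussian = d => Math.exp(-(d ** 2) / 2)

function weightedKnnPredict(x, data, labels, k = 3) {
    // Sort training points by distance to x and keep the k nearest.
    const neighbors = data
        .map((p, i) => ({ d: euclid(x, p), y: labels[i] }))
        .sort((a, b) => a.d - b.d)
        .slice(0, k)
    // Accumulate kernel-weighted votes per class.
    const votes = {}
    for (const { d, y } of neighbors) {
        votes[y] = (votes[y] ?? 0) + gaussian(d)
    }
    // Return the class with the largest total weight.
    return Object.keys(votes).reduce((best, y) => (votes[y] > votes[best] ? y : best))
}
```

Swapping `euclid` for Manhattan or Chebyshev distance, or `gaussian` for a rectangular/triangular kernel, reproduces the other metric and weighting options named above.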
Winnow
(number = 2
)
Learning rate(number? = null
)
Threshold((1
| 2
) = 1
)
Version of modelWord2Vec
"CBOW"
| "skip-gram"
), n: number, wordsOrNumber: (number | Array<string>), reduce_size: number, optimizer: string)(("CBOW"
| "skip-gram"
))
Method name(number)
Number of adjacent words to learn(number)
Reduced dimension(string)
Optimizer of the networkeXtreme Gradient Boosting regression
(number = 1
)
Maximum depth of tree(number = 1.0
)
Sampling rate(number = 0.1
)
Regularization parameter(number = 0.5
)
Learning rateeXtreme Gradient Boosting classifier
Extends XGBoost
(number = 1
)
Maximum depth of tree(number = 1.0
)
Sampling rate(number = 0.1
)
Regularization parameter(number = 0
)
Learning ratex-means
Yeo-Johnson power transformation
(number? = null
)
LambdaZero-inflated poisson
Zero-truncated poisson
Acrobot environment
Extends RLEnvironmentBase
Real number range state/action
Integer number range state/action
Base class for reinforcement learning environment
(Array<(Array<any> | RLRealRange | RLIntRange)>)
: Action variables(Array<(Array<any> | RLRealRange | RLIntRange)>)
: State variablesReturns cloned environment.
RLEnvironmentBase
: Cloned environmentClose environment.
Reset environment.
Do action without changing the environment and return the new state.
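The base-environment contract described above — declared action/state variables, clone, reset, and a side-effect-free test step — can be sketched as a tiny class (an illustration under assumed semantics, not the library's RLEnvironmentBase; `TinyEnv` and its members are hypothetical):

```javascript
// Minimal reinforcement-learning environment sketch (hypothetical, not the library class).
class TinyEnv {
    constructor() {
        this.actions = [[-1, 1]] // action variables (one real-valued range)
        this.states = [[-10, 10]] // state variables (one real-valued range)
        this.reset()
    }

    // Reset environment to its initial state.
    reset() {
        this._state = [0]
        return this._state.slice()
    }

    // Returns cloned environment.
    clone() {
        const env = new TinyEnv()
        env._state = this._state.slice()
        return env
    }

    // "Do action without changing environment": run the step on a clone.
    test(action) {
        return this.clone().step(action).state
    }

    step(action) {
        this._state = [this._state[0] + action[0]]
        const done = Math.abs(this._state[0]) >= 10
        return { state: this._state.slice(), reward: done ? 1 : 0, done }
    }
}
```

Implementing `test` via `clone` is one simple way to satisfy the "without changing environment" requirement; agents can then evaluate candidate actions safely.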
Empty environment
Extends RLEnvironmentBase
Blackjack environment
Extends RLEnvironmentBase
Breaker environment
Extends RLEnvironmentBase
Cartpole environment
Extends RLEnvironmentBase
Draughts environment
Extends RLEnvironmentBase
Gomoku environment
Extends RLEnvironmentBase
Grid world environment
Extends RLEnvironmentBase
In-hypercube environment
Extends RLEnvironmentBase
(number = 2
)
Dimension of the environmentSmooth maze environment
Extends RLEnvironmentBase
MountainCar environment
Extends RLEnvironmentBase
Pendulum environment
Extends RLEnvironmentBase
Reversi environment
Extends RLEnvironmentBase
Waterball environment
Extends RLEnvironmentBase
Returns accuracy.
number
: AccuracyReturns precision with macro average.
number
: PrecisionReturns recall with macro average.
number
: RecallReturns F-score with macro average.
number
: F-scoreReturns Cohen's kappa coefficient.
number
: Cohen's kappa coefficientReturns Davies-Bouldin index.
number
: Davies-Bouldin indexReturns Silhouette coefficient.
Array<number>
: Silhouette coefficientReturns Dunn index.
"max"
| "mean"
| "centroid"
), inter_d: "centroid"
): number(Array<any>)
Predicted categories(("max"
| "mean"
| "centroid"
) = 'max'
)
Intra-cluster distance type("centroid"
= 'centroid'
)
Inter-cluster distance typenumber
: Dunn indexReturns Purity.
number
: PurityReturns Rand index.
number
: Rand indexReturns Dice index.
number
: Dice indexReturns Jaccard index.
number
: Jaccard indexReturns Fowlkes-Mallows index.
number
: Fowlkes-Mallows indexReturns Co-Ranking Matrix.
number
: Co-Ranking Matrix valueReturns MSE (Mean Squared Error).
(number | Array<number>)
: Mean Squared ErrorReturns RMSE (Root Mean Squared Error).
(number | Array<number>)
: Root Mean Squared ErrorReturns MAE (Mean Absolute Error).
(number | Array<number>)
: Mean Absolute ErrorReturns MAD (Median Absolute Deviation).
(number | Array<number>)
: Median Absolute DeviationReturns RMSPE (Root Mean Squared Percentage Error).
(number | Array<number>)
: Root Mean Squared Percentage ErrorReturns MAPE (Mean Absolute Percentage Error).
(number | Array<number>)
: Mean Absolute Percentage ErrorReturns MSLE (Mean Squared Logarithmic Error).
(number | Array<number>)
: Mean Squared Logarithmic ErrorReturns RMSLE (Root Mean Squared Logarithmic Error).
(number | Array<number>)
: Root Mean Squared Logarithmic ErrorReturns R2 (coefficient of determination).
(number | Array<number>)
: Coefficient of determinationReturns correlation.
(number | Array<number>)
: Correlation0.17.0
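The regression metrics listed above share one shape — aggregate a per-sample error, then optionally take a root. A few of them sketched in plain JavaScript (illustrative definitions, not the library's exported functions):

```javascript
// Sketches of common regression error metrics (illustrative, not the library's exports).
const mean = a => a.reduce((s, v) => s + v, 0) / a.length

// Mean Squared Error and its root.
const mse = (pred, t) => mean(pred.map((p, i) => (p - t[i]) ** 2))
const rmse = (pred, t) => Math.sqrt(mse(pred, t))

// Mean Absolute Error.
const mae = (pred, t) => mean(pred.map((p, i) => Math.abs(p - t[i])))

// Coefficient of determination: R^2 = 1 - SS_res / SS_tot.
function r2(pred, t) {
    const tm = mean(t)
    const ssRes = t.reduce((s, v, i) => s + (v - pred[i]) ** 2, 0)
    const ssTot = t.reduce((s, v) => s + (v - tm) ** 2, 0)
    return 1 - ssRes / ssTot
}
```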
Default export object.
(Tensor)
: Tensor class(Matrix)
: Matrix class(Graph)
: Graph class(Complex)
: Complex number(A2CAgent)
: A2C agent(LBABOD)
: Lower-bound for the Angle-based Outlier Detection(ABOD)
: Angle-based Outlier Detection(ADALINE)
: Adaptive Linear Neuron model(ADAMENN)
: Adaptive Metric Nearest Neighbor(AdaptiveThresholding)
: Adaptive thresholding(AffinityPropagation)
: Affinity propagation model(CentroidAgglomerativeClustering)
: Centroid agglomerative clustering(CompleteLinkageAgglomerativeClustering)
: Complete linkage agglomerative clustering(GroupAverageAgglomerativeClustering)
: Group average agglomerative clustering(MedianAgglomerativeClustering)
: Median agglomerative clustering(SingleLinkageAgglomerativeClustering)
: Single linkage agglomerative clustering(WardsAgglomerativeClustering)
: Ward's agglomerative clustering(WeightedAverageAgglomerativeClustering)
: Weighted average agglomerative clustering(AkimaInterpolation)
: Akima interpolation(ALMA)
: Approximate Large Margin algorithm(AODE)
: Averaged One-Dependence Estimators(AR)
: Autoregressive model(ARMA)
: Autoregressive moving average model(AROW)
: Adaptive regularization of Weight Vectors(ART)
: Adaptive resonance theory(AssociationAnalysis)
: Association analysis(Autoencoder)
: Autoencoder(AutomaticThresholding)
: Automatic thresholding(AverageShiftedHistogram)
: Average shifted histogram(BalancedHistogramThresholding)
: Balanced histogram thresholding(Ballseptron)
: Ballseptron(Banditron)
: Banditron(BayesianLinearRegression)
: Bayesian linear regression(BayesianNetwork)
: Bayesian Network(BernsenThresholding)
: Bernsen thresholding(BesselFilter)
: Bessel filter(BilinearInterpolation)
: Bilinear interpolation(BIRCH)
: Balanced iterative reducing and clustering using hierarchies(BOGD)
: Bounded Online Gradient Descent(BoxCox)
: Box-Cox transformation(BPA)
: Budgeted online Passive-Aggressive(BrahmaguptaInterpolation)
: Brahmagupta interpolation(MulticlassBSGD)
(BSGD)
: Budgeted Stochastic Gradient Descent(BudgetPerceptron)
: Budget Perceptron(ButterworthFilter)
: Butterworth filter(C2P)
: Clustering based on Closest Pairs(Canny)
: Canny edge detection(CAST)
: Clustering Affinity Search Technique(CategoricalNaiveBayes)
: Categorical naive bayes(CatmullRomSplines)
: Catmull-Rom splines interpolation(CentripetalCatmullRomSplines)
: Centripetal Catmull-Rom splines interpolation(CHAMELEON)
: CHAMELEON(ChangeFinder)
: Change finder(ChebyshevFilter)
: Chebyshev filter(CLARA)
: Clustering LARge Applications(CLARANS)
: Clustering Large Applications based on RANdomized Search(CLIQUE)
: CLustering In QUEst(CLUES)
: CLUstEring based on local Shrinking(CoTraining)
: Co-training(COF)
: Connectivity-based Outlier Factor(COLL)
: Conscience on-line learning(ComplementNaiveBayes)
: Complement Naive Bayes(ConfidenceWeighted)
: Confidence weighted(SoftConfidenceWeighted)
: Soft confidence weighted(CosineInterpolation)
: Cosine interpolation(CRF)
: Conditional random fields(CubicConvolutionInterpolation)
: Cubic-convolution interpolation(CubicHermiteSpline)
: Cubic Hermite spline(CubicInterpolation)
: Cubic interpolation(CumulativeMovingAverage)
: Cumulative moving average(CumSum)
: Cumulative sum change point detection(CURE)
: Clustering Using REpresentatives(DiscriminantAdaptiveNearestNeighbor)
: Discriminant adaptive nearest neighbor(DBCLASD)
: Distribution Based Clustering of LArge Spatial Databases(DBSCAN)
: Density-based spatial clustering of applications with noise(DecisionTreeClassifier)
: Decision tree classifier(DecisionTreeRegression)
: Decision tree regression(DelaunayInterpolation)
: Delaunay interpolation(DemingRegression)
: Deming regression(DENCLUE)
: DENsity CLUstering(DIANA)
: DIvisive ANAlysis Clustering(DiffusionMap)
: Diffusion map(DQNAgent)
: Deep Q-Network agent(DPAgent)
: Dynamic programming agent(ElasticNet)
: Elastic net(EllipticFilter)
: Elliptic filter(ENaN)
: Extended Natural Neighbor(ENN)
: Extended Nearest Neighbor(EnsembleBinaryModel)
: Ensemble binary models(ExponentialMovingAverage)
: Exponential moving average(ModifiedMovingAverage)
: Modified moving average(ExtraTreesClassifier)
: Extra trees classifier(ExtraTreesRegressor)
: Extra trees regressor(FastMap)
: FastMap(FINDIT)
: a Fast and INtelligent subspace clustering algorithm using DImension voting(Forgetron)
: Forgetron(FuzzyCMeans)
: Fuzzy c-means(FuzzyKNN)
: Fuzzy k-nearest neighbor(GAN)
: Generative adversarial networks(GasserMuller)
: Gasser–Müller kernel estimator(GaussianProcess)
: Gaussian process(GBDT)
: Gradient boosting decision tree(GBDTClassifier)
: Gradient boosting decision tree classifier(GeneralizedESD)
: Generalized extreme studentized deviate(GeneticAlgorithmGeneration)
: Genetic algorithm generation(GeneticKMeans)
: Genetic k-means model(GMeans)
: G-means(GMM)
: Gaussian mixture model(GMR)
: Gaussian mixture regression(SemiSupervisedGMM)
: Semi-Supervised gaussian mixture model(GPLVM)
: Gaussian Process Latent Variable Model(GrowingCellStructures)
: Growing cell structures(GrowingNeuralGas)
: Growing neural gas(GSOM)
: Growing Self-Organizing Map(GTM)
: Generative topographic mapping(HampelFilter)
: Hampel filter(HDBSCAN)
: Hierarchical Density-based spatial clustering of applications with noise(Histogram)
: Histogram(HLLE)
: Hessian Locally Linear Embedding(ContinuousHMM)
: Continuous hidden Markov model(HMM)
: Hidden Markov model(HoltWinters)
: Holt-Winters method(HopfieldNetwork)
: Hopfield network(Hotelling)
: Hotelling T-square Method(HuberRegression)
: Huber regression(ICA)
: Independent component analysis(CELLIP)
: Classical ellipsoid method(IELLIP)
: Improved ellipsoid method(IKNN)
: Locally Informative K-Nearest Neighbor(IncrementalPCA)
: Incremental principal component analysis(INFLO)
: Influenced Outlierness(InverseDistanceWeighting)
: Inverse distance weighting(InverseSmoothstepInterpolation)
: Inverse smoothstep interpolation(ISODATA)
: Iterative Self-Organizing Data Analysis Technique(IsolationForest)
: Isolation forest(Isomap)
: Isomap(IsotonicRegression)
: Isotonic regression(KalmanFilter)
: Kalman filter(KDEOS)
: Kernel Density Estimation Outlier Score(KernelDensityEstimator)
: Kernel density estimator(KernelKMeans)
: Kernel k-means(KernelizedPegasos)
: Kernelized Primal Estimated sub-GrAdientSOlver for SVM(KernelizedPerceptron)
: Kernelized perceptron(KLIEP)
: Kullback-Leibler importance estimation procedure(KMeans)
: k-means model(KMeanspp)
: k-means++ model(KMedians)
: k-medians model(KMedoids)
: k-medoids model(SemiSupervisedKMeansModel)
: semi-supervised k-means model(KModes)
: k-modes model(KNN)
: k-nearest neighbor(KNNAnomaly)
: k-nearest neighbor anomaly detection(KNNDensityEstimation)
: k-nearest neighbor density estimation(KNNRegression)
: k-nearest neighbor regression(SemiSupervisedKNN)
: Semi-supervised k-nearest neighbor(KPrototypes)
: k-prototypes model(KSVD)
: k-SVD(KolmogorovZurbenkoFilter)
: Kolmogorov–Zurbenko filter(LabelPropagation)
: Label propagation(LabelSpreading)
: Label spreading(LadderNetwork)
: Ladder network(LagrangeInterpolation)
: Lagrange interpolation(LanczosInterpolation)
: Lanczos interpolation(Laplacian)
: Laplacian edge detection(LaplacianEigenmaps)
: Laplacian eigenmaps(Lasso)
: Least absolute shrinkage and selection operator(LatentDirichletAllocation)
: Latent dirichlet allocation(LBG)
: Linde-Buzo-Gray algorithm(FishersLinearDiscriminant)
: Fisher's linear discriminant analysis(LinearDiscriminant)
: Linear discriminant analysis(LinearDiscriminantAnalysis)
: Linear discriminant analysis(MulticlassLinearDiscriminant)
: Multiclass linear discriminant analysis(LDF)
: Local Density Factor(LDOF)
: Local Distance-based Outlier Factor(LeastAbsolute)
: Least absolute deviations(LeastSquares)
: Least squares(LinearInterpolation)
: Linear interpolation(LLE)
: Locally Linear Embedding(LeastMedianSquaresRegression)
: Least median squares regression(LMNN)
: Large Margin Nearest Neighbor(LOCI)
: Local Correlation Integral(LOESS)
: Locally estimated scatterplot smoothing(LOF)
: Local Outlier Factor(LoG)
: Laplacian of gaussian filter(LogarithmicInterpolation)
: Logarithmic interpolation(LogisticRegression)
: Logistic regression(MultinomialLogisticRegression)
: Multinomial logistic regression(LoOP)
: Local Outlier Probability(LOWESS)
: Locally weighted scatter plot smooth(LowpassFilter)
: Lowpass filter(LpNormLinearRegression)
: Lp norm linear regression(LSA)
: Latent Semantic Analysis(LSDD)
: Least-squares density difference(LSDDCPD)
: LSDD for change point detection(LSIF)
: least-squares importance fitting(LeastTrimmedSquaresRegression)
: Least trimmed squares(LTSA)
: Local Tangent Space Alignment(LVQClassifier)
: Learning Vector Quantization classifier(LVQCluster)
: Learning Vector Quantization clustering(MAD)
: Median Absolute Deviation(MADALINE)
: Many Adaptive Linear Neuron model(MarginPerceptron)
: Margin Perceptron(MarkovSwitching)
: Markov switching(MaxAbsScaler)
: Max absolute scaler(MaximumLikelihoodEstimator)
: Maximum likelihood estimator(MCD)
: Minimum Covariance Determinant(MixtureDiscriminant)
: Mixture discriminant analysis(MDS)
: Multi-dimensional Scaling(MeanShift)
: Mean shift(MetropolisHastings)
: Metropolis-Hastings algorithm(MinmaxNormalization)
: Min-max normalization(MIRA)
: Margin Infused Relaxed Algorithm(MLLE)
: Modified Locally Linear Embedding(MLPClassifier)
: Multi layer perceptron classifier(MLPRegressor)
: Multi layer perceptron regressor(MOD)
: Method of Optimal Direction(MONA)
: MONothetic Analysis Clustering(MonotheticClustering)
: Monothetic Clustering(MCAgent)
: Monte Carlo agent(Mountain)
: Mountain method(LinearWeightedMovingAverage)
: Linear weighted moving average(SimpleMovingAverage)
: Simple moving average(TriangularMovingAverage)
: Triangular moving average(MovingMedian)
: Moving median(MT)
: Mahalanobis Taguchi method(MutualInformationFeatureSelection)
: Mutual information feature selector(MutualKNN)
: Mutual k-nearest-neighbor model(NCubicInterpolation)
: n-cubic interpolation(NLinearInterpolation)
: n-linear interpolation(NadarayaWatson)
: Nadaraya–Watson kernel regression(NaiveBayes)
: Naive bayes(NAROW)
: Narrow Adaptive Regularization Of Weights(NaturalNeighborInterpolation)
: Natural neighbor interpolation(NeighbourhoodComponentsAnalysis)
: Neighbourhood components analysis(NearestCentroid)
: Nearest centroid classifier(NegationNaiveBayes)
: Negation Naive bayes(NeuralGas)
: Neural gas model(ComputationalGraph)
(Layer)
(NeuralnetworkException)
: Exception for neural network class(NeuralNetwork)
: Neural network(NiblackThresholding)
: Niblack thresholding(NICE)
: Flow-based generative model non-linear independent component estimation(NLMeans)
: Non-local means filter(NMF)
: Non-negative matrix factorization(NNBCA)
: Natural Neighborhood Based Classification Algorithm(NormalHERD)
: Normal Herd(OCSVM)
: One-class support vector machine(ODIN)
: Outlier Detection using Indegree Number(OnlineGradientDescent)
: Online gradient descent(OPTICS)
: Ordering points to identify the clustering structure(ORCLUS)
: arbitrarily ORiented projected CLUSter generation(OtsusThresholding)
: Otsu's thresholding(PAM)
: Partitioning Around Medoids(ParticleFilter)
: Particle filter(PassingBablok)
: Passing-Bablok method(PA)
: Passive Aggressive(PAUM)
: Perceptron Algorithm with Uneven Margins(AnomalyPCA)
: Principal component analysis for anomaly detection(DualPCA)
: Dual Principal component analysis(KernelPCA)
: Kernel Principal component analysis(PCA)
: Principal component analysis(PossibilisticCMeans)
: Possibilistic c-means(PCR)
: Principal component regression(Pegasos)
: Primal Estimated sub-GrAdientSOlver for SVM(PercentileAnormaly)
: Percentile anomaly detection(AveragedPerceptron)
: Averaged perceptron(MulticlassPerceptron)
: Multiclass perceptron(Perceptron)
: Perceptron(PhansalkarThresholding)
: Phansalkar thresholding(PLS)
: Partial least squares regression(PLSA)
: Probabilistic latent semantic analysis(PoissonRegression)
: Poisson regression(PGAgent)
: Policy gradient agent(PolynomialHistogram)
: Polynomial histogram(PolynomialInterpolation)
: Polynomial interpolation(ProjectionPursuit)
: Projection pursuit regression(Prewitt)
: Prewitt edge detection(PriestleyChao)
: Priestley–Chao kernel estimator(PrincipalCurve)
: Principal curves(ProbabilisticPCA)
: Probabilistic Principal component analysis(ProbabilityBasedClassifier)
: Probability based classifier(MultinomialProbit)
: Multinomial probit(Probit)
: Probit(PROCLUS)
: PROjected CLUStering algorithm(Projectron)
: Projectron(Projectronpp)
: Projectron++(PTile)
: P-tile thresholding(QTableBase)
: Base class for Q-table(QAgent)
: Q-learning agent(QuadraticDiscriminant)
: Quadratic discriminant analysis(QuantileRegression)
: Quantile regression(RadiusNeighbor)
: radius neighbor(RadiusNeighborRegression)
: radius neighbor regression(SemiSupervisedRadiusNeighbor)
: Semi-supervised radius neighbor(RamerDouglasPeucker)
: Ramer-Douglas-Peucker algorithm(RandomForestClassifier)
: Random forest classifier(RandomForestRegressor)
: Random forest regressor(RandomProjection)
: Random projection(RANSAC)
: Random sample consensus(RadialBasisFunctionNetwork)
: Radial basis function network(GBRBM)
: Gaussian-Bernoulli Restricted Boltzmann machine(RBM)
: Restricted Boltzmann machine(RBP)
: Randomized Budget Perceptron(RDF)
: Relative Density Factor(RDOS)
: Relative Density-based Outlier Score(KernelRidge)
: Kernel ridge regression(Ridge)
: Ridge regression(RKOF)
: Robust Kernel-based Outlier Factor(RecursiveLeastSquares)
: Recursive least squares(RepeatedMedianRegression)
: Repeated median regression(RNN)
: Recurrent neuralnetwork(RobertsCross)
: Roberts cross(RobustScaler)
: Robust scaler(ROCK)
: RObust Clustering using linKs(AggressiveROMMA)
: Aggressive Relaxed Online Maximum Margin Algorithm(ROMMA)
: Relaxed Online Maximum Margin Algorithm(RVM)
: Relevance vector machine(S3VM)
: Semi-Supervised Support Vector Machine(Sammon)
: Sammon mapping(SARSAAgent)
: SARSA agent(SauvolaThresholding)
: Sauvola thresholding(SavitzkyGolayFilter)
: Savitzky-Golay filter(SDAR)
: Sequentially Discounting Autoregressive model(SegmentedRegression)
: Segmented regression(SelectiveNaiveBayes)
: Selective Naive bayes(SelectiveSamplingAdaptivePerceptron)
: Selective sampling Perceptron with adaptive parameter(SelectiveSamplingPerceptron)
: Selective sampling Perceptron(SelectiveSamplingSOP)
: Selective sampling second-order Perceptron(SelectiveSamplingWinnow)
: Selective sampling Winnow(SelfTraining)
: Self-training(SemiSupervisedNaiveBayes)
: Semi-supervised naive bayes(SezanThresholding)
: Sezan's thresholding(ShiftingPerceptron)
: Shifting Perceptron Algorithm(ILK)
: Implicit online Learning with Kernels(SILK)
: Sparse Implicit online Learning with Kernels(SincInterpolation)
: Sinc interpolation(SlicedInverseRegression)
: Sliced inverse regression(Slerp)
: Spherical linear interpolation(SliceSampling)
: Slice sampling(SMARegression)
: Standardized Major Axis regression(SmirnovGrubbs)
: Smirnov-Grubbs test(SmoothstepInterpolation)
: Smoothstep interpolation(Snakes)
: Snakes (active contour model)(Sobel)
: Sobel edge detection(SoftKMeans)
: Soft k-means(SOM)
: Self-Organizing Map(SecondOrderPerceptron)
: Second order perceptron(SpectralClustering)
: Spectral clustering(SmoothingSpline)
: Spline smoothing(SplineInterpolation)
: Spline interpolation(SplitAndMerge)
: Split and merge segmentation(SquaredLossMICPD)
: Squared-loss Mutual information change point detection(SST)
: Singular-spectrum transformation(Standardization)
: Standardization(StatisticalRegionMerging)
: Statistical Region Merging(STING)
: STatistical INformation Grid-based method(Stoptron)
: Stoptron(SVC)
: Support vector clustering(SVM)
: Support vector machine(SVR)
: Support vector regression(TheilSenRegression)
: Theil-Sen regression(Thompson)
: Thompson test(TietjenMoore)
: Tietjen-Moore Test(TighterPerceptron)
: Tighter Budget Perceptron(TightestPerceptron)
: Tightest Perceptron(TrigonometricInterpolation)
: Trigonometric interpolation(SNE)
: Stochastic Neighbor Embedding(tSNE)
: T-distributed Stochastic Neighbor Embedding(TukeyRegression)
: Tukey regression(TukeysFences)
: Tukey's fences(RuLSIF)
: Relative unconstrained Least-Squares Importance Fitting(uLSIF)
: unconstrained Least-Squares Importance Fitting(UMAP)
: Uniform Manifold Approximation and Projection(UniversalSetNaiveBayes)
: Universal-set Naive bayes(VAE)
: Variational Autoencoder(VAR)
: Vector Autoregressive model(VBGMM)
: Variational Gaussian Mixture Model(VotedPerceptron)
: Voted-perceptron(WeightedKMeans)
: Weighted k-means model(WeightedKNN)
: Weighted K-Nearest Neighbor(WeightedLeastSquares)
: Weighted least squares(Winnow)
: Winnow(Word2Vec)
: Word2Vec(XGBoost)
: eXtreme Gradient Boosting regression(XGBoostClassifier)
: eXtreme Gradient Boosting classifier(XMeans)
: x-means(YeoJohnson)
: Yeo-Johnson power transformation(ZeroInflatedPoisson)
: Zero-inflated poisson(ZeroTruncatedPoisson)
: Zero-truncated poisson(AcrobotRLEnvironment)
: Acrobot environment(RLEnvironmentBase)
: Base class for reinforcement learning environment(RLIntRange)
: Integer number range state/action(RLRealRange)
: Real number range state/action(EmptyRLEnvironment)
: Empty environment(BlackjackRLEnvironment)
: Blackjack environment(BreakerRLEnvironment)
: Breaker environment(CartPoleRLEnvironment)
: Cartpole environment(DraughtsRLEnvironment)
: Draughts environment(GomokuRLEnvironment)
: Gomoku environment(GridMazeRLEnvironment)
: Grid world environment(InHypercubeRLEnvironment)
: In-hypercube environment(SmoothMazeRLEnvironment)
: Smooth maze environment(MountainCarRLEnvironment)
: MountainCar environment(PendulumRLEnvironment)
: Pendulum environment(ReversiRLEnvironment)
: Reversi environment(WaterballRLEnvironment)
: Waterball environment(accuracy)
: Returns accuracy.(cohensKappa)
: Returns Cohen's kappa coefficient.(fScore)
: Returns F-score with macro average.(precision)
: Returns precision with macro average.(recall)
: Returns recall with macro average.(davisBouldinIndex)
: Returns Davies-Bouldin index.(diceIndex)
: Returns Dice index.(dunnIndex)
: Returns Dunn index.(fowlkesMallowsIndex)
: Returns Fowlkes-Mallows index.(jaccardIndex)
: Returns Jaccard index.(purity)
: Returns Purity.(randIndex)
: Returns Rand index.(silhouetteCoefficient)
: Returns Silhouette coefficient.(coRankingMatrix)
: Returns Co-Ranking Matrix.(correlation)
: Returns correlation.(mad)
: Returns MAD (Median Absolute Deviation).(mae)
: Returns MAE (Mean Absolute Error).(mape)
: Returns MAPE (Mean Absolute Percentage Error).(mse)
: Returns MSE (Mean Squared Error).(msle)
: Returns MSLE (Mean Squared Logarithmic Error).(r2)
: Returns R2 (coefficient of determination).(rmse)
: Returns RMSE (Root Mean Squared Error).(rmsle)
: Returns RMSLE (Root Mean Squared Logarithmic Error).(rmspe)
: Returns RMSPE (Root Mean Squared Percentage Error).Exception for matrix class
Extends Error
(string)
Error message(any)
Some valueMatrix class
Sizes of the matrix.
Elements in the matrix.
Iterate over the elements.
Set a value at the position.
Reshape this.
Concatenate this and m.
Returns a matrix reduced along the axis with the callback function.
(any?)
Initial value(boolean = null
)
Keep dimensions or not. If null, a negative axis returns a number and any other axis returns a Matrix.(Matrix | number)
: Reduced matrix or valueMultiply all elements by -1 in-place.
Set all elements to their logical NOT values.
Set all elements to their bitwise NOT values.
Set all elements to their absolute values.
Set all elements to their rounded values.
Set all elements to their floored values.
Set all elements to their ceil values.
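The axis-wise reduce described above — a negative axis folds everything to a single value, while axis 0 or 1 folds down columns or across rows — can be illustrated with plain nested arrays (a sketch of the semantics, not the Matrix class itself; `reduceAxis` is a hypothetical name):

```javascript
// Axis-wise reduce over a 2-D array (illustrates the semantics; not the Matrix class).
// axis < 0: fold every element to a single number.
// axis 0: fold down columns; axis 1: fold across rows.
function reduceAxis(mat, fn, init, axis) {
    if (axis < 0) {
        return mat.flat().reduce(fn, init)
    }
    if (axis === 0) {
        return mat[0].map((_, j) => mat.reduce((acc, row) => fn(acc, row[j]), init))
    }
    return mat.map(row => row.reduce(fn, init))
}
```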
Tensor class
Sizes of the tensor.
Elements in the tensor.
Iterate over the elements.
Returns a Matrix if the dimension of this tensor is 2.
Matrix
: MatrixConcatenate this and t.
Returns a tensor reduced along the axis with the callback function.
(any?)
Initial value(boolean = false
)
Keep dimensions or not.(Tensor | number)
: Reduced tensor or valueException for graph class
Extends Error
(string)
Error message(any)
Some valueEdge of graph
Graph class
Returns named graph
"balaban 10 cage"
| "bidiakis cube"
| "biggs smith"
| "brinkmann"
| "bull"
| "butterfly"
| "chvatal"
| "clebsch"
| "coxeter"
| "desargues"
| "diamond"
| "durer"
| "errera"
| "folkman"
| "foster"
| "franklin"
| "frucht"
| "goldner-harary"
| "golomb"
| "gray"
| "grotzsch"
| "harries"
| "heawood"
| "herschel"
| "hoffman"
| "holt"
| "kittell"
| "markstrom"
| "mcgee"
| "meredith"
| "mobius kantor"
| "moser spindle"
| "nauru"
| "pappus"
| "petersen"
| "poussin"
| "robertson"
| "shrikhande"
| "sousselier"
| "sylvester"
| "tutte"
| "tutte coxeter"
| "wagner"
| "wells"
)): Graph(("balaban 10 cage"
| "bidiakis cube"
| "biggs smith"
| "brinkmann"
| "bull"
| "butterfly"
| "chvatal"
| "clebsch"
| "coxeter"
| "desargues"
| "diamond"
| "durer"
| "errera"
| "folkman"
| "foster"
| "franklin"
| "frucht"
| "goldner-harary"
| "golomb"
| "gray"
| "grotzsch"
| "harries"
| "heawood"
| "herschel"
| "hoffman"
| "holt"
| "kittell"
| "markstrom"
| "mcgee"
| "meredith"
| "mobius kantor"
| "moser spindle"
| "nauru"
| "pappus"
| "petersen"
| "poussin"
| "robertson"
| "shrikhande"
| "sousselier"
| "sylvester"
| "tutte"
| "tutte coxeter"
| "wagner"
| "wells"
))
Name of the graphGraph
: Named graphEdges
Returns the degree of the node.
(number)
Index of target node((boolean | "in"
| "out"
) = true
)
Count undirected edges. If in
or out
is specified, only directed edges are counted and the direct
parameter is ignored.((boolean | "in"
| "out"
) = true
)
Count directed edgesnumber
: Degree of the nodeReturns indexes of adjacent nodes.
"in"
| "out"
), direct: (boolean | "in"
| "out"
)): Array<number>(number)
Index of target node((boolean | "in"
| "out"
) = true
)
Check undirected edges. If in
or out
is specified, only directed edges are checked and the direct
parameter is ignored.((boolean | "in"
| "out"
) = true
)
Check directed edgesArray<number>
: Indexes of adjacent nodesReturns indexes of each component.
Returns indexes of each biconnected component.
Add the node.
(unknown?)
Value of the nodeRemove all nodes.
Returns the edges.
"forward"
| "backward"
), direct: (boolean | "forward"
| "backward"
)): Array<Edge>(number)
Index of the starting node of the edge(number)
Index of the end node of the edge((boolean | "forward"
| "backward"
) = true
)
Get undirected edges or not. If forward
or backward
is specified, only directed edges are returned and the direct
parameter is ignored.((boolean | "forward"
| "backward"
) = true
)
Get directed edges or notArray<Edge>
: Edges between from
and to
Remove the edges.
Remove all edges.
Returns adjacency matrix
Returns adjacency list
"both"
| "in"
| "out"
))(("both"
| "in"
| "out"
) = both
)
Indegree or outdegreeReturns degree matrix.
"both"
| "in"
| "out"
))(("both"
| "in"
| "out"
) = both
)
Indegree or outdegreeReturns laplacian matrix.
Returns whether this is a planar graph or not with the add-path algorithm.
On the Cutting Edge: Simplified O(n) Planarity by Edge Addition https://xuzijian629.hatenablog.com/entry/2019/12/14/163726
boolean
: true
if this is a planar graphReturns whether this is a planar graph or not with the add-vertex algorithm.
Hopcroft, J. and Tarjan, R. "Efficient Planarity Testing", J. ACM, Vol. 21, No. 4, pp. 549-568 (1974) T. Nishizeki, "Graph Planarity Testing Algorithms", IPSJ Magazine, Vol. 24, No. 4, pp. 521-528 (1983) K. S. Booth, "Testing for the Consecutive Ones Property, Interval Graphs, and Graph Planarity Using PQ-Tree Algorithms", Journal of Computer and System Sciences, 13, pp. 335-379 (1976)
boolean
: true
if this is a planar graphContract this graph.
Subdivide this graph.
Substitute another graph at the node.
Returns shortest path with breadth-first search algorithm.
(number)
Index of start nodeArray<{length: number, prev: number, path: Array<number>}>
: Shortest length and path for all nodesReturns shortest path with Floyd–Warshall algorithm.
Returns Hamiltonian cycle
Returns minimum cut.
Returns minimum cut.
Returns minimum cut.
Returns bisection cut.
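The breadth-first shortest-path result described above — an array of `{length, prev, path}` entries, one per node — can be sketched over an adjacency list (an illustration of that return shape, not the Graph class itself; `shortestPathBFS` is a hypothetical name):

```javascript
// Breadth-first shortest paths on an unweighted graph given as an adjacency list
// (a sketch of the documented return shape, not the Graph class itself).
function shortestPathBFS(adj, start) {
    // Unreached nodes keep length Infinity and an empty path.
    const res = adj.map(() => ({ length: Infinity, prev: null, path: [] }))
    res[start] = { length: 0, prev: null, path: [start] }
    const queue = [start]
    while (queue.length > 0) {
        const u = queue.shift()
        for (const v of adj[u]) {
            if (res[v].length === Infinity) {
                // First visit is the shortest on an unweighted graph.
                res[v] = { length: res[u].length + 1, prev: u, path: [...res[u].path, v] }
                queue.push(v)
            }
        }
    }
    return res
}
```

Floyd–Warshall, by contrast, computes all-pairs distances in O(n³) and also handles weighted edges.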
Complex number
A2C agent
(RLEnvironmentBase)
Environment(number)
Resolution of actions(number)
Number of processes(string)
Optimizer of the networkAngle-based Outlier Detection
(number = Infinity
)
Number of neighborhoodsLower-bound for the Angle-based Outlier Detection
Adaptive Linear Neuron model
(number)
Learning rateAdaptive Metric Nearest Neighbor
(number? = null
)
The number of neighbors of the test point(number = 3
)
The number of neighbors in N1 for estimation(number? = null
)
The size of the neighborhood N2 for each of the k0 neighbors for estimation(number? = null
)
The number of points within the delta intervals(number = 3
)
The number of neighbors in the final nearest neighbor rule(number = 0.5
)
The positive factor for the exponential weighting schemeAdaptive thresholding
"mean"
| "gaussian"
| "median"
| "midgray"
), k: number, c: number)(("mean"
| "gaussian"
| "median"
| "midgray"
) = 'mean'
)
Method name(number = 3
)
Size of local range(number = 2
)
Value subtracted from thresholdAffinity propagation model
Type: object
(number?)
: Data index of leaf node(number?)
: Distance between children nodes(number)
: Number of leaf nodes(Array<AgglomerativeClusterNode>?)
: Children nodes(Array<AgglomerativeClusterNode>)
: Leaf nodesAgglomerative clustering
"euclid"
| "manhattan"
| "chebyshev"
))(("euclid"
| "manhattan"
| "chebyshev"
) = 'euclid'
)
Metric nameReturns the specified number of clusters.
(number)
Number of clustersArray<AgglomerativeClusterNode>
: Cluster nodesReturns a distance between two nodes.
(AgglomerativeClusterNode)
Node(AgglomerativeClusterNode)
Nodenumber
: DistanceReturns new distance.
(number)
Number of data points in merging node A(number)
Number of data points in merging node B(number)
Number of data points in the current node(number)
Distance between node A and current node(number)
Distance between node B and current node(number)
Distance between node A and node Bnumber
: New distance between current node and merged nodeComplete linkage agglomerative clustering
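The "new distance" signature above is the Lance–Williams update: the distance from an existing node C to the merged node A∪B, computed from the cluster sizes nA, nB, nC and the pairwise distances dAC, dBC, dAB, with coefficients that differ per linkage. A sketch of three of the rules (illustrative, not the library code; Ward's rule is stated for squared Euclidean distances):

```javascript
// Lance-Williams distance updates for three linkage rules (illustrative sketch).
// Each returns d(A∪B, C) from sizes nA, nB, nC and distances dAC, dBC, dAB.
const singleLinkage = (nA, nB, nC, dAC, dBC, dAB) => Math.min(dAC, dBC)
const groupAverage = (nA, nB, nC, dAC, dBC, dAB) => (nA * dAC + nB * dBC) / (nA + nB)
// Ward's rule applies to squared Euclidean distances.
const wards = (nA, nB, nC, dAC, dBC, dAB) =>
    ((nA + nC) * dAC + (nB + nC) * dBC - nC * dAB) / (nA + nB + nC)
```

The subclasses below (complete linkage, centroid, median, weighted average, ...) each supply their own coefficients for the same six-argument update.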
Extends AgglomerativeClustering
Returns a distance between two nodes.
(AgglomerativeClusterNode)
Node(AgglomerativeClusterNode)
Nodenumber
: DistanceReturns new distance.
(number)
Number of data points in merging node A(number)
Number of data points in merging node B(number)
Number of data points in the current node(number)
Distance between node A and current node(number)
Distance between node B and current node(number)
Distance between node A and node Bnumber
: New distance between current node and merged nodeSingle linkage agglomerative clustering
Extends AgglomerativeClustering
Returns a distance between two nodes.
(AgglomerativeClusterNode)
Node(AgglomerativeClusterNode)
Nodenumber
: DistanceReturns new distance.
(number)
Number of data points in merging node A(number)
Number of data points in merging node B(number)
Number of data points in the current node(number)
Distance between node A and current node(number)
Distance between node B and current node(number)
Distance between node A and node Bnumber
: New distance between current node and merged nodeGroup average agglomerative clustering
Extends AgglomerativeClustering
Returns a distance between two nodes.
(AgglomerativeClusterNode)
Node(AgglomerativeClusterNode)
Nodenumber
: DistanceReturns new distance.
(number)
Number of data points in merging node A(number)
Number of data points in merging node B(number)
Number of data points in the current node(number)
Distance between node A and current node(number)
Distance between node B and current node(number)
Distance between node A and node Bnumber
: New distance between current node and merged nodeWard's agglomerative clustering
Extends AgglomerativeClustering
Returns a distance between two nodes.
(AgglomerativeClusterNode)
Node(AgglomerativeClusterNode)
Nodenumber
: DistanceReturns new distance.
(number)
Number of data points in merging node A(number)
Number of data points in merging node B(number)
Number of data points in the current node(number)
Distance between node A and current node(number)
Distance between node B and current node(number)
Distance between node A and node Bnumber
: New distance between current node and merged nodeCentroid agglomerative clustering
Extends AgglomerativeClustering
Returns a distance between two nodes.
(AgglomerativeClusterNode)
Node(AgglomerativeClusterNode)
Nodenumber
: DistanceReturns new distance.
(number)
Number of data points in merging node A(number)
Number of data points in merging node B(number)
Number of data points in the current node(number)
Distance between node A and current node(number)
Distance between node B and current node(number)
Distance between node A and node Bnumber
: New distance between current node and merged nodeWeighted average agglomerative clustering
Extends AgglomerativeClustering
Returns a distance between two nodes.
(AgglomerativeClusterNode)
Node(AgglomerativeClusterNode)
Nodenumber
: DistanceReturns new distance.
(number)
Number of data points in merging node A(number)
Number of data points in merging node B(number)
Number of data points in the current node(number)
Distance between node A and current node(number)
Distance between node B and current node(number)
Distance between node A and node Bnumber
: New distance between current node and merged nodeMedian agglomerative clustering
Extends AgglomerativeClustering
Returns a distance between two nodes.
(AgglomerativeClusterNode)
Node(AgglomerativeClusterNode)
Nodenumber
: DistanceReturns new distance.
(number)
Number of data points in merging node A(number)
Number of data points in merging node B(number)
Number of data points in the current node(number)
Distance between node A and current node(number)
Distance between node B and current node(number)
Distance between node A and node Bnumber
: New distance between current node and merged nodeAkima interpolation
(boolean = false
)
Use modified method or notApproximate Large Margin algorithm
(number = 2
)
Power parameter for norm(number = 1
)
Degree of approximation to the optimal margin hyperplane(number = 1
)
Tuning parameter(number = 1
)
Tuning parameterAveraged One-Dependence Estimators
(number = 20
)
Discretized numberAutoregressive model
(number)
Order(("lsm"
| "yuleWalker"
| "levinson"
| "householder"
) = 'lsm'
)
Method nameFit model.
Autoregressive moving average model
Fit model.
Adaptive regularization of Weight Vectors
(number = 0.1
)
Learning rateAdaptive resonance theory
"l2"
)(number = 1
)
Threshold("l2"
= 'l2'
)
Method nameApriori algorithm
(number)
Minimum supportAssociation analysis
(number)
Minimum supportFit model.
Autoencoder
(number)
Input size(number)
Reduced dimension(string)
Optimizer of the networkAutomatic thresholding
Fit model.
Average shifted histogram
Balanced histogram thresholding
(number = 500
)
Minimum data countBallseptron
(number)
RadiusBanditron
(number = 0.5
)
GammaBayesian linear regression
Bayesian Network
(number)
Equivalent sample sizeFit model.
Bernsen thresholding
Bessel filter
Extends LowpassFilter
Bilinear interpolation
Balanced iterative reducing and clustering using hierarchies
(number)
(number = 10
)
Maximum number of entries for each non-leaf nodes(number = 0.2
)
Threshold(number = Infinity
)
Maximum number of entries for each leaf nodesBounded Online Gradient Descent
"uniform"
| "nonuniform"
), kernel: ("gaussian"
| "polynomial"
| function (Array<number>, Array<number>): number), loss: ("zero_one"
| "hinge"
))(number = 10
)
Maximum budget size(number = 1
)
Stepsize(number = 0.1
)
Regularization parameter(number = 0.1
)
Maximum coefficient(("uniform"
| "nonuniform"
) = nonuniform
)
Sampling approach(("zero_one"
| "hinge"
) = hinge
)
Loss type nameBox-Cox transformation
(number? = null
)
LambdaBudgeted online Passive-Aggressive
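The Box-Cox transform with a fixed lambda (when lambda is null the model presumably estimates it from the data) can be sketched as follows; the function names are illustrative:

```javascript
// Box-Cox power transform for positive x:
//   y = (x^λ - 1) / λ   (λ ≠ 0)
//   y = ln(x)           (λ = 0)
function boxcox(x, lambda) {
	if (lambda === 0) return Math.log(x);
	return (Math.pow(x, lambda) - 1) / lambda;
}

// Inverse transform, recovering x from y
function boxcoxInv(y, lambda) {
	if (lambda === 0) return Math.exp(y);
	return Math.pow(lambda * y + 1, 1 / lambda);
}
```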
"simple"
| "projecting"
| "nn"
), kernel: ("gaussian"
| "polynomial"
| function (Array<number>, Array<number>): number))(number = 1
)
Regularization parameter(number = 10
)
Budget size(("simple"
| "projecting"
| "nn"
) = simple
)
VersionBrahmagupta interpolation
Budgeted Stochastic Gradient Descent
"removal"
| "projection"
| "merging"
), kernel: ("gaussian"
| "polynomial"
| function (Array<number>, Array<number>): number))(number = 10
)
Budget size(number = 1
)
Learning rate(number = 1
)
Regularization parameter(("removal"
| "projection"
| "merging"
) = removal
)
Maintenance type"removal"
| "projection"
| "merging"
), kernel: ("gaussian"
| "polynomial"
| function (Array<number>, Array<number>): number))Fit model.
Returns predicted values.
Array<any>
: Predicted valuesBudget Perceptron
Butterworth filter
Extends LowpassFilter
Clustering based on Closest Pairs
Canny edge detection
Clustering Affinity Search Technique
(number)
Affinity thresholdCategorical naive bayes
(number = 1.0
)
Smoothing parameterCatmull-Rom splines interpolation
Centripetal Catmull-Rom splines interpolation
(number = 0.5
)
Number for knot parameterizationCHAMELEON
(number = 5
)
Number of neighborhoodsChange finder
Chebyshev filter
Extends LowpassFilter
Clustering LARge Applications
(number)
Number of clustersClustering Large Applications based on RANdomized Search
(number)
Number of clustersCLustering In QUEst
CLUstEring based on local Shrinking
(number = 0.05
)
Speed factorCo-training
Connectivity-based Outlier Factor
(number)
Number of neighborhoodsConscience on-line learning
"gaussian"
| "polynomial"
| function (Array<number>, Array<number>): number))Complement Naive Bayes
"gaussian"
)("gaussian"
= gaussian
)
Distribution nameConfidence weighted
(number)
Confidence valueSoft confidence weighted
Extends ConfidenceWeighted
Cosine interpolation
Conditional random fields
Cubic-convolution interpolation
(number)
Tuning parameterFit model parameters.
Cubic Hermite spline
Cubic interpolation
Cumulative moving average
Cumulative sum change point detection
Type: object
Clustering Using REpresentatives
(number)
Number of representative pointsDiscriminant adaptive nearest neighbor
(number = null
)
Number of neighborhoodsDistribution Based Clustering of LArge Spatial Databases
Density-based spatial clustering of applications with noise
(number = 0.5
)
Radius to determine neighborhood(number = 5
)
Minimum size of cluster(("euclid"
| "manhattan"
| "chebyshev"
) = euclid
)
Metric nameDecision tree
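The DBSCAN parameters above (radius, minimum cluster size, metric) can be illustrated with a self-contained sketch using the Euclidean metric; this is not the library's API:

```javascript
// Minimal DBSCAN sketch. Labels: cluster index ≥ 0, or -1 for noise.
function dbscan(data, eps = 0.5, minPts = 5) {
	const dist = (a, b) => Math.hypot(...a.map((v, i) => v - b[i]));
	const neighbors = i => data.flatMap((p, j) => (dist(data[i], p) <= eps ? [j] : []));
	const labels = new Array(data.length).fill(undefined);
	let cluster = 0;
	for (let i = 0; i < data.length; i++) {
		if (labels[i] !== undefined) continue;
		const seeds = neighbors(i);
		if (seeds.length < minPts) {
			labels[i] = -1; // provisionally noise; may later join a cluster as a border point
			continue;
		}
		labels[i] = cluster;
		const queue = [...seeds];
		while (queue.length > 0) {
			const j = queue.shift();
			if (labels[j] === -1) labels[j] = cluster; // border point
			if (labels[j] !== undefined) continue;
			labels[j] = cluster;
			const jn = neighbors(j);
			if (jn.length >= minPts) queue.push(...jn); // j is a core point; expand
		}
		cluster++;
	}
	return labels;
}
```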
Decision tree classifier
"ID3"
| "CART"
))Extends DecisionTree
(("ID3"
| "CART"
))
Method nameDecision tree regression
Extends DecisionTree
Delaunay interpolation
Deming regression
(number)
Ratio of variancesDENsity CLUstering
(number)
Smoothing parameter for the kernel((1
| 2
) = 1
)
Version numberDIvisive ANAlysis Clustering
Diffusion map
(number)
Power parameterDeep Q-Network agent
(RLEnvironmentBase)
Environment(number)
Resolution of actions(string)
Optimizer of the networkDQN Method
(("DQN"
| "DDQN"
))
New method nameDynamic programming agent
(RLEnvironmentBase)
Environment(number = 20
)
ResolutionElastic net
(number = 0.1
)
Regularization strength(number = 0.5
)
Mixing parameter(("ISTA"
| "CD"
) = CD
)
Method nameElliptic filter
Extends LowpassFilter
Extended Natural Neighbor
"euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
))(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric nameExtended Nearest Neighbor
0
| 1
| 2
), k: number, metric: ("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
))((0
| 1
| 2
) = 1
)
Version(number = 5
)
Number of neighborhoods(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric nameType: object
Ensemble binary models
Exponential moving average
Modified moving average
Base class for Extremely Randomized Trees
Extra trees classifier
Extends ExtraTrees
Extra trees regressor
Extends ExtraTrees
FastMap
a Fast and INtelligent subspace clustering algorithm using DImension voting
Forgetron
"gaussian"
| "polynomial"
| function (Array<number>, Array<number>): number))(number)
Budget parameterFuzzy c-means
(number = 2
)
Fuzziness factorFuzzy k-nearest neighbor
Generative adversarial networks
""
| "conditional"
))(number)
Number of noise dimension(string)
Optimizer of the generator network(string)
Optimizer of the discriminator network((number | null))
Class size for conditional type((""
| "conditional"
))
Type nameFit model.
(number)
Iteration count(number)
Learning rate for generator(number)
Learning rate for discriminator(number)
Batch size{generatorLoss: number, discriminatorLoss: number}
: Loss valueGasser–Müller kernel estimator
(number)
Smoothing parameter for the kernelGaussian process
"gaussian"
, beta: number)("gaussian"
= gaussian
)
Kernel name(number = 1
)
Precision parameterGradient boosting decision tree
(number = 1
)
Maximum depth of tree(number = 1.0
)
Sampling rate(number = 0
)
Learning rateGradient boosting decision tree classifier
Extends GBDT
(number = 1
)
Maximum depth of tree(number = 1.0
)
Sampling rate(number = 0
)
Learning rateGeneralized extreme studentized deviate
Type: object
(function (...any): void)
: Run model(function (): GeneticModel)
: Returns mutated model(function (GeneticModel): GeneticModel)
: Returns mixed model(function (): number)
: Returns a number how good the model isGenetic algorithm
(number)
Number of models per generation(any)
Models
Type: Array<GeneticModel>
The best model.
GeneticModel
: Best modelRun for all models.
(...any)
Arguments for runGenetic algorithm generation
(RLEnvironmentBase)
Environment(number = 100
)
Number of models per generation(number = 20
)
ResolutionReset all agents.
Returns the best score agent.
GeneticAlgorithmAgent
: Best agentRun for all agents.
Genetic k-means model
G-means
Gaussian mixture model
Semi-Supervised gaussian mixture model
Extends GMM
Gaussian mixture regression
Extends GMM
Gaussian Process Latent Variable Model
"gaussian"
, kernelArgs: Array<any>?)(number)
Reduced dimension(number)
Precision parameter(number = 1.0
)
Learning rate for z(number = 0.005
)
Learning rate for alpha(number = 0.2
)
Learning rate for kernel("gaussian"
= gaussian
)
Kernel name(Array<any>? = []
)
Arguments for kernelGrowing cell structures
Growing neural gas
Growing Self-Organizing Map
Generative topographic mapping
(number)
Input size(number)
Output size(number = 20
)
Grid size(number = 10
)
Grid size for basis functionHampel filter
Hierarchical Density-based spatial clustering of applications with noise
(number = 5
)
Minimum number of clusters to be recognized as a cluster(number = 5
)
Number of neighborhood with core distance(("euclid"
| "manhattan"
| "chebyshev"
) = euclid
)
Metric nameHistogram
(object? = {}
)
ConfigHessian Locally Linear Embedding
(number = 1
)
Number of neighborhoodsHidden Markov model
(number)
Number of statesHidden Markov model
Extends HMMBase
(number)
Number of statesContinuous hidden Markov model
Extends HMMBase
(number)
Number of statesHolt-Winters method
(number)
Weight for last value(number = 0
)
Weight for trend value(number = 0
)
Weight for seasonal data(number = 0
)
Length of seasonHopfield network
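The weights above correspond to the level (α), trend (β), and seasonal (γ) terms. A sketch of the level/trend part (Holt's linear method; the seasonal γ term and season length are omitted for brevity, and the function name is illustrative):

```javascript
// Holt's linear trend smoothing:
//   level ← α·x + (1-α)·(level + trend)
//   trend ← β·(level - prevLevel) + (1-β)·trend
function holt(data, alpha, beta) {
	let level = data[0];
	let trend = data[1] - data[0];
	const smoothed = [level];
	for (let i = 1; i < data.length; i++) {
		const prevLevel = level;
		level = alpha * data[i] + (1 - alpha) * (level + trend);
		trend = beta * (level - prevLevel) + (1 - beta) * trend;
		smoothed.push(level);
	}
	// one-step-ahead forecast
	return { smoothed, forecast: level + trend };
}
```

On perfectly linear data the forecast continues the line exactly.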
Hotelling T-square Method
Huber regression
(number = 1.35
)
Threshold of outliers(("rls"
| "gd"
) = rls
)
Method name(number = 1
)
Learning rateIndependent component analysis
Classical ellipsoid method
Improved ellipsoid method
(number = 0.9
)
Parameter controlling the memory of online learning(number = 0.5
)
Parameter controlling the memory of online learningLocally Informative K-Nearest Neighbor
Incremental principal component analysis
(number = 0.95
)
Forgetting factorInfluenced Outlierness
(number)
Number of neighborhoodsInverse distance weighting
"euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
))(number = 5
)
Number of neighborhoods(number = 2
)
Power parameter(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric nameInverse smoothstep interpolation
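Inverse distance weighting predicts a weighted mean of the k nearest targets with weights 1/dᵖ, using the parameters listed above. A self-contained sketch with the Euclidean metric (illustrative names, not the library's API):

```javascript
// IDW prediction at query point x from training inputs xs and targets ys.
function idwPredict(xs, ys, x, k = 5, p = 2) {
	const dist = (a, b) => Math.hypot(...a.map((v, i) => v - b[i]));
	const nearest = xs
		.map((xi, i) => ({ d: dist(xi, x), y: ys[i] }))
		.sort((a, b) => a.d - b.d)
		.slice(0, k);
	if (nearest[0].d === 0) return nearest[0].y; // exact hit: avoid division by zero
	let wsum = 0;
	let vsum = 0;
	for (const { d, y } of nearest) {
		const w = 1 / d ** p;
		wsum += w;
		vsum += w * y;
	}
	return vsum / wsum;
}
```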
Iterative Self-Organizing Data Analysis Technique
(number)
Initial cluster count(number)
Minimum cluster count(number)
Maximum cluster count(number)
Minimum cluster size(number)
Standard deviation as split threshold(number)
Merge distanceIsolation forest
Isomap
(number = 0
)
Number of neighborhoodsIsotonic regression
Kalman filter
Kernel Density Estimation Outlier Score
"gaussian"
| "epanechnikov"
| function (number, number, number): number))Kernel density estimator
"gaussian"
| "rectangular"
| "triangular"
| "epanechnikov"
| "biweight"
| "triweight"
| function (number): number))(number = 0
)
Smoothing parameter for the kernelKernel k-means
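With the Gaussian kernel, the density estimate at a point is an average of kernels centered on the data, scaled by the smoothing parameter (bandwidth) h. A one-dimensional sketch, assuming a fixed h (a value of 0 in the constructor presumably means the bandwidth is chosen automatically):

```javascript
// Gaussian kernel density estimate at x with bandwidth h:
//   f̂(x) = (1 / (n·h)) Σ K((x - x_i) / h),  K(u) = exp(-u²/2) / √(2π)
function kde(data, x, h) {
	const K = u => Math.exp(-(u * u) / 2) / Math.sqrt(2 * Math.PI);
	return data.reduce((s, xi) => s + K((x - xi) / h), 0) / (data.length * h);
}
```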
(number = 3
)
Number of clustersKernelized Primal Estimated sub-GrAdientSOlver for SVM
"gaussian"
| "polynomial"
| function (Array<number>, Array<number>): number))(number)
Learning rateKernelized perceptron
"gaussian"
| "polynomial"
| function (Array<number>, Array<number>): number))(number = 1
)
Learning rateKullback-Leibler importance estimation procedure
Base class for k-means-like models
k-means model
Extends KMeansBase
k-means++ model
Extends KMeans
k-medoids model
Extends KMeans
k-medians model
Extends KMeans
semi-supervised k-means model
Extends KMeansBase
k-modes model
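The k-means family above varies in initialization and in how a cluster center is chosen (mean, median, medoid, mode); the core Lloyd iteration is shared. A sketch of one iteration (illustrative names, not the library's API):

```javascript
// One Lloyd iteration of k-means: assign points to the nearest centroid,
// then move each centroid to the mean of its assigned points.
function kmeansStep(data, centroids) {
	const dist2 = (a, b) => a.reduce((s, v, i) => s + (v - b[i]) ** 2, 0);
	const assign = data.map(p => {
		let best = 0;
		for (let c = 1; c < centroids.length; c++) {
			if (dist2(p, centroids[c]) < dist2(p, centroids[best])) best = c;
		}
		return best;
	});
	const next = centroids.map((c, ci) => {
		const members = data.filter((_, i) => assign[i] === ci);
		if (members.length === 0) return c; // keep an empty centroid in place
		return members[0].map((_, d) => members.reduce((s, m) => s + m[d], 0) / members.length);
	});
	return { assign, centroids: next };
}
```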
Base class for k-nearest neighbor models
(number = 5
)
Number of neighborhoods(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric namek-nearest neighbor
Extends KNNBase
(number = 5
)
Number of neighborhoods(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric namek-nearest neighbor regression
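k-nearest-neighbor classification with the parameters above predicts by majority vote among the k closest training points. A self-contained sketch using the Euclidean metric (the function name is illustrative):

```javascript
// k-NN classification by majority vote. Returns the winning label as a string.
function knnPredict(xs, ys, x, k = 5) {
	const dist = (a, b) => Math.hypot(...a.map((v, i) => v - b[i]));
	const votes = xs
		.map((xi, i) => ({ d: dist(xi, x), y: ys[i] }))
		.sort((a, b) => a.d - b.d)
		.slice(0, k);
	const counts = {};
	for (const { y } of votes) counts[y] = (counts[y] ?? 0) + 1;
	return Object.keys(counts).reduce((a, b) => (counts[a] >= counts[b] ? a : b));
}
```

The regression variant replaces the vote with the mean of the k nearest targets.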
Extends KNNBase
(number = 5
)
Number of neighborhoods(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric namek-nearest neighbor anomaly detection
Extends KNNBase
(number = 5
)
Number of neighborhoods(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric namek-nearest neighbor density estimation
Extends KNNBase
(number = 5
)
Number of neighborhoods(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric nameSemi-supervised k-nearest neighbor
Extends KNNBase
(number = 5
)
Number of neighborhoods(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric namek-prototypes model
(number)
Weight for categorical datak-SVD
Kolmogorov–Zurbenko filter
Label propagation
(("rbf"
| "knn"
) = rbf
)
Method name(number = 0.1
)
Sigma of normal distribution(number = Infinity
)
Number of neighborhoodsLabel spreading
(number = 0.2
)
Clamping factor(("rbf"
| "knn"
) = rbf
)
Method name(number = 0.1
)
Sigma of normal distribution(number = Infinity
)
Number of neighborhoodsLadder network
Fit model.
(Array<(any | null)>)
Target values(number)
Iteration count(number)
Learning rate(number)
Batch size{labeledLoss: number, unlabeledLoss: number}
: Loss valueLagrange interpolation
"weighted"
| "newton"
| ""
))(("weighted"
| "newton"
| ""
) = weighted
)
Method nameLanczos interpolation
(number)
OrderFit model parameters.
Laplacian edge detection
(number)
Threshold((4
| 8
) = 4
)
Number of neighborhoodsLaplacian eigenmaps
"rbf"
| "knn"
), k: number, sigma: number, laplacian: ("unnormalized"
| "normalized"
))(("rbf"
| "knn"
) = rbf
)
Affinity type name(number = 10
)
Number of neighborhoods(number = 1
)
Sigma of normal distribution(("unnormalized"
| "normalized"
) = unnormalized
)
Normalized laplacian matrix or notLeast absolute shrinkage and selection operator
(number = 1.0
)
Regularization strength(("CD"
| "ISTA"
| "LARS"
) = CD
)
Method nameLatent Dirichlet allocation
(number = 2
)
Topic countLinde-Buzo-Gray algorithm
Linear discriminant analysis
Fisher's linear discriminant analysis
Multiclass linear discriminant analysis
Linear discriminant analysis
Local Density Factor
(number)
Number of neighborhoodsLocal Distance-based Outlier Factor
(number)
Number of neighborhoodsLeast absolute deviations
Least squares
Linear interpolation
Locally Linear Embedding
(number = 1
)
Number of neighborhoodsLeast median squares regression
(number = 5
)
Sampling countLarge Margin Nearest Neighbor
Local Correlation Integral
(number = 0.5
)
AlphaLocally estimated scatterplot smoothing
Local Outlier Factor
(number)
Number of neighborhoodsLaplacian of gaussian filter
(number)
ThresholdLogarithmic interpolation
Logistic regression
Multinomial logistic regression
Local Outlier Probability
(number)
Number of neighborhoodsLocally weighted scatter plot smooth
Lowpass filter
(number = 0.5
)
Cutoff rateLp norm linear regression
(number = 2
)
Power parameter for normLatent Semantic Analysis
Least-squares density difference
LSDD for change point detection
least-squares importance fitting
Least trimmed squares
(number = 0.9
)
Sampling rateLocal Tangent Space Alignment
(number = 1
)
Number of neighborhoodsLearning Vector Quantization clustering
(number)
Number of clustersLearning Vector Quantization classifier
1
| 2
| 3
))((1
| 2
| 3
))
Type numberMedian Absolute Deviation
Many Adaptive Linear Neuron model
Margin Perceptron
(number)
Learning rateMarkov switching
(number)
Number of regimesMax absolute scaler
Maximum likelihood estimator
"normal"
)("normal"
= normal
)
Distribution nameMinimum Covariance Determinant
Mixture discriminant analysis
(number)
Number of componentsMulti-dimensional Scaling
Mean shift
(number)
Smoothing parameter for the kernelMetropolis-Hastings algorithm
(number)
Output size("gaussian"
= gaussian
)
Proposal density nameMin-max normalization
Margin Infused Relaxed Algorithm
Modified Locally Linear Embedding
(number = 1
)
Number of neighborhoodsMulti layer perceptron classifier
Multi layer perceptron regressor
Method of Optimal Direction
MONothetic Analysis Clustering
Monothetic Clustering
Monte Carlo agent
(RLEnvironmentBase)
Environment(number = 20
)
ResolutionMountain method
Simple moving average
Linear weighted moving average
Triangular moving average
Moving median
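The moving-statistic smoothers above all slide a window over the series; the simple moving average is the base case. A sketch (with shorter windows at the start rather than dropped values; the function name is illustrative):

```javascript
// Simple moving average with window size w.
function movingAverage(data, w) {
	return data.map((_, i) => {
		const start = Math.max(0, i - w + 1);
		const win = data.slice(start, i + 1);
		return win.reduce((s, v) => s + v, 0) / win.length;
	});
}
```

The weighted, triangular, and median variants change only the aggregation applied to each window.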
Mahalanobis Taguchi method
Mutual information feature selector
Mutual k-nearest-neighbor model
(number = 5
)
Number of neighborhoodsn-cubic interpolation
n-linear interpolation
Nadaraya–Watson kernel regression
(number?)
Sigmas of normal distributionNaive bayes
"gaussian"
)("gaussian"
= gaussian
)
Distribution nameNarrow Adaptive Regularization Of Weights
(number = 1
)
Tuning parameterNatural neighbor interpolation
Neighbourhood components analysis
Nearest centroid classifier
"euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
))(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric nameNegation Naive bayes
"gaussian"
)("gaussian"
= gaussian
)
Distribution nameNeural gas model
Exception for neuralnetwork class
Extends Error
(string)
Error message(any)
Some valueNeuralnetwork
(ComputationalGraph)
Graph of a network(("sgd"
| "adam"
| "momentum"
| "rmsprop"
) = sgd
)
Optimizer of the networkReturns neuralnetwork.
"sgd"
| "adam"
| "momentum"
| "rmsprop"
)): NeuralNetwork(Array<LayerObject>)
Network layers(string?)
Loss name(("sgd"
| "adam"
| "momentum"
| "rmsprop"
) = sgd
)
Optimizer of the networkNeuralNetwork
: Created NeuralnetworkLoad onnx model.
((Uint8Array | ArrayBuffer | File))
FilePromise<NeuralNetwork>
: Loaded NeuralNetworkReturns a copy of this.
NeuralNetwork
: Copied networkReturns calculated values.
(Matrix | Object<string, Matrix>)
: Calculated valuesFit model.
(number = 1
)
Iteration count(number = 0.1
)
Learning rate(number? = null
)
Batch size(object? = {}
)
OptionArray<number>
: Loss valueException for neuralnetwork layer class
Extends Error
(string)
Error message(any)
Some valueNeuralnetwork layer
(object)
ConfigBase class for Flow-based generative model
Extends Layer
Type: object
Computational graph for Neuralnetwork structure
Returns Graph.
(Array<LayerObject>)
Array of object represented a graphComputationalGraph
: GraphLoad onnx model.
((Uint8Array | ArrayBuffer | File))
FilePromise<ComputationalGraph>
: Loaded graphGraph nodes
Input nodes
Output nodes
Additive coupling layer
Extends FlowLayer
Adaptive piecewise linear layer
Extends Layer
Aranda layer
Extends Layer
Argmax layer
Extends Layer
Argmin layer
Extends Layer
Attention layer
Extends Layer
Average pool layer
Extends Layer
Batch normalization layer
Extends Layer
Bimodal derivative adaptive activation layer
Extends Layer
Bendable linear unit layer
Extends Layer
Bounded ReLU layer
Extends Layer
Continuously differentiable ELU layer
Extends Layer
Clip layer
Extends Layer
Concat layer
Extends Layer
Condition layer
Extends Layer
Constant layer
Extends Layer
Convolutional layer
Extends Layer
(Object)
Name | Description |
---|---|
$0.kernel any | |
$0.channel any (default null ) | |
$0.stride any (default null ) | |
$0.padding any (default null ) | |
$0.w any (default null ) | |
$0.activation any (default null ) | |
$0.l2_decay any (default 0 ) | |
$0.l1_decay any (default 0 ) | |
$0.activation_params any (default {} ) | |
$0.channel_dim any (default -1 ) | |
$0.rest ...any |
(object)
objectConcatenated ReLU layer
Extends Layer
Dropout layer
Extends Layer
Elastic ELU layer
Extends Layer
ELU layer
Extends Layer
Embedding layer
Extends Layer
Elastic ReLU layer
Extends Layer
(Object)
Name | Description |
---|---|
$0.rest ...any |
E-swish layer
Extends Layer
Fast ELU layer
Extends Layer
Flatten layer
Extends Layer
Flexible ReLU layer
Extends Layer
Fully connected layer
Extends Layer
Gaussian layer
Extends Layer
Global average pool layer
Extends Layer
Global Lp pool layer
Extends Layer
Global max pool layer
Extends Layer
GRU layer
Extends Layer
(Object)
Name | Description |
---|---|
$0.size any | |
$0.return_sequences any (default false ) | |
$0.w_z any (default null ) | |
$0.w_r any (default null ) | |
$0.w_h any (default null ) | |
$0.u_z any (default null ) | |
$0.u_r any (default null ) | |
$0.u_h any (default null ) | |
$0.b_z any (default null ) | |
$0.b_r any (default null ) | |
$0.b_h any (default null ) | |
$0.rest ...any |
(object)
objectHard shrink layer
Extends Layer
Hard sigmoid layer
Extends Layer
Hard tanh layer
Extends Layer
Hexpo layer
Extends Layer
Huber loss layer
Extends Layer
Include layer
Extends Layer
Input layer
Extends Layer
Improved sigmoid layer
Extends Layer
Layer normalization layer
Extends Layer
Leaky ReLU layer
Extends Layer
Log softmax layer
Extends Layer
Lp pool layer
Extends Layer
LRN layer
Extends Layer
LSTM layer
Extends Layer
(Object)
Name | Description |
---|---|
$0.size any | |
$0.return_sequences any (default false ) | |
$0.w_z any (default null ) | |
$0.w_in any (default null ) | |
$0.w_for any (default null ) | |
$0.w_out any (default null ) | |
$0.r_z any (default null ) | |
$0.r_in any (default null ) | |
$0.r_for any (default null ) | |
$0.r_out any (default null ) | |
$0.p_in any (default null ) | |
$0.p_for any (default null ) | |
$0.p_out any (default null ) | |
$0.b_z any (default null ) | |
$0.b_in any (default null ) | |
$0.b_for any (default null ) | |
$0.b_out any (default null ) | |
$0.rest ...any |
(object)
objectMatrix multiply layer
Extends Layer
Max pool layer
Extends Layer
Reduce mean layer
Extends Layer
Multiple parametric ELU layer
Extends Layer
MSE loss layer
Extends Layer
Multibin trainable linear unit layer
Extends Layer
Natural logarithm ReLU layer
Extends Layer
One-hot layer
Extends Layer
Output layer
Extends Layer
Pade activation unit layer
Extends Layer
Parametric deformable ELU layer
Extends Layer
Parametric ELU layer
Extends Layer
Piecewise linear unit layer
Extends Layer
Parametric ReLU layer
Extends Layer
Parametric rectified exponential unit layer
Extends Layer
Reduce product layer
Extends Layer
Parametric sigmoid function layer
Extends Layer
Penalized tanh layer
Extends Layer
Parametric tanh linear unit layer
Extends Layer
Random layer
Extends Layer
Reduce max layer
Extends Layer
Reduce min layer
Extends Layer
Rectified power unit layer
Extends Layer
Reshape layer
Extends Layer
Simple RNN layer
Extends Layer
(Object)
Name | Description |
---|---|
$0.size any | |
$0.out_size any (default null ) | |
$0.activation any (default 'tanh' ) | |
$0.recurrent_activation any (default 'sigmoid' ) | |
$0.return_sequences any (default false ) | |
$0.w_xh any (default null ) | |
$0.w_hh any (default null ) | |
$0.w_hy any (default null ) | |
$0.b_xh any (default null ) | |
$0.b_hh any (default null ) | |
$0.b_hy any (default null ) | |
$0.activation_params any (default {} ) | |
$0.recurrent_activation_params any (default {} ) | |
$0.rest ...any |
(object)
objectRandomized ReLU layer
Extends Layer
Random translation ReLU layer
Extends Layer
(Object)
Name | Description |
---|---|
$0.rest ...any |
Scaled ELU layer
Extends Layer
Sigmoid layer
Extends Layer
Self learnable AF layer
Extends Layer
Softplus linear unit layer
Extends Layer
Soft shrink layer
Extends Layer
Softargmax layer
Extends Layer
Softmax layer
Extends Layer
Softmin layer
Extends Layer
Softplus layer
Extends Layer
Sparse layer
Extends Layer
Split layer
Extends Layer
Shifted ReLU layer
Extends Layer
Soft root sign layer
Extends Layer
Scaled tanh layer
Extends Layer
Standard deviation layer
Extends Layer
Reduce sum layer
Extends Layer
Supervisor layer
Extends Layer
Swish layer
Extends Layer
Trainable AF layer
Extends Layer
Thresholded ReLU layer
Extends Layer
Transpose layer
Extends Layer
Variable layer
Extends Layer
Variance layer
Extends Layer
Niblack thresholding
Flow-based generative model non-linear independent component estimation
Reverse layer
Extends Layer
Non-local means filter
Non-negative matrix factorization
Natural Neighborhood Based Classification Algorithm
"euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
))(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric nameNormal Herd
(("full"
| "exact"
| "project"
| "drop"
) = exact
)
Method name(number = 0.1
)
Tradeoff value between passiveness and aggressivenessOne-class support vector machine
"gaussian"
| "linear"
| function (Array<number>, Array<number>): number), kernelArgs: Array<any>?)Outlier Detection using Indegree Number
Online gradient descent
"zero_one"
)(number = 1
)
Tuning parameter("zero_one"
= zero_one
)
Loss type nameOrdering points to identify the clustering structure
(number = Infinity
)
Radius to determine neighborhood(number = 5
)
Number of neighborhood with core distance(("euclid"
| "manhattan"
| "chebyshev"
) = euclid
)
Metric namearbitrarily ORiented projected CLUSter generation
Otsu's thresholding
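Otsu's method selects the threshold maximizing the between-class variance σ²_b(t) = w₀w₁(μ₀ − μ₁)² of the two classes it induces. A sketch over an 8-bit histogram (illustrative name, not the library's API):

```javascript
// Otsu's threshold over grayscale values 0..255.
function otsuThreshold(values) {
	const hist = new Array(256).fill(0);
	for (const v of values) hist[v]++;
	const total = values.length;
	let best = 0;
	let bestVar = -1;
	for (let t = 0; t < 256; t++) {
		let w0 = 0, sum0 = 0, w1 = 0, sum1 = 0;
		for (let v = 0; v < 256; v++) {
			if (v <= t) { w0 += hist[v]; sum0 += v * hist[v]; }
			else { w1 += hist[v]; sum1 += v * hist[v]; }
		}
		if (w0 === 0 || w1 === 0) continue; // both classes must be non-empty
		const d = sum0 / w0 - sum1 / w1;
		const v2 = (w0 / total) * (w1 / total) * d * d;
		if (v2 > bestVar) { bestVar = v2; best = t; }
	}
	return best;
}
```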
Partitioning Around Medoids
(number)
Number of clustersParticle filter
Passing-Bablok method
Passive Aggressive
0
| 1
| 2
))((0
| 1
| 2
) = 0
)
Version numberPerceptron Algorithm with Uneven Margins
Principal component analysis
Dual Principal component analysis
Kernel Principal component analysis
"gaussian"
| "polynomial"
| function (Array<number>, Array<number>): number), kernelArgs: Array<any>?)(Array<any>? = []
)
Arguments for kernelPrincipal component analysis for anomaly detection
Extends PCA
Possibilistic c-means
(number = 2
)
Fuzziness factorPrincipal component regression
Primal Estimated sub-GrAdientSOlver for SVM
Percentile anomaly detection
(number)
Percentile value(("data"
| "normal"
) = data
)
Distribution namePerceptron
(number)
Learning rateAveraged perceptron
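The perceptron's mistake-driven update with the learning rate above can be sketched as follows, for labels y ∈ {−1, +1} (the function name is illustrative, not the library's API):

```javascript
// Perceptron: on a misclassified sample (y·(w·x + b) ≤ 0),
// update w ← w + r·y·x and b ← b + r·y with learning rate r.
function perceptronFit(xs, ys, rate = 1, epochs = 10) {
	const w = new Array(xs[0].length).fill(0);
	let b = 0;
	for (let e = 0; e < epochs; e++) {
		for (let i = 0; i < xs.length; i++) {
			const score = xs[i].reduce((s, v, d) => s + v * w[d], b);
			if (ys[i] * score <= 0) {
				for (let d = 0; d < w.length; d++) w[d] += rate * ys[i] * xs[i][d];
				b += rate * ys[i];
			}
		}
	}
	return x => (x.reduce((s, v, d) => s + v * w[d], b) >= 0 ? 1 : -1);
}
```

The averaged variant returns the running average of the weight vectors rather than the final one.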
(number)
Learning rateMulticlass perceptron
(number)
Learning ratePhansalkar thresholding
(number = 3
)
Size of local range(number = 0.25
)
Tuning parameter(number = 0.5
)
Tuning parameter(number = 2
)
Tuning parameter(number = 10
)
Tuning parameterPartial least squares regression
(number)
Limit on the number of latent factorsProbabilistic latent semantic analysis
(number = 2
)
Number of clustersPoisson regression
(number)
Learning ratePolicy gradient agent
(RLEnvironmentBase)
Environment(number = 20
)
ResolutionPolynomial histogram
Polynomial interpolation
Projection pursuit regression
(number = 5
)
Number of functionsPrewitt edge detection
(number)
ThresholdPriestley–Chao kernel estimator
(number)
Smoothing parameter for the kernelPrincipal curves
Probabilistic Principal component analysis
(("analysis"
| "em"
| "bayes"
) = analysis
)
Method name(number)
Reduced dimensionType: object
Probability based classifier
(any)
Probit
Multinomial probit
Extends Probit
PROjected CLUStering algorithm
(number)
Number of clusters(number)
Number to multiply the number of clusters for sample size(number)
Number to multiply the number of clusters for final set size(number)
Average dimensions(number = 0.1
)
Minimum deviation to check the medoid is badProjectron
"gaussian"
| "polynomial"
| function (Array<number>, Array<number>): number))(number = 0
)
ThresholdProjectron++
"gaussian"
| "polynomial"
| function (Array<number>, Array<number>): number))(number = 0
)
ThresholdP-tile thresholding
(number = 0.5
)
Percentile valueBase class for Q-table
(RLEnvironmentBase)
Environment(number = 20
)
ResolutionStates
Type: Array<(Array<any> | RLRealRange | RLIntRange)>
Actions
Type: Array<(Array<any> | RLRealRange | RLIntRange)>
Q-learning agent
(RLEnvironmentBase)
Environment(number = 20
)
ResolutionQuadratic discriminant analysis
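The Q-table agents above share the tabular Q-learning update Q(s,a) ← Q(s,a) + α(r + γ·maxₐ′ Q(s′,a′) − Q(s,a)). A sketch of one update over an array-of-arrays table (illustrative name, not the library's API):

```javascript
// One tabular Q-learning update. Q is an array of per-state action-value arrays;
// sNext === null marks a terminal transition (no bootstrap term).
function qUpdate(Q, s, a, r, sNext, alpha = 0.1, gamma = 0.9) {
	const maxNext = sNext === null ? 0 : Math.max(...Q[sNext]);
	Q[s][a] += alpha * (r + gamma * maxNext - Q[s][a]);
}
```

Repeated over sampled transitions, the table converges toward the optimal action values.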
Quantile regression
(number = 0.5
)
Quantile valueBase class for radius neighbor models
(number = 1
)
Radius to determine neighborhood(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric nameradius neighbor
Extends RadiusNeighborBase
(number = 1
)
Radius to determine neighborhood(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric nameradius neighbor regression
"euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
))Extends RadiusNeighborBase
(number = 1
)
Radius to determine neighborhood(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric nameSemi-supervised radius neighbor
"euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
))Extends RadiusNeighborBase
(number = 5
)
Radius to determine neighborhood(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric nameRamer-Douglas-Peucker algorithm
(number = 0.1
)
Threshold of distanceBase class for random forest models
(number)
Number of trees(number = 0.8
)
Sampling rate((DecisionTreeClassifier | DecisionTreeRegression))
Tree class(Array<any>? = null
)
Arguments for constructor of tree classRandom forest classifier
Extends RandomForest
(number)
Number of trees(number = 0.8
)
Sampling rate(("ID3"
| "CART"
) = CART
)
Method nameRandom forest regressor
Extends RandomForest
Random projection
"uniform"
| "root3"
| "normal"
))(("uniform"
| "root3"
| "normal"
) = uniform
)
Initialize method nameType: object
Random sample consensus
(any)
((number | null) = null
)
Sampling rateRadial basis function network
"linear"
| "gaussian"
| "multiquadric"
| "inverse quadratic"
| "inverse multiquadric"
| "thin plate"
| "bump"
), e: number, l: number)(("linear"
| "gaussian"
| "multiquadric"
| "inverse quadratic"
| "inverse multiquadric"
| "thin plate"
| "bump"
) = linear
)
RBF name(number = 1
)
Tuning parameter(number = 0
)
Regularization parameterRestricted Boltzmann machine
Gaussian-Bernoulli Restricted Boltzmann machine
(number)
Size of hidden layer(number = 0.01
)
Learning rate(boolean = false
)
Whether to keep sigma fixed (not learned)Randomized Budget Perceptron
(number)
Number of support vectorsRelative Density Factor
(number = 1.0
)
RadiusRelative Density-based Outlier Score
Ridge regression
(number = 0.1
)
Regularization strengthKernel ridge regression
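Ridge regression minimizes Σ(y − w·x − b)² + λw². A single-feature sketch with an unpenalized intercept, solved in closed form after centering (illustrative name, not the library's API):

```javascript
// Closed-form ridge for one feature: w = Σ x̃ỹ / (Σ x̃² + λ), b = ȳ - w·x̄,
// where x̃, ỹ are the centered data.
function ridgeFit(xs, ys, lambda = 0.1) {
	const mean = a => a.reduce((s, v) => s + v, 0) / a.length;
	const mx = mean(xs);
	const my = mean(ys);
	let sxy = 0, sxx = 0;
	for (let i = 0; i < xs.length; i++) {
		sxy += (xs[i] - mx) * (ys[i] - my);
		sxx += (xs[i] - mx) ** 2;
	}
	const w = sxy / (sxx + lambda);
	return { w, b: my - w * mx };
}
```

At λ = 0 this reduces to ordinary least squares; larger λ shrinks the slope toward zero.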
"gaussian"
| function (Array<number>, Array<number>): number))(number = 0.1
)
Regularization strengthRobust Kernel-based Outlier Factor
"gaussian"
| "epanechnikov"
| "volcano"
| function (Array<number>): number))(number)
Number of neighborhoods(number)
Smoothing parameter(number)
Sensitivity parameterRecursive least squares
Repeated median regression
Recurrent neural network
"rnn"
| "lstm"
| "gru"
), window: number, unit: number, out_size: number, optimizer: string)(("rnn"
| "lstm"
| "gru"
) = lstm
)
Method name(number = 10
)
Window size(number = 10
)
Size of recurrent unit(number = 1
)
Output size(string = adam
)
Optimizer of the networkMethod
Type: ("rnn"
| "lstm"
| "gru"
)
Roberts cross
(number)
ThresholdRobust scaler
Type: object
RObust Clustering using linKs
(number)
ThresholdRelaxed Online Maximum Margin Algorithm
Aggressive Relaxed Online Maximum Margin Algorithm
Extends ROMMA
Relevance vector machine
Semi-Supervised Support Vector Machine
"gaussian"
| "linear"
| function (Array<number>, Array<number>): number), kernelArgs: Array<any>?)(Array<any>? = []
)
Arguments for kernelSammon mapping
SARSA agent
(RLEnvironmentBase)
Environment(number = 20
)
ResolutionSauvola thresholding
Savitzky-Golay filter
(number)
Number of coefficientsSequentially Discounting Autoregressive model
Segmented regression
(number = 3
)
Number of segmentsSelective Naive Bayes
"gaussian"
)("gaussian"
= gaussian
)
Distribution nameSelective sampling Perceptron
Selective sampling Perceptron with adaptive parameter
Selective sampling second-order Perceptron
(number)
Smooth parameterSelective sampling Winnow
Self-training
Semi-supervised naive Bayes
(number = 1
)
Weight applied to the contribution of the unlabeled dataSezan's thresholding
(number = 0.5
)
Tradeoff value between black and white(number = 5
)
Sigma of normal distributionShifting Perceptron Algorithm
(number)
Rate of weight decayImplicit online Learning with Kernels
"gaussian"
| "polynomial"
| function (Array<number>, Array<number>): number), loss: ("square"
| "hinge"
| "logistic"
))(number = 1
)
Learning rate(number = 1
)
Regularization constant(number = 1
)
Penalty imposed on point prediction violations(("square"
| "hinge"
| "logistic"
) = hinge
)
Loss type nameSparse Implicit online Learning with Kernels
"gaussian"
| "polynomial"
| function (Array<number>, Array<number>): number), loss: ("square"
| "hinge"
| "graph"
| "logistic"
))Extends ILK
(number = 1
)
Learning rate(number = 1
)
Regularization constant(number = 1
)
Penalty imposed on point prediction violations(number = 10
)
Buffer size(("square"
| "hinge"
| "graph"
| "logistic"
) = hinge
)
Loss type nameSinc interpolation
Fit model parameters.
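Sinc interpolation reconstructs a band-limited signal from uniform samples via the Whittaker-Shannon formula f(t) = Σₙ x[n]·sinc((t − nT)/T). A short sketch (function names are illustrative, not the library API):

```javascript
// Whittaker-Shannon (sinc) interpolation sketch for uniform samples x[n]
// taken at spacing T. Illustrative names, not the library API.
const sinc = u => (u === 0 ? 1 : Math.sin(Math.PI * u) / (Math.PI * u))

function sincInterpolate(samples, t, T = 1) {
  // f(t) = sum_n x[n] * sinc((t - n*T) / T)
  return samples.reduce((s, xn, n) => s + xn * sinc((t - n * T) / T), 0)
}
```

At sample positions t = nT the kernel collapses to 1 for the matching sample and 0 elsewhere, so the interpolant passes exactly through the data.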
Sliced inverse regression
(number)
Number of slicesSpherical linear interpolation
(number = 1
)
Angle subtended by the arcSlice sampling
Standardized Major Axis regression
Smirnov-Grubbs test
(number)
Significance levelSmoothstep interpolation
(number = 1
)
OrderSnakes (active contour model)
(number)
Penalty for length(number)
Penalty for curvature(number)
Penalty for conformity with image(number = 100
)
Number of verticesSobel edge detection
(number)
ThresholdSoft k-means
(number = 1
)
Tuning parameterSelf-Organizing Map
(number)
Input size(number)
Output size(number = 20
)
Resolution of outputSecond order perceptron
(number = 1
)
Tuning parameterSpectral clustering
(("rbf"
| "knn"
) = rbf
)
Affinity type name(object = {}
)
ConfigAdd a new cluster.
Clear all clusters.
Spline smoothing
(number)
Smoothing parameterSpline interpolation
Split and merge segmentation
(("variance"
| "uniformity"
) = variance
)
Method name(number = 0.1
)
ThresholdSquared-loss Mutual information change point detection
(object)
Density ratio estimation model(number)
Window size(number?)
Take number(number?)
LagSingular-spectrum transformation
Standardization
(number = 0
)
Delta Degrees of FreedomStatistical Region Merging
(number)
ThresholdSTatistical INformation Grid-based method
Stoptron
"gaussian"
| "polynomial"
| function (Array<number>, Array<number>): number))(number = 10
)
Cache sizeSupport vector clustering
"gaussian"
| "linear"
| function (Array<number>, Array<number>): number), kernelArgs: Array<any>?)(Array<any>? = []
)
Arguments for kernelSupport vector machine
"gaussian"
| "linear"
| function (Array<number>, Array<number>): number), kernelArgs: Array<any>?)(Array<any>? = []
)
Arguments for kernelSupport vector regression
"gaussian"
| "linear"
| function (Array<number>, Array<number>): number), kernelArgs: Array<any>?)(Array<any>? = []
)
Arguments for kernelTheil-Sen regression
Thompson test
(number)
Significance levelTietjen-Moore Test
(number)
Number of outliersTighter Budget Perceptron
(number = 0
)
Margin(number = 0
)
Cache size(("perceptron"
| "mira"
| "nobias"
) = perceptron
)
Update ruleTightest Perceptron
"gaussian"
| "polynomial"
| function (Array<number>, Array<number>): number), accuracyLoss: ("zero_one"
| "hinge"
))Trigonometric interpolation
Stochastic Neighbor Embedding
T-distributed Stochastic Neighbor Embedding
Tukey regression
(number)
Error toleranceTukey's fences
(number)
Tuning parameterRelative unconstrained Least-Squares Importance Fitting
unconstrained Least-Squares Importance Fitting
Extends RuLSIF
Uniform Manifold Approximation and Projection
(number)
Reduced dimension(number = 10
)
Number of neighborhoods(number = 0.1
)
Minimum distanceUniversal-set Naive Bayes
"gaussian"
)("gaussian"
= gaussian
)
Distribution nameVariational Autoencoder
""
| "conditional"
))(number)
Input size(number)
Number of noise dimensions(string)
Optimizer of the network((number | null))
Class size for conditional type((""
| "conditional"
))
Type nameVector Autoregressive model
(number)
OrderVariational Gaussian Mixture Model
Voted-perceptron
(number = 1
)
Learning rateWeighted k-means model
(number)
Tuning parameterWeighted K-Nearest Neighbor
"euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
), weight: ("gaussian"
| "rectangular"
| "triangular"
| "epanechnikov"
| "quartic"
| "triweight"
| "cosine"
| "inversion"
))(number)
Number of neighbors(("euclid"
| "manhattan"
| "chebyshev"
| "minkowski"
) = euclid
)
Metric name(("gaussian"
| "rectangular"
| "triangular"
| "epanechnikov"
| "quartic"
| "triweight"
| "cosine"
| "inversion"
) = gaussian
)
Weighting scheme nameWeighted least squares
Winnow
(number = 2
)
Learning rate(number? = null
)
Threshold((1
| 2
) = 1
)
Version of modelWord2Vec
"CBOW"
| "skip-gram"
), n: number, wordsOrNumber: (number | Array<string>), reduce_size: number, optimizer: string)(("CBOW"
| "skip-gram"
))
Method name(number)
Number of how many adjacent words to learn(number)
Reduced dimension(string)
Optimizer of the networkeXtreme Gradient Boosting regression
(number = 1
)
Maximum depth of tree(number = 1.0
)
Sampling rate(number = 0.1
)
Regularization parameter(number = 0.5
)
Learning rateeXtreme Gradient Boosting classifier
Extends XGBoost
(number = 1
)
Maximum depth of tree(number = 1.0
)
Sampling rate(number = 0.1
)
Regularization parameter(number = 0
)
Learning ratex-means
Yeo-Johnson power transformation
(number? = null
)
LambdaZero-inflated poisson
Zero-truncated poisson
Acrobot environment
Extends RLEnvironmentBase
Real number range state/action
Integer number range state/action
Base class for reinforcement learning environment
(Array<(Array<any> | RLRealRange | RLIntRange)>)
: Action variables(Array<(Array<any> | RLRealRange | RLIntRange)>)
: State variablesReturns cloned environment.
RLEnvironmentBase
: Cloned environmentClose environment.
Reset environment.
Do action without changing the environment and return the new state.
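The base-class contract above boils down to reset/step plus a side-effect-free "test" action. A minimal standalone environment honoring that contract (a 1-D corridor; the class and member names are illustrative, not the library's):

```javascript
// Minimal sketch of a reset/step environment in the spirit of the base class
// described above. `CorridorEnv` and its members are illustrative names.
class CorridorEnv {
  constructor(length = 5) {
    this._length = length
    this.actions = [[-1, 1]] // move left or right
    this.reset()
  }

  reset() {
    this._pos = 0
    return [this._pos] // initial state
  }

  step(action) {
    // Apply the action; reward 1 on reaching the right end.
    this._pos = Math.max(0, Math.min(this._length, this._pos + action[0]))
    const done = this._pos === this._length
    return { state: [this._pos], reward: done ? 1 : 0, done }
  }

  test(state, action) {
    // Evaluate an action without changing the environment, as described above.
    const pos = Math.max(0, Math.min(this._length, state[0] + action[0]))
    return { state: [pos], reward: pos === this._length ? 1 : 0, done: pos === this._length }
  }
}
```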
Empty environment
Extends RLEnvironmentBase
Blackjack environment
Extends RLEnvironmentBase
Breaker environment
Extends RLEnvironmentBase
Cartpole environment
Extends RLEnvironmentBase
Draughts environment
Extends RLEnvironmentBase
Gomoku environment
Extends RLEnvironmentBase
Grid world environment
Extends RLEnvironmentBase
In-hypercube environment
Extends RLEnvironmentBase
(number = 2
)
Dimension of the environmentSmooth maze environment
Extends RLEnvironmentBase
MountainCar environment
Extends RLEnvironmentBase
Pendulum environment
Extends RLEnvironmentBase
Reversi environment
Extends RLEnvironmentBase
Waterball environment
Extends RLEnvironmentBase
Returns accuracy.
number
: AccuracyReturns precision with macro average.
number
: PrecisionReturns recall with macro average.
number
: RecallReturns F-score with macro average.
number
: F-scoreReturns Cohen's kappa coefficient.
number
: Cohen's kappa coefficientReturns Davies-Bouldin index.
number
: Davies-Bouldin indexReturns Silhouette coefficient.
Array<number>
: Silhouette coefficientReturns Dunn index.
"max"
| "mean"
| "centroid"
), inter_d: "centroid"
): number(Array<any>)
Predicted categories(("max"
| "mean"
| "centroid"
) = 'max'
)
Intra-cluster distance type("centroid"
= 'centroid'
)
Inter-cluster distance typenumber
: Dunn indexReturns Purity.
number
: PurityReturns Rand index.
number
: Rand indexReturns Dice index.
number
: Dice indexReturns Jaccard index.
number
: Jaccard indexReturns Fowlkes-Mallows index.
number
: Fowlkes-Mallows indexReturns Co-Ranking Matrix.
number
: Co-Ranking Matrix valueReturns MSE (Mean Squared Error).
(number | Array<number>)
: Mean Squared ErrorReturns RMSE (Root Mean Squared Error).
(number | Array<number>)
: Root Mean Squared ErrorReturns MAE (Mean Absolute Error).
(number | Array<number>)
: Mean Absolute ErrorReturns MAD (Median Absolute Deviation).
(number | Array<number>)
: Median Absolute DeviationReturns RMSPE (Root Mean Squared Percentage Error).
(number | Array<number>)
: Root Mean Squared Percentage ErrorReturns MAPE (Mean Absolute Percentage Error).
(number | Array<number>)
: Mean Absolute Percentage ErrorReturns MSLE (Mean Squared Logarithmic Error).
(number | Array<number>)
: Mean Squared Logarithmic ErrorReturns RMSLE (Root Mean Squared Logarithmic Error).
(number | Array<number>)
: Root Mean Squared Logarithmic ErrorReturns R2 (coefficient of determination).
(number | Array<number>)
: Coefficient of determinationReturns correlation.
(number | Array<number>)
: Correlation
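Several of the metrics listed above have one-line definitions. As a sketch (plain functions with illustrative names, not the library API):

```javascript
// Sketches of a few metrics from the list above. Illustrative names only.
const mse = (p, t) => p.reduce((s, pi, i) => s + (pi - t[i]) ** 2, 0) / p.length
const rmse = (p, t) => Math.sqrt(mse(p, t))
const mae = (p, t) => p.reduce((s, pi, i) => s + Math.abs(pi - t[i]), 0) / p.length

// R2 = 1 - SS_res / SS_tot, where SS_tot is the variance around the target mean.
const r2 = (p, t) => {
  const mean = t.reduce((a, b) => a + b, 0) / t.length
  const ssRes = t.reduce((s, ti, i) => s + (ti - p[i]) ** 2, 0)
  const ssTot = t.reduce((s, ti) => s + (ti - mean) ** 2, 0)
  return 1 - ssRes / ssTot
}

// Fraction of predicted categories matching the true ones.
const accuracy = (p, t) => p.reduce((s, pi, i) => s + (pi === t[i] ? 1 : 0), 0) / p.length
```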