Repost: A Collection of SVM Kernel Functions
Reposted from: /html/y.html
Kernel Functions
Below is a list of some kernel functions available in the existing literature. The formulas below are given in LaTeX notation. I cannot guarantee that all of them are perfectly correct, so use them at your own risk. Most of them were originally used or proposed in published articles.
The Linear kernel is the simplest kernel function. It is given by the inner product plus an optional constant c. Kernel algorithms using a linear kernel are often equivalent to their non-kernel counterparts, e.g. kernel PCA with a linear kernel is the same as standard PCA.

k(x, y) = x^T y + c
The Polynomial kernel is a non-stationary kernel. Polynomial
kernels are well suited for problems where all the training data is
normalized.
k(x, y) = (\alpha x^T y + c)^d
Adjustable parameters are the slope alpha, the constant term c and
the polynomial degree d.
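As a quick illustrative sketch (not part of the original post; the data values and parameter defaults are assumptions), the linear and polynomial kernels can be computed over data matrices with NumPy:

```python
import numpy as np

def linear_kernel(X, Y, c=0.0):
    # k(x, y) = x^T y + c, computed for all pairs of rows of X and Y
    return X @ Y.T + c

def polynomial_kernel(X, Y, alpha=1.0, c=1.0, d=2):
    # k(x, y) = (alpha * x^T y + c)^d
    return (alpha * (X @ Y.T) + c) ** d

X = np.array([[1.0, 2.0], [3.0, 4.0]])
K_lin = linear_kernel(X, X)
K_poly = polynomial_kernel(X, X, d=2)
```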
The Gaussian kernel is an example of a radial basis function kernel:

k(x, y) = \exp\left(-\frac{\lVert x-y \rVert^2}{2\sigma^2}\right)

Alternatively, it could also be implemented using

k(x, y) = \exp\left(-\gamma \lVert x-y \rVert^2\right)
The adjustable parameter sigma plays a major role in the
performance of the kernel, and should be carefully tuned to the
problem at hand. If overestimated, the exponential will behave
almost linearly and the higher-dimensional projection will start to
lose its non-linear power. On the other hand, if underestimated,
the function will lack regularization and the decision boundary
will be highly sensitive to noise in training data.
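A minimal sketch (not from the original post) showing both parameterizations, which coincide when gamma = 1 / (2 sigma^2):

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 sigma^2)) for all pairs of rows
    sq_dists = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2 * sigma ** 2))

def gaussian_kernel_gamma(X, Y, gamma=0.5):
    # equivalent form k(x, y) = exp(-gamma ||x - y||^2)
    sq_dists = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * sq_dists)

X = np.array([[0.0, 0.0], [3.0, 4.0]])
K1 = gaussian_kernel(X, X, sigma=2.0)
K2 = gaussian_kernel_gamma(X, X, gamma=1.0 / (2 * 2.0 ** 2))
```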
The exponential kernel is closely related to the Gaussian kernel,
with only the square of the norm left out. It is also a radial
basis function kernel.
k(x, y) = \exp\left(-\frac{\lVert x-y \rVert}{2\sigma^2}\right)
The Laplace kernel is completely equivalent to the exponential kernel, except for being less sensitive to changes in the sigma parameter. Being equivalent, it is also a radial basis function kernel.

k(x, y) = \exp\left(-\frac{\lVert x-y \rVert}{\sigma}\right)
It is important to note that the observations made about the sigma
parameter for the Gaussian kernel also apply to the Exponential and
Laplacian kernels.
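As a sketch (not from the original post), the Laplace kernel with the Euclidean norm used above can be computed as follows; note that some libraries (e.g. scikit-learn's `laplacian_kernel`) use the L1 norm instead, so check your library's convention:

```python
import numpy as np

def laplace_kernel(X, Y, sigma=1.0):
    # k(x, y) = exp(-||x - y|| / sigma), Euclidean norm as in the formula above
    dists = np.sqrt(np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1))
    return np.exp(-dists / sigma)

X = np.array([[0.0, 0.0], [3.0, 4.0]])
K = laplace_kernel(X, X, sigma=1.0)
```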
The ANOVA kernel is also a radial basis function kernel, just as the Gaussian and Laplacian kernels. It is said to perform well in multidimensional regression problems (Hofmann, 2008).

k(x, y) = \sum_{k=1}^n \exp\left(-\sigma (x^k - y^k)^2\right)^d
The Hyperbolic Tangent Kernel is also known as the Sigmoid Kernel and as the Multilayer Perceptron (MLP) kernel. The Sigmoid Kernel comes from the neural networks field, where the bipolar sigmoid function is often used as an activation function for artificial neurons.

k(x, y) = \tanh(\alpha x^T y + c)
It is interesting to note that an SVM model using a sigmoid kernel function is equivalent to a two-layer perceptron neural network. This kernel was quite popular for support vector machines due to its origin in neural network theory. Also, despite being only conditionally positive definite, it has been found to work well in practice. There are two adjustable parameters in the sigmoid kernel, the slope alpha and the intercept constant c. A common value for alpha is 1/N, where N is the data dimension. A more detailed study on sigmoid kernels can be found in the work of Hsuan-Tien Lin and Chih-Jen Lin.
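A minimal sketch (not from the original post) with alpha defaulting to 1/N as discussed above; the intercept c = -1 here is an arbitrary illustrative choice:

```python
import numpy as np

def sigmoid_kernel(X, Y, alpha=None, c=-1.0):
    # k(x, y) = tanh(alpha * x^T y + c); alpha defaults to 1/N, N = data dimension
    if alpha is None:
        alpha = 1.0 / X.shape[1]
    return np.tanh(alpha * (X @ Y.T) + c)

X = np.array([[1.0, 2.0], [3.0, 4.0]])
K = sigmoid_kernel(X, X)  # alpha = 1/2 here
```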
The Rational Quadratic kernel is less computationally intensive
than the Gaussian kernel and can be used as an alternative when
using the Gaussian becomes too expensive.
k(x, y) = 1 - \frac{\lVert x-y \rVert^2}{\lVert x-y \rVert^2 + c}
The Multiquadric kernel can be used in the same situations as the Rational Quadratic kernel. As is the case with the Sigmoid kernel, it is also an example of a non-positive definite kernel.

k(x, y) = \sqrt{\lVert x-y \rVert^2 + c^2}
The Inverse Multiquadric kernel. As with the Gaussian kernel, it results in a kernel matrix with full rank and thus forms an infinite-dimension feature space.

k(x, y) = \frac{1}{\sqrt{\lVert x-y \rVert^2 + c^2}}
The circular kernel comes from a statistics perspective. It is an example of an isotropic stationary kernel and is positive definite in R^2.

k(x, y) = \frac{2}{\pi} \arccos\left(-\frac{\lVert x-y \rVert}{\sigma}\right) - \frac{2}{\pi} \frac{\lVert x-y \rVert}{\sigma} \sqrt{1 - \left(\frac{\lVert x-y \rVert}{\sigma}\right)^2}

\mbox{if}~ \lVert x-y \rVert < \sigma \mbox{, zero otherwise}
The spherical kernel is similar to the circular kernel, but is positive definite in R^3.

k(x, y) = 1 - \frac{3}{2} \frac{\lVert x-y \rVert}{\sigma} + \frac{1}{2} \left(\frac{\lVert x-y \rVert}{\sigma}\right)^3

\mbox{if}~ \lVert x-y \rVert < \sigma \mbox{, zero otherwise}
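A sketch (not from the original post) of the spherical kernel, showing its compact support: entries are zero whenever the distance reaches sigma:

```python
import numpy as np

def spherical_kernel(X, Y, sigma=1.0):
    # k = 1 - (3/2) r + (1/2) r^3 with r = ||x - y|| / sigma, zero when r >= 1
    d = np.sqrt(np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1))
    r = d / sigma
    K = 1 - 1.5 * r + 0.5 * r ** 3
    return np.where(d < sigma, K, 0.0)

X = np.array([[0.0], [0.5], [2.0]])
K = spherical_kernel(X, X, sigma=1.0)
```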
The Wave kernel is given by:

k(x, y) = \frac{\theta}{\lVert x-y \rVert} \sin\frac{\lVert x-y \rVert}{\theta}
The Power kernel is also known as the (unrectified) triangular kernel. It is an example of a scale-invariant kernel and is also only conditionally positive definite.

k(x, y) = -\lVert x-y \rVert^d
The Log kernel seems to be particularly interesting for images, but
is only conditionally positive definite.
k(x, y) = -\log\left(\lVert x-y \rVert^d + 1\right)
The Spline kernel is given as a piecewise cubic polynomial:

k(x, y) = 1 + xy + xy\min(x,y) - \frac{x+y}{2}\min(x,y)^2 + \frac{1}{3}\min(x,y)^3

However, what it actually means is:

k(x, y) = \prod_{i=1}^d \left(1 + x_i y_i + x_i y_i \min(x_i, y_i) - \frac{x_i + y_i}{2}\min(x_i, y_i)^2 + \frac{1}{3}\min(x_i, y_i)^3\right)

with x, y \in \mathbb{R}^d.
The B-Spline kernel is defined on the interval [-1, 1]. It is given by the recursive formula:

k(x, y) = B_{2p+1}(x - y)

\mbox{where~} p \in \mathbb{N} \mbox{~with~} B_{i+1} := B_i \otimes B_0.

In the multidimensional case it is given by:

k(x, y) = \prod_{p=1}^d B_{2n+1}(x_p - y_p)

Alternatively, B_n can be computed using the explicit expression:

B_n(x) = \frac{1}{n!} \sum_{k=0}^{n+1} \binom{n+1}{k} (-1)^k \left(x + \frac{n+1}{2} - k\right)^n_+

where x_+^d is defined as the truncated power function:

x^d_+ = \begin{cases} x^d, & \mbox{if } x > 0 \\ 0, & \mbox{otherwise} \end{cases}
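The explicit expression can be sketched directly in Python (not from the original post; scalar inputs only, function names are mine):

```python
import math

def truncated_power(x, n):
    # x_+^n = x^n if x > 0, else 0
    return x ** n if x > 0 else 0.0

def b_spline(n, x):
    # Explicit expression for B_n(x) given above
    return sum(
        math.comb(n + 1, k) * (-1) ** k * truncated_power(x + (n + 1) / 2 - k, n)
        for k in range(n + 2)
    ) / math.factorial(n)

def b_spline_kernel(x, y, p=1):
    # k(x, y) = B_{2p+1}(x - y) for scalar inputs
    return b_spline(2 * p + 1, x - y)
```

For n = 1 this recovers the triangular "hat" function, which is a quick sanity check on the formula.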
The Bessel kernel is well known in the theory of function spaces of fractional smoothness. It is given by:

k(x, y) = \frac{J_{v+1}(\sigma \lVert x-y \rVert)}{\lVert x-y \rVert^{-n(v+1)}}

where J is the Bessel function of the first kind. However, in the Kernlab for R documentation, the Bessel kernel is said to be:

k(x, x') = -\mathrm{Bessel}_{(\nu+1)}^n\left(\sigma \lVert x - x' \rVert^2\right)
The Cauchy kernel comes from the Cauchy distribution. It is a long-tailed kernel and can be used to give long-range influence and sensitivity over the high-dimension space.

k(x, y) = \frac{1}{1 + \frac{\lVert x-y \rVert^2}{\sigma}}
The Chi-Square kernel comes from the Chi-Square distribution.

k(x, y) = 1 - \sum_{i=1}^n \frac{(x_i - y_i)^2}{\frac{1}{2}(x_i + y_i)}
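A sketch of this exact formula (not from the original post; it assumes strictly positive inputs, e.g. histograms, to avoid division by zero):

```python
import numpy as np

def chi_square_kernel(x, y):
    # k(x, y) = 1 - sum_i (x_i - y_i)^2 / ((x_i + y_i) / 2)
    return 1.0 - np.sum((x - y) ** 2 / (0.5 * (x + y)))

x = np.array([0.3, 0.7])
y = np.array([0.5, 0.5])
k = chi_square_kernel(x, y)
```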
The Histogram Intersection Kernel is also known as the Min Kernel
and has been proven useful in image classification.
k(x, y) = \sum_{i=1}^n \min(x_i, y_i)
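A minimal sketch (not from the original post) computing the min kernel over rows of normalized histograms; the resulting matrix can be passed to any SVM implementation that accepts a precomputed kernel:

```python
import numpy as np

def histogram_intersection_kernel(X, Y):
    # k(x, y) = sum_i min(x_i, y_i) for all pairs of rows (histograms)
    return np.sum(np.minimum(X[:, None, :], Y[None, :, :]), axis=-1)

X = np.array([[0.2, 0.8], [0.6, 0.4]])
K = histogram_intersection_kernel(X, X)
```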
The Generalized Histogram Intersection kernel is built based on the Histogram Intersection kernel for image classification, but applies in a much larger variety of contexts (Boughorbel, 2005). It is given by:

k(x, y) = \sum_{i=1}^m \min\left(|x_i|^\alpha, |y_i|^\beta\right)
The Generalized T-Student kernel has been proven to be a Mercer kernel, thus having a positive semi-definite kernel matrix. It is given by:

k(x, y) = \frac{1}{1 + \lVert x-y \rVert^d}
The Bayesian kernel could be given as:

k(x, y) = \prod_{l=1}^N \kappa_l(x_l, y_l)

where

\kappa_l(a, b) = \sum_{c \in \{0,1\}} P(Y=c \mid X_l=a) ~ P(Y=c \mid X_l=b)

However, it really depends on the problem being modeled. For more information, please refer to the original work, in which the authors used an SVM with Bayesian kernels in the prediction of protein-protein interactions.
The Wavelet kernel comes from wavelet theory and is given as:

k(x, y) = \prod_{i=1}^N h\left(\frac{x_i - c_i}{a}\right) h\left(\frac{y_i - c_i}{a}\right)

where a and c are the wavelet dilation and translation coefficients, respectively (the form presented above is a simplification; please see the original paper for details). A translation-invariant version of this kernel can be given as:

k(x, y) = \prod_{i=1}^N h\left(\frac{x_i - y_i}{a}\right)
where in both cases h(x) denotes a mother wavelet function. In the paper by Li Zhang, Weida Zhou, and Licheng Jiao, the authors suggest a possible h(x) as:

h(x) = \cos(1.75x) \exp\left(-\frac{x^2}{2}\right)

which they also prove is an admissible kernel function.
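The translation-invariant version with this mother wavelet can be sketched as follows (not from the original paper; the example vectors are arbitrary):

```python
import numpy as np

def mother_wavelet(x):
    # h(x) = cos(1.75 x) exp(-x^2 / 2), as suggested by Zhang, Zhou and Jiao
    return np.cos(1.75 * x) * np.exp(-x ** 2 / 2)

def wavelet_kernel(x, y, a=1.0):
    # translation-invariant wavelet kernel: prod_i h((x_i - y_i) / a)
    return np.prod(mother_wavelet((x - y) / a))

x = np.array([0.1, 0.2])
y = np.array([0.3, -0.1])
k = wavelet_kernel(x, y)
```

Since h is an even function, the kernel is symmetric in its arguments, and k(x, x) = 1 for any x.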