Three-class, two-feature classification using LinearSVC (one-vs-rest)

import numpy as np
import matplotlib.pyplot as plt
import mglearn
from sklearn.datasets import make_blobs
from sklearn.svm import LinearSVC

X, y = make_blobs(random_state=42)
linear_svm = LinearSVC().fit(X, y)

mglearn.discrete_scatter(X[:, 0], X[:, 1], y)
line = np.linspace(-15, 15)
for coef, intercept, color in zip(linear_svm.coef_, linear_svm.intercept_,
                                  ['b', 'r', 'g']):
    plt.plot(line, -(line * coef[0] + intercept) / coef[1], c=color)
plt.ylim(-10, 15)
plt.xlim(-10, 8)
plt.xlabel("Feature 0")
plt.ylabel("Feature 1")
plt.legend(['Class 0', 'Class 1', 'Class 2', 'Line class 0', 'Line class 1',
            'Line class 2'], loc=(1.01, 0.3))
plt.show()

I was studying Introduction to Machine Learning with Python and had a question.

I cannot understand the '-(line * coef[0] + intercept) / coef[1]' part. Why is there a minus sign, and why is it divided by coef[1]?
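For context, that expression appears to come from rearranging the decision boundary equation coef[0]*x0 + coef[1]*x1 + intercept = 0 to solve for x1 (the y-axis value to plot). A minimal sketch, using made-up stand-in values for one classifier's coef and intercept (not values from the book):

```python
import numpy as np

# Hypothetical stand-ins for linear_svm.coef_[0] and linear_svm.intercept_[0].
coef = np.array([0.5, -1.2])
intercept = 0.3

line = np.linspace(-15, 15)  # x0 values to plot

# The boundary is where coef[0]*x0 + coef[1]*x1 + intercept == 0.
# Solving that equation for x1 produces the book's expression:
x1 = -(line * coef[0] + intercept) / coef[1]

# Check: every (x0, x1) pair on the plotted line satisfies the boundary equation.
residual = line * coef[0] + x1 * coef[1] + intercept
print(np.allclose(residual, 0))  # True
```

So the minus sign and the division by coef[1] are just the algebra of moving the other two terms to the right-hand side and isolating x1.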

The dataset I used is sklearn.datasets.make_blobs. The shape of coef_ is (3, 2) and the shape of intercept_ is (3,). The book says each row contains the coefficient vector for one class, and I cannot understand this either.

I understood that the columns of the (3, 2) array are the coefficients for the two features, but I don't know what the rows mean.
