Non-Linear Decision Boundary of Logistic Regression

Author: 付剑飞 | Published 2017-08-20 10:52

    On the algorithms front, this week I worked through Stanford professor Andrew Ng's public machine learning course. Highly recommended: each video is about 10 minutes, the explanations are clear, the content is rich, and the programming assignments are well worth doing. (I do want to complain here about the training videos from the domestic outfit 七月算法; I simply couldn't get through them.)

    The lectures look like this:

    Submitting a programming assignment looks like this:


    Nice work. Scoring 100 every time is quite satisfying. Each week's programming assignment also comes with a very detailed PDF walkthrough.

    The PDF looks like this:

    The lectures are also pretty entertaining, which makes them easy to keep watching. Best of all, they are free.

    The Coursera link is:
    https://www.coursera.org/learn/machine-learning/home/week/3

    I have downloaded every lecture as well; once everything is organized I will upload it to a cloud drive. Leave a comment if you want it.

    The course tooling then switches to Octave/Matlab, and Matlab really is the more comfortable of the two to work with.

    The figures below show logistic regression fitting a non-linear decision boundary. Because the boundary is non-linear, the two raw features have to be combined into a larger set of polynomial terms, and that enlarged feature set makes overfitting easy, so regularization is needed. The three figures use regularization coefficients of 0, 1, and 100; a coefficient of 0 is equivalent to no regularization.


    Scatter plot with lambda = 0: clearly overfitting
    lambda = 1: a good fit
    lambda = 100: clearly underfitting
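
    For reference, the quantity being minimized in these plots is the regularized logistic-regression cost from the exercise; note that the intercept term $\theta_0$ is left out of the penalty:

    $$J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\Big[-y^{(i)}\log h_\theta(x^{(i)}) - \big(1 - y^{(i)}\big)\log\big(1 - h_\theta(x^{(i)})\big)\Big] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2$$

    where $h_\theta(x) = 1/(1 + e^{-\theta^{T}x})$ is the sigmoid hypothesis. A larger $\lambda$ pushes the weights toward zero and smooths the boundary, which is why $\lambda = 100$ underfits while $\lambda = 0$ overfits.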

    The full Matlab driver script is as follows:
    ex2_reg.m

    %% Machine Learning Online Class - Exercise 2: Logistic Regression
    %
    %  Instructions
    %  ------------
    %
    %  This file contains code that helps you get started on the second part
    %  of the exercise which covers regularization with logistic regression.
    %
    %  You will need to complete the following functions in this exercise:
    %
    %     sigmoid.m
    %     costFunction.m
    %     predict.m
    %     costFunctionReg.m
    %
    %  For this exercise, you will not need to change any code in this file,
    %  or any other files other than those mentioned above.
    %
    
    %% Initialization
    clear ; close all; clc
    
    %% Load Data
    %  The first two columns contain the X values and the third column
    %  contains the label (y).
    
    data = load('ex2data2.txt');
    X = data(:, [1, 2]); y = data(:, 3);
    
    plotData(X, y);
    
    % Put some labels
    hold on;
    
    % Labels and Legend
    xlabel('Microchip Test 1')
    ylabel('Microchip Test 2')
    
    % Specified in plot order
    legend('y = 1', 'y = 0')
    hold off;
    
    
    %% =========== Part 1: Regularized Logistic Regression ============
    %  In this part, you are given a dataset with data points that are not
    %  linearly separable. However, you would still like to use logistic
    %  regression to classify the data points.
    %
    %  To do so, you introduce more features to use -- in particular, you add
    %  polynomial features to our data matrix (similar to polynomial
    %  regression).
    %
    
    % Add Polynomial Features
    
    % Note that mapFeature also adds a column of ones for us, so the intercept
    % term is handled
    X = mapFeature(X(:,1), X(:,2));
    
    % Initialize fitting parameters
    initial_theta = zeros(size(X, 2), 1);
    
    % Set regularization parameter lambda to 1
    lambda = 1;
    
    % Compute and display initial cost and gradient for regularized logistic
    % regression
    [cost, grad] = costFunctionReg(initial_theta, X, y, lambda);
    
    fprintf('Cost at initial theta (zeros): %f\n', cost);
    fprintf('Expected cost (approx): 0.693\n');
    fprintf('Gradient at initial theta (zeros) - first five values only:\n');
    fprintf(' %f \n', grad(1:5));
    fprintf('Expected gradients (approx) - first five values only:\n');
    fprintf(' 0.0085\n 0.0188\n 0.0001\n 0.0503\n 0.0115\n');
    
    fprintf('\nProgram paused. Press enter to continue.\n');
    pause;
    
    % Compute and display cost and gradient
    % with all-ones theta and lambda = 10
    test_theta = ones(size(X,2),1);
    [cost, grad] = costFunctionReg(test_theta, X, y, 10);
    
    fprintf('\nCost at test theta (with lambda = 10): %f\n', cost);
    fprintf('Expected cost (approx): 3.16\n');
    fprintf('Gradient at test theta - first five values only:\n');
    fprintf(' %f \n', grad(1:5));
    fprintf('Expected gradients (approx) - first five values only:\n');
    fprintf(' 0.3460\n 0.1614\n 0.1948\n 0.2269\n 0.0922\n');
    
    fprintf('\nProgram paused. Press enter to continue.\n');
    pause;
    
    %% ============= Part 2: Regularization and Accuracies =============
    %  Optional Exercise:
    %  In this part, you will get to try different values of lambda and
    %  see how regularization affects the decision boundary.
    %
    %  Try the following values of lambda (0, 1, 10, 100).
    %
    %  How does the decision boundary change when you vary lambda? How does
    %  the training set accuracy vary?
    %
    
    % Initialize fitting parameters
    initial_theta = zeros(size(X, 2), 1);
    
    % Set regularization parameter lambda to 1 (you should vary this)
    lambda = 1;
    
    % Set Options
    options = optimset('GradObj', 'on', 'MaxIter', 400);
    
    % Optimize
    [theta, J, exit_flag] = ...
        fminunc(@(t)(costFunctionReg(t, X, y, lambda)), initial_theta, options);
    
    % Plot Boundary
    plotDecisionBoundary(theta, X, y);
    hold on;
    title(sprintf('lambda = %g', lambda))
    
    % Labels and Legend
    xlabel('Microchip Test 1')
    ylabel('Microchip Test 2')
    
    legend('y = 1', 'y = 0', 'Decision boundary')
    hold off;
    
    % Compute accuracy on our training set
    p = predict(theta, X);
    
    fprintf('Train Accuracy: %f\n', mean(double(p == y)) * 100);
    fprintf('Expected accuracy (with lambda = 1): 83.1 (approx)\n');
    

    I won't paste my implementations of the individual functions here.
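
    If you just want something runnable, though, a minimal version following the formulas in the exercise PDF looks roughly like this. This is one possible solution, not the code I submitted; each function goes in its own file (sigmoid.m, mapFeature.m, costFunctionReg.m, predict.m) so ex2_reg.m can call them.

    function g = sigmoid(z)
    % SIGMOID works elementwise, so z may be a scalar, vector, or matrix.
    g = 1 ./ (1 + exp(-z));
    end

    function out = mapFeature(X1, X2)
    % MAPFEATURE expands the two raw features into all polynomial terms
    % up to degree 6 (28 columns, including the leading column of ones).
    degree = 6;
    out = ones(size(X1(:,1)));
    for i = 1:degree
        for j = 0:i
            out(:, end+1) = (X1.^(i-j)) .* (X2.^j);
        end
    end
    end

    function [J, grad] = costFunctionReg(theta, X, y, lambda)
    % COSTFUNCTIONREG computes the regularized cost and gradient;
    % the intercept theta(1) is deliberately excluded from the penalty.
    m = length(y);
    h = sigmoid(X * theta);
    J = (1/m) * (-y' * log(h) - (1 - y)' * log(1 - h)) ...
        + (lambda / (2*m)) * sum(theta(2:end).^2);
    grad = (1/m) * (X' * (h - y));
    grad(2:end) = grad(2:end) + (lambda / m) * theta(2:end);
    end

    function p = predict(theta, X)
    % PREDICT labels an example 1 when the sigmoid output is at least 0.5.
    p = sigmoid(X * theta) >= 0.5;
    end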
