APDaga DumpBox : The Thirst for Learning...


Coursera: Machine Learning (Week 2) [Assignment Solution] - Andrew NG

Recommended Machine Learning Courses:
  • Coursera: Machine Learning
  • Coursera: Deep Learning Specialization
  • Coursera: Machine Learning with Python
  • Coursera: Advanced Machine Learning Specialization
  • Udemy: Machine Learning
  • LinkedIn: Machine Learning
  • Eduonix: Machine Learning
  • edX: Machine Learning
  • Fast.ai: Introduction to Machine Learning for Coders
  • ex1.m - Octave/MATLAB script that steps you through the exercise
  • ex1_multi.m - Octave/MATLAB script for the later parts of the exercise
  • ex1data1.txt - Dataset for linear regression with one variable
  • ex1data2.txt - Dataset for linear regression with multiple variables
  • submit.m - Submission script that sends your solutions to our servers
  • [*] warmUpExercise.m - Simple example function in Octave/MATLAB
  • [*] plotData.m - Function to display the dataset
  • [*] computeCost.m - Function to compute the cost of linear regression
  • [*] gradientDescent.m - Function to run gradient descent
  • [#] computeCostMulti.m - Cost function for multiple variables
  • [#] gradientDescentMulti.m - Gradient descent for multiple variables
  • [#] featureNormalize.m - Function to normalize features
  • [#] normalEqn.m - Function to compute the normal equations
  • Video - YouTube videos featuring Free IOT/ML tutorials

warmUpExercise.m :

plotData.m :

computeCost.m :

gradientDescent.m :

computeCostMulti.m :

gradientDescentMulti.m :

Check-out our free tutorials on IOT (Internet of Things):

featureNormalize.m :

normalEqn.m :

163 comments:

programming assignment week 2 practice lab linear regression solution

Have you got prediction values as expected?

Yes. We got prediction values as expected.

My program ran successfully, but after hitting submit and entering the token, the following error is shown. Please help:

% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 1115 100 25 100 1090 12 554 0:00:02 0:00:01 0:00:01 558
error: structure has no member 'message'
error: called from
    submitWithConfiguration at line 35 column 5
    submit at line 45 column 3
error: evaluating argument list element number 2
error: called from
    submitWithConfiguration at line 35 column 5
    submit at line 45 column 3

A submission-configuration error generally means your working directory is not right. It could also mean you didn't extract the assignment files properly; that did happen to me at times.

I have a similar problem. Please tell me if you have solved it.

Thanks for your comments. I still have some problems with the solutions; could you help me? In this case it is with line 17, J_history... (Week 2)

function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
%GRADIENTDESCENT Performs gradient descent to learn theta
%   theta = GRADIENTDESCENT(X, y, theta, alpha, num_iters) updates theta by
%   taking num_iters gradient steps with learning rate alpha

% Initialize some useful values
data = load('ex1data1.txt')
X = data(:,1)
y = data(:,2)
m = length(y)
x = [ones(m, 1), data(:,1)]
theta = zeros(2, 1)
iterations = 1500
alpha = 0.01
J = (1 / (2*m)) * sum(((x*theta)-y).^2)
J_history = zeros(num_iters, 1)

for iter = 1:num_iters
    % ====================== YOUR CODE HERE ======================
    % Instructions: Perform a single gradient step on the parameter vector theta.
    % Hint: While debugging, it can be useful to print out the values
    %       of the cost function (computeCost) and gradient here.
    %error = (X * theta) - y;
    %temp0 = theta(1) - ((alpha/m) * sum(error .* X(:,1)));
    %temp1 = theta(2) - ((alpha/m) * sum(error .* X(:,2)));
    %theta = [temp0; temp1];
    % ============================================================
    % Save the cost J in every iteration
    J_history(iter) = computeCost(X, y, theta);
end
end

Change the variable name in the loop: num_iters must match the declared variable, which is named iterations.

Can you elaborate?

Hi, can anyone help me? I just started ML. I am using the Octave UI, where I write the code, but I don't know how to submit using the UI. Can anybody please help me?

https://www.youtube.com/watch?v=Vsg-cq7169U&feature=youtu.be Watch this video by one of the mentors you will get it .

Thanks Hrishikesh, your comment might help many people.

>> gradientDescent()
error: 'y' undefined near line 7 column 12
error: called from gradientDescent at line 7 column 3
>> computeCost()
error: 'y' undefined near line 7 column 12
error: called from computeCost at line 7 column 3

How do I correct this?

I tried re-running the code and everything worked perfectly fine for me. Please check your code. In the code, you can see the variable "y" is defined in the parameter list itself, so logically you should not get that error. There must be something else you are missing outside these functions.

I used to get the same error! I realized I have to execute the ex1.m and ex1_multi.m files to test our code.

Thank you for your response. It will be helpful for many others...

Hey @Akshay, I am facing the same problem of 'y' undefined. I tried all the ways suggested by you and by others; can you please help me out? Can you please tell me which version of Octave I should use for Windows 8.1 64-bit? Presently I am using 4.4.1; maybe I am facing this problem because of that. Please help.

Please tell me how to execute the ex1.m file in online MATLAB. Please help.

computeCost
error: 'y' undefined near line 8 column 12
error: called from computeCost at line 8 column 3
gradientDescent
error: 'y' undefined near line 7 column 12
error: called from gradientDescent at line 7 column 3

How do I correct this?

I tried re-running the code and everything worked perfectly fine for me. Please check your code. In the code, you can see the variable "y" is defined in the parameter list itself, so logically you should not get that error. There must be something else you are missing outside these functions. If you find the solution, please confirm here; it will be helpful for others.

Hi, Receiving similar error. Found a solution?

Hello, Got a similar error! found the solution?

Hi Sasank, small y is already used as an input argument for the mentioned functions, so you shouldn't get an error like 'y is undefined'. Are you sure you haven't made a mistake like mixing up small y and capital Y? Please check and try again.

error: 'X' undefined near line 9 column 10
error: called from featureNormalize at line 9 column 8

Has anyone found the solution? I'm getting the same error from the program. I have tried a number of ways but keep getting the same problem.

yes i am also getting the same error

I have the solution: you have to load the data X, y first:

>> data = load('ex1data1.txt');
>> X = data(:,1);
>> y = data(:,2);
>> m = length(y);
>> x = [ones(m,1), data(:,1)];
>> theta = zeros(2,1);
>> computeCost(X,y,theta)

If you have any question, please contact me on Instagram: t.boy__jr

I was stuck for two months on the Week 2 assignment of Machine Learning. Thanks for your guidance, due to which I can now understand the coding in a better way, and I have finally passed the Week 2 assignment.

Glad to know that my work helped you in understanding the topic/coding. You can also check out free IoT tutorials with source code and demos here: https://www.apdaga.com/search/label/IoT%20%28Internet%20of%20Things%29 Thanks.

I tried re-running the code, but I am getting this error:
error: 'num_iters' undefined near line 17 column 19
error: called from gradientDescent at line 17 column 11
How do I correct this?

I am also facing the same problem; please help me out of it.

Facing the same problem...

I am also submitting these assignments and have done the same, but I don't know where to load the data, so my score is 0. How can I improve? Please advise.

Refer to the forum within the course on Coursera. They have explained the steps to submit the assignments in detail.

Hello, in the gradientDescent.m file there is this line: theta = theta - ((alpha/m) * X'*error); I'm confused: why do we take the transpose of X (X'*error) instead of X? Thanks in advance, B

Hi Bruno, I got your confusion. Here X (capital X) represents all the training data together: each row is one training sample and each column is a feature. We want to multiply each training sample by its corresponding error, and to make that happen you have to take the transpose of X (capital X). If you take x (small x) as a single training sample, then you don't have to worry about the transpose at all; simply (x * error) will work. Try to do it manually on a notebook and you will understand it.
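To see this concretely, here is a small NumPy sketch (illustrative data, not from the assignment) showing that the vectorized X'*error from the Octave update is exactly the per-sample sum of error_i times x_i:

```python
import numpy as np

# X is m x n (rows = training samples, first column of ones = intercept term)
m, alpha = 4, 0.1
X = np.array([[1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0],
              [1.0, 5.0]])
y = np.array([[3.0], [5.0], [7.0], [9.0]])
theta = np.zeros((2, 1))

error = X @ theta - y            # m x 1 residuals
grad_vec = X.T @ error           # vectorized: transpose lines up samples with errors

# The same quantity computed sample by sample, where no transpose is needed:
grad_loop = np.zeros((2, 1))
for i in range(m):
    grad_loop += error[i, 0] * X[i, :].reshape(2, 1)

assert np.allclose(grad_vec, grad_loop)
theta = theta - (alpha / m) * grad_vec   # the gradient-descent step
```

Each entry j of X.T @ error is the dot product of feature column j with the residual vector, which is why the transpose appears when all samples are stacked into one matrix.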

Hi Akshay, thank you for the quick reply and help. It's totally clear now, makes sense! Have a great day. Bruno

Good day. Please, I am kind of stuck with the Week 2 programming assignment, under computeCost.m. I already imported the data and plotted the scatter plot. Do I also have to import the data in the computeCost.m working area? When I inputted the code J = (1/(2*m))*(sum(((X*theta)-y).^2)); I got an error message. Please, how do I fix this? Thanks.

What error you got?

plotData
error: 'X' undefined near line 20 column 6
error: called from plotData at line 20 column 1
What is the solution to this?

Hi Amit, as far as I checked, I have used small x as the input argument for the plotData function, and in your error there is capital X. Are you sure you haven't mixed up small and capital X? Please check and try once again.

I can see you have used X there, not x; it is still showing the error saying not enough input arguments.

Hey Akshay, the 'y' undefined problem does exist, but it is not a problem only with the code you gave; any solution from the internet gives that error. Even running through the GUI or through the command line, it says undefined. There is no clear solution for this on the net. I tried adding the path too, as suggested online, but couldn't solve the issue. I have Octave 5.1.0.

I found the solution for those who were getting the 'undefined' error. If you are using Octave, the file should not start with a function definition; Octave then treats it as a function file, not as a script. Solution: add anything as the first line, for example add "1;" on the first line, and then start the function. If you want to test your function when you run it, first initialize the variables to matrices with the respective values, then pass these as parameters to the function.

Thanks Chethan, It will be a great help for others as well.

I didn't understand. Can you explain more clearly?

include two lines of code x=[]; y=[]; This should work

It's still not working. I'm getting:
error: 'y' undefined near line 7 column 12
error: called from computeCost at line 7 column 3

Hi Akshay, I am getting an error at this line: m = lenght(y); % number of training examples. Can you help me? Thanks.

Hello, within gradientDescent you use the following code: error = (X * theta) - y; theta = theta - ((alpha/m) * X'*error); What is the significance of 'error' in this? Within Ng's lectures I can't remember him making reference to 'error'.

'error' is the residual (prediction minus y), the same quantity whose squared sum gives the cost (J).

!! Submission failed: 'data' undefined near line 19 column 18
Function: gradientDescent
FileName: C:\Users\Gunasekar\Desktop\GNU Oct\machine-learning-ex1\ex1\gradientDescent.m
LineNumber: 19
Please correct your code and resubmit.
This is my problem; how do I correct it?

Hi, I think you are doing this assignment in Octave, and that's why you are facing this issue. Chethan Bhandarkar has provided a solution for it. Please check out the comment by Chethan Bhandarkar: https://www.apdaga.com/2018/06/coursera-machine-learning-week-2.html?showComment=1563986935868#c4682866656714070064 Thanks.

The given code is not running; it always gives the error 'y' undefined near line 7 column 12, for every function.

I did the same as Chethan said, but the issue is still not resolved; I'm getting the same 'y' not defined error.

@Shilp, I think, You should raise your concern on Coursera forum.

>> gradientDescent()
error: 'y' undefined near line 7 column 12
error: called from gradientDescent at line 7 column 3
>> computeCost()
error: 'y' undefined near line 7 column 12
error: called from computeCost at line 7 column 3

I am getting this kind of error; how do I solve it?

Hey, I think the errors related to undefined variables come from people not passing arguments while calling the functions from the Octave window. Can you post an example of the command to run computeCost with arguments?

The predicted prices using the normal equations and gradient descent are not equal (NE price = 293081.464335 and GD price = 289314.62034). Is that correct?

I had a similar issue. For anyone in the same situation later: change your alpha to 1.0 and your iterations to 100.
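The gap between the two prices usually means gradient descent has not fully converged, so its theta still differs from the closed-form normal-equation answer. A small NumPy sketch (synthetic data, illustrative learning rates, not the assignment's values) showing how more iterations close the gap:

```python
import numpy as np

# Synthetic linear data: y = 2 + 3x + noise
rng = np.random.default_rng(0)
X = np.c_[np.ones(50), rng.uniform(0, 10, 50)]   # intercept column + feature
y = X @ np.array([2.0, 3.0]) + rng.normal(0, 0.1, 50)

def gradient_descent(alpha, iters):
    theta = np.zeros(2)
    m = len(y)
    for _ in range(iters):
        theta -= (alpha / m) * X.T @ (X @ theta - y)  # batch GD step
    return theta

theta_ne = np.linalg.pinv(X.T @ X) @ X.T @ y  # normal equation (closed form)
theta_few = gradient_descent(0.01, 50)        # under-trained: noticeably off
theta_many = gradient_descent(0.03, 5000)     # converged: matches theta_ne
```

With enough iterations (and a stable alpha), the gradient-descent theta approaches the normal-equation theta, and the two predicted prices agree.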

For the computeCost function, I am continuously getting this error message:
Error in computeCost (line 31)
J = (1/(2*m))*sum(((X*theta)-y).^2);

What error you are getting exactly?

What predicted value of the house did you get? Mine comes out as $0000.00 with huge theta values; how is that possible?

You have to modify the value of the price variable in the ex1_multi.m file.

OK, so for the people facing the 'y' is undefined error: you can directly submit the program. The submit script tests ex1.m as a whole, and it compiles successfully and gives the correct answer.

how can i directly submit the ex1.m file?

plotData
Not enough input arguments.
Error in plotData (line 19)
plot(x, y, 'rx', 'MarkerSize', 10); % Plot the data
I got this error; how can I solve it?

Try:
ylabel('Profit in $10,000s'); % Set the y-axis label
xlabel('Population of City in 10,000s'); % Set the x-axis label
plot(x, y, 'rx', 'MarkerSize', 10); % Plot the data

Not enough input arguments.
Error in computeCost (line 7)
m = length(y); % number of training examples

I got the same error. have you found out a solution yet?

Hi, I am getting the same error and the program doesn't give the solution. Please advise.

I'm having problems with nearly every one of these solutions. I am 12 and learning machine learning for the first time, and I'm having trouble because I find these solutions do not work. Any help?

Hello, I am stuck in Week 2 plotData. I keep getting errors like:
>> Qt terminal communication error: select() error 9 Bad file descriptor
or:
error: /Users/a69561/Desktop/machine-learning-ex1/ex1/plotData.m at line 19, column 3
Can somebody help me?

Thank you for the solution, but I am still getting two different values for the price of the house (with the normal-equation and the gradient-descent methods).

Hi, I have the same 'undefined' problem. Please help me. I am using Octave. Is there any other way to submit the programming assignment? Please help.

What is your learning rate alpha and number of iterations?

I have provided only the function definitions here. You can find the parameter values (alpha, number of iterations) in the execution section of your assignment.

In linear regression with multiple variables, the price of the house from the first method (gradient descent) was different compared to the second method (normal equations). I am still not able to match the values from both methods. Note: I have copied all the code as per your guidance.

Hi, thanks for all your help, but I have a problem with submission. When I finished all the work, I tried to submit it all at once and got this:

>> submit
Warning: Name is nonexistent or not a directory: /MATLAB Drive/./lib
> In path (line 109), In addpath (line 86), In addpath (line 47), In submit (line 2)
Warning: Name is nonexistent or not a directory: /MATLAB Drive/./lib/jsonlab
> In path (line 109), In addpath (line 86), In addpath (line 47), In submitWithConfiguration (line 2), In submit (line 45)
'parts' requires one of the following: Automated Driving Toolbox, Navigation Toolbox, Robotics System Toolbox, Sensor Fusion and Tracking Toolbox
Error in submitWithConfiguration (line 4)
parts = parts(conf);
Error in submit (line 45)
submitWithConfiguration(conf);

Calling submitWithConfiguration directly gives the same 'parts' error.

Check whether you are in the same directory (the ex1 folder), and to submit the solution use 'submit()', not 'submit' — add the parentheses.

This is happening because the variable parts has the same name as the parts(conf) function in the file ex1/lib/submitWithConfiguration.m. Make the following changes to resolve this:

Line 4: parts_1 = parts(conf);
Line 92: function [parts_1] = parts(conf)
Line 93: parts_1 = {};
Line 98: parts_1{end + 1} = part;

Basically, I've just renamed the variables. The same thing happens with one more variable, so also make these changes:

Line 66: submissionUrl_1 = submissionUrl();
Line 68: responseBody = getResponse(submissionUrl_1, body);
Line 22: response = submitParts(conf, email, token, parts_1);
Line 37: showFeedback(parts_1, response);

This worked for me.

After changing my variable names too, I'm getting an error calling the parts function:
!! Submission failed: Not enough input arguments.
Function: parts
FileName: C:\Users\Avanthi\Documents\ML\exp-2\lib\submitWithConfiguration.m
LineNumber: 94
Can someone help me with this?

Hello Akshay, in computeCost, how do I declare or compute 'theta'? It's giving an error: 'theta' undefined.

error: structure has no member 'message'
error: called from
    submitWithConfiguration at line 35 column 5
    submit at line 45 column 3
error: evaluating argument list element number 2
error: called from
    submitWithConfiguration at line 35 column 5
    submit at line 45 column 3
How do I solve this?

Hello Akshay Daga (APDaga), very glad to come across your guide on ML by Andrew Ng. I had been stuck for months and could not complete the programming assignment. I have done up to computeCost but got stuck at gradientDescent. Below is the error; I don't want to drop out of this course, so please help me out.

error: 'num_iters' undefined near line 1 column 58

Here is my update:
h = (theta(1) + theta(2)*X)';
theta(1) = theta(1) - alpha * (1/m) * theta(1) + theta(2)*X'* X(:, 1);
theta(2) = theta(2) - alpha * (1/m) * htheta(1) + theta(2)*X' * X(:, 2);

I count on your assistance.

gradientDescent()
error: 'y' undefined near line 7 column 14
error: evaluating argument list element number 1
error: called from:
error: /Users/apple/Downloads/machine-learning-ex1/ex1/gradientDescent.m at line 7, column 5
I am getting this error for both gradientDescent and computeCost. Please help me out.

function [theta, J_history] = gradientDescent(X, y, theta, alpha, iterations)
%GRADIENTDESCENT Performs gradient descent to learn theta
%   theta = GRADIENTDESCENT(X, y, theta, alpha, num_iters) updates theta by
%   taking num_iters gradient steps with learning rate alpha

% Initialize some useful values
m = length(y); % number of training examples
h = X*theta;
error = (h - y);
theta_c = (alpha/m)*(sum((error)*X'));
theta = theta - theta_c;
J_history = zeros(num_iters, 1);

for iter = 1:iterations
    % ====================== YOUR CODE HERE ======================
    % Instructions: Perform a single gradient step on the parameter vector theta.
    % Hint: While debugging, it can be useful to print out the values
    %       of the cost function (computeCost) and gradient here.
    % ============================================================
    % Save the cost J in every iteration
    J_history(iter) = computeCost(X, y, theta);
end
end

While running it in Octave, it shows:
Running Gradient Descent ...
error: gradientDescent: operator *: nonconformant arguments (op1 is 97x1, op2 is 2x97)
error: called from gradientDescent at line 10 column 8, ex1 at line 77 column 7
Where is the problem?

I got an error in computeCost.m: max_recursion_depth reached. How do I solve this?

I got an error: error: computeCost: operator /: nonconformant arguments (op1 is 1x1, op2 is 1x2). How do I solve this?

I can't see any variables named op1 or op2 used in the code. Please check once again where those came from.

Hi, great guidance. But I still have one confusion: how are the single-parameter and multi-parameter cost-function codes the same? (I have the same confusion for gradientDescent, single and multi.) Am I missing something?

The single-parameter cost function can be written as:

h = X*theta;
temp = 0;
for i = 1:m
    temp = temp + (h(i) - y(i))^2;
end
J = (1/(2*m)) * temp;

which doesn't generalize to the multi-parameter cost function. But I have also provided a vectorized implementation; it is generic code and works for single as well as multiple parameters.

Hello, I am getting 'x is undefined' while submitting plotData in Assignment 2. I checked several times but keep getting the same error. Will you please help me?

function plotData(x, y)
plot(x, y, 'rx', 'MarkerSize', 10);
ylabel('Profit in $10,000s');
xlabel('Population of City in 10,000s');
figure;

I always get 'x is undefined'. I can't understand where the error is. Please help!

While doing it in MATLAB, it also says there is an error in submitWithConfiguration in the submit.m file. That file was given by them by default, so why does it show an error there?

Still the same problem with undefined y (small letter), using Octave 5.2.0. Adding anything as the first line didn't help. What else could I do? Does somebody have a working version? I'm stuck at this point.

Instead of running the functions individually, run 'ex1' after completing all the problems; then it will not show any error.

Hi, I am using MATLAB R2015a offline and getting an error in submitWithConfiguration (line 158). How do I rectify this error?

Raise this concern in Coursera forum.

If you implement featureNormalize the given way, it gives a dimension-disagreement error, so I suggest doing it the following way:

mu = ones(size(X,1),1) * mean(X);
sigma = ones(size(X,1),1) * std(X);
X_norm = (X - mu) ./ (sigma);

P.S. It gives me accurate results.
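The ones(size(X,1),1)*mean(X) trick simply replicates the per-column mean and standard deviation across all rows before subtracting and dividing. In NumPy the same replication happens automatically via broadcasting; a small sketch with illustrative numbers (a stand-in for rows of ex1data2.txt):

```python
import numpy as np

# Feature normalization: zero mean and unit standard deviation per column.
X = np.array([[2104.0, 3.0],
              [1600.0, 3.0],
              [2400.0, 3.0],
              [1416.0, 2.0]])
mu = X.mean(axis=0)                 # per-column mean (row vector)
sigma = X.std(axis=0, ddof=1)       # ddof=1 matches Octave's default std()
X_norm = (X - mu) / sigma           # broadcasting replicates mu, sigma per row
```

After this, every column of X_norm has mean approximately 0 and sample standard deviation 1, which is what featureNormalize is expected to produce.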

I entered submit(), but I am getting an error. Please help me with how to submit my assignment.

I think you should raise this concern to Coursera forum.

try just submit without the brackets.

Your code is not working when I use it.

Sorry to hear that, but it was working 100% for me and for some others as well.

num_iters not defined error.. Plz help

I just got the answer for the 'num_iters not defined' error: you have to fix line 59 in submit.m.

I have a problem running this line of code: (X * theta) - y; It gives: error: operator *: nonconformant arguments (op1 is 97x1, op2 is 2x1). I can understand why, because a 97x1 matrix cannot be multiplied by a 2x1 matrix. Any ideas?

I get the below error when executing ex1 to test the gradientDescent function:

error: computeCost: operator *: nonconformant arguments (op1 is 97x2, op2 is 194x1)
error: called from computeCost at line 15 column 2, gradientDescent at line 36 column 21, ex1 at line 77 column 7

My gradientDescent function has the lines below, as per the tutorial:
temp0 = theta(1) - ((alpha/m) * sum((X * theta) - y) .* X(:,1));
temp1 = theta(2) - ((alpha/m) * sum((X * theta) - y) .* X(:,2));
theta = [temp0; temp1];

My computeCost function has this line of code on line 15:
J = 1/(2*m)*sum(((X*theta)-y).^2)

NB: surprisingly, I can run the gradientDescent lines individually at the Octave command line without problems.

I also had this problem; I realised it is to do with the brackets. If you compare your code to mine:

t0 = theta(1) - ((alpha/m) * sum(((X * theta) - y).* X(:,1)));
t1 = theta(2) - ((alpha/m) * sum(((X * theta) - y).* X(:,2)));
theta = [t0; t1];

you can see that you are missing two brackets on each side; the dimensions get messed up because of the wrong order of operations.
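The bracket placement matters because each residual must be multiplied by its own x value before summing. A tiny NumPy check with illustrative numbers (a stand-in for a few ex1data1.txt rows) contrasting the correct quantity with what a misplaced sum produces:

```python
import numpy as np

X = np.array([[1.0, 6.1101],
              [1.0, 5.5277],
              [1.0, 8.5186]])
y = np.array([17.592, 9.1302, 13.662])
theta = np.array([1.0, 1.0])

resid = X @ theta - y
# Correct: weight each residual by its own x value, then sum.
right = np.sum(resid * X[:, 1])
# Misplaced brackets: summing the residuals first gives a different quantity.
wrong = np.sum(resid) * np.sum(X[:, 1])
```

The two scalars differ, which is why the gradient update with wrongly-placed brackets drifts away from the correct theta (or breaks the dimensions entirely).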

Hey, how do you calculate the value of theta?

The values of theta(1) and theta(2) are initially set to 0: theta = zeros(2,1).

Getting an error, theta is undefined...

I get the below error when submitting the gradientDescent part:

>> submit
'parts' requires one of the following: Automated Driving Toolbox, Navigation Toolbox, Robotics System Toolbox, Sensor Fusion and Tracking Toolbox
Error in submitWithConfiguration (line 4)
parts = parts(conf);
Error in submit (line 45)
submitWithConfiguration(conf);

did you get this answer? , I see this error

I have the same error

Some of these answers are incorrect. For example, the feature-normalization answer is wrong: when you calculate (X - mu) / sigma, X and mu have different dimensions, so it doesn't work.

Thanks for the feedback. All these answers worked 100% for me, and they are working fine for many others as well (you can get an idea from the comments). But Coursera keeps updating the assignments from time to time, so you might be right in that case. Please use the above code just for reference and then solve your assignment on your own. Thanks.

Hello brother, can you please briefly explain the working of these two lines of gradient descent? error = (X * theta) - y; theta = theta - ((alpha/m) * X'*error);

How can I solve this problem?
>> submit
'parts' requires one of the following: Automated Driving Toolbox, Navigation Toolbox, Robotics System Toolbox, Sensor Fusion and Tracking Toolbox
Error in submitWithConfiguration (line 4)
parts = parts(conf);
Error in submit (line 45)
submitWithConfiguration(conf);

same problem here

Hi, when I run my code, the predicted price of the house (in ex1_multi.m), it says 0.0000. How can I fix that?

>> [Xn mu sigma] = featureNormalize([1 ; 2 ; 3])
error: Invalid call to std. Correct usage is:
-- std (X)
-- std (X, OPT)
-- std (X, OPT, DIM)
error: called from print_usage at line 91 column 5, std at line 69 column 5, featureNormalize at line 32 column 8

And this even though I am doing it the right way, I hope:

mu = mean(X);
sigma = std(X, 1);
X_norm = (X - mu) ./ std;

Anyone any idea why I am facing this error?

I tried simply this also: sigma = std(X);

>> submit()
'parts' requires one of the following: Automated Driving Toolbox, Navigation Toolbox, Robotics System Toolbox, Sensor Fusion and Tracking Toolbox
Error in submitWithConfiguration (line 4)
parts = parts(conf);
Error in submit (line 45)
submitWithConfiguration(conf);

This is happening because the variable parts has the same name as the parts(conf) function in the file ex1/lib/submitWithConfiguration.m. Make the following changes to resolve this:

Line 4: parts_1 = parts(conf);
Line 92: function [parts_1] = parts(conf)
Line 93: parts_1 = {};
Line 98: parts_1{end + 1} = part;

Basically, I've just renamed the variables. The same thing happens with one more variable, so also make these changes:

Line 66: submissionUrl_1 = submissionUrl();
Line 68: responseBody = getResponse(submissionUrl_1, body);
Line 22: response = submitParts(conf, email, token, parts_1);
Line 37: showFeedback(parts_1, response);

This worked for me.

Which is better to use to submit the assignments, Octave or MATLAB? Are the provided solutions for MATLAB or Octave?

I have provided the solutions in MATLAB, but they work in Octave as well.

Hi, I don't understand why X*theta works. I mean, theta is a 2x1 vector, right? I understand the formula, but I get confused in this exercise.

I figured it out: I had thought X was a 97x1 vector. I have another question: is this gradient descent with one variable? I thought it was two variables; does theta0 count as one variable?

%%%%%%%% CORRECT %%%%%%%%%%
error = (X * theta) - y;
theta = theta - ((alpha/m) * X'*error);
%%%%%%%%%%%%%%%%%%%%%%%%%%%
Why is "sum" not used here? Thanks!

Here we have used matrix multiplication, which is nothing but a sum-of-products operation; the summation is already built into the matrix multiplication.

OWWWWWWWW!!! so the other one is (dot product). Thank you so much! You are awesome !

J = (1/(2*m))*sum(((X*theta)-y).^2); Can you please break this down? Why is SUM used here? Thanks in advance!

And why not in the above one (theta = theta - ((alpha/m) * X'*error))? I could see from the dimensions that sum is not required there, but I want to know how I should think about, or approach, the question of whether I need sum or not.

"Matrix multiplication (Which is nothing but Sum of product operation)." then why using SUM here, J = (1/(2*m))*sum(((X*theta)-y).^2);

PLEASE PLEASE HELP. I will be ever grateful to you. And will pray for you.

Don't get confused between the normal and vectorized implementations.
> "sum" in the vectorized implementation represents the summation in the given formula.
> In the normal implementation, "temp = temp + formula" is equivalent to the "sum" in the vectorized implementation.
Please look at the code below (both versions achieve the same result), compare them, and try to understand:

%%%%%%%%%%%%% CORRECT %%%%%%%%%
% h = X*theta;
% temp = 0;
% for i=1:m
%   temp = temp + (h(i) - y(i))^2;
% end
% J = (1/(2*m)) * temp;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%% CORRECT: Vectorized Implementation %%%%%%%%%
J = (1/(2*m))*sum(((X*theta)-y).^2);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Note: in the cost, the elementwise square ((X*theta)-y).^2 produces a vector, so an explicit sum is still needed; in X'*error the summation happens inside the matrix multiplication itself.
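A NumPy sketch of the same comparison (illustrative numbers standing in for a few ex1data1.txt rows), confirming the loop form and the vectorized form compute the same cost:

```python
import numpy as np

X = np.array([[1.0, 6.1101],
              [1.0, 5.5277],
              [1.0, 8.5186]])
y = np.array([17.592, 9.1302, 13.662])
theta = np.array([0.0, 0.0])
m = len(y)

# Loop form: accumulate squared residuals one sample at a time.
h = X @ theta
temp = 0.0
for i in range(m):
    temp += (h[i] - y[i]) ** 2
J_loop = temp / (2 * m)

# Vectorized form: elementwise square gives a vector, so an explicit sum is needed.
J_vec = np.sum((X @ theta - y) ** 2) / (2 * m)
```

Both yield the same J, which is the point of the author's comparison: the explicit sum in the vectorized line replaces the accumulator loop.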

My Goodness ! Thank you so much! You are awesome ! You have explained it very nicely! Became your fan. God bless you! Will be following your Blog.

x = data(:, 1);
y = data(:, 2);
m = length(y);
plot(x, y, 'rx', 'MarkerSize', 10);
ylabel('Profit in $10,000s');
xlabel('Population of City in 10,000s');
X = [ones(m, 1), data(:,1)];
theta = zeros(2, 1);
iterations = 1500;
alpha = 0.01;
temp = 0;
for i=1:m
    temp = temp + (h(i) - y(i))^2;
end
J = (1/(2*m)) * temp;

>> J
J = 32.073

The answer is good, but when I execute submit:
!! Submission failed: operator *: nonconformant arguments (op1 is 97x1, op2 is 2x1)
Function: computeCost
FileName:
LineNumber: 65
Help me please.

Why is it showing "This item will be unlocked when the session begins." on the quiz section?

I managed to run everything else correctly in Octave but got a submission error. Please help:

!! Submission failed: parse error near line 25 of file C:\Users\user\Desktop\ml-class-ex1\computeCostMulti.m
syntax error
>>> j= (1/(2*m)) *sum(((X*theta)-y.^2);
Function: submit>output
FileName: C:\Users\user\Desktop\ml-class-ex1\submit.m
LineNumber: 63
Please correct your code and resubmit.

What can I do to fix this problem? Please help me.
> submit
Unrecognized function or variable 'parts'.
Error in submitWithConfiguration (line 4)
parts = parts(conf);
Error in submit (line 45)
submitWithConfiguration(conf);

I have some issues while uploading the code. It runs without any error, but in the end the score still shows 0/10 for the 3rd question onwards, and the same result reflects in my course ID. Please help.

It should not happen; you might be missing something simple in your process. Have you raised this concern on the Coursera forum? Please try there; you will surely get a resolution.

I get an error at m = length(y). Why does this error occur?

Thank you, Akshay, for helping lots of people for years!

Thank you for your kind words.

Hi Akshay, I have a question about gradient descent with multiple variables. While doing gradient descent for θ(0) and θ(1), we used an X whose rows have the form [1, x], i.e. a leading 1 for θ(0). My question is: for multi-variable gradient descent with θ(0), θ(1), θ(2), ..., do we also use X with rows of the form [1, x1, x2, ...]? In Coursera's example they took X = data(:, 1:2); y = data(:, 3); Don't they need to add a column of 1s to X to represent θ(0)?

Once you split the input (X) and output (y) from the raw data, the line below adds the column of ones to the input (X), as described in the theory:
X = [ones(m, 1), data(:,1)];
The line above takes care of adding the ones to the input (X). Please check the code; it is already present there.
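The same pattern applies in the multi-variable case. A sketch using the ex1_multi variable names (note that in the actual exercise the ones column is added after feature normalization):

```matlab
data = load('ex1data2.txt');
X = data(:, 1:2);          % house size and number of bedrooms
y = data(:, 3);            % price
m = length(y);

% Add the intercept (theta_0) column of ones to X
X = [ones(m, 1), X];       % X is now m x 3
```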

I have an issue:
>> submitWithConfiguration
error: 'conf' undefined near line 4, column 4
error: called from submitWithConfiguration at line 4 column 10

I am facing this error while submitting my assignment: "unexpected error: Index in position 1 exceeds array bounds." Please help; how can I fix it?

I copied exactly the same code as the author. The program ran successfully, but the result of gradient descent was crazily large (much bigger than expected). I was stuck on this part for a long time. Could anyone help? Thank you very much. My Octave version is 6.3.0. Here is my output:

Loading data ...
First 10 examples from the dataset:
x = [2104 3], y = 399900
x = [1600 3], y = 329900
x = [2400 3], y = 369000
x = [1416 2], y = 232000
x = [3000 4], y = 539900
x = [1985 4], y = 299900
x = [1534 3], y = 314900
x = [1427 3], y = 198999
x = [1380 3], y = 212000
x = [1494 3], y = 242500
Program paused. Press enter to continue.
Normalizing Features ...
Running gradient descent ...
Theta computed from gradient descent:
340412.659574
110631.050279
-6649.474271
Predicted price of a 1650 sq-ft, 3 br house (using gradient descent): $182861697.196858
Program paused. Press enter to continue.
Solving with normal equations...
Theta computed from the normal equations:
89597.909543
139.210674
-8738.019112
Predicted price of a 1650 sq-ft, 3 br house (using normal equations): $293081.464335

Facing the same issue. Any solution to this?

!! Submission failed: unexpected error: Undefined function 'makeValidFieldName' for input arguments of type 'char'. !! Please try again later.

Facing the same issue, any updates ?

Concerning the gradient descent code: I am yet to understand how the iterations work. Am I supposed to keep running gradient descent and manually updating theta myself until I reach the value of theta with the lowest cost? Please elaborate on this; it would be very helpful.

>> normalEqn
error: 'X' undefined near line 7, column 22
error: called from normalEqn at line 7 column 9

I am getting this error in normalEqn.

I want to thank the writer for their sincere efforts.





mGalarnyk / machineLearningWeek2Quiz1.md


Machine Learning Week 2 Quiz 1 (Linear Regression with Multiple Variables) Stanford Coursera

Github repo for the Course: Stanford Machine Learning (Coursera)

Suppose m=4 students have taken some class, and the class had a midterm exam and a final exam. You have collected a dataset of their scores on the two exams, which is as follows:

Midterm Exam | (Midterm Exam)² | Final Exam
89           | 7921            | 96
72           | 5184            | 74
94           | 8836            | 87
69           | 4761            | 78

You'd like to use polynomial regression to predict a student's final exam score from their midterm exam score. Concretely, suppose you want to fit a model of the form hθ(x) = θ₀ + θ₁x₁ + θ₂x₂, where x₁ is the midterm score and x₂ is (midterm score)². Further, you plan to use both feature scaling (dividing by the "max-min", or range, of a feature) and mean normalization.

What is the normalized feature x₂⁽⁴⁾? (Hint: midterm = 69, final = 78 is training example 4.) Please round off your answer to two decimal places and enter it in the text box below.

The mean of x₂ is 6675.5 and the range is 8836 − 4761 = 4075.

x₂⁽⁴⁾ = (4761 − 6675.5) / 4075 ≈ −0.47
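As a quick sanity check, the same arithmetic in Octave (the variable names here are my own):

```matlab
x2 = [7921; 5184; 8836; 4761];   % (midterm)^2 for the 4 students
mu = mean(x2);                    % 6675.5
r  = max(x2) - min(x2);           % 8836 - 4761 = 4075
x2_norm = (x2 - mu) / r;
x2_norm(4)                        % approx -0.4698, i.e. -0.47
```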

You run gradient descent for 15 iterations with α=0.3 and compute J(θ) after each iteration. You find that the value of J(θ) decreases quickly then levels off. Based on this, which of the following conclusions seems most plausible?

Rather than use the current value of α, it'd be more promising to try a larger value of α (say α=1.0).

Rather than use the current value of α, it'd be more promising to try a smaller value of α (say α=0.1).

α=0.3 is an effective choice of learning rate.

Answer Explanation
α=0.3 is an effective choice of learning rate. We want gradient descent to quickly converge to the minimum, so the current setting of α seems to be good.

Suppose you have m=14 training examples with n=3 features (excluding the additional all-ones feature for the intercept term, which you should add). The normal equation is θ = (XᵀX)⁻¹Xᵀy. For the given values of m and n, what are the dimensions of θ, X, and y in this equation?

X is 14×3, y is 14×1, θ is 3×3

X is 14×4, y is 14×4, θ is 4×4

X is 14×4, y is 14×1, θ is 4×1

X is 14×3, y is 14×1, θ is 3×1

Answer Explanation
X is 14×4, y is 14×1, θ is 4×1. X has m rows and n + 1 columns (+1 because of the x₀ = 1 intercept term). y is an m-vector. θ is an (n+1)-vector.
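In Octave the normal equation is a one-liner, and the dimensions line up exactly as in the answer: (4×14)(14×4) gives a 4×4 matrix to invert, and (4×14)(14×1) gives 4×1. A sketch, using pinv as the course's normalEqn.m does:

```matlab
% X: m x (n+1) design matrix (with the ones column), y: m x 1
theta = pinv(X' * X) * X' * y;   % theta: (n+1) x 1
```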

Suppose you have a dataset with m=50 examples and n=200000 features for each example. You want to use multivariate linear regression to fit the parameters θ to our data. Should you prefer gradient descent or the normal equation?

Gradient descent, since (XᵀX)⁻¹ will be very slow to compute in the normal equation.

Gradient descent, since it will always converge to the optimal θ.

The normal equation, since it provides an efficient way to directly find the solution.

The normal equation, since gradient descent might be unable to find the optimal θ.

Answer Explanation
Gradient descent, since (XᵀX)⁻¹ will be very slow to compute in the normal equation. With n = 200000 features, you would have to invert a 200001 × 200001 matrix to compute the normal equation. Inverting such a large matrix is computationally expensive, so gradient descent is a good choice.

Which of the following are reasons for using feature scaling?

It speeds up solving for θ using the normal equation.

It prevents the matrix XᵀX (used in the normal equation) from being non-invertible (singular/degenerate).

It is necessary to prevent gradient descent from getting stuck in local optima.

It speeds up gradient descent by making it require fewer iterations to get to a good solution.

True or False Statement Explanation
False It speeds up solving for θ using the normal equation. The magnitude of the feature values is insignificant in terms of computational cost.
False It prevents the matrix XᵀX (used in the normal equation) from being non-invertible (singular/degenerate). none
False It is necessary to prevent gradient descent from getting stuck in local optima. The cost function J(θ) for linear regression has no local optima.
True It speeds up gradient descent by making it require fewer iterations to get to a good solution. Feature scaling speeds up gradient descent by avoiding many extra iterations that are required when one or more features take on much larger values than the rest.
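A minimal sketch of the mean-normalization and scaling step the last row refers to (this mirrors what featureNormalize.m computes; the elementwise broadcasting assumes a reasonably recent Octave/MATLAB):

```matlab
mu = mean(X);                 % 1 x n row of feature means
sigma = std(X);               % 1 x n row of standard deviations
X_norm = (X - mu) ./ sigma;   % each feature now has mean 0, std 1
```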

@ABHIJITPANIGRAHI

ABHIJITPANIGRAHI commented Sep 17, 2017

The answer to question 2 is wrong. The answer to this is option 2, as it will be easier to find the answer with a much smaller value of alpha.


@binbsr

binbsr commented Mar 4, 2018

@ABHIJITPANIGRAHI , You are wrong. And readers, a thing negated twice is RIGHT :)

@Mohitbhatt2710

Mohitbhatt2710 commented Jul 23, 2018

answer to question number 2 is "Rather than use the current value of α, it'd be more promising to try a smaller value of α (say α=0.1)."

@vijay9908

vijay9908 commented Mar 25, 2020

using a prefixed alpha value such as 0.3 is optimum as theta is decreasing slowly rather than not increasing slowly

@pbevillard

pbevillard commented May 5, 2020

@ABHIJITPANIGRAHI If you decreased alpha the number of iterations would be larger. Of course you would get a more accurate answer in terms of more decimal points, but one has to see a tradeoff between speed and efficiency of the program to that of accuracy. So the current answer given holds, but I can see your point.

ghost commented Aug 21, 2020

The answer of question 2 is option A, because after 15 iterations if the cost value is decreasing then the value of alpha is too small so it should be increased.

ghost commented Sep 21, 2021

well, the questions are different for everyone, you guys should read the text more carefully

Coursera Machine Learning

Coursera Machine Learning by Prof. Andrew Ng.

:book:

Table of Contents

Brief intro, video lectures index, programming exercise tutorials, programming exercise test cases, useful resources, extra information.

  • Online E-Books

Additional Information

Most of the course talks about the hypothesis function and minimising cost functions.

A hypothesis is a certain function that we believe (or hope) is similar to the true function, the target function that we want to model. In context of email spam classification, it would be the rule we came up with that allows us to separate spam from non-spam emails.

Cost Function

The cost function, or Sum of Squared Errors (SSE), is a measure of how far away our hypothesis is from the optimal hypothesis. The closer our hypothesis matches the training examples, the smaller the value of the cost function. Theoretically, we would like J(θ) = 0.

Gradient Descent

Gradient descent is an iterative minimization method. The gradient of the error function always points in the direction of the steepest ascent of the error function. Thus, we can start with a random weight vector and subsequently follow the negative gradient (using a learning rate alpha).
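In the programming exercise, one gradient descent step for linear regression boils down to a single vectorized update. A sketch (assuming X includes the column of ones and alpha, num_iters are set as in ex1.m):

```matlab
for iter = 1:num_iters
    % Follow the negative gradient, scaled by the learning rate alpha
    theta = theta - (alpha/m) * X' * (X*theta - y);
end
```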

Difference between the cost function and gradient descent functions

Cost Function: J(θ) = (1/(2m)) Σᵢ (hθ(x⁽ⁱ⁾) − y⁽ⁱ⁾)²
Gradient Descent: θⱼ := θⱼ − (α/m) Σᵢ (hθ(x⁽ⁱ⁾) − y⁽ⁱ⁾) xⱼ⁽ⁱ⁾

Bias and Variance

When we discuss prediction models, prediction errors can be decomposed into two main subcomponents we care about: error due to “bias” and error due to “variance”. There is a tradeoff between a model’s ability to minimize bias and variance. Understanding these two types of error can help us diagnose model results and avoid the mistake of over- or under-fitting.

Source: http://scott.fortmann-roe.com/docs/BiasVariance.html

Hypothesis and Cost Function Table

Algorithm | Hypothesis Function | Cost Function | Gradient Descent
Linear Regression
Linear Regression with Multiple Variables
Logistic Regression
Logistic Regression with Multiple Variables
Neural Networks

Regression with Pictures

  • Linear Regression
  • Logistic Regression

https://class.coursera.org/ml/lecture/preview

https://www.coursera.org/learn/machine-learning/discussions/all/threads/m0ZdvjSrEeWddiIAC9pDDA

https://www.coursera.org/learn/machine-learning/discussions/all/threads/0SxufTSrEeWPACIACw4G5w

https://www.coursera.org/learn/machine-learning/resources/NrY2G

Week 1 - Due 07/16/17:

  • Welcome - pdf - ppt
  • Linear regression with one variable - pdf - ppt
  • Linear Algebra review (Optional) - pdf - ppt
  • Lecture Notes

Week 2 - Due 07/23/17:

  • Linear regression with multiple variables - pdf - ppt
  • Octave tutorial pdf
  • Programming Exercise 1: Linear Regression - pdf - Problem - Solution
  • Program Exercise Notes

Week 3 - Due 07/30/17:

  • Logistic regression - pdf - ppt
  • Regularization - pdf - ppt
  • Programming Exercise 2: Logistic Regression - pdf - Problem - Solution

Week 4 - Due 08/06/17:

  • Neural Networks: Representation - pdf - ppt
  • Programming Exercise 3: Multi-class Classification and Neural Networks - pdf - Problem - Solution

Week 5 - Due 08/13/17:

  • Neural Networks: Learning - pdf - ppt
  • Programming Exercise 4: Neural Networks Learning - pdf - Problem - Solution

Week 6 - Due 08/20/17:

  • Advice for applying machine learning - pdf - ppt
  • Machine learning system design - pdf - ppt
  • Programming Exercise 5: Regularized Linear Regression and Bias v.s. Variance - pdf - Problem - Solution

Week 7 - Due 08/27/17:

  • Support vector machines - pdf - ppt
  • Programming Exercise 6: Support Vector Machines - pdf - Problem - Solution

Week 8 - Due 09/03/17:

  • Clustering - pdf - ppt
  • Dimensionality reduction - pdf - ppt
  • Programming Exercise 7: K-means Clustering and Principal Component Analysis - pdf - Problems - Solution

Week 9 - Due 09/10/17:

  • Anomaly Detection - pdf - ppt
  • Recommender Systems - pdf - ppt
  • Programming Exercise 8: Anomaly Detection and Recommender Systems - pdf - Problems - Solution

Week 10 - Due 09/17/17:

  • Large scale machine learning - pdf - ppt

Week 11 - Due 09/24/17:

  • Application example: Photo OCR - pdf - ppt
  • Linear Algebra Review and Reference Zico Kolter
  • CS229 Lecture notes
  • CS229 Problems
  • Financial time series forecasting with machine learning techniques
  • Octave Examples

Online E-Books

  • Introduction to Machine Learning by Nils J. Nilsson
  • Introduction to Machine Learning by Alex Smola and S.V.N. Vishwanathan
  • Introduction to Data Science by Jeffrey Stanton
  • Bayesian Reasoning and Machine Learning by David Barber
  • Understanding Machine Learning, © 2014 by Shai Shalev-Shwartz and Shai Ben-David
  • Elements of Statistical Learning, by Hastie, Tibshirani, and Friedman
  • Pattern Recognition and Machine Learning, by Christopher M. Bishop

Course Status

coursera_course_completion

  • What are the top 10 problems in deep learning for 2017?
  • When will the deep learning bubble burst?

Statistics Models

  • HMM - Hidden Markov Model
  • CRFs - Conditional Random Fields
  • LSI - Latent Semantic Indexing
  • MRF - Markov Random Fields
  • SIGIR - Special Interest Group on Information Retrieval
  • ACL - Association for Computational Linguistics
  • NAACL - The North American Chapter of the Association for Computational Linguistics
  • EMNLP - Empirical Methods in Natural Language Processing
  • NIPS - Neural Information Processing Systems
  • OpenClassroom

Machine Learning

  • Credits/Acknowledgments




Week 2 programming assignment answers


Hi Sir/Ma'm,

I am sending the week-2 assignment coding answers. The Coursera machine learning (week 2 programming assignment) answers are in MATLAB.

Please check the attached file and confirm. email id- [email protected]

Thanks & Regards, Manoj Shukla

Manoj Shukla (2024). Week 2 programming assignment answers (https://www.mathworks.com/matlabcentral/fileexchange/74778-week-2-programming-assignment-answers), MATLAB Central File Exchange. Retrieved July 29, 2024 .


  • computeCost
  • computeCostMulti
  • ex1_multi.m
  • featureNormalize
  • gradientDescent
  • gradientDescentMulti
  • warmUpExercise



