
Commit

fix: module 05 and 06 are corrected
A-Mahla committed Jun 10, 2024
1 parent 7502a49 commit aadb89c
Showing 6 changed files with 89 additions and 70 deletions.
3 changes: 1 addition & 2 deletions module05/en.py_proj.tex
@@ -24,8 +24,7 @@ \chapter*{Common Instructions}

\item Your manual is the internet.

- \item You can also ask questions in the \texttt{\#bootcamps} channel in the \href{https://42-ai.slack.com}{42AI}
- or \href{42born2code.slack.com}{42born2code}.
+ \item You can also ask questions on the \href{https://discord.gg/8Vvb6QMCZq}{42AI} Discord.

\item If you find any issue or mistakes in the subject please create an issue on \href{https://github.com/42-AI/bootcamp_python/issues}{42AI repository on Github}.

66 changes: 33 additions & 33 deletions module05/en.subject.tex
@@ -612,8 +612,8 @@ \section*{Instructions}
def simple_predict(x, theta):
"""Computes the vector of prediction y_hat from two non-empty numpy.ndarray.
Args:
- x: has to be an numpy.ndarray, a vector of dimension m * 1.
- theta: has to be an numpy.ndarray, a vector of dimension 2 * 1.
+ x: has to be a numpy.ndarray, a one-dimensional vector of size m.
+ theta: has to be a numpy.ndarray, a one-dimensional vector of size 2.
Returns:
y_hat as a numpy.ndarray, a vector of dimension m * 1.
None if x or theta are empty numpy.ndarray.
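The function body is elided by this diff; a minimal sketch consistent with the updated docstring might read as follows (the exact input checks are assumptions):

import numpy as np

def simple_predict(x, theta):
    # x: 1-D array of size m; theta: 1-D array of size 2
    # returns y_hat as an m * 1 column vector, or None on invalid input (assumed checks)
    if not isinstance(x, np.ndarray) or not isinstance(theta, np.ndarray):
        return None
    if x.ndim != 1 or x.size == 0 or theta.shape != (2,):
        return None
    return np.array([[theta[0] + theta[1] * xi] for xi in x])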
@@ -694,7 +694,7 @@ \section*{Instructions}
def add_intercept(x):
"""Adds a column of 1's to the non-empty numpy.array x.
Args:
- x: has to be a numpy.array of dimension m * n.
+ x: has to be a numpy.array. x can be a one-dimensional (m * 1) or two-dimensional (m * n) vector.
Returns:
X, a numpy.array of dimension m * (n + 1).
None if x is not a numpy.array.
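A possible body for add_intercept matching this docstring (the handling of the 1-D case is an assumption):

import numpy as np

def add_intercept(x):
    # accept a 1-D (m,) or 2-D (m, n) array; prepend a column of 1's
    if not isinstance(x, np.ndarray) or x.size == 0:
        return None
    X = x.reshape(-1, 1) if x.ndim == 1 else x
    return np.hstack((np.ones((X.shape[0], 1)), X))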
@@ -811,8 +811,8 @@ \section*{Instructions}
def predict_(x, theta):
"""Computes the vector of prediction y_hat from two non-empty numpy.array.
Args:
- x: has to be an numpy.array, a vector of dimension m * 1.
- theta: has to be an numpy.array, a vector of dimension 2 * 1.
+ x: has to be a numpy.array, a one-dimensional vector of size m.
+ theta: has to be a numpy.array, a two-dimensional vector of shape 2 * 1.
Returns:
y_hat as a numpy.array, a vector of dimension m * 1.
None if x and/or theta are not numpy.array.
@@ -845,7 +845,7 @@ \section*{Examples}

# Example 3:
theta3 = np.array([[5], [3]])
- predict_(X, theta3)
+ predict_(x, theta3)
# Output:
array([[ 8.], [11.], [14.], [17.], [20.]])
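A sketch of predict_ reproducing this output, reusing add_intercept from the previous exercise (the validation logic is an assumption; given the output above, x is presumably np.arange(1., 6.)):

def predict_(x, theta):
    # x: 1-D array of size m; theta: column vector of shape (2, 1)
    if not isinstance(x, np.ndarray) or not isinstance(theta, np.ndarray):
        return None
    if x.size == 0 or theta.shape != (2, 1):
        return None
    return add_intercept(x).dot(theta)  # shape (m, 1)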

@@ -895,9 +895,9 @@ \section*{Instructions}
def plot(x, y, theta):
"""Plot the data and prediction line from three non-empty numpy.array.
Args:
- x: has to be an numpy.array, a vector of dimension m * 1.
- y: has to be an numpy.array, a vector of dimension m * 1.
- theta: has to be an numpy.array, a vector of dimension 2 * 1.
+ x: has to be a numpy.array, a one-dimensional vector of size m.
+ y: has to be a numpy.array, a one-dimensional vector of size m.
+ theta: has to be a numpy.array, a two-dimensional vector of shape 2 * 1.
Returns:
Nothing.
Raises:
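One possible body, assuming matplotlib and the (2, 1) theta described above:

import matplotlib.pyplot as plt

def plot(x, y, theta):
    # scatter the data and draw the prediction line y = theta[0] + theta[1] * x
    plt.scatter(x, y)
    plt.plot(x, theta[0] + theta[1] * x, color="orange")
    plt.show()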
@@ -986,7 +986,7 @@ \section*{Objective}

Where:
\begin{itemize}
- \item $\hat{y}$ is a vector of dimension $m$, the vector of predicted values
+ \item $\hat{y}$ is a vector of dimension $m\times 1$, the vector of predicted values
\item $y$ is a vector of dimension $m\times 1$, the vector of expected values
\item $\hat{y}^{(i)}$ is the ith component of vector $\hat{y}$,
\item $y^{(i)}$ is the ith component of vector $y$,
@@ -1011,8 +1011,8 @@ \section*{Instructions}
Description:
Calculates all the elements (y_pred - y)^2 of the loss function.
Args:
- y: has to be an numpy.array, a vector.
- y_hat: has to be an numpy.array, a vector.
+ y: has to be a numpy.array, a two-dimensional vector of shape m * 1.
+ y_hat: has to be a numpy.array, a two-dimensional vector of shape m * 1.
Returns:
J_elem: numpy.array, a vector of dimension (number of the training examples,1).
None if there is a dimension matching problem between X, Y or theta.
@@ -1027,8 +1027,8 @@ \section*{Instructions}
Description:
Calculates the value of loss function.
Args:
- y: has to be an numpy.array, a vector.
- y_hat: has to be an numpy.array, a vector.
+ y: has to be a numpy.array, a two-dimensional vector of shape m * 1.
+ y_hat: has to be a numpy.array, a two-dimensional vector of shape m * 1.
Returns:
J_value : has to be a float.
None if there is a dimension matching problem between X, Y or theta.
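A sketch of both functions under the m * 1 shapes given above (the checks are assumptions):

import numpy as np

def loss_elem_(y, y_hat):
    # element-wise squared differences, an m * 1 vector
    if y.shape != y_hat.shape or y.size == 0:
        return None
    return (y_hat - y) ** 2

def loss_(y, y_hat):
    # half the mean of the squared differences
    elems = loss_elem_(y, y_hat)
    if elems is None:
        return None
    return float(np.sum(elems)) / (2 * y.shape[0])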
@@ -1063,8 +1063,8 @@ \section*{Examples}
3.0

x2 = np.array([0, 15, -9, 7, 12, 3, -21]).reshape(-1, 1)
- theta2 = np.array([[0.], [1.]]).reshape(-1, 1)
- y_hat2 = predict_(x3, theta3)
+ theta2 = np.array(np.array([[0.], [1.]]))
+ y_hat2 = predict_(x2, theta2)
y2 = np.array([2, 14, -13, 5, 12, 4, -19]).reshape(-1, 1)

# Example 3:
@@ -1141,8 +1141,8 @@ \section*{Instructions}
"""Computes the half mean squared error of two non-empty numpy.array, without any for loop.
The two arrays must have the same dimensions.
Args:
- y: has to be an numpy.array, a vector.
- y_hat: has to be an numpy.array, a vector.
+ y: has to be a numpy.array, a one-dimensional vector of size m.
+ y_hat: has to be a numpy.array, a one-dimensional vector of size m.
Returns:
The half mean squared error of the two vectors as a float.
None if y or y_hat are empty numpy.array.
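A loop-free sketch for the one-dimensional case described above: the half MSE is (1 / 2m) times the dot product of the difference vector with itself.

import numpy as np

def loss_(y, y_hat):
    # (1 / 2m) * (y_hat - y) . (y_hat - y), no explicit loop
    if y.shape != y_hat.shape or y.size == 0:
        return None
    diff = y_hat - y
    return float(diff.dot(diff)) / (2 * y.shape[0])

On the X and Y defined in the example below, the squared differences sum to 30 over m = 7 points, giving 30 / 14 = 2.142857142857143.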
@@ -1159,8 +1159,8 @@ \section*{Examples}
% --------------------------------- %
\begin{minted}[bgcolor=darcula-back,formatcom=\color{lightgrey},fontsize=\scriptsize]{python}
import numpy as np
- X = np.array([[0], [15], [-9], [7], [12], [3], [-21]])
- Y = np.array([[2], [14], [-13], [5], [12], [4], [-19]])
+ X = np.array([0, 15, -9, 7, 12, 3, -21])
+ Y = np.array([2, 14, -13, 5, 12, 4, -19])

# Example 1:
loss_(X, Y)
@@ -1206,9 +1206,9 @@ \section*{Instructions}
def plot_with_loss(x, y, theta):
"""Plot the data and prediction line from three non-empty numpy.ndarray.
Args:
- x: has to be an numpy.ndarray, a vector of dimension m * 1.
- y: has to be an numpy.ndarray, a vector of dimension m * 1.
- theta: has to be an numpy.ndarray, a vector of dimension 2 * 1.
+ x: has to be a numpy.ndarray, a one-dimensional array of size m.
+ y: has to be a numpy.ndarray, a one-dimensional array of size m.
+ theta: has to be a numpy.ndarray, a one-dimensional array of size 2.
Returns:
Nothing.
Raises:
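A possible body, assuming matplotlib; drawing the residuals as dashed vertical segments and the cost shown in the title are assumptions:

import matplotlib.pyplot as plt

def plot_with_loss(x, y, theta):
    # scatter the data, draw the prediction line, and mark each residual
    y_hat = theta[0] + theta[1] * x
    plt.scatter(x, y)
    plt.plot(x, y_hat, color="orange")
    for xi, yi, yhi in zip(x, y, y_hat):
        plt.plot([xi, xi], [yi, yhi], "r--")
    plt.title("Cost: {:.6f}".format(float(((y_hat - y) ** 2).mean())))
    plt.show()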
@@ -1334,8 +1334,8 @@ \section*{Instructions}
Description:
Calculate the MSE between the predicted output and the real output.
Args:
- y: has to be a numpy.array, a vector of dimension m * 1.
- y_hat: has to be a numpy.array, a vector of dimension m * 1.
+ y: has to be a numpy.array, a two-dimensional vector of shape m * 1.
+ y_hat: has to be a numpy.array, a two-dimensional vector of shape m * 1.
Returns:
mse: has to be a float.
None if there is a matching dimension problem.
@@ -1350,8 +1350,8 @@ \section*{Instructions}
Description:
Calculate the RMSE between the predicted output and the real output.
Args:
- y: has to be a numpy.array, a vector of dimension m * 1.
- y_hat: has to be a numpy.array, a vector of dimension m * 1.
+ y: has to be a numpy.array, a two-dimensional vector of shape m * 1.
+ y_hat: has to be a numpy.array, a two-dimensional vector of shape m * 1.
Returns:
rmse: has to be a float.
None if there is a matching dimension problem.
@@ -1366,8 +1366,8 @@ \section*{Instructions}
Description:
Calculate the MAE between the predicted output and the real output.
Args:
- y: has to be a numpy.array, a vector of dimension m * 1.
- y_hat: has to be a numpy.array, a vector of dimension m * 1.
+ y: has to be a numpy.array, a two-dimensional vector of shape m * 1.
+ y_hat: has to be a numpy.array, a two-dimensional vector of shape m * 1.
Returns:
mae: has to be a float.
None if there is a matching dimension problem.
@@ -1382,8 +1382,8 @@ \section*{Instructions}
Description:
Calculate the R2score between the predicted output and the real output.
Args:
- y: has to be a numpy.array, a vector of dimension m * 1.
- y_hat: has to be a numpy.array, a vector of dimension m * 1.
+ y: has to be a numpy.array, a two-dimensional vector of shape m * 1.
+ y_hat: has to be a numpy.array, a two-dimensional vector of shape m * 1.
Returns:
r2score: has to be a float.
None if there is a matching dimension problem.
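Sketches of the four metrics consistent with these docstrings (the validation is an assumption):

import numpy as np

def mse_(y, y_hat):
    # mean of the squared differences
    if y.shape != y_hat.shape or y.size == 0:
        return None
    return float(((y_hat - y) ** 2).mean())

def rmse_(y, y_hat):
    # square root of the MSE
    mse = mse_(y, y_hat)
    return None if mse is None else float(np.sqrt(mse))

def mae_(y, y_hat):
    # mean of the absolute differences
    if y.shape != y_hat.shape or y.size == 0:
        return None
    return float(np.abs(y_hat - y).mean())

def r2score_(y, y_hat):
    # 1 - SS_res / SS_tot
    if y.shape != y_hat.shape or y.size == 0:
        return None
    ss_res = float(((y_hat - y) ** 2).sum())
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return 1.0 - ss_res / ss_tot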
@@ -1412,8 +1412,8 @@ \section*{Examples}
from math import sqrt

# Example 1:
- x = np.array([0, 15, -9, 7, 12, 3, -21])
- y = np.array([2, 14, -13, 5, 12, 4, -19])
+ x = np.array([[0], [15], [-9], [7], [12], [3], [-21]])
+ y = np.array([[2], [14], [-13], [5], [12], [4], [-19]])

# Mean squared error
## your implementation
43 changes: 24 additions & 19 deletions module05/usefull_ressources.tex
@@ -16,36 +16,41 @@ \section*{Useful Ressources}

You are strongly advised to use the following resource:
\href{https://www.coursera.org/learn/machine-learning/home/week/1}{Machine Learning MOOC - Stanford}
- Here are the sections of the MOOC that are relevant for today's exercises:
+ These videos are available at no cost; simply log in, select "Enroll for Free", and choose "audit the course for free" in the popup window.
+ The following sections of the course are pertinent to today's exercises:

- \subsection*{Week 1}
+ \subsection*{Week 1: Introduction to Machine Learning}

- \subsubsection*{Introduction}
+ \subsubsection*{Supervised vs. Unsupervised Machine Learning}
\begin{itemize}
- \item What is Machine Learning? (Video + Reading)
- \item Supervised Learning (Video + Reading)
- \item Unsupervised Learning (Video + Reading)
- \item Review (Reading + Quiz)
+ \item What is Machine Learning?
+ \item Supervised Learning Part 1
+ \item Supervised Learning Part 2
+ \item Unsupervised Learning Part 1
+ \item Unsupervised Learning Part 2
\end{itemize}

- \subsubsection*{Linear Regression with One Variable}
+ \subsubsection*{Regression Model}
\begin{itemize}
- \item Model Representation (Video + Reading)
- \item Cost Function (Video + Reading)
- \item Cost Function - Intuition I (Video + Reading)
- \item Cost Function - Intuition II (Video + Reading)
+ \item Regression Model Part 1
+ \item Regression Model Part 2
+ \item Cost Function Formula
+ \item Cost Function Intuition
+ \item Visualizing the cost function
+ \item Visualizing Example
+ \item \textit{Keep what remains for tomorrow ;)}
\end{itemize}

+ \emph{All videos above are also available on this \href{https://youtube.com/playlist?list=PLkDaE6sCZn6FNC6YRfRQc_FbeQrF8BwGI&feature=shared}{Andrew Ng's YouTube playlist}, videos \#3 to \#14 included}

\newpage

\subsubsection*{Linear Algebra Review}
\begin{itemize}
- \item Matrices and Vectors (Video + Reading)
- \item Addition and Scalar Multiplication (Video + Reading)
- \item Matrix Vector Multiplication (Video + Reading)
- \item Matrix Matrix Multiplication (Video + Reading)
- \item Matrix Multiplication Properties (Video + Reading)
- \item Inverse and Transpose (Video + Reading)
- \item Review (Reading + Quiz)
+ \item \href{https://www.youtube.com/watch?v=XMB__E658fQ}{Matrices and Vectors}
+ \item \href{https://www.youtube.com/watch?v=k1JGJhUGmBE}{Addition and Scalar Multiplication}
+ \item \href{https://www.youtube.com/watch?v=VIfykceJoZI}{Matrix Vector Multiplication}
+ \item \href{https://www.youtube.com/watch?v=JHZKyt0m1kc}{Matrix Matrix Multiplication}
+ \item \href{https://www.youtube.com/watch?v=wqM7O_ZUtCc}{Matrix Multiplication Properties}
+ \item \href{https://www.youtube.com/watch?v=IUf8HDyUeY0}{Inverse and Transpose}
\end{itemize}
3 changes: 1 addition & 2 deletions module06/en.py_proj.tex
@@ -24,8 +24,7 @@ \chapter*{Common Instructions}

\item Your manual is the internet.

- \item You can also ask questions in the \texttt{\#bootcamps} channel in the \href{https://42-ai.slack.com}{42AI}
- or \href{42born2code.slack.com}{42born2code}.
+ \item You can also ask questions on the \href{https://discord.gg/8Vvb6QMCZq}{42AI} Discord.

\item If you find any issue or mistakes in the subject please create an issue on \href{https://github.com/42-AI/bootcamp_python/issues}{42AI repository on Github}.

8 changes: 4 additions & 4 deletions module06/en.subject.tex
@@ -297,7 +297,7 @@ \section*{Instructions}
"""Computes a gradient vector from three non-empty numpy.array, without any for loop.
The three arrays must have compatible shapes.
Args:
- x: has to be a numpy.array, a matrix of shape m * 1.
+ x: has to be a numpy.array, a vector of shape m * 1.
y: has to be a numpy.array, a vector of shape m * 1.
theta: has to be a numpy.array, a 2 * 1 vector.
Return:
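The def line sits outside the hunk, so the name below is assumed. A loop-free body: with X' = [1 | x], the gradient is (1 / m) X'^T (X' theta - y).

import numpy as np

def simple_gradient(x, y, theta):  # name assumed; not shown in the diff
    if x.size == 0 or y.size == 0 or theta.shape != (2, 1):
        return None
    m = x.shape[0]
    X = np.hstack((np.ones((m, 1)), x))   # X' of shape (m, 2)
    return X.T.dot(X.dot(theta) - y) / m  # shape (2, 1)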
@@ -630,11 +630,11 @@ \section*{Examples}
import pandas as pd
import numpy as np
from sklearn.metrics import mean_squared_error
- from mylinearregression import MyLinearRegression as MyLR
+ from my_linear_regression import MyLinearRegression as MyLR

data = pd.read_csv("are_blue_pills_magic.csv")
- Xpill = np.array(data[Micrograms]).reshape(-1,1)
- Yscore = np.array(data[Score]).reshape(-1,1)
+ Xpill = np.array(data['Micrograms']).reshape(-1,1)
+ Yscore = np.array(data['Score']).reshape(-1,1)

linear_model1 = MyLR(np.array([[89.0], [-8]]))
linear_model2 = MyLR(np.array([[89.0], [-6]]))
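my_linear_regression.py itself is not part of this diff; a minimal sketch of what it might contain (the alpha and max_iter defaults are assumptions):

import numpy as np

class MyLinearRegression:
    """Univariate linear regression fitted by batch gradient descent."""

    def __init__(self, thetas, alpha=0.001, max_iter=100000):  # defaults assumed
        self.alpha = alpha
        self.max_iter = max_iter
        self.thetas = np.array(thetas, dtype=float)

    def predict_(self, x):
        # y_hat = X' theta, with X' = [1 | x]
        X = np.hstack((np.ones((x.shape[0], 1)), x))
        return X.dot(self.thetas)

    def fit_(self, x, y):
        # repeated gradient steps on the half-MSE loss
        X = np.hstack((np.ones((x.shape[0], 1)), x))
        m = x.shape[0]
        for _ in range(self.max_iter):
            self.thetas -= self.alpha * X.T.dot(X.dot(self.thetas) - y) / m
        return self.thetas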
36 changes: 26 additions & 10 deletions module06/usefull_ressources.tex
@@ -15,20 +15,36 @@ \section*{Useful Ressources}

You are strongly advised to use the following resource:
\href{https://www.coursera.org/learn/machine-learning/home/week/1}{Machine Learning MOOC - Stanford}
- Here are the sections of the MOOC that are relevant for today's exercises:
+ These videos are available at no cost; simply log in, select "Enroll for Free", and choose "audit the course for free" in the popup window.
+ The following sections of the course are pertinent to today's exercises:

- \subsection*{Week 1}
+ \subsection*{Week 1: Introduction to Machine Learning}

- \subsubsection*{Linear Regression with One Variable}
+ \subsubsection*{Train the model with Gradient Descent}
\begin{itemize}
- \item Gradient Descent (Video + Reading)
- \item Gradient Descent Intuition (Video + Reading)
- \item Gradient Descent For Linear Regression (Video + Reading)
- \item Review (Reading + Quiz)
+ \item Gradient descent
+ \item Implementing gradient descent
+ \item Gradient descent intuition
+ \item Learning rate
+ \item Gradient descent for linear regression
+ \item Running gradient descent
\end{itemize}

- \subsection*{Week 2}
- \subsubsection*{Multivariate Linear Regression}
+ \subsection*{Week 2: Regression with multiple input variables}

+ \subsubsection*{Multiple Linear Regression}
+ \begin{itemize}
+ \item Multiple features
+ \item Vectorization part 1 (optional)
+ \item Vectorization part 2 (optional)
+ \end{itemize}

+ \subsubsection*{Gradient descent in practice}
\begin{itemize}
- \item Gradient Descent in Practice 1 - Feature Scaling (Video + Reading)
+ \item Feature scaling part 1
+ \item Feature scaling part 2
\end{itemize}

+ \emph{All videos above are also available on this \href{https://youtube.com/playlist?list=PLkDaE6sCZn6FNC6YRfRQc_FbeQrF8BwGI&feature=shared}{Andrew Ng's YouTube playlist}, videos \#15 to \#21 included, plus \#25 and \#26}

