fix: module 08 and module 09
A-Mahla committed Jun 12, 2024
1 parent 82a2ef8 commit e7deba7
Showing 7 changed files with 50 additions and 37 deletions.
2 changes: 1 addition & 1 deletion module08/en.ex05_interlude.tex
@@ -13,7 +13,7 @@ \subsection*{Vectorized Logistic Gradient}

Given the previous logistic gradient formula, it's quite easy to produce a vectorized version.

Actually, you almost already implemented it on module07!
Actually, you almost already implemented it on module02!

As with the previous exercise, \textbf{the only thing you have to change is your hypothesis} in order to calculate your logistic gradient.

3 changes: 1 addition & 2 deletions module08/en.py_proj.tex
@@ -24,8 +24,7 @@ \chapter*{Common Instructions}

\item Your manual is the internet.

\item You can also ask questions in the \texttt{\#bootcamps} channel in the \href{https://42-ai.slack.com}{42AI}
or \href{42born2code.slack.com}{42born2code}.
\item You can also ask questions on the \href{https://discord.gg/8Vvb6QMCZq}{42AI} Discord.

\item If you find any issues or mistakes in the subject, please create an issue on the \href{https://github.com/42-AI/bootcamp_python/issues}{42AI repository on GitHub}.

4 changes: 2 additions & 2 deletions module08/en.subject.tex
@@ -544,7 +544,7 @@ \chapter{Exercise 04}
\turnindir{ex04}
\exnumber{04}
\exfiles{log\_gradient.py}
\exforbidden{numpy and any function that performs derivatives for you}
\exforbidden{any function that performs derivatives for you}
\makeheaderfilesforbidden


@@ -819,7 +819,7 @@ \subsection*{Examples}
# Example 1:
mylr.loss_(X,Y)
# Output:
11.513157421577004
11.513157421577002

# Example 2:
mylr.fit_(X, Y)
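For reference, a minimal sketch of the cross-entropy loss that produces values of the kind shown in Example 1 above. The function name and the `eps` guard value are assumptions, not the subject's required API; the exact output depends on the X and Y used in the example:

```python
import numpy as np

def vec_log_loss_(y, y_hat, eps=1e-15):
    # Vectorized cross-entropy loss (illustrative name and eps value):
    #   J = -(1 / m) * (y.T @ log(y_hat) + (1 - y).T @ log(1 - y_hat))
    # eps guards against log(0) when a prediction saturates at 0 or 1.
    m = y.shape[0]
    ones = np.ones(y.shape)
    return float(-(y.T @ np.log(y_hat + eps)
                   + (ones - y).T @ np.log(ones - y_hat + eps)) / m)
```

With an uninformative prediction of 0.5 everywhere, this returns ln(2) ≈ 0.6931, which is a quick sanity check for any implementation.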
29 changes: 16 additions & 13 deletions module08/usefull_ressources.tex
@@ -15,26 +15,29 @@ \section*{Notions of the module}
\section*{Useful Resources}

You are strongly advised to use the following resource:
\href{https://www.coursera.org/learn/machine-learning/home/week/3}{Machine Learning MOOC - Stanford}
Here are the sections of the MOOC that are relevant for today's exercises:
\href{https://www.coursera.org/learn/machine-learning}{Machine Learning MOOC - Stanford}
These videos are available at no cost; simply log in, select ``Enroll for Free'', and choose ``audit the course for free'' in the popup window.
The following sections of the course are pertinent to today's exercises:

\subsection*{Week 3}
\subsection*{Week 3: Classification}

\subsubsection*{Classification and representation}
\subsubsection*{Classification with logistic regression}
\begin{itemize}
\item Classification (Video + Reading)
\item Hypothesis Representation (Video + Reading)
\item Decision Boundary (Video + Reading)
\item Motivations
\item Logistic regression
\item Decision boundary
\end{itemize}

\subsubsection*{Logistic Regression Model}
\subsubsection*{Cost function for logistic regression}
\begin{itemize}
\item Cost Function (Video + Reading)
\item Simplified Cost Function and Gradient Descent (Video + Reading)
\item Cost function for logistic regression
\item Simplified Cost Function for Logistic Regression
\end{itemize}

\subsubsection*{Multiclass Classification}
\subsubsection*{Gradient descent for logistic regression}
\begin{itemize}
\item Mutliclass Classification: One-vs-all (Video + Reading)
\item Review (Reading + Quiz)
\item Gradient Descent Implementation
\end{itemize}


\emph{All the videos above are also available in \href{https://youtube.com/playlist?list=PLkDaE6sCZn6FNC6YRfRQc_FbeQrF8BwGI&feature=shared}{Andrew Ng's YouTube playlist}, videos \#31 to \#36.}
3 changes: 1 addition & 2 deletions module09/en.py_proj.tex
@@ -24,8 +24,7 @@ \chapter*{Common Instructions}

\item Your manual is the internet.

\item You can also ask questions in the \texttt{\#bootcamps} channel in the \href{https://42-ai.slack.com}{42AI}
or \href{42born2code.slack.com}{42born2code}.
\item You can also ask questions on the \href{https://discord.gg/8Vvb6QMCZq}{42AI} Discord.

\item If you find any issues or mistakes in the subject, please create an issue on the \href{https://github.com/42-AI/bootcamp_python/issues}{42AI repository on GitHub}.

8 changes: 4 additions & 4 deletions module09/en.subject.tex
@@ -1060,12 +1060,12 @@ \section*{Instructions}
"""
supported_penalities = ['l2'] # We consider the l2 penality only. One may want to implement other penalities.

def __init__(self, theta, alpha=0.001, max_iter=1000, penalty='l2', lambda_=1.0):
def __init__(self, theta, alpha=0.001, max_iter=1000, penality='l2', lambda_=1.0):
# Check on type, data type, value ... if necessary
self.alpha = alpha
self.max_iter = max_iter
self.theta = theta
self.penalty = penalty
self.penality = penality
self.lambda_ = lambda_ if penality in self.supported_penalities else 0
#... Your code ...

@@ -1075,8 +1075,8 @@ \section*{Instructions}
\begin{itemize}
\item \textbf{update} the \texttt{fit\_(self, x, y)} method:
\begin{itemize}
\item \texttt{if penalty == 'l2'}: use a \textbf{regularized version} of the gradient descent.
\item \texttt{if penalty = 'none'}: use the \textbf{unregularized version} of the gradient descent from \texttt{module03}.
\item \texttt{if penality == 'l2'}: use a \textbf{regularized version} of the gradient descent.
\item \texttt{if penality == 'none'}: use the \textbf{unregularized version} of the gradient descent from \texttt{module03}.
\end{itemize}
\end{itemize}
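A hedged sketch of the regularized gradient that such a \texttt{fit\_} update could rely on. The function name and signature are illustrative, not the subject's required API; the bias component of theta is left unpenalized by convention, and setting \texttt{lambda\_} to 0 recovers the unregularized gradient expected for the \texttt{'none'} case:

```python
import numpy as np

def reg_logistic_grad(y, x, theta, lambda_):
    # Regularized logistic gradient (illustrative names):
    #   gradient = (1 / m) * (X'.T @ (y_hat - y) + lambda_ * theta')
    # theta' is theta with its first component zeroed, because the
    # bias term is conventionally not penalized.
    m = x.shape[0]
    x_prime = np.hstack((np.ones((m, 1)), x))
    y_hat = 1.0 / (1.0 + np.exp(-(x_prime @ theta)))
    theta_prime = theta.copy()
    theta_prime[0] = 0.0
    return (x_prime.T @ (y_hat - y) + lambda_ * theta_prime) / m
```

With this helper, the two branches of \texttt{fit\_} collapse into one update rule: pass \texttt{self.lambda\_} when the penality is \texttt{'l2'} and 0 otherwise, which matches the \texttt{\_\_init\_\_} above setting \texttt{lambda\_} to 0 for unsupported penalities.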

38 changes: 25 additions & 13 deletions module09/usefull_ressources.tex
@@ -15,26 +15,38 @@ \section*{Notions of the module}
\section*{Useful Resources}

You are strongly advised to use the following resource:
\href{https://www.coursera.org/learn/machine-learning/home/week/3}{Machine Learning MOOC - Stanford}
Here are the sections of the MOOC that are relevant for today's exercises:
\href{https://www.coursera.org/learn/machine-learning}{Machine Learning MOOC - Stanford}
These videos are available at no cost; simply log in, select ``Enroll for Free'', and choose ``audit the course for free'' in the popup window.
The following sections of the course are pertinent to today's exercises:

\subsection*{Week 3}
\subsection*{Week 3: Classification}

\subsubsection*{Classification and Representation}
\subsubsection*{Classification with logistic regression (already seen in module 03)}
\begin{itemize}
\item Classification (Video + Reading)
\item Hypothesis Representation (Video + Reading)
\item Decision Boundary (Video + Reading)
\item Motivations
\item Logistic regression
\item Decision boundary
\end{itemize}

\subsubsection*{Logistic Regression Model}
\subsubsection*{Cost function for logistic regression (already seen in module 03)}
\begin{itemize}
\item Cost Function (Video + Reading)
\item Simplified Cost Function and Gradient Descent (Video + Reading)
\item Cost function for logistic regression
\item Simplified Cost Function for Logistic Regression
\end{itemize}

\subsubsection*{Multiclass Classification}
\subsubsection*{Gradient descent for logistic regression (already seen in module 03)}
\begin{itemize}
\item Mutliclass Classification: One-vs-all (Video + Reading)
\item Review (Reading + Quiz)
\item Gradient Descent Implementation
\end{itemize}

\subsubsection*{The problem of overfitting (New !!!)}
\begin{itemize}
\item The problem of overfitting
\item Addressing overfitting
\item Cost function with regularization
\item Regularized linear regression
\item Regularized logistic regression
\end{itemize}


\emph{All the videos above are also available in \href{https://youtube.com/playlist?list=PLkDaE6sCZn6FNC6YRfRQc_FbeQrF8BwGI&feature=shared}{Andrew Ng's YouTube playlist}, videos \#31 to \#36 (already seen in module 03) and \#37 to \#41 (new!).}
