Commit ab4e756 (1 parent: b14d850)
Showing 4 changed files with 360 additions and 10 deletions.
References

[1] Breiman, “Random Forests”, Machine Learning, 45(1), 5-32, 2001.

Examples

>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.datasets import make_classification
>>> X, y = make_classification(n_samples=1000, n_features=4,
...                            n_informative=2, n_redundant=0,
...                            random_state=0, shuffle=False)
>>> clf = RandomForestClassifier(max_depth=2, random_state=0)
>>> clf.fit(X, y)
RandomForestClassifier(...)
>>> print(clf.predict([[0, 0, 0, 0]]))
[1]
Methods

apply(X) | Apply trees in the forest to X, return leaf indices.
decision_path(X) | Return the decision path in the forest.
fit(X, y[, sample_weight]) | Build a forest of trees from the training set (X, y).
get_params([deep]) | Get parameters for this estimator.
predict(X) | Predict class for X.
predict_log_proba(X) | Predict class log-probabilities for X.
predict_proba(X) | Predict class probabilities for X.
score(X, y[, sample_weight]) | Return the mean accuracy on the given test data and labels.
set_params(**params) | Set the parameters of this estimator.
__init__(n_estimators=100, *, criterion='gini', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, bootstrap=True, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False, class_weight=None, ccp_alpha=0.0, max_samples=None)[source]
Initialize self. See help(type(self)) for accurate signature.
apply(X)[source]
Apply trees in the forest to X, return leaf indices.

Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix.

Returns
X_leaves : ndarray of shape (n_samples, n_estimators)
For each datapoint x in X and for each tree in the forest, return the index of the leaf x ends up in.
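A minimal sketch of the shape contract described above; the toy dataset and hyperparameters here are illustrative choices, not from the reference:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Small forest on synthetic data (sizes chosen only for the example).
X, y = make_classification(n_samples=100, n_features=4, random_state=0)
clf = RandomForestClassifier(n_estimators=10, max_depth=2, random_state=0).fit(X, y)

# One leaf index per (sample, tree): shape (n_samples, n_estimators).
leaves = clf.apply(X)
print(leaves.shape)  # (100, 10)
```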
decision_path(X)[source]
Return the decision path in the forest.

New in version 0.18.

Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix.

Returns
indicator : sparse matrix of shape (n_samples, n_nodes)
A node indicator matrix where non-zero elements indicate that a sample goes through the node. The matrix is in CSR format.

n_nodes_ptr : ndarray of shape (n_estimators + 1,)
The columns indicator[:, n_nodes_ptr[i]:n_nodes_ptr[i+1]] give the indicator values for the i-th estimator.
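To make the two return values concrete, a small sketch (forest size and data are illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=50, n_features=4, random_state=0)
clf = RandomForestClassifier(n_estimators=5, max_depth=2, random_state=0).fit(X, y)

# indicator: CSR matrix with one row per sample and one column per node
# across all trees; n_nodes_ptr delimits each tree's column block.
indicator, n_nodes_ptr = clf.decision_path(X)
print(indicator.shape[0])   # 50 samples
print(len(n_nodes_ptr))     # n_estimators + 1 == 6
```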
property feature_importances_
The impurity-based feature importances.

The higher, the more important the feature. The importance of a feature is computed as the (normalized) total reduction of the criterion brought by that feature. It is also known as the Gini importance.

Warning: impurity-based feature importances can be misleading for high-cardinality features (many unique values). See sklearn.inspection.permutation_importance as an alternative.

Returns
feature_importances_ : ndarray of shape (n_features,)
The values of this array sum to 1, unless all trees are single-node trees consisting of only the root node, in which case it will be an array of zeros.
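The normalization described above (values summing to 1) can be checked with a quick sketch on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=4, n_informative=2,
                           n_redundant=0, random_state=0, shuffle=False)
clf = RandomForestClassifier(random_state=0).fit(X, y)

# One importance per feature; normalized to sum to 1.
imp = clf.feature_importances_
print(imp.shape)            # (4,)
print(round(imp.sum(), 6))  # 1.0
```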
fit(X, y, sample_weight=None)[source]
Build a forest of trees from the training set (X, y).

Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The training input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csc_matrix.

y : array-like of shape (n_samples,) or (n_samples, n_outputs)
The target values (class labels in classification, real numbers in regression).

sample_weight : array-like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Splits that would create child nodes with net zero or negative weight are ignored while searching for a split in each node. In the case of classification, splits are also ignored if they would result in any single class carrying a negative weight in either child node.

Returns
self : object
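A short sketch of passing sample_weight; the 3:1 weighting is an arbitrary assumption for illustration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=100, n_features=4, random_state=0)

# Up-weight class 1 samples 3:1; one weight per sample, as described above.
w = np.where(y == 1, 3.0, 1.0)
clf = RandomForestClassifier(n_estimators=10, random_state=0)
clf.fit(X, y, sample_weight=w)  # returns self, so calls can be chained
print(clf.n_classes_)  # 2
```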
get_params(deep=True)[source]
Get parameters for this estimator.

Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns
params : mapping of string to any
Parameter names mapped to their values.
predict(X)[source]
Predict class for X.

The predicted class of an input sample is a vote by the trees in the forest, weighted by their probability estimates. That is, the predicted class is the one with the highest mean probability estimate across the trees.

Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix.

Returns
y : ndarray of shape (n_samples,) or (n_samples, n_outputs)
The predicted classes.
predict_log_proba(X)[source]
Predict class log-probabilities for X.

The predicted class log-probabilities of an input sample is computed as the log of the mean predicted class probabilities of the trees in the forest.

Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix.

Returns
p : ndarray of shape (n_samples, n_classes), or a list of n_outputs such arrays if n_outputs > 1
The class probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_.
predict_proba(X)[source]
Predict class probabilities for X.

The predicted class probabilities of an input sample are computed as the mean predicted class probabilities of the trees in the forest. The class probability of a single tree is the fraction of samples of the same class in a leaf.

Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix.

Returns
p : ndarray of shape (n_samples, n_classes), or a list of n_outputs such arrays if n_outputs > 1
The class probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_.
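A sketch relating predict_proba, predict_log_proba, and predict, on an assumed toy dataset:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=100, n_features=4, random_state=0)
clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

proba = clf.predict_proba(X[:5])
log_proba = clf.predict_log_proba(X[:5])

print(proba.shape)  # (5, 2): one row per sample, one column per class
# Rows are probability distributions, and predict() is the argmax class.
assert np.allclose(proba.sum(axis=1), 1.0)
assert np.allclose(np.exp(log_proba), proba)
assert (clf.classes_[proba.argmax(axis=1)] == clf.predict(X[:5])).all()
```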
score(X, y, sample_weight=None)[source]
Return the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy, which is a harsh metric since you require for each sample that each label set be correctly predicted.
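Since score is the mean accuracy, it equals comparing predict() to the labels directly; a sketch on an assumed train/test split:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

acc = clf.score(X_te, y_te)
# Mean accuracy == fraction of exactly correct predictions.
assert acc == (clf.predict(X_te) == y_te).mean()
print(0.0 <= acc <= 1.0)  # True
```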
step 1:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import scipy.stats as stats
import warnings
warnings.filterwarnings("ignore")

import io  # needed below for io.BytesIO
from google.colab import files
uploaded = files.upload()
df = pd.read_csv(io.BytesIO(uploaded['Bank_Personal_Loan_Modelling.csv']))
Importing Data
Use these commands to import data from a variety of different sources and formats.

pd.read_csv(filename) | From a CSV file
pd.read_table(filename) | From a delimited text file (like TSV)
pd.read_excel(filename) | From an Excel file
pd.read_sql(query, connection_object) | Read from a SQL table/database
pd.read_json(json_string) | Read from a JSON-formatted string, URL or file
pd.read_html(url) | Parses an HTML URL, string or file and extracts tables to a list of DataFrames
pd.read_clipboard() | Takes the contents of your clipboard and passes it to read_table()
pd.DataFrame(dict) | From a dict; keys for column names, values for data as lists
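As a minimal sketch of pd.read_csv from the table above, using an in-memory string in place of a file on disk (the column names are made up):

```python
import io
import pandas as pd

# An in-memory CSV stands in for a file path.
csv_text = "a,b\n1,2\n3,4\n"
df = pd.read_csv(io.StringIO(csv_text))
print(df.shape)          # (2, 2)
print(list(df.columns))  # ['a', 'b']
```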
Exporting Data
Use these commands to export a DataFrame to CSV, .xlsx, SQL, or JSON.

df.to_csv(filename) | Write to a CSV file
df.to_excel(filename) | Write to an Excel file
df.to_sql(table_name, connection_object) | Write to a SQL table
df.to_json(filename) | Write to a file in JSON format
Create Test Objects
These commands can be useful for creating test segments.

pd.DataFrame(np.random.rand(20,5)) | 5 columns and 20 rows of random floats
pd.Series(my_list) | Create a Series from an iterable my_list
df.index = pd.date_range('1900/1/30', periods=df.shape[0]) | Add a date index
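The three commands above can be combined into one runnable sketch:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(20, 5))   # 20 rows x 5 columns of random floats
s = pd.Series([1, 2, 3])                   # Series from an iterable
df.index = pd.date_range('1900/1/30', periods=df.shape[0])  # date index

print(df.shape)     # (20, 5)
print(df.index[0])  # first date in the index: 1900-01-30
```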
Viewing/Inspecting Data
Use these commands to take a look at specific sections of your pandas DataFrame or Series.

df.head(n) | First n rows of the DataFrame
df.tail(n) | Last n rows of the DataFrame
df.shape | Number of rows and columns
df.info() | Index, datatype and memory information
df.describe() | Summary statistics for numerical columns
s.value_counts(dropna=False) | View unique values and counts
df.apply(pd.Series.value_counts) | Unique values and counts for all columns
Selection
Use these commands to select a specific subset of your data.

df[col] | Returns column with label col as a Series
df[[col1, col2]] | Returns columns as a new DataFrame
s.iloc[0] | Selection by position
s.loc['index_one'] | Selection by index label
df.iloc[0,:] | First row
df.iloc[0,0] | First element of first column
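The selection idioms above, on a small hypothetical frame (column and index names are made up):

```python
import pandas as pd

df = pd.DataFrame({'col1': [10, 20, 30], 'col2': ['x', 'y', 'z']},
                  index=['index_one', 'index_two', 'index_three'])

print(df['col1'].tolist())          # [10, 20, 30] — column as a Series
print(df[['col1', 'col2']].shape)   # (3, 2) — columns as a DataFrame
print(df['col1'].iloc[0])           # 10 — selection by position
print(df['col1'].loc['index_one'])  # 10 — selection by index label
print(df.iloc[0, 0])                # 10 — first element of first column
```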
Data Cleaning
Use these commands to perform a variety of data cleaning tasks.

df.columns = ['a','b','c'] | Rename columns
pd.isnull() | Checks for null values, returns a Boolean array
pd.notnull() | Opposite of pd.isnull()
df.dropna() | Drop all rows that contain null values
df.dropna(axis=1) | Drop all columns that contain null values
df.dropna(axis=1,thresh=n) | Drop all columns with fewer than n non-null values
df.fillna(x) | Replace all null values with x
s.fillna(s.mean()) | Replace all null values with the mean (mean can be replaced with almost any function from the statistics module)
s.astype(float) | Convert the datatype of the Series to float
s.replace(1,'one') | Replace all values equal to 1 with 'one'
s.replace([1,3],['one','three']) | Replace all 1 with 'one' and 3 with 'three'
df.rename(columns=lambda x: x + 1) | Mass renaming of columns
df.rename(columns={'old_name': 'new_name'}) | Selective renaming
df.set_index('column_one') | Change the index
df.rename(index=lambda x: x + 1) | Mass renaming of index
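A few of the cleaning commands above, sketched on a tiny frame with one null (column names assumed):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': [np.nan, 5.0, 6.0]})

print(df.dropna().shape)        # (2, 2): the row containing the null is dropped
print(df.dropna(axis=1).shape)  # (3, 1): column 'b' (which has a null) is dropped
print(df['b'].fillna(df['b'].mean()).tolist())             # [5.5, 5.0, 6.0]
print(df.rename(columns={'a': 'alpha'}).columns.tolist())  # ['alpha', 'b']
```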
Filter, Sort, and Groupby
Use these commands to filter, sort, and group your data.

df[df[col] > 0.5] | Rows where the column col is greater than 0.5
df[(df[col] > 0.5) & (df[col] < 0.7)] | Rows where 0.7 > col > 0.5
df.sort_values(col1) | Sort values by col1 in ascending order
df.sort_values(col2,ascending=False) | Sort values by col2 in descending order
df.sort_values([col1,col2],ascending=[True,False]) | Sort values by col1 in ascending order, then col2 in descending order
df.groupby(col) | Returns a groupby object for values from one column
df.groupby([col1,col2]) | Returns a groupby object for values from multiple columns
df.groupby(col1)[col2].mean() | Returns the mean of the values in col2, grouped by the values in col1 (mean can be replaced with almost any function from the statistics module)
df.pivot_table(index=col1,values=[col2,col3],aggfunc='mean') | Create a pivot table that groups by col1 and calculates the mean of col2 and col3
df.groupby(col1).agg(np.mean) | Find the average across all columns for every unique col1 group
df.apply(np.mean) | Apply the function np.mean() across each column
df.apply(np.max,axis=1) | Apply the function np.max() across each row
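The groupby and pivot_table entries above, sketched with made-up column names:

```python
import pandas as pd

df = pd.DataFrame({'col1': ['a', 'a', 'b'], 'col2': [1.0, 3.0, 10.0]})

# Mean of col2 per group of col1.
means = df.groupby('col1')['col2'].mean()
print(means['a'], means['b'])  # 2.0 10.0

# pivot_table expresses the same aggregation declaratively.
pivot = df.pivot_table(index='col1', values='col2', aggfunc='mean')
print(pivot.loc['a', 'col2'])  # 2.0
```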
Join/Combine
Use these commands to combine multiple DataFrames into a single one.

df1.append(df2) | Add the rows of df2 to the end of df1 (columns should be identical)
pd.concat([df1, df2],axis=1) | Add the columns of df2 to the end of df1 (rows should be identical)
df1.join(df2,on=col1,how='inner') | SQL-style join of the columns in df1 with the columns of df2 where the rows for col1 have identical values; 'how' can be one of 'left', 'right', 'outer', 'inner'
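A sketch of combining two small hypothetical frames; pd.concat is used for the row-wise case (the older df.append spelling does the same thing), and merge stands in for the SQL-style join:

```python
import pandas as pd

df1 = pd.DataFrame({'col1': ['a', 'b'], 'x': [1, 2]})
df2 = pd.DataFrame({'col1': ['b', 'c'], 'y': [3, 4]})

# Row-wise: df2's rows are stacked under df1's.
rows = pd.concat([df1, df2], ignore_index=True)
print(rows.shape)  # (4, 3) — 'x' and 'y' don't overlap, so NaNs fill the gaps

# SQL-style inner join on col1 keeps only the shared key 'b'.
merged = df1.merge(df2, on='col1', how='inner')
print(merged['col1'].tolist())  # ['b']
```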
Statistics
Use these commands to perform various statistical tests. (These can all be applied to a Series as well.)

df.describe() | Summary statistics for numerical columns
df.mean() | Returns the mean of all columns
df.corr() | Returns the correlation between columns in a DataFrame
df.count() | Returns the number of non-null values in each DataFrame column
df.max() | Returns the highest value in each column
df.min() | Returns the lowest value in each column
df.median() | Returns the median of each column
df.std() | Returns the standard deviation of each column
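The statistics commands above on a tiny hypothetical frame whose columns are perfectly correlated by construction:

```python
import pandas as pd

df = pd.DataFrame({'a': [1.0, 2.0, 3.0], 'b': [2.0, 4.0, 6.0]})

print(df.mean().tolist())   # [2.0, 4.0] — column means
print(df.count().tolist())  # [3, 3]     — non-null counts
print(df.max().tolist())    # [3.0, 6.0] — column maxima
# b = 2*a, so the correlation is exactly 1.
print(round(df['a'].corr(df['b']), 6))  # 1.0
```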