This repository has been archived by the owner on Jun 25, 2019. It is now read-only.

added deepLearning directory #10

Open
wants to merge 1 commit into
base: master
53 changes: 53 additions & 0 deletions deepLearning/ProTrial_Log.txt_filt.txt
@@ -0,0 +1,53 @@
MCC:0.832987158811 Dense Size:768 Optimizer:adam Epsilon:1e-08 Activation:sigmoid Dropout Rate:0.0 Learning Rate:0.0001 batch size:128 FC Size:Double Final Classifier:sigmoid
MCC:0.827999756307 Dense Size:512 Optimizer:adamax Epsilon:1e-08 Activation:sigmoid Dropout Rate:0.0 Learning Rate:0.0001 batch size:96 FC Size:Double Final Classifier:sigmoid
MCC:0.819101693446 Dense Size:512 Optimizer:adamax Epsilon:1e-08 Activation:sigmoid Dropout Rate:0.0 Learning Rate:0.0001 batch size:96 FC Size:Double Final Classifier:sigmoid
MCC:0.81829264043 Dense Size:256 Optimizer:adamax Epsilon:1e-08 Activation:sigmoid Dropout Rate:0.0 Learning Rate:0.0001 batch size:16 FC Size:Double Final Classifier:sigmoid
MCC:0.814556645235 Dense Size:512 Optimizer:sgd Nesterov:True Learning Rate:0.0001 Activation:sigmoid Dropout Rate:0.25 batch size:96 FC Size:Double Momentum:1.0 Final Classifier:sigmoid
MCC:0.809378392131 Dense Size:512 Optimizer:adam Epsilon:1e-08 Activation:sigmoid Dropout Rate:0.0 Learning Rate:0.0001 batch size:96 FC Size:Double Final Classifier:sigmoid
MCC:0.804804496976 Dense Size:768 Optimizer:adam Epsilon:1e-08 Activation:sigmoid Dropout Rate:0.5 Learning Rate:0.0001 batch size:128 FC Size:Double Final Classifier:softmax
MCC:0.801501580623 Dense Size:768 Optimizer:adamax Epsilon:1e-08 Activation:sigmoid Dropout Rate:0.75 Learning Rate:0.0001 batch size:96 FC Size:Double Final Classifier:sigmoid
MCC:0.800573981443 Dense Size:768 Optimizer:adamax Epsilon:1e-08 Activation:sigmoid Dropout Rate:0.0 Learning Rate:0.0001 batch size:16 FC Size:Double Final Classifier:softmax
MCC:0.798564809301 Dense Size:1024 Optimizer:adam Epsilon:1e-08 Activation:sigmoid Dropout Rate:0.0 Learning Rate:0.0001 batch size:128 FC Size:Double Final Classifier:sigmoid
MCC:0.795531326737 Dense Size:1024 Optimizer:adam Nesterov:True Learning Rate:0.0001 Activation:sigmoid Dropout Rate:0.0 Epsilon:1e-08 batch size:96 FC Size:Double Momentum:0.7 Final Classifier:sigmoid
MCC:0.777924965312 Dense Size:256 Optimizer:adam Epsilon:1e-08 Activation:sigmoid Dropout Rate:0.0 Learning Rate:0.0001 batch size:32 FC Size:Double Final Classifier:sigmoid
MCC:0.773921347869 Dense Size:1024 Optimizer:adam Epsilon:1e-08 Activation:sigmoid Dropout Rate:0.0 Learning Rate:0.0001 batch size:16 FC Size:Double Final Classifier:sigmoid
MCC:0.763984683141 Dense Size:768 Optimizer:adamax Epsilon:1e-08 Activation:sigmoid Dropout Rate:0.0 Learning Rate:0.0001 batch size:16 FC Size:Double Final Classifier:softmax
MCC:0.762745973732 Dense Size:256 Optimizer:adam Epsilon:1e-08 Activation:relu Dropout Rate:0.5 Learning Rate:0.0001 batch size:32 FC Size:Double Final Classifier:sigmoid
MCC:0.762712407197 Dense Size:1024 Optimizer:adam Epsilon:1e-08 Activation:tanh Dropout Rate:0.5 Learning Rate:0.0001 batch size:64 FC Size:Single Final Classifier:sigmoid
MCC:0.76066394547 Dense Size:768 Optimizer:adam Epsilon:1e-08 Activation:sigmoid Dropout Rate:0.5 Learning Rate:0.0001 batch size:16 FC Size:Double Final Classifier:sigmoid
MCC:0.756101341557 Dense Size:128 Optimizer:adamax Epsilon:1e-08 Activation:sigmoid Dropout Rate:0.0 Learning Rate:0.0001 batch size:96 FC Size:Double Final Classifier:softmax
MCC:0.75418110397 Dense Size:512 Optimizer:adamax Epsilon:1e-08 Activation:sigmoid Dropout Rate:0.75 Learning Rate:0.0001 batch size:128 FC Size:Double Final Classifier:sigmoid
MCC:0.737292132031 Dense Size:512 Optimizer:rmsprop Epsilon:1e-08 Activation:sigmoid Dropout Rate:0.0 Learning Rate:0.0001 batch size:96 FC Size:Double Final Classifier:sigmoid
MCC:0.729434557965 Dense Size:128 Optimizer:adamax Epsilon:1e-08 Activation:tanh Dropout Rate:0.5 Learning Rate:0.0001 batch size:32 FC Size:Single Final Classifier:softmax
MCC:0.725797823651 Dense Size:512 Optimizer:adam Epsilon:1e-08 Activation:sigmoid Dropout Rate:0.0 Learning Rate:0.0001 batch size:96 FC Size:Single Final Classifier:softmax
MCC:0.713360393403 Dense Size:512 Optimizer:rmsprop Epsilon:1e-08 Activation:tanh Dropout Rate:0.0 Learning Rate:0.0001 batch size:96 FC Size:Double Final Classifier:sigmoid
MCC:0.709724890745 Dense Size:768 Optimizer:rmsprop Epsilon:1e-08 Activation:sigmoid Dropout Rate:0.75 Learning Rate:0.0001 batch size:64 FC Size:Double Final Classifier:sigmoid
MCC:0.709146980652 Dense Size:768 Optimizer:rmsprop Epsilon:1e-08 Activation:tanh Dropout Rate:0.5 Learning Rate:0.0001 batch size:64 FC Size:Double Final Classifier:sigmoid
MCC:0.696038316872 Dense Size:128 Optimizer:adamax Epsilon:1e-08 Activation:tanh Dropout Rate:0.5 Learning Rate:0.0001 batch size:32 FC Size:Double Final Classifier:softmax
MCC:0.693400654298 Dense Size:256 Optimizer:adam Epsilon:1e-08 Activation:tanh Dropout Rate:0.25 Learning Rate:0.0001 batch size:128 FC Size:Single Final Classifier:sigmoid
MCC:0.692385706658 Dense Size:128 Optimizer:rmsprop Epsilon:1e-08 Activation:sigmoid Dropout Rate:0.0 Learning Rate:0.0001 batch size:96 FC Size:Double Final Classifier:sigmoid
MCC:0.689045196309 Dense Size:768 Optimizer:adamax Epsilon:1e-08 Activation:relu Dropout Rate:0.0 Learning Rate:0.0001 batch size:64 FC Size:Single Final Classifier:sigmoid
MCC:0.675695670853 Dense Size:512 Optimizer:adamax Epsilon:1e-08 Activation:sigmoid Dropout Rate:0.0 Learning Rate:0.0001 batch size:128 FC Size:Single Final Classifier:softmax
MCC:0.673400071443 Dense Size:64 Optimizer:adam Epsilon:1e-08 Activation:relu Dropout Rate:0.0 Learning Rate:0.0001 batch size:128 FC Size:Double Final Classifier:sigmoid
MCC:0.671176701573 Dense Size:64 Optimizer:adamax Epsilon:1e-08 Activation:tanh Dropout Rate:0.0 Learning Rate:0.0001 batch size:128 FC Size:Double Final Classifier:softmax
MCC:0.663868803579 Dense Size:768 Optimizer:adagrad Epsilon:1e-08 Activation:relu Dropout Rate:0.5 Learning Rate:0.0001 batch size:96 FC Size:Double Final Classifier:sigmoid
MCC:0.662533736799 Dense Size:128 Optimizer:adamax Epsilon:1e-08 Activation:tanh Dropout Rate:0.5 Learning Rate:0.0001 batch size:96 FC Size:Single Final Classifier:softmax
MCC:0.660021168174 Dense Size:768 Optimizer:adagrad Epsilon:1e-08 Activation:sigmoid Dropout Rate:0.75 Learning Rate:0.0001 batch size:64 FC Size:Single Final Classifier:softmax
MCC:0.654821788734 Dense Size:256 Optimizer:adamax Epsilon:1e-08 Activation:sigmoid Dropout Rate:0.75 Learning Rate:0.0001 batch size:128 FC Size:Double Final Classifier:sigmoid
MCC:0.650111909556 Dense Size:1024 Optimizer:adam Epsilon:1e-08 Activation:relu Dropout Rate:0.0 Learning Rate:0.0001 batch size:64 FC Size:Double Final Classifier:sigmoid
MCC:0.639537632186 Dense Size:256 Optimizer:sgd Nesterov:True Learning Rate:0.0001 Activation:relu Dropout Rate:0.25 batch size:32 FC Size:Double Momentum:0.8 Final Classifier:sigmoid
MCC:0.62886650319 Dense Size:128 Optimizer:adam Epsilon:1e-08 Activation:relu Dropout Rate:0.0 Learning Rate:0.0001 batch size:96 FC Size:Single Final Classifier:softmax
MCC:0.602956507197 Dense Size:768 Optimizer:adagrad Epsilon:1e-08 Activation:tanh Dropout Rate:0.5 Learning Rate:0.0001 batch size:128 FC Size:Single Final Classifier:softmax
MCC:0.601123496533 Dense Size:256 Optimizer:adagrad Epsilon:1e-08 Activation:relu Dropout Rate:0.5 Learning Rate:0.0001 batch size:128 FC Size:Double Final Classifier:softmax
MCC:0.593859802542 Dense Size:128 Optimizer:adagrad Epsilon:1e-08 Activation:sigmoid Dropout Rate:0.25 Learning Rate:0.0001 batch size:64 FC Size:Double Final Classifier:softmax
MCC:0.519458234782 Dense Size:512 Optimizer:sgd Nesterov:True Learning Rate:0.0001 Activation:relu Dropout Rate:0.0 batch size:96 FC Size:Double Momentum:0.7 Final Classifier:sigmoid
MCC:0.518403322249 Dense Size:64 Optimizer:adagrad Epsilon:1e-08 Activation:tanh Dropout Rate:0.25 Learning Rate:0.0001 batch size:64 FC Size:Single Final Classifier:sigmoid
MCC:0.486900591238 Dense Size:1024 Optimizer:sgd Nesterov:True Learning Rate:0.0001 Activation:tanh Dropout Rate:0.0 batch size:32 FC Size:Single Momentum:0.8 Final Classifier:sigmoid
MCC:0.363218458448 Dense Size:512 Optimizer:adagrad Epsilon:1e-08 Activation:sigmoid Dropout Rate:0.75 Learning Rate:0.0001 batch size:128 FC Size:Double Final Classifier:sigmoid
MCC:0.145821043046 Dense Size:64 Optimizer:rmsprop Epsilon:1e-08 Activation:sigmoid Dropout Rate:0.25 Learning Rate:0.0001 batch size:16 FC Size:Single Final Classifier:sigmoid
MCC:0.111236163354 Dense Size:256 Optimizer:sgd Nesterov:True Learning Rate:0.0001 Activation:sigmoid Dropout Rate:0.25 batch size:32 FC Size:Double Momentum:0.7 Final Classifier:sigmoid
MCC:0.105369460361 Dense Size:64 Optimizer:rmsprop Nesterov:True Learning Rate:0.0001 Activation:tanh Dropout Rate:0.5 Epsilon:1e-08 batch size:32 FC Size:Single Momentum:1.0 Final Classifier:softmax
MCC:0.094840439659 Dense Size:768 Optimizer:adadelta Epsilon:1e-08 Activation:sigmoid Dropout Rate:0.0 Learning Rate:0.0001 batch size:16 FC Size:Double Final Classifier:sigmoid
MCC:0.0776933927296 Dense Size:1024 Optimizer:adadelta Epsilon:1e-08 Activation:relu Dropout Rate:0.25 Learning Rate:0.0001 batch size:32 FC Size:Single Final Classifier:softmax
MCC:0.0668945224923 Dense Size:512 Optimizer:adadelta Epsilon:1e-08 Activation:sigmoid Dropout Rate:0.25 Learning Rate:0.0001 batch size:16 FC Size:Single Final Classifier:sigmoid
MCC:0.0627332326675 Dense Size:512 Optimizer:adamax Epsilon:1e-08 Activation:relu Dropout Rate:0.75 Learning Rate:0.0001 batch size:128 FC Size:Double Final Classifier:softmax
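The log above records one hyperparameter trial per line, sorted by Matthews correlation coefficient (MCC), best first. A minimal sketch for loading it into structured records, assuming only the space-separated Key:Value format shown above (the parse_trial helper is illustrative, not part of this PR):

import re

def parse_trial(line):
    # Keys may contain spaces ("Dense Size", "batch size") but values
    # never do, so match a run of letters/spaces, then a colon, then a
    # non-space value.
    fields = re.findall(r'([A-Za-z ]+):(\S+)', line)
    return {key.strip(): value for key, value in fields}

with open("deepLearning/ProTrial_Log.txt_filt.txt") as f:
    trials = [parse_trial(line) for line in f if line.strip()]

print(trials[0]["MCC"], trials[0]["Optimizer"])  # best trial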
1 change: 1 addition & 0 deletions deepLearning/README.md
@@ -0,0 +1 @@
# deepLearning
80 changes: 80 additions & 0 deletions deepLearning/WeakLabels/create_toy_ds.py
@@ -0,0 +1,80 @@
__author__ = 'zarbo'
import os

# Training images live under two sources (L and V), each split into
# early / good / late classes.
train_dirs = {
    "L_E": "./BerryPhotos_LV/L/train/early/",
    "L_G": "./BerryPhotos_LV/L/train/good/",
    "L_L": "./BerryPhotos_LV/L/train/late/",
    "V_E": "./BerryPhotos_LV/V/train/early/",
    "V_G": "./BerryPhotos_LV/V/train/good/",
    "V_L": "./BerryPhotos_LV/V/train/late/",
}

# Weak labels keep the six source/class pairs distinct; hard labels
# collapse the two sources into the three classes E/G/L.
label_dict = {
    "L_E": 0, "L_G": 1, "L_L": 2,
    "V_E": 3, "V_G": 4, "V_L": 5,
    "E": 0, "G": 1, "L": 2,
}

weak_dict_lbl = {k: [] for k in train_dirs}
hard_dict_lbl = {"E": [], "G": [], "L": []}

# Index every image under both labelings; the hard key is the class
# letter at the end of the weak key ("L_E" -> "E", and so on).
for key, d in train_dirs.items():
    for m in os.listdir(d):
        weak_dict_lbl[key].append(d + m)
        hard_dict_lbl[key[-1]].append(d + m)

# Write tab-separated "<image path>\t<integer label>" map files.
with open("weak_map.txt", "w") as out:
    for k in weak_dict_lbl:
        for p in weak_dict_lbl[k]:
            out.write("\t".join([p, str(label_dict[k])]) + "\n")

with open("hard_map.txt", "w") as out:
    for k in hard_dict_lbl:
        for p in hard_dict_lbl[k]:
            out.write("\t".join([p, str(label_dict[k])]) + "\n")
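A minimal sketch of the consuming side: each map file holds one tab-separated "<image path>\t<integer label>" line per image, so a reader could look like this (the read_map helper is illustrative, not part of this PR):

def read_map(path):
    # Each line is "<image path>\t<integer label>".
    pairs = []
    with open(path) as f:
        for line in f:
            img_path, label = line.rstrip("\n").split("\t")
            pairs.append((img_path, int(label)))
    return pairs

weak = read_map("weak_map.txt")  # 6 classes: {L,V} x {early,good,late}
hard = read_map("hard_map.txt")  # 3 classes: early/good/late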



125 changes: 125 additions & 0 deletions deepLearning/WeakLabels/mcc_multiclass.py
@@ -0,0 +1,125 @@
import numpy as np

def multimcc(t, p, classes=None):
""" Matthews Correlation Coefficient for multiclass
:Parameters:
t : 1d array_like object integer
target values
p : 1d array_like object integer
predicted values
classes: 1d array_like object integer containing
all possible classes

:Returns:
MCC : float, in range [-1.0, 1.0]
"""

    # Cast to plain int (np.int was deprecated and then removed in NumPy 1.24)
    tarr = np.asarray(t, dtype=int)
    parr = np.asarray(p, dtype=int)


# Get the classes
if classes is None:
classes = np.unique(tarr)

nt = tarr.shape[0]
nc = classes.shape[0]

# Check dimension of the two array
if tarr.shape[0] != parr.shape[0]:
raise ValueError("t, p: shape mismatch")

# Initialize X and Y matrices
X = np.zeros((nt, nc))
Y = np.zeros((nt, nc))

# Fill the matrices
for i,c in enumerate(classes):
yidx = np.where(tarr==c)
xidx = np.where(parr==c)

X[xidx,i] = 1
Y[yidx,i] = 1

# Compute the denominator
denom = cov(X,X) * cov(Y,Y)
denom = np.sqrt(denom)

if denom == 0:
# If all samples assigned to one class return 0
return 0
else:
num = cov(X,Y)
return num / denom
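
# multimcc is Gorodkin's R_K generalization of MCC to K classes:
#     R_K = cov(X, Y) / sqrt(cov(X, X) * cov(Y, Y))
# where X and Y are the N x K one-hot indicator matrices of the
# predictions and targets built above, and cov is the helper defined
# at the bottom of this file.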


def confusion_matrix(t, p):
    """ Compute the multiclass confusion matrix
    :Parameters:
      t : 1d array_like object integer
         target values
      p : 1d array_like object integer
         predicted values

    :Returns:
      C : 2d numpy array (nc x nc), where C[i, j] counts samples of
         true class classes[i] predicted as classes[j]
      classes : 1d numpy array of the nc distinct class labels
    """

# Read true and predicted classes
    tarr = np.asarray(t, dtype=int)
    parr = np.asarray(p, dtype=int)

# Get the classes
classes = np.unique(tarr)

# Get dimension of the arrays
nt = tarr.shape[0]
nc = classes.shape[0]

# Check dimensions should match between true and predicted
if tarr.shape[0] != parr.shape[0]:
raise ValueError("t, p: shape mismatch")

# Initialize Confusion Matrix C
C = np.zeros((nc, nc))

# Fill the confusion matrix
for i in range(nt):
ct = np.where(classes == tarr[i])[0]
cp = np.where(classes == parr[i])[0]
C[ct, cp] += 1

# return the Confusion matrix and the classes
return C, classes
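
# Example: confusion_matrix([0, 0, 1], [0, 1, 1]) returns
#     (array([[1., 1.],
#             [0., 1.]]), array([0, 1]))
# rows index the true class, columns the predicted class.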

def cov(x, y):
    """ Mean per-sample covariance between the indicator matrices
    x and y, summed over the class columns. """
    nt = x.shape[0]

xm, ym = x.mean(axis=0), y.mean(axis=0)
xxm = x - xm
yym = y - ym

tmp = np.sum(xxm * yym, axis=1)
ss = tmp.sum()

return ss/nt


if __name__ == '__main__':

    # Sanity check: predictions equal to the targets except for one
    # flipped label should give an MCC just below 1.
    ytrue = np.array([7., 6., 2., 2., 2., 2., 6., 2., 2., 3.,
                      2., 7., 6., 2., 5., 7., 2., 2., 2.])

    # Alternative hand-written predictions:
    # ypred = np.array([7, 3, 6, 2, 2, 2, 3, 6, 2,
    #                   3, 2, 5, 3, 2, 2, 3, 2, 2, 2])

    ypred = np.copy(ytrue)
    ypred[0] = 6.0

# ytrue = np.array([9., 2., 1., 2., 1., 5., 3., 9.,10., 4., 8., 5., 5., 3., 2., 9., 4., 3., 7., 5., 7., 9.,10., 7., 4., 1., 4., 1., 6., 6.,1., 9., 4., 4., 3., 6., 5., 3., 1., 1., 7.,10., 6., 1., 8., 1., 9., 1., 5., 4., 2., 1., 2., 8., 2., 1., 3., 7., 5., 5.,1., 4.,10., 8., 5., 7., 6., 2., 1., 3., 4.,10., 2., 4., 6.,6., 8.,10., 1., 5., 4., 2., 8., 1., 7., 1.,10., 9., 3., 1.,2., 3., 1., 2., 6., 3., 1., 6., 3., 3., 6., 1., 1., 2., 1.,2., 6., 2., 1., 8., 6., 5., 7., 1.,10., 4.,10., 8., 7.,10.,8., 3., 2., 9., 3., 3., 2., 2., 8., 7., 5., 9., 6., 2., 2.,4., 1., 2., 1., 5., 4., 1., 4., 4., 1., 4., 4., 1., 2., 5.,2., 1., 6., 2., 1., 9., 4., 1., 2., 2., 6., 4., 1., 2., 4.,1., 1., 4., 4., 2., 2., 2., 8., 3., 1., 1., 8., 2., 1., 6.,7., 1., 2., 8.,10., 1., 2., 4., 4.,10., 7., 1., 5., 7., 8.,5., 4., 1., 1., 2.,10., 2.])

# ypred = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])

print(multimcc(ytrue, ypred))
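As a quick cross-check (not part of the PR): scikit-learn's matthews_corrcoef implements the same multiclass R_K statistic, so on the toy arrays above the two should agree up to floating point:

from sklearn.metrics import matthews_corrcoef

assert abs(multimcc(ytrue, ypred)
           - matthews_corrcoef(ytrue.astype(int), ypred.astype(int))) < 1e-10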