Example 5.1 oc phenomenological model deterministic
Optimal control of deterministic phenomenological models
This notebook shows how to compute the optimal control (OC) signal for phenomenological models (FHN, Hopf) for a simple example task.
import matplotlib.pyplot as plt
import numpy as np
import os

while os.getcwd().split(os.sep)[-1] != "neurolib":
    os.chdir('..')

# We import the model, stimuli, and the optimal control package
from neurolib.models.fhn import FHNModel
from neurolib.models.hopf import HopfModel
from neurolib.utils.stimulus import ZeroInput
from neurolib.control.optimal_control import oc_fhn
from neurolib.control.optimal_control import oc_hopf
from neurolib.utils.plot_oc import plot_oc_singlenode, plot_oc_network

# This will reload all imports as soon as the code changes
%load_ext autoreload
%autoreload 2
We stimulate the system with a known control signal, define the resulting activity as the target, and then compute the optimal control for this target. We choose the weights such that only precision is penalized (w_p=1, w_2=0), i.e. the energy cost does not contribute. Hence, the optimal control signal should converge to the input signal.
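The role of the two weights can be sketched with a small stand-in cost function. This is an illustration only, not the neurolib implementation; the function name and the exact discretization are assumptions.

```python
import numpy as np

# Illustrative sketch (not the neurolib API): the optimized cost combines a
# precision term (deviation from the target) and an energy term (control strength),
#   J = w_p * 0.5 * sum((x - target)^2) * dt + w_2 * 0.5 * sum(u^2) * dt
def total_cost(state, target, control, dt, w_p=1.0, w_2=0.0):
    precision_cost = 0.5 * np.sum((state - target) ** 2) * dt
    energy_cost = 0.5 * np.sum(control ** 2) * dt
    return w_p * precision_cost + w_2 * energy_cost

# With w_p = 1 and w_2 = 0, a control that reproduces the target exactly has
# zero cost, which is why the optimal control should converge to the input.
```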
# We import the model
model = FHNModel()
# model = HopfModel()  # OC can be computed for the Hopf model completely analogously

# Some parameters to define stimulation signals
dt = model.params["dt"]
duration = 10.
amplitude = 1.
period = duration / 4.

# We define a "zero-input", and a sine-input
zero_input = ZeroInput().generate_input(duration=duration + dt, dt=dt)
input = np.copy(zero_input)
input[0, 1:-2] = amplitude * np.sin(2. * np.pi * np.arange(0, duration - 0.2, dt) / period)  # other functions or random values can be used as well

# We set the duration of the simulation and the initial values
model.params["duration"] = duration
x_init = 0.
y_init = 0.
model.params["xs_init"] = np.array([[x_init]])
model.params["ys_init"] = np.array([[y_init]])
# We set the stimulus in the x and y variables, and run the simulation
model.params["x_ext"] = input
model.params["y_ext"] = zero_input
model.run()

# Define the result of the stimulation as target
target = np.concatenate((np.concatenate((model.params["xs_init"], model.params["ys_init"]), axis=1)[:, :, np.newaxis],
                         np.stack((model.x, model.y), axis=1)), axis=2)
target_input = np.concatenate((input, zero_input), axis=0)[np.newaxis, :, :]

# Remove stimuli and re-run the simulation
model.params["x_ext"] = zero_input
model.params["y_ext"] = zero_input
control = np.concatenate((zero_input, zero_input), axis=0)[np.newaxis, :, :]
model.run()

# Combine initial values and the simulation result into one array
state = np.concatenate((np.concatenate((model.params["xs_init"], model.params["ys_init"]), axis=1)[:, :, np.newaxis],
                        np.stack((model.x, model.y), axis=1)), axis=2)

plot_oc_singlenode(duration, dt, state, target, control, target_input)
# We set the external stimulation to zero. This is the "initial guess" for the OC algorithm
model.params["x_ext"] = zero_input
model.params["y_ext"] = zero_input

# We load the optimal control class
# print_array (optional parameter) defines for which iterations intermediate results will be printed
# Parameters will be taken from the input model
if model.name == 'fhn':
    model_controlled = oc_fhn.OcFhn(model, target, print_array=np.arange(0, 501, 25))
elif model.name == 'hopf':
    model_controlled = oc_hopf.OcHopf(model, target, print_array=np.arange(0, 501, 25))

# By default, the weights are set to w_p = 1 and w_2 = 0, meaning that energy costs do not contribute.
# The algorithm will then produce a control such that the signal matches the target exactly,
# regardless of the strength of the required control input.
# To adjust the ratio of precision and energy weight, change the values in the weights dictionary.
model_controlled.weights["w_p"] = 1.  # default value 1
model_controlled.weights["w_2"] = 0.  # default value 0

# We run 500 iterations of the optimal control gradient descent algorithm
model_controlled.optimize(500)

state = model_controlled.get_xs()
control = model_controlled.control

plot_oc_singlenode(duration, dt, state, target, control, target_input, model_controlled.cost_history)
Compute control for a deterministic system
Cost in iteration 0: 0.5533851530971279
Cost in iteration 25: 0.2424229146183965
Cost in iteration 50: 0.1584467235220361
Cost in iteration 75: 0.12000029040838786
Cost in iteration 100: 0.09606458437628636
Cost in iteration 125: 0.07875899052824148
Cost in iteration 150: 0.06567349888722097
Cost in iteration 175: 0.055617171219608186
Cost in iteration 200: 0.04682087916195195
Cost in iteration 225: 0.03978086855629591
Cost in iteration 250: 0.03392391540076884
Cost in iteration 275: 0.028992099916335258
Cost in iteration 300: 0.024790790776996006
Cost in iteration 325: 0.021330380416435698
Cost in iteration 350: 0.018279402174332753
Cost in iteration 375: 0.01576269909191436
Cost in iteration 400: 0.013565848707923062
Cost in iteration 425: 0.011714500580338114
Cost in iteration 450: 0.009981011218383677
Cost in iteration 475: 0.008597600155106654
Cost in iteration 500: 0.007380756958683128
Final cost : 0.007380756958683128
# Do another 100 iterations if you want to.
# Repeated execution will continue with a further 100 iterations.
model_controlled.optimize(100)
state = model_controlled.get_xs()
control = model_controlled.control
plot_oc_singlenode(duration, dt, state, target, control, target_input, model_controlled.cost_history)
Compute control for a deterministic system
Cost in iteration 0: 0.007380756958683128
Cost in iteration 25: 0.0063153874519220445
Cost in iteration 50: 0.00541103301473969
Cost in iteration 75: 0.004519862815977447
Cost in iteration 100: 0.003828425847813115
Final cost : 0.003828425847813115
Network of neural populations (no delay)
Let us now study a simple 2-node network of FHN oscillators. We first define the coupling matrix and the distance matrix. We can then initialize the model.
cmat = np.array([[0., 0.5], [1., 0.]])  # diagonal elements are zero; connection strength is 1 from node 0 to node 1, and 0.5 from node 1 to node 0
dmat = np.array([[0., 0.], [0., 0.]])  # no delay

if model.name == 'fhn':
    model = FHNModel(Cmat=cmat, Dmat=dmat)
elif model.name == 'hopf':
    model = HopfModel(Cmat=cmat, Dmat=dmat)

# We define the control input matrix to enable or disable certain channels and nodes
control_mat = np.zeros((model.params.N, len(model.state_vars)))
control_mat[0, 0] = 1.  # only allow inputs in the x-channel of node 0

if control_mat[0, 0] == 0. and control_mat[1, 0] == 0.:
    # if x is an input channel, high connection strength can lead to numerical issues
    model.params.K_gl = 5.  # increase for stronger connectivity; WARNING: too high a value will cause numerical problems

model.params["duration"] = duration
zero_input = ZeroInput().generate_input(duration=duration + dt, dt=dt)
input = np.copy(zero_input)
input[0, 1:-3] = np.sin(np.arange(0, duration - 0.3, dt))  # other functions or random values can be used as well
model.params["xs_init"] = np.vstack([x_init, x_init])
model.params["ys_init"] = np.vstack([y_init, y_init])

# We set the stimulus in the x and y variables, and run the simulation
input_nw = np.concatenate((np.vstack([control_mat[0, 0] * input, control_mat[0, 1] * input])[np.newaxis, :, :],
                           np.vstack([control_mat[1, 0] * input, control_mat[1, 1] * input])[np.newaxis, :, :]), axis=0)
zero_input_nw = np.concatenate((np.vstack([zero_input, zero_input])[np.newaxis, :, :],
                                np.vstack([zero_input, zero_input])[np.newaxis, :, :]), axis=0)

model.params["x_ext"] = input_nw[:, 0, :]
model.params["y_ext"] = input_nw[:, 1, :]

model.run()

# Define the result of the stimulation as target
target = np.concatenate((np.concatenate((model.params["xs_init"], model.params["ys_init"]), axis=1)[:, :, np.newaxis],
                         np.stack((model.x, model.y), axis=1)), axis=2)

# Remove stimuli and re-run the simulation
model.params["x_ext"] = zero_input_nw[:, 0, :]
model.params["y_ext"] = zero_input_nw[:, 1, :]
model.run()

# Combine initial values and the simulation result into one array
state = np.concatenate((np.concatenate((model.params["xs_init"], model.params["ys_init"]), axis=1)[:, :, np.newaxis],
                        np.stack((model.x, model.y), axis=1)), axis=2)

plot_oc_network(model.params.N, duration, dt, state, target, zero_input_nw, input_nw)
# We define the precision matrix to specify in which nodes and channels we measure deviations from the target
cost_mat = np.zeros((model.params.N, len(model.output_vars)))
cost_mat[1, 0] = 1.  # only measure in the x-channel of node 1

# We set the external stimulation to zero. This is the "initial guess" for the OC algorithm
model.params["x_ext"] = zero_input_nw[:, 0, :]
model.params["y_ext"] = zero_input_nw[:, 1, :]

# We load the optimal control class
# print_array (optional parameter) defines for which iterations intermediate results will be printed
# Parameters will be taken from the input model
if model.name == 'fhn':
    model_controlled = oc_fhn.OcFhn(model, target, print_array=np.arange(0, 501, 25), control_matrix=control_mat, cost_matrix=cost_mat)
elif model.name == 'hopf':
    model_controlled = oc_hopf.OcHopf(model, target, print_array=np.arange(0, 501, 25), control_matrix=control_mat, cost_matrix=cost_mat)

# We run 500 iterations of the optimal control gradient descent algorithm
model_controlled.optimize(500)

state = model_controlled.get_xs()
control = model_controlled.control

plot_oc_network(model.params.N, duration, dt, state, target, control, input_nw, model_controlled.cost_history, model_controlled.step_sizes_history)
Compute control for a deterministic system
Cost in iteration 0: 0.26634675059119883
Cost in iteration 25: 0.007720097126561841
Cost in iteration 50: 0.0034680947661811417
Cost in iteration 75: 0.0019407060206991053
Cost in iteration 100: 0.0014869014234351792
Cost in iteration 125: 0.0012416880831819742
Cost in iteration 150: 0.001092671530708714
Cost in iteration 175: 0.0009785714578839102
Cost in iteration 200: 0.0008690983607758308
Cost in iteration 225: 0.0007820993626886098
Cost in iteration 250: 0.0007014496869583778
Cost in iteration 275: 0.0006336452348537255
Cost in iteration 300: 0.0005674277634957603
Cost in iteration 325: 0.0005103364437866347
Cost in iteration 350: 0.0004672824975699639
Cost in iteration 375: 0.0004270480894871664
Cost in iteration 400: 0.00038299359917410083
Cost in iteration 425: 0.00033863450743146543
Cost in iteration 450: 0.0002822096745731488
Cost in iteration 475: 0.00025498430139333237
Cost in iteration 500: 0.0002317087704141942
Final cost : 0.0002317087704141942
# Do another 100 iterations if you want to.
# Repeated execution will continue with a further 100 iterations.
model_controlled.optimize(100)
state = model_controlled.get_xs()
control = model_controlled.control
plot_oc_network(model.params.N, duration, dt, state, target, control, input_nw, model_controlled.cost_history, model_controlled.step_sizes_history)
Compute control for a deterministic system
Cost in iteration 0: 0.0002317087704141942
Cost in iteration 25: 0.00021249031308297534
Cost in iteration 50: 0.00019830797443039547
Cost in iteration 75: 0.0001844977342872052
Cost in iteration 100: 0.00017230020232441738
Final cost : 0.00017230020232441738
Delayed network of neural populations
We now consider a network topology with delayed signalling between the two nodes.
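Before setting up the model, it may help to see how a distance matrix turns into discrete delays. The sketch below assumes the common convention (delay = distance / signal speed, rounded to the integration grid); the variable names and the value of signalV are illustrative, not read from the model.

```python
import numpy as np

# Illustrative sketch: a distance matrix is converted into delay steps by
# dividing by the signal speed (params.signalV) and rounding to the time step dt.
dmat = np.array([[0.0, 0.0], [18.0, 0.0]])  # distance from node 0 to node 1
signalV = 20.0  # assumed signal speed; neurolib reads this from params.signalV
dt = 0.1
delay_steps = np.around(dmat / signalV / dt).astype(int)
max_delay = int(delay_steps.max())  # initial conditions must cover max_delay + 1 steps
```

This is why the cell below fills the initial-value arrays with maxdelay + 1 copies of the last simulated state.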
cmat = np.array([[0., 0.], [1., 0.]])  # diagonal elements are zero; connection strength is 1 from node 0 to node 1
dmat = np.array([[0., 0.], [18, 0.]])  # distance from node 0 to node 1; the delay is computed by dividing by the signal speed params.signalV

if model.name == 'fhn':
    model = FHNModel(Cmat=cmat, Dmat=dmat)
elif model.name == 'hopf':
    model = HopfModel(Cmat=cmat, Dmat=dmat)

duration, dt = 2000., 0.1
model.params.duration = duration
model.params.dt = dt

# Change the coupling parameters for a faster and stronger connection between nodes
model.params.K_gl = 1.

model.params.x_ext = np.zeros((1,))
model.params.y_ext = np.zeros((1,))

model.run()

e0 = model.x[0, -1]
e1 = model.x[1, -1]
i0 = model.y[0, -1]
i1 = model.y[1, -1]

maxdelay = model.getMaxDelay()

model.params["xs_init"] = np.array([[e0] * (maxdelay + 1), [e1] * (maxdelay + 1)])
model.params["ys_init"] = np.array([[i0] * (maxdelay + 1), [i1] * (maxdelay + 1)])

duration = 6.
model.params.duration = duration
time = np.arange(dt, duration + dt, dt)

# We define the control input matrix to enable or disable certain channels and nodes
control_mat = np.zeros((model.params.N, len(model.state_vars)))
control_mat[0, 0] = 1.  # only allow inputs in the x-channel of node 0

zero_input = ZeroInput().generate_input(duration=duration + dt, dt=dt)
input = np.copy(zero_input)
input[0, 10] = 1.
input[0, 20] = 1.
input[0, 30] = 1.  # three pulses as control input

input_nw = np.concatenate((np.vstack([control_mat[0, 0] * input, control_mat[0, 1] * input])[np.newaxis, :, :],
                           np.vstack([control_mat[1, 0] * input, control_mat[1, 1] * input])[np.newaxis, :, :]), axis=0)
zero_input_nw = np.concatenate((np.vstack([zero_input, zero_input])[np.newaxis, :, :],
                                np.vstack([zero_input, zero_input])[np.newaxis, :, :]), axis=0)

model.params["x_ext"] = input_nw[:, 0, :]
model.params["y_ext"] = input_nw[:, 1, :]

model.params["xs_init"] = np.array([[e0] * (maxdelay + 1), [e1] * (maxdelay + 1)])
model.params["ys_init"] = np.array([[i0] * (maxdelay + 1), [i1] * (maxdelay + 1)])
model.run()

# Define the result of the stimulation as target
target = np.concatenate((np.stack((model.params["xs_init"][:, -1], model.params["ys_init"][:, -1]), axis=1)[:, :, np.newaxis],
                         np.stack((model.x, model.y), axis=1)), axis=2)

# Remove stimuli and re-run the simulation
model.params["x_ext"] = zero_input_nw[:, 0, :]
model.params["y_ext"] = zero_input_nw[:, 1, :]
model.run()

# Combine initial values and the simulation result into one array
state = np.concatenate((np.stack((model.params["xs_init"][:, -1], model.params["ys_init"][:, -1]), axis=1)[:, :, np.newaxis],
                        np.stack((model.x, model.y), axis=1)), axis=2)
plot_oc_network(model.params.N, duration, dt, state, target, zero_input_nw, input_nw)
# We set the external stimulation to zero. This is the "initial guess" for the OC algorithm
model.params["x_ext"] = zero_input_nw[:, 0, :]
model.params["y_ext"] = zero_input_nw[:, 1, :]

# We load the optimal control class
# print_array (optional parameter) defines for which iterations intermediate results will be printed
# Parameters will be taken from the input model
if model.name == "fhn":
    model_controlled = oc_fhn.OcFhn(model, target, print_array=np.arange(0, 501, 25), control_matrix=control_mat, cost_matrix=cost_mat)
elif model.name == "hopf":
    model_controlled = oc_hopf.OcHopf(model, target, print_array=np.arange(0, 501, 25), control_matrix=control_mat, cost_matrix=cost_mat)

# We run 500 iterations of the optimal control gradient descent algorithm
model_controlled.optimize(500)

state = model_controlled.get_xs()
control = model_controlled.control
plot_oc_network(model.params.N, duration, dt, state, target, control, input_nw, model_controlled.cost_history, model_controlled.step_sizes_history)
Compute control for a deterministic system
Cost in iteration 0: 0.0011947065709511494
Cost in iteration 25: 1.8995713965492315e-05
Cost in iteration 50: 1.2661264833225136e-05
Cost in iteration 75: 9.010644155785715e-06
Cost in iteration 100: 6.820944851923922e-06
Cost in iteration 125: 5.474911745391518e-06
Cost in iteration 150: 4.530608100186918e-06
Cost in iteration 175: 3.927022075378679e-06
Cost in iteration 200: 3.506301912798229e-06
Cost in iteration 225: 3.1905412820140275e-06
Cost in iteration 250: 2.9567061175703895e-06
Cost in iteration 275: 2.7741407209279735e-06
Cost in iteration 300: 2.625794937490633e-06
Cost in iteration 325: 2.502192369572658e-06
Cost in iteration 350: 2.3959920314309043e-06
Cost in iteration 375: 2.303282831253012e-06
Cost in iteration 400: 2.220451776797742e-06
Cost in iteration 425: 2.1458248650643056e-06
Cost in iteration 450: 2.0775097671229942e-06
Cost in iteration 475: 2.0119242553645737e-06
Cost in iteration 500: 1.953220604966201e-06
Final cost : 1.953220604966201e-06
# Perform another 100 iterations to improve the result.
# Repeated execution will add a further 100 iterations.
model_controlled.optimize(100)
state = model_controlled.get_xs()
control = model_controlled.control
plot_oc_network(model.params.N, duration, dt, state, target, control, input_nw, model_controlled.cost_history, model_controlled.step_sizes_history)
Compute control for a deterministic system
Cost in iteration 0: 1.953220604966201e-06
Cost in iteration 25: 1.8983582753730346e-06
Cost in iteration 50: 1.8467668220809676e-06
Cost in iteration 75: 1.798071064385974e-06
Cost in iteration 100: 1.7518998980010873e-06
Final cost : 1.7518998980010873e-06
Optimal control of the deterministic Wilson-Cowan model

This notebook shows how to compute the optimal control (OC) signal for the Wilson-Cowan model for a simple example task.
import matplotlib.pyplot as plt
import numpy as np
import os

while os.getcwd().split(os.sep)[-1] != "neurolib":
    os.chdir('..')

# We import the model, stimuli, and the optimal control package
from neurolib.models.wc import WCModel
from neurolib.utils.stimulus import ZeroInput
from neurolib.control.optimal_control import oc_wc
from neurolib.utils.plot_oc import plot_oc_singlenode, plot_oc_network

# This will reload all imports as soon as the code changes
%load_ext autoreload
%autoreload 2
We stimulate the system with a known control signal, define the resulting activity as the target, and then compute the optimal control for this target. We choose the weights such that only precision is penalized (w_p=1, w_2=0), i.e. the energy cost does not contribute. Hence, the optimal control signal should converge to the input signal.
# We import the model
model = WCModel()

# Some parameters to define stimulation signals
dt = model.params["dt"]
duration = 10.
amplitude = 1.
period = duration / 4.

# We define a "zero-input", and a sine-input
zero_input = ZeroInput().generate_input(duration=duration + dt, dt=dt)
input = np.copy(zero_input)
input[0, 1:-1] = amplitude * np.sin(2. * np.pi * np.arange(0, duration - 0.1, dt) / period)  # other functions or random values can be used as well

# We set the duration of the simulation and the initial values
model.params["duration"] = duration
x_init = 0.011225367461896877
y_init = 0.013126741089502588
model.params["exc_init"] = np.array([[x_init]])
model.params["inh_init"] = np.array([[y_init]])
# We set the stimulus in the exc and inh variables, and run the simulation
model.params["exc_ext"] = input
model.params["inh_ext"] = zero_input
model.run()

# Define the result of the stimulation as target
target = np.concatenate((np.concatenate((model.params["exc_init"], model.params["inh_init"]), axis=1)[:, :, np.newaxis],
                         np.stack((model.exc, model.inh), axis=1)), axis=2)
target_input = np.concatenate((input, zero_input), axis=0)[np.newaxis, :, :]

# Remove stimuli and re-run the simulation
model.params["exc_ext"] = zero_input
model.params["inh_ext"] = zero_input
control = np.concatenate((zero_input, zero_input), axis=0)[np.newaxis, :, :]
model.run()

# Combine initial values and the simulation result into one array
state = np.concatenate((np.concatenate((model.params["exc_init"], model.params["inh_init"]), axis=1)[:, :, np.newaxis],
                        np.stack((model.exc, model.inh), axis=1)), axis=2)

plot_oc_singlenode(duration, dt, state, target, control, target_input)
# We set the external stimulation to zero. This is the "initial guess" for the OC algorithm
model.params["exc_ext"] = zero_input
model.params["inh_ext"] = zero_input

# We load the optimal control class
# print_array (optional parameter) defines for which iterations intermediate results will be printed
# Parameters will be taken from the input model
model_controlled = oc_wc.OcWc(model, target, print_array=np.arange(0, 501, 25))
model_controlled.weights["w_p"] = 1.  # default value 1
model_controlled.weights["w_2"] = 0.  # default value 0

# We run 500 iterations of the optimal control gradient descent algorithm
model_controlled.optimize(500)

state = model_controlled.get_xs()
control = model_controlled.control

plot_oc_singlenode(duration, dt, state, target, control, target_input, model_controlled.cost_history)
Compute control for a deterministic system
Cost in iteration 0: 0.00041810554198290294
Cost in iteration 25: 1.0532102454109209e-05
Cost in iteration 50: 3.925315729100555e-06
Cost in iteration 75: 2.1054588334476998e-06
Cost in iteration 100: 1.398320694183479e-06
Cost in iteration 125: 1.0229387100203843e-06
Cost in iteration 150: 7.974333735234386e-07
Cost in iteration 175: 6.521115340266662e-07
Cost in iteration 200: 5.444869100157712e-07
Cost in iteration 225: 4.64536510299819e-07
Cost in iteration 250: 4.017338930501393e-07
Cost in iteration 275: 3.5110841320809306e-07
Cost in iteration 300: 3.096084004886465e-07
Cost in iteration 325: 2.752219772816687e-07
Cost in iteration 350: 2.466122217504442e-07
Cost in iteration 375: 2.2171404739100818e-07
Cost in iteration 400: 2.0072190143053269e-07
Cost in iteration 425: 1.8306021177634902e-07
Cost in iteration 450: 1.6681651877735875e-07
Cost in iteration 475: 1.5334951215981366e-07
Cost in iteration 500: 1.409261374589448e-07
Final cost : 1.409261374589448e-07
# Do another 100 iterations if you want to.
# Repeated execution will continue with a further 100 iterations.
model_controlled.optimize(100)
state = model_controlled.get_xs()
control = model_controlled.control
plot_oc_singlenode(duration, dt, state, target, control, target_input, model_controlled.cost_history)
Compute control for a deterministic system
Cost in iteration 0: 1.409261374589448e-07
Cost in iteration 25: 1.3051113114486073e-07
Cost in iteration 50: 1.2069164098268257e-07
Cost in iteration 75: 1.1215971283577606e-07
Cost in iteration 100: 1.0456327452784617e-07
Final cost : 1.0456327452784617e-07
Network case
Let us now study a simple 2-node network of model oscillators. We first define the coupling matrix and the distance matrix. We can then initialize the model.
cmat = np.array([[0., 0.5], [1., 0.]])  # diagonal elements are zero; connection strength is 1 from node 0 to node 1, and 0.5 from node 1 to node 0
dmat = np.array([[0., 0.], [0., 0.]])  # no delay

model = WCModel(Cmat=cmat, Dmat=dmat)

# We define the control input matrix to enable or disable certain channels and nodes
control_mat = np.zeros((model.params.N, len(model.state_vars)))
control_mat[0, 0] = 1.  # only allow inputs in the exc-channel of node 0

model.params.K_gl = 5.

model.params["duration"] = duration
zero_input = ZeroInput().generate_input(duration=duration + dt, dt=dt)
input = np.copy(zero_input)
input[0, 1:-3] = np.sin(np.arange(0, duration - 0.3, dt))  # other functions or random values can be used as well
model.params["exc_init"] = np.vstack([x_init, x_init])
model.params["inh_init"] = np.vstack([y_init, y_init])

# We set the stimulus in the exc and inh variables, and run the simulation
input_nw = np.concatenate((np.vstack([control_mat[0, 0] * input, control_mat[0, 1] * input])[np.newaxis, :, :],
                           np.vstack([control_mat[1, 0] * input, control_mat[1, 1] * input])[np.newaxis, :, :]), axis=0)
zero_input_nw = np.concatenate((np.vstack([zero_input, zero_input])[np.newaxis, :, :],
                                np.vstack([zero_input, zero_input])[np.newaxis, :, :]), axis=0)

model.params["exc_ext"] = input_nw[:, 0, :]
model.params["inh_ext"] = input_nw[:, 1, :]

model.run()

# Define the result of the stimulation as target
target = np.concatenate((np.concatenate((model.params["exc_init"], model.params["inh_init"]), axis=1)[:, :, np.newaxis],
                         np.stack((model.exc, model.inh), axis=1)), axis=2)

# Remove stimuli and re-run the simulation
model.params["exc_ext"] = zero_input_nw[:, 0, :]
model.params["inh_ext"] = zero_input_nw[:, 1, :]
model.run()

# Combine initial values and the simulation result into one array
state = np.concatenate((np.concatenate((model.params["exc_init"], model.params["inh_init"]), axis=1)[:, :, np.newaxis],
                        np.stack((model.exc, model.inh), axis=1)), axis=2)

plot_oc_network(model.params.N, duration, dt, state, target, zero_input_nw, input_nw)
# We define the precision matrix to specify in which nodes and channels we measure deviations from the target
cost_mat = np.zeros((model.params.N, len(model.output_vars)))
cost_mat[1, 0] = 1.  # only measure in the exc-channel of node 1

# We set the external stimulation to zero. This is the "initial guess" for the OC algorithm
model.params["exc_ext"] = zero_input_nw[:, 0, :]
model.params["inh_ext"] = zero_input_nw[:, 1, :]

# We load the optimal control class
# print_array (optional parameter) defines for which iterations intermediate results will be printed
# Parameters will be taken from the input model
model_controlled = oc_wc.OcWc(model, target, print_array=np.arange(0, 501, 25), control_matrix=control_mat, cost_matrix=cost_mat)

# We run 500 iterations of the optimal control gradient descent algorithm
model_controlled.optimize(500)

state = model_controlled.get_xs()
control = model_controlled.control

plot_oc_network(model.params.N, duration, dt, state, target, control, input_nw, model_controlled.cost_history, model_controlled.step_sizes_history)
Compute control for a deterministic system
Cost in iteration 0: 8.117061134315108e-06
Cost in iteration 25: 4.0329637221407195e-07
Cost in iteration 50: 2.133706589679289e-07
Cost in iteration 75: 1.0846418185856119e-07
Cost in iteration 100: 6.237553898673198e-08
Cost in iteration 125: 3.607365058691262e-08
Cost in iteration 150: 2.2496421814207724e-08
Cost in iteration 175: 1.5886138922670738e-08
Cost in iteration 200: 1.1727415781910453e-08
Cost in iteration 225: 9.005487959890062e-09
Cost in iteration 250: 7.191281120908631e-09
Cost in iteration 275: 5.835744371001404e-09
Cost in iteration 300: 4.915806895112334e-09
Cost in iteration 325: 4.206672224203755e-09
Cost in iteration 350: 3.6916483993194285e-09
Cost in iteration 375: 3.2948161905145206e-09
Cost in iteration 400: 2.9837006122863342e-09
Cost in iteration 425: 2.7310136209212046e-09
Cost in iteration 450: 2.5267282859627983e-09
Cost in iteration 475: 2.352356874896669e-09
Cost in iteration 500: 2.2057268519628175e-09
Final cost : 2.2057268519628175e-09
# Do another 100 iterations if you want to.
# Repeated execution will continue with a further 100 iterations.
model_controlled.optimize(100)
state = model_controlled.get_xs()
control = model_controlled.control
plot_oc_network(model.params.N, duration, dt, state, target, control, input_nw, model_controlled.cost_history, model_controlled.step_sizes_history)
Compute control for a deterministic system
Cost in iteration 0: 2.2057268519628175e-09
Cost in iteration 25: 2.079569265893922e-09
Cost in iteration 50: 1.969986550217457e-09
Cost in iteration 75: 1.874389888067335e-09
Cost in iteration 100: 1.7855706988225455e-09
Final cost : 1.7855706988225455e-09
Delayed network of neural populations
We now consider a network topology with delayed signalling between the two nodes.
cmat = np.array([[0., 0.], [1., 0.]])  # diagonal elements are zero; connection strength is 1 from node 0 to node 1
dmat = np.array([[0., 0.], [18, 0.]])  # distance from node 0 to node 1; the delay is computed by dividing by the signal speed params.signalV

model = WCModel(Cmat=cmat, Dmat=dmat)

duration, dt = 2000., 0.1
model.params.duration = duration
model.params.dt = dt
model.params.K_gl = 10.

model.run()

e0 = model.exc[0, -1]
e1 = model.exc[1, -1]
i0 = model.inh[0, -1]
i1 = model.inh[1, -1]

maxdelay = model.getMaxDelay()

model.params["exc_init"] = np.array([[e0] * (maxdelay + 1), [e1] * (maxdelay + 1)])
model.params["inh_init"] = np.array([[i0] * (maxdelay + 1), [i1] * (maxdelay + 1)])

duration = 6.
model.params.duration = duration
model.run()

# We define the control input matrix to enable or disable certain channels and nodes
control_mat = np.zeros((model.params.N, len(model.state_vars)))
control_mat[0, 0] = 1.  # only allow inputs in the exc-channel of node 0

zero_input = ZeroInput().generate_input(duration=duration + dt, dt=dt)
input = zero_input.copy()
input[0, 10] = 1.
input[0, 20] = 1.
input[0, 30] = 1.  # three pulses as control input

input_nw = np.concatenate((np.vstack([control_mat[0, 0] * input, control_mat[0, 1] * input])[np.newaxis, :, :],
                           np.vstack([control_mat[1, 0] * input, control_mat[1, 1] * input])[np.newaxis, :, :]), axis=0)
zero_input_nw = np.concatenate((np.vstack([zero_input, zero_input])[np.newaxis, :, :],
                                np.vstack([zero_input, zero_input])[np.newaxis, :, :]), axis=0)

model.params["exc_ext"] = input_nw[:, 0, :]
model.params["inh_ext"] = input_nw[:, 1, :]
model.run()

# Define the result of the stimulation as target
target = np.concatenate((np.stack((model.params["exc_init"][:, -1], model.params["inh_init"][:, -1]), axis=1)[:, :, np.newaxis],
                         np.stack((model.exc, model.inh), axis=1)), axis=2)

# Remove stimuli and re-run the simulation
model.params["exc_ext"] = zero_input_nw[:, 0, :]
model.params["inh_ext"] = zero_input_nw[:, 1, :]
model.run()

# Combine initial values and the simulation result into one array
state = np.concatenate((np.stack((model.params["exc_init"][:, -1], model.params["inh_init"][:, -1]), axis=1)[:, :, np.newaxis],
                        np.stack((model.exc, model.inh), axis=1)), axis=2)
plot_oc_network(model.params.N, duration, dt, state, target, zero_input_nw, input_nw)
# We load the optimal control class
# print_array (optional parameter) defines for which iterations intermediate results will be printed
# Parameters will be taken from the input model
model_controlled = oc_wc.OcWc(model, target, print_array=np.arange(0, 501, 25), control_matrix=control_mat, cost_matrix=cost_mat)

# We run 500 iterations of the optimal control gradient descent algorithm
model_controlled.optimize(500)

state = model_controlled.get_xs()
control = model_controlled.control
plot_oc_network(model.params.N, duration, dt, state, target, control, input_nw, model_controlled.cost_history, model_controlled.step_sizes_history)
Compute control for a deterministic system
Cost in iteration 0: 1.792835053390993e-07
Cost in iteration 25: 3.224858708247228e-10
Cost in iteration 50: 1.0235990384283723e-10
Cost in iteration 75: 8.627681277851615e-11
Cost in iteration 100: 8.09708890397755e-11
Cost in iteration 125: 6.901547805762654e-11
Cost in iteration 150: 6.563898918059379e-11
Cost in iteration 175: 6.358322097910284e-11
Cost in iteration 200: 5.819126634851626e-11
Cost in iteration 225: 5.598411882794661e-11
Cost in iteration 250: 5.458351655389417e-11
Cost in iteration 275: 5.101837452145287e-11
Cost in iteration 300: 4.9526343719852504e-11
Cost in iteration 325: 4.872279762423021e-11
Cost in iteration 350: 4.599347400927492e-11
Cost in iteration 375: 4.5049466495032303e-11
Cost in iteration 400: 4.32863678958512e-11
Cost in iteration 425: 4.241565430129624e-11
Cost in iteration 450: 4.121896579349796e-11
Cost in iteration 475: 4.036542019862459e-11
Cost in iteration 500: 3.990804399212831e-11
Final cost : 3.990804399212831e-11
# Perform another 100 iterations to improve the result.
# Repeated execution will add a further 100 iterations.
# Convergence to the input stimulus is relatively slow for the WC model.
model_controlled.optimize(100)
state = model_controlled.get_xs()
control = model_controlled.control
plot_oc_network(model.params.N, duration, dt, state, target, control, input_nw, model_controlled.cost_history, model_controlled.step_sizes_history)
Compute control for a deterministic system
Cost in iteration 0: 3.990804399212831e-11
Cost in iteration 25: 3.8701660107380814e-11
Cost in iteration 50: 3.8275743610357815e-11
Cost in iteration 75: 3.731362663528545e-11
Cost in iteration 100: 3.694171527929222e-11
Final cost : 3.694171527929222e-11
This notebook shows how to compute the optimal control (OC) signal for the noisy WC model for a simple example task.
+
+
+
+
+
+
+
import matplotlib.pyplot as plt
+import numpy as np
+import os
+
+while os.getcwd().split(os.sep)[-1] != "neurolib":
+    os.chdir('..')
+
+# We import the model, stimuli, and the optimal control package
+from neurolib.models.wc import WCModel
+from neurolib.utils.stimulus import ZeroInput
+from neurolib.control.optimal_control import oc_wc
+from neurolib.utils.plot_oc import plot_oc_singlenode, plot_oc_network
+
+# This will reload all imports as soon as the code changes
+%load_ext autoreload
+%autoreload 2
+
+
+
+
+
+
+
+
We stimulate the system with a known control signal, define the resulting activity as target, and compute the optimal control for this target. We define weights such that precision is penalized only (w_p=1, w_2=0). Hence, the optimal control signal should converge to the input signal.
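The weights correspond to a cost functional with a precision term and a control-energy term. As a minimal sketch of this idea (an illustrative stand-in, not neurolib's internal implementation — the discretization and function name are assumptions):

```python
import numpy as np

def oc_cost(state, target, control, dt, w_p=1.0, w_2=0.0):
    # Hypothetical quadratic OC cost: precision term plus control-energy term.
    # Not neurolib's actual implementation; shown only to illustrate the weights.
    precision = 0.5 * w_p * np.sum((state - target) ** 2) * dt
    energy = 0.5 * w_2 * np.sum(control ** 2) * dt
    return precision + energy

# With w_2 = 0, only precision is penalized: the cost vanishes when state == target,
# regardless of how strong the control is.
target = np.zeros((1, 2, 100))
assert oc_cost(target, target, np.ones((1, 2, 100)), dt=0.1) == 0.0
```

This is why, with w_p=1 and w_2=0, the optimizer is free to reproduce the input signal exactly: deviations from the target are the only thing that costs anything.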
+
+
+
+
+
+
+
# We import the model
+model = WCModel()
+
+# Set the noise strength to zero to define the target state
+model.params.sigma_ou = 0.
+
+# Some parameters to define stimulation signals
+dt = model.params["dt"]
+duration = 40.
+amplitude = 1.
+period = duration / 4.
+
+# We define a "zero-input" and a sine-input
+zero_input = ZeroInput().generate_input(duration=duration + dt, dt=dt)
+input = np.copy(zero_input)
+input[0, 1:-1] = amplitude * np.sin(2. * np.pi * np.arange(0, duration - 0.1, dt) / period)  # other functions or random values can be used as well
+
+# We set the duration of the simulation and the initial values
+model.params["duration"] = duration
+x_init = 0.011225367461896877
+y_init = 0.013126741089502588
+model.params["exc_init"] = np.array([[x_init]])
+model.params["inh_init"] = np.array([[y_init]])
+
+# We set the stimulus in x and y variables, and run the simulation
+model.params["exc_ext"] = input
+model.params["inh_ext"] = zero_input
+model.run()
+
+# Define the result of the stimulation as target
+target = np.concatenate((np.concatenate((model.params["exc_init"], model.params["inh_init"]), axis=1)[:, :, np.newaxis],
+                         np.stack((model.exc, model.inh), axis=1)), axis=2)
+target_input = np.concatenate((input, zero_input), axis=0)[np.newaxis, :, :]
+
+# Remove stimuli and re-run the simulation
+# Change the sigma_ou parameter to adjust the noise strength
+model.params['sigma_ou'] = 0.1
+model.params['tau_ou'] = 1.
+model.params["exc_ext"] = zero_input
+model.params["inh_ext"] = zero_input
+control = np.concatenate((zero_input, zero_input), axis=0)[np.newaxis, :, :]
+model.run()
+
+# Combine initial value and simulation result into one array
+state = np.concatenate((np.concatenate((model.params["exc_init"], model.params["inh_init"]), axis=1)[:, :, np.newaxis],
+                        np.stack((model.exc, model.inh), axis=1)), axis=2)
+
+plot_oc_singlenode(duration, dt, state, target, control, target_input)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
The target is a periodic oscillation of the x and y variables (computed in the deterministic, noise-free system).
+
The noisy, undisturbed system fluctuates around zero.
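These fluctuations come from an Ornstein-Uhlenbeck (OU) noise process governed by the sigma_ou and tau_ou parameters. A generic Euler-Maruyama sketch (the exact parameterization inside neurolib may differ; the values are the ones used in this notebook) illustrates the stationary statistics:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_ou, tau_ou, dt = 0.1, 1.0, 0.1  # values used in this notebook (assumed units)
n_steps = 200_000

# Euler-Maruyama integration of the OU process dx = -x / tau * dt + sigma * dW
x = np.zeros(n_steps)
for t in range(1, n_steps):
    x[t] = x[t - 1] - x[t - 1] / tau_ou * dt + sigma_ou * np.sqrt(dt) * rng.standard_normal()

# Stationary statistics: mean ~ 0, std ~ sigma * sqrt(tau / 2) ~ 0.071
print(x[n_steps // 2:].mean(), x[n_steps // 2:].std())
```

The stationary spread grows with sigma_ou, which is why the control problem gets harder (and needs more noise realizations) as sigma_ou increases.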
+
For the optimization, you can now set several new parameters:
+- M: the number of noise realizations that the algorithm averages over. Default=1
+- M_validation: the number of noise realizations the final cost is computed from. Default=1000
+- validate_per_step: if True, the cost for each step is computed by averaging over M_validation instead of M realizations; this takes much longer. Default=False
+- method: determines how the noise averages are computed. Results may vary for different methods depending on the specific task. Choose from ['3']. Default='3'
+
Please note:
+- A higher number of iterations does not guarantee better results for computations in noisy systems. The cost will level off at some iteration number and start increasing again afterwards. Make sure not to perform too many iterations.
+- M and M_validation should increase with the sigma_ou model parameter.
+- validate_per_step does not impact the control result.
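The reason M and M_validation should grow with sigma_ou is standard Monte-Carlo scaling: the standard error of a noise-averaged cost estimate shrinks like 1/sqrt(M). A hedged sketch with synthetic cost samples (the numbers are hypothetical, not produced by the optimizer):

```python
import numpy as np

rng = np.random.default_rng(42)
true_cost, noise_std = 0.027, 0.01  # hypothetical values for illustration only

def estimate_cost(M):
    # Average M noisy cost evaluations, as the optimizer does per gradient step
    return rng.normal(true_cost, noise_std, size=M).mean()

# The spread of the estimate shrinks ~ 1/sqrt(M)
spread_20 = np.std([estimate_cost(20) for _ in range(2000)])
spread_500 = np.std([estimate_cost(500) for _ in range(2000)])
print(spread_20 / spread_500)  # roughly sqrt(500 / 20) = 5
```

Noisier models (larger sigma_ou) inflate noise_std, so more realizations are needed to keep the gradient and the validated cost equally reliable.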
+
Let's first optimize with the following parameters: M=20, iterations=100
+
+
+
+
+
+
+
# We set the external stimulation to zero. This is the "initial guess" for the OC algorithm
+model.params["exc_ext"] = zero_input
+model.params["inh_ext"] = zero_input
+
+# We load the optimal control class
+# print_array (optional parameter) defines for which iterations intermediate results will be printed
+# Parameters will be taken from the input model
+model_controlled = oc_wc.OcWc(model, target, print_array=np.arange(0, 101, 10),
+                              M=20, M_validation=500, validate_per_step=True)
+
+# We run 100 iterations of the optimal control gradient descent algorithm
+model_controlled.optimize(100)
+
+state = model_controlled.get_xs()
+control = model_controlled.control
+
+plot_oc_singlenode(duration, dt, state, target, control, target_input, model_controlled.cost_history)
+
+
+
+
+
+
+
+
+Compute control for a noisy system
+Mean cost in iteration 0: 0.0486299027821106
+Mean cost in iteration 10: 0.02795683316984877
+Mean cost in iteration 20: 0.027101411958439722
+Mean cost in iteration 30: 0.026543919519260453
+Mean cost in iteration 40: 0.026707819124178123
+Mean cost in iteration 50: 0.026786489900410732
+Mean cost in iteration 60: 0.026412584686262147
+Mean cost in iteration 70: 0.026425089398826186
+Mean cost in iteration 80: 0.026760368474147204
+Mean cost in iteration 90: 0.026954163211574594
+Mean cost in iteration 100: 0.027106734179733114
+Minimal cost found at iteration 36
+Final cost validated with 500 noise realizations : 0.02719992592343364
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Let's do the same thing with different parameters: M=100, iterations=30
+
+
+
+
+
+
+
# We set the external stimulation to zero. This is the "initial guess" for the OC algorithm
+model.params["exc_ext"] = zero_input
+model.params["inh_ext"] = zero_input
+
+# We load the optimal control class
+# print_array (optional parameter) defines for which iterations intermediate results will be printed
+# Parameters will be taken from the input model
+model_controlled = oc_wc.OcWc(model, target, print_array=np.arange(0, 31, 5),
+                              M=100, M_validation=500, validate_per_step=True)
+
+# We run 30 iterations of the optimal control gradient descent algorithm
+model_controlled.optimize(30)
+
+state = model_controlled.get_xs()
+control = model_controlled.control
+
+plot_oc_singlenode(duration, dt, state, target, control, target_input, model_controlled.cost_history)
+
+
+
+
+
+
+
+
+Compute control for a noisy system
+Mean cost in iteration 0: 0.044519683319845585
+Mean cost in iteration 5: 0.049139417017223554
+Mean cost in iteration 10: 0.050857609671347954
+Mean cost in iteration 15: 0.04663531486878592
+Mean cost in iteration 20: 0.046747345271133535
+Mean cost in iteration 25: 0.05112611753258763
+Mean cost in iteration 30: 0.04785865829049892
+Minimal cost found at iteration 27
+Final cost validated with 500 noise realizations : 0.045416281905513174
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Network case
+
+
+
+
+
+
+
Let us now study a simple 2-node network of model oscillators. We first need to define the coupling matrix and the delay matrix. We can then initialize the model.
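The convention in the cell below is that cmat[i, j] holds the connection strength from node j to node i, so the network input to each node is a matrix-vector product scaled by the global coupling K_gl. A sketch of the coupling term only (the full WC dynamics add local terms and a transfer function; the rate values are hypothetical):

```python
import numpy as np

cmat = np.array([[0., 0.5],
                 [1.0, 0.]])   # cmat[i, j]: strength from node j to node i
K_gl = 10.0                    # global coupling, as set in the cell below
rates = np.array([0.2, 0.4])   # hypothetical instantaneous node rates

# Network input to each node: weighted sum of the other nodes' rates
coupling_input = K_gl * cmat @ rates
print(coupling_input)  # node 0: 10 * 0.5 * 0.4 = 2.0; node 1: 10 * 1.0 * 0.2 = 2.0
```

Keeping the diagonal zero means a node does not couple to itself through this term.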
+
+
+
+
+
+
+
cmat = np.array([[0., 0.5], [1.0, 0.]])  # diagonal elements are zero; connection strength is 1 (0.5) from node 0 to node 1 (from node 1 to node 0)
+dmat = np.array([[0., 0.], [0., 0.]])  # no delay
+
+model = WCModel(Cmat=cmat, Dmat=dmat)
+
+# We define the control input matrix to enable or disable certain channels and nodes
+control_mat = np.zeros((model.params.N, len(model.state_vars)))
+control_mat[0, 0] = 1.  # only allow inputs in the x-channel of node 0
+
+model.params.K_gl = 10.
+
+# Set the noise strength to zero to define the target state
+model.params['sigma_ou'] = 0.
+
+model.params["duration"] = duration
+zero_input = ZeroInput().generate_input(duration=duration + dt, dt=dt)
+input = np.copy(zero_input)
+input[0, 1:-1] = amplitude * np.sin(2. * np.pi * np.arange(0, duration - 0.1, dt) / period)  # other functions or random values can be used as well
+model.params["exc_init"] = np.vstack([0.01255381969006173, 0.01190300495001282])
+model.params["inh_init"] = np.vstack([0.013492631513639169, 0.013312224583806076])
+
+
+# We set the stimulus in x and y variables, and run the simulation
+input_nw = np.concatenate((np.vstack([control_mat[0, 0] * input, control_mat[0, 1] * input])[np.newaxis, :, :],
+                           np.vstack([control_mat[1, 0] * input, control_mat[1, 1] * input])[np.newaxis, :, :]), axis=0)
+zero_input_nw = np.concatenate((np.vstack([zero_input, zero_input])[np.newaxis, :, :],
+                                np.vstack([zero_input, zero_input])[np.newaxis, :, :]), axis=0)
+
+model.params["exc_ext"] = input_nw[:, 0, :]
+model.params["inh_ext"] = input_nw[:, 1, :]
+
+model.run()
+
+# Define the result of the stimulation as target
+target = np.concatenate((np.concatenate((model.params["exc_init"], model.params["inh_init"]), axis=1)[:, :, np.newaxis],
+                         np.stack((model.exc, model.inh), axis=1)), axis=2)
+
+# Remove stimuli and re-run the simulation
+model.params['sigma_ou'] = 0.03
+model.params['tau_ou'] = 1.
+model.params["exc_ext"] = zero_input_nw[:, 0, :]
+model.params["inh_ext"] = zero_input_nw[:, 0, :]
+model.run()
+
+# Combine initial value and simulation result into one array
+state = np.concatenate((np.concatenate((model.params["exc_init"], model.params["inh_init"]), axis=1)[:, :, np.newaxis],
+                        np.stack((model.exc, model.inh), axis=1)), axis=2)
+
+plot_oc_network(model.params.N, duration, dt, state, target, zero_input_nw, input_nw)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Let's optimize with the following parameters: M=20, iterations=100
+
+
+
+
+
+
+
# We define the precision matrix to specify in which nodes and channels we measure deviations from the target
+cost_mat = np.zeros((model.params.N, len(model.output_vars)))
+cost_mat[1, 0] = 1.  # only measure in the x-channel of node 1
+
+# We set the external stimulation to zero. This is the "initial guess" for the OC algorithm
+model.params["exc_ext"] = zero_input_nw[:, 0, :]
+model.params["inh_ext"] = zero_input_nw[:, 0, :]
+
+# We load the optimal control class
+# print_array (optional parameter) defines for which iterations intermediate results will be printed
+# Parameters will be taken from the input model
+model_controlled = oc_wc.OcWc(model,
+                              target,
+                              print_array=np.arange(0, 101, 10),
+                              control_matrix=control_mat,
+                              cost_matrix=cost_mat,
+                              M=20,
+                              M_validation=500,
+                              validate_per_step=True)
+
+# We run 100 iterations of the optimal control gradient descent algorithm
+model_controlled.optimize(100)
+
+state = model_controlled.get_xs()
+control = model_controlled.control
+
+plot_oc_network(model.params.N, duration, dt, state, target, control, input_nw, model_controlled.cost_history, model_controlled.step_sizes_history)
+
+
+
+
+
+
+
+
+Compute control for a noisy system
+Mean cost in iteration 0: 0.0161042019653286
+Mean cost in iteration 10: 0.029701202083900886
+Mean cost in iteration 20: 0.02055100392146934
+Mean cost in iteration 30: 0.01824138412316584
+Mean cost in iteration 40: 0.01774943248604246
+Mean cost in iteration 50: 0.00938616563892467
+Mean cost in iteration 60: 0.013815979179667275
+Mean cost in iteration 70: 0.011677029951767951
+Mean cost in iteration 80: 0.03103645422939053
+Mean cost in iteration 90: 0.018355469642118635
+Mean cost in iteration 100: 0.021407393453975455
+Minimal cost found at iteration 67
+Final cost validated with 500 noise realizations : 0.02038125379192151
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Let's do the same thing with different parameters: M=100, iterations=30
+
+
+
+
+
+
+
# We set the external stimulation to zero. This is the "initial guess" for the OC algorithm
+model.params["exc_ext"] = zero_input_nw[:, 0, :]
+model.params["inh_ext"] = zero_input_nw[:, 0, :]
+
+# We load the optimal control class
+# print_array (optional parameter) defines for which iterations intermediate results will be printed
+# Parameters will be taken from the input model
+model_controlled = oc_wc.OcWc(model,
+                              target,
+                              print_array=np.arange(0, 31, 5),
+                              control_matrix=control_mat,
+                              cost_matrix=cost_mat,
+                              M=100,
+                              M_validation=500,
+                              validate_per_step=True)
+
+# We run 30 iterations of the optimal control gradient descent algorithm
+model_controlled.optimize(30)
+
+state = model_controlled.get_xs()
+control = model_controlled.control
+
+plot_oc_network(model.params.N, duration, dt, state, target, control, input_nw, model_controlled.cost_history, model_controlled.step_sizes_history)
+
+
+
+
+
+
+
+
+Compute control for a noisy system
+Mean cost in iteration 0: 0.01775755329403377
+Mean cost in iteration 5: 0.010280452998278504
+Mean cost in iteration 10: 0.01594708289308906
+Mean cost in iteration 15: 0.028644745813145765
+Mean cost in iteration 20: 0.030889247442364865
+Mean cost in iteration 25: 0.02629869930972565
+Mean cost in iteration 30: 0.017322464091192105
+Minimal cost found at iteration 21
+Final cost validated with 500 noise realizations : 0.04481574197020663
+
+
This notebook shows how to compute the optimal control (OC) signal for the ALN model for a simple example task.
+
+
+
+
+
+
+
import matplotlib.pyplot as plt
+import numpy as np
+import os
+
+while os.getcwd().split(os.sep)[-1] != "neurolib":
+    os.chdir('..')
+
+# We import the model, stimuli, and the optimal control package
+from neurolib.models.aln import ALNModel
+from neurolib.utils.stimulus import ZeroInput
+from neurolib.control.optimal_control import oc_aln
+from neurolib.utils.plot_oc import plot_oc_singlenode, plot_oc_network
+
+# This will reload all imports as soon as the code changes
+%load_ext autoreload
+%autoreload 2
+
+
+# This function reads out the final state of a simulation
+def getfinalstate(model):
+    N = model.params.Cmat.shape[0]
+    V = len(model.state_vars)
+    T = model.getMaxDelay() + 1
+    state = np.zeros((N, V, T))
+    for v in range(V):
+        if "rates" in model.state_vars[v] or "IA" in model.state_vars[v]:
+            for n in range(N):
+                state[n, v, :] = model.state[model.state_vars[v]][n, -T:]
+        else:
+            for n in range(N):
+                state[n, v, :] = model.state[model.state_vars[v]][n]
+    return state
+
+
+# This function sets the initial state of a model from a full state array
+def setinitstate(model, state):
+    N = model.params.Cmat.shape[0]
+    V = len(model.init_vars)
+    T = model.getMaxDelay() + 1
+
+    for n in range(N):
+        for v in range(V):
+            if "rates" in model.init_vars[v] or "IA" in model.init_vars[v]:
+                model.params[model.init_vars[v]] = state[:, v, -T:]
+            else:
+                model.params[model.init_vars[v]] = state[:, v, -1]
+
+    return
+
+
+# This function combines the initial values and the simulation result into one array
+def getstate(model):
+    state = np.concatenate((np.concatenate((model.params["rates_exc_init"][:, np.newaxis, -1],
+                                            model.params["rates_inh_init"][:, np.newaxis, -1],
+                                            model.params["IA_init"][:, np.newaxis, -1]), axis=1)[:, :, np.newaxis],
+                            np.stack((model.rates_exc, model.rates_inh, model.IA), axis=1)), axis=2)
+
+    return state
+
+
+
+
+
+
+
+
We stimulate the system with a known control signal, define the resulting activity as target, and compute the optimal control for this target. We define weights such that precision is penalized only (w_p=1, w_2=0). Hence, the optimal control signal should converge to the input signal.
+
We first study current inputs. We will later proceed to rate inputs.
+
+
+
+
+
+
+
# We initialize the model and let it settle into the up state
+model = ALNModel()
+model.params.duration = 10000
+model.params.mue_ext_mean = 2.  # up state
+model.run()
+setinitstate(model, getfinalstate(model))
+
+# Some parameters to define stimulation signals
+dt = model.params["dt"]
+duration = 10.
+amplitude = 1.
+period = duration / 4.
+
+# We define a "zero-input" and a sine-input
+zero_input = ZeroInput().generate_input(duration=duration + dt, dt=dt)
+input = np.copy(zero_input)
+input[0, 1:-1] = amplitude * np.sin(2. * np.pi * np.arange(0, duration - 0.1, dt) / period)  # other functions or random values can be used as well
+
+# We set the duration of the simulation and the initial values
+model.params["duration"] = duration
+
+# We set the stimulus in x and y variables, and run the simulation
+model.params["ext_exc_current"] = input
+model.params["ext_inh_current"] = zero_input
+model.params["ext_exc_rate"] = zero_input
+model.params["ext_inh_rate"] = zero_input
+
+model.run()
+
+# Define the result of the stimulation as target
+target = getstate(model)
+target_input = np.concatenate((input, zero_input, zero_input, zero_input), axis=0)[np.newaxis, :, :]
+
+# Remove stimuli and re-run the simulation
+model.params["ext_exc_current"] = zero_input
+model.params["ext_inh_current"] = zero_input
+control = np.concatenate((zero_input, zero_input, zero_input, zero_input), axis=0)[np.newaxis, :, :]
+model.run()
+
+# Combine initial value and simulation result into one array
+state = getstate(model)
+
+plot_oc_singlenode(duration, dt, state, target, control, target_input)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
# We load the optimal control class
+# print_array (optional parameter) defines for which iterations intermediate results will be printed
+# Parameters will be taken from the input model
+control_mat = np.zeros((1, len(model.input_vars)))
+control_mat[0, 0] = 1.
+cost_mat = np.zeros((1, len(model.output_vars)))
+cost_mat[0, 0] = 1.
+
+model_controlled = oc_aln.OcAln(model, target, print_array=np.arange(0, 501, 25), control_matrix=control_mat, cost_matrix=cost_mat)
+model_controlled.weights["w_p"] = 1.  # default value 1
+model_controlled.weights["w_2"] = 0.  # default value 0
+
+# We run 500 iterations of the optimal control gradient descent algorithm
+model_controlled.optimize(500)
+
+state = model_controlled.get_xs()
+control = model_controlled.control
+
+plot_oc_singlenode(duration, dt, state, target, control, target_input, model_controlled.cost_history)
+
+
+
+
+
+
+
+
+Compute control for a deterministic system
+Cost in iteration 0: 314.1247597247194
+Cost in iteration 25: 0.13317432824531167
+Cost in iteration 50: 0.025934764241784855
+Cost in iteration 75: 0.010689714898934012
+Cost in iteration 100: 0.006042649711908977
+Cost in iteration 125: 0.003852074448389804
+Cost in iteration 150: 0.0026454397557471756
+Cost in iteration 175: 0.0019048498068881534
+Cost in iteration 200: 0.0014175325285176437
+Cost in iteration 225: 0.0010832777739798686
+Cost in iteration 250: 0.0008270405756069322
+Cost in iteration 275: 0.000647747907643482
+Cost in iteration 300: 0.0005135789763737352
+Cost in iteration 325: 0.00041166220430455887
+Cost in iteration 350: 0.00033334319584000865
+Cost in iteration 375: 0.0002682483135493626
+Cost in iteration 400: 0.00021897331522083166
+Cost in iteration 425: 0.0001797951466810639
+Cost in iteration 450: 0.0001484385297291106
+Cost in iteration 475: 0.00012322292996632452
+Cost in iteration 500: 0.0001019978308262297
+Final cost : 0.0001019978308262297
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
# Do another 100 iterations if you want to.
+# Repeated execution will continue with a further 100 iterations.
+model_controlled.optimize(100)
+state = model_controlled.get_xs()
+control = model_controlled.control
+plot_oc_singlenode(duration, dt, state, target, control, target_input, model_controlled.cost_history)
+
+
+
+
+
+
+
+
+Compute control for a deterministic system
+Cost in iteration 0: 0.0001019978308262297
+Cost in iteration 25: 8.503577269809191e-05
+Cost in iteration 50: 7.113629148054069e-05
+Cost in iteration 75: 5.970536946996868e-05
+Cost in iteration 100: 5.02763560369055e-05
+Final cost : 5.02763560369055e-05
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Let us now look at a scenario with rate-type control inputs.
+
+
+
+
+
+
+
amplitude = 40.
+offset = 60.
+period = duration / 4.
+
+# We define a "zero-input", and a sine-input
+zero_input = ZeroInput().generate_input(duration=duration + dt, dt=dt)
+input = np.copy(zero_input)
+input[0, 1:-1] = offset + amplitude * np.sin(2. * np.pi * np.arange(0, duration - 0.1, dt) / period)  # other functions or random values can be used as well
+
+# We set the stimulus in x and y variables, and run the simulation
+model.params["ext_exc_current"] = zero_input
+model.params["ext_inh_current"] = zero_input
+model.params["ext_exc_rate"] = input * 1e-3  # rate inputs need to be converted to kHz
+model.params["ext_inh_rate"] = zero_input
+
+model.run()
+
+# Define the result of the stimulation as target
+target = getstate(model)
+target_input = np.concatenate((zero_input, zero_input, input, zero_input), axis=0)[np.newaxis, :, :]
+
+# Remove stimuli and re-run the simulation
+model.params["ext_exc_rate"] = zero_input
+control = np.concatenate((zero_input, zero_input, zero_input, zero_input), axis=0)[np.newaxis, :, :]
+model.run()
+
+# Combine initial value and simulation result into one array
+state = getstate(model)
+
+plot_oc_singlenode(duration, dt, state, target, control, target_input, plot_control_vars=[2, 3])
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
# The control matrix needs to be adjusted for rate inputs
+control_mat = np.zeros((1, len(model.input_vars)))
+control_mat[0, 2] = 1.
+
+model_controlled = oc_aln.OcAln(model, target, print_array=np.arange(0, 501, 25), control_matrix=control_mat, cost_matrix=cost_mat)
+model_controlled.weights["w_p"] = 1.  # default value 1
+model_controlled.weights["w_2"] = 0.  # default value 0
+
+# We run 500 iterations of the optimal control gradient descent algorithm
+model_controlled.optimize(500)
+
+state = model_controlled.get_xs()
+control = model_controlled.control
+
+plot_oc_singlenode(duration, dt, state, target, control * 1e3, target_input, model_controlled.cost_history, plot_control_vars=[2, 3])
+
+
+
+
+
+
+
+
+Compute control for a deterministic system
+Cost in iteration 0: 27.349397232974408
+Cost in iteration 25: 0.0006390076320320428
+Cost in iteration 50: 0.00014311978667798868
+Cost in iteration 75: 8.017957661471726e-05
+Cost in iteration 100: 5.679617359217007e-05
+Cost in iteration 125: 4.306794192661556e-05
+Cost in iteration 150: 3.376433119895472e-05
+Cost in iteration 175: 2.7066420641127278e-05
+Cost in iteration 200: 2.2059610014723193e-05
+Cost in iteration 225: 1.8212160897041168e-05
+Cost in iteration 250: 1.5191277735291038e-05
+Cost in iteration 275: 1.2778303406474285e-05
+Cost in iteration 300: 1.0888696043551817e-05
+Cost in iteration 325: 9.243703911351409e-06
+Cost in iteration 350: 7.899581967191086e-06
+Cost in iteration 375: 6.787562684851147e-06
+Cost in iteration 400: 5.859013881863671e-06
+Cost in iteration 425: 5.077487368901499e-06
+Cost in iteration 450: 4.439379983051779e-06
+Cost in iteration 475: 3.85899283693207e-06
+Cost in iteration 500: 3.3690715490197364e-06
+Final cost : 3.3690715490197364e-06
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
# Do another 100 iterations if you want to.
+# Repeated execution will continue with a further 100 iterations.
+model_controlled.optimize(100)
+state = model_controlled.get_xs()
+control = model_controlled.control
+plot_oc_singlenode(duration, dt, state, target, control * 1e3, target_input, model_controlled.cost_history, plot_control_vars=[2, 3])
+
+
+
+
+
+
+
+
+Compute control for a deterministic system
+Cost in iteration 0: 3.3690715490197364e-06
+Cost in iteration 25: 2.9515384676759174e-06
+Cost in iteration 50: 2.593417209868494e-06
+Cost in iteration 75: 2.2845622320483142e-06
+Cost in iteration 100: 2.024231674713015e-06
+Final cost : 2.024231674713015e-06
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Network case
+
Let us now study a simple 2-node network of model oscillators. We first define the coupling matrix and the distance matrix. We can then initialize the model.
+
+
+
+
+
+
+
cmat = np.array([[0., 0.5], [1., 0.]])  # diagonal elements are zero; connection strength is 1 (0.5) from node 0 to node 1 (from node 1 to node 0)
+dmat = np.array([[0., 0.], [0., 0.]])  # no delay
+
+model = ALNModel(Cmat=cmat, Dmat=dmat)
+model.params.duration = 10000
+model.params.mue_ext_mean = 2.  # up state
+model.params.de = 0.0
+model.params.di = 0.0
+model.run()
+setinitstate(model, getfinalstate(model))
+
+# We define the control input matrix to enable or disable certain channels and nodes
+control_mat = np.zeros((model.params.N, len(model.input_vars)))
+control_mat[0, 0] = 1.  # only allow inputs in the x-channel of node 0
+
+amplitude = 1.
+model.params["duration"] = duration
+zero_input = ZeroInput().generate_input(duration=duration + dt, dt=dt)
+input = np.copy(zero_input)
+input[0, 1:-3] = amplitude * np.sin(2. * np.pi * np.arange(0, duration - 0.3, dt) / period)  # other functions or random values can be used as well
+
+# We set the stimulus in x and y variables, and run the simulation
+input_nw = np.concatenate((np.vstack([control_mat[0, 0] * input, control_mat[0, 1] * input, control_mat[0, 2] * input, control_mat[0, 3] * input])[np.newaxis, :, :],
+                           np.vstack([control_mat[1, 0] * input, control_mat[1, 1] * input, control_mat[1, 2] * input, control_mat[1, 3] * input])[np.newaxis, :, :]), axis=0)
+zero_input_nw = np.concatenate((np.vstack([zero_input, zero_input, zero_input, zero_input])[np.newaxis, :, :],
+                                np.vstack([zero_input, zero_input, zero_input, zero_input])[np.newaxis, :, :]), axis=0)
+
+model.params["ext_exc_current"] = input_nw[:, 0, :]
+model.params["ext_inh_current"] = input_nw[:, 1, :]
+model.params["ext_exc_rate"] = input_nw[:, 2, :]
+model.params["ext_inh_rate"] = input_nw[:, 3, :]
+model.run()
+
+# Define the result of the stimulation as target
+target = getstate(model)
+
+# Remove stimuli and re-run the simulation
+model.params["ext_exc_current"] = zero_input_nw[:, 0, :]
+model.params["ext_inh_current"] = zero_input_nw[:, 1, :]
+model.params["ext_exc_rate"] = zero_input_nw[:, 2, :]
+model.params["ext_inh_rate"] = zero_input_nw[:, 3, :]
+model.run()
+
+# Combine initial value and simulation result into one array
+state = getstate(model)
+plot_oc_network(model.params.N, duration, dt, state, target, zero_input_nw, input_nw)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
# We define the precision matrix to specify in which nodes and channels we measure deviations from the target
+cost_mat = np.zeros((model.params.N, len(model.output_vars)))
+cost_mat[1, 0] = 1.  # only measure the excitatory rate in node 1
+
+# We set the external stimulation to zero. This is the "initial guess" for the OC algorithm
+model.params["ext_exc_current"] = zero_input_nw[:, 0, :]
+model.params["ext_inh_current"] = zero_input_nw[:, 1, :]
+model.params["ext_exc_rate"] = zero_input_nw[:, 2, :]
+model.params["ext_inh_rate"] = zero_input_nw[:, 3, :]
+
+# We load the optimal control class
+# print_array (optional parameter) defines for which iterations intermediate results will be printed
+# Parameters will be taken from the input model
+model_controlled = oc_aln.OcAln(model, target, print_array=np.arange(0, 501, 25), control_matrix=control_mat, cost_matrix=cost_mat)
+
+# We run 500 iterations of the optimal control gradient descent algorithm
+model_controlled.optimize(500)
+
+state = model_controlled.get_xs()
+control = model_controlled.control
+
+plot_oc_network(model.params.N, duration, dt, state, target, control, input_nw, model_controlled.cost_history, model_controlled.step_sizes_history)
+
+
+
+
+
+
+
+
+Compute control for a deterministic system
+Cost in iteration 0: 0.05681899553888795
+Cost in iteration 25: 0.009049511507864006
+Cost in iteration 50: 0.00385727901608276
+Cost in iteration 75: 0.0018622667677526768
+Cost in iteration 100: 0.000987085765866294
+Cost in iteration 125: 0.000572356512723035
+Cost in iteration 150: 0.0003547474327963845
+Cost in iteration 175: 0.0002363751625995732
+Cost in iteration 200: 0.0001619919185800181
+Cost in iteration 225: 0.00011952382655835105
+Cost in iteration 250: 9.020890267478555e-05
+Cost in iteration 275: 7.169979753138072e-05
+Cost in iteration 300: 5.8948947006216384e-05
+Cost in iteration 325: 4.953649496402098e-05
+Cost in iteration 350: 4.2578616654798227e-05
+Cost in iteration 375: 3.721358584763165e-05
+Cost in iteration 400: 3.294916084298363e-05
+Cost in iteration 425: 2.9490826042506942e-05
+Cost in iteration 450: 2.6637122691294857e-05
+Cost in iteration 475: 2.418022517349344e-05
+Cost in iteration 500: 2.213935579529806e-05
+Final cost : 2.213935579529806e-05
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
# Do another 100 iterations if you want to.
+# Repeated execution will continue with a further 100 iterations.
+model_controlled.zero_step_encountered = False
+model_controlled.optimize(100)
+
+state = model_controlled.get_xs()
+control = model_controlled.control
+plot_oc_network(model.params.N, duration, dt, state, target, control, input_nw, model_controlled.cost_history, model_controlled.step_sizes_history)
+
+
+
+
+
+
+
+
+Compute control for a deterministic system
+Cost in iteration 0: 2.213935579529806e-05
+Cost in iteration 25: 2.039986650084248e-05
+Cost in iteration 50: 1.890816061870718e-05
+Cost in iteration 75: 1.7543052398445186e-05
+Cost in iteration 100: 1.6372947909519095e-05
+Cost in iteration 125: 1.535146855935076e-05
+Cost in iteration 150: 1.4407226990366437e-05
+Cost in iteration 175: 1.3578403645605011e-05
+Cost in iteration 200: 1.2839061879178726e-05
+Cost in iteration 225: 1.215663786521688e-05
+Cost in iteration 250: 1.1540904218753432e-05
+Cost in iteration 275: 1.098339286406832e-05
+Cost in iteration 300: 1.0476920392110899e-05
+Cost in iteration 325: 1.001955972944213e-05
+Cost in iteration 350: 9.57055264939235e-06
+Cost in iteration 375: 9.17392006953542e-06
+Cost in iteration 400: 8.809334792766664e-06
+Cost in iteration 425: 8.475824235515095e-06
+Cost in iteration 450: 8.147435560163446e-06
+Cost in iteration 475: 7.852707565165967e-06
+Cost in iteration 500: 7.579247993018956e-06
+Final cost : 7.579247993018956e-06
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Delayed network of neural populations
+
We now consider a network topology with delayed signalling between the two nodes.
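Under this convention, a delay-matrix entry is divided by the signal speed params.signalV and discretized by the integration step. A sketch of that conversion (the signalV value of 20 and the unit handling are assumptions for illustration; check model.params.signalV for the actual default):

```python
import numpy as np

dmat = np.array([[0., 0.], [18., 0.]])  # distance matrix as defined in the cell below
signalV = 20.0  # assumed signal speed; check model.params.signalV for the real value
dt = 0.1        # integration time step (assumed, in ms)

# Delay from node 0 to node 1, then discretized into integration steps
delay_ms = dmat / signalV
delay_steps = np.around(delay_ms / dt).astype(int)
print(delay_ms[1, 0], delay_steps[1, 0])  # 0.9 ms -> 9 time steps
```

The maximum of these step counts is what getMaxDelay() reflects, which is why the initial state above has to cover the last maxDelay + 1 time points.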
+
+
+
+
+
+
+
cmat = np.array([[0., 0.], [1., 0.]])  # diagonal elements are zero; connection strength is 1 from node 0 to node 1
+dmat = np.array([[0., 0.], [18., 0.]])  # distance from node 0 to node 1; the delay is computed by dividing by the signal speed params.signalV
+
+model = ALNModel(Cmat=cmat, Dmat=dmat)
+
+model.params.mue_ext_mean = 2.  # up state
+model.run()
+setinitstate(model, getfinalstate(model))
+
+duration = 6.
+model.params.duration = duration
+model.run()
+
+# We define the control input matrix to enable or disable certain channels and nodes
+control_mat = np.zeros((model.params.N, len(model.state_vars)))
+control_mat[0, 0] = 1.  # only allow inputs in the E-channel of node 0
+
+zero_input = ZeroInput().generate_input(duration=duration + dt, dt=dt)
+input = zero_input.copy()
+input[0, 10] = 10.
+input[0, 20] = 10.
+input[0, 30] = 10.  # three pulses as control input
+
+input_nw = np.concatenate((np.vstack([control_mat[0, 0] * input, control_mat[0, 1] * input, control_mat[0, 2] * input, control_mat[0, 3] * input])[np.newaxis, :, :],
+                           np.vstack([control_mat[1, 0] * input, control_mat[1, 1] * input, control_mat[1, 2] * input, control_mat[1, 3] * input])[np.newaxis, :, :]), axis=0)
+zero_input_nw = np.concatenate((np.vstack([zero_input, zero_input, zero_input, zero_input])[np.newaxis, :, :],
+                                np.vstack([zero_input, zero_input, zero_input, zero_input])[np.newaxis, :, :]), axis=0)
+
+model.params["ext_exc_current"] = input_nw[:, 0, :]
+model.params["ext_inh_current"] = input_nw[:, 1, :]
+model.params["ext_exc_rate"] = input_nw[:, 2, :]
+model.params["ext_inh_rate"] = input_nw[:, 3, :]
+model.run()
+
+# Define the result of the stimulation as target
+target = getstate(model)
+
+# Remove stimuli and re-run the simulation
+model.params["ext_exc_current"] = zero_input_nw[:, 0, :]
+model.params["ext_inh_current"] = zero_input_nw[:, 1, :]
+model.params["ext_exc_rate"] = zero_input_nw[:, 2, :]
+model.params["ext_inh_rate"] = zero_input_nw[:, 3, :]
+model.run()
+
+# Combine initial value and simulation result into one array
+state = getstate(model)
+plot_oc_network(model.params.N, duration, dt, state, target, zero_input_nw, input_nw)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
# We load the optimal control class
+# print_array (optional parameter) defines for which iterations intermediate results will be printed
+# Parameters will be taken from the input model
+model.params["ext_exc_current"] = zero_input_nw[:, 0, :]
+model.params["ext_inh_current"] = zero_input_nw[:, 1, :]
+model.params["ext_exc_rate"] = zero_input_nw[:, 2, :]
+model.params["ext_inh_rate"] = zero_input_nw[:, 3, :]
+model_controlled = oc_aln.OcAln(model, target, print_array=np.arange(0, 501, 25), control_matrix=control_mat, cost_matrix=cost_mat)
+
+# We run 500 iterations of the optimal control gradient descent algorithm
+model_controlled.optimize(500)
+
+state = model_controlled.get_xs()
+control = model_controlled.control
+plot_oc_network(model.params.N, duration, dt, state, target, control, input_nw, model_controlled.cost_history, model_controlled.step_sizes_history)
+
+
+
+
+
+
+
+
+Compute control for a deterministic system
+Cost in iteration 0: 0.3591626682338002
+Cost in iteration 25: 0.0009615249415297563
+Cost in iteration 50: 0.0007333032937119198
+Cost in iteration 75: 0.0006259951827765307
+Cost in iteration 100: 0.0005505407696329882
+Cost in iteration 125: 0.0004885380123600698
+Cost in iteration 150: 0.00043735661984762556
+Cost in iteration 175: 0.00039467203255346946
+Cost in iteration 200: 0.00035575090435742684
+Cost in iteration 225: 0.00032290389213762856
+Cost in iteration 250: 0.0002955564149879958
+Cost in iteration 275: 0.0002706822302509814
+Cost in iteration 300: 0.0002481078663686744
+Cost in iteration 325: 0.0002287228008388444
+Cost in iteration 350: 0.00021138912691190224
+Cost in iteration 375: 0.00019614824660540533
+Cost in iteration 400: 0.00018255547996069997
+Cost in iteration 425: 0.00017091020493998155
+Cost in iteration 450: 0.00016022332136043902
+Cost in iteration 475: 0.0001503441843619978
+Cost in iteration 500: 0.00014206923879279553
+Final cost : 0.00014206923879279553
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
# perform another 100 iterations to improve the result
+# repeat execution to add another 100 iterations
+# convergence to the input stimulus is relatively slow for the WC model
+model_controlled.optimize(100)
+state = model_controlled.get_xs()
+control = model_controlled.control
+plot_oc_network(model.params.N, duration, dt, state, target, control, input_nw, model_controlled.cost_history, model_controlled.step_sizes_history)
+
+
+
+
+
+
+
+
+Compute control for a deterministic system
+Cost in iteration 0: 0.00014206923879279553
+Cost in iteration 25: 0.0001344899989412419
+Cost in iteration 50: 0.00012771190226165116
+Cost in iteration 75: 0.00012170773950612534
+Cost in iteration 100: 0.0001161846252066297
+Final cost : 0.0001161846252066297
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/exploration/boxsearch/index.html b/exploration/boxsearch/index.html
index e96ae1d..a28a8f8 100644
--- a/exploration/boxsearch/index.html
+++ b/exploration/boxsearch/index.html
@@ -9,7 +9,7 @@
-
+
@@ -708,6 +708,86 @@
+
+
+
+
+
+
Example 0.6 - Minimal example of how to implement your own model in neurolib
Example 1.2 - Parameter exploration of a brain network and fitting to BOLD data
Example 2.0 - A simple example of the evolutionary optimization framework
+
Example 5.2 - Example of optimal control of the noise-free Wilson-Cowan model
A basic overview of the functionality of neurolib is also given in the following.
Single node
@@ -1580,6 +1679,26 @@
Evolutionary optimization
+
Optimal control
+
The optimal control module enables you to compute efficient stimulation for your neural model. If you know what your output should look like, this module computes the optimal input. Detailed example notebooks can be found in the examples folder (examples 5.1, 5.2, 5.3, 5.4). In optimal control computations, you trade precision with respect to a target against control strength. You can determine how much each contribution affects the result by setting the weights accordingly.
+
To compute an optimal control signal, you need to create a model (e.g., an FHN model) and define a target state (e.g., a sine curve with period 2).
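For the target itself, a minimal numpy sketch can help. The (nodes, variables, time) shape and the (x, y) variable order follow the single-node FHN examples above; the concrete dt, duration, and period values are illustrative assumptions.

```python
import numpy as np

# Illustrative values; in practice dt comes from model.params["dt"]
dt, duration, period = 0.1, 10.0, 2.0
t = np.arange(0, duration + dt, dt)

# Target state: x follows a sine curve with period 2, y stays at zero
target = np.stack([
    np.sin(2.0 * np.pi * t / period),  # x variable
    np.zeros_like(t),                  # y variable
])[np.newaxis, :, :]                   # prepend the node dimension
```

An array of this shape, together with an FHNModel instance, is what the OcFhn constructor consumes.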
+
+You can then create a controlled model and run the iterative optimization to find the most efficient control input. The optimal control and the controlled model activity can be taken from the controlled model.
+
model_controlled = oc_fhn.OcFhn(model, target)
+model_controlled.optimize(500)  # run 500 iterations
+optimal_control = model_controlled.control
+optimal_state = model_controlled.get_xs()
+
+
For a comprehensive study on optimal control of the Wilson-Cowan model based on the neurolib optimal control module, see Salfenmoser, L. & Obermayer, K. Optimal control of a Wilson–Cowan model of neural population dynamics. Chaos 33, 043135 (2023). https://doi.org/10.1063/5.0144682.
More information
Built With
neurolib is built using other amazing open source projects:
@@ -1606,9 +1725,10 @@
How to cite
Get in touch
Caglar Cakan (cakan@ni.tu-berlin.de)
Department of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Germany
-Bernstein Center for Computational Neuroscience Berlin, Germany
+Bernstein Center for Computational Neuroscience Berlin, Germany
Acknowledgments
This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) with the project number 327654276 (SFB 1315) and the Research Training Group GRK1589/2.
+
The optimal control module was developed by Lena Salfenmoser and Martin Krück supported by the DFG project 163436311 (SFB 910).
neurolib is a simulation and optimization framework for whole-brain modeling. It allows you to implement your own neural mass models which can simulate fMRI BOLD activity. neurolib helps you to analyse your simulations, to load and handle structural and functional brain data, and to use powerful evolutionary algorithms to tune your model's parameters and fit it to empirical data.
You can choose from different neural mass models to simulate the activity of each brain area. The main implementation is a mean-field model of spiking adaptive exponential integrate-and-fire neurons (AdEx) called ALNModel, where each brain area contains two populations of excitatory and inhibitory neurons. An analysis and validation of the ALNModel can be found in our paper.
📚 Please read the gentle introduction to neurolib for an overview of the basic functionality and the science behind whole-brain simulations or read the documentation for getting started.
To browse the source code of neurolib visit our GitHub repository.
📝 Cite the following paper if you use neurolib for your own research:
Cakan, C., Jajcay, N. & Obermayer, K. neurolib: A Simulation Framework for Whole-Brain Neural Mass Modeling. Cogn. Comput. (2021).
The figure below shows a schematic of how a brain network is constructed:
Typically, in whole-brain modeling, diffusion tensor imaging (DTI) is used to infer the structural connectivity (the connection strengths) between different brain areas. In a DTI scan, the direction of the diffusion of molecules is measured across the whole brain. Using tractography, this information can yield the distribution of axonal fibers in the brain that connect distant brain areas, called the connectome. Together with an atlas that divides the brain into distinct areas, a matrix can be computed that encodes how many fibers go from one area to another, the so-called structural connectivity (SC) matrix. This matrix defines the coupling strengths between brain areas and acts as an adjacency matrix of the brain network. The fiber length determines the signal transmission delay between all brain areas. Combining the structural data with a computational model of the neuronal activity of each brain area, we can create a dynamical model of the whole brain.
The resulting whole-brain model consists of interconnected brain areas, with each brain area having their internal neural dynamics. The neural activity can also be used to simulate hemodynamic BOLD activity using the Balloon-Windkessel model, which can be compared to empirical fMRI data. Often, BOLD activity is used to compute correlations of activity between brain areas, the so called resting state functional connectivity, resulting in a matrix with correlations between each brain area. This matrix can then be fitted to empirical fMRI recordings of the resting-state activity of the brain.
Below is an animation of the neuronal activity of a whole-brain model plotted on a brain.
It is recommended to clone or fork the entire repository since it will also include all examples and tests."},{"location":"#project-layout","title":"Project layout","text":"
Example IPython Notebooks on how to use the library can be found in the ./examples/ directory, don't forget to check them out! You can run the examples in your browser using Binder by clicking here or one of the following links:
Example 0.0 - Basic use of the aln model
Example 0.3 - FitzHugh-Nagumo model fhn on a brain network
Example 0.6 - Minimal example of how to implement your own model in neurolib
Example 1.2 - Parameter exploration of a brain network and fitting to BOLD data
Example 2.0 - A simple example of the evolutionary optimization framework
A basic overview of the functionality of neurolib is also given in the following.
A detailed example is available as an IPython Notebook.
To simulate a whole-brain network model, first we need to load a DTI and a resting-state fMRI dataset. neurolib already provides some example data for you:
from neurolib.utils.loadData import Dataset\n\nds = Dataset(\"gw\")\n
The dataset that we just loaded, looks like this:
We initialize a model with the dataset and run it:
model = ALNModel(Cmat = ds.Cmat, Dmat = ds.Dmat)\nmodel.params['duration'] = 5*60*1000 # in ms, simulates for 5 minutes\n\nmodel.run(bold=True)\n
This can take several minutes to compute, since we are simulating 80 brain regions for 5 minutes of real time. Note that we specified bold=True, which simulates the BOLD model in parallel to the neuronal model. The resulting firing rates and BOLD functional connectivity look like this:
The quality of the fit of this simulation can be computed by correlating the simulated functional connectivity matrix above to the empirical resting-state functional connectivity for each subject of the dataset. This gives us an estimate of how well the model reproduces inter-areal BOLD correlations. As a rule of thumb, a value above 0.5 is considered good.
We can compute the quality of the fit of the simulated data using func.fc() which calculates a functional connectivity matrix of N (N = number of brain regions) time series. We use func.matrix_correlation() to compare this matrix to empirical data.
scores = [func.matrix_correlation(func.fc(model.BOLD.BOLD[:, 5:]), fcemp) for fcemp in ds.FCs]\n\nprint(\"Correlation per subject:\", [f\"{s:.2}\" for s in scores])\nprint(f\"Mean FC/FC correlation: {np.mean(scores):.2}\")\n
A detailed example of a single-node exploration is available as an IPython Notebook. For an example of a brain network exploration, see this Notebook.
Whenever you work with a model, it is of great importance to know what kind of dynamics it exhibits given a certain set of parameters. It is often useful to get an overview of the state space of a given model of interest. For example in the case of aln, the dynamics depend strongly on the mean inputs to the excitatory and the inhibitory population. neurolib makes it very easy to quickly explore parameter spaces of a given model:
# create model\nmodel = ALNModel()\n# define the parameter space to explore\nparameters = ParameterSpace({\"mue_ext_mean\": np.linspace(0, 3, 21), # input to E\n \"mui_ext_mean\": np.linspace(0, 3, 21)}) # input to I\n\n# define exploration \nsearch = BoxSearch(model, parameters)\n\nsearch.run() \n
That's it! You can now use the built-in functions to load the simulation results from disk and perform your analysis:
search.loadResults()\n\n# calculate maximum firing rate for each parameter\nfor i in search.dfResults.index:\n search.dfResults.loc[i, 'max_r'] = np.max(search.results[i]['rates_exc'][:, -int(1000/model.params['dt']):])\n
We can plot the results to get something close to a bifurcation diagram!
A detailed example is available as an IPython Notebook.
neurolib also implements evolutionary parameter optimization, which works particularly well with brain networks. In an evolutionary algorithm, each simulation is represented as an individual and the parameters of the simulation, for example coupling strengths or noise level values, are represented as the genes of each individual. An individual is a part of a population. In each generation, individuals are evaluated and ranked according to a fitness criterion. For whole-brain network simulations, this could be the fit of the simulated activity to empirical data. Then, individuals with a high fitness value are selected as parents and mate to create offspring. These offspring undergo random mutations of their genes. After all offspring are evaluated, the best individuals of the population are selected to transition into the next generation. This process goes on for a given number of generations until a stopping criterion is reached, such as a predefined maximum number of generations or the discovery of a large enough population with high fitness values.
An example genealogy tree is shown below. You can see the evolution starting at the top and individuals reproducing generation by generation. The color indicates the fitness.
neurolib makes it very easy to set up your own evolutionary optimization and everything else is handled under the hood. You can choose between two implemented evolutionary algorithms: adaptive is a Gaussian mutation and rank selection algorithm with adaptive step size that ensures convergence (a schematic is shown in the image below). nsga2 is an implementation of the popular multi-objective optimization algorithm by Deb et al. 2002.
Of course, if you like, you can dig deeper, define your own selection, mutation and mating operators. In the following demonstration, we will simply evaluate the fitness of each individual as the distance to the unit circle. After a couple of generations of mating, mutating and selecting, only individuals who are close to the circle should survive:
from neurolib.utils.parameterSpace import ParameterSpace\nfrom neurolib.optimize.evolution import Evolution\n\ndef optimize_me(traj):\n ind = evolution.getIndividualFromTraj(traj)\n\n # let's make a circle\n fitness_result = abs((ind.x**2 + ind.y**2) - 1)\n\n # gather results\n fitness_tuple = (fitness_result ,)\n result_dict = {\"result\" : [fitness_result]}\n\n return fitness_tuple, result_dict\n\n# we define a parameter space and its boundaries\npars = ParameterSpace(['x', 'y'], [[-5.0, 5.0], [-5.0, 5.0]])\n\n# initialize the evolution and go\nevolution = Evolution(optimize_me, pars, weightList = [-1.0], POP_INIT_SIZE= 100, POP_SIZE = 50, NGEN=10)\nevolution.run() \n
This gives us a summary of the last generation and plots a distribution of the individuals (and their parameters). Below is an animation of 10 generations of the evolutionary process. As you can see, after a couple of generations, all remaining individuals lie very close to the unit circle.
neurolib is built using other amazing open source projects:
pypet - Python parameter exploration toolbox
deap - Distributed Evolutionary Algorithms in Python
numpy - The fundamental package for scientific computing with Python
numba - NumPy aware dynamic Python compiler using LLVM
Jupyter - Jupyter Interactive Notebook
"},{"location":"#how-to-cite","title":"How to cite","text":"
Cakan, C., Jajcay, N. & Obermayer, K. neurolib: A Simulation Framework for Whole-Brain Neural Mass Modeling. Cogn. Comput. (2021). https://doi.org/10.1007/s12559-021-09931-9
@article{cakan2021,\nauthor={Cakan, Caglar and Jajcay, Nikola and Obermayer, Klaus},\ntitle={neurolib: A Simulation Framework for Whole-Brain Neural Mass Modeling},\njournal={Cognitive Computation},\nyear={2021},\nmonth={Oct},\nissn={1866-9964},\ndoi={10.1007/s12559-021-09931-9},\nurl={https://doi.org/10.1007/s12559-021-09931-9}\n}\n
"},{"location":"#get-in-touch","title":"Get in touch","text":"
Caglar Cakan (cakan@ni.tu-berlin.de) Department of Software Engineering and Theoretical Computer Science, Technische Universit\u00e4t Berlin, Germany Bernstein Center for Computational Neuroscience Berlin, Germany
This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) with the project number 327654276 (SFB 1315) and the Research Training Group GRK1589/2.
"},{"location":"contributing/","title":"Contributing to neurolib","text":"
Thank you for your interest in contributing to neurolib. We welcome bug reports through the issues tab and pull requests for fixes or improvements. You are warmly invited to join our development efforts and make brain network modeling easier and more useful for all researchers.
To propose a change to neurolib's code, you should first clone the repository to your own Github account. Then, create a branch and make some changes. You can then send a pull request to neurolib's own repository and we will review and discuss your proposed changes.
More information on how to make pull requests can be found in the Github help pages.
Please be aware that we have a conservative policy for implementing new functionality. All new features need to be maintained, sometimes forever. We are a small team of developers and can only maintain a limited amount of code. Therefore, ideally, you should also feel responsible for the changes you have proposed and maintain it after it becomes part of neurolib.
We are using the black code formatter with the additional argument --line-length=120. It's called the \"uncompromising formatter\" because it is completely deterministic and you have literally no control over how your code will look. We like that! We recommend using black directly in your IDE, for example in VSCode.
We are using the sphinx format for commenting code. Comments are incredibly important to us since neurolib is supposed to be a library of user-facing code. It's encouraged to read the code, change it and build something on top of it. Our users are coders. Please write as many comments as you can, including a description of each function and method and its arguments but also single-line comments for the code itself.
"},{"location":"contributing/#implementing-a-neural-mass-model","title":"Implementing a neural mass model","text":"
You are very welcome to implement your favorite neural mass model and contribute it to neurolib.
The easiest way of implementing a model is to copy a model directory and adapt the relevant parts of it to your own model. Please have a look at how other models are implemented. We recommend having a look at the HopfModel which is a fairly simple model.
All models inherit from the Model base class which can be found in neurolib/models/model.py.
You can also check out the model implementation example to find out how a model is implemented.
All models need to pass tests. Tests are located in the tests/ directory of the project. A model should be added to the test files tests/test_models.py and tests/test_autochunk.py. However, you should also make sure that your model supports as many neurolib features as possible, such as exploration and optimization. If you did everything right, this should be the case.
As of now, models consist of three parts:
The model.py file which contains the class of the model. Here the model specifies attributes like its name, its state variables, its initial value parameters. Additionally, in the constructor (the __init__() method), the model loads its default parameters.
The loadDefaultParams.py file contains a function (loadDefaultParams()) which has the arguments Cmat for the structural connectivity matrix, Dmat for the delay matrix and seed for the seed of the random number generator. This function returns a dictionary (or dotdict, see neurolib/utils/collections.py) with all parameters inside.
The timeIntegration.py file which contains a timeIntegration() function which has the argument params coming from the previous step. Here, we need to prepare the numerical integration. We load all relevant parameters from the params dictionary and pass them to the main integration loop. The integration loop is written such that it can be accelerated by numba (numba's page) which speeds up the integration by a factor of around 1000.
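The three-part layout described above can be sketched as follows. This is a hedged skeleton only: the parameter values, the class body, and the integration stub are illustrative placeholders, not neurolib's actual implementation (the real model class inherits from the Model base class).

```python
# --- loadDefaultParams.py (skeleton) ---
def loadDefaultParams(Cmat=None, Dmat=None, seed=None):
    """Return all model parameters as a dictionary (neurolib uses a dotdict)."""
    params = {}
    params["dt"] = 0.1         # integration time step [ms] (illustrative)
    params["duration"] = 2000  # simulation duration [ms] (illustrative)
    params["Cmat"] = Cmat      # structural connectivity matrix
    params["Dmat"] = Dmat      # delay matrix
    params["seed"] = seed      # seed of the random number generator
    return params

# --- timeIntegration.py (skeleton) ---
def timeIntegration(params):
    """Unpack params and run the main integration loop (numba-accelerated in neurolib)."""
    dt, duration = params["dt"], params["duration"]
    n_steps = int(round(duration / dt))
    # ... the model's right-hand side would be integrated here ...
    return n_steps

# --- model.py (skeleton) ---
class MyModel:
    name = "my_model"   # in neurolib, this class inherits from Model
    state_vars = ["x"]

    def __init__(self, Cmat=None, Dmat=None, seed=None):
        self.params = loadDefaultParams(Cmat=Cmat, Dmat=Dmat, seed=seed)
```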
We very much welcome example contributions since they help new users to learn how to make use of neurolib. They can include basic usage examples or tutorials of neurolib's features, or a demonstration of how to solve a specific scientific task using neural mass models or whole-brain networks.
Examples are provided as Jupyter Notebooks in the /examples/ directory of the project repository.
Notebooks should have a brief description of what they are trying to accomplish at the beginning.
It is recommended to change the working directory to the root directory at the very beginning of the notebook (os.chdir('..')).
Notebooks should be structured with different subheadings (Markdown style). Please also describe in words what you are doing in code.
We have a few small datasets already in neurolib so everyone can start simulating right away. If you'd like to contribute more data to the project, please feel invited to do so. We're looking for more structural connectivity matrices and fiber length matrices in the MATLAB matrix .mat format (which can be loaded by scipy.loadmat). We also appreciate BOLD data, EEG data, or MEG data. Other modalities could be useful as well. Please be aware that the data has to be in a parcellated form, i.e., the brain areas need to be organized according to an atlas like the AAL2 atlas (or others).
# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-1] == \"examples\":\n os.chdir('..')\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2 \n
try:\n import matplotlib.pyplot as plt\nexcept ImportError:\n import sys\n !{sys.executable} -m pip install matplotlib\n import matplotlib.pyplot as plt\n\nimport numpy as np\nimport scipy\n\n# Let's import the aln model\nfrom neurolib.models.aln import ALNModel\n\n# Some useful functions are provided here\nimport neurolib.utils.functions as func\n\n# a nice color map\nplt.rcParams['image.cmap'] = 'plasma'\n
# Create the model\nmodel = ALNModel()\n\n# Each model comes with a set of default parameters which are a dictionary. \n# Let's change the parameter that controls the duration of a simulation to 10s.\nmodel.params['duration'] = 10.0 * 1000 \n\n# For convenience, we could also use:\nmodel.params.duration = 10.0 * 1000\n\n# In the aln model an Ornstein-Uhlenbeck process is simulated in parallel\n# as the source of input noise fluctuations. Here we can set the variance\n# of the process. \n# For more info: https://en.wikipedia.org/wiki/Ornstein%E2%80%93Uhlenbeck_process \n# Let's add some noise.\nmodel.params['sigma_ou'] = 0.1\n\n# Finally, we run the model\nmodel.run()\n
Accessing the outputs is straightforward. Every model's outputs are stored in the model.outputs attribute. According to the specific name of each of the model's outputs, they can also be accessed as a key of the Model object, i.e. model['rates_exc'].
# Outputs are also available as an xr DataArray\nxr = model.xr()\nprint(xr.dims)\n# outputs can also be accessed via attributes in dot.notation\nprint(\"rates_exc\", model.rates_exc)\n
Bifurcation diagrams can give us an overview of how different parameters of the model affect its dynamics. The simplest method for drawing a bifurcation diagram is to simply change relevant parameters step by step and record the model's behavior in response to these changes. In this example, we want to see how the model's dynamics changes with respect to the external input currents to the excitatory population. These input currents could be due to couplings with other nodes in a brain network or we could model other factors like external electrical stimulation.
Below, you can see a schematic of the aln model. As you can see, a single node consists of one excitatory (red) and one inhibitory population (blue). The parameter that controls the mean input to the excitatory population is \\(\\mu_{E}\\) or model.params[\"mue_ext_mean\"] .
Let's first decrease the duration of a single run so we can scan the parameter space a bit faster and let's also disable the noisy input.
We draw a one-dimensional bifurcation diagram, so it is enough to loop through different values of mue_ext_mean and record the minimum and maximum of the rate for each parameter.
max_rate_e = []\nmin_rate_e = []\n# these are the different input values that we want to scan\nmue_inputs = np.linspace(0, 2, 50)\nfor mue in mue_inputs:\n # Note: this has to be a vector since it is input for all nodes\n # (but we have only one node in this example)\n model.params['mue_ext_mean'] = mue\n model.run()\n # we add the maximum and the minimum of the last second of the \n # simulation to a list\n max_rate_e.append(np.max(model.output[0, -int(1000/model.params['dt']):]))\n min_rate_e.append(np.min(model.output[0, -int(1000/model.params['dt']):]))\n
Let's plot the results!
plt.plot(mue_inputs, max_rate_e, c='k', lw = 2)\nplt.plot(mue_inputs, min_rate_e, c='k', lw = 2)\nplt.title(\"Bifurcation diagram of the aln model\")\nplt.xlabel(\"Input to excitatory population\")\nplt.ylabel(\"Min / max firing rate\")\n
\nText(0, 0.5, 'Min / max firing rate')\n
neurolib comes with some example datasets for exploring its functionality. Please be aware that these datasets are not tested and should not be used for your research, only for experimentation with the software.
A dataset for whole-brain modeling can consist of the following parts:
A structural connectivity matrix capturing the synaptic connection strengths between brain areas, often derived from DTI tractography of the whole brain. The connectome is then typically parcellated in a preferred atlas (for example the AAL2 atlas) and the number of axonal fibers connecting each brain area with every other area is counted. This number serves as an indication of the synaptic coupling strengths between the areas of the brain.
A delay matrix which can be calculated from the average length of the axonal fibers connecting each brain area with another.
A set of functional data that can act as a target for model optimization. Resting-state fMRI offers an easy and fairly unbiased way for calibrating whole-brain models. EEG data could be used as well.
We can load a Dataset by passing the name of it in the constructor.
from neurolib.utils.loadData import Dataset\nds = Dataset(\"gw\")\n
We now create the aln model with a structural connectivity matrix and a delay matrix. In order to achieve a good fit of the BOLD activity to the empirical data, the model has to run for quite a while. As a rule of thumb, a simulation of resting-state BOLD activity should not be shorter than 3 minutes and preferably longer than 5 minutes real time. If the empirical recordings are for example 10 minutes long, ideally, a simulation of 10 minutes would be used to compare the output of the model to the resting state recording.
model = ALNModel(Cmat = ds.Cmat, Dmat = ds.Dmat)\n\nmodel.params['duration'] = 0.2*60*1000 \n# Info: value 0.2*60*1000 is low for testing\n# use 5*60*1000 for real simulation\n
After some optimization to the resting-state fMRI data of the dataset, we found a set of parameters that creates interesting whole-brain dynamics. We set the mean input of the excitatory and the inhibitory population to be close to the E-I limit cycle.
model.params['mue_ext_mean'] = 1.57\nmodel.params['mui_ext_mean'] = 1.6\n# We set an appropriate level of noise\nmodel.params['sigma_ou'] = 0.09\n# And turn on adaptation with a low value of spike-triggered adaptation currents.\nmodel.params['b'] = 5.0\n
Let's have a look at what the data looks like. We can access the data of each model by calling its internal attributes. Here, we plot the structural connectivity matrix by calling model.params['Cmat'] and the fiber length matrix by calling model.params['lengthMat']. Of course, we can also access the dataset using the Dataset object itself. For example, the functional connectivity matrices of the BOLD timeseries in the dataset are given as a list with ds.FCs.
We run the model with bold simulation by using bold=True. This simulates the Balloon-Windkessel BOLD model in parallel to the neural population model in order to estimate the blood oxygen levels of the underlying neural activity. The output of the bold model can be used to compare the simulated data to empirical fMRI data (resting-state fMRI for example).
To save (a lot of) RAM, we can run the simulation in chunkwise mode. In this mode, the model will be simulated for a length of chunksize steps (not time in ms, but actual integration steps!), and the output of that chunk will be used to automatically reinitiate the model with the appropriate initial conditions. This allows for a serial continuation of the model without having to store all the data in memory and is particularly useful for very long and many parallel simulations.
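As a rough sketch of the bookkeeping involved (the chunksize value is an arbitrary assumption, and note again that it counts integration steps, not milliseconds):

```python
# Chunkwise simulation bookkeeping (illustrative values)
dt = 0.1                  # integration step [ms]
duration = 5 * 60 * 1000  # 5 minutes [ms]
chunksize = 100_000       # integration steps per chunk (assumed value)

total_steps = int(round(duration / dt))
n_chunks = -(-total_steps // chunksize)  # ceiling division

# With a model set up as above, the chunkwise run would look like:
# model.run(chunkwise=True, chunksize=chunksize, bold=True)
```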
For convenience, they can also be accessed directly as attributes of the model named after the output, like model.rates_exc. The outputs are also available as xr DataArrays as model.xr().
Since we used bold=True to simulate BOLD, we can also access model.BOLD.BOLD for the actual BOLD activity, and model.BOLD.t for the time steps of the BOLD simulation (which are downsampled to 0.5 Hz by default).
# Plot functional connectivity and BOLD timeseries (z-scored)\nfig, axs = plt.subplots(1, 2, figsize=(6, 2), dpi=75, gridspec_kw={'width_ratios' : [1, 2]})\naxs[0].imshow(func.fc(model.BOLD.BOLD[:, 5:]))\naxs[1].imshow(scipy.stats.mstats.zscore(model.BOLD.BOLD[:, model.BOLD.t_BOLD>10000], axis=1), aspect='auto', extent=[model.BOLD.t_BOLD[model.BOLD.t_BOLD>10000][0], model.BOLD.t_BOLD[-1], 0, model.params['N']]);\n\naxs[0].set_title(\"FC\")\naxs[0].set_xlabel(\"Node\")\naxs[0].set_ylabel(\"Node\")\naxs[1].set_xlabel(\"t [ms]\")\n\n# the results of the model are also accessible through an xarray DataArray\nfig, axs = plt.subplots(1, 1, figsize=(6, 2), dpi=75)\nplt.plot(model.xr().time, model.xr().loc['rates_exc'].T);\n
scores = [func.matrix_correlation(func.fc(model.BOLD.BOLD[:, 5:]), fcemp) for fcemp in ds.FCs]\n\nprint(\"Correlation per subject:\", [f\"{s:.2}\" for s in scores])\nprint(f\"Mean FC/FC correlation: {np.mean(scores):.2}\")\n
"},{"location":"examples/example-0-aln-minimal/#the-neural-mass-model","title":"The neural mass model","text":"
In this example, we will learn about the basics of neurolib. We will create a two-population mean-field model of exponential integrate-and-fire neurons called the aln model. We will learn how to create a Model, set some parameters and run a simulation. We will also see how we can easily access the output of each simulation.
"},{"location":"examples/example-0-aln-minimal/#aln-the-adaptive-linear-nonlinear-cascade-model","title":"aln - the adaptive linear-nonlinear cascade model","text":"
The adaptive linear-nonlinear (aln) cascade model is a low-dimensional population model of spiking neural networks. Mathematically, it is a dynamical system of non-linear ODEs. The dynamical variables of the system simulated in the aln model describe the average firing rate and other macroscopic variables of a randomly connected, delay-coupled network of excitatory and inhibitory adaptive exponential integrate-and-fire neurons (AdEx) with non-linear synaptic currents.
Ultimately, the model is a result of various steps of model reduction starting from the Fokker-Planck equation of the AdEx neuron subject to white noise input at many steps of input means \\(\\mu\\) and variances \\(\\sigma\\). The resulting mean firing rates and mean membrane potentials are then stored in a lookup table and serve as the nonlinear firing rate transfer function, \\(r = \\Phi(\\mu, \\sigma)\\).
"},{"location":"examples/example-0-aln-minimal/#basic-use","title":"Basic use","text":""},{"location":"examples/example-0-aln-minimal/#simulating-a-single-aln-node","title":"Simulating a single aln node","text":"
To create a single node, we simply instantiate the model without any arguments.
"},{"location":"examples/example-0-aln-minimal/#accessing-the-outputs","title":"Accessing the outputs","text":""},{"location":"examples/example-0-aln-minimal/#bifurcation-diagram","title":"Bifurcation diagram","text":""},{"location":"examples/example-0-aln-minimal/#whole-brain-model","title":"Whole-brain model","text":""},{"location":"examples/example-0-aln-minimal/#run-model","title":"Run model","text":""},{"location":"examples/example-0-aln-minimal/#results","title":"Results","text":"
The outputs of the model can be accessed using the attribute model.outputs
"},{"location":"examples/example-0-aln-minimal/#plot-simulated-activity","title":"Plot simulated activity","text":""},{"location":"examples/example-0-aln-minimal/#correlation-of-simulated-bold-to-empirical-data","title":"Correlation of simulated BOLD to empirical data","text":"
We can compute the element-wise Pearson correlation of the functional connectivity matrices of the simulated data to the empirical data to estimate how well the model captures the inter-areal BOLD correlations found in empirical resting-state recordings.
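As a sketch of what this element-wise comparison does (an assumption about the internals of func.matrix_correlation, not its exact code): the upper triangles of both FC matrices, excluding the diagonal, are flattened and their Pearson correlation is computed.

```python
import numpy as np

def matrix_correlation(fc_sim, fc_emp):
    # Pearson correlation of the upper triangles (diagonal excluded)
    # of two functional connectivity matrices
    iu = np.triu_indices_from(fc_sim, k=1)
    return np.corrcoef(fc_sim[iu], fc_emp[iu])[0, 1]

# toy example: FC of random "simulated" rates vs. a noisy copy playing the "empirical" FC
rng = np.random.default_rng(42)
rates = rng.standard_normal((10, 500))           # 10 nodes x 500 samples
fc_sim = np.corrcoef(rates)
fc_emp = fc_sim + 0.02 * rng.standard_normal(fc_sim.shape)
score = matrix_correlation(fc_sim, fc_emp)
```

The diagonal is excluded because self-correlations are 1 by construction and would inflate the score.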
# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-1] == \"examples\":\n os.chdir('..')\n\n%load_ext autoreload\n%autoreload 2\n
try:\n import matplotlib.pyplot as plt\nexcept ImportError:\n import sys\n !{sys.executable} -m pip install matplotlib\n import matplotlib.pyplot as plt\n\nimport numpy as np\n\n# Let's import the Hopf model\nfrom neurolib.models.hopf import HopfModel\n\n# Some useful functions are provided here\nimport neurolib.utils.functions as func\n\n# a nice color map\nplt.rcParams['image.cmap'] = 'plasma'\n
model = HopfModel()\nmodel.params['duration'] = 1.0*1000\nmodel.params['sigma_ou'] = 0.03\n\nmodel.run()\n
plt.plot(model.t, model.x.T, c='k', lw = 2)\n# alternatively plot the results in the xarray:\n# plt.plot(hopfModel.xr[0, 0].time, hopfModel.xr[0, 0].values)\nplt.xlabel(\"t [ms]\")\nplt.ylabel(\"Activity\")\n
\nText(0, 0.5, 'Activity')\n
model = HopfModel()\nmodel.params['duration'] = 2.0*1000\n
max_x = []\nmin_x = []\n# these are the different input values that we want to scan\na_s = np.linspace(-2, 2, 50)\nfor a in a_s:\n model.params['a'] = a\n model.run()\n # we add the maximum and the minimum of the last second of the \n # simulation to a list\n max_x.append(np.max(model.x[0, -int(1000/model.params['dt']):]))\n min_x.append(np.min(model.x[0, -int(1000/model.params['dt']):]))\n
plt.plot(a_s, max_x, c='k', lw = 2)\nplt.plot(a_s, min_x, c='k', lw = 2)\nplt.title(\"Bifurcation diagram of the Hopf oscillator\")\nplt.xlabel(\"a\")\nplt.ylabel(\"Min / max x\")\n
\nText(0, 0.5, 'Min / max x')\n
from neurolib.utils.loadData import Dataset\n\nds = Dataset(\"hcp\")\n
model = HopfModel(Cmat = ds.Cmat, Dmat = ds.Dmat)\n
scores = [func.matrix_correlation(func.fc(model.x[:, -int(5000/model.params['dt']):]), fcemp) for fcemp in ds.FCs]\nprint(\"Correlation per subject:\", [f\"{s:.2}\" for s in scores])\nprint(\"Mean FC/FC correlation: {:.2f}\".format(np.mean(scores)))\n
# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-1] == \"examples\":\n os.chdir('..')\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2 \n
ds = Dataset(\"gw\")\n# simulates the whole-brain model\naln = ALNModel(Cmat = ds.Cmat, Dmat = ds.Dmat)\n# Resting state fits\naln.params['mue_ext_mean'] = 1.57\naln.params['mui_ext_mean'] = 1.6\naln.params['sigma_ou'] = 0.09\naln.params['b'] = 5.0\naln.params['duration'] = 0.2*60*1000 \n# info: value 0.2*60*1000 is low for testing\n# use 5*60*1000 for real simulation\naln.run(chunkwise=True, bold = True)\n
\nWARNING:root:aln: BOLD simulation is supported only with chunkwise integration. Enabling chunkwise integration.\n\n
Now we can cast the modelling result into our Signal class. Signal is a parent base class for any neuro signal. We also provide three child classes for particular signals: RatesSignal (for the firing rate of the populations), VoltageSignal (for the average membrane potential of the populations), and BOLDSignal (for simulated BOLD). They only differ in name, labels and units. Nothing fancy. Of course, you can implement your own class for your particular results very easily:
from neurolib.utils.signal import Signal\n\n\nclass PostSynapticCurrentSignal(Signal):\n name = \"Population post-synaptic current signal\"\n label = \"I_syn\"\n signal_type = \"post_current\"\n unit = \"mA\"\n
and that's it. All useful methods and attributes are directly inherited from the Signal parent.
# Create Signal out of firing rates\nfr = RatesSignal.from_model_output(aln, group=\"\", time_in_ms=True)\n# optional description\nfr.description = \"Output of the ALN model with default SC and fiber lengths\"\n\n# Create Signal out of BOLD simulated timeseries\nbold = BOLDSignal.from_model_output(aln, group=\"BOLD\", time_in_ms=True)\nbold.description = \"Simulated BOLD of the ALN model with default SC and fiber lengths\"\n
\nPopulation firing rate representing rate signal with unit of Hz with user-provided description: `Output of the ALN model with default SC and fiber lengths`. Shape of the signal is (2, 80, 8831) with dimensions ('output', 'space', 'time').\nPopulation blood oxygen level-dependent signal representing bold signal with unit of % with user-provided description: `Simulated BOLD of the ALN model with default SC and fiber lengths`. Shape of the signal is (1, 80, 7) with dimensions ('output', 'space', 'time').\n\n
Signal automatically computes useful attributes like dt, sampling rate, starting and ending times.
\nInherent attributes:\nPopulation firing rate\nq\nHz\nrate\nOutput of the ALN model with default SC and fiber lengths\n\nComputed attributes:\n0.0001\n10000.0\n0.0\n0.883\n(2, 80, 8831)\n\n
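These computed attributes follow directly from the signal's time axis; a minimal stand-alone sketch using the numbers from the printout above:

```python
import numpy as np

# Sketch: how dt, sampling frequency, and start/end times can be derived
# from a signal's time axis (times in seconds, mirroring the printout above)
times = np.arange(0.0, 0.8831, 0.0001)   # dt = 0.1 ms, i.e. 0.0001 s
dt = times[1] - times[0]
sampling_frequency = 1.0 / dt            # 10000.0 Hz
start_time, end_time = times[0], times[-1]
```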
# internal representation of the signal is just xarray's DataArray\nprint(fr.data)\n# xarray is just pandas on steroids, i.e. it supports multi-dimensional arrays, not only 2D\n\n# if you'd need simple numpy array just call .values on signal's data\nprint(type(fr.data.values))\nprint(fr.data.values.shape)\n
Now let's see what Signal can do. A side note: all operations can either be done in place (the signal object itself is modified), or an altered copy is returned with the same attributes as the original one.
# basic operations\nnorm = fr.normalize(std=True, inplace=False)\n# so, are all temporal means close to zero?\nprint(np.allclose(norm.data.mean(dim=\"time\"), 0.))\n# aand, are all temporal std close to 1?\nprint(np.allclose(norm.data.std(dim=\"time\"), 1.0))\nplt.plot(fr[\"rates_exc\"].data.sel({\"space\": 0}), label=\"original\")\nplt.plot(norm[\"rates_exc\"].data.sel({\"space\": 0}), label=\"normalised\")\n\n# you can detrend the signal, all of it, or by segments (as indices within the signal)\n# let's first normalise (so inplace=False), then detrend (we can inplace=True)\ndetrended = fr.normalize(std=True, inplace=False)\ndetrended.detrend(inplace=True)\nplt.plot(detrended[\"rates_exc\"].data.sel({\"space\": 0}), label=\"normalised & detrended\")\ndetrended_segments = fr.detrend(segments=np.arange(20000, 1000), inplace=False)\nplt.legend()\n
\nTrue\nTrue\n\n
\n<matplotlib.legend.Legend at 0x1301a4320>\n
The sampling frequency is too high, so let's resample.
# init again to start fresh\nfr = RatesSignal.from_model_output(aln, group=\"\", time_in_ms=True)\nplt.plot(fr.data.time, fr[\"rates_exc\"].data.sel({\"space\": 0}), label=\"original\")\n\n# first resample\nfr.resample(to_frequency=1000., inplace=True)\n\n# next detrend\nfr.detrend(inplace=True)\nprint(fr.start_time, fr.end_time)\n\n# next pad with 0s for 0.5 seconds in order to suppress edge effect when filtering\npadded = fr.pad(how_much=0.5, in_seconds=True, padding_type=\"constant\", side=\"both\",\n constant_values=0., inplace=False)\nprint(padded.start_time, padded.end_time)\n\n# now filter - by default uses mne, if not installed, falls back to scipy basic IIR filter\npadded.filter(low_freq=8., high_freq=12., inplace=True)\n\n# now cut back the original length\nfiltered = padded.sel([fr.start_time, fr.end_time], inplace=False)\nprint(filtered.start_time, filtered.end_time)\n\nplt.plot(filtered.data.time, filtered[\"rates_exc\"].data.sel({\"space\": 0}), label=r\"filtered $\\alpha$\")\n\n# finally, get phase and amplitude via Hilbert transform\nphase = filtered.hilbert_transform(return_as=\"phase_wrapped\", inplace=False)\nplt.plot(phase.data.time, phase[\"rates_exc\"].data.sel({\"space\": 0}), label=r\"phase $\\alpha$\")\namplitude = filtered.hilbert_transform(return_as=\"amplitude\", inplace=False)\nplt.plot(amplitude.data.time, amplitude[\"rates_exc\"].data.sel({\"space\": 0}), label=r\"amplitude $\\alpha$\")\nplt.legend()\n
\n0.0 0.882\n-0.5 1.382\nSetting up band-pass filter from 8 - 12 Hz\n\nFIR filter parameters\n---------------------\nDesigning a one-pass, zero-phase, non-causal bandpass filter:\n- Windowed time-domain design (firwin) method\n- Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation\n- Lower passband edge: 8.00\n- Lower transition bandwidth: 2.00 Hz (-6 dB cutoff frequency: 7.00 Hz)\n- Upper passband edge: 12.00 Hz\n- Upper transition bandwidth: 3.00 Hz (-6 dB cutoff frequency: 13.50 Hz)\n- Filter length: 1651 samples (1.651 sec)\n\n0.0 0.882\n\n
\n<matplotlib.legend.Legend at 0x1322e6e80>\n
# in case you forget that happened in the processing, you can easily check all steps:\nprint(phase.preprocessing_steps)\nprint(amplitude.preprocessing_steps)\n
\nresample to 1000.0Hz -> detrend -> 0.5s constant both sides padding -> filter: low 8.0Hz - high 12.0Hz -> select x:0.882s -> Hilbert - wrapped phase\nresample to 1000.0Hz -> detrend -> 0.5s constant both sides padding -> filter: low 8.0Hz - high 12.0Hz -> select x:0.882s -> Hilbert - amplitude\n\n
# and you can save your signal for future generations! (saved as netCDF file)\nphase.save(\"phase_from_some_experiment\")\n
# and then load it\nphase_loaded = RatesSignal.from_file(\"phase_from_some_experiment\")\n# compare whether it is the same\nprint(phase == phase_loaded)\n# the attributes are saved/loaded as well\nprint(phase_loaded.name)\nprint(phase_loaded.unit)\nprint(phase_loaded.preprocessing_steps)\n# delete file\nos.remove(\"phase_from_some_experiment.nc\")\n
\nTrue\nPopulation firing rate\nHz\nresample to 1000.0Hz -> detrend -> 0.5s constant both sides padding -> filter: low 8.0Hz - high 12.0Hz -> select x:0.882s -> Hilbert - wrapped phase\n\n
# this will iterate over the whole data and return one 1D temporal slice at a time, each slice is a Signal class\nfor name, ts in fr.iterate(return_as=\"signal\"):\n print(name, type(ts), ts.start_time, ts.end_time)\n\n# this will iterate over the whole data and return one 1D temporal slice at a time, each slice is a DataArray\nfor name, ts in fr.iterate(return_as=\"xr\"):\n print(name, type(ts), ts.shape)\n
# sliding window - let's iterate over temporal windows of 0.5seconds, with 0.1s translation and boxcar window function\nfor window in fr.sliding_window(length=0.5, step=0.1, window_function=\"boxcar\", lengths_in_seconds=True):\n print(type(window), window.shape, window.start_time, window.end_time)\n
# apply 1D function - Signal supports applying a 1D function per temporal slice\n# both are supported: functions that reduce the temporal dimension (e.g. mean, which reduces a timeseries of length N to one number),\n# and functions that preserve shape\nfrom functools import partial\n\n# reduce\nmean = fr.apply(partial(np.mean, axis=-1), inplace=False)\n# mean is now xr.DataArray, not Signal; but the coordinates except time are preserved\nprint(type(mean), mean.shape, mean.coords)\n\n# preserve shape\nabsolute_value = fr.apply(np.abs, inplace=False)\n# still Signal\nprint(absolute_value.shape)\n
\nWARNING:root:Shape changed after operation! Old shape: (2, 80, 883), new shape: (2, 80); Cannot cast to Signal class, returing as `xr.DataArray`\n\n
# basic FC from excitatory rates - using correlation\nfc_exc = fr[\"rates_exc\"].functional_connectivity(fc_function=np.corrcoef)\n# results is DataArray with space coordinates\nprint(type(fc_exc), fc_exc.shape, fc_exc.coords)\nplt.subplot(1,2,1)\nplt.title(\"Correlation FC\")\nplt.imshow(fc_exc.values)\n\n# FC from covariance\nfc_cov_exc = fr[\"rates_exc\"].functional_connectivity(fc_function=np.cov)\nplt.subplot(1,2,2)\nplt.title(\"Covariance FC\")\nplt.imshow(fc_cov_exc.values)\n\n# so fc_function can be any function that can take (nodes x time) array and transform it to (nodes x nodes) connectivity matrix\n
\nProcessing delta...\nSetting up band-pass filter from 2 - 4 Hz\n\nFIR filter parameters\n---------------------\nDesigning a one-pass, zero-phase, non-causal bandpass filter:\n- Windowed time-domain design (firwin) method\n- Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation\n- Lower passband edge: 2.00\n- Lower transition bandwidth: 2.00 Hz (-6 dB cutoff frequency: 1.00 Hz)\n- Upper passband edge: 4.00 Hz\n- Upper transition bandwidth: 2.00 Hz (-6 dB cutoff frequency: 5.00 Hz)\n- Filter length: 1651 samples (1.651 sec)\n\nresample to 1000.0Hz -> detrend -> 0.5s constant both sides padding -> filter: low 2Hz - high 4Hz -> select x:0.882s\nProcessing theta...\nSetting up band-pass filter from 4 - 8 Hz\n\nFIR filter parameters\n---------------------\nDesigning a one-pass, zero-phase, non-causal bandpass filter:\n- Windowed time-domain design (firwin) method\n- Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation\n- Lower passband edge: 4.00\n- Lower transition bandwidth: 2.00 Hz (-6 dB cutoff frequency: 3.00 Hz)\n- Upper passband edge: 8.00 Hz\n- Upper transition bandwidth: 2.00 Hz (-6 dB cutoff frequency: 9.00 Hz)\n- Filter length: 1651 samples (1.651 sec)\n\nresample to 1000.0Hz -> detrend -> 0.5s constant both sides padding -> filter: low 4Hz - high 8Hz -> select x:0.882s\nProcessing alpha...\nSetting up band-pass filter from 8 - 12 Hz\n\nFIR filter parameters\n---------------------\nDesigning a one-pass, zero-phase, non-causal bandpass filter:\n- Windowed time-domain design (firwin) method\n- Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation\n- Lower passband edge: 8.00\n- Lower transition bandwidth: 2.00 Hz (-6 dB cutoff frequency: 7.00 Hz)\n- Upper passband edge: 12.00 Hz\n- Upper transition bandwidth: 3.00 Hz (-6 dB cutoff frequency: 13.50 Hz)\n- Filter length: 1651 samples (1.651 sec)\n\nresample to 1000.0Hz -> detrend -> 0.5s constant both sides padding -> filter: low 8Hz - high 12Hz -> 
select x:0.882s\nProcessing beta...\nSetting up band-pass filter from 12 - 30 Hz\n\nFIR filter parameters\n---------------------\nDesigning a one-pass, zero-phase, non-causal bandpass filter:\n- Windowed time-domain design (firwin) method\n- Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation\n- Lower passband edge: 12.00\n- Lower transition bandwidth: 3.00 Hz (-6 dB cutoff frequency: 10.50 Hz)\n- Upper passband edge: 30.00 Hz\n- Upper transition bandwidth: 7.50 Hz (-6 dB cutoff frequency: 33.75 Hz)\n- Filter length: 1101 samples (1.101 sec)\n\nresample to 1000.0Hz -> detrend -> 0.5s constant both sides padding -> filter: low 12Hz - high 30Hz -> select x:0.882s\nProcessing low_gamma...\nSetting up band-pass filter from 30 - 60 Hz\n\nFIR filter parameters\n---------------------\nDesigning a one-pass, zero-phase, non-causal bandpass filter:\n- Windowed time-domain design (firwin) method\n- Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation\n- Lower passband edge: 30.00\n- Lower transition bandwidth: 7.50 Hz (-6 dB cutoff frequency: 26.25 Hz)\n- Upper passband edge: 60.00 Hz\n- Upper transition bandwidth: 15.00 Hz (-6 dB cutoff frequency: 67.50 Hz)\n- Filter length: 441 samples (0.441 sec)\n\nresample to 1000.0Hz -> detrend -> 0.5s constant both sides padding -> filter: low 30Hz - high 60Hz -> select x:0.882s\nProcessing high_gamma...\nSetting up band-pass filter from 60 - 1.2e+02 Hz\n\nFIR filter parameters\n---------------------\nDesigning a one-pass, zero-phase, non-causal bandpass filter:\n- Windowed time-domain design (firwin) method\n- Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation\n- Lower passband edge: 60.00\n- Lower transition bandwidth: 15.00 Hz (-6 dB cutoff frequency: 52.50 Hz)\n- Upper passband edge: 120.00 Hz\n- Upper transition bandwidth: 30.00 Hz (-6 dB cutoff frequency: 135.00 Hz)\n- Filter length: 221 samples (0.221 sec)\n\nresample to 1000.0Hz -> detrend -> 0.5s constant both 
sides padding -> filter: low 60Hz - high 120Hz -> select x:0.882s\n\n
# time-varying FC\nfor window in fr.sliding_window(length=0.5, step=0.2, window_function=\"boxcar\", lengths_in_seconds=True):\n fc = window[\"rates_exc\"].functional_connectivity(fc_function=np.corrcoef)\n plt.imshow(fc)\n plt.title(f\"FC: {window.start_time}-{window.end_time}s\")\n plt.show()\n
"},{"location":"examples/example-0.2-basic_analysis/#introduction","title":"Introduction","text":""},{"location":"examples/example-0.2-basic_analysis/#run-the-aln-model","title":"Run the ALN model","text":"
Firstly, let us run a network model given the structural connectivity and fiber lengths.
Let's do a more complete example. Say you run the model and want to extract the phase and amplitude of the \\(\\alpha\\) band (i.e. 8-12 Hz) for some phase-amplitude coupling analyses.
Sometimes it is useful to apply or inspect something in a loop. That's why Signal supports both iterating over the space / output variables and applying a 1D function over the temporal dimension.
A lot of modelling effort goes into fitting the experimental functional connectivity with the modelled one. That's why the Signal class supports functional connectivity computation; combined with other methods (like filtering and iterating over temporal windows), we can even compute time-resolved or band-specific FC within a couple of lines.
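For instance, a band-specific FC pipeline can be sketched in plain numpy. Here a crude FFT mask stands in for Signal's proper FIR filtering, and all array sizes and band edges are toy values:

```python
import numpy as np

# Band-specific FC sketch: band-pass each node's timeseries with a hard FFT mask
# (a simplification; Signal uses a proper FIR filter), then correlate across nodes.
rng = np.random.default_rng(0)
fs = 1000.0                          # sampling frequency in Hz
ts = rng.standard_normal((5, 2000))  # 5 nodes, 2 s of toy data
bands = {"delta": (2, 4), "theta": (4, 8), "alpha": (8, 12)}

def bandpass_fft(x, low, high, fs):
    # zero out all frequency bins outside [low, high] and transform back
    freqs = np.fft.rfftfreq(x.shape[-1], d=1.0 / fs)
    spec = np.fft.rfft(x, axis=-1)
    spec[..., (freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spec, n=x.shape[-1], axis=-1)

band_fcs = {name: np.corrcoef(bandpass_fft(ts, lo, hi, fs))
            for name, (lo, hi) in bands.items()}
```

Each entry of band_fcs is a nodes x nodes correlation matrix for one frequency band, analogous to what the Signal-based pipeline in this notebook produces.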
# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-1] == \"examples\":\n os.chdir('..')\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2 \n
try:\n import matplotlib.pyplot as plt\nexcept ImportError:\n import sys\n !{sys.executable} -m pip install matplotlib\n import matplotlib.pyplot as plt\n\nimport numpy as np\n\n# Let's import the fhn model\nfrom neurolib.models.fhn import FHNModel\n\n# Some useful functions are provided here\nimport neurolib.utils.functions as func\n\n# a nice color map\nplt.rcParams['image.cmap'] = 'plasma'\n
model = FHNModel()\nmodel.params['duration'] = 2.0*1000\n
Let's draw a simple one-dimensional bifurcation diagram of this model to orient ourselves in the parameter space
max_x = []\nmin_x = []\n# these are the different input values that we want to scan\nx_inputs = np.linspace(0, 2, 50)\nfor x_ext in x_inputs:\n # Note: this has to be a vector since it is input for all nodes\n # (but we have only one node in this example)\n model.params['x_ext'] = [x_ext]\n model.run()\n # we add the maximum and the minimum of the last second of the \n # simulation to a list\n max_x.append(np.max(model.x[0, -int(1000/model.params['dt']):]))\n min_x.append(np.min(model.x[0, -int(1000/model.params['dt']):]))\n
plt.plot(x_inputs, max_x, c='k', lw = 2)\nplt.plot(x_inputs, min_x, c='k', lw = 2)\nplt.title(\"Bifurcation diagram of the FHN oscillator\")\nplt.xlabel(\"Input to x\")\nplt.ylabel(\"Min / max x\")\n
\nText(0, 0.5, 'Min / max x')\n
In this model, there is a Hopf bifurcation happening at two input values. We can see the oscillatory region at input values from roughly 0.75 to 1.3.
from neurolib.utils.loadData import Dataset\n\nds = Dataset(\"hcp\")\n
model = FHNModel(Cmat = ds.Cmat, Dmat = ds.Dmat)\n
model.params['duration'] = 10 * 1000 \n# add some noise\nmodel.params['sigma_ou'] = .01\n# set the global coupling strenght of the brain network\nmodel.params['K_gl'] = 1.0\n# let's put all nodes close to the limit cycle such that\n# noise can kick them in and out of the oscillation\n# all nodes get the same constant input\nmodel.params['x_ext'] = [0.72] * model.params['N']\n\nmodel.run(chunkwise=True, append_outputs=True)\n
scores = [func.matrix_correlation(func.fc(model.x[:, -int(5000/model.params['dt']):]), fcemp) for fcemp in ds.FCs]\nprint(\"Correlation per subject:\", [f\"{s:.2}\" for s in scores])\nprint(\"Mean FC/FC correlation: {:.2f}\".format(np.mean(scores)))\n
In this notebook, the basic use of the implementation of the FitzHugh-Nagumo (fhn) model is presented. Usually, the fhn model is used to represent a single neuron (for example in Cakan et al. (2014), \"Heterogeneous delays in neural networks\"). This is due to the difference in timescales of the two variables that define the FHN model: the first is often referred to as the \"fast variable\", whereas the second one is the \"slow variable\". This makes it possible to create a model with a very fast spiking mechanism but a slow refractory period.
In our case, we use a somewhat unusual parameterization of the fhn model. Inspired by the paper by Kostova et al. (2004), \"FitzHugh\u2013Nagumo revisited: Types of bifurcations, periodical forcing and stability regions by a Lyapunov functional\", the implementation in neurolib produces slowly oscillating dynamics and has the advantage of incorporating an external input term that causes a Hopf bifurcation. This means that the model roughly approximates the behaviour of the aln model: for low input values there is a low-activity fixed point, for intermediate inputs there is an oscillatory region, and for high input values the system is in a high-activity fixed point. Thus, it offers a simple way of exploring the dynamics of a neural mass model with these properties, such as the aln model.
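To see the fast/slow structure in action, here is a minimal Euler integration of a generic textbook FitzHugh-Nagumo node. This is not neurolib's exact Kostova-style parameterization; all parameters are illustrative:

```python
import numpy as np

# Euler sketch of a textbook FitzHugh-Nagumo node; with this input the
# fixed point is unstable and the system settles onto a limit cycle
dt, steps = 0.05, 10000
eps, a, b, I = 0.08, 0.7, 0.8, 0.8   # toy parameters; I is the external input
x, y = -1.0, -0.5
xs = []
for _ in range(steps):
    dx = x - x**3 / 3.0 - y + I      # fast variable
    dy = eps * (x + a - b * y)       # slow variable
    x, y = x + dt * dx, y + dt * dy
    xs.append(x)
```

Sweeping I in such a loop is exactly how the bifurcation diagram below is produced: one records the minimum and maximum of x after transients.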
We want to start by producing a bifurcation diagram of a single node. With neurolib, this can be done with a couple of lines of code, as seen further below.
# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-1] == \"examples\":\n os.chdir('..')\n
%load_ext autoreload\n%autoreload 2\n
import matplotlib.pyplot as plt\nimport numpy as np\nimport glob\n\nfrom neurolib.models.wc import WCModel\n\nimport neurolib.utils.loadData as ld\nimport neurolib.utils.functions as func\n\n# a nice color map\nplt.rcParams['image.cmap'] = 'plasma'\n
model = WCModel()\nmodel.params['duration'] = 2.0*1000\n
Let's draw a simple one-dimensional bifurcation diagram of this model to orient ourselves in the parameter space
max_exc = []\nmin_exc = []\n# these are the different input values that we want to scan\nexc_inputs = np.linspace(0, 3.5, 50)\nfor exc_ext in exc_inputs:\n # Note: this has to be a vector since it is input for all nodes\n # (but we have only one node in this example)\n model.params['exc_ext'] = exc_ext\n model.run()\n # we add the maximum and the minimum of the last second of the \n # simulation to a list\n max_exc.append(np.max(model.exc[0, -int(1000/model.params['dt']):]))\n min_exc.append(np.min(model.exc[0, -int(1000/model.params['dt']):]))\n
plt.plot(exc_inputs, max_exc, c='k', lw = 2)\nplt.plot(exc_inputs, min_exc, c='k', lw = 2)\nplt.title(\"Bifurcation diagram of the Wilson-Cowan model\")\nplt.xlabel(\"Input to exc\")\nplt.ylabel(\"Min / max exc\")\n
\nText(0,0.5,'Min / max exc')\n
model = WCModel()\nmodel.params['duration'] = 1.0*1000\nmodel.params['sigma_ou'] = 0.01\n\nmodel.run()\n
scores = [func.matrix_correlation(func.fc(model.exc[:, -int(5000/model.params['dt']):]), fcemp) for fcemp in ds.FCs]\nprint(\"Correlation per subject:\", [f\"{s:.2}\" for s in scores])\nprint(\"Mean FC/FC correlation: {:.2f}\".format(np.mean(scores)))\n
In this notebook, the basic use of the implementation of the Wilson-Cowan (wc) model is presented.
In the wc model, the activity of a particular brain region is defined by a coupled system of excitatory (E) and inhibitory (I) neuronal populations, with the mean firing rates of the E and I pools as the dynamic variables, as first described by Wilson and Cowan in 1972 (H.R. Wilson and J.D. Cowan. Excitatory and inhibitory interactions in localized populations of model neurons. Biophys. J., 12:1\u201324 (1972)).
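A minimal Euler sketch of one such coupled E-I node in the classic 1972 form (the coefficients follow the original paper's example values, which may differ from neurolib's defaults):

```python
import numpy as np

# Euler sketch of a single Wilson-Cowan node: two coupled rate equations
# with sigmoidal transfer functions; all coefficients are illustrative
def sigmoid(x, a, theta):
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

dt, steps = 0.05, 4000
tau_e, tau_i = 1.0, 1.0
c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0   # E->E, I->E, E->I, I->I coupling weights
exc_ext = 1.25                            # external drive to the E population
E, I = 0.1, 0.05
for _ in range(steps):
    dE = (-E + (1.0 - E) * sigmoid(c1 * E - c2 * I + exc_ext, 1.3, 4.0)) / tau_e
    dI = (-I + (1.0 - I) * sigmoid(c3 * E - c4 * I, 2.0, 3.7)) / tau_i
    E, I = E + dt * dE, I + dt * dI
```

The (1 - E) and (1 - I) factors keep the rates bounded between 0 and 1, which is the saturation mechanism of the original model.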
# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-1] == \"examples\":\n os.chdir('..')\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2 \n
try:\n import matplotlib.pyplot as plt\nexcept ImportError:\n import sys\n !{sys.executable} -m pip install matplotlib\n import matplotlib.pyplot as plt\n\nimport numpy as np\n\n# Let's import the Kuramoto model\nfrom neurolib.models.kuramoto import KuramotoModel\n\n# Some useful functions are provided here\nimport neurolib.utils.functions as func\n
model = KuramotoModel()\nmodel.params['duration'] = 10\nmodel.run()\n
theta = model['theta'].T\ntheta_capped = np.mod(theta, 2*np.pi) # cap theta to [0, 2*pi]\n\nplt.plot(model.t, theta_capped)\nplt.xlabel(\"Time [ms]\")\nplt.ylabel(\"Theta\")\nplt.yticks(np.arange(0, 2*np.pi+0.1, np.pi/2), [ r\"$0$\", r\"$\pi/2$\", r\"$\pi$\", r\"$3\pi/2$\", r\"$2\pi$\",])# modify y-axis ticks to be in multiples of pi\nplt.show()\n
Here we simulate networks of oscillators: a network of 8 oscillators with a global coupling strength of 0.3, initialized with an all-to-all connectivity matrix. We then simulate the network for 30 milliseconds (assuming dt is in ms) and plot the phase values over time.
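The setup cell for this network is not shown above; a hypothetical reconstruction might look as follows (the parameter names 'k' and 'duration' are assumptions based on the model API used elsewhere in these docs):

```python
import numpy as np

# Hypothetical reconstruction of the network setup described above
# from neurolib.models.kuramoto import KuramotoModel
n_nodes = 8
cmat = np.ones((n_nodes, n_nodes)) - np.eye(n_nodes)   # all-to-all connectivity, no self-loops
dmat = np.zeros((n_nodes, n_nodes))                    # no signal delays
# network_model = KuramotoModel(Cmat=cmat, Dmat=dmat)
# network_model.params['k'] = 0.3        # global coupling strength (assumed name)
# network_model.params['duration'] = 30  # ms
# network_model.run()
```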
theta = network_model['theta'].T\n# cap the phase to be between 0 and 2pi\ntheta_capped = np.mod(theta, 2*np.pi)\n\n# set up the figure\nfig, ax = plt.subplots(1, 1, figsize=(16, 8))\n\nplt.plot(network_model.t, theta_capped)\nplt.xlabel(\"Time [ms]\")\nplt.ylabel(\"Theta\")\nplt.yticks(np.arange(0, 2*np.pi+0.1, np.pi/2), [ r\"$0$\", r\"$\pi/2$\", r\"$\pi$\", r\"$3\pi/2$\", r\"$2\pi$\",])# modify y-axis ticks to be in multiples of pi\nplt.show()\n
We can see that the nodes synchronize after around 25 ms. Synchronization is slow because the connections between the nodes are relatively weak. Now we increase the global coupling to 1 to see if synchronization happens faster.
theta = network_model['theta'].T\n# cap the phase to be between 0 and 2pi\ntheta_capped = np.mod(theta, 2*np.pi)\n\n# set up the figure\nfig, ax = plt.subplots(1, 1, figsize=(16, 8))\n\nplt.plot(network_model.t, theta_capped)\nplt.xlabel(\"Time [ms]\")\nplt.ylabel(\"Theta\")\nplt.yticks(np.arange(0, 2*np.pi+0.1, np.pi/2), [ r\"$0$\", r\"$\pi/2$\", r\"$\pi$\", r\"$3\pi/2$\", r\"$2\pi$\",])# modify y-axis ticks to be in multiples of pi\nplt.show()\n
Now synchronization happens after about 7 ms, which is faster than in the previous simulation.
In this notebook, we will simulate the Kuramoto model, which is defined by the following differential equation: $$ \\frac{d \\theta_i}{dt} = \\omega_i + \\zeta_i + \\frac{K}{N} \\sum_{j=1}^N A_{ij} \\sin(\\theta_j(t - \\tau_{ij}) - \\theta_i(t)) + h_i(t)$$ Here, \\(\\theta_i\\) is the phase of oscillator \\(i\\), \\(\\omega_i\\) is its natural frequency, \\(\\zeta_i\\) is a noise term, \\(K\\) is the global coupling strength, \\(A\\) is the coupling matrix, \\(\\tau_{ij}\\) is the coupling delay between oscillators \\(i\\) and \\(j\\), and \\(h_i(t)\\) is the external input to oscillator \\(i\\).
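For intuition, the delay-free, noise-free special case of this equation can be integrated with a simple Euler scheme; the order parameter r = |mean(exp(i*theta))| then quantifies synchronization (r close to 1 means full phase locking). All numbers here are toy values:

```python
import numpy as np

# Euler integration sketch of the delay-free, noise-free Kuramoto equation
rng = np.random.default_rng(1)
N, K, dt, steps = 8, 1.0, 0.01, 3000
omega = rng.normal(1.0, 0.1, N)          # natural frequencies
A = np.ones((N, N)) - np.eye(N)          # all-to-all coupling
theta = rng.uniform(0, 2 * np.pi, N)     # random initial phases
for _ in range(steps):
    # element [i, j] of the sine term is sin(theta_j - theta_i)
    coupling = (K / N) * (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + dt * (omega + coupling)
# order parameter: r -> 1 as the oscillators synchronize
r = np.abs(np.exp(1j * theta).mean())
```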
The Kuramoto model describes synchronization between oscillators. Nodes in the network are influenced not only by their own natural frequency but also by the other nodes in the network. The strength of this influence is determined by the global coupling and the connectivity matrix, and the degree of synchronization depends on the strength of the coupling. The Kuramoto model is relatively simple, mathematically tractable, and easy to understand. It was first described in 1975 by Yoshiki Kuramoto (Y. Kuramoto. Self-entrainment of a population of coupled non-linear oscillators. In International Symposium on Mathematical Problems in Theoretical Physics, H. Araki, Ed. Berlin, Heidelberg: Springer Berlin Heidelberg, 1975, pp. 420\u2013422).
Here we simulate a single node with no noise. We then cap the phase values to be between 0 and 2*pi and plot them over time.
# change to the root directory of the project\nimport os\n\nif os.getcwd().split(\"/\")[-1] in [\"examples\", \"dev\"]:\n os.chdir(\"..\")\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2\n
try:\n import matplotlib.pyplot as plt\nexcept ImportError:\n import sys\n !{sys.executable} -m pip install matplotlib\n import matplotlib.pyplot as plt\n\n# Some useful functions are provided here\nimport neurolib.utils.functions as func\nimport neurolib.utils.stimulus as stim\nimport numpy as np\nimport scipy\n# Let's import the aln model\nfrom neurolib.models.aln import ALNModel\n
# you can also set stim_start and stim_end - in ms\ninp = stim.StepInput(\n step_size=1.43, start=1200, end=2400, n=2\n).as_array(duration, dt)\nplt.plot(inp.T);\n
# frequency in Hz; dc_bias=True shifts input by its amplitude\ninp = stim.SinusoidalInput(\n amplitude=2.5, frequency=2.0, start=1200, dc_bias=True\n).as_array(duration, dt)\ninp2 = stim.SinusoidalInput(amplitude=2.5, frequency=2.0).as_array(\n duration, dt\n)\nplt.plot(inp.T)\nplt.plot(inp2.T);\n
# frequency in Hz; dc_bias=True shifts input by its amplitude\ninp = stim.SquareInput(\n amplitude=2.5, frequency=2.0, start=1200, dc_bias=True\n).as_array(duration, dt)\ninp2 = stim.SquareInput(amplitude=2.5, frequency=2.0).as_array(\n duration, dt\n)\nplt.plot(inp.T)\nplt.plot(inp2.T);\n
summed = ou + sq + sin\nplt.plot(summed.as_array(duration, dt).T);\n
# same lengths - use &\nconc = ou & sq & sin\nplt.plot(conc.as_array(duration, dt).T);\n
# can also do different length ratios, but for this you need to call ConcatenatedStimulus directly\nconc = stim.ConcatenatedStimulus([ou, sq, sin], length_ratios=[0.5, 2, 5])\nplt.plot(conc.as_array(duration, dt).T);\n
class PoissonNoiseWithExpKernel(stim.Stimulus):\n \"\"\"\n Poisson noise with exponential kernel.\n By subclassing `stim.Stimulus` we have the option to select `start` and `end`.\n \"\"\"\n\n def __init__(\n self, amp, freq, tau_syn, start=None, end=None, n=1, seed=None\n ):\n # save parameters as attributes\n self.freq = freq\n self.amp = amp\n self.tau_syn = tau_syn\n # pass other params to parent class\n super().__init__(\n start=start, end=end, n=n, seed=seed\n )\n\n def generate_input(self, duration, dt):\n # this is a helper function that creates self.times vector\n self._get_times(duration=duration, dt=dt)\n # do the magic here: prepare output vector\n x = np.zeros((self.n, self.times.shape[0]))\n # compute total number of spikes\n total_spikes = int(self.freq * (self.times[-1] - self.times[0]) / 1000.0)\n # randomly put spikes into the output vector\n spike_indices = np.random.choice(\n x.shape[1], (self.n, total_spikes), replace=True\n )\n x[np.arange(x.shape[0])[:, None], spike_indices] = 1.0\n # create exponential kernel\n time_spike_end = -self.tau_syn * np.log(0.001)\n arg_spike_end = np.argmin(np.abs(self.times - time_spike_end))\n spike_kernel = np.exp(-self.times[:arg_spike_end] / self.tau_syn)\n # convolve over dimensions\n x = np.apply_along_axis(np.convolve, axis=1, arr=x, v=spike_kernel, mode=\"same\")\n # self._trim_stim takes care of trimming the stimulus based on start and end\n return self._trim_stim(x * self.amp)\n
# sum and concat test\npois = PoissonNoiseWithExpKernel(freq=20.0, amp=1.2, tau_syn=50.0, n=2)\n\nsummed = pois + sin\nplt.plot(summed.as_array(duration, dt).T);\n
concat = pois & sin\nplt.plot(concat.as_array(duration, dt).T);\n
model = ALNModel()\nmodel.params[\"duration\"] = 5 * 1000\nmodel.params[\"sigma_ou\"] = 0.2 # we add some noise\n
After creating a stimulus object, we can simply call its to_model(model) method and the stimulus is generated in the correct shape for the model.
The stimulus is then set as an input current parameter to the model. The parameter that models a current going to the excitatory population is called ext_exc_current. For the inhibitory population, we can use ext_inh_current. We can also set a firing rate input, which will then be integrated over the synapses, using the parameter model.params['ext_exc_rate'].
from neurolib.utils.loadData import Dataset\n\nds = Dataset(\"hcp\")\n
model = ALNModel(Cmat=ds.Cmat, Dmat=ds.Dmat)\n\n# we chose a parameterization in which the brain network oscillates slowly\n# between up- and down-states\n\nmodel.params[\"mue_ext_mean\"] = 2.56\nmodel.params[\"mui_ext_mean\"] = 3.52\nmodel.params[\"b\"] = 4.67\nmodel.params[\"tauA\"] = 1522.68\nmodel.params[\"sigma_ou\"] = 0.40\n\nmodel.params[\"duration\"] = 0.2 * 60 * 1000\n
def plot_output_and_spectrum(model, individual=False, vertical_mark=None):\n\"\"\"A simple plotting function for the timeseries\n and the power spectrum of the activity.\n \"\"\"\n fig, axs = plt.subplots(\n 1, 2, figsize=(8, 2), dpi=150, gridspec_kw={\"width_ratios\": [2, 1]}\n )\n axs[0].plot(model.t, model.output.T, lw=1)\n axs[0].set_xlabel(\"Time [ms]\")\n axs[0].set_ylabel(\"Activity [Hz]\")\n\n frs, powers = func.getMeanPowerSpectrum(model.output, dt=model.params.dt)\n axs[1].plot(frs, powers, c=\"k\")\n\n if individual:\n for o in model.output:\n frs, powers = func.getPowerSpectrum(o, dt=model.params.dt)\n axs[1].plot(frs, powers)\n\n axs[1].set_xlabel(\"Frequency [Hz]\")\n axs[1].set_ylabel(\"Power\")\n\n plt.show()\n
model.run(chunkwise=True)\n
plot_output_and_spectrum(model)\n
neurolib helps you to create a few basic stimuli out of the box using the function stimulus.construct_stimulus().
# construct a stimulus\n# we want 1-dim input - to all the nodes - 25Hz\nac_stimulus = stim.SinusoidalInput(amplitude=0.2, frequency=25.0).to_model(model)\nprint(ac_stimulus.shape)\n\n# this stimulus is 1-dimensional. neurolib will therefore automatically apply it to *all nodes*.\nmodel.params[\"ext_exc_current\"] = ac_stimulus * 5.0\n
\n(80, 120000)\n\n
model.run(chunkwise=True)\n
plot_output_and_spectrum(model)\n
# now we create multi-d input of 25Hz\nac_stimulus = stim.SinusoidalInput(amplitude=0.2, frequency=25.0).to_model(model)\nprint(ac_stimulus.shape)\n\n# We set the input to a bunch of nodes to zero.\n# This will have the effect that only nodes 0 to 4 will be stimulated!\nac_stimulus[5:, :] = 0\n\n# multiply the stimulus amplitude\nmodel.params[\"ext_exc_current\"] = ac_stimulus * 5.0\n
\n(80, 120000)\n\n
model.run(chunkwise=True)\n
We can see that the spectrum has a peak at the frequency we stimulated with, but only in a subset of nodes (where we stimulated).
This notebook will demonstrate how to construct stimuli using a variety of different predefined classes in neurolib.
You can then apply them as an input to a whole-brain model. As an example, we will see how to add an external current to the excitatory population of the ALNModel.
neurolib offers a range of external stimuli you can apply to your models. These range from basic noise processes, such as a Wiener process or an Ornstein-Uhlenbeck process, to simpler deterministic inputs such as sinusoids or rectified inputs. All stimuli are based on the ModelInput class and are available in the neurolib.utils.stimulus subpackage. In the following, we detail the implemented inputs and also show how to easily implement your own custom stimulus further below.
All inputs are initialized as classes. Three different methods are provided for generating the actual stimulus as a usable input: - as_array(duration, dt) - returns a numpy array. - as_cubic_splines(duration, dt) - returns a CubicHermiteSpline object, a spline representation of the given input, which is useful for the jitcdde backend in MultiModel. - to_model(model) - the easiest one - infers the duration, dt, and number of nodes from the simulated model itself and returns a numpy array of the appropriate shape.
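To make the shape convention concrete, here is a plain-numpy sketch of what an as_array-style call produces (the helper name and unit conversion are assumptions for illustration, not the neurolib API):

```python
import numpy as np

# Hypothetical stand-in for SinusoidalInput(...).as_array(duration, dt):
# returns a (n, timesteps) array, one row per spatial dimension.
def sinusoid_as_array(amplitude, frequency, duration, dt, n=1):
    n_steps = int(round(duration / dt))
    times = (np.arange(n_steps) + 1) * dt  # times in ms
    # frequency is assumed to be in Hz while times are in ms, hence the 1/1000
    signal = amplitude * np.sin(2.0 * np.pi * frequency * times / 1000.0)
    return np.tile(signal, (n, 1))

stim_array = sinusoid_as_array(amplitude=0.2, frequency=25.0, duration=2000.0, dt=0.1, n=3)
print(stim_array.shape)  # (3, 20000)
```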
Each stimulus type has its own init function with attributes that apply to that specific kind of stimulus. However, all of them include the attributes n and seed. n controls how many spatial dimensions the stimulus has; in the case of stochastic inputs, such as a noisy Ornstein-Uhlenbeck process, this is the number of independent realizations that are returned. A deterministic stimulus, such as the sinusoidal input, simply returns n identical copies.
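The role of n can be sketched with plain numpy (this is not neurolib code, just an illustration of the convention): a stochastic input yields n independent realizations, a deterministic one yields n identical rows.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
noise = rng.normal(size=(2, 1000))                      # 2 independent realizations
sine = np.tile(np.sin(0.01 * np.arange(1000)), (2, 1))  # 2 identical copies

print(np.allclose(sine[0], sine[1]))    # True
print(np.allclose(noise[0], noise[1]))  # False
```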
"},{"location":"examples/example-0.6-external-stimulus/#zero-input-for-convenience","title":"Zero input - for convenience","text":"
You'll probably never use it, but you know, it's there... Maybe you can use it as a \"pause\" when concatenating two different stimuli.
A mix of inputs that starts with a negative step, followed by an exponential rise and a subsequent decay to zero. Useful for detecting bistability.
"},{"location":"examples/example-0.6-external-stimulus/#operations-on-stimuli","title":"Operations on stimuli","text":"
Sometimes you need to concatenate inputs in the temporal dimension to create a mix of different stimuli. This is easy with neurolib's stimuli. All of them allow two operations: + for a sum of different stimuli and & to concatenate them (one after another). Below, we will show some of the weird combinations you can make.
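On the level of the generated arrays, the two operators reduce to a pointwise sum and a temporal concatenation; a plain-numpy sketch (the operator overloading itself lives in the stimulus classes):

```python
import numpy as np

a = np.sin(np.linspace(0, 2 * np.pi, 500))[None, :]  # shape (1, 500)
b = 0.5 * np.ones((1, 500))

summed = a + b                           # '+': same length, amplitudes add
concat = np.concatenate([a, b], axis=1)  # '&': one stimulus after another

print(summed.shape, concat.shape)  # (1, 500) (1, 1000)
```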
"},{"location":"examples/example-0.6-external-stimulus/#sum","title":"Sum","text":""},{"location":"examples/example-0.6-external-stimulus/#concatenation","title":"Concatenation","text":""},{"location":"examples/example-0.6-external-stimulus/#mixing-the-operations","title":"Mixing the operations","text":"
You should be able to use as many + and & as you want. Go crazy.
"},{"location":"examples/example-0.6-external-stimulus/#creating-a-custom-stimulus","title":"Creating a custom stimulus","text":"
Creating a custom stimulus is very easy, and you can build your own library of stimuli as inputs for your models. There are three necessary steps: 1. Subclass stim.Input for a basic input, or stim.Stimulus to have the option to set start and end times. 2. Define an __init__() method with the necessary parameters of your stimulus and set the appropriate attributes. 3. Define a generate_input(duration, dt) method which returns a numpy array of shape (space, time) - and that's it. Everything else described above is taken care of. Your new input class will also support operations like + and &.
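A minimal example of the three steps. The SquarePulse class and its parameters are made up for illustration, and a stand-in base class is used so the sketch runs without neurolib installed; in practice you would subclass stim.Input or stim.Stimulus instead.

```python
import numpy as np

class Input:  # stand-in for stim.Input (step 1: subclass the base class)
    def __init__(self, n=1, seed=None):
        self.n = n
        self.seed = seed
    def _get_times(self, duration, dt):
        n_steps = int(round(duration / dt))
        self.times = (np.arange(n_steps) + 1) * dt
    def as_array(self, duration, dt):
        return self.generate_input(duration, dt)

class SquarePulse(Input):
    def __init__(self, amp, onset, offset, n=1, seed=None):
        # step 2: store stimulus-specific parameters, pass the rest on
        self.amp = amp
        self.onset = onset
        self.offset = offset
        super().__init__(n=n, seed=seed)
    def generate_input(self, duration, dt):
        # step 3: return a (space, time) numpy array
        self._get_times(duration=duration, dt=dt)
        x = np.zeros((self.n, self.times.shape[0]))
        x[:, (self.times >= self.onset) & (self.times < self.offset)] = self.amp
        return x

pulse = SquarePulse(amp=1.5, onset=100.0, offset=300.0, n=2)
arr = pulse.as_array(duration=1000.0, dt=0.1)
print(arr.shape)  # (2, 10000)
```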
Below we implement a new stimulus class that represents currents caused by a Poisson spike train convolved with an exponential kernel.
"},{"location":"examples/example-0.6-external-stimulus/#using-stimuli-in-neurolib","title":"Using stimuli in neurolib","text":"
First, we initialize a single node.
"},{"location":"examples/example-0.6-external-stimulus/#brain-network-stimulation","title":"Brain network stimulation","text":""},{"location":"examples/example-0.6-external-stimulus/#without-stimulation","title":"Without stimulation","text":""},{"location":"examples/example-0.6-external-stimulus/#constructing-a-stimulus","title":"Constructing a stimulus","text":""},{"location":"examples/example-0.6-external-stimulus/#focal-stimulation","title":"Focal stimulation","text":"
In the previous example, the stimulus was applied to all nodes simultaneously. We can also apply stimulation to a specific set of nodes.
This notebook demonstrates how to implement your own model in neurolib. There are two main parts of each model: its class that inherits from the Model base class and its timeIntegration() function.
# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-2] == \"neurolib\":\n os.chdir('..')\n\n%load_ext autoreload\n%autoreload 2\n
try:\n import matplotlib.pyplot as plt\nexcept ImportError:\n import sys\n !{sys.executable} -m pip install matplotlib\n import matplotlib.pyplot as plt\n
In this example we will implement a linear model with the following equation:
\\(\\frac{d}{dt} x_i(t) = - \\frac{x_i(t)}{\\tau} + \\sum_{j=0}^{N} K G_{ij} x_j(t)\\).
Here, we simulate \\(N\\) nodes that are coupled in a network. \\(x_i\\) are the elements of an \\(N\\)-dimensional state vector, \\(\\tau\\) is the decay time constant, \\(G\\) is the adjacency matrix and \\(K\\) is the global coupling strength.
We first create a class for the model called LinearModel which inherits lots of functionality from the Model base class. We define state_vars and default_output so that neurolib knows how to handle the variables of the system. Next, we define init_vars in order to use the autochunk integration scheme, so we can save a lot of RAM when we run very long simulations.
Next we define a simple parameter dictionary called params. In here, we can define all the necessary parameters of the model and change their values later. In this example, we set the timescale \\(\\tau\\), the coupling strength \\(K\\), the integration time step dt (in ms) and the duration to 100 ms.
We are now ready to set up the constructor of our model! This method is supposed to set up the model and prepare it for integration. All the magic happens in the background! We pass the self.timeIntegration function and the parameter dictionary self.params to the base class using super().__init__().
That wasn't too bad, was it? We are finally ready to define the time integration method that prepares all variables and passes them to the function that will crunch the numbers. Here we prepare the numpy arrays that will hold the simulation results; they have to be allocated before we can execute the numba code.
def timeIntegration(self, p):\n N = p['Cmat'].shape[0]\n t = np.arange(1, p['duration']/p['dt'] + 1) # holds time steps\n x = np.ndarray((N, len(t)+1)) # holds variable x\n
Next, we make use of a neurolib convention to prepare the initial conditions of our model. If you remember, we defined init_vars above in order to use the autochunk feature. The autochunk feature will automatically fill this parameter with the last state of the last simulated chunk so the model integration can be continued without having to remember the entire output and state variables of the model indefinitely. In this line, we check whether x_init is set or not (which it will be, when we use chunkwise integration). If it is not set, we simply use random initial conditions using rand((N, 1)). Remember that the convention for array dimensions is array[space, time], meaning that we only fill in the first time step with the initial condition.
# either use predefined initial conditions or random ones\nx[:, :1] = p.get('x_init') if p.get('x_init') is not None else rand((N, 1))\n
We're ready to call our accelerated integration part and return the results 🚀!
return njit_integrate(x, t, p['tau'], p['K'], N, p['Cmat'], p['dt'])\n
Remember to put this function outside of the class definition, so we can use numba acceleration to greatly increase the performance of our code. We first have to let numba know which part of the code to precompile. We do this by simply placing the decorator @numba.njit on the line above the integration function. An easy way of getting 100x faster code! ❤️ numba!
@numba.njit\ndef njit_integrate(x, t, tau, K, N, Cmat, dt):\n
Next, we do some simple math. We first loop over all time steps. If you have prepared the array t as described above, you can simply loop over its length. In the next line, we calculate the coupling term from the model equation above. However, instead of looping over the sum, we use a little trick and simply compute the dot product between the coupling matrix G and the state vector x. This results in an N-dimensional vector that carries the amount of input each node receives at each time step. Finally, we loop over all nodes to add everything up.
for i in range(1, 1 + len(t)): # loop over time\n inp = Cmat.dot(x[:, i-1]) # input vector\n for n in range(N): # loop over nodes\n
In the next line, we integrate the equation that we have shown above. This integration scheme is called Euler integration and is the most simple way of solving an ODE. The idea is easy and is best expressed as x_next = x_before + f(x) * dt where f(x) is simply the time derivative \\(\\frac{d}{dt} x_i(t)\\) shown above.
x[n, i] = x[n, i-1] + (- x[n, i-1] / tau + K * inp[n]) * dt # model equations\n
We're done! The only thing left to do is to return the data so that neurolib can take over from here on. The outputs of this simulation will be available in the model.outputs attribute. You can see an example time series below.
return t, x\n
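As a quick sanity check of the scheme (a standalone sketch, separate from the model class): for a single uncoupled node, the Euler solution should track the exact decay \(x(t) = x_0 e^{-t/\tau}\).

```python
import numpy as np

# Euler-integrate dx/dt = -x/tau for one uncoupled node and compare
# against the exact solution x(t) = x0 * exp(-t / tau).
tau, dt, steps, x0 = 10.0, 0.1, 1000, 1.0
x = x0
for _ in range(steps):
    x = x + (-x / tau) * dt  # x_next = x_before + f(x) * dt
exact = x0 * np.exp(-steps * dt / tau)
print(abs(x - exact))  # small for small dt
```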
import numba\nimport numpy as np\nfrom numpy.random import random as rand\nfrom neurolib.models.model import Model\n\nclass LinearModel(Model):\n state_vars = [\"x\"]\n default_output = \"x\"\n init_vars = [\"x_init\"]\n params = dict(tau=10, K=1e-2, dt=1e-1, duration=100)\n def __init__(self, Cmat=np.zeros((1,1))):\n self.params['Cmat'] = Cmat\n super().__init__(self.timeIntegration, self.params)\n\n def timeIntegration(self, p):\n p['N'] = p['Cmat'].shape[0] # number of nodes\n t = np.arange(1, p['duration']/p['dt'] + 1) # holds time steps\n x = np.ndarray((p['N'], len(t)+1)) # holds variable x\n # either use predefined initial conditions or random ones\n x[:, :1] = p['x_init'] if 'x_init' in p else rand((p['N'], 1))\n return njit_integrate(x, t, p['tau'], p['K'], p['N'], p['Cmat'], p['dt'])\n\n@numba.njit\ndef njit_integrate(x, t, tau, K, N, Cmat, dt):\n for i in range(1, 1 + len(t)): # loop over time\n inp = Cmat.dot(x[:, i-1]) # input vector\n for n in range(N): # loop over nodes\n x[n, i] = x[n, i-1] +\\\n (- x[n, i-1] / tau + K * inp[n]) * dt # model equations\n return t, x\n
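The dot-product shortcut used inside the loop can be checked in isolation (a throwaway numpy snippet, not part of the model code):

```python
import numpy as np

# inp[n] = sum_j Cmat[n, j] * x[j] is exactly the matrix-vector product
N = 4
rng = np.random.default_rng(0)
Cmat = rng.random((N, N))
x = rng.random(N)

inp_loop = np.array([sum(Cmat[n, j] * x[j] for j in range(N)) for n in range(N)])
print(np.allclose(inp_loop, Cmat.dot(x)))  # True
```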
We prepare a \"mock\" connectivity matrix consisting simply of 12x12 random numbers, meaning that we will simulate 12 coupled LinearModel nodes in a network.
Cmat = rand((12, 12)) # use a random connectivity matrix\nmodel = LinearModel(Cmat) # initialize the model\n
Since we've followed the model implementation guidelines, the model is also compatible with chunkwise integration and can produce a BOLD signal. Let's try it out!
"},{"location":"examples/example-0.7-custom-model/#minimal-model-implementation","title":"Minimal model implementation","text":""},{"location":"examples/example-0.7-custom-model/#model-equations","title":"Model equations","text":""},{"location":"examples/example-0.7-custom-model/#implementation","title":"Implementation","text":""},{"location":"examples/example-0.7-custom-model/#numba-time-integration","title":"Numba time integration","text":""},{"location":"examples/example-0.7-custom-model/#code","title":"Code","text":""},{"location":"examples/example-0.7-custom-model/#running-the-model","title":"Running the model","text":""},{"location":"examples/example-0.7-custom-model/#plot-outputs","title":"Plot outputs","text":""},{"location":"examples/example-0.7-custom-model/#bold-and-autochunk","title":"BOLD and autochunk","text":""},{"location":"examples/example-1-aln-parameter-exploration/","title":"Example 1 aln parameter exploration","text":"
# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-1] == \"examples\":\n os.chdir('..')\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2 \n
try:\n import matplotlib.pyplot as plt\nexcept ImportError:\n import sys\n !{sys.executable} -m pip install matplotlib\n import matplotlib.pyplot as plt\n\nimport numpy as np\n\nfrom neurolib.models.aln import ALNModel\nfrom neurolib.utils.parameterSpace import ParameterSpace\nfrom neurolib.optimize.exploration import BoxSearch\n\n# a nice color map\nplt.rcParams['image.cmap'] = 'plasma'\n
aln = ALNModel()\n
parameters = ParameterSpace({\"mue_ext_mean\": np.linspace(0, 3, 2), \"mui_ext_mean\": np.linspace(0, 3, 2)})\n# info: chose np.linspace(0, 3, 21) or more, values here are low for testing\nsearch = BoxSearch(aln, parameters, filename=\"example-1.hdf\")\n
search.run()\n
search.loadResults()\n
print(\"Number of results: {}\".format(len(search.results)))\n
# Example analysis of the results\n# The .results attribute is a list and can be indexed by the run \n# number (which is also the index of the pandas dataframe .dfResults).\n# Here we compute the maximum firing rate of the node in the last second\n# and add the result (a float) to the pandas dataframe.\nfor i in search.dfResults.index:\n search.dfResults.loc[i, 'max_r'] = np.max(search.results[i]['rates_exc'][:, -int(1000/aln.params['dt']):])\n
plt.imshow(search.dfResults.pivot_table(values='max_r', index = 'mui_ext_mean', columns='mue_ext_mean'), \\\n extent = [min(search.dfResults.mue_ext_mean), max(search.dfResults.mue_ext_mean),\n min(search.dfResults.mui_ext_mean), max(search.dfResults.mui_ext_mean)], origin='lower')\nplt.colorbar(label='Maximum rate [Hz]')\nplt.xlabel(\"Input to E\")\nplt.ylabel(\"Input to I\")\n
# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-1] == \"examples\":\n os.chdir('..')\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2 \n
try:\n import matplotlib.pyplot as plt\nexcept ImportError:\n import sys\n !{sys.executable} -m pip install matplotlib\n import matplotlib.pyplot as plt\n\nimport numpy as np\n\nfrom neurolib.utils.parameterSpace import ParameterSpace\nfrom neurolib.optimize.exploration import BoxSearch\n
def explore_me(traj):\n pars = search.getParametersFromTraj(traj)\n # let's calculate the distance to a circle\n computation_result = abs((pars['x']**2 + pars['y']**2) - 1)\n result_dict = {\"distance\" : computation_result}\n search.saveToPypet(result_dict, traj)\n
parameters = ParameterSpace({\"x\": np.linspace(-2, 2, 2), \"y\": np.linspace(-2, 2, 2)})\n# info: chose np.linspace(-2, 2, 40) or more, values here are low for testing\nsearch = BoxSearch(evalFunction = explore_me, parameterSpace = parameters, filename=\"example-1.1.hdf\")\n
search.run()\n
search.loadResults()\nprint(\"Number of results: {}\".format(len(search.results)))\n
The runs are also ordered in a simple pandas dataframe called search.dfResults. We cycle through all results by calling search.results[i] and load the desired result (here the distance to the circle) into the dataframe.
for i in search.dfResults.index:\n search.dfResults.loc[i, 'distance'] = search.results[i]['distance']\n\nsearch.dfResults\n
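The pivot_table call used for plotting reshapes this flat run-per-row dataframe into a 2D grid; a toy version with made-up values mirroring the circle-distance example:

```python
import pandas as pd

df = pd.DataFrame({
    "x": [-2.0, -2.0, 2.0, 2.0],
    "y": [-2.0, 2.0, -2.0, 2.0],
    "distance": [7.0, 7.0, 7.0, 7.0],  # |x^2 + y^2 - 1| at the corners
})
grid = df.pivot_table(values="distance", index="x", columns="y")
print(grid.shape)  # (2, 2)
```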
And of course a plot can visualize the results very easily.
plt.imshow(search.dfResults.pivot_table(values='distance', index = 'x', columns='y'), \\\n extent = [min(search.dfResults.x), max(search.dfResults.x),\n min(search.dfResults.y), max(search.dfResults.y)], origin='lower')\nplt.colorbar(label='Distance to the unit circle')\n
This notebook demonstrates a very simple parameter exploration of a custom function that we have defined. It is a simple function that returns the distance to a unit circle, so we expect our parameter exploration to resemble a circle.
"},{"location":"examples/example-1.1-custom-parameter-exploration/#define-the-evaluation-function","title":"Define the evaluation function","text":"
Here we define a very simple evaluation function. The function needs to take in traj as an argument, which is the pypet trajectory. This is how the function knows what parameters were assigned to it. Using the builtin function search.getParametersFromTraj(traj) we can then retrieve the parameters for this run. They are returned as a dictionary and can be accessed in the function.
In the last step, we use search.saveToPypet(result_dict, traj) to save the results to the pypet trajectory and to an HDF. In between, the computational magic happens!
"},{"location":"examples/example-1.1-custom-parameter-exploration/#define-the-parameter-space-and-exploration","title":"Define the parameter space and exploration","text":"
Here we define which space we want to cover. For this, we use the builtin class ParameterSpace which provides a very easy interface to the exploration. To initialize the exploration, we simply pass the evaluation function and the parameter space to the BoxSearch class.
We can easily obtain the results from pypet. First we call search.loadResults() to make sure that the results are loaded from the hdf file to our instance.
#hide\n# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-1] == \"examples\":\n os.chdir('..')\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2 \n
\nThe autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n\n
#hide\ntry:\n import matplotlib.pyplot as plt\nexcept ImportError:\n import sys\n !{sys.executable} -m pip install matplotlib\n import matplotlib.pyplot as plt\n\nimport numpy as np\n\n# Let's import all the necessary functions for the parameter\nfrom neurolib.models.fhn import FHNModel\nfrom neurolib.utils.parameterSpace import ParameterSpace\nfrom neurolib.optimize.exploration import BoxSearch\n\n# load some utilty functions for explorations\nimport neurolib.utils.pypetUtils as pu\nimport neurolib.utils.paths as paths\nimport neurolib.optimize.exploration.explorationUtils as eu\n\n# The brain network dataset\nfrom neurolib.utils.loadData import Dataset\n\n# Some useful functions are provided here\nimport neurolib.utils.functions as func\n\n# a nice color map\nplt.rcParams['image.cmap'] = 'plasma'\n
We load a dataset (in this case the hcp dataset from the Human Connectome Project) and initialize a model to run on each node of the brain network (here the FHNModel which is the Fitz-Hugh Nagumo model).
Running the model is as simple as entering model.run(chunkwise=True).
We define a parameter range to explore. Our first parameter is x_ext, the input to each node of the FHNModel in a brain network; this parameter is therefore a list with N entries, one per node. Our next parameter is K_gl, the global coupling strength. Finally, we have the coupling parameter, which defines how each FHNModel is coupled to its adjacent nodes: either additive coupling (activity += input) or diffusive coupling (activity += (input - activity)).
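The difference between the two schemes can be sketched for a single node (illustrative numbers; diffusive coupling conventionally weights the difference between neighbor and own activity):

```python
import numpy as np

activity = np.array([0.5, 0.2, 0.8])  # states x_j of 3 nodes
weights = np.array([0.0, 1.0, 0.5])   # coupling-matrix row of node 0

additive = weights.dot(activity)                 # sum_j w_j * x_j
diffusive = weights.dot(activity - activity[0])  # sum_j w_j * (x_j - x_0)

print(additive, diffusive)
```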
parameters = ParameterSpace({\"x_ext\": [np.ones((model.params['N'],)) * a for a in np.linspace(0, 2, 2)] # testing: 2, original: 41\n ,\"K_gl\": np.linspace(0, 2, 2) # testing: 2, original: 41\n ,\"coupling\" : [\"additive\", \"diffusive\"]\n }, kind=\"grid\")\nsearch = BoxSearch(model=model, parameterSpace=parameters, filename=\"example-1.2.0.hdf\")\n
We run the exploration simply by calling the run() function of the BoxSearch class. We can pass parameters to this function that will be passed on directly to the FHNModel.run() function of the simulated model. This way, we can easily specify to run the simulation chunkwise, without storing all the activity in memory, and to simulate BOLD activity as well.
Note that the default behaviour of the BoxSearch class is to save the default_output of each model and if bold is simulated, then also the BOLD data. If the exploration is initialized with BoxSearch(saveAllModelOutputs=True), the exploration would save all outputs of the model. This can obviously create a lot of data to store, so please use this option at your own discretion.
search.run(chunkwise=True, bold=True)\n
A simple helper function for getting the trajectories of an hdf file created by pypet can be found in pypetUtils.py (aka pu). This way, you can explore which explorations are in the file and decide later which one you want to load for analysis.
The default behaviour is to load the latest exploration. Its name is also stored in search.trajectoryName:
search.trajectoryName\n
\n'results-2020-04-08-02H-50M-09S'\n
Now we load all results. As mentioned above, the newest exploration is loaded by default. You can load results from earlier explorations by adding the argument trajectoryName=results-from-earlier, and you can choose another hdf file using the argument filename=/path/to/explorations.hdf.
Remember that using search.loadResults() will load all results into memory. This can consume a lot of RAM, depending on how big the exploration was.
search.loadResults()\n
print(\"Number of results: {}\".format(len(search.results)))\n
One way of loading a result without loading everything else into RAM is to use the builtin function search.getRun(). However, you need to know which runId you're looking for! For this, you can run search.loadDfResults() to create a pandas.DataFrame search.dfResults with all parameters (which also happens when you call search.loadResults()).
After loading the results with search.loadResults() they are now available as a simple list using search.results. Let's look at the time series of one result.
If you remember from before, the external input parameter x_ext is a list of length N (one per node). Since they're all the same in this example, we reduce the parameter to only the first entry of each list.
search.dfResults.x_ext = [a[0] for a in list(search.dfResults.x_ext)]\n
We can use eu.processExplorationResults() from explorationUtils.py (aka eu) to process the results from the simulation and store them in our pandas.DataFrame of all results, search.dfResults:
This finally gives us a dataframe with parameters and respective values from postprocessing the results, which we can access using search.dfResults.
We can use the utility function eu.findCloseResults() to navigate in this DataFrame and find for example the runId of a run for a specific parameter configuration.
To understand what is happening in eu.processExplorationResults(), it helps to see how we could do the postprocessing on the loaded data ourselves. Let's calculate the correlation to empirical functional connectivity using the builtin functions func.fc() and func.matrix_correlation().
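To see what these builtins compute, here is a plain-numpy sketch of their assumed behavior: fc as the Pearson correlation matrix of an (N, t) timeseries, and matrix_correlation as the correlation between the upper triangles of two such matrices.

```python
import numpy as np

def fc(ts):
    # functional connectivity: Pearson correlations between all node pairs
    return np.corrcoef(ts)

def matrix_correlation(m1, m2):
    # compare only the off-diagonal entries of the two FC matrices
    triu = np.triu_indices(m1.shape[0], k=1)
    return np.corrcoef(m1[triu], m2[triu])[0, 1]

rng = np.random.default_rng(1)
fc1 = fc(rng.random((5, 200)))
fc2 = fc(rng.random((5, 200)))
r = matrix_correlation(fc1, fc2)
print(-1.0 <= r <= 1.0)  # True
```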
mean_corr = np.mean([func.matrix_correlation(func.fc(search.results[rId]['BOLD']), fc) for fc in ds.FCs])\n\nprint(f\"Mean correlation of run {rId} with empirical FC matrices is {mean_corr:.02}\")\n
\nMean correlation of run 3324 with empirical FC matrices is 0.28\n\n
Another useful function is eu.plotExplorationResults(), which helps you visualize the results of the exploration. You can specify which parameters should be on the x- and y-axis using the par1=[parameter_name, parameter_label] and par2 arguments, and you can define by which parameter plane the results should be \"sliced\".
We want to find parameters for which the brain network model produces realistic BOLD functional connectivity. For this, we calculated the entry fc in search.dfResults by taking func.fc() of the model.BOLD timeseries and comparing it to empirical data using func.matrix_correlation().
Below, the average of this value across all subjects of the dataset is plotted. A higher value (brighter color) means a better fit to the empirical data. Observe how the best solutions tend to cluster at the edges of bifurcations, indicating that correlations in the network are generated by multiple nodes undergoing bifurcation together, such as transitioning from the constant activity (fixed point) solution to an oscillation.
"},{"location":"examples/example-1.2-brain-network-exploration/#parameter-exploration-of-a-brain-network-model","title":"Parameter exploration of a brain network model","text":"
This notebook demonstrates how to scan the parameter space of a brain network model using neurolib. We will simulate BOLD activity and compare the results to empirical data to identify optimal parameters of the model.
The steps outlined in this notebook are the following:
1. We load a DTI and resting-state fMRI dataset (hcp) and set up a brain network using the FHNModel.
2. We simulate the system for a range of different parameter configurations.
3. We load the simulated data from disk.
4. We postprocess the results and obtain the model fit.
5. Finally, we plot the results in the parameter space of the exploration.
"},{"location":"examples/example-1.2-brain-network-exploration/#1-set-up-a-brain-network","title":"1. Set up a brain network","text":""},{"location":"examples/example-1.2-brain-network-exploration/#2-run-the-exploration","title":"2. Run the exploration","text":""},{"location":"examples/example-1.2-brain-network-exploration/#3-load-results","title":"3. Load results","text":""},{"location":"examples/example-1.2-brain-network-exploration/#4-postprocessing","title":"4. Postprocessing","text":""},{"location":"examples/example-1.2-brain-network-exploration/#5-plot","title":"5. Plot","text":""},{"location":"examples/example-1.2-brain-network-exploration/#bold-functional-connectivity","title":"BOLD functional connectivity","text":""},{"location":"examples/example-1.2.1-brain-exploration-postprocessing/","title":"Example 1.2.1 brain exploration postprocessing","text":"
#hide\n# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-1] == \"examples\":\n os.chdir('..')\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2 \n
#hide\ntry:\n import matplotlib.pyplot as plt\nexcept ImportError:\n import sys\n !{sys.executable} -m pip install matplotlib\n import matplotlib.pyplot as plt\n# a nice color map\nplt.rcParams['image.cmap'] = 'plasma'\n\nimport numpy as np\n\nfrom neurolib.models.aln import ALNModel\nfrom neurolib.utils.parameterSpace import ParameterSpace\nfrom neurolib.optimize.exploration import BoxSearch\nimport neurolib.utils.functions as func\n\nfrom neurolib.utils.loadData import Dataset\nds = Dataset(\"hcp\")\n
model = ALNModel(Cmat = ds.Cmat, Dmat = ds.Dmat) # simulates the whole-brain model in 10s chunks by default if bold == True\n# Resting state fits\nmodel.params['mue_ext_mean'] = 1.57\nmodel.params['mui_ext_mean'] = 1.6\n#model.params['sigma_ou'] = 0.09\nmodel.params['b'] = 5.0\nmodel.params['dt'] = 0.2\nmodel.params['duration'] = 0.2 * 1000 #ms\n# testing: model.params['duration'] = 0.2 * 60 * 1000 #ms\n# real: model.params['duration'] = 1.0 * 60 * 1000 #ms\n
\nMainProcess root INFO aln: Model initialized.\n\n
def evaluateSimulation(traj):\n # get the model from the trajectory using `search.getModelFromTraj(traj)`\n model = search.getModelFromTraj(traj)\n # initiate the model with random initial conditions\n model.randomICs()\n defaultDuration = model.params['duration']\n invalid_result = {\"fc\" : np.nan, \"fcd\" : np.nan}\n\n # -------- STAGEWISE EVALUATION --------\n stagewise = True\n if stagewise:\n # -------- stage wise simulation --------\n\n # Stage 1 : simulate for a few seconds to see if there is any activity\n # ---------------------------------------\n model.params['duration'] = 3*1000.\n model.run()\n\n # check if stage 1 was successful\n amplitude = np.max(model.output[:, model.t > 500]) - np.min(model.output[:, model.t > 500])\n if amplitude < 0.05:\n search.saveToPypet(invalid_result, traj)\n return invalid_result, {}\n\n # Stage 2: simulate BOLD for a few seconds to see if it moves\n # ---------------------------------------\n model.params['duration'] = 30*1000.\n model.run(chunkwise=True, bold = True)\n\n if np.max(np.std(model.outputs.BOLD.BOLD[:, 10:15], axis=1)) < 1e-5:\n search.saveToPypet(invalid_result, traj)\n return invalid_result, {}\n\n # Stage 3: full and final simulation\n # ---------------------------------------\n model.params['duration'] = defaultDuration\n model.run(chunkwise=True, bold = True)\n\n # -------- POSTPROCESSING --------\n # FC matrix correlation to all subject rs-fMRI\n BOLD_TRANSIENT = 10000\n fc_score = np.mean([func.matrix_correlation(func.fc(model.BOLD.BOLD[:, model.BOLD.t_BOLD > BOLD_TRANSIENT]), fc) for fc in ds.FCs])\n\n # FCD to all subject rs-fMRI\n try:\n fcd_score = np.mean([func.ts_kolmogorov(model.BOLD.BOLD[:, model.BOLD.t_BOLD > BOLD_TRANSIENT], ds.BOLDs[i]) for i in range(len(ds.BOLDs))])\n except:\n fcd_score = np.nan\n\n # let's build the results dictionary\n result_dict = {\"fc\" : fc_score, \"fcd\" : fcd_score}\n # we could also save the output of the model by adding to the results_dict like this:\n # 
result_dict = {\"fc\" : fc_score, \"fcd\" : fcd_score, \"outputs\" : model.outputs}\n\n # Save the results to pypet. \n # Remember: This has to be dictionary!\n search.saveToPypet(result_dict, traj)\n
parameters = ParameterSpace({\"mue_ext_mean\": np.linspace(0, 3.0, 2), \"mui_ext_mean\": np.linspace(0.2, 3.0, 2)})\n# info: chose np.linspace(0, 3, 21) or more, values here are low for testing\nsearch = BoxSearch(evalFunction = evaluateSimulation, model=model, parameterSpace=parameters, filename=\"example-1.2.1.hdf\")\n
\nMainProcess root INFO Number of processes: 80\nMainProcess pypet.storageservice.HDF5StorageService INFO I will use the hdf5 file `/mnt/raid/data/cakan/hdf/example-1.2.1.hdf`.\nMainProcess pypet.environment.Environment INFO Environment initialized.\n/home/cakan/anaconda/lib/python3.7/site-packages/pypet/parameter.py:884: FutureWarning: Conversion of the second argument of issubdtype from `str` to `str` is deprecated. In future, it will be treated as `np.str_ == np.dtype(str).type`.\n if np.issubdtype(dtype, np.str):\nMainProcess root INFO Number of parameter configurations: 4\nMainProcess root INFO BoxSearch: Environment initialized.\n\n
search.run()\n
\nMainProcess pypet.environment.Environment INFO I am preparing the Trajectory for the experiment and initialise the store.\nMainProcess pypet.environment.Environment INFO Initialising the storage for the trajectory.\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2020-04-08-01H-16M-48S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2020-04-08-01H-16M-48S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nSTARTING runs of trajectory\n`results-2020-04-08-01H-16M-48S`.\n************************************************************\n\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2020-04-08-01H-16M-48S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2020-04-08-01H-16M-48S`.\nMainProcess pypet.environment.Environment INFO Starting multiprocessing with at most 80 processes running at the same time.\nMainProcess pypet INFO PROGRESS: Finished 0/4 runs [ ] 0.0%\nMainProcess pypet INFO PROGRESS: Finished 1/4 runs [===== ] 25.0%, remaining: 0:00:02\nMainProcess pypet INFO PROGRESS: Finished 2/4 runs [========== ] 50.0%, remaining: 0:00:00\nMainProcess pypet INFO PROGRESS: Finished 3/4 runs [=============== ] 75.0%, remaining: 0:00:09\nMainProcess pypet INFO PROGRESS: Finished 4/4 runs [====================]100.0%\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2020-04-08-01H-16M-48S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2020-04-08-01H-16M-48S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nFINISHED all runs of 
trajectory\n`results-2020-04-08-01H-16M-48S`.\n************************************************************\n\nMainProcess pypet.environment.Environment INFO \n************************************************************\nSTARTING FINAL STORING of trajectory\n`results-2020-04-08-01H-16M-48S`\n************************************************************\n\nMainProcess pypet.storageservice.HDF5StorageService INFO Start storing Trajectory `results-2020-04-08-01H-16M-48S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `config`.\n/home/cakan/anaconda/lib/python3.7/site-packages/pypet/storageservice.py:4597: FutureWarning: Conversion of the second argument of issubdtype from `str` to `str` is deprecated. In future, it will be treated as `np.str_ == np.dtype(str).type`.\n if (np.issubdtype(val.dtype, str) or\n/home/cakan/anaconda/lib/python3.7/site-packages/pypet/storageservice.py:4598: FutureWarning: Conversion of the second argument of issubdtype from `bytes` to `bytes` is deprecated. In future, it will be treated as `np.bytes_ == np.dtype(bytes).type`.\n np.issubdtype(val.dtype, bytes)):\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `parameters`.\n/home/cakan/anaconda/lib/python3.7/site-packages/pypet/storageservice.py:3110: FutureWarning: Conversion of the second argument of issubdtype from `str` to `str` is deprecated. In future, it will be treated as `np.str_ == np.dtype(str).type`.\n np.issubdtype(data.dtype, str)):\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished storing Trajectory `results-2020-04-08-01H-16M-48S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nFINISHED FINAL STORING of trajectory\n`results-2020-04-08-01H-16M-48S`.\n************************************************************\n\nMainProcess pypet.environment.Environment INFO All runs of trajectory `results-2020-04-08-01H-16M-48S` were completed successfully.\n\n
search.loadResults()\nprint(\"Number of results: {}\".format(len(search.results)))\n
\nMainProcess root INFO Loading results from /mnt/raid/data/cakan/hdf/example-1.2.1.hdf\n/mnt/antares_raid/home/cakan/projects/neurolib/neurolib/utils/pypetUtils.py:21: H5pyDeprecationWarning: The default file mode will change to 'r' (read-only) in h5py 3.0. To suppress this warning, pass the mode you need to h5py.File(), or set the global default h5.get_config().default_file_mode, or set the environment variable H5PY_DEFAULT_READONLY=1. Available modes are: 'r', 'r+', 'w', 'w-'/'x', 'a'. See the docs for details.\n hdf = h5py.File(filename)\nMainProcess root INFO Analyzing trajectory results-2020-04-08-01H-16M-48S\nMainProcess pypet.storageservice.HDF5StorageService INFO I will use the hdf5 file `/mnt/raid/data/cakan/hdf/example-1.2.1.hdf`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Loading trajectory `results-2020-04-08-01H-16M-48S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Loading branch `config` in mode `2`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Loading branch `parameters` in mode `2`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Loading branch `results` in mode `1`.\nMainProcess root INFO Creating pandas dataframe ...\nMainProcess root INFO Creating results dictionary ...\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 4/4 [00:00<00:00, 219.06it/s]\nMainProcess root INFO All results loaded.\n\n
\nNumber of results: 4\n\n
for i in search.dfResults.index:\n search.dfResults.loc[i, 'bold_cc'] = np.mean(search.results[i]['fc'])\nsearch.dfResults\n
plt.figure(dpi=150)\nplt.imshow(search.dfResults.pivot_table(values='bold_cc', index = 'mui_ext_mean', columns='mue_ext_mean'), \\\n extent = [min(search.dfResults.mue_ext_mean), max(search.dfResults.mue_ext_mean),\n min(search.dfResults.mui_ext_mean), max(search.dfResults.mui_ext_mean)], origin='lower')\nplt.colorbar(label='Mean correlation to empirical rs-FC')\nplt.xlabel(\"Input to E\")\nplt.ylabel(\"Input to I\")\n
\nText(0, 0.5, 'Input to I')\n
"},{"location":"examples/example-1.2.1-brain-exploration-postprocessing/#parameter-exploration-with-custom-run-function-and-postprocessing","title":"Parameter exploration with custom run function and postprocessing","text":"
This notebook demonstrates how to scan the parameter space of a brain network model using neurolib with a custom evaluation function to quickly find regions of interest. The evaluation function speeds up the exploration by focusing on regions where the simulated dynamics meet certain criteria. To do so, the simulation is run in multiple successive stages of increasing duration.
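The staged early-exit pattern can be sketched independently of any particular model. The thresholds and the `simulate` helper below are illustrative stand-ins, not neurolib API; the idea is simply that each stage runs a cheaper check and bails out before the expensive full simulation:

```python
import numpy as np

def staged_evaluation(simulate, full_duration):
    """Evaluate in stages of increasing duration; bail out early on dead dynamics.

    `simulate(duration)` is a hypothetical stand-in for running the model and
    returning its output time series as a 1D array.
    """
    invalid = {"fc": np.nan, "fcd": np.nan}

    # Stage 1: short run - is there any activity at all?
    out = simulate(3_000.0)
    if np.max(out) - np.min(out) < 0.05:
        return invalid  # flat output, skip the expensive stages

    # Stage 2: medium run - does the slow (BOLD-like) signal move?
    out = simulate(30_000.0)
    if np.std(out) < 1e-5:
        return invalid

    # Stage 3: full simulation, only reached by promising parameter sets
    out = simulate(full_duration)
    return {"fc": float(np.mean(out)), "fcd": float(np.std(out))}

# toy usage: an oscillating "model" passes all stages
result = staged_evaluation(lambda d: np.sin(np.arange(0, d, 0.1)), 60_000.0)
```

Parameter sets whose dynamics die out are rejected after the cheap first stage, which is where most of the speedup comes from.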
In this scenario, we want to postprocess the simulated data as soon as the simulation is done and before the results are written to disk. After the full simulation has run, the functional connectivity (FC) of the BOLD signal is computed and compared to the empirical FC dataset: the Pearson correlation of the FC matrices is computed and averaged. We then tell pypet to save these postprocessed results along with the model output.
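The FC score boils down to two operations: compute the pairwise Pearson correlation of the BOLD time series, and correlate the upper triangles of the simulated and empirical FC matrices. neurolib provides these as func.fc and func.matrix_correlation; the minimal numpy versions below are simplified stand-ins, shown only to make the postprocessing step concrete:

```python
import numpy as np

def fc(bold):
    """Functional connectivity: pairwise Pearson correlation of BOLD signals.
    `bold` has shape (N_regions, N_timepoints)."""
    return np.corrcoef(bold)

def matrix_correlation(m1, m2):
    """Pearson correlation between the upper triangles of two FC matrices."""
    iu = np.triu_indices_from(m1, k=1)  # exclude the diagonal of ones
    return np.corrcoef(m1[iu], m2[iu])[0, 1]

# toy usage: six regions in two clusters driven by two shared signals,
# so the FC matrices of two noisy realizations should be very similar
rng = np.random.default_rng(0)
driver = rng.standard_normal((2, 500))
clean = np.vstack([driver[0]] * 3 + [driver[1]] * 3)
sim = fc(clean + 0.5 * rng.standard_normal(clean.shape))
emp = fc(clean + 0.5 * rng.standard_normal(clean.shape))
score = matrix_correlation(sim, emp)
```

In the evaluation function above, this score is averaged over all subjects' empirical FC matrices.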
"},{"location":"examples/example-1.2.1-brain-exploration-postprocessing/#set-up-model","title":"Set up model","text":""},{"location":"examples/example-1.2.1-brain-exploration-postprocessing/#define-evaluation-function","title":"Define evaluation function","text":""},{"location":"examples/example-1.2.1-brain-exploration-postprocessing/#set-up-parameter-exploration","title":"Set up parameter exploration","text":""},{"location":"examples/example-1.2.1-brain-exploration-postprocessing/#load-data","title":"Load data","text":""},{"location":"examples/example-1.2.1-brain-exploration-postprocessing/#plot","title":"Plot","text":""},{"location":"examples/example-1.3-aln-bifurcation-diagram/","title":"Example 1.3 aln bifurcation diagram","text":"
# change into the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-1] == \"examples\":\n os.chdir('..')\n
model = ALNModel()\nmodel.params['dt'] = 0.1 # Integration time step, ms\nmodel.params['duration'] = 20 * 1000 # Simulation time, ms\n\nmodel.params['save_dt'] = 10.0 # 10 ms sampling steps for saving data, should be multiple of dt\nmodel.params[\"tauA\"] = 600.0 # Adaptation timescale, ms\n
The aln model has a region of bistability in which two states are stable at the same time: the low-activity down-state and the high-activity up-state. We can find these states by constructing a stimulus that uncovers the bistable nature of the system: initially, we apply a negative push to make sure the system is in the down-state. We then relax this stimulus slowly and wait for the system to settle. Next, we apply a sharp push to reach the up-state and release the stimulus slowly back again. A difference between the two states after the stimulus has relaxed back to zero indicates bistability.
# we place the system in the bistable region\nmodel.params['mue_ext_mean'] = 2.5\nmodel.params['mui_ext_mean'] = 2.5\n\n# construct a stimulus\nrect_stimulus = stim.RectifiedInput(amplitude=0.2).to_model(model)\nmodel.params['ext_exc_current'] = rect_stimulus * 5.0 \n\nmodel.run()\n
Let's construct a rather lengthy evaluation function which does exactly that, for every parameter configuration that we want to explore. We will also measure other things like the dominant frequency and amplitude of oscillations and the maximum rate of the excitatory population.
def evaluateSimulation(traj):\n # get the model from the trajectory using `search.getModelFromTraj(traj)`\n model = search.getModelFromTraj(traj)\n # initiate the model with random initial conditions\n model.randomICs()\n defaultDuration = model.params['duration']\n\n # -------- stage wise simulation --------\n\n # Stage 3: full and final simulation\n # --------------------------------------- \n model.params['duration'] = defaultDuration\n\n rect_stimulus = stim.RectifiedInput(amplitude=0.2).to_model(model)\n model.params['ext_exc_current'] = rect_stimulus * 5.0 \n\n model.run()\n\n # up down difference \n state_length = 2000\n last_state = (model.t > defaultDuration - state_length)\n down_window = (defaultDuration/2-state_length<model.t) & (model.t<defaultDuration/2) # time period in ms where we expect the down-state\n up_window = (defaultDuration-state_length<model.t) & (model.t<defaultDuration) # and up state\n up_state_rate = np.mean(model.output[:, up_window], axis=1)\n down_state_rate = np.mean(model.output[:, down_window], axis=1)\n up_down_difference = np.max(up_state_rate - down_state_rate)\n\n # check rates!\n max_amp_output = np.max(\n np.max(model.output[:, up_window], axis=1) \n - np.min(model.output[:, up_window], axis=1)\n )\n max_output = np.max(model.output[:, up_window])\n\n model_frs, model_pwrs = func.getMeanPowerSpectrum(model.output, \n dt=model.params.dt, \n maxfr=40, \n spectrum_windowsize=10)\n max_power = np.max(model_pwrs) \n\n model_frs, model_pwrs = func.getMeanPowerSpectrum(model.output[:, up_window], dt=model.params.dt, maxfr=40, spectrum_windowsize=5)\n domfr = model_frs[np.argmax(model_pwrs)] \n\n result = {\n \"end\" : 3,\n \"max_output\": max_output, \n \"max_amp_output\" : max_amp_output,\n \"max_power\" : max_power,\n #\"model_pwrs\" : model_pwrs,\n #\"output\": model.output[:, ::int(model.params['save_dt']/model.params['dt'])],\n \"domfr\" : domfr,\n \"up_down_difference\" : up_down_difference\n }\n\n search.saveToPypet(result, 
traj)\n return \n
Let's now define the parameter space over which we want to search. We apply a grid search over the mean external input parameters to the excitatory and the inhibitory population, mue_ext_mean and mui_ext_mean, and do this for two values of the spike-frequency adaptation strength \(b\): once without and once with adaptation.
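The size of such a grid grows quickly. The exact resolution used here is an assumption, but a 41 x 41 grid over the two inputs, crossed with two adaptation strengths, yields exactly the 3362 parameter configurations that BoxSearch reports in the log output:

```python
import numpy as np

# Hypothetical grid matching the logged configuration count:
# 41 values per input axis, two adaptation strengths b.
mue = np.linspace(0.0, 4.0, 41)  # input to the excitatory population
mui = np.linspace(0.0, 4.0, 41)  # input to the inhibitory population
b = [0.0, 20.0]                  # without / with adaptation (example values)

n_configurations = len(mue) * len(mui) * len(b)  # 41 * 41 * 2 = 3362
```

Doubling the resolution per axis would quadruple the run count, so it pays to keep the stagewise early-exit logic from the evaluation function.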
\nMainProcess pypet.storageservice.HDF5StorageService INFO I will use the hdf5 file `./data/hdf/example-1.3-aln-bifurcation-diagram.hdf`.\nMainProcess pypet.environment.Environment INFO Environment initialized.\nMainProcess root INFO Number of parameter configurations: 3362\nMainProcess root INFO BoxSearch: Environment initialized.\n\n
search.run()\n
\nMainProcess pypet.environment.Environment INFO I am preparing the Trajectory for the experiment and initialise the store.\nMainProcess pypet.environment.Environment INFO Initialising the storage for the trajectory.\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-06-19-01H-23M-48S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-06-19-01H-23M-48S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nSTARTING runs of trajectory\n`results-2021-06-19-01H-23M-48S`.\n************************************************************\n\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-06-19-01H-23M-48S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-06-19-01H-23M-48S`.\nMainProcess pypet.environment.Environment INFO Starting multiprocessing with at most 8 processes running at the same time.\nMainProcess pypet INFO PROGRESS: Finished 0/3362 runs [ ] 0.0%\nMainProcess pypet INFO PROGRESS: Finished 169/3362 runs [= ] 5.0%, remaining: 0:01:27\nMainProcess pypet INFO PROGRESS: Finished 337/3362 runs [== ] 10.0%, remaining: 0:01:24\nMainProcess pypet INFO PROGRESS: Finished 505/3362 runs [=== ] 15.0%, remaining: 0:01:27\nMainProcess pypet INFO PROGRESS: Finished 673/3362 runs [==== ] 20.0%, remaining: 0:01:26\nMainProcess pypet INFO PROGRESS: Finished 841/3362 runs [===== ] 25.0%, remaining: 0:01:26\nMainProcess pypet INFO PROGRESS: Finished 1009/3362 runs [====== ] 30.0%, remaining: 0:01:24\nMainProcess pypet INFO PROGRESS: Finished 1177/3362 runs [======= ] 35.0%, remaining: 0:01:19\nMainProcess pypet INFO PROGRESS: Finished 1345/3362 runs [======== ] 40.0%, remaining: 0:01:15\nMainProcess pypet INFO PROGRESS: Finished 1513/3362 runs 
[========= ] 45.0%, remaining: 0:01:10\nMainProcess pypet INFO PROGRESS: Finished 1681/3362 runs [========== ] 50.0%, remaining: 0:01:05\nMainProcess pypet INFO PROGRESS: Finished 1850/3362 runs [=========== ] 55.0%, remaining: 0:00:59\nMainProcess pypet INFO PROGRESS: Finished 2018/3362 runs [============ ] 60.0%, remaining: 0:00:55\nMainProcess pypet INFO PROGRESS: Finished 2186/3362 runs [============= ] 65.0%, remaining: 0:00:49\nMainProcess pypet INFO PROGRESS: Finished 2354/3362 runs [============== ] 70.0%, remaining: 0:00:42\nMainProcess pypet INFO PROGRESS: Finished 2522/3362 runs [=============== ] 75.0%, remaining: 0:00:36\nMainProcess pypet INFO PROGRESS: Finished 2690/3362 runs [================ ] 80.0%, remaining: 0:00:29\nMainProcess pypet INFO PROGRESS: Finished 2858/3362 runs [================= ] 85.0%, remaining: 0:00:22\nMainProcess pypet INFO PROGRESS: Finished 3026/3362 runs [================== ] 90.0%, remaining: 0:00:15\nMainProcess pypet INFO PROGRESS: Finished 3194/3362 runs [=================== ] 95.0%, remaining: 0:00:07\nMainProcess pypet INFO PROGRESS: Finished 3362/3362 runs [====================]100.0%\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-06-19-01H-23M-48S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-06-19-01H-23M-48S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nFINISHED all runs of trajectory\n`results-2021-06-19-01H-23M-48S`.\n************************************************************\n\nMainProcess pypet.environment.Environment INFO \n************************************************************\nSTARTING FINAL STORING of trajectory\n`results-2021-06-19-01H-23M-48S`\n************************************************************\n\nMainProcess pypet.storageservice.HDF5StorageService INFO Start storing 
Trajectory `results-2021-06-19-01H-23M-48S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `config`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished storing Trajectory `results-2021-06-19-01H-23M-48S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nFINISHED FINAL STORING of trajectory\n`results-2021-06-19-01H-23M-48S`.\n************************************************************\n\nMainProcess pypet.environment.Environment INFO All runs of trajectory `results-2021-06-19-01H-23M-48S` were completed successfully.\n\n
search.loadResults(all=False)\n
\nMainProcess root INFO Loading results from ./data/hdf/example-1.3-aln-bifurcation-diagram.hdf\nMainProcess root INFO Analyzing trajectory results-2021-06-19-01H-23M-48S\nMainProcess pypet.storageservice.HDF5StorageService INFO I will use the hdf5 file `./data/hdf/example-1.3-aln-bifurcation-diagram.hdf`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Loading trajectory `results-2021-06-19-01H-23M-48S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Loading branch `config` in mode `2`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Loading branch `parameters` in mode `2`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Loading branch `results` in mode `1`.\nMainProcess root INFO Creating `dfResults` dataframe ...\nMainProcess root INFO Aggregating results to `dfResults` ...\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3362/3362 [00:22<00:00, 152.47it/s]\nMainProcess root INFO All results loaded.\n\n
Let's draw the bifurcation diagrams. We will use a white contour for oscillatory areas (measured by max_amp_output) and a green dashed line for the bistable region (measured by up_down_difference). We can use the function explorationUtils.plotExplorationResults() for this.
"},{"location":"examples/example-1.3-aln-bifurcation-diagram/#bifurcation-diagram-of-the-aln-model","title":"Bifurcation diagram of the aln model","text":"
In this notebook, we will discover how easy it is to draw bifurcation diagrams in neurolib using its powerful BoxSearch class.
Bifurcation diagrams are an important tool for understanding a dynamical system, be it a single neuron model or a whole-brain network. They show how a system behaves when certain parameters of the model are changed: for example, whether the system transitions into an oscillation, or whether it remains in a fixed point (of sustained constant activity).
We will use this to draw a map of the aln model: Since the aln model consists of two populations of Adex neurons, we will change its inputs to the excitatory and to the inhibitory population independently and do so for two different values of spike-frequency adaptation strength \\(b\\). We will measure the activity of the system and identify regions of oscillatory activity and discover bistable states, in which the system can be in two different stable states for the same set of parameters.
# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-1] == \"examples\":\n os.chdir('..')\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2 \n
try:\n import matplotlib.pyplot as plt\nexcept ImportError:\n import sys\n !{sys.executable} -m pip install matplotlib seaborn\n import matplotlib.pyplot as plt\n\nimport numpy as np\nimport logging\n\nfrom neurolib.utils.parameterSpace import ParameterSpace\nfrom neurolib.optimize.evolution import Evolution\n\nimport neurolib.optimize.evolution.evolutionaryUtils as eu\nimport neurolib.utils.functions as func\n\ndef optimize_me(traj):\n ind = evolution.getIndividualFromTraj(traj)\n logging.info(\"Hello, I am {}\".format(ind.id))\n logging.info(\"You can also call me {}, or simply ({:.2}, {:.2}).\".format(ind.params, ind.x, ind.y))\n\n # let's make a circle\n computation_result = abs((ind.x**2 + ind.y**2) - 1)\n # DEAP wants a tuple as fitness, ALWAYS!\n fitness_tuple = (computation_result ,)\n\n # we also require a dictionary with at least a single result for storing the results in the hdf\n result_dict = {}\n\n return fitness_tuple, result_dict\n\n\npars = ParameterSpace(['x', 'y'], [[-5.0, 5.0], [-5.0, 5.0]])\nevolution = Evolution(optimize_me, pars, weightList = [-1.0],\n POP_INIT_SIZE=10, POP_SIZE = 6, NGEN=4, filename=\"example-2.0.hdf\")\n# info: choose POP_INIT_SIZE=100, POP_SIZE = 50, NGEN=10 for real exploration, \n# values here are low for testing: POP_INIT_SIZE=10, POP_SIZE = 6, NGEN=4\n\nevolution.run(verbose = True)\n
"},{"location":"examples/example-2-evolutionary-optimization-minimal/#simple-example-of-the-evolutionary-optimization-framework","title":"Simple example of the evolutionary optimization framework","text":"
This notebook provides a simple example for the use of the evolutionary optimization framework built into the library. Under the hood, the implementation of the evolutionary algorithm is powered by deap, and pypet takes care of the parallelization and storage of the simulation data for us.
Here we demonstrate how to fit parameters of the evaluation function optimize_me, which simply computes the distance of the parameters to the unit circle and returns it as the fitness_tuple that DEAP expects.
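Stripped of the trajectory plumbing, the fitness landscape is just the absolute deviation of the squared radius from 1, so any point on the unit circle is an optimum:

```python
import numpy as np

def circle_fitness(x, y):
    """Absolute deviation of (x, y) from the unit circle; 0 is optimal."""
    return abs((x**2 + y**2) - 1)

# points on the circle score 0, points off it score their deviation
on_circle = circle_fitness(np.cos(0.3), np.sin(0.3))
off_circle = circle_fitness(2.0, 0.0)  # radius 2 -> |4 - 1| = 3
```

Because the optimum is a ring rather than a single point, a well-mixed final population should spread out along the circle instead of collapsing onto one coordinate.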
"},{"location":"examples/example-2.0.1-save-and-load-evolution/","title":"Example 2.0.1 save and load evolution","text":"
In this example, we will demonstrate how to save an evolutionary optimization on one machine or instance and load the results on another. This is useful when the optimization is carried out on a different computer than the one used for analyzing the results.
# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-2] == \"neurolib\":\n os.chdir('..')\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2 \n
\nMainProcess root INFO Saving evolution to saved_evolution.dill\n\n
Here, we pretend we're on a completely new machine. We need to instantiate the Evolution class in order to fill it with the data from the previous optimization. For this, we create a \"mock\" evolution with some placeholder parameters and then load the dill file to replace the mock values with the real ones.
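The save-then-rehydrate pattern itself is plain object serialization. neurolib uses dill for this; the sketch below uses the standard-library pickle and a hypothetical MockState class purely to illustrate the round trip, not the Evolution-specific method names:

```python
import os
import pickle
import tempfile

class MockState:
    """Stand-in for the evolution state (population, history, ...)."""
    def __init__(self, pop):
        self.pop = pop

# "machine A": save the finished optimization to a file
state = MockState(pop=[{"x": 0.74, "y": 0.78}])
path = os.path.join(tempfile.gettempdir(), "saved_evolution.pkl")
with open(path, "wb") as f:
    pickle.dump(state, f)

# "machine B": load the file and get an equivalent object back
with open(path, "rb") as f:
    restored = pickle.load(f)
```

dill extends this idea to objects that pickle cannot handle, such as the lambda functions and DEAP toolbox entries an Evolution carries, which is why the mock Evolution must be constructed first: its class definitions need to exist before the stored state is poured into it.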
\nMainProcess root INFO weightList not set, assuming single fitness value to be maximized.\nMainProcess root INFO Trajectory Name: results-2021-02-15-12H-13M-39S\nMainProcess root INFO Storing data to: ./data/hdf/evolution.hdf\nMainProcess root INFO Trajectory Name: results-2021-02-15-12H-13M-39S\nMainProcess root INFO Number of cores: 8\nMainProcess pypet.storageservice.HDF5StorageService INFO I will use the hdf5 file `./data/hdf/evolution.hdf`.\nMainProcess pypet.environment.Environment INFO Environment initialized.\nMainProcess root INFO Evolution: Using algorithm: adaptive\n/Users/caglar/anaconda/lib/python3.7/site-packages/deap/creator.py:141: RuntimeWarning: A class named 'FitnessMulti' has already been created and it will be overwritten. Consider deleting previous creation of that class or rename it.\n RuntimeWarning)\n/Users/caglar/anaconda/lib/python3.7/site-packages/deap/creator.py:141: RuntimeWarning: A class named 'Individual' has already been created and it will be overwritten. Consider deleting previous creation of that class or rename it.\n RuntimeWarning)\nMainProcess root INFO Evolution: Individual generation: <function randomParametersAdaptive at 0x7fd122dfa950>\nMainProcess root INFO Evolution: Mating operator: <function cxBlend at 0x7fd122dcdb70>\nMainProcess root INFO Evolution: Mutation operator: <function gaussianAdaptiveMutation_nStepSizes at 0x7fd122dfad90>\nMainProcess root INFO Evolution: Parent selection: <function selRank at 0x7fd122dfaae8>\nMainProcess root INFO Evolution: Selection operator: <function selBest_multiObj at 0x7fd122dfab70>\n\n
Now, we should be able to do everything we want with the new evolution object.
We can also load the hdf file in which all simulation output was stored (\"random_output\" in the evaluation function above).
evolution_new.loadResults()\n
\nMainProcess root INFO Loading results from ./data/hdf/example-2.0.1.hdf\nMainProcess root INFO Analyzing trajectory results-2021-02-15-12H-13M-24S\nMainProcess pypet.storageservice.HDF5StorageService INFO I will use the hdf5 file `./data/hdf/example-2.0.1.hdf`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Loading trajectory `results-2021-02-15-12H-13M-24S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Loading branch `config` in mode `2`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Loading branch `derived_parameters` in mode `1`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Loading branch `parameters` in mode `2`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Loading branch `results` in mode `1`.\n\n
We can load the output from the hdf file by passing the argument outputs=True to the dfEvolution() method:
\n> Simulation parameters\nHDF file storage: ./data/hdf/example-2.0.1.hdf\nTrajectory Name: results-2021-02-15-12H-13M-24S\nDuration of evaluating initial population 0:00:01.093011\nDuration of evolution 0:00:08.117928\nEval function: <function optimize_me at 0x7fd124ee4840>\nParameter space: {'x': [-5.0, 5.0], 'y': [-5.0, 5.0]}\n> Evolution parameters\nNumber of generations: 4\nInitial population size: 10\nPopulation size: 6\n> Evolutionary operators\nMating operator: <function cxBlend at 0x7fd122dcdb70>\nMating paramter: {'alpha': 0.5}\nSelection operator: <function selBest_multiObj at 0x7fd122dfab70>\nSelection paramter: {}\nParent selection operator: <function selRank at 0x7fd122dfaae8>\nComments: no comments\n--- Info summary ---\nValid: 6\nMean score (weighted fitness): -0.93\nParameter distribution (Generation 3):\nx: mean: 0.4360, std: 1.0159\ny: mean: 0.3560, std: 0.5401\n--------------------\nBest 5 individuals:\nPrinting 5 individuals\nIndividual 0\n Fitness values: 0.16\n Score: -0.16\n Weighted fitness: -0.16\n Stats mean 0.16 std 0.00 min 0.16 max 0.16\n model.params[\"x\"] = 0.74\n model.params[\"y\"] = 0.78\nIndividual 1\n Fitness values: 0.4\n Score: -0.4\n Weighted fitness: -0.4\n Stats mean 0.40 std 0.00 min 0.40 max 0.40\n model.params[\"x\"] = 0.76\n model.params[\"y\"] = 0.17\nIndividual 2\n Fitness values: 0.47\n Score: -0.47\n Weighted fitness: -0.47\n Stats mean 0.47 std 0.00 min 0.47 max 0.47\n model.params[\"x\"] = 0.61\n model.params[\"y\"] = -0.41\nIndividual 3\n Fitness values: 1.19\n Score: -1.19\n Weighted fitness: -1.19\n Stats mean 1.19 std 0.00 min 1.19 max 1.19\n model.params[\"x\"] = 1.48\n model.params[\"y\"] = 0.03\nIndividual 4\n Fitness values: 1.22\n Score: -1.22\n Weighted fitness: -1.22\n Stats mean 1.22 std 0.00 min 1.22 max 1.22\n model.params[\"x\"] = 0.78\n model.params[\"y\"] = 1.27\n--------------------\n\n
\n/Users/caglar/anaconda/lib/python3.7/site-packages/neurolib/optimize/evolution/evolutionaryUtils.py:212: UserWarning: This figure includes Axes that are not compatible with tight_layout, so results might be incorrect.\n plt.tight_layout()\n\n
\nMainProcess root INFO Saving plot to ./data/figures/results-2021-02-15-12H-13M-24S_hist_3.png\n\n
\nThere are 6 valid individuals\nMean score across population: -0.93\n\n
\n<Figure size 432x288 with 0 Axes>\n
"},{"location":"examples/example-2.0.1-save-and-load-evolution/#saving-and-loading-evolution","title":"Saving and loading Evolution","text":""},{"location":"examples/example-2.0.1-save-and-load-evolution/#save-evolution","title":"Save evolution","text":""},{"location":"examples/example-2.0.1-save-and-load-evolution/#load-evolution","title":"Load evolution","text":""},{"location":"examples/example-2.1-evolutionary-optimization-aln/","title":"Example 2.1 evolutionary optimization aln","text":"
# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-1] == \"examples\":\n os.chdir('..')\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2\n
try:\n import matplotlib.pyplot as plt\nexcept ImportError:\n import sys\n !{sys.executable} -m pip install matplotlib seaborn\n import matplotlib.pyplot as plt\n\nimport numpy as np\nimport logging \n\nfrom neurolib.models.aln import ALNModel\nfrom neurolib.utils.parameterSpace import ParameterSpace\nfrom neurolib.optimize.evolution import Evolution\nimport neurolib.utils.functions as func\n\nimport neurolib.optimize.evolution.deapUtils as deapUtils\n\n# a nice color map\nplt.rcParams['image.cmap'] = 'plasma'\n
aln = ALNModel()\n
# Here we define our evaluation function. This function will\n# be called repeatedly and perform a single simulation. The object\n# that is passed to the function, `traj`, is a pypet trajectory\n# and serves as a \"bridge\" to load the parameter set of this \n# particular trajectory and execute a run.\n# Then the power spectrum of the run is computed and its maximum\n# is fitted to the target of 25 Hz peak frequency.\ndef evaluateSimulation(traj):\n # The trajectory id is provided as an attribute\n rid = traj.id\n logging.info(\"Running run id {}\".format(rid))\n # this function provides a model with the particular\n # parameter set for this given run\n model = evolution.getModelFromTraj(traj)\n # parameters can also be modified after loading\n model.params['dt'] = 0.1\n model.params['duration'] = 2*1000.\n # and the simulation is run\n model.run()\n\n # compute power spectrum\n frs, powers = func.getPowerSpectrum(model.rates_exc[:, -int(1000/model.params['dt']):], dt=model.params['dt'])\n # find the peak frequency\n domfr = frs[np.argmax(powers)] \n # fitness evaluation: let's try to find a 25 Hz oscillation\n fitness = abs(domfr - 25) \n # deap needs a fitness *tuple*!\n fitness_tuple = ()\n # more fitness values could be added\n fitness_tuple += (fitness, )\n # we need to return the fitness tuple and the outputs of the model\n return fitness_tuple, model.outputs\n
The evolutionary algorithm tries to find the optimal parameter set that will maximize (or minimize) a certain fitness function.
This is achieved by seeding an initial population of size POP_INIT_SIZE that is randomly initialized in the parameter space parameterSpace. INIT: After simulating the initial population using evalFunction, only a subset of the individuals is kept, its size defined by POP_SIZE.
START: Members of the remaining population are chosen based on their fitness (using rank selection) to mate and produce offspring. These offspring have parameters drawn from a normal distribution centered on the mean of the two parents' parameters. The offspring population is then evaluated and the process loops back to START.
This process is repeated for NGEN generations.
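The INIT/START loop can be sketched without DEAP. The rank selection, mean-of-parents recombination, and Gaussian mutation below are simplified stand-ins for neurolib's adaptive operators, applied to the same unit-circle toy fitness:

```python
import numpy as np

rng = np.random.default_rng(42)
fitness = lambda p: abs(p[0]**2 + p[1]**2 - 1)  # minimize: unit circle

POP_INIT_SIZE, POP_SIZE, NGEN = 20, 10, 15
bounds = np.array([[-5.0, 5.0], [-5.0, 5.0]])

# INIT: random initial population, keep only the POP_SIZE fittest
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(POP_INIT_SIZE, 2))
pop = sorted(pop, key=fitness)[:POP_SIZE]
init_best = fitness(min(pop, key=fitness))

for gen in range(NGEN):
    # START: rank-based parent choice (better rank -> higher probability)
    ranks = np.arange(len(pop), 0, -1, dtype=float)
    probs = ranks / ranks.sum()
    offspring = []
    for _ in range(POP_SIZE):
        i, j = rng.choice(len(pop), size=2, replace=False, p=probs)
        # child centered on the parents' mean, plus Gaussian mutation
        child = (pop[i] + pop[j]) / 2 + rng.normal(0.0, 0.2, size=2)
        offspring.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
    # evaluate offspring, keep the best POP_SIZE of parents + offspring
    pop = sorted(list(pop) + offspring, key=fitness)[:POP_SIZE]

best = min(pop, key=fitness)
```

Because survivors are always the fittest of parents plus offspring, the best fitness can never get worse from one generation to the next; this elitism is also why the first few generations usually dominate the improvement.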
# Here we define the parameters and the range in which we want\n# to perform the evolutionary optimization.\n# Create a `ParameterSpace` \npars = ParameterSpace(['mue_ext_mean', 'mui_ext_mean'], [[0.0, 4.0], [0.0, 4.0]])\n# Initialize evolution with\n# :evaluateSimulation: The function that returns a fitness, \n# :pars: The parameter space and its boundaries to optimize\n# :model: The model that should be passed to the evaluation function\n# :weightList: A list of optimization weights for the `fitness_tuple`,\n# positive values will lead to a maximization, negative \n# values to a minimization. The length of this list must\n# be the same as the length of the `fitness_tuple`.\n# \n# :POP_INIT_SIZE: The size of the initial population that will be \n# randomly sampled in the parameter space `pars`.\n# Should be higher than POP_SIZE. 50-200 might be a good\n# range to start experimenting with.\n# :POP_SIZE: Size of the population that should evolve. Must be an\n# even number. 20-100 might be a good range to start with.\n# :NGEN: Number of generations to simulate the evolution for. A good\n# range to start with might be 20-100.\n\nweightList = [-1.0]\n\nevolution = Evolution(evalFunction = evaluateSimulation, parameterSpace = pars, model = aln, weightList = [-1.0],\n POP_INIT_SIZE=4, POP_SIZE = 4, NGEN=2, filename=\"example-2.1.hdf\")\n# info: choose POP_INIT_SIZE=50, POP_SIZE = 20, NGEN=20 for real exploration, \n# values are lower here for testing\n
# Enabling `verbose = True` will print statistics and generate plots \n# of the current population for each generation.\nevolution.run(verbose = False)\n
# the current population is always accessible via\npop = evolution.pop\n# we can also use the functions registered to deap\n# to select the best of the population:\nbest_10 = evolution.toolbox.selBest(pop, k=10)\n# Remember, we performed a minimization so a fitness\n# of 0 is optimal\nprint(\"Best individual\", best_10[0], \"fitness\", best_10[0].fitness)\n
We can look at the current population by calling evolution.dfPop() which returns a pandas dataframe with the parameters of each individual, its id, generation of birth, its outputs, and the fitness (called \"f0\" here).
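Since evolution.dfPop() returns a plain pandas dataframe, the usual pandas tools apply. A self-contained sketch on a toy stand-in dataframe (the parameter values mirror the example output below; column names other than "f0" are assumptions based on the description above):

```python
import pandas as pd

# toy stand-in for the dataframe returned by evolution.dfPop()
df = pd.DataFrame({
    "id": [0, 1, 2, 3],
    "gen": [19, 19, 18, 19],
    "mue_ext_mean": [1.18, 1.11, 0.91, 1.19],
    "mui_ext_mean": [0.30, 0.24, 0.08, 0.36],
    "f0": [0.0, 0.0, 0.0, 1.0],
})

# we performed a minimization, so the smallest f0 values are the best
best = df.sort_values("f0").head(3)
```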
# a simple overview of the current population (in this case the \n# last one) is given via the `info()` method. This provides a \n# histogram of the score (= mean fitness) and scatterplots\n# and density estimates across orthogonal parameter space cross \n# sections.\nevolution.info(plot=True)\n
\n> Simulation parameters\nHDF file storage: ./data/hdf/example-2.1.hdf\nTrajectory Name: results-2020-07-02-14H-20M-45S\nDuration of evaluating initial population 0:00:29.656935\nDuration of evolution 0:03:50.565418\nModel: <class 'neurolib.models.aln.model.ALNModel'>\nModel name: aln\nEval function: <function evaluateSimulation at 0x10ba8cae8>\nParameter space: {'mue_ext_mean': [0.0, 4.0], 'mui_ext_mean': [0.0, 4.0]}\n> Evolution parameters\nNumber of generations: 20\nInitial population size: 50\nPopulation size: 20\n> Evolutionary operators\nMating operator: <function cxBlend at 0x11dcaf510>\nMating paramter: {'alpha': 0.5}\nSelection operator: <function selBest_multiObj at 0x11f4d9d08>\nSelection paramter: {}\nParent selection operator: <function selRank at 0x11f4d9c80>\nComments: no comments\n--- Info summary ---\nValid: 20\nMean score (weighted fitness): -0.85\nParameter distribution (Generation 19):\nmue_ext_mean: mean: 1.0852, std: 0.1270\nmui_ext_mean: mean: 0.2200, std: 0.1042\n--------------------\nBest 5 individuals:\nPrinting 5 individuals\nIndividual 0\n Fitness values: 0.0\n Score: 0.0\n Weighted fitness: -0.0\n Stats mean 0.00 std 0.00 min 0.00 max 0.00\n model.params[\"mue_ext_mean\"] = 1.18\n model.params[\"mui_ext_mean\"] = 0.30\nIndividual 1\n Fitness values: 0.0\n Score: 0.0\n Weighted fitness: -0.0\n Stats mean 0.00 std 0.00 min 0.00 max 0.00\n model.params[\"mue_ext_mean\"] = 1.11\n model.params[\"mui_ext_mean\"] = 0.24\nIndividual 2\n Fitness values: 0.0\n Score: 0.0\n Weighted fitness: -0.0\n Stats mean 0.00 std 0.00 min 0.00 max 0.00\n model.params[\"mue_ext_mean\"] = 0.91\n model.params[\"mui_ext_mean\"] = 0.08\nIndividual 3\n Fitness values: 1.0\n Score: -1.0\n Weighted fitness: -1.0\n Stats mean 1.00 std 0.00 min 1.00 max 1.00\n model.params[\"mue_ext_mean\"] = 1.19\n model.params[\"mui_ext_mean\"] = 0.36\nIndividual 4\n Fitness values: 1.0\n Score: -1.0\n Weighted fitness: -1.0\n Stats mean 1.00 std 0.00 min 1.00 max 1.00\n 
model.params[\"mue_ext_mean\"] = 1.01\n model.params[\"mui_ext_mean\"] = 0.11\n--------------------\n\n
\nMainProcess root INFO Saving plot to ./data/figures/results-2020-07-02-14H-20M-45S_hist_19.png\n\n
\nThere are 20 valid individuals\nMean score across population: -0.85\n\n
\n<Figure size 432x288 with 0 Axes>\n
neurolib keeps track of all individuals during the evolution. You can see all individuals from each generation by calling evolution.history. The object evolution.tree provides a network description of the genealogy of the evolution: each individual (indexed by its unique .id) is connected to its parents. We can use this object in combination with the network library networkx to plot the tree:
# we put this into a try except block since we don't do testing on networkx\ntry:\n import matplotlib.pyplot as plt\n import networkx as nx\n from networkx.drawing.nx_pydot import graphviz_layout\n\n G = nx.DiGraph(evolution.tree)\n G = G.reverse() # Make the graph top-down\n pos = graphviz_layout(G, prog='dot')\n plt.figure(figsize=(8, 8))\n nx.draw(G, pos, node_size=50, alpha=0.5, node_color=list(evolution.id_score.values()), with_labels=False)\n plt.show()\nexcept Exception:\n print(\"It looks like networkx or pydot are not installed\")\n
\n/Users/caglar/anaconda/lib/python3.7/site-packages/networkx/drawing/nx_pylab.py:579: MatplotlibDeprecationWarning: \nThe iterable function was deprecated in Matplotlib 3.1 and will be removed in 3.3. Use np.iterable instead.\n if not cb.iterable(width):\n/Users/caglar/anaconda/lib/python3.7/site-packages/networkx/drawing/nx_pylab.py:676: MatplotlibDeprecationWarning: \nThe iterable function was deprecated in Matplotlib 3.1 and will be removed in 3.3. Use np.iterable instead.\n if cb.iterable(node_size): # many node sizes\n\n
"},{"location":"examples/example-2.1-evolutionary-optimization-aln/#evolutionary-parameter-search-with-a-single-neural-mass-model","title":"Evolutionary parameter search with a single neural mass model","text":"
This notebook provides a simple example for the use of the evolutionary optimization framework built into the library. Under the hood, the evolutionary algorithm is powered by deap, while pypet takes care of the parallelization and storage of the simulation data for us.
We want to optimize for a simple target: finding a parameter configuration that produces activity whose power spectrum peaks at 25 Hz.
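The spectral target can be checked on any simulated rate trace, for example with scipy's Welch estimator. This is a sketch on a toy signal; the actual evaluation function (not shown in this excerpt) may use neurolib's own spectrum utilities instead.

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0                                     # sampling rate in Hz (toy value)
t = np.arange(0, 4.0, 1.0 / fs)
rate = 50 + 10 * np.sin(2 * np.pi * 25.0 * t)   # toy trace oscillating at 25 Hz

# Welch periodogram; 1 s segments give 1 Hz frequency resolution
f, Pxx = welch(rate, fs=fs, nperseg=int(fs))
mask = f > 0                                    # ignore the DC bin
peak_freq = f[mask][np.argmax(Pxx[mask])]
```

A fitness for the evolution could then be, e.g., the negative distance `abs(peak_freq - 25.0)` to be maximized.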
In this notebook, we will also plot the evolutionary genealogy tree, to visualize how the population evolves over generations.
"},{"location":"examples/example-2.1-evolutionary-optimization-aln/#model-definition","title":"Model definition","text":""},{"location":"examples/example-2.1-evolutionary-optimization-aln/#initialize-and-run-evolution","title":"Initialize and run evolution","text":""},{"location":"examples/example-2.1-evolutionary-optimization-aln/#analysis","title":"Analysis","text":""},{"location":"examples/example-2.1-evolutionary-optimization-aln/#population","title":"Population","text":""},{"location":"examples/example-2.1-evolutionary-optimization-aln/#plotting-genealogy-tree","title":"Plotting genealogy tree","text":""},{"location":"examples/example-2.2-evolution-brain-network-aln-resting-state-fit/","title":"Example 2.2 evolution brain network aln resting state fit","text":"
# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-1] == \"examples\":\n os.chdir('..')\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2 \n
try:\n import matplotlib.pyplot as plt\nexcept ImportError:\n import sys\n !{sys.executable} -m pip install matplotlib seaborn\n import matplotlib.pyplot as plt\n\nimport numpy as np\nimport logging \n\nfrom neurolib.models.aln import ALNModel\nfrom neurolib.utils.parameterSpace import ParameterSpace\nfrom neurolib.optimize.evolution import Evolution\nimport neurolib.utils.functions as func\n\nfrom neurolib.utils.loadData import Dataset\nds = Dataset(\"hcp\")\n\n# a nice color map\nplt.rcParams['image.cmap'] = 'plasma'\n
We create a brain network model using the empirical dataset ds:
model = ALNModel(Cmat = ds.Cmat, Dmat = ds.Dmat) # simulates the whole-brain model in 10s chunks by default if bold == True\n# Resting state fits\nmodel.params['mue_ext_mean'] = 1.57\nmodel.params['mui_ext_mean'] = 1.6\nmodel.params['sigma_ou'] = 0.09\nmodel.params['b'] = 5.0\nmodel.params['signalV'] = 2\nmodel.params['dt'] = 0.2\nmodel.params['duration'] = 0.2 * 60 * 1000 #ms\n# testing: aln.params['duration'] = 0.2 * 60 * 1000 #ms\n# real: aln.params['duration'] = 1.0 * 60 * 1000 #ms\n
Our evaluation function does the following: first, it simulates the model for a short time to check whether there is sufficient activity. This speeds up the evolution considerably, since large regions of the state space show almost no neuronal activity. Only then do we simulate the model for the full duration and compute the fitness using the empirical dataset.
def evaluateSimulation(traj):\n rid = traj.id\n model = evolution.getModelFromTraj(traj)\n defaultDuration = model.params['duration']\n invalid_result = (np.nan,) * len(ds.BOLDs)\n\n # -------- stage-wise simulation --------\n\n # Stage 1: simulate for a few seconds to see if there is any activity\n # ---------------------------------------\n model.params['duration'] = 3*1000.\n model.run()\n\n # check if stage 1 was successful\n if np.max(model.output[:, model.t > 500]) > 160 or np.max(model.output[:, model.t > 500]) < 10:\n return invalid_result, {}\n\n # Stage 2: full and final simulation\n # ---------------------------------------\n model.params['duration'] = defaultDuration\n model.run(chunkwise=True, bold=True)\n\n # -------- fitness evaluation --------\n\n # one FC-correlation score per empirical subject\n scores = []\n for fc in ds.FCs:\n fc_score = func.matrix_correlation(func.fc(model.BOLD.BOLD[:, 5:]), fc)\n scores.append(fc_score)\n\n fitness_tuple = tuple(scores)\n return fitness_tuple, {}\n
We specify the parameter space that we want to search.
Note that we chose algorithm='nsga2' when creating the Evolution. This uses the multi-objective optimization algorithm by Deb et al. 2002. Although we have only one objective here (namely the FC fit), we could in principle add more, such as a fit to the FCD matrix. For this, we would have to add these values to the fitness tuple in the evaluation function above and add more weights in the definition of the Evolution. Positive weights cause an objective to be maximized, negative ones to be minimized. Please refer to the DEAP documentation for more information.
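How a fitness tuple and a weightList combine into a single weighted score can be illustrated with a dot product. This is a simplified sketch of the weighting logic only; the actual score combination and selection are handled inside neurolib/DEAP.

```python
import numpy as np

def weighted_score(fitness_tuple, weightList):
    # positive weight -> the objective is maximized,
    # negative weight -> the objective is minimized
    return float(np.dot(fitness_tuple, weightList))

# toy example: one maximization objective (FC fit) and one
# hypothetical minimization objective (e.g. an FCD distance)
fitness_tuple = (0.62, 1.3)
weightList = [1.0, -1.0]
score = weighted_score(fitness_tuple, weightList)
```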
We could now save the full evolution object for later analysis using evolution.saveEvolution().
The info() method gives us a useful overview of the evolution, like a summary of the evolution parameters, the statistics of the population and also scatterplots of the individuals in our search space.
"},{"location":"examples/example-2.2-evolution-brain-network-aln-resting-state-fit/#evolutionary-optimization-of-a-whole-brain-model","title":"Evolutionary optimization of a whole-brain model","text":"
This notebook provides an example for the use of the evolutionary optimization framework built into the library. Under the hood, the evolutionary algorithm is powered by deap, while pypet takes care of the parallelization and storage of the simulation data for us.
We want to optimize a whole-brain network that should produce simulated BOLD activity (fMRI data) that is similar to the empirical dataset. We measure the fitness of each simulation by computing the func.matrix_correlation of the functional connectivity func.fc(model.BOLD.BOLD) to the empirical data ds.FCs. The ones that are closest to the empirical data get a higher fitness and have a higher chance of reproducing and survival.
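The two fitness ingredients can be mimicked with plain numpy. A sketch on toy data, under the assumption that func.fc computes the region-wise correlation matrix and func.matrix_correlation the Pearson correlation of the off-diagonal entries of two FC matrices:

```python
import numpy as np

rng = np.random.default_rng(42)

def fc(ts):
    # functional connectivity: correlation matrix across regions (rows)
    return np.corrcoef(ts)

def matrix_correlation(M1, M2):
    # Pearson correlation of the upper-triangular (off-diagonal) entries
    triu = np.triu_indices_from(M1, k=1)
    return np.corrcoef(M1[triu], M2[triu])[0, 1]

# toy "simulated" and "empirical" BOLD: 5 regions, 200 samples each
sim_bold = rng.standard_normal((5, 200))
emp_bold = sim_bold + 0.5 * rng.standard_normal((5, 200))  # correlated copy

fitness = matrix_correlation(fc(sim_bold), fc(emp_bold))
```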
"},{"location":"examples/example-2.2-evolution-brain-network-aln-resting-state-fit/#analysis","title":"Analysis","text":""},{"location":"examples/example-3-meg-functional-connectivity/","title":"Example 3 meg functional connectivity","text":"
In this example we will learn how to use neurolib to simulate resting state functional connectivity of MEG recordings.
In the first part of the notebook, we will compute the frequency-specific functional connectivity matrix of an exemplary resting-state MEG recording from the YouR-Study Uhlhaas, P.J., Gajwani, R., Gross, J. et al. The Youth Mental Health Risk and Resilience Study (YouR-Study). BMC Psychiatry 17, 43 (2017).
To this end we will:
Band-Pass filter the signal
Apply the Hilbert transform to extract the signal envelope
Orthogonalize the signal envelopes of two exemplary regions
Low-Pass filter the signal envelopes
and compute the pairwise envelope correlations which yields the functional connectivity matrix.
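The pipeline above can be sketched end to end with scipy on two toy "regions" that share a common slow amplitude modulation. The notebook uses neurolib's Signal class instead, and the orthogonalization step is omitted in this toy version:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 100.0                                   # sampling rate in Hz
t = np.arange(0, 30, 1.0 / fs)
rng = np.random.default_rng(0)

# two toy regions: 10 Hz carriers sharing a common 0.1 Hz amplitude modulation
envelope_mod = 1 + 0.5 * np.sin(2 * np.pi * 0.1 * t)
x = envelope_mod * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
y = envelope_mod * np.sin(2 * np.pi * 10 * t + 1.0) + 0.1 * rng.standard_normal(t.size)

def lowpassed_envelope(sig):
    b, a = butter(4, [8, 12], btype="band", fs=fs)  # 1. band-pass (alpha)
    env = np.abs(hilbert(filtfilt(b, a, sig)))      # 2. Hilbert envelope
    # (3. orthogonalization skipped in this toy example)
    b, a = butter(4, 0.2, btype="low", fs=fs)       # 4. low-pass the envelope
    return filtfilt(b, a, env)

# 5. pairwise envelope correlation (one entry of the FC matrix)
r = np.corrcoef(lowpassed_envelope(x), lowpassed_envelope(y))[0, 1]
```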
We follow the approach presented in Hipp, J., Hawellek, D., Corbetta, M. et al., Large-scale cortical correlation structure of spontaneous oscillatory activity. Nat Neurosci 15, 884\u2013890 (2012)
In the second part of this notebook, we will use a whole-brain model to simulate brain activity and compute functional connectivity matrix of the simulated signal envelope, as was done for the empirical MEG data. The parameters of this model have been previously optimized with neurolib's evolutionary algorithms (not shown here).
Finally, we will compute the fit (Pearson correlation) of the simulated functional connectivity to the empirical MEG data, which was used as a fitting objective in a previous optimization procedure.
# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-1] == \"examples\":\n os.chdir('..')\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2 \n
import os\nimport numpy as np\nimport xarray as xr\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport ipywidgets as widgets\nfrom IPython.utils import io\nimport warnings\nwarnings.simplefilter(action='ignore', category=FutureWarning)\nimport time\nimport pandas as pd\n
\n/Users/caglar/anaconda/lib/python3.7/site-packages/pandas/compat/_optional.py:138: UserWarning: Pandas requires version '2.7.0' or newer of 'numexpr' (version '2.6.9' currently installed).\n warnings.warn(msg, UserWarning)\n\n
from neurolib.utils.signal import Signal \n\nsignal = Signal.from_file(os.path.join('examples', 'data','rs-meg.nc'))\nregion_labels = signal.data.regions.values\nnr_regions = len(region_labels)\ndisplay(signal.data)\n
Attributes: (6)\nname : rest meg\nlabel : \nsignal_type : \nunit : T\ndescription : MEG recording in AAL2 space\nprocess_steps_0 : resample to 100.0Hz
We will now filter the signal into the desired frequency band and apply the Hilbert transform to the band-pass filtered signal. This provides us with the analytic representation of the signal, from which we can extract the signal's envelope and its phase.
In the following, we plot each processing step for an example target region that you can choose using the widgets below (default: left Precentral Gyrus). Furthermore, we can also choose the frequency range in which we'd like to filter the signal (default: alpha (8-12 Hz)).
print('Select a region from the AAL2 atlas and a frequency range')\n# Select a Region \ntarget = widgets.Select(options=region_labels, value='PreCG.L', description='Regions', \n tooltips=['Description of slow', 'Description of regular', 'Description of fast'], \n layout=widgets.Layout(width='50%', height='150px'))\ndisplay(target)\n\n# Select Frequency Range\nfreq = widgets.IntRangeSlider(min=1, max=46, description='Frequency (Hz)', value=[8, 12], layout=widgets.Layout(width='80%'), \n style={'description_width': 'initial'})\ndisplay(freq)\n
\nSelect a region from the AAL2 atlas and a frequency range\n\n
# Define how many timepoints you'd like to plot\nplot_timepoints = 1000\n\n# Plot unfiltered Signal\nfig, ax = plt.subplots(2, 1, figsize=(12,8), sharex=True)\nsns.lineplot(x=signal.data.time[:plot_timepoints].values, y=signal.data.sel(regions=target.value)[:plot_timepoints].values, \n ax=ax[0], color='k', alpha=0.6)\nax[0].set_title(f'Unfiltered Signal ({target.value})');\n\n# Band Pass Filter the Signal\nsignal.filter(freq.value[0], freq.value[1], inplace=True);\n\n# Apply hilbert-transform to extract the signal envelope\ncomplex_signal = signal.hilbert_transform('complex', inplace=False)\nsignal_env = np.abs(complex_signal.data)\n\n# Plot filtered Signal and Signal Envelope\nsns.lineplot(x=signal.data.time[:plot_timepoints].values, y=signal.data.sel(regions=target.value)[:plot_timepoints].values, \n ax=ax[1], label='Bandpass-Filtered Signal')\nsns.lineplot(x=signal_env.time[:plot_timepoints].values, y=signal_env.sel(regions=target.value)[:plot_timepoints].values, \n ax=ax[1], label='Signal Envelope')\nax[1].set_title(f'Filtered Signal ({target.value})');\nax[1].legend(bbox_to_anchor=(1.2, 1),borderaxespad=0)\nsns.despine(trim=True)\n
\nSetting up band-pass filter from 8 - 12 Hz\n\nFIR filter parameters\n---------------------\nDesigning a one-pass, zero-phase, non-causal bandpass filter:\n- Windowed time-domain design (firwin) method\n- Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation\n- Lower passband edge: 8.00\n- Lower transition bandwidth: 2.00 Hz (-6 dB cutoff frequency: 7.00 Hz)\n- Upper passband edge: 12.00 Hz\n- Upper transition bandwidth: 3.00 Hz (-6 dB cutoff frequency: 13.50 Hz)\n- Filter length: 165 samples (1.650 sec)\n\n\n
print('Select a reference region for the orthogonalization')\n# Select a Region \nreferenz = widgets.Select(options=region_labels, value='PreCG.R', description='Regions',\n tooltips=['Description of slow', 'Description of regular', 'Description of fast'],\n layout=widgets.Layout(width='50%', height='150px'))\ndisplay(referenz)\n
\nSelect a reference region for the orthogonalization\n\n
exclude = list(range(40, 46)) + list(range(74, 82))\ntmp = np.delete(fc, exclude, axis=0)\nemp_fc = np.delete(tmp, exclude, axis=1)\n# Exclude regions from the list of region labels\nemp_labels = np.delete(region_labels, exclude)\n
# Let's import the neurolib\nfrom neurolib.models.wc import WCModel\nfrom neurolib.utils.loadData import Dataset\n\n# First we load the structural data set from the Human Connectome Project \nds = Dataset(\"hcp\")\n\n# We initiate the Wilson-Cowan model\nwc = WCModel(Cmat = ds.Cmat, Dmat = ds.Dmat, seed=0)\n
# Let's set the previously defined parameters\n# note: the duration here is short for testing:\nwc.params['duration'] = 10*1000 \n\n# use longer simulation for real run:\n#wc.params['duration'] = 1*60*1000 \n\nwc.params['K_gl'] = global_coupling.value\nwc.params['exc_ext'] = exc_drive.value\nwc.params['inh_ext'] = inh_drive.value\nwc.params['sigma_ou'] = noise_level.value\n# Run the model\nwc.run()\n
# Create xr DataArray from the simulated excitatory timeseries (keeping the region labels)\nsim_signal = xr.DataArray(wc.exc[:, int(1000/wc.params.dt):], dims=(\"regions\", \"time\"), coords={\"regions\": emp_labels, \"time\": wc.t[int(1000/wc.params.dt):]/1000}, \n attrs={'atlas':'AAL2'})\n\n# Initialize Figure\nfig, ax = plt.subplots(figsize=(12,4))\n\n# Filter signal\nsim_signal = Signal(sim_signal)\nsim_signal.resample(to_frequency=100)\nwith io.capture_output() as captured:\n sim_signal.filter(freq.value[0], freq.value[1], inplace=True);\nsns.lineplot(x=sim_signal.data.time[:plot_timepoints].values, y=sim_signal.data.sel(regions=target.value)[:plot_timepoints].values, ax=ax, label='Filtered Signal')\n\n# Extract signal envelope \nsim_signal.hilbert_transform('amplitude', inplace=True)\nsns.lineplot(x=sim_signal.data.time[:plot_timepoints].values, y=sim_signal.data.sel(regions=target.value)[:plot_timepoints].values, ax=ax, label='Signal Envelope')\n\n# Low-Pass Filter\nwith io.capture_output() as captured:\n sim_signal.filter(low_freq=None, high_freq=low_pass.value, inplace=True)\nsns.lineplot(x=sim_signal.data.time[:plot_timepoints].values, y=sim_signal.data.sel(regions=target.value)[:plot_timepoints].values, ax=ax, label='Low-Pass Signal Envelope')\nax.legend(bbox_to_anchor=(1.2, 1),borderaxespad=0)\nax.set_title(f'Simulated Signal of Target Region Y ({target.value})');\nsns.despine(trim=True)\n
To compute the simulated functional connectivity matrix we use the fc functions from neurolib.
import neurolib.utils.functions as func\n\n# Compute the functional connectivity matrix\nsim_fc = func.fc(sim_signal.data)\n\n# Set diagonal to zero\nnp.fill_diagonal(sim_fc, 0)\n\n# Plot Empirical and simulated connectivity matrix\nfig, ax = plt.subplots(1,2, figsize=(16,10))\nsns.heatmap(emp_fc, square=True, ax=ax[0], cmap='YlGnBu', linewidth=0.005, cbar_kws={\"shrink\": .5})\nax[0].set_title('Empirical FC',pad=10);\nsns.heatmap(sim_fc, square=True, ax=ax[1], cmap='YlGnBu', linewidth=0.005, cbar_kws={\"shrink\": .5})\nax[1].set_title('Simulated FC',pad=10);\nticks = [tick[:-2] for tick in emp_labels[::2]]\nfor ax in ax:\n ax.set_xticks(np.arange(0,80,2)); ax.set_yticks(np.arange(0,80,2)) \n ax.set_xticklabels(ticks, rotation=90, fontsize=8); ax.set_yticklabels(ticks, rotation=0, fontsize=8);\n
# Compare structural and simulated connectivity to the empirical functional connectivity\nstruct_emp = np.corrcoef(emp_fc.flatten(), ds.Cmat.flatten())[0,1]\nsim_emp = np.corrcoef(emp_fc.flatten(), sim_fc.flatten())[0,1]\n\n# Plot\nfig, ax = plt.subplots(figsize=(6,6))\nsplot = sns.barplot(x=['Structural Connectivity', 'Simulated Connectivity'], y=[struct_emp, sim_emp], ax=ax)\nax.set_title('Correlation to Empirical Functional Connectivity', pad=10)\nfor p in splot.patches:\n splot.annotate(format(p.get_height(), '.2f'), \n (p.get_x() + p.get_width() / 2., p.get_height()), \n ha = 'center', va = 'center', \n size=20, color='white',\n xytext = (0, -12), \n textcoords = 'offset points')\nsns.despine()\nprint(f\"Parameters: \\tGlobal Coupling: {wc.params['K_gl']}\\n\\t\\tExc. Background Drive: {wc.params['exc_ext']}\")\nprint(f\"\\t\\tNoise Level: {wc.params['sigma_ou']}\")\n
First off, let's load the MEG data using the Signal class from neurolib. Our example data has already been preprocessed and projected into source space using the AAL2 atlas.
"},{"location":"examples/example-3-meg-functional-connectivity/#band-pass-filter-and-hilbert-transform","title":"Band-Pass filter and Hilbert transform","text":""},{"location":"examples/example-3-meg-functional-connectivity/#orthogonalized-signal-envelope","title":"Orthogonalized signal envelope","text":"
Now we are going to address the main methodological issue of MEG when it comes to the analysis of the cortical functional connectivity structure, i.e. its low spatial resolution. The electric field generated by any given neural source spreads widely over the cortex so that the signal captured at the MEG sensors is a complex mixture of signals from multiple underlying neural sources.
To account for the effect of electric field spread on our MEG connectivity measures, we adapted the orthogonalization approach by Hipp, J., Hawellek, D., Corbetta, M. et al. Large-scale cortical correlation structure of spontaneous oscillatory activity. Nat Neurosci 15, 884\u2013890 (2012) link.
The basic idea here is that a signal generated by one neural source and measured at two separate sensors must have exactly the same phase at both sensors. In contrast, signals from different neural sources have different phases. It is thus possible to eliminate the effect of a reference signal on the target signal by removing the signal component that has the same phase as the reference region.
Formally, this can be expressed as: \\(Y_{\\perp X}(t,f) = imag\\big(\\ Y(t,f)\\ \\frac{X(t,f)^\\star}{|X(t,f)|}\\ \\big)\\ \\label{eq:orth}\\). Here, \\(Y\\) represents the analytic signal from our target regions that is being orthogonalized with respect to the signal from region \\(X\\).
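The formula can be verified numerically on synthetic analytic signals. In this toy sketch, the target Y contains a "leaked" copy of the reference X (same phase) plus an independent source; taking the imaginary part removes the leaked component exactly, since it contributes only a real term. In the notebook, X and Y would come from the Hilbert-transformed MEG data instead.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# synthetic complex (analytic) reference signal X
X = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# target Y: leaked copy of X (identical phase) + an independent source
independent = rng.standard_normal(n) + 1j * rng.standard_normal(n)
Y = 0.8 * X + independent

# Y_orth = imag( Y * conj(X) / |X| )
Y_orth = np.imag(Y * np.conj(X) / np.abs(X))

# the leaked component alone contributes nothing:
# 0.8 * X * conj(X) = 0.8 * |X|^2 is purely real, so its imaginary part is zero
leak_only = np.imag(0.8 * X * np.conj(X) / np.abs(X))
```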
Using the widgets below, you can choose the reference region \\(X\\) (default: right Precentral Gyrus).
"},{"location":"examples/example-3-meg-functional-connectivity/#low-pass-filtering-of-the-envelopes","title":"Low-Pass filtering of the envelopes","text":"
As a last step, before calculating the envelope correlations, we need to low-pass filter the signal envelopes since the connectivity measures of (ultra)-low frequency components of the MEG-signal correspond best to the functional connectivity as measured using fMRI.
Below, you can choose the low-pass frequency (default: 0.2 Hz).
"},{"location":"examples/example-3-meg-functional-connectivity/#computing-the-functional-connectivity-matrix","title":"Computing the functional connectivity matrix","text":"
We will now define a function that iterates over each pair of brain regions and performs the previously presented processing steps, i.e. extracts the envelopes, performs the orthogonalization, applies the low-pass filter, and returns the functional connectivity matrix containing the pairwise envelope correlations.
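Structurally, such a function might look like the hypothetical helper below. It only shows the pairwise loop and the symmetrization; the real implementation additionally performs the per-pair orthogonalization and filtering on Signal objects, which is why both directions of each pair are computed and averaged.

```python
import numpy as np

def envelope_fc(env):
    """Pairwise envelope correlations; env has shape (regions, time)."""
    n = env.shape[0]
    fc = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # in the real pipeline, env[i] would first be orthogonalized
            # with respect to region j before correlating
            fc[i, j] = np.corrcoef(env[i], env[j])[0, 1]
    # orthogonalization is not symmetric, so both directions are averaged
    return (fc + fc.T) / 2.0

rng = np.random.default_rng(3)
env = rng.standard_normal((4, 500))   # toy envelopes for 4 regions
mat = envelope_fc(env)
```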
For the following whole-brain simulation we are only interested in the cortical regions, so we now exclude all subcortical regions:
Hippocampus: 41-44
Amygdala: 45-46
Basal Ganglia: 75-80
Thalamus: 81-82
In this part of the notebook, we will use neurolib to simulate the functional connectivity. We will therefore:
Load structural connectivity matrices from the Human Connectome Project and initiate the whole-brain model using the Wilson-Cowan model to simulate each brain region
Set the global coupling strength, exc. background input, and the noise strength parameters of the model
Run the simulation
Compute the functional connectivity using the signal envelopes
Please refer to the wc-minimal example for an introduction to the Wilson-Cowan model.
You may now choose parameter settings for the global coupling, the excitatory background input, and the noise strength, which will be used when we run the model. The final fit between the simulated and empirical connectivity matrices will depend on the parameters chosen here.
"},{"location":"examples/example-3-meg-functional-connectivity/#run-the-simulation","title":"Run the simulation","text":"
Let's now run the whole-brain model using the defined parameter settings. This may take some time since we're simulating a complete minute here.
We'll now compute the functional connectivity matrix containing the pairwise envelope correlations between all cortical regions of the AAL2 atlas. We thus follow the same processing steps as before, i.e. band-pass filter the signal, extract the signal envelopes using the Hilbert transform, low-pass filter the envelopes, and compute the pairwise Pearson correlations. Note that we don't apply the orthogonalization scheme here, since this was only done to account for the electric field spread in the empirical data.
Lastly, we evaluate the model fit by computing the Pearson correlation between our simulated functional connectivity matrix and the empirical one. Additionally, we'll also plot the correlation between the structural and the empirical functional connectivity matrices as a reference.
# imports\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom IPython.display import display\n\n# import ALN single node model and neurolib wrapper `MultiModel`\nfrom neurolib.models.multimodel import ALNNetwork, ALNNode, MultiModel\n
# create a model and wrap it to `MultiModel`\naln = MultiModel.init_node(ALNNode())\n\n# 5 seconds run\naln.params[\"duration\"] = 5.0 * 1000 # in ms\n# MultiModel offers two integration backends; by default we use the so-called `jitcdde` backend\n# `jitcdde` is a numerical backend employing an adaptive-dt scheme for DDEs, therefore we do not care about\n# the actual dt (since it is adaptive), only about the sampling dt, and this can be higher\n# more about this in example-4.2\naln.params[\"sampling_dt\"] = 1.0 # in ms\n# parametrise ALN model in slow limit cycle\naln.params[\"*EXC*input*mu\"] = 4.2\naln.params[\"*INH*input*mu\"] = 1.8\n# run\naln.run()\n\n# plot - default output is firing rates in kHz\nplt.plot(aln[\"t\"], aln[\"r_mean_EXC\"].T, lw=2, c=\"k\")\nplt.xlabel(\"t [ms]\")\nplt.ylabel(\"Rate [kHz]\")\n
\n/Users/nikola/.virtualenvs/neurolib/lib/python3.7/site-packages/jitcdde/_jitcdde.py:1491: UserWarning: Your input past does not begin at t=0 but at t=1.0. Values before the beginning of the past will be extrapolated. You very likely do not want this.\n warn(f\"Your input past does not begin at t=0 but at t={input[0].time}. Values before the beginning of the past will be extrapolated. You very likely do not want this.\")\n\n
As you saw in the previous cell, the internal workings of MultiModel are very similar to the core neurolib. For simple runs, you therefore only need to know about MultiModel: a wrapper class for all models in the MultiModel framework that gives model objects the usual neurolib powers (meaning .params and .run()). It is initialized as follows: when initializing with a Node, model = MultiModel.init_node(<init'd Node class>); when initializing with a Network, model = MultiModel(<init'd Network class>) (see later).
dummy_sc = np.array([[0.0, 1.0], [1.0, 0.0]])\n# init MultiModelnetwork with 2 ALN nodes with dummy sc and no delays\nmm_net = ALNNetwork(connectivity_matrix=dummy_sc, delay_matrix=None)\n\nprint(mm_net)\n# each network is an proper python iterator, i.e. len() is defined\nprint(f\"Nodes: {len(mm_net)}\")\n# as well as __get_item__\nprint(mm_net[0])\nprint(mm_net[1])\n# similarly, each node is a python iterator, i.e.\nprint(f\"Masses in 1. node: {len(mm_net[0])}\")\nprint(mm_net[0][0])\nprint(mm_net[0][1])\n\n# in order to navigate through the hierarchy, each mass, node and net\n# has its own name and label and index\n# index of a node is relative within the network\n# index of a mass is relative within the node\nprint(f\"This network name: {mm_net.name}\")\nprint(f\"This network label: {mm_net.label}\")\nprint(f\"1st node name: {mm_net[0].name}\")\nprint(f\"1st node label: {mm_net[0].label}\")\nprint(f\"1st node index: {mm_net[0].index}\")\nprint(f\"1st mass in 1st node name: {mm_net[0][0].name}\")\nprint(f\"1st mass in 1st node label: {mm_net[0][0].label}\")\nprint(f\"1st mass in 1st node index: {mm_net[0][0].index}\")\n\n# you can also check number of variables etc at all levels of hierarchy\nprint(f\"ALN EXC num. vars: {mm_net[0][0].num_state_variables}\")\nprint(f\"ALN INH num. vars: {mm_net[0][1].num_state_variables}\")\nprint(f\"ALN node num. vars: {mm_net[0].num_state_variables}\")\nprint(f\"This network num. vars: {mm_net.num_state_variables}\")\n# similarly you can check number of \"noise variables\", i.e. the number\n# of stochastic variables entering the simulation\nprint(f\"ALN EXC noise vars: {mm_net[0][0].num_noise_variables}\")\n# etc\n\n# not sure what are the state variables? 
no problem!\nprint(f\"ALN EXC state vars: {mm_net[0][0].state_variable_names}\")\nprint(f\"ALN node state vars: {mm_net[0].state_variable_names}\")\nprint(f\"This network state vars: {mm_net.state_variable_names}\")\n\n# if you are unsure what kind of a monster you build in MultiModel,\n# a function `describe()` is available at all three levels -\n# it returns a dictionary with basic info about the model object\n# this is describe of a `NeuralMass`\nprint(\"\")\nprint(\"Mass `describe`:\")\ndisplay(mm_net[0][0].describe())\n# describe of a `Node` recursively describes all masses and some more\nprint(\"\")\nprint(\"Node `describe`:\")\ndisplay(mm_net[0].describe())\n# and finally, describe of a `Network` gives you everything\nprint(\"\")\nprint(\"Network `describe`:\")\ndisplay(mm_net.describe())\n\n# PRO tip: imagine highly heterogeneous network and some long simulation with it;\n# apart from the results you can dump `net.describe()` dictionary into json and\n# never forget what you've done!\n
\nBrain network ALN neural mass network with 2 nodes\nNodes: 2\nNetwork node: ALN neural mass node with 2 neural masses: ALN excitatory neural mass EXC, ALN inhibitory neural mass INH\nNetwork node: ALN neural mass node with 2 neural masses: ALN excitatory neural mass EXC, ALN inhibitory neural mass INH\nMasses in 1. node: 2\nNeural mass: ALN excitatory neural mass with 7 state variables: I_mu, I_A, I_syn_mu_exc, I_syn_mu_inh, I_syn_sigma_exc, I_syn_sigma_inh, r_mean\nNeural mass: ALN inhibitory neural mass with 6 state variables: I_mu, I_syn_mu_exc, I_syn_mu_inh, I_syn_sigma_exc, I_syn_sigma_inh, r_mean\nThis network name: ALN neural mass network\nThis network label: ALNNet\n1st node name: ALN neural mass node\n1st node label: ALNNode\n1st node index: 0\n1st mass in 1st node name: ALN excitatory neural mass\n1st mass in 1st node label: ALNMassEXC\n1st mass in 1st node index: 0\nALN EXC num. vars: 7\nALN INH num. vars: 6\nALN node num. vars: 13\nThis network num. vars: 26\nALN EXC noise vars: 1\nALN EXC state vars: ['I_mu', 'I_A', 'I_syn_mu_exc', 'I_syn_mu_inh', 'I_syn_sigma_exc', 'I_syn_sigma_inh', 'r_mean']\nALN node state vars: [['I_mu_EXC', 'I_A_EXC', 'I_syn_mu_exc_EXC', 'I_syn_mu_inh_EXC', 'I_syn_sigma_exc_EXC', 'I_syn_sigma_inh_EXC', 'r_mean_EXC', 'I_mu_INH', 'I_syn_mu_exc_INH', 'I_syn_mu_inh_INH', 'I_syn_sigma_exc_INH', 'I_syn_sigma_inh_INH', 'r_mean_INH']]\nThis network state vars: [['I_mu_EXC', 'I_A_EXC', 'I_syn_mu_exc_EXC', 'I_syn_mu_inh_EXC', 'I_syn_sigma_exc_EXC', 'I_syn_sigma_inh_EXC', 'r_mean_EXC', 'I_mu_INH', 'I_syn_mu_exc_INH', 'I_syn_mu_inh_INH', 'I_syn_sigma_exc_INH', 'I_syn_sigma_inh_INH', 'r_mean_INH'], ['I_mu_EXC', 'I_A_EXC', 'I_syn_mu_exc_EXC', 'I_syn_mu_inh_EXC', 'I_syn_sigma_exc_EXC', 'I_syn_sigma_inh_EXC', 'r_mean_EXC', 'I_mu_INH', 'I_syn_mu_exc_INH', 'I_syn_mu_inh_INH', 'I_syn_sigma_exc_INH', 'I_syn_sigma_inh_INH', 'r_mean_INH']]\n\nMass `describe`:\n\n
# now let us check the parameters... for this we initialise MultiModel in neurolib's fashion\naln_net = MultiModel(mm_net)\n# parameters are accessible via .params\naln_net.params\n# as you can see, the parameters are a flattened nested dictionary which follows this nomenclature:\n# {\"<network label>.<node label>_index.<mass label>_index.<param name>: param value\"}\n
# as you can see there are a lot of parameters for a simple 2-node network of ALN models\n# typically you want to change parameters of all nodes at the same time\n# fortunately, model.params is not your basic dictionary, it's a special one, we call it a `star` dictionary,\n# because you can do this:\ndisplay(aln_net.params[\"*tau\"])\nprint(\"\")\n# so yes, star works as a glob identifier, so by selecting \"*tau\" I want all parameters named tau\n# (I don't care from which mass or node it comes)\n# what if I want to change taus only in EXC masses? easy:\ndisplay(aln_net.params[\"*EXC*tau\"])\nprint(\"\")\n# or maybe I want to change taus only in the first node?\ndisplay(aln_net.params[\"*Node_0*tau\"])\nprint(\"\")\n# of course, you can change a param value with this\naln_net.params[\"*Node_0*tau\"] = 13.2\ndisplay(aln_net.params[\"*Node_0*tau\"])\naln_net.params[\"*Node_0*tau\"] = 5.0\n
# case: I want to change all taus except \"noise\" taus\n# this gives all the taus, including \"noise\"\ndisplay(aln_net.params[\"*tau*\"])\nprint(\"\")\n# pipe symbol filters out unwanted keys - here we have only taus which key does NOT include \"input\"\ndisplay(aln_net.params[\"*tau*|input\"])\n
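The glob and pipe matching shown above can be sketched in a few lines of plain Python. The `star_select` helper below is hypothetical and for illustration only (neurolib's actual star dictionary implements this directly on `model.params`), but it reproduces the two behaviors demonstrated in the cells: `*` globs over the dotted key hierarchy, and a trailing `|word` filters out keys containing `word`:

```python
from fnmatch import fnmatch


def star_select(params, pattern):
    """Hypothetical sketch of the `star` dictionary lookup: glob with `*`,
    exclude substrings after an optional `|`."""
    pattern, _, exclude = pattern.partition("|")
    return {
        key: value
        for key, value in params.items()
        if fnmatch(key, pattern) and (not exclude or exclude not in key)
    }


# a tiny flattened parameter dictionary mimicking the nomenclature above
flat = {
    "ALNNet.ALNNode_0.ALNMassEXC_0.tau": 5.0,
    "ALNNet.ALNNode_0.ALNMassINH_1.tau": 5.0,
    "ALNNet.ALNNode_0.ALNMassEXC_0.input_0.tau": 5.0,
    "ALNNet.ALNNode_1.ALNMassEXC_0.tau": 5.0,
}

# all taus in node 0 - note this also catches the noise-input tau,
# which is exactly why the `|` filter exists
print(star_select(flat, "*Node_0*tau"))
# all taus except the noise-input one
print(star_select(flat, "*tau*|input"))
```

Note that `"*Node_0*tau"` matches three keys (including `input_0.tau`), while `"*tau*|input"` drops the noise-input entry.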
max_rate_e = []\nmin_rate_e = []\n\n# number low for testing:\nmue_inputs = np.linspace(0, 2, 2)\n# use: mue_inputs = np.linspace(0, 2, 20)\n\n# now let's match ALN parameters to those in example-0-aln-minimal and recreate\n# the 1D bif. diagram\naln.params[\"*INH*input*mu\"] = 0.5\naln.params[\"*b\"] = 0.0\naln.params[\"ALNNode_0.ALNMassEXC_0.a\"] = 0.0\nfor mue in mue_inputs:\n    aln.params[\"*EXC*input*mu\"] = mue\n    display(aln.params[\"*EXC*input*mu\"])\n    aln.run()\n    max_rate_e.append(np.max(aln.output[0, -int(1000 / aln.params[\"sampling_dt\"]) :]))\n    min_rate_e.append(np.min(aln.output[0, -int(1000 / aln.params[\"sampling_dt\"]) :]))\n
\n{'ALNNode_0.ALNMassEXC_0.input_0.mu': 0.0}\n
\n/Users/nikola/.virtualenvs/neurolib/lib/python3.7/site-packages/jitcdde/_jitcdde.py:1491: UserWarning: Your input past does not begin at t=0 but at t=1.0. Values before the beginning of the past will be extrapolated. You very likely do not want this.\n warn(f\"Your input past does not begin at t=0 but at t={input[0].time}. Values before the beginning of the past will be extrapolated. You very likely do not want this.\")\n\n
plt.plot(mue_inputs, max_rate_e, c=\"k\", lw=2)\nplt.plot(mue_inputs, min_rate_e, c=\"k\", lw=2)\nplt.title(\"Bifurcation diagram of the aln model\")\nplt.xlabel(\"Input to excitatory population\")\nplt.ylabel(\"Min / max firing rate [kHz]\")\n
# let us start by subclassing the Network\n\n\nclass ALNThalamusMiniNetwork(Network):\n\"\"\"\n Simple thalamocortical motif: 1 cortical node ALN + 1 NMM thalamus.\n \"\"\"\n\n # provide basic attributes as name and label\n name = \"ALN 1 node + Thalamus\"\n label = \"ALNThlmNet\"\n\n # define which variables are used to sync, i.e. what coupling variables our nodes need\n sync_variables = [\n # both nodes are connected via excitatory synapses\n \"network_exc_exc\",\n # ALN requires also squared coupling\n \"network_exc_exc_sq\",\n # and INH mass in thalamus also receives excitatory coupling\n \"network_inh_exc\",\n ]\n\n # lastly, we need to define what is default output of the network (this has to be the\n # variable present in all nodes)\n # for us it is excitatory firing rates\n default_output = f\"r_mean_{EXC}\"\n # define all output vars of any interest to us - EXC and INH firing rates and adaptive current in ALN\n output_vars = [f\"r_mean_{EXC}\", f\"r_mean_{INH}\", f\"I_A_{EXC}\"]\n\n def __init__(self, connectivity_matrix, delay_matrix):\n # self connections are resolved within nodes, so zeroes at the diagonal\n assert np.all(np.diag(connectivity_matrix) == 0.0)\n\n # init ALN node with index 0\n aln_node = ALNNode()\n aln_node.index = 0\n # index where the state variables start - for first node it is always 0\n aln_node.idx_state_var = 0\n # set correct indices for noise input\n for mass in aln_node:\n mass.noise_input_idx = [mass.index]\n\n # init thalamus node with index 1\n thalamus = ThalamicNode()\n thalamus.index = 1\n # thalamic state variables start where ALN state variables end - easy\n thalamus.idx_state_var = aln_node.num_state_variables\n # set correct indices of noise input - one per mass, after ALN noise\n # indices\n for mass in thalamus:\n mass.noise_input_idx = [aln_node.num_noise_variables + mass.index]\n\n # now super.__init__ network with these two nodes:\n super().__init__(\n nodes=[aln_node, thalamus],\n 
connectivity_matrix=connectivity_matrix,\n delay_matrix=delay_matrix,\n )\n\n # done! the only other thing we need to do, is to set the coupling variables\n # thalamus vs. ALN are coupled via their firing rates and here we setup the\n # coupling matrices; the super class `Network` comes with some convenient\n # functions for this\n\n def _sync(self):\n\"\"\"\n Set coupling variables - the ones we defined in `sync_variables`\n _sync returns a list of tuples where the first element in each tuple is the coupling \"symbol\"\n and the second is the actual mathematical expression\n for the ease of doing this, `Network` class contains convenience functions for this:\n - _additive_coupling\n - _diffusive_coupling\n - _no_coupling\n here we use additive coupling only\n \"\"\"\n # get indices of coupling variables from all nodes\n exc_indices = [\n next(\n iter(\n node.all_couplings(\n mass_indices=node.excitatory_masses.tolist()\n )\n )\n )\n for node in self\n ]\n assert len(exc_indices) == len(self)\n return (\n # basic EXC <-> EXC coupling\n # within_node_idx is a list of len 2 (because we have two nodes)\n # with indices of coupling variables within the respective state vectors\n self._additive_coupling(\n within_node_idx=exc_indices, symbol=\"network_exc_exc\"\n )\n # squared EXC <-> EXC coupling (only to ALN)\n + self._additive_coupling(\n within_node_idx=exc_indices,\n symbol=\"network_exc_exc_sq\",\n # square connectivity\n connectivity=self.connectivity * self.connectivity,\n )\n # EXC -> INH coupling (only to thalamus)\n + self._additive_coupling(\n within_node_idx=exc_indices,\n symbol=\"network_inh_exc\",\n connectivity=self.connectivity,\n )\n + super()._sync()\n )\n
# lets check what we have\nSC = np.array([[0.0, 0.15], [1.2, 0.0]])\ndelays = np.array([[0.0, 13.0], [13.0, 0.0]]) # thalamocortical delay = 13ms\nthalamocortical = MultiModel(ALNThalamusMiniNetwork(connectivity_matrix=SC, delay_matrix=delays))\n# original `MultiModel` instance is always accessible as `MultiModel.model_instance`\ndisplay(thalamocortical.model_instance.describe())\n
# fix parameters for interesting regime\nthalamocortical.params[\"*g_LK\"] = 0.032 # K-leak conductance in thalamus\nthalamocortical.params[\"ALNThlmNet.ALNNode_0.ALNMassEXC_0.a\"] = 0.0 # no firing rate adaptation\nthalamocortical.params[\"*b\"] = 15.0 # spike adaptation\nthalamocortical.params[\"*tauA\"] = 1000.0 # slow adaptation timescale\nthalamocortical.params[\"*EXC*mu\"] = 3.4 # background excitation to ALN\nthalamocortical.params[\"*INH*mu\"] = 3.5 # background inhibition to ALN\nthalamocortical.params[\"*ALNMass*input*sigma\"] = 0.05 # noise in ALN\nthalamocortical.params[\"*TCR*input*sigma\"] = 0.005 # noise in thalamus\nthalamocortical.params[\"*input*tau\"] = 5.0 # timescale of OU process\n\n# number low for testing:\nthalamocortical.params[\"duration\"] = 2000. \n# use: thalamocortical.params[\"duration\"] = 20000. # 20 seconds simulation\nthalamocortical.params[\"sampling_dt\"] = 1.0\nthalamocortical.run()\n
\n/Users/nikola/.virtualenvs/neurolib/lib/python3.7/site-packages/jitcdde/_jitcdde.py:1491: UserWarning: Your input past does not begin at t=0 but at t=1.0. Values before the beginning of the past will be extrapolated. You very likely do not want this.\n warn(f\"Your input past does not begin at t=0 but at t={input[0].time}. Values before the beginning of the past will be extrapolated. You very likely do not want this.\")\n\n
We can nicely see the interplay between cortical UP and DOWN states (with UP state being dominant and irregular DOWN state excursions) and thalamic spindles.
Combining different models might seem hard at first, but it is actually quite intuitive and works just as if you were connecting models with pen and paper. The only necessary thing is to define and initialize the individual nodes in the network (done in the __init__ function) and then specify the type of coupling between these nodes (in the _sync() function). That's it!
For more information on how to build a network and for a deeper understanding of how exactly MultiModel works, please check out our following example, where we will build the Jansen-Rit network from scratch!
Here we showcase the MultiModel framework, a standalone framework within neurolib to create and simulate heterogeneous brain models. By heterogeneous, we mean that a brain network may consist of nodes with totally different dynamics, coupled by a single variable. Imagine having one population model for the thalamus, a different one for the hippocampus, and yet another for the cortex. Of course, the parameters, the model dynamics, and the equations would be completely different. This is all possible and even relatively easy in MultiModel.
To facilitate your heterogeneous experiments, MultiModel comes with a few predefined population models. We can mix these into a brain network in many ways. We provide:
aln: the adaptive linear-nonlinear population model, a mean-field approximation of a delay-coupled network of excitatory and inhibitory adaptive exponential integrate-and-fire neurons (AdEx)
fitzhugh_nagumo: the FitzHugh-Nagumo model, a two-dimensional slow-fast system and a simplified version of the famous 4D Hodgkin–Huxley model
hopf: the Hopf model (sometimes called a Stuart-Landau oscillator), a two-dimensional nonlinear model that serves as the normal form of the Hopf bifurcation in dynamical systems
thalamus: a conductance-based population rate model of the thalamus; it is a Jansen-Rit-like population model with current-based voltage evolution that includes adaptation (K-leak), calcium, and rectifying currents
wilson_cowan: the Wilson-Cowan model, a simple model of interconnected excitatory and inhibitory neural populations
wong_wang: the Wong-Wang model, an approximation of a biophysically based cortical network model. Our implementation comes in two flavors:
original Wong-Wang model with excitatory and inhibitory subtypes
reduced Wong-Wang model with simpler dynamics and no EXC/INH distinction
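To give a feel for the simplest of these, the Hopf normal form mentioned above can be sketched in a few lines. This is a generic Euler integration of the Stuart-Landau equations, not neurolib's implementation (which the package provides ready-made); for a bifurcation parameter a > 0 the trajectory settles on a limit cycle of radius sqrt(a):

```python
import math


def simulate_hopf(a=1.0, omega=1.0, dt=0.01, steps=5000, x=0.1, y=0.0):
    """Euler integration of the Hopf normal form:
    dx/dt = (a - x^2 - y^2) x - omega * y
    dy/dt = (a - x^2 - y^2) y + omega * x
    """
    for _ in range(steps):
        r2 = x * x + y * y
        dx = (a - r2) * x - omega * y
        dy = (a - r2) * y + omega * x
        x, y = x + dt * dx, y + dt * dy
    return x, y


x, y = simulate_hopf(a=1.0)
print(math.hypot(x, y))  # close to sqrt(a) = 1.0, the limit-cycle radius
```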
Moreover, the MultiModel framework is built in such a way that creating and connecting new models (e.g., Jansen-Rit) is easy and intuitive. An example of how to make a brand new model implementation in MultiModel is provided in the following example notebook (example-4.1-create-new-model.ipynb).
The MultiModel relies on the modeling hierarchy, which is typically implicit in whole-brain modeling. This hierarchy has three levels: * NeuralMass: represents a single neural population (typically excitatory, inhibitory, or without a subtype) and is defined by a set of parameters and (possibly delayed) (possibly stochastic) differential equations * Node: represents a single brain node, and it is a set of connected neural masses (so, e.g., a single Wilson-Cowan node consists of one excitatory and one inhibitory Wilson-Cowan NeuralMass) * Network: represents a brain network, and it is a set of connected nodes (can be any type, as long as the coupling variables are the same)
Although the magic happens at the level of NeuralMass (by magic, we mean the dynamics), users can only simulate (integrate) a Node or a Network. In other words, even for models without excitatory/inhibitory subtyping (e.g., Hopf or FitzHugh-Nagumo), we create a Node consisting of one NeuralMass. In the case of, e.g., Wilson-Cowan, ALN, or original Wong-Wang model, the Node consists of one excitatory and one inhibitory mass. More info on the modeling hierarchy and how it actually works is provided in the following example notebook (example-4.1-create-new-model.ipynb), where we need to subclass the base classes for this hierarchy to build a new model.
"},{"location":"examples/example-4-multimodel-intro/#basic-usage-in-neurolib","title":"Basic usage in neurolib","text":"
(In the following, we expect the reader to be mildly familiar with how neurolib works, e.g. how to run a model, how to change its parameters, and how to get model results.)
"},{"location":"examples/example-4-multimodel-intro/#simulating-the-node","title":"Simulating the node","text":""},{"location":"examples/example-4-multimodel-intro/#multimodel-parameters-and-other-accessible-attributes","title":"MultiModel parameters and other accessible attributes","text":"
Since MultiModel is able to simulate heterogeneous models, the internals of how parameters work are a bit more complex than in the core neurolib. Each mass has its own parameters, each node then gathers the parameters of all masses within that node, and finally, the network gathers all parameters from each node in the network. So, hierarchy again. To make it easier to navigate through MultiModel hierarchies, some attributes are implemented at all hierarchy levels.
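The gathering described above can be sketched as a tiny recursive flattening helper. `flatten_params` is a hypothetical function for illustration only (neurolib does this internally), but it produces exactly the dotted `"<network>.<node>.<mass>.<param>"` keys seen in `.params`:

```python
def flatten_params(nested, prefix=""):
    """Flatten a nested parameter dictionary into dotted keys (illustrative)."""
    flat = {}
    for key, value in nested.items():
        dotted = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten_params(value, dotted))
        else:
            flat[dotted] = value
    return flat


# hierarchy: network -> node -> mass -> parameters
nested = {"ALNNet": {"ALNNode_0": {"ALNMassEXC_0": {"tau": 5.0, "C": 50.0}}}}
print(flatten_params(nested))
# {'ALNNet.ALNNode_0.ALNMassEXC_0.tau': 5.0, 'ALNNet.ALNNode_0.ALNMassEXC_0.C': 50.0}
```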
"},{"location":"examples/example-4-multimodel-intro/#connecting-two-models","title":"Connecting two models","text":"
So far, we only showed how to use MultiModel with a single dynamical model (ALN), and that is no fun. I mean, all of this is already possible in core neurolib, and in the core, it is much faster.
However, the real strength of MultiModel is combining different models into one network. Let us build a thalamocortical model using one node of the thalamic population model and one node of ALN, representing the cortex.
import matplotlib.pyplot as plt\nimport numpy as np\nimport symengine as se\nfrom IPython.display import display\nfrom jitcdde import input as system_input\nfrom neurolib.models.multimodel import MultiModel, ThalamicNode\nfrom neurolib.models.multimodel.builder.base.constants import LAMBDA_SPEED\nfrom neurolib.models.multimodel.builder.base.network import Network, Node\nfrom neurolib.models.multimodel.builder.base.neural_mass import NeuralMass\nfrom neurolib.utils.functions import getPowerSpectrum\nfrom neurolib.utils.stimulus import Input, OrnsteinUhlenbeckProcess, StepInput\n
A quick detour before we dive into the model itself. The Jansen-Rit model is typically driven with uniformly distributed noise, as the authors wanted to model nonspecific input (they used the term background spontaneous activity). For this, we quickly create our model input by subclassing the Input class (a tutorial on how to use stimuli in neurolib is given elsewhere).
class UniformlyDistributedNoise(Input):\n\"\"\"\n Uniformly distributed noise process between two values.\n \"\"\"\n\n def __init__(self, low, high, n=1, seed=None):\n # save arguments as attributes for later\n self.low = low\n self.high = high\n # init super\n super().__init__(n=n, seed=seed)\n\n def generate_input(self, duration, dt):\n # generate time vector\n self._get_times(duration=duration, dt=dt)\n # generate noise process itself with the correct shape\n # as (time steps x num. processes)\n return np.random.uniform(\n self.low, self.high, (self.n, self.times.shape[0])\n )\n
# let us build a proper hierarchy, i.e. we firstly build a Jansen-Rit mass\nclass SingleJansenRitMass(NeuralMass):\n\"\"\"\n Single Jansen-Rit mass implementing whole three population dynamics.\n\n Reference:\n Jansen, B. H., & Rit, V. G. (1995). Electroencephalogram and visual evoked potential\n generation in a mathematical model of coupled cortical columns. Biological cybernetics,\n 73(4), 357-366.\n \"\"\"\n\n # all these attributes are compulsory to fill in\n name = \"Jansen-Rit mass\"\n label = \"JRmass\"\n\n num_state_variables = 7 # 6 ODEs + firing rate coupling variable\n num_noise_variables = 1 # single external input\n # NOTE \n # external inputs (so-called noise_variables) are typically background noise drive in models,\n # however, this can be any type of stimulus - periodic stimulus, step stimulus, square pulse,\n # anything. Therefore you may want to add more stimuli, e.g. for Jansen-Rit model three to each\n # of its population. Here we do not stimulate our Jansen-Rit model, so only use actual noise\n # drive to excitatory interneuron population.\n # as dictionary {index of state var: it's name}\n coupling_variables = {6: \"r_mean_EXC\"}\n # as list\n state_variable_names = [\n \"v_pyr\",\n \"dv_pyr\",\n \"v_exc\",\n \"dv_exc\",\n \"v_inh\",\n \"dv_inh\",\n # to comply with other `MultiModel` nodes\n \"r_mean_EXC\",\n ]\n # as list\n # note on parameters C1 - C4 - all papers use one C and C1-C4 are\n # defined as various rations of C, typically: C1 = C, C2 = 0.8*C\n # C3 = C4 = 0.25*C, therefore we use only `C` and scale it in the\n # dynamics definition\n required_params = [\n \"A\",\n \"a\",\n \"B\",\n \"b\",\n \"C\",\n \"v_max\",\n \"v0\",\n \"r\",\n \"lambda\",\n ]\n # list of required couplings when part of a `Node` or `Network`\n # `network_exc_exc` is the default excitatory coupling between nodes\n required_couplings = [\"network_exc_exc\"]\n # here we define the default noise input to Jansen-Rit model (this can be changed later)\n # for a 
quick test, we follow the original Jansen and Rit paper and use uniformly distributed\n # noise between 120 - 320 Hz; but we do it in kHz, hence 0.12 - 0.32\n # fix seed for reproducibility\n _noise_input = [UniformlyDistributedNoise(low=0.12, high=0.32, seed=42)]\n\n def _sigmoid(self, x):\n\"\"\"\n Sigmoidal transfer function which is the same for all populations.\n \"\"\"\n # notes:\n # - all parameters are accessible as self.params - it is a dictionary\n # - mathematical definition (ODEs) is done in symbolic mathematics - all functions have to be\n # imported from `symengine` module, hence se.exp which is a symbolic exponential function\n return self.params[\"v_max\"] / (\n 1.0 + se.exp(self.params[\"r\"] * (self.params[\"v0\"] - x))\n )\n\n def __init__(self, params=None, seed=None):\n # init this `NeuralMass` - use passed parameters or default ones\n # parameters are now accessible as self.params, seed as self.seed\n super().__init__(params=params or JR_DEFAULT_PARAMS, seed=seed)\n\n def _initialize_state_vector(self):\n\"\"\"\n Initialize state vector.\n \"\"\"\n np.random.seed(self.seed)\n # random in average potentials around zero\n self.initial_state = (\n np.random.normal(size=self.num_state_variables)\n # * np.array([10.0, 0.0, 10.0, 0.0, 10.0, 0.0, 0.0])\n ).tolist()\n\n def _derivatives(self, coupling_variables):\n\"\"\"\n Here the magic happens: dynamics is defined here using symbolic maths package symengine.\n \"\"\"\n # first, we need to unwrap state vector\n (\n v_pyr,\n dv_pyr,\n v_exc,\n dv_exc,\n v_inh,\n dv_inh,\n firing_rate,\n ) = self._unwrap_state_vector() # this function does everything for us\n # now we need to write down our dynamics\n # PYR dynamics\n d_v_pyr = dv_pyr\n d_dv_pyr = (\n self.params[\"A\"] * self.params[\"a\"] * self._sigmoid(v_exc - v_inh)\n - 2 * self.params[\"a\"] * dv_pyr\n - self.params[\"a\"] ** 2 * v_pyr\n )\n # EXC dynamics: system input comes into play here\n d_v_exc = dv_exc\n d_dv_exc = (\n self.params[\"A\"]\n 
* self.params[\"a\"]\n * (\n # system input as function from jitcdde (also in symengine) with proper index:\n # in our case we have only one noise input (can be more), so index 0\n system_input(self.noise_input_idx[0])\n # C2 = 0.8*C, C1 = C\n + (0.8 * self.params[\"C\"]) * self._sigmoid(self.params[\"C\"] * v_pyr)\n )\n - 2 * self.params[\"a\"] * dv_exc\n - self.params[\"a\"] ** 2 * v_exc\n )\n # INH dynamics\n d_v_inh = dv_inh\n d_dv_inh = (\n self.params[\"B\"] * self.params[\"b\"]\n # C3 = C4 = 0.25 * C\n * (0.25 * self.params[\"C\"])\n * self._sigmoid((0.25 * self.params[\"C\"]) * v_pyr)\n - 2 * self.params[\"b\"] * dv_inh\n - self.params[\"b\"] ** 2 * v_inh\n )\n # firing rate computation\n # firing rate as dummy dynamical variable with infinitely fast\n # fixed point dynamics\n firing_rate_now = self._sigmoid(v_exc - v_inh)\n d_firing_rate = -self.params[\"lambda\"] * (firing_rate - firing_rate_now)\n\n # now just return a list of derivatives in the correct order\n return [d_v_pyr, d_dv_pyr, d_v_exc, d_dv_exc, d_v_inh, d_dv_inh, d_firing_rate]\n
And we are done with the basics! The only thing we really need is to define the attributes (such as how many variables we have, what couplings we have, what about noise, etc.) and the actual dynamics as symbolic expressions. Symbolic expressions are easy: all basic operators like +, -, *, /, or ** are overloaded, which means you can simply use them without thinking about it. Functions such as sin, log, or exp must be imported from symengine. Now we define a default set of parameters. Do not forget - MultiModel defines everything in ms, therefore the parameters need to be in ms, kHz, and similar.
JR_DEFAULT_PARAMS = {\n \"A\": 3.25, # mV\n \"B\": 22.0, # mV\n # `a` and `b` are originally 100Hz and 50Hz\n \"a\": 0.1, # kHz\n \"b\": 0.05, # kHz\n \"v0\": 6.0, # mV\n # v_max is originally 5Hz\n \"v_max\": 0.005, # kHz\n \"r\": 0.56, # m/V\n \"C\": 135.0,\n # parameter for dummy `r` dynamics\n \"lambda\": LAMBDA_SPEED,\n}\n
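A quick sanity check of this ms/kHz scaling, in plain Python (illustrative only, using the defaults above): the sigmoidal transfer function evaluated at the threshold v0 must return exactly half the maximal rate, and it saturates at v_max for large potentials:

```python
import math


def jr_sigmoid(x, v_max=0.005, r=0.56, v0=6.0):
    """Jansen-Rit sigmoid with the default parameters above (kHz, mV)."""
    return v_max / (1.0 + math.exp(r * (v0 - x)))


print(jr_sigmoid(6.0))   # half-maximum at threshold: 0.0025 kHz = 2.5 Hz
print(jr_sigmoid(60.0))  # saturates near v_max = 0.005 kHz = 5 Hz
```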
The next step is to create a Node. A Node is the second level in the hierarchy, and it can already be wrapped into MultiModel and treated as any other neurolib model. In our case, creating the Node is really simple: it has only one mass, no delays, and no connectivity.
class JansenRitNode(Node):\n\"\"\"\n Jansen-Rit node with 1 neural mass representing 3 population model.\n \"\"\"\n\n name = \"Jansen-Rit node\"\n label = \"JRnode\"\n\n # if Node is integrated isolated, what network input we should use\n # zero by default = no network input for one-node model\n default_network_coupling = {\"network_exc_exc\": 0.0}\n\n # default output is the firing rate of pyramidal population\n default_output = \"r_mean_EXC\"\n\n # list of all variables that are accessible as outputs\n output_vars = [\"r_mean_EXC\", \"v_pyr\", \"v_exc\", \"v_inh\"]\n\n def __init__(self, params=None, seed=None):\n # in `Node` __init__, the list of masses is created and passed\n jr_mass = SingleJansenRitMass(params=params, seed=seed)\n # each mass has to have index, in this case it is simply 0\n jr_mass.index = 0\n # call super and properly initialize a Node\n super().__init__(neural_masses=[jr_mass])\n\n self.excitatory_masses = np.array([0])\n\n def _sync(self):\n # this function typically defines the coupling between masses\n # within one node, but in our case there is nothing to define\n return []\n
And we are done. At this point, we can integrate our Jansen-Rit model and see some results.
# init model - fix seed for reproducibility (random init. conditions)\njr_model = MultiModel.init_node(JansenRitNode(seed=42))\n\n# see parameters\nprint(\"Parameters:\")\ndisplay(jr_model.params)\nprint(\"\")\n\n# see describe\nprint(\"Describe:\")\ndisplay(jr_model.model_instance.describe())\n
# run model for 5 seconds - all in ms\njr_model.params[\"sampling_dt\"] = 1.0\njr_model.params[\"duration\"] = 5000\njr_model.params[\"backend\"] = \"jitcdde\"\njr_model.run()\n
\n/Users/nikola/.virtualenvs/neurolib/lib/python3.7/site-packages/jitcdde/_jitcdde.py:1491: UserWarning: Your input past does not begin at t=0 but at t=1.0. Values before the beginning of the past will be extrapolated. You very likely do not want this.\n warn(f\"Your input past does not begin at t=0 but at t={input[0].time}. Values before the beginning of the past will be extrapolated. You very likely do not want this.\")\n\n
The results look good! With the same parameters as in the original Jansen and Rit paper, we get \(\alpha\) activity with a spectral peak around 10 Hz. Just as a proof of concept - let us try the second MultiModel backend.
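As an aside, a spectral peak like this can be checked without any plotting. The naive DFT peak finder below is an illustrative, dependency-free stand-in for neurolib's getPowerSpectrum, demonstrated here on a synthetic 10 Hz trace sampled at 1 kHz (i.e. sampling_dt = 1 ms, as in the cells above):

```python
import math


def dominant_frequency_hz(signal, dt_ms=1.0):
    """Return the frequency (Hz) of the strongest DFT bin (naive O(n^2) DFT)."""
    n = len(signal)
    mean = sum(signal) / n
    best_k, best_power = 0, 0.0
    for k in range(1, n // 2):
        re = sum((s - mean) * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        im = sum((s - mean) * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    # bin k corresponds to k cycles per window of length n * dt_ms milliseconds
    return best_k / (n * dt_ms * 1e-3)


# synthetic 10 Hz oscillation: 1 second sampled at 1 kHz
trace = [math.sin(2 * math.pi * 10.0 * t / 1000.0) for t in range(1000)]
print(dominant_frequency_hz(trace))  # → 10.0
```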
# run model for 5 seconds - all in ms\njr_model.params[\"sampling_dt\"] = 1.0\njr_model.params[\"dt\"] = 0.1\njr_model.params[\"duration\"] = 5000\njr_model.params[\"backend\"] = \"numba\"\njr_model.run()\n
All works as it should - we have our Jansen-Rit model! In the next step, we will showcase how this new model can be connected and coupled to other models (similar to the first example).
# let us start by subclassing the Network\n\n\nclass JansenRitThalamusMiniNetwork(Network):\n\"\"\"\n Simple thalamocortical motif: 1 cortical node Jansen-Rit + 1 NMM thalamus.\n \"\"\"\n\n # provide basic attributes as name and label\n name = \"Jansen-Rit 1 node + Thalamus\"\n label = \"JRThlmNet\"\n\n # define which variables are used to sync, i.e. what coupling variables our nodes need\n sync_variables = [\n # both nodes are connected via excitatory synapses\n \"network_exc_exc\",\n # and INH mass in thalamus also receives excitatory coupling\n \"network_inh_exc\",\n ]\n\n # lastly, we need to define what is default output of the network (this has to be the\n # variable present in all nodes)\n # for us it is excitatory firing rates\n default_output = f\"r_mean_EXC\"\n # define all output vars of any interest to us - EXC and INH firing rates\n output_vars = [f\"r_mean_EXC\", f\"r_mean_INH\"]\n\n def __init__(self, connectivity_matrix, delay_matrix, seed=None):\n # self connections are resolved within nodes, so zeroes at the diagonal\n assert np.all(np.diag(connectivity_matrix) == 0.0)\n\n # init Jansen-Rit node with index 0\n jr_node = JansenRitNode(seed=seed)\n jr_node.index = 0\n # index where the state variables start - for first node it is always 0\n jr_node.idx_state_var = 0\n # set correct indices for noise input - in JR we have only one noise source\n jr_node[0].noise_input_idx = [0]\n\n # init thalamus node with index 1\n thalamus = ThalamicNode()\n thalamus.index = 1\n # thalamic state variables start where ALN state variables end - easy\n thalamus.idx_state_var = jr_node.num_state_variables\n # set correct indices of noise input - one per mass, after ALN noise\n # indices\n for mass in thalamus:\n mass.noise_input_idx = [jr_node.num_noise_variables + mass.index]\n\n # now super.__init__ network with these two nodes:\n super().__init__(\n nodes=[jr_node, thalamus],\n connectivity_matrix=connectivity_matrix,\n delay_matrix=delay_matrix,\n )\n\n # done! 
the only other thing we need to do, is to set the coupling variables\n # thalamus vs. Jansen-Rit are coupled via their firing rates and here we setup the\n # coupling matrices; the super class `Network` comes with some convenient\n # functions for this\n\n def _sync(self):\n\"\"\"\n Set coupling variables - the ones we defined in `sync_variables`\n _sync returns a list of tuples where the first element in each tuple is the coupling \"symbol\"\n and the second is the actual mathematical expression\n for the ease of doing this, `Network` class contains convenience functions for this:\n - _additive_coupling\n - _diffusive_coupling\n - _no_coupling\n here we use additive coupling only\n \"\"\"\n # get indices of coupling variables from all nodes\n exc_indices = [\n next(\n iter(\n node.all_couplings(\n mass_indices=node.excitatory_masses.tolist()\n )\n )\n )\n for node in self\n ]\n assert len(exc_indices) == len(self)\n return (\n # basic EXC <-> EXC coupling\n # within_node_idx is a list of len 2 (because we have two nodes)\n # with indices of coupling variables within the respective state vectors\n self._additive_coupling(\n within_node_idx=exc_indices, symbol=\"network_exc_exc\"\n )\n # EXC -> INH coupling (only to thalamus)\n + self._additive_coupling(\n within_node_idx=exc_indices,\n symbol=\"network_inh_exc\",\n connectivity=self.connectivity,\n )\n + super()._sync()\n )\n
# lets check what we have\n# in the ALN-thalamus case the matrix was [0.0, 0.15], [1.2, 0.0] - JR produces firing rates around 5Hz (5 times lower than ALN)\nSC = np.array([[0.0, 0.15], [6., 0.0]])\ndelays = np.array([[0.0, 13.0], [13.0, 0.0]]) # thalamocortical delay = 13ms\nthalamocortical = MultiModel(JansenRitThalamusMiniNetwork(connectivity_matrix=SC, delay_matrix=delays, seed=42))\n# original `MultiModel` instance is always accessible as `MultiModel.model_instance`\ndisplay(thalamocortical.model_instance.describe())\n
# fix parameters for interesting regime\nthalamocortical.params[\"*g_LK\"] = 0.032 # K-leak conductance in thalamus\nthalamocortical.params[\"*TCR*input*sigma\"] = 0.005 # noise in thalamus\nthalamocortical.params[\"*input*tau\"] = 5.0 # timescale of OU process\nthalamocortical.params[\"duration\"] = 20000. # 20 seconds simulation\nthalamocortical.params[\"sampling_dt\"] = 1.0\nthalamocortical.run()\n
\n/Users/nikola/.virtualenvs/neurolib/lib/python3.7/site-packages/jitcdde/_jitcdde.py:1491: UserWarning: Your input past does not begin at t=0 but at t=1.0. Values before the beginning of the past will be extrapolated. You very likely do not want this.\n warn(f\"Your input past does not begin at t=0 but at t={input[0].time}. Values before the beginning of the past will be extrapolated. You very likely do not want this.\")\n\n
So the model works, just like that! Of course, in this case we do not see anything interesting in the modelled dynamics, since that would require a proper investigation of various parameters. Consider this a proof of concept, where we can very easily couple two very different population models.
"},{"location":"examples/example-4.1-multimodel-custom-model/#creating-new-model-from-scratch","title":"Creating new model from scratch","text":"
The real power of the MultiModel framework is in fast prototyping of heterogeneous models. To showcase this, in this notebook we create a brand new model in the framework (the famous Jansen-Rit model) and then build a thalamocortical mini network with one node representing the thalamus and one Jansen-Rit node representing a cortical column.
The Jansen-Rit model is a neural population model of a local cortical circuit. It contains three interconnected neural populations: one for the pyramidal projection neurons and two for excitatory and inhibitory interneurons forming feedback loops.
The equations for the Jansen-Rit model read: \\begin{align} \\ddot{x}_{0} & = Aa\\cdot\\mathrm{Sigm}\\left(x_{1} - x_{2}\\right) - 2a\\dot{x}_{0} - a^{2}x_{0} \\\\ \\ddot{x}_{1} & = Aa\\left[p + C_{2}\\mathrm{Sigm}\\left(C_{1}x_{0}\\right)\\right] - 2a\\dot{x}_{1} - a^{2}x_{1} \\\\ \\ddot{x}_{2} & = BbC_{4}\\mathrm{Sigm}\\left(C_{3}x_{0}\\right) - 2b\\dot{x}_{2} - b^{2}x_{2} \\\\ \\mathrm{Sigm}(x) & = \\frac{v_{max}}{1 + \\mathrm{e}^{r(v_{0} - x)}} \\end{align} Of course, in order to implement the above equations numerically, the system of three second-order ODEs is rewritten into a system of six first-order ODEs.
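That rewriting can be sketched directly: each second-order equation \(\ddot{x} = f\) becomes the pair \(\dot{x} = y\), \(\dot{y} = f\). The plain-Python Euler integration below is illustrative only - it is not the symbolic MultiModel implementation this notebook builds - but it uses the same equations and the ms/kHz-scaled default parameters discussed in this notebook:

```python
import math
import random

# Jansen-Rit defaults, scaled to ms / kHz / mV
A, B, a, b = 3.25, 22.0, 0.1, 0.05
C = 135.0
C1, C2, C3, C4 = C, 0.8 * C, 0.25 * C, 0.25 * C
v_max, v0, r = 0.005, 6.0, 0.56


def sigm(x):
    return v_max / (1.0 + math.exp(r * (v0 - x)))


def jansen_rit(duration=1000.0, dt=0.05, seed=42):
    """Euler integration of the six first-order Jansen-Rit ODEs (sketch)."""
    rng = random.Random(seed)
    x0 = dx0 = x1 = dx1 = x2 = dx2 = 0.0
    out = []
    for _ in range(int(duration / dt)):
        p = rng.uniform(0.12, 0.32)  # uniformly distributed background drive (kHz)
        ddx0 = A * a * sigm(x1 - x2) - 2 * a * dx0 - a * a * x0
        ddx1 = A * a * (p + C2 * sigm(C1 * x0)) - 2 * a * dx1 - a * a * x1
        ddx2 = B * b * C4 * sigm(C3 * x0) - 2 * b * dx2 - b * b * x2
        # second-order -> first-order: update (x, dx) pairs
        x0, dx0 = x0 + dt * dx0, dx0 + dt * ddx0
        x1, dx1 = x1 + dt * dx1, dx1 + dt * ddx1
        x2, dx2 = x2 + dt * dx2, dx2 + dt * ddx2
        out.append(x1 - x2)  # average pyramidal membrane potential (mV)
    return out


v = jansen_rit()
print(min(v), max(v))  # trace stays within a physiological mV range
```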
The actual implementation will be a bit more involved than simply writing down the above equations. The building block of any proper MultiModel is a NeuralMass. The Jansen-Rit model summarises the activity of a cortical column consisting of three populations: a population of pyramidal cells interacting with two populations of interneurons - one excitatory and one inhibitory. Moreover, the \\(x_{i}\\) represent average membrane potentials, but typically, neuronal models (at least in neurolib) are coupled via firing rates. For this reason, our main output variable is actually the firing rate of the main, pyramidal population. The average membrane potential of the pyramidal population is \\(x = x_{1} - x_{2}\\), and its firing rate is then \\(r = \\mathrm{Sigm}(x) = \\mathrm{Sigm}(x_{1} - x_{2})\\). A similar strategy (a sigmoidal transfer function applied to the average membrane potential) is used for the thalamic model.
The coupling variable in MultiModel must be the same across all hierarchical levels. Individual populations in the Jansen-Rit model are coupled via their average membrane potentials \\(x_{i}\\), \\(i\\in[0, 1, 2]\\). However, the "global" coupling variable for the node would be the firing rate of the pyramidal population \\(r\\), introduced in the paragraph above. To reconcile this, two options exist in MultiModel: * create a NeuralMass representing the pyramidal population with two coupling variables, \\(x_{0}\\) and \\(r\\); the NeuralMass objects representing the interneurons would each have one coupling variable, \\(x_{1,2}\\) * advantages: cleaner (implementation), more modular (can easily create a J-R Node with more than 3 masses) * disadvantages: more code, might be harder to navigate for the beginner * create one NeuralMass representing all three populations with one coupling variable \\(r\\), since the other variables do not actually serve as couplings when the whole dynamics is contained in one object * advantages: less code, easier to grasp * disadvantages: less modular, cannot simply edit the J-R model, since everything is "hardcoded" into one NeuralMass object
In order to build a basic understanding of the building blocks, we will follow the second option here (less code, easier to grasp). For interested readers, the first, modular option will be implemented in MultiModel itself, so you can follow the model files there.
Our strategy for this notebook is therefore: 1. implement a single NeuralMass object representing all three populations of the Jansen-Rit model, with a single coupling variable \\(r\\) 2. implement a \"dummy\" Node with a single NeuralMass (a requirement: one cannot couple a Node to a NeuralMass when building a network) 3. experiment with connecting this Jansen-Rit cortical model to the thalamic population model
One last note: all models' dynamics and parameters in MultiModel are defined in milliseconds, therefore we will rescale the default parameters.
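Since MultiModel works in milliseconds, any parameter with dimension \\([s^{-1}]\\) must be divided by 1000. A minimal sketch of this rescaling, using illustrative parameter names and the commonly cited Jansen-Rit defaults (not neurolib's actual attribute names):

```python
# Hypothetical illustration of rescaling Jansen-Rit defaults from
# seconds to milliseconds (names and values are for illustration only).
params_si = {
    "a": 100.0,   # excitatory rate constant [s^-1]
    "b": 50.0,    # inhibitory rate constant [s^-1]
    "e0": 2.5,    # half of max firing rate [s^-1]
    "v0": 6.0,    # sigmoid midpoint [mV] - not a rate, stays unchanged
}

MS_PER_S = 1000.0
# quantities with dimension [s^-1] are divided by 1000 to become [ms^-1]
params_ms = {k: (v / MS_PER_S if k in ("a", "b", "e0") else v)
             for k, v in params_si.items()}
print(params_ms)
```

Quantities that are not rates (membrane potentials, the sigmoid slope in \\([mV^{-1}]\\)) are left untouched; only time-dimensioned parameters change.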
Let us start with the imports:
"},{"location":"examples/example-4.1-multimodel-custom-model/#test-and-simulate-jansen-rit-model","title":"Test and simulate Jansen-Rit model","text":"
In order to simulate our newly created model, we just need to wrap it with MultiModel, as in the previous example, and see how it goes.
"},{"location":"examples/example-4.1-multimodel-custom-model/#couple-jansen-rit-to-thalamic-model","title":"Couple Jansen-Rit to thalamic model","text":"
Here we practically copy the previous example, where we coupled an ALN node with a thalamic node, but instead of the ALN model representing a cortical column, we use our brand-new Jansen-Rit model.
"},{"location":"examples/example-4.2-multimodel-backends-and-optimization/","title":"Example 4.2 multimodel backends and optimization","text":"
import matplotlib.pyplot as plt\nimport neurolib.utils.functions as func\nimport numpy as np\nfrom IPython.display import display\nfrom neurolib.models.multimodel import (\n ALNNode,\n FitzHughNagumoNetwork,\n FitzHughNagumoNode,\n MultiModel,\n)\nfrom neurolib.optimize.evolution import Evolution\nfrom neurolib.optimize.exploration import BoxSearch\nfrom neurolib.utils.parameterSpace import ParameterSpace\n\n# a nice color map\nplt.rcParams[\"image.cmap\"] = \"plasma\"\n
# create a FitzHugh-Nagumo Node\nfhn_node = FitzHughNagumoNode()\n# necessary attributes when creating a Node from scratch\nfhn_node.index = 0\nfhn_node.idx_state_var = 0\nfhn_node.init_node()\n\ndisplay(fhn_node._derivatives())\ndisplay(len(fhn_node._derivatives()), fhn_node.num_state_variables)\n
As we see, the _derivatives() function returns a list of equations, in this case of length 2 (which is, of course, equal to fhn.num_state_variables). The current_y(<index>) lingo is taken from jitcdde. As written above, all equations are symbolic, and current_y(<index>) is therefore a symengine Symbol representing the state vector entry with index <index> at the current time t. In other words, current_y(0) is the first variable (in the FitzHugh-Nagumo model, this is \\(x\\)), while current_y(1) is the second variable (\\(y\\)). The past_y() lingo is similar, but encodes either the past of the state vector, i.e. delayed interactions, or the external input (noise or stimulus). In this case it represents the external input (you can tell, since it is past_y(-external_input...)). Now let us see how this looks for a network:
Now, since we have 2 nodes, the total number of state variables is 4, and we see the equations for the whole network as a list of 4 symbolic equations. In the network equations we encounter new symbols: network_x_0 and network_x_1. At this point, these are really just symengine symbols, but they actually represent the coupling between the nodes. That is the topic of the next section.
In this particular case, we have 2 coupling variables and 2 nodes, hence 4 coupling terms. The coupling of \\(y\\) is zero. As for the coupling of the \\(x\\) variables between nodes, you can now see how it works: network_x_0 simply defines the network coupling of variable x for the first node, and it is just 1.43 (from the SC matrix we passed when creating the FHN network) times the state variable with index 2 at time t - 10 milliseconds, minus the current state variable with index 0 (diffusive coupling). Similarly for node 1 (with a different coupling strength and different state variable indices, of course).
Once the symbols from the _sync() function are inserted into _derivatives() at the proper places, we have a full definition of the model. This is exactly what both backends do: they gather the equations (_derivatives()), look up the coupling terms (_sync()), and integrate the model forward in time.
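To make this concrete, here is a minimal numpy sketch of what the numba backend conceptually does for a 2-node FHN network: evaluate each node's right-hand side, substitute the delayed diffusive coupling term, and step forward with an Euler scheme. The FHN parameterisation and coupling values below are illustrative, not neurolib's exact defaults:

```python
import numpy as np

# Toy 2-node FitzHugh-Nagumo network integrated with a plain Euler
# scheme. This only sketches the backend's job: evaluate the node
# right-hand sides (_derivatives()) and plug in the delayed diffusive
# coupling terms (_sync()). All values are illustrative.
dt = 0.1                        # integration step [ms]
delay_steps = int(10.0 / dt)    # 10 ms coupling delay, as in the text
sc = np.array([[0.0, 1.43],     # coupling strengths (cf. the SC matrix
               [2.0, 0.0]])     # mentioned above; values illustrative)
n_steps = 20000

x = np.zeros((2, n_steps))
y = np.zeros((2, n_steps))
x[:, 0] = [0.5, -0.5]           # break the symmetry between nodes

for t in range(1, n_steps):
    x_del = x[:, max(t - 1 - delay_steps, 0)]   # delayed state vector
    for i in range(2):
        j = 1 - i
        # diffusive coupling: strength * (delayed other node - current node)
        coupling = sc[i, j] * (x_del[j] - x[i, t - 1])
        dx = x[i, t - 1] - x[i, t - 1] ** 3 / 3.0 - y[i, t - 1] + 1.0 + coupling
        dy = (x[i, t - 1] + 0.7 - 0.8 * y[i, t - 1]) / 12.5
        x[i, t] = x[i, t - 1] + dt * dx
        y[i, t] = y[i, t - 1] + dt * dy

print(x[:, -1])  # final state of both nodes
```

The jitcdde backend does the same conceptually, but compiles the symbolic equations and uses an adaptive-step delay-differential-equation solver instead of fixed-step Euler, which is why only a sampling dt is set for it below.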
fhn_mm = MultiModel(fhn_net)\n# 2 second run\nfhn_mm.params[\"duration\"] = 2000.0\nfhn_mm.params[\"backend\"] = \"jitcdde\"\n# jitcdde works with adaptive dt, you only set sampling dt\nfhn_mm.params[\"sampling_dt\"] = 1.0\nfhn_mm.run()\n\nplt.plot(fhn_mm.x.T)\n
\n/Users/nikola/.virtualenvs/neurolib/lib/python3.7/site-packages/jitcdde/_jitcdde.py:1491: UserWarning: Your input past does not begin at t=0 but at t=1.0. Values before the beginning of the past will be extrapolated. You very likely do not want this.\n warn(f\"Your input past does not begin at t=0 but at t={input[0].time}. Values before the beginning of the past will be extrapolated. You very likely do not want this.\")\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2000/2000 [00:00<00:00, 85415.01it/s]\n
\nUsing default integration parameters.\n\n
\n[<matplotlib.lines.Line2D at 0x15aa79950>,\n <matplotlib.lines.Line2D at 0x15aa63690>]\n
fhn_mm = MultiModel(fhn_net)\n# 2 second run\nfhn_mm.params[\"duration\"] = 2000.0\nfhn_mm.params[\"backend\"] = \"numba\"\n# numba uses Euler scheme so dt is important!\nfhn_mm.params[\"dt\"] = 0.1\nfhn_mm.params[\"sampling_dt\"] = 1.0\nfhn_mm.run()\n\nplt.plot(fhn_mm.x.T)\n
\n[<matplotlib.lines.Line2D at 0x15a44fad0>,\n <matplotlib.lines.Line2D at 0x15a2a0e10>]\n
# first init multimodel\naln_mm = MultiModel.init_node(ALNNode())\ndisplay(aln_mm.params)\n
# match params to core ALN model\naln_mm.params[\"backend\"] = \"numba\"\naln_mm.params[\"dt\"] = 0.1 # ms\naln_mm.params[\"*c_gl\"] = 0.3\naln_mm.params[\"*b\"] = 0.0\naln_mm.params[\"ALNNode_0.ALNMassEXC_0.a\"] = 0.0\n\n# set up exploration - using star notation\n\n# parameters = ParameterSpace(\n# {\n# \"*EXC*input*mu\": np.linspace(0, 3, 21),\n# \"*INH*input*mu\": np.linspace(0, 3, 21),\n# },\n# allow_star_notation=True,\n# )\n\n# set up exploration - using exact parameter names\n# in this case (one node, search over noise mu) -> these two are equivalent!\nparameters = ParameterSpace(\n {\n # 2 values per dimension is for quick testing only; for a real exploration use e.g. 21 or more\n \"ALNNode_0.ALNMassEXC_0.input_0.mu\": np.linspace(0, 3, 2),\n \"ALNNode_0.ALNMassINH_1.input_0.mu\": np.linspace(0, 3, 2),\n # \"ALNNode_0.ALNMassEXC_0.input_0.mu\": np.linspace(0, 3, 21),\n # \"ALNNode_0.ALNMassINH_1.input_0.mu\": np.linspace(0, 3, 21),\n },\n allow_star_notation=True,\n)\nsearch = BoxSearch(aln_mm, parameters, filename=\"example-4.2-exploration.hdf\")\n
\nMainProcess pypet.storageservice.HDF5StorageService INFO I will use the hdf5 file `./data/hdf/example-4.2-exploration.hdf`.\nMainProcess pypet.environment.Environment INFO Environment initialized.\n/Users/nikola/.virtualenvs/neurolib/lib/python3.7/site-packages/pypet/naturalnaming.py:1473: SyntaxWarning: `lambda` is a python keyword, you may not be able to access it via natural naming but only by using `[]` square bracket notation. \n category=SyntaxWarning)\nMainProcess root INFO Number of parameter configurations: 441\nMainProcess root INFO BoxSearch: Environment initialized.\n\n
search.run()\n
\nMainProcess pypet.environment.Environment INFO Starting multiprocessing with at most 8 processes running at the same time.\nMainProcess pypet INFO PROGRESS: Finished 441/441 runs [====================]100.0%\nMainProcess pypet.environment.Environment INFO All runs of trajectory `results-2021-07-07-16H-40M-57S` were completed successfully.\n(repetitive pypet storage log truncated)\n\n
search.loadResults()\n
\nMainProcess root INFO Loading results from ./data/hdf/example-4.2-exploration.hdf\nMainProcess root INFO Analyzing trajectory results-2021-07-07-16H-40M-57S\nMainProcess pypet.storageservice.HDF5StorageService INFO I will use the hdf5 file `./data/hdf/example-4.2-exploration.hdf`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Loading trajectory `results-2021-07-07-16H-40M-57S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Loading branch `config` in mode `2`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Loading branch `parameters` in mode `2`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Loading branch `results` in mode `1`.\nMainProcess root INFO Creating `dfResults` dataframe ...\nMainProcess root INFO Loading all results to `results` dictionary ...\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 441/441 [00:02<00:00, 219.34it/s]\nMainProcess root INFO Aggregating results to `dfResults` ...\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 441/441 [00:00<00:00, 3745.83it/s]\nMainProcess root INFO All results loaded.\n\n
print(f\"Number of results: {len(search.results)}\")\n
\nNumber of results: 441\n\n
# Example analysis of the results\n# The .results attribute is a list and can be indexed by the run\n# number (which is also the index of the pandas dataframe .dfResults).\n# Here we compute the maximum firing rate of the node in the last second\n# and add the result (a float) to the pandas dataframe.\nfor i in search.dfResults.index:\n search.dfResults.loc[i, \"max_r\"] = np.max(\n search.results[i][\"r_mean_EXC\"][:, -int(1000 / aln_mm.params[\"dt\"]) :]\n )\n
plt.imshow(\n search.dfResults.pivot_table(\n values=\"max_r\",\n index=\"ALNNode_0.ALNMassINH_1.input_0.mu\",\n columns=\"ALNNode_0.ALNMassEXC_0.input_0.mu\",\n ),\n extent=[\n min(search.dfResults[\"ALNNode_0.ALNMassEXC_0.input_0.mu\"]),\n max(search.dfResults[\"ALNNode_0.ALNMassEXC_0.input_0.mu\"]),\n min(search.dfResults[\"ALNNode_0.ALNMassINH_1.input_0.mu\"]),\n max(search.dfResults[\"ALNNode_0.ALNMassINH_1.input_0.mu\"]),\n ],\n origin=\"lower\",\n)\nplt.colorbar(label=\"Maximum rate [kHz]\")\nplt.xlabel(\"Input to E\")\nplt.ylabel(\"Input to I\")\n
\nMainProcess root INFO Loading precomputed transfer functions from /Users/nikola/.virtualenvs/neurolib/lib/python3.7/site-packages/neurolib/models/multimodel/builder/../../aln/aln-precalc/quantities_cascade.h5\nMainProcess root INFO All transfer functions loaded.\nMainProcess root INFO Loading precomputed transfer functions from /Users/nikola/.virtualenvs/neurolib/lib/python3.7/site-packages/neurolib/models/multimodel/builder/../../aln/aln-precalc/quantities_cascade.h5\nMainProcess root INFO All transfer functions loaded.\nMainProcess root INFO ALNNode: Model initialized.\n\n
# use the same loss function as example-2.1\ndef evaluateSimulation(traj):\n # The trajectory id is provided as an attribute\n rid = traj.id\n # this function provides the model with the particular\n # parameter set for this given run\n model = evolution.getModelFromTraj(traj)\n # parameters can also be modified after loading\n model.params[\"dt\"] = 0.1\n model.params[\"duration\"] = 2 * 1000.0\n # and the simulation is run\n model.run()\n\n # compute the power spectrum\n frs, powers = func.getPowerSpectrum(\n model.r_mean_EXC[:, -int(1000 / model.params[\"dt\"]) :], dt=model.params[\"dt\"]\n )\n # find the peak frequency\n domfr = frs[np.argmax(powers)]\n # fitness evaluation: let's try to find a 25 Hz oscillation\n fitness = abs(domfr - 25)\n # deap needs a fitness *tuple*!\n fitness_tuple = ()\n # more fitness values could be added\n fitness_tuple += (fitness,)\n # we need to return the fitness tuple and the outputs of the model\n return fitness_tuple, model.outputs\n
pars = ParameterSpace(\n [\"*EXC*input*mu\", \"*INH*input*mu\"],\n [[0.0, 4.0], [0.0, 4.0]],\n allow_star_notation=True,\n)\nweightList = [-1.0]\n\nevolution = Evolution(\n evalFunction=evaluateSimulation,\n parameterSpace=pars,\n model=aln_mm, # use our `MultiModel` here\n weightList=weightList,\n # POP_INIT_SIZE=16, # multiple of number of cores\n # POP_SIZE=16,\n # low numbers for testing; for a real evolution use a higher number of runs\n POP_INIT_SIZE=4, # multiple of number of cores\n POP_SIZE=4,\n NGEN=2,\n filename=\"example-4.2-evolution.hdf\",\n)\n
\nMainProcess root INFO Trajectory Name: results-2021-07-07-16H-45M-51S\nMainProcess root INFO Storing data to: ./data/hdf/example-4.2-evolution.hdf\nMainProcess root INFO Trajectory Name: results-2021-07-07-16H-45M-51S\nMainProcess root INFO Number of cores: 8\nMainProcess pypet.storageservice.HDF5StorageService INFO I will use the hdf5 file `./data/hdf/example-4.2-evolution.hdf`.\nMainProcess pypet.environment.Environment INFO Environment initialized.\nMainProcess root INFO Evolution: Using algorithm: adaptive\nMainProcess root INFO Evolution: Individual generation: <function randomParametersAdaptive at 0x159a6df80>\nMainProcess root INFO Evolution: Mating operator: <function cxBlend at 0x159a43560>\nMainProcess root INFO Evolution: Mutation operator: <function gaussianAdaptiveMutation_nStepSizes at 0x159a70440>\nMainProcess root INFO Evolution: Parent selection: <function selRank at 0x159a70170>\nMainProcess root INFO Evolution: Selection operator: <function selBest_multiObj at 0x159a70200>\n\n
evolution.run(verbose=False)\n
\nMainProcess root INFO Evaluating initial population of size 16 ...\nMainProcess pypet.environment.Environment INFO Starting multiprocessing with at most 8 processes running at the same time.\nMainProcess root INFO ----------- Generation 1 -----------\nMainProcess root INFO Best individual is [1.2026081698319384, 0.7330736886493492, 1.3333333333333333, 1.3333333333333333]\nMainProcess root INFO Score: -5.0\nMainProcess root INFO Fitness: (5.0,)\nMainProcess root INFO ----------- Generation 2 -----------\nMainProcess root INFO Best individual is [1.2026081698319384, 0.7330736886493492, 1.3333333333333333, 1.3333333333333333]\nMainProcess root INFO Score: -5.0\nMainProcess root INFO Fitness: (5.0,)\n(repetitive pypet storage log truncated)\nMainProcess pypet.environment.Environment INFO 
\n************************************************************\nFINISHED FINAL STORING of trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.environment.Environment INFO All runs of trajectory `results-2021-07-07-16H-45M-51S` were completed successfully.\nMainProcess pypet.storageservice.HDF5StorageService INFO Start storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `config`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `results`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `derived_parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess root INFO ----------- Generation 3 -----------\nMainProcess root INFO Best individual is [1.3018429030776317, 0.4613335264234024, 1.2708417799029919, 0.3261423030366109]\nMainProcess root INFO Score: -2.0\nMainProcess root INFO Fitness: (2.0,)\nMainProcess root INFO --- Population statistics ---\nMainProcess root INFO Replacing 0 invalid individuals.\nMainProcess pypet.environment.Environment INFO I am preparing the Trajectory for the experiment and initialise the store.\nMainProcess pypet.environment.Environment INFO Initialising the storage for the trajectory.\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nSTARTING runs of 
trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO Starting multiprocessing with at most 8 processes running at the same time.\nMainProcess pypet INFO PROGRESS: Finished 64/80 runs [================ ] 80.0%\nMainProcess pypet INFO PROGRESS: Finished 68/80 runs [================= ] 85.0%, remaining: 0:00:14\nMainProcess pypet INFO PROGRESS: Finished 72/80 runs [================== ] 90.0%, remaining: 0:00:05\nMainProcess pypet INFO PROGRESS: Finished 76/80 runs [=================== ] 95.0%, remaining: 0:00:03\nMainProcess pypet INFO PROGRESS: Finished 80/80 runs [====================]100.0%\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nFINISHED all runs of trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.environment.Environment INFO \n************************************************************\nSTARTING FINAL STORING of trajectory\n`results-2021-07-07-16H-45M-51S`\n************************************************************\n\nMainProcess pypet.storageservice.HDF5StorageService INFO Start storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `config`.\nMainProcess 
pypet.storageservice.HDF5StorageService INFO Storing branch `parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `results`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `derived_parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nFINISHED FINAL STORING of trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.environment.Environment INFO All runs of trajectory `results-2021-07-07-16H-45M-51S` were completed successfully.\nMainProcess pypet.storageservice.HDF5StorageService INFO Start storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `config`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `results`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `derived_parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess root INFO ----------- Generation 4 -----------\nMainProcess root INFO Best individual is [1.3018429030776317, 0.4613335264234024, 1.2708417799029919, 0.3261423030366109]\nMainProcess root INFO Score: -2.0\nMainProcess root INFO Fitness: (2.0,)\nMainProcess root INFO --- Population statistics ---\nMainProcess root INFO Replacing 0 invalid individuals.\nMainProcess pypet.environment.Environment INFO I am preparing the Trajectory for the experiment and initialise the store.\nMainProcess pypet.environment.Environment INFO Initialising the storage for the trajectory.\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising 
storage or updating meta data of Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nSTARTING runs of trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO Starting multiprocessing with at most 8 processes running at the same time.\nMainProcess pypet INFO PROGRESS: Finished 80/96 runs [================ ] 83.3%\nMainProcess pypet INFO PROGRESS: Finished 82/96 runs [================= ] 85.4%, remaining: 0:00:33\nMainProcess pypet INFO PROGRESS: Finished 87/96 runs [================== ] 90.6%, remaining: 0:00:06\nMainProcess pypet INFO PROGRESS: Finished 92/96 runs [=================== ] 95.8%, remaining: 0:00:03\nMainProcess pypet INFO PROGRESS: Finished 96/96 runs [====================]100.0%\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nFINISHED all runs of trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.environment.Environment INFO \n************************************************************\nSTARTING FINAL STORING of 
trajectory\n`results-2021-07-07-16H-45M-51S`\n************************************************************\n\nMainProcess pypet.storageservice.HDF5StorageService INFO Start storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `config`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `results`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `derived_parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nFINISHED FINAL STORING of trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.environment.Environment INFO All runs of trajectory `results-2021-07-07-16H-45M-51S` were completed successfully.\nMainProcess pypet.storageservice.HDF5StorageService INFO Start storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `config`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `results`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `derived_parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess root INFO ----------- Generation 5 -----------\nMainProcess root INFO Best individual is [1.3018429030776317, 0.4613335264234024, 1.2708417799029919, 0.3261423030366109]\nMainProcess root INFO Score: -2.0\nMainProcess root INFO Fitness: (2.0,)\nMainProcess root INFO --- Population statistics ---\nMainProcess root INFO 
Replacing 0 invalid individuals.\nMainProcess pypet.environment.Environment INFO I am preparing the Trajectory for the experiment and initialise the store.\nMainProcess pypet.environment.Environment INFO Initialising the storage for the trajectory.\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nSTARTING runs of trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO Starting multiprocessing with at most 8 processes running at the same time.\nMainProcess pypet INFO PROGRESS: Finished 96/112 runs [================= ] 85.7%\nMainProcess pypet INFO PROGRESS: Finished 101/112 runs [================== ] 90.2%, remaining: 0:00:10\nMainProcess pypet INFO PROGRESS: Finished 107/112 runs [=================== ] 95.5%, remaining: 0:00:04\nMainProcess pypet INFO PROGRESS: Finished 112/112 runs [====================]100.0%\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nFINISHED all runs of 
trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.environment.Environment INFO \n************************************************************\nSTARTING FINAL STORING of trajectory\n`results-2021-07-07-16H-45M-51S`\n************************************************************\n\nMainProcess pypet.storageservice.HDF5StorageService INFO Start storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `config`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `results`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `derived_parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nFINISHED FINAL STORING of trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.environment.Environment INFO All runs of trajectory `results-2021-07-07-16H-45M-51S` were completed successfully.\nMainProcess pypet.storageservice.HDF5StorageService INFO Start storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `config`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `results`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `derived_parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess root INFO ----------- Generation 6 -----------\nMainProcess root INFO Best 
individual is [1.3018429030776317, 0.4613335264234024, 1.2708417799029919, 0.3261423030366109]\nMainProcess root INFO Score: -2.0\nMainProcess root INFO Fitness: (2.0,)\nMainProcess root INFO --- Population statistics ---\nMainProcess root INFO Replacing 0 invalid individuals.\nMainProcess pypet.environment.Environment INFO I am preparing the Trajectory for the experiment and initialise the store.\nMainProcess pypet.environment.Environment INFO Initialising the storage for the trajectory.\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nSTARTING runs of trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO Starting multiprocessing with at most 8 processes running at the same time.\nMainProcess pypet INFO PROGRESS: Finished 112/128 runs [================= ] 87.5%\nMainProcess pypet INFO PROGRESS: Finished 116/128 runs [================== ] 90.6%, remaining: 0:00:14\nMainProcess pypet INFO PROGRESS: Finished 122/128 runs [=================== ] 95.3%, remaining: 0:00:05\nMainProcess pypet INFO PROGRESS: Finished 128/128 runs [====================]100.0%\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO 
Finished init or meta data update for `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nFINISHED all runs of trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.environment.Environment INFO \n************************************************************\nSTARTING FINAL STORING of trajectory\n`results-2021-07-07-16H-45M-51S`\n************************************************************\n\nMainProcess pypet.storageservice.HDF5StorageService INFO Start storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `config`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `results`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `derived_parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nFINISHED FINAL STORING of trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.environment.Environment INFO All runs of trajectory `results-2021-07-07-16H-45M-51S` were completed successfully.\nMainProcess pypet.storageservice.HDF5StorageService INFO Start storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `config`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `results`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch 
`derived_parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess root INFO ----------- Generation 7 -----------\nMainProcess root INFO Best individual is [0.8975385309150987, 0.11079193956336597, 0.23750510019720242, 0.2003002444648563]\nMainProcess root INFO Score: -1.0\nMainProcess root INFO Fitness: (1.0,)\nMainProcess root INFO --- Population statistics ---\nMainProcess root INFO Replacing 0 invalid individuals.\nMainProcess pypet.environment.Environment INFO I am preparing the Trajectory for the experiment and initialise the store.\nMainProcess pypet.environment.Environment INFO Initialising the storage for the trajectory.\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nSTARTING runs of trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO Starting multiprocessing with at most 8 processes running at the same time.\nMainProcess pypet INFO PROGRESS: Finished 128/144 runs [================= ] 88.9%\nMainProcess pypet INFO PROGRESS: Finished 130/144 runs [================== ] 90.3%, remaining: 0:00:32\nMainProcess pypet INFO PROGRESS: Finished 137/144 runs [=================== ] 95.1%, remaining: 0:00:07\nMainProcess pypet INFO PROGRESS: Finished 144/144 runs 
[====================]100.0%\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nFINISHED all runs of trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.environment.Environment INFO \n************************************************************\nSTARTING FINAL STORING of trajectory\n`results-2021-07-07-16H-45M-51S`\n************************************************************\n\nMainProcess pypet.storageservice.HDF5StorageService INFO Start storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `config`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `results`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `derived_parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nFINISHED FINAL STORING of trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.environment.Environment INFO All runs of trajectory `results-2021-07-07-16H-45M-51S` were completed successfully.\nMainProcess pypet.storageservice.HDF5StorageService INFO Start storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `config`.\nMainProcess 
pypet.storageservice.HDF5StorageService INFO Storing branch `parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `results`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `derived_parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess root INFO ----------- Generation 8 -----------\nMainProcess root INFO Best individual is [1.2018893263483494, 0.26004390251897785, 0.24989089918822285, 0.08052584592439692]\nMainProcess root INFO Score: 0.0\nMainProcess root INFO Fitness: (0.0,)\nMainProcess root INFO --- Population statistics ---\nMainProcess root INFO Replacing 0 invalid individuals.\nMainProcess pypet.environment.Environment INFO I am preparing the Trajectory for the experiment and initialise the store.\nMainProcess pypet.environment.Environment INFO Initialising the storage for the trajectory.\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nSTARTING runs of trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO Starting multiprocessing with at most 8 processes running at the same time.\nMainProcess pypet INFO PROGRESS: Finished 144/160 runs [================== ] 90.0%\nMainProcess pypet INFO PROGRESS: 
Finished 152/160 runs [=================== ] 95.0%, remaining: 0:00:05\nMainProcess pypet INFO PROGRESS: Finished 160/160 runs [====================]100.0%\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nFINISHED all runs of trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.environment.Environment INFO \n************************************************************\nSTARTING FINAL STORING of trajectory\n`results-2021-07-07-16H-45M-51S`\n************************************************************\n\nMainProcess pypet.storageservice.HDF5StorageService INFO Start storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `config`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `results`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `derived_parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nFINISHED FINAL STORING of trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.environment.Environment INFO All runs of trajectory `results-2021-07-07-16H-45M-51S` were completed successfully.\nMainProcess pypet.storageservice.HDF5StorageService INFO Start storing Trajectory 
`results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `config`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `results`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `derived_parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess root INFO ----------- Generation 9 -----------\nMainProcess root INFO Best individual is [1.2018893263483494, 0.26004390251897785, 0.24989089918822285, 0.08052584592439692]\nMainProcess root INFO Score: 0.0\nMainProcess root INFO Fitness: (0.0,)\nMainProcess root INFO --- Population statistics ---\nMainProcess root INFO --- End of evolution ---\nMainProcess root INFO Best individual is [1.2018893263483494, 0.26004390251897785, 0.24989089918822285, 0.08052584592439692], (0.0,)\nMainProcess root INFO --- End of evolution ---\nMainProcess pypet.storageservice.HDF5StorageService INFO Start storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `config`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `results`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `derived_parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished storing Trajectory `results-2021-07-07-16H-45M-51S`.\n\n
evolution.info(plot=True)
/Users/nikola/.virtualenvs/neurolib/lib/python3.7/site-packages/neurolib/optimize/evolution/evolutionaryUtils.py:212: UserWarning: This figure includes Axes that are not compatible with tight_layout, so results might be incorrect.
  plt.tight_layout()
> Simulation parameters
HDF file storage: ./data/hdf/example-4.2-evolution.hdf
Trajectory Name: results-2021-07-07-16H-45M-51S
Duration of evaluating initial population 0:00:11.681965
Duration of evolution 0:01:31.868614
Model: <class 'neurolib.models.multimodel.model.MultiModel'>
Model name: ALNNode
Eval function: <function evaluateSimulation at 0x15b06a830>
Parameter space: {'*EXC*noise*mu': [0.0, 4.0], '*INH*noise*mu': [0.0, 4.0]}
> Evolution parameters
Number of generations: 10
Initial population size: 16
Population size: 16
> Evolutionary operators
Mating operator: <function cxBlend at 0x159a43560>
Mating parameter: {'alpha': 0.5}
Selection operator: <function selBest_multiObj at 0x159a70200>
Selection parameter: {}
Parent selection operator: <function selRank at 0x159a70170>
Comments: no comments
--- Info summary ---
Valid: 16
Mean score (weighted fitness): -3.6
Parameter distribution (Generation 9):
*EXC*noise*mu: mean: 1.1141, std: 0.2046
*INH*noise*mu: mean: 0.4791, std: 0.2361
--------------------
Best 5 individuals:
Printing 5 individuals
Individual 0
  Fitness values: 0.0
  Score: 0.0
  Weighted fitness: -0.0
  Stats mean 0.00 std 0.00 min 0.00 max 0.00
  model.params["*EXC*noise*mu"] = 1.20
  model.params["*INH*noise*mu"] = 0.26
Individual 1
  Fitness values: 0.0
  Score: 0.0
  Weighted fitness: -0.0
  Stats mean 0.00 std 0.00 min 0.00 max 0.00
  model.params["*EXC*noise*mu"] = 1.01
  model.params["*INH*noise*mu"] = 0.16
Individual 2
  Fitness values: 1.0
  Score: -1.0
  Weighted fitness: -1.0
  Stats mean 1.00 std 0.00 min 1.00 max 1.00
  model.params["*EXC*noise*mu"] = 0.90
  model.params["*INH*noise*mu"] = 0.11
Individual 3
  Fitness values: 2.0
  Score: -2.0
  Weighted fitness: -2.0
  Stats mean 2.00 std 0.00 min 2.00 max 2.00
  model.params["*EXC*noise*mu"] = 1.30
  model.params["*INH*noise*mu"] = 0.46
Individual 4
  Fitness values: 2.0
  Score: -2.0
  Weighted fitness: -2.0
  Stats mean 2.00 std 0.00 min 2.00 max 2.00
  model.params["*EXC*noise*mu"] = 0.96
  model.params["*INH*noise*mu"] = 0.25
--------------------
There are 16 valid individuals
Mean score across population: -3.6
MainProcess root INFO Saving plot to ./data/figures/results-2021-07-07-16H-45M-51S_hist_9.png
Since everything in MultiModel is a class, we can simply subclass ExcitatoryWilsonCowanMass and edit or add only the attributes we need.
Our adaptation current will come in as: \\(\\dot{w} = -\\frac{w}{\\tau_{A}} + b \\cdot r\\)
class AdaptationExcitatoryWilsonCowanMass(ExcitatoryWilsonCowanMass):\n # here we only edit attributes that will change!\n # slightly edit name and label\n name = \"Wilson-Cowan excitatory mass with adaptation\"\n label = f\"WCmass{EXC}_adapt\"\n\n num_state_variables = 2\n # same number of noise variables\n\n # coupling variables - the same, no need to do anything\n # add w as a variable\n state_variable_names = [f\"q_mean_{EXC}\", \"w\"]\n # mass type and couplings are the same\n\n # add parameters for adaptation current - b and tauA\n required_params = [\"a\", \"mu\", \"tau\", \"ext_drive\", \"b\", \"tauA\"]\n\n # same input noise\n\n def __init__(self, params=None, seed=None):\n # edit init and pass default parameters for adaptation\n super().__init__(params=params or WC_ADAPT_EXC_DEFAULT_PARAMS, seed=seed)\n\n def _initialize_state_vector(self):\n # need to add init for adaptation variable w\n np.random.seed(self.seed)\n self.initial_state = [0.05 * np.random.uniform(0, 1), 0.0]\n\n def _derivatives(self, coupling_variables):\n # edit derivatives\n [x, w] = self._unwrap_state_vector()\n d_x = (\n -x\n + (1.0 - x)\n * self._sigmoid(\n coupling_variables[\"node_exc_exc\"]\n - coupling_variables[\"node_exc_inh\"]\n + coupling_variables[\"network_exc_exc\"]\n + self.params[\"ext_drive\"]\n - w # subtract adaptation current\n )\n + system_input(self.noise_input_idx[0])\n ) / self.params[\"tau\"]\n # now define adaptation dynamics\n d_w = -w / self.params[\"tauA\"] + self.params[\"b\"] * x\n\n return [d_x, d_w]\n\n\n# define default set of parameters\nWC_ADAPT_EXC_DEFAULT_PARAMS = {\n # just copy all default parameters from non-adaptation version\n **WC_EXC_DEFAULT_PARAMS,\n # add adaptation parameters\n \"tauA\": 500.0, # ms\n \"b\": 1.0,\n}\n
That's it! Now we have our shiny excitatory Wilson-Cowan mass with adaptation. Next, we need to create a Node with one excitatory WC mass with adaptation and one good old inhibitory mass without adaptation. The basic inhibitory mass is already implemented, so there is nothing to do there. Below we create our Node with adaptation.
class WilsonCowanNodeWithAdaptation(WilsonCowanNode):\n # start by subclassing the basic WilsonCowanNode and, again,\n # just change what has to be changed\n\n name = \"Wilson-Cowan node with adaptation\"\n label = \"WCnode_adapt\"\n\n # default coupling and outputs are the same\n\n def __init__(\n self,\n exc_params=None,\n inh_params=None,\n connectivity=WC_NODE_DEFAULT_CONNECTIVITY,\n ):\n # here we just pass our new `AdaptationExcitatoryWilsonCowanMass` instead of\n # `ExcitatoryWilsonCowanMass`, otherwise it is the same\n excitatory_mass = AdaptationExcitatoryWilsonCowanMass(exc_params)\n excitatory_mass.index = 0\n inhibitory_mass = InhibitoryWilsonCowanMass(inh_params)\n inhibitory_mass.index = 1\n # the only trick is, we want to call super() and init Node class, BUT\n # just calling super().__init__() will actually call parent's init, and in\n # this case, our parent is `WilsonCowanNode`... we need to call grandparent's\n # __init__.. fortunately, this can be done in python no problemo\n # instead of calling super().__init__(), we need to call\n # super(<current parent>, self).__init__()\n super(WilsonCowanNode, self).__init__(\n neural_masses=[excitatory_mass, inhibitory_mass],\n local_connectivity=connectivity,\n # within W-C node there are no local delays\n local_delays=None,\n )\n
And done. Now we can run and compare the WC node with and without adaptation.
\nMainProcess root INFO WCnode: Model initialized.\nMainProcess root INFO WCnode_adapt: Model initialized.\n\n
# set parameters\nwc_basic.params[\"*EXC*ext_drive\"] = 0.8\nwc_basic.params[\"duration\"] = 2000.0\nwc_basic.params[\"sampling_dt\"] = 1.0\n\n# higher external input due to adaptation\nwc_adapt.params[\"*EXC*ext_drive\"] = 5.5\nwc_adapt.params[\"duration\"] = 2000.0\nwc_adapt.params[\"sampling_dt\"] = 1.0\nwc_adapt.params[\"*b\"] = 0.1\n
wc_basic.run()\nwc_adapt.run()\n
\nMainProcess root INFO Initialising jitcdde backend...\nMainProcess root INFO Setting up the DDE system...\n/Users/nikola/.virtualenvs/neurolib/lib/python3.7/site-packages/jitcdde/_jitcdde.py:1491: UserWarning: Your input past does not begin at t=0 but at t=1.0. Values before the beginning of the past will be extrapolated. You very likely do not want this.\n warn(f\"Your input past does not begin at t=0 but at t={input[0].time}. Values before the beginning of the past will be extrapolated. You very likely do not want this.\")\nMainProcess root INFO Compiling to C...\nMainProcess root INFO Setting past of the state vector...\nMainProcess root INFO Integrating for 2000 time steps...\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2000/2000 [00:00<00:00, 153705.07it/s]\nMainProcess root INFO Integration done.\nMainProcess root INFO `run` call took 1.24 s\nMainProcess root INFO Initialising jitcdde backend...\nMainProcess root INFO Setting up the DDE system...\nMainProcess root INFO Compiling to C...\n\n
\nUsing default integration parameters.\n\n
\nMainProcess root INFO Setting past of the state vector...\nMainProcess root INFO Integrating for 2000 time steps...\n 0%| | 0/2000 [00:00<?, ?it/s]/Users/nikola/.virtualenvs/neurolib/lib/python3.7/site-packages/jitcdde/_jitcdde.py:791: UserWarning: The target time is smaller than the current time. No integration step will happen. The returned state will be extrapolated from the interpolating Hermite polynomial for the last integration step. You may see this because you try to integrate backwards in time, in which case you did something wrong. You may see this just because your sampling step is small, in which case there is no need to worry.\n warn(\"The target time is smaller than the current time. No integration step will happen. The returned state will be extrapolated from the interpolating Hermite polynomial for the last integration step. You may see this because you try to integrate backwards in time, in which case you did something wrong. You may see this just because your sampling step is small, in which case there is no need to worry.\")\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2000/2000 [00:00<00:00, 100000.10it/s]\nMainProcess root INFO Integration done.\nMainProcess root INFO `run` call took 0.79 s\n\n
All done. Now we can study adaptation dynamics in the Wilson-Cowan model by, for example, running an exploration over the adaptation parameters, and eventually an evolution with the target of slow oscillations (i.e. oscillations with a frequency of ~1Hz), optimising with respect to parameters such as b, tauA, ext_drive, and others. Happy hacking!
In the last two examples we showcased the basics of the MultiModel framework and how to create a new model from scratch. Now we will look at some advanced topics, such as how exactly the integration works, why we have two integration backends, and how to run optimisation and exploration with MultiModel.
"},{"location":"examples/example-4.2-multimodel-backends-and-optimization/#the-tale-of-two-backends","title":"The tale of two backends","text":"
In the current implementation of MultiModel, users may choose from two different integration backends. Before diving into the details of both backends, let us quickly revise how exactly MultiModel integrates the model equations.
Almost all whole-brain simulators work the same way: you define the dynamics of a single brain area, and the integration is a double loop: one over time, the other over brain areas. In other words, all brain areas have the same dynamics. In pseudo-code it would look something like:
for t in range(time_total):\n for n in range(num_areas):\n x[t, n] = integrate_step(x[t-max_delay:t-1, :])\n
Since all areas are the same, the integrate_step function simply takes the history of the state vector and applies one integration step of any scheme. This won't work in MultiModel, since it allows building heterogeneous models. The internal workings of MultiModel can be explained in a couple of steps."},{"location":"examples/example-4.2-multimodel-backends-and-optimization/#state-vector","title":"State vector","text":"
Since the inner loop in the pseudocode above is not doable in MultiModel due to heterogeneity, we solve this simply by concatenating all individual equations into one big state vector (that is also the reason why all NeuralMass and Node objects have their indices). When the model is ready for simulation, we iterate over the Nodes and the NeuralMasses within these nodes and stack their equations into a single list. The concatenation is done via the _derivatives() function: - in NeuralMass, _derivatives() implements the actual dynamics as delay differential equations - in Node, _derivatives() stacks all equations from the NeuralMasses within this Node into one list - in Network, _derivatives() stacks all equations from the Nodes within this Network.
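The stacking can be sketched with plain Python lists standing in for the symbolic equations (the Toy* classes below are illustrative, not MultiModel's actual API):

```python
# Toy hierarchy mirroring the NeuralMass -> Node -> Network stacking described
# above; strings stand in for the symengine expressions MultiModel really uses.
class ToyMass:
    def __init__(self, equations):
        self.equations = equations  # dynamics of this mass

    def _derivatives(self):
        return self.equations


class ToyNode:
    def __init__(self, masses):
        self.masses = masses

    def _derivatives(self):
        # stack all equations from the masses within this node into one list
        return [eq for mass in self.masses for eq in mass._derivatives()]


class ToyNetwork:
    def __init__(self, nodes):
        self.nodes = nodes

    def _derivatives(self):
        # stack all equations from the nodes within this network
        return [eq for node in self.nodes for eq in node._derivatives()]


node = lambda i: ToyNode([ToyMass([f"d_exc_{i}"]), ToyMass([f"d_inh_{i}"])])
network = ToyNetwork([node(0), node(1)])
print(network._derivatives())  # one flat state vector: 4 equations
```

Each mass keeps its position in the flat list, which is why every NeuralMass and Node carries an index.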
As we have seen before, the state vector encodes the whole dynamics of the brain model, but the coupling terms are encoded only as symbols. To make simulation easier, we had to separate the individual internal dynamics from the coupling terms. The coupling comes in two flavours, reflecting the two upper levels of the hierarchy: node coupling takes care of coupling between the NeuralMasses within one Node, and network coupling takes care of coupling between the Nodes in one Network. The coupling is implemented in the _sync() function. This function returns a list where each item is a tuple of length two: (name_of_the_symbol, symbolic_term_representing_the_coupling). The FitzHugh-Nagumo model has only one mass per node, hence there are no node couplings, but we can inspect the network coupling:
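The _sync() contract can be illustrated with a toy network coupling (plain Python stands in for the symbolic terms, and the symbol names are made up for illustration):

```python
# Toy stand-in for _sync(): return a list of (symbol_name, coupling_term)
# tuples, one per node. The real MultiModel returns symengine expressions.
def toy_network_sync(cmat, rates):
    return [
        (f"network_exc_exc_{i}",
         sum(cmat[i][j] * rates[j] for j in range(len(rates))))
        for i in range(len(cmat))
    ]


cmat = [[0.0, 1.0], [0.5, 0.0]]  # toy 2-node connectivity matrix
rates = [0.2, 0.4]               # current excitatory rates of the two nodes
print(toy_network_sync(cmat, rates))
```

During integration, each named symbol in the stacked equations is replaced by its coupling term.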
The jitcdde backend was the first integration backend in MultiModel. The name stems from the fact that we use the wonderful jitcdde python package. It employs just-in-time compilation of the symbolic derivatives into C and then uses the DDE integration method proposed by Shampine and Thompson, which in turn employs the Bogacki–Shampine Runge–Kutta pair. This is the reason why the dynamics in MultiModel are defined as symbolic derivatives written in symengine. It uses an adaptive dt scheme, hence it is very useful for stiff problems. Also, if you are implementing a new model and have no idea how stiff the dynamics are, this is the backend to try first. It has reasonable speed, but for large networks and long simulations it is not the best choice.
The internal workings of the jitcdde package served as an inspiration when creating MultiModel. jitcdde naturally works with dynamics defined as symbolic equations (_derivatives()) and it also supports \"helpers\" - the helpers in our case are the coupling terms (_sync()).
Since jitcdde is rather slow, in particular for long runs or large networks, we created the numba backend. This was tricky - the whole code around MultiModel was created with jitcdde in mind, with the symbolic equations, helpers, etc. However, symbolic equations have another advantage: they are purely symbolic, so you can \"print\" them. By printing, you really just obtain a string with the equations. So what does the numba backend actually do? 1. gather all symbolic equations with _derivatives() 2. substitute the coupling symbols with the functional terms from _sync() 3. substitute the current and past state vector symbols (current_y() and past_y()) with a state vector y with the correct indices and time delays 4. now we have a complete description of dy 5. \"print\" this dy into a prepared function template (a string) inside the time for loop; you can imagine it as
for t in range(1, t_max):\n dy = np.array({dy}) # <- `dy` is printed here\n y[:, t] = y[:, t-1] + dt*dy\n
6. compile the prepared string template into an actual python function (yes, python can do this) 7. wrap the compiled function with numba.njit() to get the speed boost 8. profit
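Steps 5-7 can be sketched in miniature: print a (here hard-coded) dy into a string template, compile the string into a function, and integrate. This is a toy with plain Python lists; the real backend prints symengine expressions into a numpy-based template and wraps the compiled function with numba.njit().

```python
# toy "printed" dy for the scalar equation dy/dt = -y + 1
dy_printed = "[-y[t - 1] + 1.0]"  # <- in the real backend, `dy` is printed here

# function template as a string, with the printed dy substituted in
template = f"""
def integrate(y, dt, t_max):
    for t in range(1, t_max):
        dy = {dy_printed}
        y.append(y[t - 1] + dt * dy[0])  # Euler step
    return y
"""

# compile the string into an actual python function (yes, python can do this)
namespace = {}
exec(compile(template, "<generated>", "exec"), namespace)
integrate = namespace["integrate"]

y = integrate([0.0], dt=0.01, t_max=1001)
print(round(y[-1], 3))  # converges toward the fixed point y = 1: prints 1.0
```

Because the generated function takes all of its inputs as arguments, it only needs to be compiled once and can then be re-run cheaply, which is exactly what makes the numba backend fast for explorations.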
And yes, the numba backend simply employs the Euler integration scheme, which means that you need to think about dt.
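A toy illustration of why dt matters with Euler: for the linear test equation dy/dt = -a*y, the forward Euler scheme is only stable for dt < 2/a, so a dt that is too large makes the solution blow up instead of decay.

```python
def euler(f, y0, dt, steps):
    """Plain forward-Euler integration, the scheme the numba backend uses."""
    y = y0
    for _ in range(steps):
        y = y + dt * f(y)
    return y


decay = lambda y: -50.0 * y  # toy stiff-ish test equation dy/dt = -50 y

stable = euler(decay, 1.0, 0.01, 1000)    # dt < 2/50 = 0.04: decays correctly
unstable = euler(decay, 1.0, 0.05, 1000)  # dt > 0.04: Euler diverges
print(abs(stable) < 1e-6, abs(unstable) > 1.0)  # True True
```

This is why short comparison runs against jitcdde's adaptive scheme are a good sanity check before committing to a fixed dt.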
"},{"location":"examples/example-4.2-multimodel-backends-and-optimization/#which-backend-to-use-and-when","title":"Which backend to use and when?","text":"
The numba backend is almost always faster (except for small networks and not-too-long runs, where the two perform similarly). However, for prototyping new models, or when connecting a couple of models into a heterogeneous network, it is always a good idea to do a couple of short simulations with jitcdde first. The reason: it uses an adaptive dt, so you do not need to worry about setting a correct dt. When you have an idea of what the dynamics should look like and how fast they are, you can switch to numba, try a couple of different dt values, and select the one for which the results are closest to the jitcdde results. For exploration and optimisation with evolution, always use numba. The numba backend compiles a jit'ed function with all the parameters as arguments, hence for an exploration only one compilation is necessary, and then even when changing parameters the model runs at high speed.
"},{"location":"examples/example-4.2-multimodel-backends-and-optimization/#exploration-with-multimodel","title":"Exploration with MultiModel","text":"
So we all love neurolib not only for its speed, efficiency, and ease of use, but also for its built-in exploration and evolution frameworks. These two make studying population models a real breeze! And naturally, MultiModel supports both. First, we will showcase how to explore parameters of a MultiModel using the BoxSearch class, and we will replicate the ALN exploration from example-1-aln-parameter-exploration.ipynb.
"},{"location":"examples/example-4.2-multimodel-backends-and-optimization/#note-on-parameters-when-exploring-or-optimising","title":"Note on parameters when exploring or optimising","text":"
If you remember, MultiModel has an option for truly heterogeneous parameters, i.e. each brain node can have different parameters. This allows you to explore parameters one-by-one (truly different parameters for each node), or in bulk. As an example, consider this:
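The three parameter spaces discussed here might look like the following sketch. Plain dicts are used so the run counts can be computed directly; in neurolib they would be wrapped in ParameterSpace(..., allow_star_notation=True), and the specific Ke values are assumptions for illustration - only the star patterns matter.

```python
from itertools import product

# hypothetical star-notation parameter spaces, values assumed for illustration
parameters1 = {"*Ke": [400.0, 800.0, 1200.0]}  # Ke of all masses in all nodes
parameters2 = {"*EXC*Ke": [400.0, 800.0, 1200.0],   # Ke of excitatory masses
               "*INH*Ke": [400.0, 800.0, 1200.0]}   # Ke of inhibitory masses
parameters3 = {"ALNNode_0.ALNMassEXC_0.Ke": [400.0, 800.0, 1200.0],
               "ALNNode_1.ALNMassEXC_0.Ke": [400.0, 800.0, 1200.0]}


def num_runs(space):
    # a grid search runs one simulation per point of the Cartesian product
    return len(list(product(*space.values())))


print(num_runs(parameters1), num_runs(parameters2), num_runs(parameters3))
# 3 9 9
```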
In the first example (parameters1), we explore the Ke parameter and set it to the same value for all nodes and all masses. In the end, we run three simulations with a homogeneous Ke across all nodes/masses.
In the second example, we explore Ke individually for the excitatory and the inhibitory masses; however, the values are the same for all nodes. In total, we run 9 simulations with a heterogeneous Ke within one node, but the same Ke_exc and Ke_inh values used in all nodes.
Finally, in the last example, we explore only the excitatory Ke parameters, but differently for the two nodes, hence we can study how the excitatory Ke affects a 2-node network.
Of course, we can always refer to parameters by their full name (glob path), like:
parameters3 = ParameterSpace(\n {\"ALNNode_0.ALNMassINH_1.Ke\": [400, 800, 1200]},\n allow_star_notation=True # due to \".\" in the param names\n)\n
and we will simply explore over one particular Ke - that of the inhibitory mass in the first node of the network. All other Ke parameters remain constant."},{"location":"examples/example-4.2-multimodel-backends-and-optimization/#evolution-with-multimodel","title":"Evolution with MultiModel","text":"
If you're familiar with how Evolution works in core neurolib, and now that you know the perks of MultiModel with respect to exploration, that's all you need to know for optimisation with evolutionary algorithms in MultiModel. Parameters are passed via ParameterSpace with allow_star_notation=True. Below, we reproduce example-2.1-evolutionary-optimization-aln with the MultiModel version of the ALN model.
"},{"location":"examples/example-4.2-multimodel-backends-and-optimization/#adapting-existing-models-in-multimodel-framework","title":"Adapting existing models in MultiModel framework","text":"
MultiModel comes with a few implemented models, so you can play with them right away. Due to the hierarchical architecture based on class inheritance in python, it is also very easy to adapt existing models. As an example, MultiModel comes with an implementation of the Wilson-Cowan model. In our version, there are excitatory and inhibitory masses in one node. Let's say you want to add an adaptation current to the excitatory mass of the Wilson-Cowan model.
Parameter box search for a given model and a range of parameters.
Source code in neurolib/optimize/exploration/exploration.py
class BoxSearch:\n\"\"\"\n Paremeter box search for a given model and a range of parameters.\n \"\"\"\n\n def __init__(\n self,\n model=None,\n parameterSpace=None,\n evalFunction=None,\n filename=None,\n saveAllModelOutputs=False,\n ncores=None,\n ):\n\"\"\"Either a model has to be passed, or an evalFunction. If an evalFunction\n is passed, then the evalFunction will be called and the model is accessible to the\n evalFunction via `self.getModelFromTraj(traj)`. The parameters of the current\n run are accessible via `self.getParametersFromTraj(traj)`.\n\n If no evaluation function is passed, then the model is simulated using `Model.run()`\n for every parameter.\n\n :param model: Model to run for each parameter (or model to pass to the evaluation function if an evaluation\n function is used), defaults to None\n :type model: `neurolib.models.model.Model`, optional\n :param parameterSpace: Parameter space to explore, defaults to None\n :type parameterSpace: `neurolib.utils.parameterSpace.ParameterSpace`, optional\n :param evalFunction: Evaluation function to call for each run., defaults to None\n :type evalFunction: function, optional\n :param filename: HDF5 storage file name, if left empty, defaults to ``exploration.hdf``\n :type filename: str\n :param saveAllModelOutputs: If True, save all outputs of model, else only default output of the model will be\n saved. 
Note: if saveAllModelOutputs==False and the model's parameter model.params['bold']==True, then BOLD\n output will be saved as well, defaults to False\n :type saveAllModelOutputs: bool\n\n :param ncores: Number of cores to simulate on (max cores default), defaults to None\n :type ncores: int, optional\n \"\"\"\n self.model = model\n if evalFunction is None and model is not None:\n self.evalFunction = self._runModel\n elif evalFunction is not None:\n self.evalFunction = evalFunction\n\n assert (evalFunction is not None) or (\n model is not None\n ), \"Either a model has to be specified or an evalFunction.\"\n\n assert parameterSpace is not None, \"No parameters to explore.\"\n\n if parameterSpace.kind == \"sequence\":\n assert model is not None, \"Model must be defined for sequential explore\"\n\n self.parameterSpace = parameterSpace\n self.exploreParameters = parameterSpace.dict()\n\n # TODO: use random ICs for every explored point or rather reuse the ones that are generated at model\n # initialization\n self.useRandomICs = False\n\n filename = filename or \"exploration.hdf\"\n self.filename = filename\n\n self.saveAllModelOutputs = saveAllModelOutputs\n\n # number of cores\n if ncores is None:\n ncores = multiprocessing.cpu_count()\n self.ncores = ncores\n logging.info(\"Number of processes: {}\".format(self.ncores))\n\n # bool to check whether pypet was initialized properly\n self.initialized = False\n self._initializeExploration(self.filename)\n\n self.results = None\n\n def _initializeExploration(self, filename=\"exploration.hdf\"):\n\"\"\"Initialize the pypet environment\n\n :param filename: hdf filename to store the results in , defaults to \"exploration.hdf\"\n :type filename: str, optional\n \"\"\"\n # create hdf file path if it does not exist yet\n pathlib.Path(paths.HDF_DIR).mkdir(parents=True, exist_ok=True)\n\n # set default hdf filename\n self.HDF_FILE = os.path.join(paths.HDF_DIR, filename)\n\n # initialize pypet environment\n trajectoryName = 
\"results\" + datetime.datetime.now().strftime(\"-%Y-%m-%d-%HH-%MM-%SS\")\n trajectoryfilename = self.HDF_FILE\n\n # set up the pypet environment\n env = pypet.Environment(\n trajectory=trajectoryName,\n filename=trajectoryfilename,\n multiproc=True,\n ncores=self.ncores,\n complevel=9,\n log_config=paths.PYPET_LOGGING_CONFIG,\n )\n self.env = env\n # Get the trajectory from the environment\n self.traj = env.trajectory\n self.trajectoryName = self.traj.v_name\n\n # Add all parameters to the pypet trajectory\n if self.model is not None:\n # if a model is specified, use the default parameter of the\n # model to initialize pypet\n self._addParametersToPypet(self.traj, self.model.params)\n else:\n # else, use a random parameter of the parameter space\n self._addParametersToPypet(self.traj, self.parameterSpace.getRandom(safe=True))\n\n # Tell pypet which parameters to explore\n self.pypetParametrization = self.parameterSpace.get_parametrization()\n # explicitely add all parameters within star notation, hence unwrap star notation into actual params names\n if self.parameterSpace.star:\n assert self.model is not None, \"With star notation, model cannot be None\"\n self.pypetParametrization = unwrap_star_dotdict(self.pypetParametrization, self.model)\n self.nRuns = len(self.pypetParametrization[list(self.pypetParametrization.keys())[0]])\n logging.info(f\"Number of parameter configurations: {self.nRuns}\")\n if self.parameterSpace.kind == \"sequence\":\n # if sequential explore, need to fill-in the default parameters instead of None\n self.pypetParametrization = self._fillin_default_parameters_for_sequential(\n self.pypetParametrization, self.model.params\n )\n self.traj.f_explore(self.pypetParametrization)\n\n # initialization done\n logging.info(\"BoxSearch: Environment initialized.\")\n self.initialized = True\n\n @staticmethod\n def _fillin_default_parameters_for_sequential(parametrization, model_params):\n fresh_dict = {}\n for k, params in parametrization.items():\n 
fresh_dict[k] = [v if v is not None else model_params[k] for v in params]\n return fresh_dict\n\n def _addParametersToPypet(self, traj, params):\n\"\"\"This function registers the parameters of the model to Pypet.\n Parameters can be nested dictionaries. They are unpacked and stored recursively.\n\n :param traj: Pypet trajectory to store the parameters in\n :type traj: `pypet.trajectory.Trajectory`\n :param params: Parameter dictionary\n :type params: dict, dict[dict,]\n \"\"\"\n\n def addParametersRecursively(traj, params, current_level):\n # make dummy list if just string\n if isinstance(current_level, str):\n current_level = [current_level]\n # iterate dict\n for key, value in params.items():\n # if another dict - recurse and increase level\n if isinstance(value, dict):\n addParametersRecursively(traj, value, current_level + [key])\n else:\n param_address = \".\".join(current_level + [key])\n value = \"None\" if value is None else value\n traj.f_add_parameter(param_address, value)\n\n addParametersRecursively(traj, params, [])\n\n def saveToPypet(self, outputs, traj):\n\"\"\"This function takes simulation results in the form of a nested dictionary\n and stores all data into the pypet hdf file.\n\n :param outputs: Simulation outputs as a dictionary.\n :type outputs: dict\n :param traj: Pypet trajectory\n :type traj: `pypet.trajectory.Trajectory`\n \"\"\"\n\n def makeSaveStringForPypet(value, savestr):\n\"\"\"Builds the pypet-style results string from the results\n dictionary's keys.\n \"\"\"\n for k, v in value.items():\n if isinstance(v, dict):\n _savestr = savestr + k + \".\"\n makeSaveStringForPypet(v, _savestr)\n else:\n _savestr = savestr + k\n self.traj.f_add_result(_savestr, v)\n\n assert isinstance(outputs, dict), \"Outputs must be an instance of dict.\"\n value = outputs\n savestr = \"results.$.\"\n makeSaveStringForPypet(value, savestr)\n\n def _runModel(self, traj):\n\"\"\"If not evaluation function is given, we assume that a model will be simulated.\n 
This function will be called by pypet directly and therefore wants a pypet trajectory as an argument\n\n :param traj: Pypet trajectory\n :type traj: `pypet.trajectory.Trajectory`\n \"\"\"\n if self.useRandomICs:\n logging.warn(\"Random initial conditions not implemented yet\")\n # get parameters of this run from pypet trajectory\n runParams = self.getParametersFromTraj(traj)\n if self.parameterSpace.star:\n runParams = flatten_nested_dict(flat_dict_to_nested(runParams)[\"parameters\"])\n\n # set the parameters for the model\n self.model.params.update(runParams)\n\n # get kwargs from Exploration.run()\n runKwargs = {}\n if hasattr(self, \"runKwargs\"):\n runKwargs = self.runKwargs\n # run it\n self.model.run(**runKwargs)\n # save outputs\n self._saveModelOutputsToPypet(traj)\n\n def _saveModelOutputsToPypet(self, traj):\n # save all data to the pypet trajectory\n if self.saveAllModelOutputs:\n # save all results from exploration\n self.saveToPypet(self.model.outputs, traj)\n else:\n # save only the default output\n self.saveToPypet(\n {\n self.model.default_output: self.model.output,\n \"t\": self.model.outputs[\"t\"],\n },\n traj,\n )\n # save BOLD output\n # if \"bold\" in self.model.params:\n # if self.model.params[\"bold\"] and \"BOLD\" in self.model.outputs:\n # self.saveToPypet(self.model.outputs[\"BOLD\"], traj)\n if \"BOLD\" in self.model.outputs:\n self.saveToPypet(self.model.outputs[\"BOLD\"], traj)\n\n def _validatePypetParameters(self, runParams):\n\"\"\"Helper to handle None's in pypet parameters\n (used for random number generator seed)\n\n :param runParams: parameters as returned by traj.parameters.f_to_dict()\n :type runParams: dict of pypet.parameter.Parameter\n \"\"\"\n\n # fix rng seed, which is saved as a string if None\n if \"seed\" in runParams:\n if runParams[\"seed\"] == \"None\":\n runParams[\"seed\"] = None\n return runParams\n\n def getParametersFromTraj(self, traj):\n\"\"\"Returns the parameters of the current run as a (dot.able) 
dictionary\n\n :param traj: Pypet trajectory\n :type traj: `pypet.trajectory.Trajectory`\n :return: Parameter set of the current run\n :rtype: dict\n \"\"\"\n # DO NOT use short names for star notation dicts\n runParams = self.traj.parameters.f_to_dict(short_names=not self.parameterSpace.star, fast_access=True)\n runParams = self._validatePypetParameters(runParams)\n return dotdict(runParams)\n\n def getModelFromTraj(self, traj):\n\"\"\"Return the appropriate model with parameters for this run\n :params traj: Pypet trajectory of current run\n\n :returns model: Model with the parameters of this run.\n \"\"\"\n model = self.model\n runParams = self.getParametersFromTraj(traj)\n # removes keys with None values\n # runParams = {k: v for k, v in runParams.items() if v is not None}\n if self.parameterSpace.star:\n runParams = flatten_nested_dict(flat_dict_to_nested(runParams)[\"parameters\"])\n\n model.params.update(runParams)\n return model\n\n def run(self, **kwargs):\n\"\"\"\n Call this function to run the exploration\n \"\"\"\n self.runKwargs = kwargs\n assert self.initialized, \"Pypet environment not initialized yet.\"\n self._t_start_exploration = datetime.datetime.now()\n self.env.run(self.evalFunction)\n self._t_end_exploration = datetime.datetime.now()\n\n def loadResults(self, all=True, filename=None, trajectoryName=None, pypetShortNames=True, memory_cap=95.0):\n\"\"\"Load results from a hdf file of a previous simulation.\n\n :param all: Load all simulated results into memory, which will be available as the `.results` attribute. Can\n use a lot of RAM if your simulation is large, please use this with caution. 
, defaults to True\n :type all: bool, optional\n :param filename: hdf file name in which results are stored, defaults to None\n :type filename: str, optional\n :param trajectoryName: Name of the trajectory inside the hdf file, newest will be used if left empty, defaults\n to None\n :type trajectoryName: str, optional\n :param pypetShortNames: Use pypet short names as keys for the results dictionary. Use if you are experiencing\n errors due to natural naming collisions.\n :type pypetShortNames: bool\n :param memory_cap: Percentage memory cap between 0 and 100. If `all=True` is used, a memory cap can be set to\n avoid filling up the available RAM. Example: use `memory_cap = 95` to avoid loading more data if memory is\n at 95% use, defaults to 95\n :type memory_cap: float, int, optional\n \"\"\"\n\n self.loadDfResults(filename, trajectoryName)\n\n # make a list of dictionaries with results\n self.results = dotdict({})\n if all:\n logging.info(\"Loading all results to `results` dictionary ...\")\n for rInd in tqdm.tqdm(range(self.nResults), total=self.nResults):\n\n # check if enough memory is available\n if memory_cap:\n assert isinstance(memory_cap, (int, float)), \"`memory_cap` must be float.\"\n assert (memory_cap > 0) and (memory_cap < 100), \"`memory_cap` must be between 0 and 100\"\n # check ram usage with psutil\n used_memory_percent = psutil.virtual_memory()[2]\n if used_memory_percent > memory_cap:\n raise MemoryError(\n f\"Memory use is at {used_memory_percent}% and capped at {memory_cap}. 
```python
Aborting."
                    )

                self.pypetTrajectory.results[rInd].f_load()
                result = self.pypetTrajectory.results[rInd].f_to_dict(fast_access=True, short_names=pypetShortNames)
                result = dotdict(result)
                self.pypetTrajectory.results[rInd].f_remove()
                self.results[rInd] = copy.deepcopy(result)

            # Postprocess result keys if pypet short names aren't used
            # Before: results.run_00000001.outputs.rates_inh
            # After: outputs.rates_inh
            if not pypetShortNames:
                for i, r in self.results.items():
                    new_dict = dotdict({})
                    for key, value in r.items():
                        new_key = "".join(key.split(".", 2)[2:])
                        new_dict[new_key] = r[key]
                    self.results[i] = copy.deepcopy(new_dict)

        self.aggregateResultsToDfResults()

        logging.info("All results loaded.")

    def aggregateResultsToDfResults(self, arrays=True, fillna=False):
        """Aggregate all results into dfResults dataframe.

        :param arrays: Load array results (like timeseries) if True. If False, only load scalar results,
            defaults to True
        :type arrays: bool, optional
        :param fillna: Fill nan results (for example if they're not returned in a subset of runs) with zeros,
            defaults to False
        :type fillna: bool, optional
        """
        nan_value = np.nan
        # defines which variable types will be saved in the results dataframe
        SUPPORTED_TYPES = (float, int, np.ndarray, list)
        SCALAR_TYPES = (float, int)
        ARRAY_TYPES = (np.ndarray, list)

        logging.info("Aggregating results to `dfResults` ...")
        for runId, parameters in tqdm.tqdm(self.dfResults.iterrows(), total=len(self.dfResults)):
            # if the results were previously loaded into memory, use them
            if hasattr(self, "results"):
                # only if the length matches the number of results
                if len(self.results) == len(self.dfResults):
                    result = self.results[runId]
                # else, load results individually from hdf file
                else:
                    result = self.getRun(runId)
            # else, load results individually from hdf file
            else:
                result = self.getRun(runId)

            for key, value in result.items():
                # only save floats, ints and arrays
                if isinstance(value, SUPPORTED_TYPES):
                    # save 1-dim arrays
                    if isinstance(value, ARRAY_TYPES) and arrays:
                        # to save a numpy array, convert column to object type
                        if key not in self.dfResults:
                            self.dfResults[key] = None
                        self.dfResults[key] = self.dfResults[key].astype(object)
                        self.dfResults.at[runId, key] = value
                    elif isinstance(value, SCALAR_TYPES):
                        # save scalars
                        self.dfResults.loc[runId, key] = value
                else:
                    self.dfResults.loc[runId, key] = nan_value
        # drop nan columns
        self.dfResults = self.dfResults.dropna(axis="columns", how="all")

        if fillna:
            self.dfResults = self.dfResults.fillna(0)

    def loadDfResults(self, filename=None, trajectoryName=None):
        """Load results from a previous simulation.

        :param filename: hdf file name in which results are stored, defaults to None
        :type filename: str, optional
        :param trajectoryName: Name of the trajectory inside the hdf file, newest will be used if left empty,
            defaults to None
        :type trajectoryName: str, optional
        """
        # choose HDF file to load
        filename = filename or self.HDF_FILE
        self.pypetTrajectory = pu.loadPypetTrajectory(filename, trajectoryName)
        self.nResults = len(self.pypetTrajectory.f_get_run_names())

        exploredParameters = self.pypetTrajectory.f_get_explored_parameters()

        # create pandas dataframe of all runs with parameters as keys
        logging.info("Creating `dfResults` dataframe ...")
        niceParKeys = [p[11:] for p in exploredParameters.keys()]
        if not self.parameterSpace:
            niceParKeys = [p.split(".")[-1] for p in niceParKeys]
        self.dfResults = pd.DataFrame(columns=niceParKeys, dtype=object)
        for nicep, p in zip(niceParKeys, exploredParameters.keys()):
            self.dfResults[nicep] = exploredParameters[p].f_get_range()

    @staticmethod
    def _filterDictionaryBold(filt_dict, bold):
        """Filters result dictionary: either keeps ONLY BOLD results, or removes
        BOLD results.

        :param filt_dict: dictionary to filter for BOLD keys
        :type filt_dict: dict
        :param bold: whether to remove BOLD keys (bold=False) or keep only BOLD
            keys (bold=True)
        :return: filtered dict, without or only BOLD keys
        :rtype: dict
        """
        filt_dict = copy.deepcopy(filt_dict)
        if bold:
            return {k: v for k, v in filt_dict.items() if "BOLD" in k}
        else:
            return {k: v for k, v in filt_dict.items() if "BOLD" not in k}

    def _getCoordsFromRun(self, run_dict, bold=False):
        """Find coordinates of a single run - time, output and space dimensions.

        :param run_dict: dictionary with run results
        :type run_dict: dict
        :param bold: whether to do only BOLD or without BOLD results
        :type bold: bool
        :return: dictionary of coordinates for xarray
        :rtype: dict
        """
        run_dict = copy.deepcopy(run_dict)
        run_dict = self._filterDictionaryBold(run_dict, bold=bold)
        timeDictKey = ""
        if "t" in run_dict:
            timeDictKey = "t"
        else:
            for k in run_dict:
                if k.startswith("t"):
                    timeDictKey = k
                    logging.info(f"Assuming {k} to be the time axis.")
                    break
        assert len(timeDictKey) > 0, "No time array found (starting with t) in model output."
        t = run_dict[timeDictKey].copy()
        del run_dict[timeDictKey]
        return timeDictKey, {
            "output": list(run_dict.keys()),
            "space": list(range(next(iter(run_dict.values())).shape[0])),
            "time": t,
        }

    def xr(self, bold=False):
        """
        Return `xr.Dataset` from the exploration results.

        :param bold: if True, will load and return only BOLD output
        :type bold: bool
        """

        def _sanitize_nc_key(k):
            return k.replace("*", "_").replace(".", "_").replace("|", "_")

        assert self.results is not None, "Run `loadResults()` first to populate the results"
        assert len(self.results) == len(self.dfResults)
        # create intrinsic dims for one run
        timeDictKey, run_coords = self._getCoordsFromRun(self.results[0], bold=bold)
        dataarrays = []
        orig_search_coords = self.parameterSpace.get_parametrization()
        for runId, run_result in self.results.items():
            # take exploration coordinates for this run
            expl_coords = {k: v[runId] for k, v in orig_search_coords.items()}
            outputs = []
            run_result = self._filterDictionaryBold(run_result, bold=bold)
            for key, value in run_result.items():
                if key == timeDictKey:
                    continue
                outputs.append(value)
            # create DataArray for run only - we need to add exploration coordinates
            data_temp = xr.DataArray(
                np.stack(outputs), dims=["output", "space", "time"], coords=run_coords, name="exploration"
            )
            expand_coords = {}
            # iterate exploration coordinates
            for k, v in expl_coords.items():
                # sanitize keys in the case of stars etc
                k = _sanitize_nc_key(k)
                # if single values, just assign
                if isinstance(v, (str, float, int)):
                    expand_coords[k] = [v]
                # if arrays, check whether they can be squeezed into one value
                elif isinstance(v, np.ndarray):
                    if np.unique(v).size == 1:
                        # if yes, just assign that one value
                        expand_coords[k] = [float(np.unique(v))]
                    else:
                        # if no, sorry - coordinates cannot be array
                        raise ValueError("Cannot squeeze coordinates")
            # assign exploration coordinates to the DataArray
            dataarrays.append(data_temp.expand_dims(expand_coords))

        # finally, combine all arrays into one
        if self.parameterSpace.kind == "sequence":
            # when run in sequence, cannot combine to grid, so just concatenate along new dimension
            combined = xr.concat(dataarrays, dim="run_no", coords="all")
        else:
            # sometimes combining xr.DataArrays does not work, see https://github.com/pydata/xarray/issues/3248#issuecomment-531511177
            # resolved by casting them explicitly to xr.Dataset
            combined = xr.combine_by_coords([da.to_dataset() for da in dataarrays])["exploration"]
        if self.parameterSpace.star:
            # if we explored over star params, unwrap them into attributes
            combined.attrs = {
                _sanitize_nc_key(k): list(self.model.params[k].keys()) for k in orig_search_coords.keys() if "*" in k
            }
        return combined

    def getRun(self, runId, filename=None, trajectoryName=None, pypetShortNames=True):
        """Load the simulated data of a run and its parameters from a pypetTrajectory.

        :param runId: ID of the run
        :type runId: int

        :return: Dictionary with simulated data and parameters of the run.
        :rtype: dict
        """
        # choose HDF file to load
        filename = self.HDF_FILE or filename

        # either use loaded pypetTrajectory or load from HDF file if it isn't available
        pypetTrajectory = (
            self.pypetTrajectory
            if hasattr(self, "pypetTrajectory")
            else pu.loadPypetTrajectory(filename, trajectoryName)
        )

        return pu.getRun(runId, pypetTrajectory, pypetShortNames=pypetShortNames)

    def getResult(self, runId):
        """Returns either a loaded result or reads from disk.

        :param runId: runId of result
        :type runId: int
        :return: result
        :rtype: dict
        """
        # load result from either the preloaded .results attribute (from .loadResults)
        # or from disk if results haven't been loaded yet
        return self.results[runId] if hasattr(self, "results") else self.getRun(runId)

    def info(self):
        """Print info about the current search."""
        now = datetime.datetime.now().strftime("%Y-%m-%d-%HH-%MM-%SS")
        print(f"Exploration info ({now})")
        print(f"HDF name: {self.HDF_FILE}")
        print(f"Trajectory name: {self.trajectoryName}")
        if self.model is not None:
            print(f"Model: {self.model.name}")
        if hasattr(self, "nRuns"):
            print(f"Number of runs {self.nRuns}")
        print(f"Explored parameters: {self.exploreParameters.keys()}")
        if hasattr(self, "_t_end_exploration") and hasattr(self, "_t_start_exploration"):
            print(f"Duration of exploration: {self._t_end_exploration-self._t_start_exploration}")
```
Either a model has to be passed, or an evalFunction. If an evalFunction is passed, then the evalFunction will be called and the model is accessible to the evalFunction via self.getModelFromTraj(traj). The parameters of the current run are accessible via self.getParametersFromTraj(traj).
If no evaluation function is passed, then the model is simulated using Model.run() for every parameter.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| model | `neurolib.models.model.Model`, optional | Model to run for each parameter (or model to pass to the evaluation function if an evaluation function is used), defaults to None | None |
| parameterSpace | `neurolib.utils.parameterSpace.ParameterSpace`, optional | Parameter space to explore, defaults to None | None |
| evalFunction | function, optional | Evaluation function to call for each run, defaults to None | None |
| filename | str | HDF5 storage file name, if left empty, defaults to `exploration.hdf` | None |
| saveAllModelOutputs | bool | If True, save all outputs of the model, else only the default output of the model will be saved. Note: if `saveAllModelOutputs==False` and the model's parameter `model.params['bold']==True`, then BOLD output will be saved as well, defaults to False | False |
| ncores | int, optional | Number of cores to simulate on (max cores default), defaults to None | None |

Source code in neurolib/optimize/exploration/exploration.py
```python
def __init__(
    self,
    model=None,
    parameterSpace=None,
    evalFunction=None,
    filename=None,
    saveAllModelOutputs=False,
    ncores=None,
):
    """Either a model has to be passed, or an evalFunction. If an evalFunction
    is passed, then the evalFunction will be called and the model is accessible to the
    evalFunction via `self.getModelFromTraj(traj)`. The parameters of the current
    run are accessible via `self.getParametersFromTraj(traj)`.

    If no evaluation function is passed, then the model is simulated using `Model.run()`
    for every parameter.

    :param model: Model to run for each parameter (or model to pass to the evaluation function if an evaluation
        function is used), defaults to None
    :type model: `neurolib.models.model.Model`, optional
    :param parameterSpace: Parameter space to explore, defaults to None
    :type parameterSpace: `neurolib.utils.parameterSpace.ParameterSpace`, optional
    :param evalFunction: Evaluation function to call for each run, defaults to None
    :type evalFunction: function, optional
    :param filename: HDF5 storage file name, if left empty, defaults to ``exploration.hdf``
    :type filename: str
    :param saveAllModelOutputs: If True, save all outputs of model, else only default output of the model will be
        saved. Note: if saveAllModelOutputs==False and the model's parameter model.params['bold']==True, then BOLD
        output will be saved as well, defaults to False
    :type saveAllModelOutputs: bool
    :param ncores: Number of cores to simulate on (max cores default), defaults to None
    :type ncores: int, optional
    """
    self.model = model
    if evalFunction is None and model is not None:
        self.evalFunction = self._runModel
    elif evalFunction is not None:
        self.evalFunction = evalFunction

    assert (evalFunction is not None) or (
        model is not None
    ), "Either a model has to be specified or an evalFunction."

    assert parameterSpace is not None, "No parameters to explore."

    if parameterSpace.kind == "sequence":
        assert model is not None, "Model must be defined for sequential explore"

    self.parameterSpace = parameterSpace
    self.exploreParameters = parameterSpace.dict()

    # TODO: use random ICs for every explored point or rather reuse the ones that are generated at model
    # initialization
    self.useRandomICs = False

    filename = filename or "exploration.hdf"
    self.filename = filename

    self.saveAllModelOutputs = saveAllModelOutputs

    # number of cores
    if ncores is None:
        ncores = multiprocessing.cpu_count()
    self.ncores = ncores
    logging.info("Number of processes: {}".format(self.ncores))

    # bool to check whether pypet was initialized properly
    self.initialized = False
    self._initializeExploration(self.filename)

    self.results = None
```
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| arrays | bool, optional | Load array results (like timeseries) if True. If False, only load scalar results, defaults to True | True |
| fillna | bool, optional | Fill nan results (for example if they're not returned in a subset of runs) with zeros, defaults to False | False |

Source code in neurolib/optimize/exploration/exploration.py
```python
def aggregateResultsToDfResults(self, arrays=True, fillna=False):
    """Aggregate all results into dfResults dataframe.

    :param arrays: Load array results (like timeseries) if True. If False, only load scalar results,
        defaults to True
    :type arrays: bool, optional
    :param fillna: Fill nan results (for example if they're not returned in a subset of runs) with zeros,
        defaults to False
    :type fillna: bool, optional
    """
    nan_value = np.nan
    # defines which variable types will be saved in the results dataframe
    SUPPORTED_TYPES = (float, int, np.ndarray, list)
    SCALAR_TYPES = (float, int)
    ARRAY_TYPES = (np.ndarray, list)

    logging.info("Aggregating results to `dfResults` ...")
    for runId, parameters in tqdm.tqdm(self.dfResults.iterrows(), total=len(self.dfResults)):
        # if the results were previously loaded into memory, use them
        if hasattr(self, "results"):
            # only if the length matches the number of results
            if len(self.results) == len(self.dfResults):
                result = self.results[runId]
            # else, load results individually from hdf file
            else:
                result = self.getRun(runId)
        # else, load results individually from hdf file
        else:
            result = self.getRun(runId)

        for key, value in result.items():
            # only save floats, ints and arrays
            if isinstance(value, SUPPORTED_TYPES):
                # save 1-dim arrays
                if isinstance(value, ARRAY_TYPES) and arrays:
                    # to save a numpy array, convert column to object type
                    if key not in self.dfResults:
                        self.dfResults[key] = None
                    self.dfResults[key] = self.dfResults[key].astype(object)
                    self.dfResults.at[runId, key] = value
                elif isinstance(value, SCALAR_TYPES):
                    # save scalars
                    self.dfResults.loc[runId, key] = value
            else:
                self.dfResults.loc[runId, key] = nan_value
    # drop nan columns
    self.dfResults = self.dfResults.dropna(axis="columns", how="all")

    if fillna:
        self.dfResults = self.dfResults.fillna(0)
```
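The type dispatch in this method can be sketched in isolation. Below is a minimal, hypothetical example (made-up run results, not the neurolib API): scalar results become regular dataframe columns, while array results are stored whole in object-dtype cells, mirroring the `SUPPORTED_TYPES` / `SCALAR_TYPES` / `ARRAY_TYPES` dispatch above.

```python
import numpy as np
import pandas as pd

# Hypothetical per-run results keyed by runId
results = {
    0: {"max_rate": 12.5, "rates_exc": np.zeros(4)},
    1: {"max_rate": 7.0, "rates_exc": np.ones(4)},
}

dfResults = pd.DataFrame(index=results.keys())
for run_id, result in results.items():
    for key, value in result.items():
        if isinstance(value, (np.ndarray, list)):
            # arrays: column must be object dtype so one cell can hold an array
            if key not in dfResults:
                dfResults[key] = None
            dfResults[key] = dfResults[key].astype(object)
            dfResults.at[run_id, key] = value
        elif isinstance(value, (float, int)):
            # scalars: plain column assignment
            dfResults.loc[run_id, key] = value

print(dfResults["max_rate"].tolist())      # [12.5, 7.0]
print(dfResults.at[1, "rates_exc"].shape)  # (4,)
```

The `astype(object)` step is what allows a full timeseries to live in a single dataframe cell alongside scalar summary columns.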
Return the appropriate model with parameters for this run
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| traj | | Pypet trajectory of current run | required |

Returns:

| Type | Description |
| --- | --- |
| | Model with the parameters of this run. |

Source code in neurolib/optimize/exploration/exploration.py
```python
def getModelFromTraj(self, traj):
    """Return the appropriate model with parameters for this run.

    :param traj: Pypet trajectory of current run

    :returns model: Model with the parameters of this run.
    """
    model = self.model
    runParams = self.getParametersFromTraj(traj)
    if self.parameterSpace.star:
        runParams = flatten_nested_dict(flat_dict_to_nested(runParams)["parameters"])

    model.params.update(runParams)
    return model
```
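The star-notation branch round-trips the parameters through a nested dictionary. `flat_dict_to_nested` and `flatten_nested_dict` are neurolib utilities; the stand-ins below only illustrate the idea with dotted keys and are not the library's implementation.

```python
# Hypothetical helpers mimicking the nested/flat round-trip for dotted keys
def flat_to_nested(flat):
    """Expand {'a.b': 1} into {'a': {'b': 1}}."""
    nested = {}
    for key, value in flat.items():
        node = nested
        *parents, leaf = key.split(".")
        for p in parents:
            node = node.setdefault(p, {})
        node[leaf] = value
    return nested

def nested_to_flat(nested, prefix=""):
    """Collapse {'a': {'b': 1}} back into {'a.b': 1}."""
    flat = {}
    for k, v in nested.items():
        key = f"{prefix}{k}"
        if isinstance(v, dict):
            flat.update(nested_to_flat(v, key + "."))
        else:
            flat[key] = v
    return flat

run_params = {"parameters.a": 1.0, "parameters.noise.sigma": 0.05}
nested = flat_to_nested(run_params)["parameters"]
print(nested_to_flat(nested))  # {'a': 1.0, 'noise.sigma': 0.05}
```

Unwrapping the `"parameters"` level and re-flattening strips the pypet prefix while preserving any deeper nesting, which is what `model.params.update(runParams)` expects.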
Returns the parameters of the current run as a (dot.able) dictionary
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| traj | `pypet.trajectory.Trajectory` | Pypet trajectory | required |

Returns:

| Type | Description |
| --- | --- |
| dict | Parameter set of the current run |

Source code in neurolib/optimize/exploration/exploration.py
```python
def getParametersFromTraj(self, traj):
    """Returns the parameters of the current run as a (dot.able) dictionary

    :param traj: Pypet trajectory
    :type traj: `pypet.trajectory.Trajectory`
    :return: Parameter set of the current run
    :rtype: dict
    """
    # DO NOT use short names for star notation dicts
    runParams = self.traj.parameters.f_to_dict(short_names=not self.parameterSpace.star, fast_access=True)
    runParams = self._validatePypetParameters(runParams)
    return dotdict(runParams)
```
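The "(dot.able) dictionary" returned here is neurolib's `dotdict`. A minimal stand-in (not the library's exact class, though the pattern is the common one) shows what dot access buys you:

```python
# Minimal stand-in for a dot-accessible dict: entries are reachable both as
# keys and as attributes, like the object getParametersFromTraj returns.
class dotdict(dict):
    __getattr__ = dict.get          # missing attributes return None
    __setattr__ = dict.__setitem__  # attribute writes update the dict
    __delattr__ = dict.__delitem__

params = dotdict({"sigma_ou": 0.1, "duration": 2000})
print(params.sigma_ou)     # 0.1
params.duration = 1000     # attribute write updates the dict entry
print(params["duration"])  # 1000
```

Because `__getattr__` is `dict.get`, a missing attribute yields `None` instead of raising `AttributeError`; this is convenient for optional parameters but can mask typos.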
Returns either a loaded result or reads from disk.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| runId | int | runId of result | required |

Returns:

| Type | Description |
| --- | --- |
| dict | result |

Source code in neurolib/optimize/exploration/exploration.py
```python
def getResult(self, runId):
    """Returns either a loaded result or reads from disk.

    :param runId: runId of result
    :type runId: int
    :return: result
    :rtype: dict
    """
    # load result from either the preloaded .results attribute (from .loadResults)
    # or from disk if results haven't been loaded yet
    return self.results[runId] if hasattr(self, "results") else self.getRun(runId)
```
Load the simulated data of a run and its parameters from a pypetTrajectory.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| runId | int | ID of the run | required |

Returns:

| Type | Description |
| --- | --- |
| dict | Dictionary with simulated data and parameters of the run. |

Source code in neurolib/optimize/exploration/exploration.py
```python
def getRun(self, runId, filename=None, trajectoryName=None, pypetShortNames=True):
    """Load the simulated data of a run and its parameters from a pypetTrajectory.

    :param runId: ID of the run
    :type runId: int

    :return: Dictionary with simulated data and parameters of the run.
    :rtype: dict
    """
    # choose HDF file to load
    filename = self.HDF_FILE or filename

    # either use loaded pypetTrajectory or load from HDF file if it isn't available
    pypetTrajectory = (
        self.pypetTrajectory
        if hasattr(self, "pypetTrajectory")
        else pu.loadPypetTrajectory(filename, trajectoryName)
    )

    return pu.getRun(runId, pypetTrajectory, pypetShortNames=pypetShortNames)
```
Source code in neurolib/optimize/exploration/exploration.py
```python
def info(self):
    """Print info about the current search."""
    now = datetime.datetime.now().strftime("%Y-%m-%d-%HH-%MM-%SS")
    print(f"Exploration info ({now})")
    print(f"HDF name: {self.HDF_FILE}")
    print(f"Trajectory name: {self.trajectoryName}")
    if self.model is not None:
        print(f"Model: {self.model.name}")
    if hasattr(self, "nRuns"):
        print(f"Number of runs {self.nRuns}")
    print(f"Explored parameters: {self.exploreParameters.keys()}")
    if hasattr(self, "_t_end_exploration") and hasattr(self, "_t_start_exploration"):
        print(f"Duration of exploration: {self._t_end_exploration-self._t_start_exploration}")
```
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| filename | str, optional | hdf file name in which results are stored, defaults to None | None |
| trajectoryName | str, optional | Name of the trajectory inside the hdf file, newest will be used if left empty, defaults to None | None |

Source code in neurolib/optimize/exploration/exploration.py
```python
def loadDfResults(self, filename=None, trajectoryName=None):
    """Load results from a previous simulation.

    :param filename: hdf file name in which results are stored, defaults to None
    :type filename: str, optional
    :param trajectoryName: Name of the trajectory inside the hdf file, newest will be used if left empty,
        defaults to None
    :type trajectoryName: str, optional
    """
    # choose HDF file to load
    filename = filename or self.HDF_FILE
    self.pypetTrajectory = pu.loadPypetTrajectory(filename, trajectoryName)
    self.nResults = len(self.pypetTrajectory.f_get_run_names())

    exploredParameters = self.pypetTrajectory.f_get_explored_parameters()

    # create pandas dataframe of all runs with parameters as keys
    logging.info("Creating `dfResults` dataframe ...")
    niceParKeys = [p[11:] for p in exploredParameters.keys()]
    if not self.parameterSpace:
        niceParKeys = [p.split(".")[-1] for p in niceParKeys]
    self.dfResults = pd.DataFrame(columns=niceParKeys, dtype=object)
    for nicep, p in zip(niceParKeys, exploredParameters.keys()):
        self.dfResults[nicep] = exploredParameters[p].f_get_range()
```
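The `p[11:]` slice above strips the leading `"parameters."` prefix (11 characters) that pypet prepends to every explored parameter name. A tiny sketch with illustrative parameter names (the names here are made up for the example):

```python
# Pypet-style explored parameter names carry a "parameters." prefix
explored = ["parameters.mue_ext_mean", "parameters.aln.b"]

# len("parameters.") == 11, so p[11:] removes exactly that prefix
nice = [p[11:] for p in explored]
print(nice)                              # ['mue_ext_mean', 'aln.b']

# without a parameterSpace, keys are further shortened to the last component
print([p.split(".")[-1] for p in nice])  # ['mue_ext_mean', 'b']
```

Note the fallback keeps only the last dotted component, so two parameters that differ only by their group (e.g. `a.x` and `b.x`) would collide; the slice-based form preserves the full dotted path.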
Load results from a hdf file of a previous simulation.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| all | bool, optional | Load all simulated results into memory, which will be available as the `.results` attribute. Can use a lot of RAM if your simulation is large, please use this with caution, defaults to True | True |
| filename | str, optional | hdf file name in which results are stored, defaults to None | None |
| trajectoryName | str, optional | Name of the trajectory inside the hdf file, newest will be used if left empty, defaults to None | None |
| pypetShortNames | bool | Use pypet short names as keys for the results dictionary. Use if you are experiencing errors due to natural naming collisions. | True |
| memory_cap | float, int, optional | Percentage memory cap between 0 and 100. If `all=True` is used, a memory cap can be set to avoid filling up the available RAM. Example: use `memory_cap = 95` to avoid loading more data if memory is at 95% use, defaults to 95 | 95.0 |

Source code in neurolib/optimize/exploration/exploration.py
```python
def loadResults(self, all=True, filename=None, trajectoryName=None, pypetShortNames=True, memory_cap=95.0):
    """Load results from a hdf file of a previous simulation.

    :param all: Load all simulated results into memory, which will be available as the `.results` attribute. Can
        use a lot of RAM if your simulation is large, please use this with caution, defaults to True
    :type all: bool, optional
    :param filename: hdf file name in which results are stored, defaults to None
    :type filename: str, optional
    :param trajectoryName: Name of the trajectory inside the hdf file, newest will be used if left empty,
        defaults to None
    :type trajectoryName: str, optional
    :param pypetShortNames: Use pypet short names as keys for the results dictionary. Use if you are experiencing
        errors due to natural naming collisions.
    :type pypetShortNames: bool
    :param memory_cap: Percentage memory cap between 0 and 100. If `all=True` is used, a memory cap can be set to
        avoid filling up the available RAM. Example: use `memory_cap = 95` to avoid loading more data if memory is
        at 95% use, defaults to 95
    :type memory_cap: float, int, optional
    """

    self.loadDfResults(filename, trajectoryName)

    # make a list of dictionaries with results
    self.results = dotdict({})
    if all:
        logging.info("Loading all results to `results` dictionary ...")
        for rInd in tqdm.tqdm(range(self.nResults), total=self.nResults):

            # check if enough memory is available
            if memory_cap:
                assert isinstance(memory_cap, (int, float)), "`memory_cap` must be float."
                assert (memory_cap > 0) and (memory_cap < 100), "`memory_cap` must be between 0 and 100"
                # check ram usage with psutil
                used_memory_percent = psutil.virtual_memory()[2]
                if used_memory_percent > memory_cap:
                    raise MemoryError(
                        f"Memory use is at {used_memory_percent}% and capped at {memory_cap}. Aborting."
                    )

            self.pypetTrajectory.results[rInd].f_load()
            result = self.pypetTrajectory.results[rInd].f_to_dict(fast_access=True, short_names=pypetShortNames)
            result = dotdict(result)
            self.pypetTrajectory.results[rInd].f_remove()
            self.results[rInd] = copy.deepcopy(result)

        # Postprocess result keys if pypet short names aren't used
        # Before: results.run_00000001.outputs.rates_inh
        # After: outputs.rates_inh
        if not pypetShortNames:
            for i, r in self.results.items():
                new_dict = dotdict({})
                for key, value in r.items():
                    new_key = "".join(key.split(".", 2)[2:])
                    new_dict[new_key] = r[key]
                self.results[i] = copy.deepcopy(new_dict)

    self.aggregateResultsToDfResults()

    logging.info("All results loaded.")
```
Source code in neurolib/optimize/exploration/exploration.py
```python
def run(self, **kwargs):
    """
    Call this function to run the exploration
    """
    self.runKwargs = kwargs
    assert self.initialized, "Pypet environment not initialized yet."
    self._t_start_exploration = datetime.datetime.now()
    self.env.run(self.evalFunction)
    self._t_end_exploration = datetime.datetime.now()
```
This function takes simulation results in the form of a nested dictionary and stores all data into the pypet hdf file.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| outputs | dict | Simulation outputs as a dictionary. | required |
| traj | `pypet.trajectory.Trajectory` | Pypet trajectory | required |

Source code in neurolib/optimize/exploration/exploration.py
```python
def saveToPypet(self, outputs, traj):
    """This function takes simulation results in the form of a nested dictionary
    and stores all data into the pypet hdf file.

    :param outputs: Simulation outputs as a dictionary.
    :type outputs: dict
    :param traj: Pypet trajectory
    :type traj: `pypet.trajectory.Trajectory`
    """

    def makeSaveStringForPypet(value, savestr):
        """Builds the pypet-style results string from the results
        dictionary's keys.
        """
        for k, v in value.items():
            if isinstance(v, dict):
                _savestr = savestr + k + "."
                makeSaveStringForPypet(v, _savestr)
            else:
                _savestr = savestr + k
                self.traj.f_add_result(_savestr, v)

    assert isinstance(outputs, dict), "Outputs must be an instance of dict."
    value = outputs
    savestr = "results.$."
    makeSaveStringForPypet(value, savestr)
```
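The recursion in `makeSaveStringForPypet` can be sketched standalone. This hypothetical version collects the dotted result names into a list instead of calling `traj.f_add_result`, which makes the key-flattening behavior visible:

```python
# Sketch of the recursive key flattening: nested output dictionaries become
# pypet-style dotted result names under the "results.$." prefix.
def make_save_strings(value, savestr, collected):
    for k, v in value.items():
        if isinstance(v, dict):
            # descend into nested dicts, extending the dotted prefix
            make_save_strings(v, savestr + k + ".", collected)
        else:
            # leaf value: record the full dotted name
            collected.append(savestr + k)

outputs = {"rates_exc": [0.1, 0.2], "BOLD": {"t_BOLD": [0.0], "BOLD": [1.0]}}
names = []
make_save_strings(outputs, "results.$.", names)
print(names)
# ['results.$.rates_exc', 'results.$.BOLD.t_BOLD', 'results.$.BOLD.BOLD']
```

In pypet, the `$` in the prefix is a wildcard that expands to the current run's name, so each run's outputs end up under their own group in the HDF file.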
Return `xr.Dataset` from the exploration results.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| bold | bool | if True, will load and return only BOLD output | False |

Source code in neurolib/optimize/exploration/exploration.py
```python
def xr(self, bold=False):
    """
    Return `xr.Dataset` from the exploration results.

    :param bold: if True, will load and return only BOLD output
    :type bold: bool
    """

    def _sanitize_nc_key(k):
        return k.replace("*", "_").replace(".", "_").replace("|", "_")

    assert self.results is not None, "Run `loadResults()` first to populate the results"
    assert len(self.results) == len(self.dfResults)
    # create intrinsic dims for one run
    timeDictKey, run_coords = self._getCoordsFromRun(self.results[0], bold=bold)
    dataarrays = []
    orig_search_coords = self.parameterSpace.get_parametrization()
    for runId, run_result in self.results.items():
        # take exploration coordinates for this run
        expl_coords = {k: v[runId] for k, v in orig_search_coords.items()}
        outputs = []
        run_result = self._filterDictionaryBold(run_result, bold=bold)
        for key, value in run_result.items():
            if key == timeDictKey:
                continue
            outputs.append(value)
        # create DataArray for run only - we need to add exploration coordinates
        data_temp = xr.DataArray(
            np.stack(outputs), dims=["output", "space", "time"], coords=run_coords, name="exploration"
        )
        expand_coords = {}
        # iterate exploration coordinates
        for k, v in expl_coords.items():
            # sanitize keys in the case of stars etc
            k = _sanitize_nc_key(k)
            # if single values, just assign
            if isinstance(v, (str, float, int)):
                expand_coords[k] = [v]
            # if arrays, check whether they can be squeezed into one value
            elif isinstance(v, np.ndarray):
                if np.unique(v).size == 1:
                    # if yes, just assign that one value
                    expand_coords[k] = [float(np.unique(v))]
                else:
                    # if no, sorry - coordinates cannot be array
                    raise ValueError("Cannot squeeze coordinates")
        # assign exploration coordinates to the DataArray
        dataarrays.append(data_temp.expand_dims(expand_coords))

    # finally, combine all arrays into one
    if self.parameterSpace.kind == "sequence":
        # when run in sequence, cannot combine to grid, so just concatenate along new dimension
        combined = xr.concat(dataarrays, dim="run_no", coords="all")
    else:
        # sometimes combining xr.DataArrays does not work, see https://github.com/pydata/xarray/issues/3248#issuecomment-531511177
        # resolved by casting them explicitly to xr.Dataset
        combined = xr.combine_by_coords([da.to_dataset() for da in dataarrays])["exploration"]
    if self.parameterSpace.star:
        # if we explored over star params, unwrap them into attributes
        combined.attrs = {
            _sanitize_nc_key(k): list(self.model.params[k].keys()) for k in orig_search_coords.keys() if "*" in k
        }
    return combined
```
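The coordinate-squeeze rule used in `xr()` can be illustrated standalone: an exploration coordinate that is an array may only become an xarray coordinate if it holds a single unique value, otherwise runs cannot be combined into one grid. This sketch extracts just that rule (using `np.unique(v)[0]` rather than the source's `float(np.unique(v))`, which relies on implicit size-1 array-to-scalar conversion):

```python
import numpy as np

def squeeze_coord(v):
    """Turn one exploration coordinate value into a length-1 coordinate list."""
    if isinstance(v, (str, float, int)):
        return [v]  # scalars are used as-is
    if isinstance(v, np.ndarray):
        if np.unique(v).size == 1:
            # array with a single repeated value collapses to that value
            return [float(np.unique(v)[0])]
        # genuinely varying arrays cannot serve as a grid coordinate
        raise ValueError("Cannot squeeze coordinates")

print(squeeze_coord(2.0))                   # [2.0]
print(squeeze_coord(np.array([0.5, 0.5])))  # [0.5]
```

Each squeezed value is then attached to the per-run `DataArray` via `expand_dims`, so that `xr.combine_by_coords` can later stack all runs along the exploration dimensions.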
Models are the core of neurolib. The Model superclass will help you to load, simulate, and analyse models. It also makes it very easy to implement your own neural mass model (see Example 0.6 custom model).
Loading a model
To load a model, we need to import the submodule of a model and instantiate it. This example shows how to load a single node of the ALNModel. See Example 0 aln minimal on how to simulate a whole-brain network using this model.
```python
from neurolib.models.aln import ALNModel  # Import the model

model = ALNModel()  # Create an instance
model.run()  # Run it
```
Model base class methods
The Model base class runs models, manages their outputs, parameters and more. This class should serve as the base class for all implemented models.
Source code in neurolib/models/model.py
```python
class Model:
    """The Model base class runs models, manages their outputs, parameters and more.
    This class should serve as the base class for all implemented models.
    """

    def __init__(self, integration, params):
        if hasattr(self, "name"):
            if self.name is not None:
                assert isinstance(self.name, str), f"Model name is not a string."
        else:
            self.name = "Noname"

        assert integration is not None, "Model integration function not given."
        self.integration = integration

        assert isinstance(params, dict), "Parameters must be a dictionary."
        self.params = dotdict(params)

        assert hasattr(
            self, "state_vars"
        ), f"Model {self.name} has no attribute `state_vars`, which should be a list of strings containing all variable names."
        assert np.all([type(s) is str for s in self.state_vars]), "All entries in state_vars must be strings."

        assert hasattr(
            self, "default_output"
        ), f"Model {self.name} needs to define a default output variable in `default_output`."

        assert isinstance(self.default_output, str), "`default_output` must be a string."

        # if no output_vars is set, it will be replaced by state_vars
        if not hasattr(self, "output_vars"):
            self.output_vars = self.state_vars

        # create output and state dictionary
        self.outputs = dotdict({})
        self.state = dotdict({})
        self.maxDelay = None
        self.initializeRun()

        self.boldInitialized = False

        logging.info(f"{self.name}: Model initialized.")

    def initializeBold(self):
        """Initialize BOLD model."""
        self.boldInitialized = False

        # function to transform model state before passing it to the bold model
        # Note: This can be used like the parameter epsilon in Friston2000
        # (neural efficacy) by multiplying the input with a constant via
        # self.boldInputTransform = lambda x: x * epsilon
        if not hasattr(self, "boldInputTransform"):
            self.boldInputTransform = None

        self.boldModel = bold.BOLDModel(self.params["N"], self.params["dt"])
        self.boldInitialized = True

    def simulateBold(self, t, variables, append=False):
        """Gets the default output of the model and simulates the BOLD model.
        Adds the simulated BOLD signal to outputs.
        """
        if self.boldInitialized:
            # first we loop through all state variables
            for svn, sv in zip(self.state_vars, variables):
                # the default output is used as the input for the bold model
                if svn == self.default_output:
                    bold_input = sv[:, self.startindt :]
                    if bold_input.shape[1] >= self.boldModel.samplingRate_NDt:
                        # only if the length of the output has a zero mod to the sampling rate,
                        # the downsampled output from the boldModel can be correctly appended to previous data,
                        # so: we are lazy here and simply disable appending in that case ...
                        if not bold_input.shape[1] % self.boldModel.samplingRate_NDt == 0:
                            append = False
                            logging.warn(
                                f"Output size {bold_input.shape[1]} is not a multiple of BOLD sampling length {self.boldModel.samplingRate_NDt}, will not append data."
                            )
                        logging.debug(f"Simulating BOLD: boldModel.run(append={append})")

                        # transform bold input according to self.boldInputTransform
                        if self.boldInputTransform:
                            bold_input = self.boldInputTransform(bold_input)

                        # simulate bold model
                        self.boldModel.run(bold_input, append=append)

                        t_BOLD = self.boldModel.t_BOLD
                        BOLD = self.boldModel.BOLD
                        self.setOutput("BOLD.t_BOLD", t_BOLD)
                        self.setOutput("BOLD.BOLD", BOLD)
                    else:
                        logging.warn(
                            f"Will not simulate BOLD if output {bold_input.shape[1]*self.params['dt']} not at least of duration {self.boldModel.samplingRate_NDt*self.params['dt']}"
                        )
        else:
            logging.warn("BOLD model not initialized, not simulating BOLD. Use `run(bold=True)`")

    def checkChunkwise(self, chunksize):
        """Checks if the model fulfills requirements for chunkwise simulation.
        Checks whether the sampling rate for outputs fits to chunksize and duration.
        Throws errors if not."""
        assert self.state_vars is not None, "State variable names not given."
        assert self.init_vars is not None, "Initial value variable names not given."
        assert len(self.state_vars) == len(self.init_vars), "State variables are not same length as initial values."

        # throw a warning if the user is nasty
        if int(self.params["duration"] / self.params["dt"]) % chunksize != 0:
            logging.warning(
                f"It is strongly advised to use a `chunksize` ({chunksize}) that is a divisor of `duration / dt` ({int(self.params['duration']/self.params['dt'])})."
            )

        # if `sampling_dt` is set, do some checks
        if self.params.get("sampling_dt") is not None:
            # sample_dt checks are required after setting chunksize
            assert (
                chunksize * self.params["dt"] >= self.params["sampling_dt"]
            ), "`chunksize * dt` must be >= `sampling_dt`"

            # ugly floating point modulo hack: instead of float1%float2==0, we do
            # (float1/float2)%1==0
            assert ((chunksize * self.params["dt"]) / self.params["sampling_dt"]) % 1 == 0, (
                f"Chunksize {chunksize * self.params['dt']} must be divisible by sampling dt "
                f"{self.params['sampling_dt']}"
            )
            assert (
                (self.params["duration"] % (chunksize * self.params["dt"])) / self.params["sampling_dt"]
            ) % 1 == 0, (
                f"Last chunk of size {self.params['duration'] % (chunksize * self.params['dt'])} must be divisible by sampling dt "
                f"{self.params['sampling_dt']}"
            )

    def setSamplingDt(self):
        """Checks if sampling_dt is set correctly and sets `self.sample_every`
        1) Check if sampling_dt is a multiple of dt
        2) Check if sampling_dt is not greater than duration
        """

        if self.params.get("sampling_dt") is None:
            self.sample_every = 1
        elif self.params.get("sampling_dt") > 0:
            assert self.params["sampling_dt"] >= self.params["dt"], "`sampling_dt` needs to be >= `dt`"
            assert (
                self.params["duration"] >= self.params["sampling_dt"]
            ), "`sampling_dt` needs to be lower than `duration`"
            self.sample_every = int(self.params["sampling_dt"] / self.params["dt"])
        else:
            raise ValueError(f"Can't handle `sampling_dt`={self.params.get('sampling_dt')}")

    def initializeRun(self, initializeBold=False):
        """Initialization before each run.

        :param initializeBold: initialize BOLD model
        :type initializeBold: bool
        """
        # get the maxDelay of the system
        self.maxDelay = self.getMaxDelay()

        # length of the initial condition
        self.startindt = self.maxDelay + 1

        # check dt / sampling_dt
        self.setSamplingDt()

        # force bold if params['bold'] == True
        if self.params.get("bold"):
            initializeBold = True
        # set up the bold model, if it didn't happen yet
        if initializeBold and not self.boldInitialized:
            self.initializeBold()

    def run(
        self,
        inputs=None,
        chunkwise=False,
        chunksize=None,
        bold=False,
        append=False,
        append_outputs=None,
        continue_run=False,
    ):
        """
        Main interfacing function to run a model.

        The model can be run in three different ways:
        1) `model.run()` starts a new run.
        2) `model.run(chunkwise=True)` runs the simulation in chunks of length `chunksize`.
        3) `model.run(continue_run=True)` continues the simulation of a previous run.

        :param inputs: list of inputs to the model, must have the same order as model.input_vars. Note: no sanity
            check is performed for performance reasons. Take care of the inputs yourself.
        :type inputs: list[np.ndarray|]
        :param chunkwise: simulate model chunkwise or in one single run, defaults to False
        :type chunkwise: bool, optional
        :param chunksize: size of the chunk to simulate in dt, if set will imply chunkwise=True, defaults to 2s
        :type chunksize: int, optional
        :param bold: simulate BOLD signal (only for chunkwise integration), defaults to False
        :type bold: bool, optional
        :param append: append the chunkwise outputs to the outputs attribute, defaults to False
        :type append: bool, optional
        :param continue_run: continue a simulation by using the initial values from a previous simulation
        :type continue_run: bool
        """
        # TODO: legacy argument support
        if append_outputs is not None:
            append = append_outputs

        # if a previous run is not to be continued clear the model's state
        if continue_run is False:
            self.clearModelState()

        self.initializeRun(initializeBold=bold)

        # enable chunkwise if chunksize is set
        chunkwise = chunkwise if chunksize is None else True

        if chunkwise is False:
            self.integrate(append_outputs=append, simulate_bold=bold)
            if continue_run:
                self.setInitialValuesToLastState()

        else:
            if chunksize is None:
                chunksize = int(2000 / self.params["dt"])

            # check if model is safe for chunkwise integration
            # and whether sampling_dt is compatible with duration and chunksize
            self.checkChunkwise(chunksize)
            if bold and not self.boldInitialized:
                logging.warn(f"{self.name}: BOLD model not initialized, not simulating BOLD.
```
Use `run(bold=True)`\")\n bold = False\n self.integrateChunkwise(chunksize=chunksize, bold=bold, append_outputs=append)\n\n # check if there was a problem with the simulated data\n self.checkOutputs()\n\n def checkOutputs(self):\n # check nans in output\n if np.isnan(self.output).any():\n logging.error(\"nan in model output!\")\n else:\n EXPLOSION_THRESHOLD = 1e20\n if (self.output > EXPLOSION_THRESHOLD).any() > 0:\n logging.error(\"nan in model output!\")\n\n # check nans in BOLD\n if \"BOLD\" in self.outputs:\n if np.isnan(self.outputs.BOLD.BOLD).any():\n logging.error(\"nan in BOLD output!\")\n\n def integrate(self, append_outputs=False, simulate_bold=False):\n\"\"\"Calls each models `integration` function and saves the state and the outputs of the model.\n\n :param append: append the chunkwise outputs to the outputs attribute, defaults to False, defaults to False\n :type append: bool, optional\n \"\"\"\n # run integration\n t, *variables = self.integration(self.params)\n self.storeOutputsAndStates(t, variables, append=append_outputs)\n\n # force bold if params['bold'] == True\n if self.params.get(\"bold\"):\n simulate_bold = True\n\n # bold simulation after integration\n if simulate_bold and self.boldInitialized:\n self.simulateBold(t, variables, append=True)\n\n def integrateChunkwise(self, chunksize, bold=False, append_outputs=False):\n\"\"\"Repeatedly calls the chunkwise integration for the whole duration of the simulation.\n If `bold==True`, the BOLD model is simulated after each chunk.\n\n :param chunksize: size of each chunk to simulate in units of dt\n :type chunksize: int\n :param bold: simulate BOLD model after each chunk, defaults to False\n :type bold: bool, optional\n :param append_outputs: append the chunkwise outputs to the outputs attribute, defaults to False\n :type append_outputs: bool, optional\n \"\"\"\n totalDuration = self.params[\"duration\"]\n\n dt = self.params[\"dt\"]\n # create a shallow copy of the parameters\n lastT = 0\n while 
totalDuration - lastT >= dt - 1e-6:\n # Determine the size of the next chunk\n # account for floating point errors\n remainingChunkSize = int(round((totalDuration - lastT) / dt))\n currentChunkSize = min(chunksize, remainingChunkSize)\n\n self.autochunk(chunksize=currentChunkSize, append_outputs=append_outputs, bold=bold)\n # we save the last simulated time step\n lastT += currentChunkSize * dt\n # or\n # lastT = self.state[\"t\"][-1]\n\n # set duration back to its original value\n self.params[\"duration\"] = totalDuration\n\n def clearModelState(self):\n\"\"\"Clears the model's state to create a fresh one\"\"\"\n self.state = dotdict({})\n self.outputs = dotdict({})\n # reinitialize bold model\n if self.params.get(\"bold\"):\n self.initializeBold()\n\n def storeOutputsAndStates(self, t, variables, append=False):\n\"\"\"Takes the simulated variables of the integration and stores it to the appropriate model output and state object.\n\n :param t: time vector\n :type t: list\n :param variables: variable from time integration\n :type variables: numpy.ndarray\n :param append: append output to existing output or overwrite, defaults to False\n :type append: bool, optional\n \"\"\"\n # save time array\n self.setOutput(\"t\", t, append=append, removeICs=True)\n self.setStateVariables(\"t\", t)\n # save outputs\n for svn, sv in zip(self.state_vars, variables):\n if svn in self.output_vars:\n self.setOutput(svn, sv, append=append, removeICs=True)\n self.setStateVariables(svn, sv)\n\n def setInitialValuesToLastState(self):\n\"\"\"Reads the last state of the model and sets the initial conditions to that state for continuing a simulation.\"\"\"\n for iv, sv in zip(self.init_vars, self.state_vars):\n # if state variables are one-dimensional (in space only)\n if (self.state[sv].ndim == 0) or (self.state[sv].ndim == 1):\n self.params[iv] = self.state[sv]\n # if they are space-time arrays\n else:\n # we set the next initial condition to the last state\n self.params[iv] = 
self.state[sv][:, -self.startindt :]\n\n def randomICs(self, min=0, max=1):\n\"\"\"Generates a new set of uniformly-distributed random initial conditions for the model.\n\n TODO: All parameters are drawn from the same distribution / range. Allow for independent ranges.\n\n :param min: Minium of uniform distribution\n :type min: float\n :param max: Maximum of uniform distribution\n :type max: float\n \"\"\"\n for iv in self.init_vars:\n if self.params[iv].ndim == 1:\n self.params[iv] = np.random.uniform(min, max, (self.params[\"N\"]))\n elif self.params[iv].ndim == 2:\n self.params[iv] = np.random.uniform(min, max, (self.params[\"N\"], 1))\n\n def setInputs(self, inputs):\n\"\"\"Take inputs from a list and store it in the appropriate model parameter for external input.\n TODO: This is not safe yet, checks should be implemented whether the model has inputs defined or not for example.\n\n :param inputs: list of inputs\n :type inputs: list[np.ndarray(), ...]\n \"\"\"\n for i, iv in enumerate(self.input_vars):\n self.params[iv] = inputs[i].copy()\n\n def autochunk(self, inputs=None, chunksize=1, append_outputs=False, bold=False):\n\"\"\"Executes a single chunk of integration, either for a given duration\n or a single timestep `dt`. 
Gathers all inputs to the model and resets\n the initial conditions as a preparation for the next chunk.\n\n :param inputs: list of input values, ordered according to self.input_vars, defaults to None\n :type inputs: list[np.ndarray|], optional\n :param chunksize: length of a chunk to simulate in dt, defaults 1\n :type chunksize: int, optional\n :param append_outputs: append the chunkwise outputs to the outputs attribute, defaults to False\n :type append_outputs: bool, optional\n \"\"\"\n\n # set the duration for this chunk\n self.params[\"duration\"] = chunksize * self.params[\"dt\"]\n\n # set inputs\n if inputs is not None:\n self.setInputs(inputs)\n\n # run integration\n self.integrate(append_outputs=append_outputs, simulate_bold=bold)\n\n # set initial conditions to last state for the next chunk\n self.setInitialValuesToLastState()\n\n def getMaxDelay(self):\n\"\"\"Computes the maximum delay of the model. This function should be overloaded\n if the model has internal delays (additional to delay between nodes defined by Dmat)\n such as the delay between an excitatory and inhibitory population within each brain area.\n If this function is not overloaded, the maximum delay is assumed to be defined from the\n global delay matrix `Dmat`.\n\n Note: Maxmimum delay is given in units of dt.\n\n :return: maxmimum delay of the model in units of dt\n :rtype: int\n \"\"\"\n dt = self.params.get(\"dt\")\n Dmat = self.params.get(\"lengthMat\")\n\n if Dmat is not None:\n # divide Dmat by signalV\n signalV = self.params.get(\"signalV\") or 0\n if signalV > 0:\n Dmat = Dmat / signalV\n else:\n # if signalV is 0, eliminate delays\n Dmat = Dmat * 0.0\n\n # only if Dmat and dt exist, a global max delay can be computed\n if Dmat is not None and dt is not None:\n Dmat_ndt = np.around(Dmat / dt) # delay matrix in multiples of dt\n max_global_delay = int(np.amax(Dmat_ndt))\n else:\n max_global_delay = 0\n return max_global_delay\n\n def setStateVariables(self, name, data):\n\"\"\"Saves 
the models current state variables.\n\n TODO: Cut state variables to length of self.maxDelay\n However, this could be time-memory tradeoff\n\n :param name: name of the state variable\n :type name: str\n :param data: value of the variable\n :type data: np.ndarray\n \"\"\"\n # old\n # self.state[name] = data.copy()\n\n # if the data is temporal, cut off initial values\n # NOTE: this shuold actually check for\n # if data.shape[1] > 1:\n # else: data.copy()\n # there coulb be (N, 1)-dimensional output, right now\n # it is requred to be of shape (N, )\n if data.ndim == 2:\n self.state[name] = data[:, -self.startindt :].copy()\n else:\n self.state[name] = data.copy()\n\n def setOutput(self, name, data, append=False, removeICs=False):\n\"\"\"Adds an output to the model, typically a simulation result.\n :params name: Name of the output in dot.notation, a la \"outputgroup.output\"\n :type name: str\n :params data: Output data, can't be a dictionary!\n :type data: `numpy.ndarray`\n \"\"\"\n assert not isinstance(data, dict), \"Output data cannot be a dictionary.\"\n assert isinstance(name, str), \"Output name must be a string.\"\n assert isinstance(data, np.ndarray), \"Output must be a `numpy.ndarray`.\"\n\n # remove initial conditions from output\n if removeICs and name != \"t\":\n if data.ndim == 1:\n data = data[self.startindt :]\n elif data.ndim == 2:\n data = data[:, self.startindt :]\n else:\n raise ValueError(f\"Don't know how to truncate data of shape {data.shape}.\")\n\n # subsample to sampling dt\n if data.ndim == 1:\n data = data[:: self.sample_every]\n elif data.ndim == 2:\n data = data[:, :: self.sample_every]\n else:\n raise ValueError(f\"Don't know how to subsample data of shape {data.shape}.\")\n\n # if the output is a single name (not dot.separated)\n if \".\" not in name:\n # append data\n if append and name in self.outputs:\n # special treatment for time data:\n # increment the time by the last recorded duration\n if name == \"t\":\n data += 
self.outputs[name][-1]\n self.outputs[name] = np.hstack((self.outputs[name], data))\n else:\n # save all data into output dict\n self.outputs[name] = data\n # set output as an attribute\n setattr(self, name, self.outputs[name])\n else:\n # build results dictionary and write into self.outputs\n # dot.notation iteration\n keys = name.split(\".\")\n level = self.outputs # not copy, reference!\n for i, k in enumerate(keys):\n # if it's the last iteration, store data\n if i == len(keys) - 1:\n # TODO: this needs to be append-aware like above\n # if append:\n # if k == \"t\":\n # data += level[k][-1]\n # level[k] = np.hstack((level[k], data))\n # else:\n # level[k] = data\n level[k] = data\n # if key is in outputs, then go deeper\n elif k in level:\n level = level[k]\n setattr(self, k, level)\n # if it's a new key, create new nested dictionary, set attribute, then go deeper\n else:\n level[k] = dotdict({})\n setattr(self, k, level[k])\n level = level[k]\n\n def getOutput(self, name):\n\"\"\"Get an output of a given name (dot.semarated)\n :param name: A key, grouped outputs in the form group.subgroup.variable\n :type name: str\n\n :returns: Output data\n \"\"\"\n assert isinstance(name, str), \"Output name must be a string.\"\n keys = name.split(\".\")\n lastOutput = self.outputs.copy()\n for i, k in enumerate(keys):\n assert k in lastOutput, f\"Key {k} not found in outputs.\"\n lastOutput = lastOutput[k]\n return lastOutput\n\n def __getitem__(self, key):\n\"\"\"Index outputs with a dictionary-like key, e.g., `model['rates_exc']`.\"\"\"\n return self.getOutput(key)\n\n def getOutputs(self, group=\"\"):\n\"\"\"Get all outputs of an output group. Examples: `getOutputs(\"BOLD\")` or simply `getOutputs()`\n\n :param group: Group name, subgroups separated by dots. 
If left empty (default), all outputs of the root group\n are returned.\n :type group: str\n \"\"\"\n assert isinstance(group, str), \"Group name must be a string.\"\n\n def filterOutputsFromGroupDict(groupDict):\n\"\"\"Return a dictionary with the output data of a group disregarding all other nested dicts.\n :param groupDict: Dictionary of outputs (can include other groups)\n :type groupDict: dict\n \"\"\"\n assert isinstance(groupDict, dict), \"Not a dictionary.\"\n # make a deep copy of the dictionary\n returnDict = groupDict.copy()\n for key, value in groupDict.items():\n if isinstance(value, dict):\n del returnDict[key]\n return returnDict\n\n # if a group deeper than the root is given, select the last node\n lastOutput = self.outputs.copy()\n if len(group) > 0:\n keys = group.split(\".\")\n for i, k in enumerate(keys):\n assert k in lastOutput, f\"Key {k} not found in outputs.\"\n lastOutput = lastOutput[k]\n assert isinstance(lastOutput, dict), f\"Key {k} does not refer to a group.\"\n # filter out all output *groups* that might be in this node and return only output data\n return filterOutputsFromGroupDict(lastOutput)\n\n @property\n def output(self):\n\"\"\"Returns value of default output as defined by `self.default_output`.\n Note that all outputs are saved in the attribute `self.outputs`.\n \"\"\"\n assert self.default_output is not None, \"Default output has not been set yet. Use `setDefaultOutput()`.\"\n return self.getOutput(self.default_output)\n\n def xr(self, group=\"\"):\n\"\"\"Converts a group of outputs to xarray. Output group needs to contain an\n element that starts with the letter \"t\" or it will not recognize any time axis.\n\n :param group: Output group name, example: \"BOLD\". 
Leave empty for top group.\n :type group: str\n \"\"\"\n assert isinstance(group, str), \"Group name must be a string.\"\n # take all outputs of one group: disregard all dictionaries because they are subgroups\n outputDict = self.getOutputs(group)\n # make sure that there is a time array\n timeDictKey = \"\"\n if \"t\" in outputDict:\n timeDictKey = \"t\"\n else:\n for k in outputDict:\n if k.startswith(\"t\"):\n timeDictKey = k\n logging.info(f\"Assuming {k} to be the time axis.\")\n break\n assert len(timeDictKey) > 0, f\"No time array found (starting with t) in output group {group}.\"\n t = outputDict[timeDictKey].copy()\n del outputDict[timeDictKey]\n outputs = []\n outputNames = []\n for key, value in outputDict.items():\n outputNames.append(key)\n outputs.append(value)\n\n nNodes = outputs[0].shape[0]\n nodes = list(range(nNodes))\n allOutputsStacked = np.stack(outputs) # What? Where? When?\n result = xr.DataArray(allOutputsStacked, coords=[outputNames, nodes, t], dims=[\"output\", \"space\", \"time\"])\n return result\n
Executes a single chunk of integration, either for a given duration or a single timestep `dt`. Gathers all inputs to the model and resets the initial conditions in preparation for the next chunk.

Parameters:
- `inputs` (list[np.ndarray], optional): list of input values, ordered according to `self.input_vars`. Defaults to None.
- `chunksize` (int, optional): length of a chunk to simulate in units of `dt`. Defaults to 1.
- `append_outputs` (bool, optional): append the chunkwise outputs to the outputs attribute. Defaults to False.

Source code in neurolib/models/model.py
Checks whether the model fulfills the requirements for chunkwise simulation, i.e. whether the output sampling rate is compatible with the chunksize and the duration. Raises an error if not.
Source code in neurolib/models/model.py
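The divisibility checks in `checkChunkwise` rely on the floating-point trick noted in its comments: instead of testing `a % b == 0` directly on floats, the code tests `(a / b) % 1 == 0`. A minimal self-contained sketch of that check (values are illustrative):

```python
def is_multiple(a: float, b: float) -> bool:
    # floating-point divisibility hack from checkChunkwise:
    # instead of a % b == 0, test (a / b) % 1 == 0
    return (a / b) % 1 == 0

# a chunk of 200 steps at dt = 0.1 ms covers 20 ms of simulated time
print(is_multiple(200 * 0.1, 2.0))  # sampling_dt = 2.0 ms divides the chunk evenly
print(is_multiple(200 * 0.1, 3.0))  # sampling_dt = 3.0 ms does not
```

Note that this test is still float-based, which is why the source calls it an "ugly floating point modulo hack"; it works for the well-behaved `dt`/`sampling_dt` combinations the assertions are guarding.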
Clears the model's state and outputs to create a fresh one, and reinitializes the BOLD model if `params["bold"]` is set.

Source code in neurolib/models/model.py
Computes the maximum delay of the model. This function should be overloaded if the model has internal delays (in addition to the delays between nodes defined by `Dmat`), such as the delay between an excitatory and an inhibitory population within each brain area. If this function is not overloaded, the maximum delay is assumed to be defined by the global delay matrix `Dmat`.

Note: The maximum delay is given in units of `dt`.

Returns:
- (int): maximum delay of the model in units of `dt`.
Source code in neurolib/models/model.py
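As a sketch, the same computation on a hypothetical two-node fiber-length matrix (the values below are illustrative, not from the source):

```python
import numpy as np

dt = 0.1          # integration timestep (ms)
signalV = 20.0    # signal transmission speed (hypothetical value)
lengthMat = np.array([[0.0, 40.0],
                      [40.0, 0.0]])  # fiber lengths between two nodes

Dmat = lengthMat / signalV           # delays in ms
Dmat_ndt = np.around(Dmat / dt)      # delay matrix in multiples of dt
max_global_delay = int(np.amax(Dmat_ndt))
print(max_global_delay)  # 20
```

With `signalV = 0` the source instead zeroes the delay matrix, so the maximum delay becomes 0.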
Get an output of a given name (dot-separated).

Parameters:
- `name` (str, required): a key; grouped outputs are addressed in the form `group.subgroup.variable`.

Returns:
- Output data.
Source code in neurolib/models/model.py
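The dot-notation lookup boils down to a plain nested-dict traversal; a minimal sketch with hypothetical data (not neurolib's actual output object):

```python
def get_output(outputs: dict, name: str):
    # walk a nested dict along a dot-separated key, e.g. "BOLD.t_BOLD"
    node = outputs
    for key in name.split("."):
        assert key in node, f"Key {key} not found in outputs."
        node = node[key]
    return node

outputs = {"t": [0.0, 0.1], "BOLD": {"BOLD": [1.0], "t_BOLD": [0.0]}}
print(get_output(outputs, "BOLD.t_BOLD"))  # [0.0]
```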
Get all outputs of an output group. Examples: `getOutputs("BOLD")` or simply `getOutputs()`.

Parameters:
- `group` (str): group name, subgroups separated by dots. If left empty (default), all outputs of the root group are returned. Defaults to `''`.

Source code in neurolib/models/model.py
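Filtering a group comes down to dropping every nested dictionary (i.e. every sub-group) from the group's dict and keeping only the output arrays; a minimal sketch with hypothetical data:

```python
def filter_group(group_dict: dict) -> dict:
    # keep only output data, dropping nested sub-groups (dicts)
    return {k: v for k, v in group_dict.items() if not isinstance(v, dict)}

outputs = {"t": [0.0, 0.1], "x": [1.0, 2.0], "BOLD": {"BOLD": [0.5]}}
print(filter_group(outputs))  # {'t': [0.0, 0.1], 'x': [1.0, 2.0]}
```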
Initializes the BOLD model. If a `boldInputTransform` attribute is set on the model, it is applied to the model state before it is passed to the BOLD model; this can be used like the neural-efficacy parameter epsilon in Friston 2000, by multiplying the input with a constant.

Source code in neurolib/models/model.py
Initialization before each run: computes the maximum delay of the system, sets the length of the initial condition (`startindt`), checks `dt` against `sampling_dt`, and sets up the BOLD model if requested or if `params["bold"]` is set.

Parameters:
- `initializeBold` (bool): initialize the BOLD model.

Source code in neurolib/models/model.py
Calls the model's `integration` function and saves the state and the outputs of the model.

Parameters:
- `append_outputs` (bool, optional): append the outputs to the outputs attribute. Defaults to False.
- `simulate_bold` (bool, optional): simulate the BOLD model after integration. Defaults to False.

Source code in neurolib/models/model.py
Repeatedly calls the chunkwise integration for the whole duration of the simulation. If `bold == True`, the BOLD model is simulated after each chunk.

Parameters:
- `chunksize` (int, required): size of each chunk to simulate, in units of `dt`.
- `bold` (bool, optional): simulate the BOLD model after each chunk. Defaults to False.
- `append_outputs` (bool, optional): append the chunkwise outputs to the outputs attribute. Defaults to False.

Source code in neurolib/models/model.py
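The chunking loop can be sketched in isolation; this mirrors only the chunk-size arithmetic (including the tolerance for floating-point error in the loop condition), not the integration itself:

```python
def chunk_sizes(duration: float, dt: float, chunksize: int):
    # split a simulation of `duration` into chunks of at most `chunksize`
    # steps, mirroring the loop structure of integrateChunkwise
    sizes = []
    lastT = 0.0
    while duration - lastT >= dt - 1e-6:   # tolerate float rounding
        remaining = int(round((duration - lastT) / dt))
        current = min(chunksize, remaining)
        sizes.append(current)
        lastT += current * dt              # advance by the simulated time
    return sizes

print(chunk_sizes(duration=1.0, dt=0.1, chunksize=4))  # [4, 4, 2]
```

The last chunk is simply whatever remains, which is why `checkChunkwise` verifies that this remainder is still compatible with `sampling_dt`.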
Generates a new set of uniformly distributed random initial conditions for the model.

TODO: All parameters are drawn from the same distribution / range. Allow for independent ranges.

Parameters:
- `min` (float): minimum of the uniform distribution. Defaults to 0.
- `max` (float): maximum of the uniform distribution. Defaults to 1.

Source code in neurolib/models/model.py
Main interfacing function to run a model. The model can be run in three different ways:
1) `model.run()` starts a new run.
2) `model.run(chunkwise=True)` runs the simulation in chunks of length `chunksize`.
3) `model.run(continue_run=True)` continues the simulation of a previous run.

Parameters:
- `inputs` (list[np.ndarray]): list of inputs to the model; must have the same order as `model.input_vars`. Note: no sanity check is performed for performance reasons, so take care of the inputs yourself. Defaults to None.
- `chunkwise` (bool, optional): simulate the model chunkwise or in one single run. Defaults to False.
- `chunksize` (int, optional): size of each chunk to simulate in units of `dt`; if set, implies `chunkwise=True`. Defaults to the number of steps corresponding to 2000 ms (`int(2000 / dt)`).
- `bold` (bool, optional): simulate the BOLD signal (only for chunkwise integration). Defaults to False.
- `append` (bool, optional): append the chunkwise outputs to the outputs attribute. Defaults to False.
- `continue_run` (bool): continue a simulation by using the initial values from a previous simulation. Defaults to False.

Source code in neurolib/models/model.py
Reads the last state of the model and sets the initial conditions to that state for continuing a simulation.
Source code in neurolib/models/model.py
```python
def setInitialValuesToLastState(self):
    """Reads the last state of the model and sets the initial conditions to that state for continuing a simulation."""
    for iv, sv in zip(self.init_vars, self.state_vars):
        # if state variables are one-dimensional (in space only)
        if (self.state[sv].ndim == 0) or (self.state[sv].ndim == 1):
            self.params[iv] = self.state[sv]
        # if they are space-time arrays
        else:
            # we set the next initial condition to the last state
            self.params[iv] = self.state[sv][:, -self.startindt :]
```
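The slicing above keeps only the last `startindt` time steps of each space-time state array as the next run's initial conditions. A minimal numpy sketch of that mechanic (the array and `startindt` value are hypothetical, not the library call):

```python
import numpy as np

# Hypothetical state array: 2 nodes, 100 time steps
state = np.arange(200).reshape(2, 100)
startindt = 5  # assumed number of steps needed as initial conditions

# Same slicing as in setInitialValuesToLastState:
init = state[:, -startindt:]
print(init.shape)  # (2, 5)
```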
Takes inputs from a list and stores them in the appropriate model parameters for external input. TODO: this is not yet safe; checks should be implemented, for example whether the model has inputs defined at all.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `inputs` | `list[np.ndarray(), ...]` | list of inputs | required |

Source code in neurolib/models/model.py
```python
def setInputs(self, inputs):
    """Take inputs from a list and store it in the appropriate model parameter for external input.
    TODO: This is not safe yet, checks should be implemented whether the model has inputs defined or not for example.

    :param inputs: list of inputs
    :type inputs: list[np.ndarray(), ...]
    """
    for i, iv in enumerate(self.input_vars):
        self.params[iv] = inputs[i].copy()
```
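The mapping is purely positional: the n-th array in the list is assigned to the n-th name in `model.input_vars`. A self-contained sketch of that assignment (the variable names and shapes here are assumed for illustration):

```python
import numpy as np

# Hypothetical input variable names, mimicking model.input_vars
input_vars = ["x_ext", "y_ext"]
inputs = [np.ones((1, 10)), np.zeros((1, 10))]
params = {}

# Same positional mapping as setInputs: order must match input_vars
for i, iv in enumerate(input_vars):
    params[iv] = inputs[i].copy()

print(sorted(params))  # ['x_ext', 'y_ext']
```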
Adds an output to the model, typically a simulation result.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | Name of the output in dot.notation, a la "outputgroup.output" | required |
| `data` | `numpy.ndarray` | Output data, can't be a dictionary! | required |

Source code in neurolib/models/model.py
```python
def setOutput(self, name, data, append=False, removeICs=False):
    """Adds an output to the model, typically a simulation result.
    :params name: Name of the output in dot.notation, a la "outputgroup.output"
    :type name: str
    :params data: Output data, can't be a dictionary!
    :type data: `numpy.ndarray`
    """
    assert not isinstance(data, dict), "Output data cannot be a dictionary."
    assert isinstance(name, str), "Output name must be a string."
    assert isinstance(data, np.ndarray), "Output must be a `numpy.ndarray`."

    # remove initial conditions from output
    if removeICs and name != "t":
        if data.ndim == 1:
            data = data[self.startindt :]
        elif data.ndim == 2:
            data = data[:, self.startindt :]
        else:
            raise ValueError(f"Don't know how to truncate data of shape {data.shape}.")

    # subsample to sampling dt
    if data.ndim == 1:
        data = data[:: self.sample_every]
    elif data.ndim == 2:
        data = data[:, :: self.sample_every]
    else:
        raise ValueError(f"Don't know how to subsample data of shape {data.shape}.")

    # if the output is a single name (not dot.separated)
    if "." not in name:
        # append data
        if append and name in self.outputs:
            # special treatment for time data:
            # increment the time by the last recorded duration
            if name == "t":
                data += self.outputs[name][-1]
            self.outputs[name] = np.hstack((self.outputs[name], data))
        else:
            # save all data into output dict
            self.outputs[name] = data
        # set output as an attribute
        setattr(self, name, self.outputs[name])
    else:
        # build results dictionary and write into self.outputs
        # dot.notation iteration
        keys = name.split(".")
        level = self.outputs  # not copy, reference!
        for i, k in enumerate(keys):
            # if it's the last iteration, store data
            if i == len(keys) - 1:
                # TODO: this needs to be append-aware like above
                # if append:
                #     if k == "t":
                #         data += level[k][-1]
                #     level[k] = np.hstack((level[k], data))
                # else:
                #     level[k] = data
                level[k] = data
            # if key is in outputs, then go deeper
            elif k in level:
                level = level[k]
                setattr(self, k, level)
            # if it's a new key, create new nested dictionary, set attribute, then go deeper
            else:
                level[k] = dotdict({})
                setattr(self, k, level[k])
                level = level[k]
```
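The dot.notation branch walks a nested dictionary, creating sub-dictionaries as needed and storing the data at the final key. A self-contained sketch of that traversal (plain dicts instead of neurolib's dotdict):

```python
def set_nested(outputs, name, data):
    """Walk "group.sub.output" style keys, creating levels as needed."""
    keys = name.split(".")
    level = outputs  # reference, not a copy
    for i, k in enumerate(keys):
        if i == len(keys) - 1:
            level[k] = data  # last key stores the data
        else:
            level = level.setdefault(k, {})  # descend, creating if missing
    return outputs

out = {}
set_nested(out, "BOLD.t_BOLD", [0, 1, 2])
print(out)  # {'BOLD': {'t_BOLD': [0, 1, 2]}}
```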
Checks that sampling_dt is set correctly and sets self.sample_every: 1) checks that sampling_dt is a multiple of dt, 2) checks that sampling_dt does not exceed the duration.
Source code in neurolib/models/model.py
```python
def setSamplingDt(self):
    """Checks if sampling_dt is set correctly and sets self.`sample_every`
    1) Check if sampling_dt is a multiple of dt
    2) Check if sampling_dt is not greater than duration
    """

    if self.params.get("sampling_dt") is None:
        self.sample_every = 1
    elif self.params.get("sampling_dt") > 0:
        assert self.params["sampling_dt"] >= self.params["dt"], "`sampling_dt` needs to be >= `dt`"
        assert (
            self.params["duration"] >= self.params["sampling_dt"]
        ), "`sampling_dt` needs to be lower than `duration`"
        self.sample_every = int(self.params["sampling_dt"] / self.params["dt"])
    else:
        raise ValueError(f"Can't handle `sampling_dt`={self.params.get('sampling_dt')}")
```
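The subsampling factor is simply the ratio of the two step sizes; outputs are then stored every `sample_every`-th integration step. A sketch with hypothetical values:

```python
# Hypothetical parameter values illustrating the subsampling factor
dt = 0.5          # integration step (ms)
sampling_dt = 2.0  # desired output resolution (ms)

# Same computation as setSamplingDt: keep every n-th sample
sample_every = int(sampling_dt / dt)
print(sample_every)  # 4
```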
Saves the model's current state variables. TODO: cut state variables to the length of self.maxDelay; however, this could be a time-memory tradeoff.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | name of the state variable | required |
| `data` | `np.ndarray` | value of the variable | required |

Source code in neurolib/models/model.py
```python
def setStateVariables(self, name, data):
    """Saves the models current state variables.

    TODO: Cut state variables to length of self.maxDelay
    However, this could be time-memory tradeoff

    :param name: name of the state variable
    :type name: str
    :param data: value of the variable
    :type data: np.ndarray
    """
    # old
    # self.state[name] = data.copy()

    # if the data is temporal, cut off initial values
    # NOTE: this should actually check for
    # if data.shape[1] > 1:
    # else: data.copy()
    # there could be (N, 1)-dimensional output; right now
    # it is required to be of shape (N, )
    if data.ndim == 2:
        self.state[name] = data[:, -self.startindt :].copy()
    else:
        self.state[name] = data.copy()
```
Gets the default output of the model and simulates the BOLD model. Adds the simulated BOLD signal to outputs.
Source code in neurolib/models/model.py
```python
def simulateBold(self, t, variables, append=False):
    """Gets the default output of the model and simulates the BOLD model.
    Adds the simulated BOLD signal to outputs.
    """
    if self.boldInitialized:
        # first we loop through all state variables
        for svn, sv in zip(self.state_vars, variables):
            # the default output is used as the input for the bold model
            if svn == self.default_output:
                bold_input = sv[:, self.startindt :]
                # logging.debug(f"BOLD input `{svn}` of shape {bold_input.shape}")
                if bold_input.shape[1] >= self.boldModel.samplingRate_NDt:
                    # only if the length of the output has a zero mod to the sampling rate,
                    # the downsampled output from the boldModel can correctly appended to previous data
                    # so: we are lazy here and simply disable appending in that case ...
                    if not bold_input.shape[1] % self.boldModel.samplingRate_NDt == 0:
                        append = False
                        logging.warn(
                            f"Output size {bold_input.shape[1]} is not a multiple of BOLD sampling length {self.boldModel.samplingRate_NDt}, will not append data."
                        )
                    logging.debug(f"Simulating BOLD: boldModel.run(append={append})")

                    # transform bold input according to self.boldInputTransform
                    if self.boldInputTransform:
                        bold_input = self.boldInputTransform(bold_input)

                    # simulate bold model
                    self.boldModel.run(bold_input, append=append)

                    t_BOLD = self.boldModel.t_BOLD
                    BOLD = self.boldModel.BOLD
                    self.setOutput("BOLD.t_BOLD", t_BOLD)
                    self.setOutput("BOLD.BOLD", BOLD)
                else:
                    logging.warn(
                        f"Will not simulate BOLD if output {bold_input.shape[1]*self.params['dt']} not at least of duration {self.boldModel.samplingRate_NDt*self.params['dt']}"
                    )
    else:
        logging.warn("BOLD model not initialized, not simulating BOLD. Use `run(bold=True)`")
```
Takes the simulated variables of the integration and stores them in the appropriate model output and state objects.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `t` | `list` | time vector | required |
| `variables` | `numpy.ndarray` | variables from time integration | required |
| `append` | `bool`, optional | append output to existing output or overwrite, defaults to False | `False` |

Source code in neurolib/models/model.py
```python
def storeOutputsAndStates(self, t, variables, append=False):
    """Takes the simulated variables of the integration and stores it to the appropriate model output and state object.

    :param t: time vector
    :type t: list
    :param variables: variable from time integration
    :type variables: numpy.ndarray
    :param append: append output to existing output or overwrite, defaults to False
    :type append: bool, optional
    """
    # save time array
    self.setOutput("t", t, append=append, removeICs=True)
    self.setStateVariables("t", t)
    # save outputs
    for svn, sv in zip(self.state_vars, variables):
        if svn in self.output_vars:
            self.setOutput(svn, sv, append=append, removeICs=True)
            self.setStateVariables(svn, sv)
```
Converts a group of outputs to xarray. The output group needs to contain an element that starts with the letter "t" or it will not recognize any time axis.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `group` | `str` | Output group name, example: "BOLD". Leave empty for top group. | `''` |

Source code in neurolib/models/model.py
```python
def xr(self, group=""):
    """Converts a group of outputs to xarray. Output group needs to contain an
    element that starts with the letter "t" or it will not recognize any time axis.

    :param group: Output group name, example: "BOLD". Leave empty for top group.
    :type group: str
    """
    assert isinstance(group, str), "Group name must be a string."
    # take all outputs of one group: disregard all dictionaries because they are subgroups
    outputDict = self.getOutputs(group)
    # make sure that there is a time array
    timeDictKey = ""
    if "t" in outputDict:
        timeDictKey = "t"
    else:
        for k in outputDict:
            if k.startswith("t"):
                timeDictKey = k
                logging.info(f"Assuming {k} to be the time axis.")
                break
    assert len(timeDictKey) > 0, f"No time array found (starting with t) in output group {group}."
    t = outputDict[timeDictKey].copy()
    del outputDict[timeDictKey]
    outputs = []
    outputNames = []
    for key, value in outputDict.items():
        outputNames.append(key)
        outputs.append(value)

    nNodes = outputs[0].shape[0]
    nodes = list(range(nNodes))
    allOutputsStacked = np.stack(outputs)
    result = xr.DataArray(allOutputsStacked, coords=[outputNames, nodes, t], dims=["output", "space", "time"])
    return result
```
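The key step is stacking the per-variable `(space, time)` arrays into a single `(output, space, time)` array that becomes the DataArray. A numpy-only sketch of that stacking (the shapes are hypothetical):

```python
import numpy as np

# Two hypothetical output variables, each (3 nodes, 5 time steps)
x = np.random.rand(3, 5)
y = np.random.rand(3, 5)

# xr() stacks all outputs along a new leading "output" dimension
stacked = np.stack([x, y])
print(stacked.shape)  # (2, 3, 5)
```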
Model parameters in neurolib are stored in the dictionary-like object params, one of the model's attributes. Changing parameters is straightforward:
```python
from neurolib.models.aln import ALNModel  # Import the model

model = ALNModel()  # Create an instance
model.params['duration'] = 10 * 1000  # in ms
model.run()  # Run it
```
Parameters are dotdict objects that can also be accessed using the simpler syntax model.params.parameter_name = 123 (see Collections).
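A minimal sketch of such a dotdict-style container (not neurolib's actual implementation) shows why both access styles reach the same entry:

```python
class DotDict(dict):
    """Dictionary whose entries are also reachable as attributes."""

    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError as e:
            raise AttributeError(key) from e

    def __setattr__(self, key, value):
        self[key] = value

params = DotDict({"duration": 2000})
params.duration = 10 * 1000    # attribute-style write
print(params["duration"])      # 10000 -- same entry as key-style access
```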
The default parameters of a model are stored in the loadDefaultParams.py within each model's directory. This function is called by the model.py file upon initialisation and returns all necessary parameters of the model.
Below is an example function that prepares the structural connectivity matrices Cmat and Dmat, all parameters of the model, and its initial values.
```python
def loadDefaultParams(Cmat=None, Dmat=None, seed=None):
    """Load default parameters for a model

    :param Cmat: Structural connectivity matrix (adjacency matrix) of coupling strengths, will be normalized to 1. If not given, then a single node simulation will be assumed, defaults to None
    :type Cmat: numpy.ndarray, optional
    :param Dmat: Fiber length matrix, will be used for computing the delay matrix together with the signal transmission speed parameter `signalV`, defaults to None
    :type Dmat: numpy.ndarray, optional
    :param seed: Seed for the random number generator, defaults to None
    :type seed: int, optional

    :return: A dictionary with the default parameters of the model
    :rtype: dict
    """

    params = dotdict({})

    ### runtime parameters
    params.dt = 0.1  # ms 0.1ms is reasonable
    params.duration = 2000  # Simulation duration (ms)
    np.random.seed(seed)  # seed for RNG of noise and ICs
    # set seed to 0 if None, pypet will complain otherwise
    params.seed = seed or 0

    # make sure that seed=0 remains None
    if seed == 0:
        seed = None

    # ------------------------------------------------------------------------
    # global whole-brain network parameters
    # ------------------------------------------------------------------------

    # the coupling parameter determines how nodes are coupled.
    # "diffusive" for diffusive coupling, "additive" for additive coupling
    params.coupling = "diffusive"

    params.signalV = 20.0
    params.K_gl = 0.6  # global coupling strength

    if Cmat is None:
        params.N = 1
        params.Cmat = np.zeros((1, 1))
        params.lengthMat = np.zeros((1, 1))

    else:
        params.Cmat = Cmat.copy()  # coupling matrix
        np.fill_diagonal(params.Cmat, 0)  # no self connections
        params.N = len(params.Cmat)  # number of nodes
        params.lengthMat = Dmat

    # ------------------------------------------------------------------------
    # local node parameters
    # ------------------------------------------------------------------------

    # external input parameters:
    params.tau_ou = 5.0  # ms Timescale of the Ornstein-Uhlenbeck noise process
    params.sigma_ou = 0.0  # mV/ms/sqrt(ms) noise intensity
    params.x_ou_mean = 0.0  # mV/ms (OU process) [0-5]
    params.y_ou_mean = 0.0  # mV/ms (OU process) [0-5]

    # neural mass model parameters
    params.a = 0.25  # Hopf bifurcation parameter
    params.w = 0.2  # Oscillator frequency, 32 Hz at w = 0.2

    # ------------------------------------------------------------------------

    # initial values of the state variables
    params.xs_init = 0.5 * np.random.uniform(-1, 1, (params.N, 1))
    params.ys_init = 0.5 * np.random.uniform(-1, 1, (params.N, 1))

    # Ornstein-Uhlenbeck noise state variables
    params.x_ou = np.zeros((params.N,))
    params.y_ou = np.zeros((params.N,))

    # values of the external inputs
    params.x_ext = np.zeros((params.N,))
    params.y_ext = np.zeros((params.N,))

    return params
```
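The connectivity preprocessing above can be exercised in isolation; a sketch with a hypothetical 3-node matrix (numpy only, not the neurolib function itself):

```python
import numpy as np

# Hypothetical structural connectivity for 3 nodes
Cmat = np.ones((3, 3))

# Same preprocessing as loadDefaultParams: drop self-connections, infer N
Cmat = Cmat.copy()
np.fill_diagonal(Cmat, 0)  # no self connections
N = len(Cmat)              # number of nodes
print(N, Cmat[0, 0])  # 3 0.0
```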
Evolutionary parameter optimization. This class helps you optimize any function or model using an evolutionary algorithm. It uses the package deap and supports its built-in mating and selection functions as well as custom ones.
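The evaluation function handed to Evolution must return a tuple (fitness, outputs), where the fitness tuple has one entry per weight in weightList. A hedged, self-contained sketch of such a function (the model and fitness measure are made up for illustration, not a runnable neurolib example):

```python
import numpy as np

def evaluate(individual):
    """Hypothetical evaluation function: returns (fitness, outputs).

    `individual` is a parameter vector; the (made-up) goal is to bring
    its mean close to zero, expressed as a fitness to be maximized
    (a single weight of +1).
    """
    outputs = {"mean": float(np.mean(individual))}  # stand-in for model output
    fitness = (-abs(outputs["mean"]),)              # tuple, one entry per weight
    return fitness, outputs

fit, out = evaluate([1.0, 0.0])
print(fit)  # (-0.5,)
```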
Source code in neurolib/optimize/evolution/evolution.py
```python
class Evolution:
    """Evolutionary parameter optimization. This class helps you to optimize any function or model using an evolutionary algorithm.
    It uses the package `deap` and supports its builtin mating and selection functions as well as custom ones.
    """

    def __init__(
        self,
        evalFunction,
        parameterSpace,
        weightList=None,
        model=None,
        filename="evolution.hdf",
        ncores=None,
        POP_INIT_SIZE=100,
        POP_SIZE=20,
        NGEN=10,
        algorithm="adaptive",
        matingOperator=None,
        MATE_P=None,
        mutationOperator=None,
        MUTATE_P=None,
        selectionOperator=None,
        SELECT_P=None,
        parentSelectionOperator=None,
        PARENT_SELECT_P=None,
        individualGenerator=None,
        IND_GENERATOR_P=None,
    ):
        """Initialize evolutionary optimization.
        :param evalFunction: Evaluation function of a run that provides a fitness vector and simulation outputs
        :type evalFunction: function
        :param parameterSpace: Parameter space to run evolution in.
        :type parameterSpace: `neurolib.utils.parameterSpace.ParameterSpace`
        :param weightList: List of floats that defines the dimensionality of the fitness vector returned from evalFunction and the weights of each component for multiobjective optimization (positive = maximize, negative = minimize). If not given, then a single positive weight will be used, defaults to None
        :type weightList: list[float], optional
        :param model: Model to simulate, defaults to None
        :type model: `neurolib.models.model.Model`, optional

        :param filename: HDF file to store all results in, defaults to "evolution.hdf"
        :type filename: str, optional
        :param ncores: Number of cores to simulate on (max cores default), defaults to None
        :type ncores: int, optional

        :param POP_INIT_SIZE: Size of first population to initialize evolution with (random, uniformly distributed), defaults to 100
        :type POP_INIT_SIZE: int, optional
        :param POP_SIZE: Size of the population during evolution, defaults to 20
        :type POP_SIZE: int, optional
        :param NGEN: Numbers of generations to evaluate, defaults to 10
        :type NGEN: int, optional

        :param matingOperator: Custom mating operator, defaults to deap.tools.cxBlend
        :type matingOperator: deap operator, optional
        :param MATE_P: Mating operator keyword arguments (for the default crossover operator cxBlend, this defaults `alpha` = 0.5)
        :type MATE_P: dict, optional

        :param mutationOperator: Custom mutation operator, defaults to du.gaussianAdaptiveMutation_nStepSizes
        :type mutationOperator: deap operator, optional
        :param MUTATE_P: Mutation operator keyword arguments
        :type MUTATE_P: dict, optional

        :param selectionOperator: Custom selection operator, defaults to du.selBest_multiObj
        :type selectionOperator: deap operator, optional
        :param SELECT_P: Selection operator keyword arguments
        :type SELECT_P: dict, optional

        :param parentSelectionOperator: Operator for parent selection, defaults to du.selRank
        :param PARENT_SELECT_P: Parent selection operator keyword arguments (for the default operator selRank, this defaults to `s` = 1.5 in Eiben&Smith p.81)
        :type PARENT_SELECT_P: dict, optional

        :param individualGenerator: Function to generate initial individuals, defaults to du.randomParametersAdaptive
        """

        if weightList is None:
            logging.info("weightList not set, assuming single fitness value to be maximized.")
            weightList = [1.0]

        trajectoryName = "results" + datetime.datetime.now().strftime("-%Y-%m-%d-%HH-%MM-%SS")
        logging.info(f"Trajectory Name: {trajectoryName}")
        self.HDF_FILE = os.path.join(paths.HDF_DIR, filename)
        trajectoryFileName = self.HDF_FILE

        logging.info("Storing data to: {}".format(trajectoryFileName))
        logging.info("Trajectory Name: {}".format(trajectoryName))
        if ncores is None:
            ncores = multiprocessing.cpu_count()
        logging.info("Number of cores: {}".format(ncores))

        # initialize pypet environment
        # env = pp.Environment(trajectory=trajectoryName, filename=trajectoryFileName)
        env = pp.Environment(
            trajectory=trajectoryName,
            filename=trajectoryFileName,
            use_pool=False,
            multiproc=True,
            ncores=ncores,
            complevel=9,
            log_config=paths.PYPET_LOGGING_CONFIG,
        )

        # Get the trajectory from the environment
        traj = env.traj
        # Sanity check if everything went ok
        assert (
            trajectoryName == traj.v_name
        ), f"Pypet trajectory has a different name than trajectoryName {trajectoryName}"
        # trajectoryName = traj.v_name

        self.model = model
        self.evalFunction = evalFunction
        self.weightList = weightList

        self.NGEN = NGEN
        assert POP_SIZE % 2 == 0, "Please chose an even number for POP_SIZE!"
        self.POP_SIZE = POP_SIZE
        assert POP_INIT_SIZE % 2 == 0, "Please chose an even number for POP_INIT_SIZE!"
        self.POP_INIT_SIZE = POP_INIT_SIZE
        self.ncores = ncores

        # comment string for storing info
        self.comments = "no comments"

        self.traj = env.traj
        self.env = env
        self.trajectoryName = trajectoryName
        self.trajectoryFileName = trajectoryFileName

        self._initialPopulationSimulated = False

        # -------- settings
        self.verbose = False
        self.verbose_plotting = True
        self.plotColor = "C0"

        # -------- simulation
        self.parameterSpace = parameterSpace
        self.ParametersInterval = self.parameterSpace.named_tuple_constructor
        self.paramInterval = self.parameterSpace.named_tuple

        self.toolbox = deap.base.Toolbox()

        # -------- algorithms
        if algorithm == "adaptive":
            logging.info(f"Evolution: Using algorithm: {algorithm}")
            self.matingOperator = tools.cxBlend
            self.MATE_P = {"alpha": 0.5} or MATE_P
            self.mutationOperator = du.gaussianAdaptiveMutation_nStepSizes
            self.selectionOperator = du.selBest_multiObj
            self.parentSelectionOperator = du.selRank
            self.PARENT_SELECT_P = {"s": 1.5} or PARENT_SELECT_P
            self.individualGenerator = du.randomParametersAdaptive

        elif algorithm == "nsga2":
            logging.info(f"Evolution: Using algorithm: {algorithm}")
            self.matingOperator = tools.cxSimulatedBinaryBounded
            self.MATE_P = {
                "low": self.parameterSpace.lowerBound,
                "up": self.parameterSpace.upperBound,
                "eta": 20.0,
            } or MATE_P
            self.mutationOperator = tools.mutPolynomialBounded
            self.MUTATE_P = {
                "low": self.parameterSpace.lowerBound,
                "up": self.parameterSpace.upperBound,
                "eta": 20.0,
                "indpb": 1.0 / len(self.weightList),
            } or MUTATE_P
            self.selectionOperator = tools.selNSGA2
            self.parentSelectionOperator = tools.selTournamentDCD
            self.individualGenerator = du.randomParameters

        else:
            raise ValueError("Evolution: algorithm must be one of the following: ['adaptive', 'nsga2']")

        # if the operators are set manually, then overwrite them
        self.matingOperator = self.matingOperator if hasattr(self, "matingOperator") else matingOperator
        self.mutationOperator = self.mutationOperator if hasattr(self, "mutationOperator") else mutationOperator
        self.selectionOperator = self.selectionOperator if hasattr(self, "selectionOperator") else selectionOperator
        self.parentSelectionOperator = (
            self.parentSelectionOperator if hasattr(self, "parentSelectionOperator") else parentSelectionOperator
        )
        self.individualGenerator = (
            self.individualGenerator if hasattr(self, "individualGenerator") else individualGenerator
        )

        # let's also make sure that the parameters are set correctly
        self.MATE_P = self.MATE_P if hasattr(self, "MATE_P") else {}
        self.PARENT_SELECT_P = self.PARENT_SELECT_P if hasattr(self, "PARENT_SELECT_P") else {}
        self.MUTATE_P = self.MUTATE_P if hasattr(self, "MUTATE_P") else {}
        self.SELECT_P = self.SELECT_P if hasattr(self, "SELECT_P") else {}

        self._initDEAP(
            self.toolbox,
            self.env,
            self.paramInterval,
            self.evalFunction,
            weightList=self.weightList,
            matingOperator=self.matingOperator,
            mutationOperator=self.mutationOperator,
            selectionOperator=self.selectionOperator,
            parentSelectionOperator=self.parentSelectionOperator,
            individualGenerator=self.individualGenerator,
        )

        # set up pypet trajectory
        self._initPypetTrajectory(
            self.traj,
            self.paramInterval,
            self.POP_SIZE,
            self.NGEN,
            self.model,
        )

        # population history: dict of all valid individuals per generation
        self.history = {}

        # initialize population
        self.evaluationCounter = 0
        self.last_id = 0

    def run(self, verbose=False, verbose_plotting=True):
        """Run the evolution or continue previous evolution. If evolution was not initialized first
        using `runInitial()`, this will be done.

        :param verbose: Print and plot state of evolution during run, defaults to False
        :type verbose: bool, optional
        """

        self.verbose = verbose
        self.verbose_plotting = verbose_plotting
        if not self._initialPopulationSimulated:
            self.runInitial()

        self.runEvolution()

    def getIndividualFromTraj(self, traj):
        """Get individual from pypet trajectory

        :param traj: Pypet trajectory
        :type traj: `pypet.trajectory.Trajectory`
        :return: Individual (`DEAP` type)
        :rtype: `deap.creator.Individual`
        """
        # either pass an individual or a pypet trajectory with the attribute individual
        if type(traj).__name__ == "Individual":
            individual = traj
        else:
            individual = traj.individual
            ind_id = traj.id
            individual = [p for p in self.pop if p.id == ind_id]
            if len(individual) > 0:
                individual = individual[0]
        return individual

    def getModelFromTraj(self, traj):
        """Return the appropriate model with parameters for this individual

        :param traj: Pypet trajectory with individual (traj.individual) or directly a deap.Individual
        :type traj: `pypet.trajectory.Trajectory`
        :return: Model with the parameters of this individual.
        :rtype: `neurolib.models.model.Model`
        """
        model = self.model
        # resolve star notation - MultiModel
        individual_params = self.individualToDict(self.getIndividualFromTraj(traj))
        if self.parameterSpace.star:
            individual_params = unwrap_star_dotdict(individual_params, self.model, replaced_dict=BACKWARD_REPLACE)
        model.params.update(individual_params)
        return model

    def getIndividualFromHistory(self, id):
        """Searches the entire evolution history for an individual with a specific id and returns it.

        :param id: Individual id
        :type id: int
        :return: Individual (`DEAP` type)
        :rtype: `deap.creator.Individual`
        """
        for key, value in self.history.items():
            for p in value:
                if p.id == id:
                    return p
        logging.warning(f"No individual with id={id} found. Returning `None`")
        return None

    def individualToDict(self, individual):
        """Convert an individual to a parameter dictionary.

        :param individual: Individual (`DEAP` type)
        :type individual: `deap.creator.Individual`
        :return: Parameter dictionary of this individual
        :rtype: dict
        """
        return self.ParametersInterval(*(individual[: len(self.paramInterval)]))._asdict().copy()

    def _initPypetTrajectory(self, traj, paramInterval, POP_SIZE, NGEN, model):
        """Initializes pypet trajectory and stores all simulation parameters for later analysis.

        :param traj: Pypet trajectory (must be already initialized!)
        :type traj: `pypet.trajectory.Trajectory`
        :param paramInterval: Parameter space, from ParameterSpace class
        :type paramInterval: parameterSpace.named_tuple
        :param POP_SIZE: Population size
        :type POP_SIZE: int
        :param NGEN: Number of generations
        :type NGEN: int
        :param model: Model to store the default parameters of
        :type model: `neurolib.models.model.Model`
        """
        # Initialize pypet trajectory and add all simulation parameters
        traj.f_add_parameter("popsize", POP_SIZE, comment="Population size")
        traj.f_add_parameter("NGEN", NGEN, comment="Number of generations")

        # Placeholders for individuals and results that are about to be explored
        traj.f_add_parameter("generation", 0, comment="Current generation")

        traj.f_add_result("scores", [], comment="Score of all individuals for each generation")
        traj.f_add_result_group("evolution", comment="Contains results for each generation")
        traj.f_add_result_group("outputs", comment="Contains simulation results")

        # TODO: save evolution parameters and operators as well, MATE_P, MUTATE_P, etc.

        # if a model was given, save its parameters
        # NOTE: Convert model.params to dict() since it is a dotdict() and pypet doesn't like that
        if model is not None:
            params_dict = dict(model.params)
            # replace all None with zeros, pypet doesn't like None
            for key, value in params_dict.items():
                if value is None:
                    params_dict[key] = "None"
            traj.f_add_result("params", params_dict, comment="Default parameters")

        # todo: initialize this after individuals have been defined!
        traj.f_add_parameter("id", 0, comment="Index of individual")
        traj.f_add_parameter("ind_len", 20, comment="Length of individual")
        traj.f_add_derived_parameter(
            "individual",
            [0 for x in range(traj.ind_len)],
            "An individual of the population",
        )

    def _initDEAP(
        self,
        toolbox,
        pypetEnvironment,
        paramInterval,
        evalFunction,
        weightList,
        matingOperator,
        mutationOperator,
        selectionOperator,
        parentSelectionOperator,
        individualGenerator,
    ):
        """Initializes DEAP and registers all methods to the deap.toolbox

        :param toolbox: Deap toolbox
        :type toolbox: deap.base.Toolbox
        :param pypetEnvironment: Pypet environment (must be initialized first!)
        :type pypetEnvironment: [type]
        :param paramInterval: Parameter space, from ParameterSpace class
        :type paramInterval: parameterSpace.named_tuple
        :param evalFunction: Evaluation function
        :type evalFunction: function
        :param weightList: List of weights for multiobjective optimization
        :type weightList: list[float]
        :param matingOperator: Mating function (crossover)
        :type matingOperator: function
        :param selectionOperator: Parent selection function
        :type selectionOperator: function
        :param individualGenerator: Function that generates individuals
        """
        # ------------- register everything in deap
        deap.creator.create("FitnessMulti", deap.base.Fitness, weights=tuple(weightList))
        deap.creator.create("Individual", list, fitness=deap.creator.FitnessMulti)

        # initially, each individual has randomized genes
        # need to create a lambda function because du.generateRandomParams wants an argument but
        # toolbox.register cannot pass an argument to it.
        toolbox.register(
            "individual",
            deap.tools.initIterate,
            deap.creator.Individual,
            lambda: individualGenerator(paramInterval),
        )
        logging.info(f"Evolution: Individual generation: {individualGenerator}")

        toolbox.register("population", deap.tools.initRepeat, list, toolbox.individual)
        toolbox.register("map", pypetEnvironment.run)
        toolbox.register("run_map", pypetEnvironment.run_map)

        def _worker(arg, fn):
            """
            Wrapper to get original exception from inner, `fn`, function.
            """
            try:
                return fn(arg)
            except Exception as e:
                logging.exception(e)
                raise

        toolbox.register("evaluate", partial(_worker, fn=evalFunction))

        # Operator registering

        toolbox.register("mate", matingOperator)
        logging.info(f"Evolution: Mating operator: {matingOperator}")

        toolbox.register("mutate", mutationOperator)
        logging.info(f"Evolution: Mutation operator: {mutationOperator}")

        toolbox.register("selBest", du.selBest_multiObj)
        toolbox.register("selectParents", parentSelectionOperator)
        logging.info(f"Evolution: Parent selection: {parentSelectionOperator}")
        toolbox.register("select", selectionOperator)
        logging.info(f"Evolution: Selection operator: {selectionOperator}")

    def _evalPopulationUsingPypet(self, traj, toolbox, pop, gIdx):
        """Evaluate the fitness of the population of the current generation using pypet

        :param traj: Pypet trajectory
        :type traj: `pypet.trajectory.Trajectory`
        :param toolbox: `deap` toolbox
        :type toolbox: deap.base.Toolbox
        :param pop: Population
        :type pop: list
        :param gIdx: Index of the current generation
        :type gIdx: int
        :return: Evaluated population with fitnesses
        :rtype: list
        """
        # Add as many explored runs as individuals that need to be evaluated.
        # Furthermore, add the individuals as explored parameters.
        # We need to convert them to lists or write our own custom IndividualParameter ;-)
        # Note the second argument to `cartesian_product`:
        # This is for only having the cartesian product
        # between ``generation x (ind_idx AND individual)``, so that every individual has just one
        # unique index within a generation.

        # this function is necessary for the NSGA-2 algorithms because
        # some operators return np.float64 instead of float and pypet
        # does not like individuals with mixed types... sigh.
        def _cleanIndividual(ind):
            return [float(i) for i in ind]

        traj.f_expand(
            pp.cartesian_product(
                {
                    "generation": [gIdx],
                    "id": [x.id for x in pop],
                    "individual": [list(_cleanIndividual(x)) for x in pop],
                },
                [("id", "individual"), "generation"],
            )
        )  # the current generation  # unique id of each individual

        # increment the evaluationCounter
        self.evaluationCounter += len(pop)

        # run simulations for one generation
        evolutionResult = toolbox.map(toolbox.evaluate)

        # This error can have different reasons but is most likely
        # due to multiprocessing problems. One possibility is that your evaluation
        # function is not pickleable or that it returns an object that is not pickleable.
        assert len(evolutionResult) > 0, "No results returned from simulations."

        for idx, result in enumerate(evolutionResult):
            runIndex, packedReturnFromEvalFunction = result

            # packedReturnFromEvalFunction is the return from the evaluation function
            # it has length two, the first is the fitness, second is the model output
            assert (
                len(packedReturnFromEvalFunction) == 2
            ), "Evaluation function must return tuple with shape (fitness, output_data)"

            fitnessesResult, returnedOutputs = packedReturnFromEvalFunction

            # store simulation outputs
            pop[idx].outputs = returnedOutputs

            # store fitness values
            pop[idx].fitness.values = fitnessesResult

            # compute score
            pop[idx].fitness.score = np.ma.masked_invalid(pop[idx].fitness.wvalues).sum() / (
                len(pop[idx].fitness.wvalues)
            )
        return pop

    def getValidPopulation(self, pop=None):
        """Returns a list of the valid population.

        :params pop: Population to check, defaults to self.pop
        :type pop: deap population
        :return: List of valid population
        :rtype: list
        """
        pop = pop or self.pop
        return [p for p in pop if np.isfinite(p.fitness.values).all()]

    def getInvalidPopulation(self, pop=None):
        """Returns a list of the invalid population.

        :params pop: Population to check, defaults to self.pop
        :type pop: deap population
        :return: List of invalid population
        :rtype: list
        """
        pop = pop or self.pop
        return [p for p in pop if not np.isfinite(p.fitness.values).all()]

    def _tagPopulation(self, pop):
        """Take a fresh population and add ids and attributes such as parameters that we can use later

        :param pop: Fresh population
        :type pop: list
        :return: Population with tags
        :rtype: list
        """
        for i, ind in enumerate(pop):
            assert not hasattr(ind, "id"), "Individual has an id already, will not overwrite it!"
            ind.id = self.last_id
            ind.gIdx = self.gIdx
            ind.simulation_stored = False
            ind_dict = self.individualToDict(ind)
            for key, value in ind_dict.items():
                # set the parameters as attributes for easy access
                setattr(ind, key, value)
            ind.params = ind_dict
            # increment id counter
            self.last_id += 1
        return pop

    def runInitial(self):
        """Run the first round of evolution with the initial population of size `POP_INIT_SIZE`
        and select the best `POP_SIZE` for the following evolution. This needs to be run before `runEvolution()`
        """
        self._t_start_initial_population = datetime.datetime.now()

        # Create the initial population
        self.pop = self.toolbox.population(n=self.POP_INIT_SIZE)

        ### Evaluate the initial population
        logging.info("Evaluating initial population of size %i ..." % len(self.pop))
        self.gIdx = 0  # set generation index
        self.pop = self._tagPopulation(self.pop)

        # evaluate
        self.pop = self._evalPopulationUsingPypet(self.traj, self.toolbox, self.pop, self.gIdx)

        if self.verbose:
            eu.printParamDist(self.pop, self.paramInterval, self.gIdx)

        # save all simulation data to pypet
        self.pop = eu.saveToPypet(self.traj, self.pop, self.gIdx)

        # reduce initial population to popsize
        self.pop = self.toolbox.select(self.pop, k=self.traj.popsize, **self.SELECT_P)

        self._initialPopulationSimulated = True

        # populate history for tracking
        self.history[self.gIdx] = self.pop  # self.getValidPopulation(self.pop)

        self._t_end_initial_population = datetime.datetime.now()

    def runEvolution(self):
        """Run the evolutionary optimization process for `NGEN` generations."""
        # Start evolution
        logging.info("Start of evolution")
        self._t_start_evolution = datetime.datetime.now()
        for self.gIdx in range(self.gIdx + 1, self.gIdx + self.traj.NGEN):
            # ------- Weed out the invalid individuals and replace them by random new individuals -------- #
            validpop = self.getValidPopulation(self.pop)
            # replace invalid individuals
```
invalidpop = self.getInvalidPopulation(self.pop)\n\n logging.info(\"Replacing {} invalid individuals.\".format(len(invalidpop)))\n newpop = self.toolbox.population(n=len(invalidpop))\n newpop = self._tagPopulation(newpop)\n\n # ------- Create the next generation by crossover and mutation -------- #\n ### Select parents using rank selection and clone them ###\n offspring = list(\n map(\n self.toolbox.clone,\n self.toolbox.selectParents(self.pop, self.POP_SIZE, **self.PARENT_SELECT_P),\n )\n )\n\n ##### cross-over ####\n for i in range(1, len(offspring), 2):\n offspring[i - 1], offspring[i] = self.toolbox.mate(offspring[i - 1], offspring[i], **self.MATE_P)\n # delete fitness inherited from parents\n del offspring[i - 1].fitness.values, offspring[i].fitness.values\n del offspring[i - 1].fitness.wvalues, offspring[i].fitness.wvalues\n\n # assign parent IDs to new offspring\n offspring[i - 1].parentIds = offspring[i - 1].id, offspring[i].id\n offspring[i].parentIds = offspring[i - 1].id, offspring[i].id\n\n # delete id originally set from parents, needs to be deleted here!\n # will be set later in _tagPopulation()\n del offspring[i - 1].id, offspring[i].id\n\n ##### Mutation ####\n # Apply mutation\n du.mutateUntilValid(offspring, self.paramInterval, self.toolbox, MUTATE_P=self.MUTATE_P)\n\n offspring = self._tagPopulation(offspring)\n\n # ------- Evaluate next generation -------- #\n\n self.pop = offspring + newpop\n self._evalPopulationUsingPypet(self.traj, self.toolbox, offspring + newpop, self.gIdx)\n\n # log individuals\n self.history[self.gIdx] = validpop + offspring + newpop # self.getValidPopulation(self.pop)\n\n # ------- Select surviving population -------- #\n\n # select next generation\n self.pop = self.toolbox.select(validpop + offspring + newpop, k=self.traj.popsize, **self.SELECT_P)\n\n # ------- END OF ROUND -------\n\n # save all simulation data to pypet\n self.pop = eu.saveToPypet(self.traj, self.pop, self.gIdx)\n\n # select best individual for 
logging\n self.best_ind = self.toolbox.selBest(self.pop, 1)[0]\n\n # text log\n next_print = print if self.verbose else logging.info\n next_print(\"----------- Generation %i -----------\" % self.gIdx)\n next_print(\"Best individual is {}\".format(self.best_ind))\n next_print(\"Score: {}\".format(self.best_ind.fitness.score))\n next_print(\"Fitness: {}\".format(self.best_ind.fitness.values))\n next_print(\"--- Population statistics ---\")\n\n # verbose output\n if self.verbose:\n self.info(plot=self.verbose_plotting, info=True)\n\n logging.info(\"--- End of evolution ---\")\n logging.info(\"Best individual is %s, %s\" % (self.best_ind, self.best_ind.fitness.values))\n logging.info(\"--- End of evolution ---\")\n\n self.traj.f_store() # We switched off automatic storing, so we need to store manually\n self._t_end_evolution = datetime.datetime.now()\n\n self._buildEvolutionTree()\n\n def _buildEvolutionTree(self):\n\"\"\"Builds a genealogy tree that is networkx compatible.\n\n Plot the tree using:\n\n import matplotlib.pyplot as plt\n import networkx as nx\n from networkx.drawing.nx_pydot import graphviz_layout\n\n G = nx.DiGraph(evolution.tree)\n G = G.reverse() # Make the graph top-down\n pos = graphviz_layout(G, prog='dot')\n plt.figure(figsize=(8, 8))\n nx.draw(G, pos, node_size=50, alpha=0.5, node_color=list(evolution.genx.values()), with_labels=False)\n plt.show()\n \"\"\"\n self.tree = dict()\n self.id_genx = dict()\n self.id_score = dict()\n\n for gen, pop in self.history.items():\n for p in pop:\n self.tree[p.id] = p.parentIds if hasattr(p, \"parentIds\") else ()\n self.id_genx[p.id] = p.gIdx\n self.id_score[p.id] = p.fitness.score\n\n def info(self, plot=True, bestN=5, info=True, reverse=False):\n\"\"\"Print and plot information about the evolution and the current population\n\n :param plot: plot a plot using `matplotlib`, defaults to True\n :type plot: bool, optional\n :param bestN: Print summary of `bestN` best individuals, defaults to 5\n :type bestN: 
int, optional\n :param info: Print information about the evolution environment\n :type info: bool, optional\n \"\"\"\n if info:\n eu.printEvolutionInfo(self)\n validPop = self.getValidPopulation(self.pop)\n scores = self.getScores()\n # Text output\n print(\"--- Info summary ---\")\n print(\"Valid: {}\".format(len(validPop)))\n print(\"Mean score (weighted fitness): {:.2}\".format(np.mean(scores)))\n eu.printParamDist(self.pop, self.paramInterval, self.gIdx)\n print(\"--------------------\")\n print(f\"Best {bestN} individuals:\")\n eu.printIndividuals(self.toolbox.selBest(self.pop, bestN), self.paramInterval)\n print(\"--------------------\")\n # Plotting evolutionary progress\n if plot:\n # hack: during the evolution we need to use reverse=True\n # after the evolution (with evolution.info()), we need False\n try:\n self.plotProgress(reverse=reverse)\n except:\n logging.warning(\"Could not plot progress, is this a previously saved simulation?\")\n eu.plotPopulation(\n self,\n plotScattermatrix=True,\n save_plots=self.trajectoryName,\n color=self.plotColor,\n )\n\n def plotProgress(self, reverse=False):\n\"\"\"Plots progress of fitnesses of current evolution run\"\"\"\n eu.plotProgress(self, reverse=reverse)\n\n def saveEvolution(self, fname=None):\n\"\"\"Save evolution to file using dill.\n\n :param fname: Filename, defaults to a path in ./data/\n :type fname: str, optional\n \"\"\"\n import dill\n\n fname = fname or os.path.join(\"data/\", \"evolution-\" + self.trajectoryName + \".dill\")\n dill.dump(self, open(fname, \"wb\"))\n logging.info(f\"Saving evolution to {fname}\")\n\n def loadEvolution(self, fname):\n\"\"\"Load evolution from previously saved simulations.\n\n Example usage:\n ```\n evaluateSimulation = lambda x: x # the function can be omitted, that's why we define a lambda here\n pars = ParameterSpace(['a', 'b'], # should be same as previously saved evolution\n [[0.0, 4.0], [0.0, 5.0]])\n evolution = Evolution(evaluateSimulation, pars, weightList = 
[1.0])\n evolution = evolution.loadEvolution(\"data/evolution-results-2020-05-15-00H-24M-48S.dill\")\n ```\n\n :param fname: Filename, defaults to a path in ./data/\n :type fname: str\n :return: Evolution\n :rtype: self\n \"\"\"\n import dill\n\n evolution = dill.load(open(fname, \"rb\"))\n # parameter space is not saved correctly in dill, don't know why\n # that is why we recreate it using the values of\n # the parameter space in the dill\n pars = ParameterSpace(\n evolution.parameterSpace.parameterNames,\n evolution.parameterSpace.parameterValues,\n )\n\n evolution.parameterSpace = pars\n evolution.paramInterval = evolution.parameterSpace.named_tuple\n evolution.ParametersInterval = evolution.parameterSpace.named_tuple_constructor\n return evolution\n\n def _outputToDf(self, pop, df):\n\"\"\"Loads outputs dictionary from evolution from the .outputs attribute\n and writes data into a dataframe.\n\n :param pop: Population of which to get outputs from.\n :type pop: list\n :param df: Dataframe to which outputs are written\n :type df: pandas.core.frame.DataFrame\n :return: Dataframe with outputs\n :rtype: pandas.core.frame.DataFrame\n \"\"\"\n # defines which variable types will be saved in the results dataframe\n SUPPORTED_TYPES = (float, int, np.ndarray, list)\n SCALAR_TYPES = (float, int)\n ARRAY_TYPES = (np.ndarray, list)\n\n assert len(pop) == len(df), \"Dataframe and population do not have same length.\"\n nan_value = np.nan\n # load outputs into dataframe\n for i, p in enumerate(pop):\n if hasattr(p, \"outputs\"):\n for key, value in p.outputs.items():\n # only save floats, ints and arrays\n if isinstance(value, SUPPORTED_TYPES):\n # save 1-dim arrays\n if isinstance(value, ARRAY_TYPES):\n # to save a numpy array, convert column to object type\n if key not in df:\n df[key] = None\n df[key] = df[key].astype(object)\n df.at[i, key] = value\n elif isinstance(value, SCALAR_TYPES):\n # save numbers\n df.loc[i, key] = value\n else:\n df.loc[i, key] = nan_value\n 
return df\n\n def _dropDuplicatesFromDf(self, df):\n\"\"\"Drops duplicates from dfEvolution dataframe.\n Tries vanilla drop_duplicates, which fails if the Dataframe contains\n data objects like numpy.arrays. Tries to drop via key \"id\" if it fails.\n\n :param df: Input dataframe with duplicates to drop\n :type df: pandas.core.frame.DataFrame\n :return: Dataframe without duplicates\n :rtype: pandas.core.frame.DataFrame\n \"\"\"\n try:\n df = df.drop_duplicates()\n except:\n logging.info('Failed to drop_duplicates() without column name. Trying by column \"id\".')\n try:\n df = df.drop_duplicates(subset=\"id\")\n except:\n logging.warning(\"Failed to drop_duplicates from dataframe.\")\n return df\n\n def dfPop(self, outputs=False):\n\"\"\"Returns a `pandas` DataFrame of the current generation's population parameters.\n This object can be further used to easily analyse the population.\n :return: Pandas DataFrame with all individuals and their parameters\n :rtype: `pandas.core.frame.DataFrame`\n \"\"\"\n # add the current population to the dataframe\n validPop = self.getValidPopulation(self.pop)\n indIds = [p.id for p in validPop]\n popArray = np.array([p[0 : len(self.paramInterval._fields)] for p in validPop]).T\n\n dfPop = pd.DataFrame(popArray, index=self.parameterSpace.parameterNames).T\n\n # add more information to the dataframe\n scores = self.getScores()\n dfPop[\"score\"] = scores\n dfPop[\"id\"] = indIds\n dfPop[\"gen\"] = [p.gIdx for p in validPop]\n\n if outputs:\n dfPop = self._outputToDf(validPop, dfPop)\n\n # add fitness columns\n # NOTE: when loading an evolution with dill using loadingEvolution\n # MultiFitness values dissappear and only one is left.\n # See dfEvolution() for a solution using wvalues\n n_fitnesses = len(validPop[0].fitness.values)\n for i in range(n_fitnesses):\n for ip, p in enumerate(validPop):\n column_name = \"f\" + str(i)\n dfPop.loc[ip, column_name] = p.fitness.values[i]\n return dfPop\n\n def dfEvolution(self, 
outputs=False):\n\"\"\"Returns a `pandas` DataFrame with the individuals of the the whole evolution.\n This method can be usef after loading an evolution from disk using loadEvolution()\n\n :return: Pandas DataFrame with all individuals and their parameters\n :rtype: `pandas.core.frame.DataFrame`\n \"\"\"\n parameters = self.parameterSpace.parameterNames\n allIndividuals = [p for gen, pop in self.history.items() for p in pop]\n popArray = np.array([p[0 : len(self.paramInterval._fields)] for p in allIndividuals]).T\n dfEvolution = pd.DataFrame(popArray, index=parameters).T\n # add more information to the dataframe\n scores = [float(p.fitness.score) for p in allIndividuals]\n indIds = [p.id for p in allIndividuals]\n dfEvolution[\"score\"] = scores\n dfEvolution[\"id\"] = indIds\n dfEvolution[\"gen\"] = [p.gIdx for p in allIndividuals]\n\n if outputs:\n dfEvolution = self._outputToDf(allIndividuals, dfEvolution)\n\n # add fitness columns\n # NOTE: have to do this with wvalues and divide by weights later, why?\n # Because after loading the evolution with dill, somehow multiple fitnesses\n # dissappear and only the first one is left. 
However, wvalues still has all\n # fitnesses, and we have acces to weightList, so this hack kind of helps\n n_fitnesses = len(self.pop[0].fitness.wvalues)\n for i in range(n_fitnesses):\n for ip, p in enumerate(allIndividuals):\n dfEvolution.loc[ip, f\"f{i}\"] = p.fitness.wvalues[i] / self.weightList[i]\n\n # the history keeps all individuals of all generations\n # there can be duplicates (in elitism for example), which we filter\n # out for the dataframe\n dfEvolution = self._dropDuplicatesFromDf(dfEvolution)\n dfEvolution = dfEvolution.reset_index(drop=True)\n return dfEvolution\n\n def loadResults(self, filename=None, trajectoryName=None):\n\"\"\"Load results from a hdf file of a previous evolution and store the\n pypet trajectory in `self.traj`\n\n :param filename: hdf filename of the previous run, defaults to None\n :type filename: str, optional\n :param trajectoryName: Name of the trajectory in the hdf file to load. If not given, the last one will be loaded, defaults to None\n :type trajectoryName: str, optional\n \"\"\"\n if filename == None:\n filename = self.HDF_FILE\n self.traj = pu.loadPypetTrajectory(filename, trajectoryName)\n\n def getScores(self):\n\"\"\"Returns the scores of the current valid population\"\"\"\n validPop = self.getValidPopulation(self.pop)\n return np.array([pop.fitness.score for pop in validPop])\n\n def getScoresDuringEvolution(self, traj=None, drop_first=True, reverse=False):\n\"\"\"Get the scores of each generation's population.\n\n :param traj: Pypet trajectory. If not given, the current trajectory is used, defaults to None\n :type traj: `pypet.trajectory.Trajectory`, optional\n :param drop_first: Drop the first (initial) generation. This can be usefull because it can have a different size (`POP_INIT_SIZE`) than the succeeding populations (`POP_SIZE`) which can make data handling tricky, defaults to True\n :type drop_first: bool, optional\n :param reverse: Reverse the order of each generation. 
This is a necessary workaraound because loading from the an hdf file returns the generations in a reversed order compared to loading each generation from the pypet trajectory in memory, defaults to False\n :type reverse: bool, optional\n :return: Tuple of list of all generations and an array of the scores of all individuals\n :rtype: tuple[list, numpy.ndarray]\n \"\"\"\n if traj == None:\n traj = self.traj\n\n generation_names = list(traj.results.evolution.f_to_dict(nested=True).keys())\n\n if reverse:\n generation_names = generation_names[::-1]\n if drop_first and \"gen_000000\" in generation_names:\n generation_names.remove(\"gen_000000\")\n\n npop = len(traj.results.evolution[generation_names[0]].scores)\n\n gens = []\n all_scores = np.empty((len(generation_names), npop))\n\n for i, r in enumerate(generation_names):\n gens.append(i)\n scores = traj.results.evolution[r].scores\n all_scores[i] = scores\n\n if drop_first:\n gens = np.add(gens, 1)\n\n return gens, all_scores\n
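The score assigned in `_evalPopulationUsingPypet` is simply the mean of the DEAP weighted fitness values (`wvalues`), with non-finite entries masked out so a diverged objective does not poison the whole score. A minimal numpy sketch of that computation (the standalone helper `score` is illustrative, not part of the API):

```python
import numpy as np

def score(wvalues):
    """Mean of the weighted fitness values with NaN/inf entries masked,
    mirroring the score computation in _evalPopulationUsingPypet."""
    return float(np.ma.masked_invalid(wvalues).sum() / len(wvalues))

# A two-objective individual where one fitness diverged:
# the invalid entry is masked and contributes 0 to the sum,
# but the divisor is still the full number of objectives.
print(score([0.8, np.nan]))
print(score([1.0, 0.5]))
```

Note that masking keeps the denominator at `len(wvalues)`, so an individual with an invalid objective is penalized relative to one that achieved the same values on all objectives.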
List of floats that defines the dimensionality of the fitness vector returned from evalFunction and the weights of each component for multiobjective optimization (positive = maximize, negative = minimize). If not given, then a single positive weight will be used, defaults to None.

model (`neurolib.models.model.Model`, optional): Model to simulate, defaults to None.

filename (str, optional): HDF file to store all results in, defaults to "evolution.hdf".

ncores (int, optional): Number of cores to simulate on (max cores default), defaults to None.

POP_INIT_SIZE (int, optional): Size of first population to initialize evolution with (random, uniformly distributed), defaults to 100.

POP_SIZE (int, optional): Size of the population during evolution, defaults to 20.

NGEN (int, optional): Number of generations to evaluate, defaults to 10.

matingOperator (deap operator, optional): Custom mating operator, defaults to deap.tools.cxBlend.

MATE_P (dict, optional): Mating operator keyword arguments (for the default crossover operator cxBlend, this defaults to alpha = 0.5), defaults to None.

mutationOperator (deap operator, optional): Custom mutation operator, defaults to du.gaussianAdaptiveMutation_nStepSizes.

MUTATE_P (dict, optional): Mutation operator keyword arguments, defaults to None.

selectionOperator (deap operator, optional): Custom selection operator, defaults to du.selBest_multiObj.

SELECT_P (dict, optional): Selection operator keyword arguments, defaults to None.

parentSelectionOperator (optional): Operator for parent selection, defaults to du.selRank.

PARENT_SELECT_P (dict, optional): Parent selection operator keyword arguments (for the default operator selRank, this defaults to s = 1.5, see Eiben & Smith, p. 81), defaults to None.

individualGenerator (optional): Function to generate initial individuals, defaults to du.randomParametersAdaptive.

Source code in neurolib/optimize/evolution/evolution.py
def __init__(
    self,
    evalFunction,
    parameterSpace,
    weightList=None,
    model=None,
    filename="evolution.hdf",
    ncores=None,
    POP_INIT_SIZE=100,
    POP_SIZE=20,
    NGEN=10,
    algorithm="adaptive",
    matingOperator=None,
    MATE_P=None,
    mutationOperator=None,
    MUTATE_P=None,
    selectionOperator=None,
    SELECT_P=None,
    parentSelectionOperator=None,
    PARENT_SELECT_P=None,
    individualGenerator=None,
    IND_GENERATOR_P=None,
):
    """Initialize evolutionary optimization.

    :param evalFunction: Evaluation function of a run that provides a fitness vector and simulation outputs
    :type evalFunction: function
    :param parameterSpace: Parameter space to run evolution in.
    :type parameterSpace: `neurolib.utils.parameterSpace.ParameterSpace`
    :param weightList: List of floats that defines the dimensionality of the fitness vector returned from evalFunction and the weights of each component for multiobjective optimization (positive = maximize, negative = minimize). If not given, then a single positive weight will be used, defaults to None
    :type weightList: list[float], optional
    :param model: Model to simulate, defaults to None
    :type model: `neurolib.models.model.Model`, optional

    :param filename: HDF file to store all results in, defaults to "evolution.hdf"
    :type filename: str, optional
    :param ncores: Number of cores to simulate on (max cores default), defaults to None
    :type ncores: int, optional

    :param POP_INIT_SIZE: Size of first population to initialize evolution with (random, uniformly distributed), defaults to 100
    :type POP_INIT_SIZE: int, optional
    :param POP_SIZE: Size of the population during evolution, defaults to 20
    :type POP_SIZE: int, optional
    :param NGEN: Number of generations to evaluate, defaults to 10
    :type NGEN: int, optional

    :param matingOperator: Custom mating operator, defaults to deap.tools.cxBlend
    :type matingOperator: deap operator, optional
    :param MATE_P: Mating operator keyword arguments (for the default crossover operator cxBlend, this defaults to `alpha` = 0.5)
    :type MATE_P: dict, optional

    :param mutationOperator: Custom mutation operator, defaults to du.gaussianAdaptiveMutation_nStepSizes
    :type mutationOperator: deap operator, optional
    :param MUTATE_P: Mutation operator keyword arguments
    :type MUTATE_P: dict, optional

    :param selectionOperator: Custom selection operator, defaults to du.selBest_multiObj
    :type selectionOperator: deap operator, optional
    :param SELECT_P: Selection operator keyword arguments
    :type SELECT_P: dict, optional

    :param parentSelectionOperator: Operator for parent selection, defaults to du.selRank
    :param PARENT_SELECT_P: Parent selection operator keyword arguments (for the default operator selRank, this defaults to `s` = 1.5, see Eiben & Smith, p. 81)
    :type PARENT_SELECT_P: dict, optional

    :param individualGenerator: Function to generate initial individuals, defaults to du.randomParametersAdaptive
    """

    if weightList is None:
        logging.info("weightList not set, assuming single fitness value to be maximized.")
        weightList = [1.0]

    trajectoryName = "results" + datetime.datetime.now().strftime("-%Y-%m-%d-%HH-%MM-%SS")
    logging.info(f"Trajectory Name: {trajectoryName}")
    self.HDF_FILE = os.path.join(paths.HDF_DIR, filename)
    trajectoryFileName = self.HDF_FILE

    logging.info("Storing data to: {}".format(trajectoryFileName))
    logging.info("Trajectory Name: {}".format(trajectoryName))
    if ncores is None:
        ncores = multiprocessing.cpu_count()
    logging.info("Number of cores: {}".format(ncores))

    # initialize pypet environment
    env = pp.Environment(
        trajectory=trajectoryName,
        filename=trajectoryFileName,
        use_pool=False,
        multiproc=True,
        ncores=ncores,
        complevel=9,
        log_config=paths.PYPET_LOGGING_CONFIG,
    )

    # Get the trajectory from the environment
    traj = env.traj
    # Sanity check if everything went ok
    assert (
        trajectoryName == traj.v_name
    ), f"Pypet trajectory has a different name than trajectoryName {trajectoryName}"

    self.model = model
    self.evalFunction = evalFunction
    self.weightList = weightList

    self.NGEN = NGEN
    assert POP_SIZE % 2 == 0, "Please choose an even number for POP_SIZE!"
    self.POP_SIZE = POP_SIZE
    assert POP_INIT_SIZE % 2 == 0, "Please choose an even number for POP_INIT_SIZE!"
    self.POP_INIT_SIZE = POP_INIT_SIZE
    self.ncores = ncores

    # comment string for storing info
    self.comments = "no comments"

    self.traj = env.traj
    self.env = env
    self.trajectoryName = trajectoryName
    self.trajectoryFileName = trajectoryFileName

    self._initialPopulationSimulated = False

    # -------- settings
    self.verbose = False
    self.verbose_plotting = True
    self.plotColor = "C0"

    # -------- simulation
    self.parameterSpace = parameterSpace
    self.ParametersInterval = self.parameterSpace.named_tuple_constructor
    self.paramInterval = self.parameterSpace.named_tuple

    self.toolbox = deap.base.Toolbox()

    # -------- algorithms
    # user-supplied operator parameters take precedence over the algorithm defaults
    if algorithm == "adaptive":
        logging.info(f"Evolution: Using algorithm: {algorithm}")
        self.matingOperator = tools.cxBlend
        self.MATE_P = MATE_P or {"alpha": 0.5}
        self.mutationOperator = du.gaussianAdaptiveMutation_nStepSizes
        self.selectionOperator = du.selBest_multiObj
        self.parentSelectionOperator = du.selRank
        self.PARENT_SELECT_P = PARENT_SELECT_P or {"s": 1.5}
        self.individualGenerator = du.randomParametersAdaptive

    elif algorithm == "nsga2":
        logging.info(f"Evolution: Using algorithm: {algorithm}")
        self.matingOperator = tools.cxSimulatedBinaryBounded
        self.MATE_P = MATE_P or {
            "low": self.parameterSpace.lowerBound,
            "up": self.parameterSpace.upperBound,
            "eta": 20.0,
        }
        self.mutationOperator = tools.mutPolynomialBounded
        self.MUTATE_P = MUTATE_P or {
            "low": self.parameterSpace.lowerBound,
            "up": self.parameterSpace.upperBound,
            "eta": 20.0,
            "indpb": 1.0 / len(self.weightList),
        }
        self.selectionOperator = tools.selNSGA2
        self.parentSelectionOperator = tools.selTournamentDCD
        self.individualGenerator = du.randomParameters

    else:
        raise ValueError("Evolution: algorithm must be one of the following: ['adaptive', 'nsga2']")

    # if the operators are set manually, then overwrite them
    self.matingOperator = self.matingOperator if hasattr(self, "matingOperator") else matingOperator
    self.mutationOperator = self.mutationOperator if hasattr(self, "mutationOperator") else mutationOperator
    self.selectionOperator = self.selectionOperator if hasattr(self, "selectionOperator") else selectionOperator
    self.parentSelectionOperator = (
        self.parentSelectionOperator if hasattr(self, "parentSelectionOperator") else parentSelectionOperator
    )
    self.individualGenerator = (
        self.individualGenerator if hasattr(self, "individualGenerator") else individualGenerator
    )

    # let's also make sure that the parameters are set correctly
    self.MATE_P = self.MATE_P if hasattr(self, "MATE_P") else {}
    self.PARENT_SELECT_P = self.PARENT_SELECT_P if hasattr(self, "PARENT_SELECT_P") else {}
    self.MUTATE_P = self.MUTATE_P if hasattr(self, "MUTATE_P") else {}
    self.SELECT_P = self.SELECT_P if hasattr(self, "SELECT_P") else {}

    self._initDEAP(
        self.toolbox,
        self.env,
        self.paramInterval,
        self.evalFunction,
        weightList=self.weightList,
        matingOperator=self.matingOperator,
        mutationOperator=self.mutationOperator,
        selectionOperator=self.selectionOperator,
        parentSelectionOperator=self.parentSelectionOperator,
        individualGenerator=self.individualGenerator,
    )

    # set up pypet trajectory
    self._initPypetTrajectory(
        self.traj,
        self.paramInterval,
        self.POP_SIZE,
        self.NGEN,
        self.model,
    )

    # population history: dict of all valid individuals per generation
    self.history = {}

    # initialize population
    self.evaluationCounter = 0
    self.last_id = 0
Returns a pandas DataFrame with the individuals of the whole evolution. This method can be used after loading an evolution from disk using loadEvolution().
Returns:

`pandas.core.frame.DataFrame`: Pandas DataFrame with all individuals and their parameters
Source code in neurolib/optimize/evolution/evolution.py
def dfEvolution(self, outputs=False):
    """Returns a `pandas` DataFrame with the individuals of the whole evolution.
    This method can be used after loading an evolution from disk using loadEvolution().

    :return: Pandas DataFrame with all individuals and their parameters
    :rtype: `pandas.core.frame.DataFrame`
    """
    parameters = self.parameterSpace.parameterNames
    allIndividuals = [p for gen, pop in self.history.items() for p in pop]
    popArray = np.array([p[0 : len(self.paramInterval._fields)] for p in allIndividuals]).T
    dfEvolution = pd.DataFrame(popArray, index=parameters).T
    # add more information to the dataframe
    scores = [float(p.fitness.score) for p in allIndividuals]
    indIds = [p.id for p in allIndividuals]
    dfEvolution["score"] = scores
    dfEvolution["id"] = indIds
    dfEvolution["gen"] = [p.gIdx for p in allIndividuals]

    if outputs:
        dfEvolution = self._outputToDf(allIndividuals, dfEvolution)

    # add fitness columns
    # NOTE: have to do this with wvalues and divide by weights later, why?
    # Because after loading the evolution with dill, somehow multiple fitnesses
    # disappear and only the first one is left. However, wvalues still has all
    # fitnesses, and we have access to weightList, so this hack kind of helps
    n_fitnesses = len(self.pop[0].fitness.wvalues)
    for i in range(n_fitnesses):
        for ip, p in enumerate(allIndividuals):
            dfEvolution.loc[ip, f"f{i}"] = p.fitness.wvalues[i] / self.weightList[i]

    # the history keeps all individuals of all generations
    # there can be duplicates (in elitism for example), which we filter
    # out for the dataframe
    dfEvolution = self._dropDuplicatesFromDf(dfEvolution)
    dfEvolution = dfEvolution.reset_index(drop=True)
    return dfEvolution
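The fitness reconstruction in dfEvolution relies on DEAP's convention that `wvalues` stores the raw fitness components multiplied by their weights; dividing by `weightList` recovers the raw values. A small numpy sketch of that reconstruction (the example weights and values are illustrative):

```python
import numpy as np

# DEAP stores wvalues = values * weights, so element-wise division
# by the weights recovers the raw fitness vector, as dfEvolution() does.
weightList = [1.0, -0.5]            # maximize f0, minimize f1
values = np.array([2.0, 4.0])       # raw fitness vector
wvalues = values * np.array(weightList)  # what DEAP keeps after dill round-trip

recovered = wvalues / np.array(weightList)
print(recovered)
```

This is why the division only works for nonzero weights, which DEAP requires anyway for a meaningful multiobjective fitness.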
Returns a pandas DataFrame of the current generation's population parameters. This object can be further used to easily analyse the population.
Returns:

`pandas.core.frame.DataFrame`: Pandas DataFrame with all individuals and their parameters
Source code in neurolib/optimize/evolution/evolution.py
def dfPop(self, outputs=False):
    """Returns a `pandas` DataFrame of the current generation's population parameters.
    This object can be further used to easily analyse the population.

    :return: Pandas DataFrame with all individuals and their parameters
    :rtype: `pandas.core.frame.DataFrame`
    """
    # add the current population to the dataframe
    validPop = self.getValidPopulation(self.pop)
    indIds = [p.id for p in validPop]
    popArray = np.array([p[0 : len(self.paramInterval._fields)] for p in validPop]).T

    dfPop = pd.DataFrame(popArray, index=self.parameterSpace.parameterNames).T

    # add more information to the dataframe
    scores = self.getScores()
    dfPop["score"] = scores
    dfPop["id"] = indIds
    dfPop["gen"] = [p.gIdx for p in validPop]

    if outputs:
        dfPop = self._outputToDf(validPop, dfPop)

    # add fitness columns
    # NOTE: when loading an evolution with dill using loadEvolution,
    # MultiFitness values disappear and only one is left.
    # See dfEvolution() for a solution using wvalues
    n_fitnesses = len(validPop[0].fitness.values)
    for i in range(n_fitnesses):
        for ip, p in enumerate(validPop):
            column_name = "f" + str(i)
            dfPop.loc[ip, column_name] = p.fitness.values[i]
    return dfPop
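When called with outputs=True, dfPop delegates to `_outputToDf`, which stores per-individual arrays by converting the column to object dtype and assigning cell-wise with `df.at`. A self-contained pandas sketch of that pattern (the column name `rates_exc` is just an example, not a required output name):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"score": [0.1, 0.2]})

# Storing an array per row requires an object-dtype column,
# assigned cell-wise with df.at, as _outputToDf() does.
key = "rates_exc"
if key not in df:
    df[key] = None
df[key] = df[key].astype(object)
df.at[0, key] = np.array([0.0, 1.0, 2.0])
df.at[1, key] = np.array([3.0, 4.0])

# rows can hold arrays of different lengths
print(df[key].map(len).tolist())
```

Without the explicit object dtype, pandas would try to broadcast the array across the column, which is why the conversion happens before the cell-wise assignment.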
Searches the entire evolution history for an individual with a specific id and returns it.
Parameters:

id (int, required): Individual id
Returns:

`deap.creator.Individual`: Individual (DEAP type)
Source code in neurolib/optimize/evolution/evolution.py
def getIndividualFromHistory(self, id):
    """Searches the entire evolution history for an individual with a specific id and returns it.

    :param id: Individual id
    :type id: int
    :return: Individual (`DEAP` type)
    :rtype: `deap.creator.Individual`
    """
    for key, value in self.history.items():
        for p in value:
            if p.id == id:
                return p
    logging.warning(f"No individual with id={id} found. Returning `None`")
    return None
Source code in neurolib/optimize/evolution/evolution.py
```python
def getInvalidPopulation(self, pop=None):
    """Returns a list of the invalid population.

    :param pop: Population to check, defaults to self.pop
    :type pop: deap population
    :return: List of invalid population
    :rtype: list
    """
    pop = pop or self.pop
    return [p for p in pop if not np.isfinite(p.fitness.values).all()]
```
Return the appropriate model with parameters for this individual
Parameters:
- `traj` (`pypet.trajectory.Trajectory`, required): Pypet trajectory with individual (traj.individual) or directly a deap.Individual

Returns:
- `neurolib.models.model.Model`: Model with the parameters of this individual.
Source code in neurolib/optimize/evolution/evolution.py
```python
def getModelFromTraj(self, traj):
    """Return the appropriate model with parameters for this individual.

    :param traj: Pypet trajectory with individual (traj.individual) or directly a deap.Individual
    :type traj: `pypet.trajectory.Trajectory`
    :return: Model with the parameters of this individual.
    :rtype: `neurolib.models.model.Model`
    """
    model = self.model
    # resolve star notation - MultiModel
    individual_params = self.individualToDict(self.getIndividualFromTraj(traj))
    if self.parameterSpace.star:
        individual_params = unwrap_star_dotdict(individual_params, self.model, replaced_dict=BACKWARD_REPLACE)
    model.params.update(individual_params)
    return model
```
Returns the scores of the current valid population
Source code in neurolib/optimize/evolution/evolution.py
```python
def getScores(self):
    """Returns the scores of the current valid population."""
    validPop = self.getValidPopulation(self.pop)
    return np.array([pop.fitness.score for pop in validPop])
```
Parameters:
- `traj` (`pypet.trajectory.Trajectory`, optional): Pypet trajectory. If not given, the current trajectory is used, defaults to None
- `drop_first` (bool, optional): Drop the first (initial) generation. This can be useful because it can have a different size (POP_INIT_SIZE) than the succeeding populations (POP_SIZE), which can make data handling tricky, defaults to True
- `reverse` (bool, optional): Reverse the order of each generation. This is a necessary workaround because loading from an hdf file returns the generations in reversed order compared to loading each generation from the pypet trajectory in memory, defaults to False

Returns:
- `tuple[list, numpy.ndarray]`: Tuple of a list of all generations and an array of the scores of all individuals
Source code in neurolib/optimize/evolution/evolution.py
```python
def getScoresDuringEvolution(self, traj=None, drop_first=True, reverse=False):
    """Get the scores of each generation's population.

    :param traj: Pypet trajectory. If not given, the current trajectory is used, defaults to None
    :type traj: `pypet.trajectory.Trajectory`, optional
    :param drop_first: Drop the first (initial) generation. This can be useful because it can have a different size (`POP_INIT_SIZE`) than the succeeding populations (`POP_SIZE`), which can make data handling tricky, defaults to True
    :type drop_first: bool, optional
    :param reverse: Reverse the order of each generation. This is a necessary workaround because loading from an hdf file returns the generations in reversed order compared to loading each generation from the pypet trajectory in memory, defaults to False
    :type reverse: bool, optional
    :return: Tuple of a list of all generations and an array of the scores of all individuals
    :rtype: tuple[list, numpy.ndarray]
    """
    if traj is None:
        traj = self.traj

    generation_names = list(traj.results.evolution.f_to_dict(nested=True).keys())

    if reverse:
        generation_names = generation_names[::-1]
    if drop_first and "gen_000000" in generation_names:
        generation_names.remove("gen_000000")

    npop = len(traj.results.evolution[generation_names[0]].scores)

    gens = []
    all_scores = np.empty((len(generation_names), npop))

    for i, r in enumerate(generation_names):
        gens.append(i)
        scores = traj.results.evolution[r].scores
        all_scores[i] = scores

    if drop_first:
        gens = np.add(gens, 1)

    return gens, all_scores
```
Source code in neurolib/optimize/evolution/evolution.py
```python
def getValidPopulation(self, pop=None):
    """Returns a list of the valid population.

    :param pop: Population to check, defaults to self.pop
    :type pop: deap population
    :return: List of valid population
    :rtype: list
    """
    pop = pop or self.pop
    return [p for p in pop if np.isfinite(p.fitness.values).all()]
```
Print and plot information about the evolution and the current population
Parameters:
- `plot` (bool, optional): Plot a figure using matplotlib, defaults to True
- `bestN` (int, optional): Print summary of `bestN` best individuals, defaults to 5
- `info` (bool, optional): Print information about the evolution environment, defaults to True

Source code in neurolib/optimize/evolution/evolution.py
```python
def info(self, plot=True, bestN=5, info=True, reverse=False):
    """Print and plot information about the evolution and the current population.

    :param plot: plot a plot using `matplotlib`, defaults to True
    :type plot: bool, optional
    :param bestN: Print summary of `bestN` best individuals, defaults to 5
    :type bestN: int, optional
    :param info: Print information about the evolution environment
    :type info: bool, optional
    """
    if info:
        eu.printEvolutionInfo(self)
    validPop = self.getValidPopulation(self.pop)
    scores = self.getScores()
    # Text output
    print("--- Info summary ---")
    print("Valid: {}".format(len(validPop)))
    print("Mean score (weighted fitness): {:.2}".format(np.mean(scores)))
    eu.printParamDist(self.pop, self.paramInterval, self.gIdx)
    print("--------------------")
    print(f"Best {bestN} individuals:")
    eu.printIndividuals(self.toolbox.selBest(self.pop, bestN), self.paramInterval)
    print("--------------------")
    # Plotting evolutionary progress
    if plot:
        # hack: during the evolution we need to use reverse=True
        # after the evolution (with evolution.info()), we need False
        try:
            self.plotProgress(reverse=reverse)
        except:
            logging.warning("Could not plot progress, is this a previously saved simulation?")
        eu.plotPopulation(
            self,
            plotScattermatrix=True,
            save_plots=self.trajectoryName,
            color=self.plotColor,
        )
```
Load evolution from previously saved simulations. Example usage:

```python
evaluateSimulation = lambda x: x  # the function can be omitted, that's why we define a lambda here
pars = ParameterSpace(['a', 'b'],  # should be same as previously saved evolution
                      [[0.0, 4.0], [0.0, 5.0]])
evolution = Evolution(evaluateSimulation, pars, weightList=[1.0])
evolution = evolution.loadEvolution("data/evolution-results-2020-05-15-00H-24M-48S.dill")
```
Parameters:
- `fname` (str, required): Filename, defaults to a path in ./data/

Returns:
- `self`: Evolution
Source code in neurolib/optimize/evolution/evolution.py
````python
def loadEvolution(self, fname):
    """Load evolution from previously saved simulations.

    Example usage:
    ```
    evaluateSimulation = lambda x: x  # the function can be omitted, that's why we define a lambda here
    pars = ParameterSpace(['a', 'b'],  # should be same as previously saved evolution
                          [[0.0, 4.0], [0.0, 5.0]])
    evolution = Evolution(evaluateSimulation, pars, weightList=[1.0])
    evolution = evolution.loadEvolution("data/evolution-results-2020-05-15-00H-24M-48S.dill")
    ```

    :param fname: Filename, defaults to a path in ./data/
    :type fname: str
    :return: Evolution
    :rtype: self
    """
    import dill

    evolution = dill.load(open(fname, "rb"))
    # parameter space is not saved correctly in dill, don't know why
    # that is why we recreate it using the values of
    # the parameter space in the dill
    pars = ParameterSpace(
        evolution.parameterSpace.parameterNames,
        evolution.parameterSpace.parameterValues,
    )

    evolution.parameterSpace = pars
    evolution.paramInterval = evolution.parameterSpace.named_tuple
    evolution.ParametersInterval = evolution.parameterSpace.named_tuple_constructor
    return evolution
````
Load results from a hdf file of a previous evolution and store the pypet trajectory in self.traj
Parameters:
- `filename` (str, optional): hdf filename of the previous run, defaults to None
- `trajectoryName` (str, optional): Name of the trajectory in the hdf file to load. If not given, the last one will be loaded, defaults to None

Source code in neurolib/optimize/evolution/evolution.py
```python
def loadResults(self, filename=None, trajectoryName=None):
    """Load results from a hdf file of a previous evolution and store the
    pypet trajectory in `self.traj`.

    :param filename: hdf filename of the previous run, defaults to None
    :type filename: str, optional
    :param trajectoryName: Name of the trajectory in the hdf file to load. If not given, the last one will be loaded, defaults to None
    :type trajectoryName: str, optional
    """
    if filename is None:
        filename = self.HDF_FILE
    self.traj = pu.loadPypetTrajectory(filename, trajectoryName)
```
Run the evolution or continue previous evolution. If evolution was not initialized first using runInitial(), this will be done.
Parameters:
- `verbose` (bool, optional): Print and plot state of evolution during run, defaults to False
- `verbose_plotting` (bool, optional): Plot intermediate results during verbose runs, defaults to True

Source code in neurolib/optimize/evolution/evolution.py
```python
def run(self, verbose=False, verbose_plotting=True):
    """Run the evolution or continue previous evolution. If evolution was not initialized first
    using `runInitial()`, this will be done.

    :param verbose: Print and plot state of evolution during run, defaults to False
    :type verbose: bool, optional
    """
    self.verbose = verbose
    self.verbose_plotting = verbose_plotting
    if not self._initialPopulationSimulated:
        self.runInitial()

    self.runEvolution()
```
Run the evolutionary optimization process for NGEN generations.
Source code in neurolib/optimize/evolution/evolution.py
```python
def runEvolution(self):
    """Run the evolutionary optimization process for `NGEN` generations."""
    # Start evolution
    logging.info("Start of evolution")
    self._t_start_evolution = datetime.datetime.now()
    for self.gIdx in range(self.gIdx + 1, self.gIdx + self.traj.NGEN):
        # ------- Weed out the invalid individuals and replace them by random new individuals -------- #
        validpop = self.getValidPopulation(self.pop)
        # replace invalid individuals
        invalidpop = self.getInvalidPopulation(self.pop)

        logging.info("Replacing {} invalid individuals.".format(len(invalidpop)))
        newpop = self.toolbox.population(n=len(invalidpop))
        newpop = self._tagPopulation(newpop)

        # ------- Create the next generation by crossover and mutation -------- #
        ### Select parents using rank selection and clone them ###
        offspring = list(
            map(
                self.toolbox.clone,
                self.toolbox.selectParents(self.pop, self.POP_SIZE, **self.PARENT_SELECT_P),
            )
        )

        ##### cross-over ####
        for i in range(1, len(offspring), 2):
            offspring[i - 1], offspring[i] = self.toolbox.mate(offspring[i - 1], offspring[i], **self.MATE_P)
            # delete fitness inherited from parents
            del offspring[i - 1].fitness.values, offspring[i].fitness.values
            del offspring[i - 1].fitness.wvalues, offspring[i].fitness.wvalues

            # assign parent IDs to new offspring
            offspring[i - 1].parentIds = offspring[i - 1].id, offspring[i].id
            offspring[i].parentIds = offspring[i - 1].id, offspring[i].id

            # delete id originally set from parents, needs to be deleted here!
            # will be set later in _tagPopulation()
            del offspring[i - 1].id, offspring[i].id

        ##### Mutation ####
        # Apply mutation
        du.mutateUntilValid(offspring, self.paramInterval, self.toolbox, MUTATE_P=self.MUTATE_P)

        offspring = self._tagPopulation(offspring)

        # ------- Evaluate next generation -------- #

        self.pop = offspring + newpop
        self._evalPopulationUsingPypet(self.traj, self.toolbox, offspring + newpop, self.gIdx)

        # log individuals
        self.history[self.gIdx] = validpop + offspring + newpop  # self.getValidPopulation(self.pop)

        # ------- Select surviving population -------- #

        # select next generation
        self.pop = self.toolbox.select(validpop + offspring + newpop, k=self.traj.popsize, **self.SELECT_P)

        # ------- END OF ROUND -------

        # save all simulation data to pypet
        self.pop = eu.saveToPypet(self.traj, self.pop, self.gIdx)

        # select best individual for logging
        self.best_ind = self.toolbox.selBest(self.pop, 1)[0]

        # text log
        next_print = print if self.verbose else logging.info
        next_print("----------- Generation %i -----------" % self.gIdx)
        next_print("Best individual is {}".format(self.best_ind))
        next_print("Score: {}".format(self.best_ind.fitness.score))
        next_print("Fitness: {}".format(self.best_ind.fitness.values))
        next_print("--- Population statistics ---")

        # verbose output
        if self.verbose:
            self.info(plot=self.verbose_plotting, info=True)

    logging.info("--- End of evolution ---")
    logging.info("Best individual is %s, %s" % (self.best_ind, self.best_ind.fitness.values))
    logging.info("--- End of evolution ---")

    self.traj.f_store()  # We switched off automatic storing, so we need to store manually
    self._t_end_evolution = datetime.datetime.now()

    self._buildEvolutionTree()
```
Run the first round of evolution with the initial population of size POP_INIT_SIZE and select the best POP_SIZE for the following evolution. This needs to be run before runEvolution().
Source code in neurolib/optimize/evolution/evolution.py
```python
def runInitial(self):
    """Run the first round of evolution with the initial population of size `POP_INIT_SIZE`
    and select the best `POP_SIZE` for the following evolution. This needs to be run before `runEvolution()`.
    """
    self._t_start_initial_population = datetime.datetime.now()

    # Create the initial population
    self.pop = self.toolbox.population(n=self.POP_INIT_SIZE)

    ### Evaluate the initial population
    logging.info("Evaluating initial population of size %i ..." % len(self.pop))
    self.gIdx = 0  # set generation index
    self.pop = self._tagPopulation(self.pop)

    # evaluate
    self.pop = self._evalPopulationUsingPypet(self.traj, self.toolbox, self.pop, self.gIdx)

    if self.verbose:
        eu.printParamDist(self.pop, self.paramInterval, self.gIdx)

    # save all simulation data to pypet
    self.pop = eu.saveToPypet(self.traj, self.pop, self.gIdx)

    # reduce initial population to popsize
    self.pop = self.toolbox.select(self.pop, k=self.traj.popsize, **self.SELECT_P)

    self._initialPopulationSimulated = True

    # populate history for tracking
    self.history[self.gIdx] = self.pop  # self.getValidPopulation(self.pop)

    self._t_end_initial_population = datetime.datetime.now()
```
Empirical datasets are stored in the neurolib/data/datasets directory. In each dataset, subject-wise functional and structural data is stored as MATLAB .mat matrices that can be opened in Python using SciPy's loadmat function. Structural data are \(N \times N\), and functional time series are \(N \times t\) matrices, \(N\) being the number of brain regions and \(t\) the number of time steps. Example datasets are included in neurolib and custom datasets can be added by placing them in the dataset directory.
To simulate a whole-brain network model, first we need to load the structural connectivity matrices from a DTI data set. The matrices are usually a result of processing DTI data and performing fiber tractography using software like FSL or DSIStudio. The handling of the datasets is done by the Dataset class, and the attributes in the following refer to its instances. Upon initialization, the subject-wise data set is loaded from disk. For all examples in this paper, we use freely available data from the ConnectomeDB of the Human Connectome Project (HCP). For a given parcellation of the brain into \(N\) brain regions, these matrices are the \(N \times N\) adjacency matrix self.Cmat, i.e. the structural connectivity matrix, which determines the coupling strengths between brain areas, and the fiber length matrix Dmat, which determines the signal transmission delays. The two example datasets currently included in neurolib use the 80 cortical regions of the AAL2 atlas to define the brain areas and are sorted in an LRLR-ordering.
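As a back-of-the-envelope sketch of how fiber lengths translate into transmission delays (the signal speed `signalV`, the time step `dt`, and the rounding to integer steps below are illustrative assumptions, not values taken from this class):

```python
import numpy as np

# Hypothetical values for illustration only
Dmat = np.array([[0.0, 120.0],
                 [120.0, 0.0]])  # fiber lengths in mm
signalV = 20.0                   # assumed signal speed in mm/ms
dt = 0.1                         # assumed integration time step in ms

# delay in ms = length / speed; divide by dt to express it in time steps
Dmat_ndt = np.around(Dmat / signalV / dt).astype(int)
```

A 120 mm fiber at 20 mm/ms gives a 6 ms delay, i.e. 60 integration steps at dt = 0.1 ms.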
The elements of the structural connectivity matrix Cmat are typically the number of reconstructed fibers from DTI tractography. Since the number of fibers depends on the method and the parameters of the (probabilistic or deterministic) tractography, they need to be normalized using one of the three implemented methods. The first method max simply divides the entries of Cmat by the largest entry, such that the largest entry becomes 1. The second method waytotal divides the entries of each column of Cmat by the number of fiber tracts generated from the respective brain region during probabilistic tractography in FSL, which is contained in the waytotal.txt file. The third method nvoxel divides the entries of each column of Cmat by the size, e.g., the number of voxels, of the corresponding brain area. The last two methods yield an asymmetric connectivity matrix, while the first one keeps Cmat symmetric. All normalization steps are done on the subject-wise matrices Cmats and Dmats. In a final step, all matrices can also be averaged across all subjects to yield one Cmat and Dmat per dataset.
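The max and waytotal normalizations can be sketched in plain NumPy. The per-region counts below are made up from the matrix itself purely for illustration; in practice they come from FSL's waytotal.txt:

```python
import numpy as np

rng = np.random.default_rng(42)
cmat = rng.integers(1, 1000, size=(4, 4)).astype(float)  # raw fiber counts

# "max": divide by the global maximum; preserves symmetry if the matrix was symmetric
cmat_max = cmat / cmat.max()

# "waytotal"-style: divide each column by a per-region tract count; result is asymmetric.
# Here the counts are derived from the matrix itself only for illustration.
waytotal = cmat.sum(axis=0)
cmat_way = cmat / waytotal  # broadcasts over columns
```

After the max normalization the largest entry is exactly 1; after the column-wise normalization each column sums to 1 (with these illustrative counts).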
Subject-wise fMRI time series must be in an \((N \times t)\)-dimensional format, where \(N\) is the number of brain regions and \(t\) the length of the time series. Each region-wise time series represents the BOLD activity averaged across all voxels of that region, which can also be obtained from software like FSL. Functional connectivity (FC) captures the spatial correlation structure of the BOLD time series averaged across the entire time of the recording. FC matrices are accessible via the attribute FCs and are generated by computing the Pearson correlation of the time series between all regions, yielding an \(N \times N\) matrix for each subject.
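A minimal NumPy sketch of how such an FC matrix is obtained, using `np.corrcoef` on random surrogate data; the `fc` helper here is a stand-in for neurolib's own `func.fc`:

```python
import numpy as np

def fc(ts):
    """Pearson functional connectivity of an (N, t) region-by-time array."""
    return np.corrcoef(ts)

rng = np.random.default_rng(0)
bold = rng.standard_normal((5, 200))  # 5 regions, 200 time points
fc_mat = fc(bold)                     # (5, 5), symmetric, unit diagonal
```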
To capture the temporal fluctuations of time-dependent FC(t), which are lost when averaging across the entire recording time, functional connectivity dynamics matrices (FCDs) are computed as the element-wise Pearson correlation of time-dependent FC(t) matrices in a moving window across the BOLD time series of a chosen window length of, for example, 1 min. This yields a \(t_{FCD} \times t_{FCD}\) FCD matrix for each subject, with \(t_{FCD}\) being the number of steps the window was moved.
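A standalone sketch of the sliding-window FCD computation. Window and step sizes below are in samples and chosen arbitrarily; neurolib's `func.fcd` is the actual implementation:

```python
import numpy as np

def fcd(ts, window=50, step=10):
    """Sliding-window FCD: Pearson correlation between the upper triangles
    of windowed FC matrices. `window` and `step` are in samples here."""
    n, t = ts.shape
    triu = np.triu_indices(n, k=1)
    # one flattened upper-triangular FC vector per window position
    fcs = [
        np.corrcoef(ts[:, s:s + window])[triu]
        for s in range(0, t - window + 1, step)
    ]
    # correlate the windowed FC vectors against each other
    return np.corrcoef(np.array(fcs))

rng = np.random.default_rng(1)
bold = rng.standard_normal((5, 300))
fcd_mat = fcd(bold)  # (n_windows, n_windows)
```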
Source code in neurolib/utils/loadData.py
```python
class Dataset:
    """
    This class is for loading empirical Datasets. Datasets are stored as matrices and can
    include functional (e.g. fMRI) and structural (e.g. DTI) data.

    ## Format

    Empirical datasets are stored in the `neurolib/data/datasets` directory. In each dataset,
    subject-wise functional and structural data is stored as MATLAB `.mat` matrices that can
    be opened in Python using SciPy's `loadmat` function. Structural data are
    $N \\times N$, and functional time series are $N \\times t$ matrices, $N$ being the number
    of brain regions and $t$ the number of time steps. Example datasets are included in `neurolib`
    and custom datasets can be added by placing them in the dataset directory.

    ## Structural DTI data

    To simulate a whole-brain network model, first we need to load the structural connectivity
    matrices from a DTI data set. The matrices are usually a result of processing DTI data and
    performing fiber tractography using software like *FSL* or *DSIStudio*. The handling of the
    datasets is done by the `Dataset` class, and the attributes in the following refer to its
    instances. Upon initialization, the subject-wise data set is loaded from disk. For all
    examples in this paper, we use freely available data from the ConnectomeDB of the
    Human Connectome Project (HCP). For a given parcellation of the brain
    into $N$ brain regions, these matrices are the $N \\times N$ adjacency matrix `self.Cmat`,
    i.e. the structural connectivity matrix, which determines the coupling strengths between
    brain areas, and the fiber length matrix `Dmat` which determines the signal
    transmission delays. The two example datasets currently included in `neurolib` use the 80
    cortical regions of the AAL2 atlas to define the brain areas and are sorted in an
    LRLR-ordering.

    ## Connectivity matrix normalization

    The elements of the structural connectivity matrix `Cmat` are typically the number
    of reconstructed fibers from DTI tractography. Since the number of fibers depends on the
    method and the parameters of the (probabilistic or deterministic) tractography, they need to
    be normalized using one of the three implemented methods. The first method `max` is to
    simply divide the entries of `Cmat` by the largest entry, such that the largest
    entry becomes 1. The second method `waytotal` divides the entries of each column of
    `Cmat` by the number of fiber tracts generated from the respective brain region during
    probabilistic tractography in FSL, which is contained in the `waytotal.txt` file.
    The third method `nvoxel` divides the entries of each column of `Cmat` by the
    size, e.g., the number of voxels, of the corresponding brain area. The last two methods yield
    an asymmetric connectivity matrix, while the first one keeps `Cmat` symmetric.
    All normalization steps are done on the subject-wise matrices `Cmats` and
    `Dmats`. In a final step, all matrices can also be averaged across all subjects
    to yield one `Cmat` and `Dmat` per dataset.

    ## Functional MRI data

    Subject-wise fMRI time series must be in a $(N \\times t)$-dimensional format, where $N$ is the
    number of brain regions and $t$ the length of the time series. Each region-wise time series
    represents the BOLD activity averaged across all voxels of that region, which can also be
    obtained from software like FSL. Functional connectivity (FC) captures the spatial correlation
    structure of the BOLD time series averaged across the entire time of the recording. FC matrices
    are accessible via the attribute `FCs` and are generated by computing the Pearson correlation
    of the time series between all regions, yielding a $N \\times N$ matrix for each subject.

    To capture the temporal fluctuations of time-dependent FC(t), which are lost when averaging
    across the entire recording time, functional connectivity dynamics matrices (`FCDs`) are
    computed as the element-wise Pearson correlation of time-dependent FC(t) matrices in a moving
    window across the BOLD time series of a chosen window length of, for example, 1 min. This
    yields a $t_{FCD} \\times t_{FCD}$ FCD matrix for each subject, with $t_{FCD}$ being the number
    of steps the window was moved.
    """

    def __init__(self, datasetName=None, normalizeCmats="max", fcd=False, subcortical=False):
        """
        Load the empirical data sets that are provided with `neurolib`.

        Right now, datasets work on a per-subject base. A dataset must be located
        in the `neurolib/data/datasets/` directory. Each subject's dataset
        must be in the `subjects` subdirectory of that folder. In each subject
        folder there is a directory called `functional` for time series data
        and `structural` for the structural connectivity data.

        See `loadData.loadSubjectFiles()` for more details on which files are
        being loaded.

        The structural connectivity data (accessible using the attribute
        loadData.Cmat) can be normalized using the `normalizeCmats` flag.
        This defaults to "max", which normalizes the Cmat by its maximum.
        Other options are `waytotal` or `nvoxel`, which normalize the
        Cmat by dividing every column of the matrix by the waytotal or
        nvoxel files that are provided in the datasets.

        Info: the waytotal.txt and the nvoxel.txt are files extracted from
        the tractography of DTI data using `probtrackX` from the `fsl` pipeline.

        Individual subject data is provided with the class attributes:
            self.BOLDs: BOLD timeseries of each individual
            self.FCs: Functional connectivity of BOLD timeseries

        Mean data is provided with the class attributes:
            self.Cmat: Structural connectivity matrix (for coupling strengths between areas)
            self.Dmat: Fiber length matrix (for delays)
            self.BOLDs: BOLD timeseries of each area
            self.FCs: Functional connectivity matrices of each BOLD timeseries

        :param datasetName: Name of the dataset to load
        :type datasetName: str
        :param normalizeCmats: Normalization method for the structural connectivity matrix.
            normalizationMethods = ["max", "waytotal", "nvoxel"]
        :type normalizeCmats: str
        :param fcd: Compute FCD matrices of BOLD data, defaults to False
        :type fcd: bool
        :param subcortical: Include subcortical areas from the atlas or not, defaults to False
        :type subcortical: bool
        """
        self.has_subjects = None
        if datasetName:
            self.loadDataset(datasetName, normalizeCmats=normalizeCmats, fcd=fcd, subcortical=subcortical)

    def loadDataset(self, datasetName, normalizeCmats="max", fcd=False, subcortical=False):
        """Load data into accessible class attributes.

        :param datasetName: Name of the dataset (must be in `datasets` directory)
        :type datasetName: str
        :param normalizeCmats: Normalization method for Cmats, defaults to "max"
        :type normalizeCmats: str, optional
        :raises NotImplementedError: If unknown normalization method is used
        """
        # the base directory of the dataset
        dsBaseDirectory = os.path.join(os.path.dirname(__file__), "..", "data", "datasets", datasetName)
        assert os.path.exists(dsBaseDirectory), f"Dataset {datasetName} not found in {dsBaseDirectory}."
        self.dsBaseDirectory = dsBaseDirectory
        self.data = dotdict({})

        # load all available subject data from disk to memory
        logging.info(f"Loading dataset {datasetName} from {self.dsBaseDirectory}.")
        self._loadSubjectFiles(self.dsBaseDirectory, subcortical=subcortical)
        assert len(self.data) > 0, "No data loaded."
        assert self.has_subjects

        self.Cmats = self._normalizeCmats(self.getDataPerSubject("cm"), method=normalizeCmats)
        self.Dmats = self.getDataPerSubject("len")

        # take the average of all
        self.Cmat = np.mean(self.Cmats, axis=0)

        self.Dmat = self.getDataPerSubject(
            "len",
            apply="all",
            apply_function=np.mean,
            apply_function_kwargs={"axis": 0},
        )
        self.BOLDs = self.getDataPerSubject("bold")
        self.FCs = self.getDataPerSubject("bold", apply_function=func.fc)

        if fcd:
            self.computeFCD()

        logging.info(f"Dataset {datasetName} loaded.")

    def computeFCD(self):
        logging.info("Computing FCD matrices ...")
        self.FCDs = self.getDataPerSubject("bold", apply_function=func.fcd, apply_function_kwargs={"stepsize": 10})

    def getDataPerSubject(
        self,
        name,
        apply="single",
        apply_function=None,
        apply_function_kwargs={},
        normalizeCmats="max",
    ):
        """Load data of a certain kind for all subjects of the current dataset.

        :param name: Name of data type, i.e. "bold" or "cm"
        :type name: str
        :param apply: Apply function per subject ("single") or on all subjects ("all"), defaults to "single"
        :type apply: str, optional
        :param apply_function: Apply function on data, defaults to None
        :type apply_function: function, optional
        :param apply_function_kwargs: Keyword arguments of function, defaults to {}
        :type apply_function_kwargs: dict, optional
        :return: Subject-wise data, after function apply
        :rtype: list[np.ndarray]
        """
        values = []
        for subject, value in self.data["subjects"].items():
            assert name in value, f"Data type {name} not found in dataset of subject {subject}."
            val = value[name]
            if apply_function and apply == "single":
                val = apply_function(val, **apply_function_kwargs)
            values.append(val)

        if apply_function and apply == "all":
            values = apply_function(values, **apply_function_kwargs)
        return values

    def _normalizeCmats(self, Cmats, method="max", FSL_SAMPLES_PER_VOXEL=5000):
        # normalize per subject data
        normalizationMethods = [None, "max", "waytotal", "nvoxel"]
        if method not in normalizationMethods:
            raise NotImplementedError(
                f'"{method}" is not a known normalization method. Use one of these: {normalizationMethods}'
            )
        if method == "max":
            Cmats = [cm / np.max(cm) for cm in Cmats]
        elif method == "waytotal":
            self.waytotal = self.getDataPerSubject("waytotal")
            Cmats = [cm / wt for cm, wt in zip(Cmats, self.waytotal)]
        elif method == "nvoxel":
            self.nvoxel = self.getDataPerSubject("nvoxel")
            Cmats = [cm / (nv[:, 0] * FSL_SAMPLES_PER_VOXEL) for cm, nv in zip(Cmats, self.nvoxel)]
        return Cmats

    def _loadSubjectFiles(self, dsBaseDirectory, subcortical=False):
        """Dirty subject-wise file loader. Depends on the exact naming of all
        files as provided in the `neurolib/data/datasets` directory. Uses `glob.glob()`
        to find all files based on hardcoded file name matching.

        Can filter out subcortical regions from the AAL2 atlas.

        Info: Dirty implementation that assumes a lot of things about the dataset and filenames.

        :param dsBaseDirectory: Base directory of the dataset
        :type dsBaseDirectory: str
        :param subcortical: Filter subcortical regions from files defined by the AAL2 atlas, defaults to False
        :type subcortical: bool, optional
        """
        # check if there are subject files in the dataset
        if os.path.exists(os.path.join(dsBaseDirectory, "subjects")):
            self.has_subjects = True
            self.data["subjects"] = {}

        # data type paths, glob strings, dirty
        BOLD_paths_glob = os.path.join(dsBaseDirectory, "subjects", "*", "functional", "*rsfMRI*.mat")
        CM_paths_glob = os.path.join(dsBaseDirectory, "subjects", "*", "structural", "DTI_CM*.mat")
        LEN_paths_glob = os.path.join(dsBaseDirectory, "subjects", "*", "structural", "DTI_LEN*.mat")
        WAY_paths_glob = os.path.join(dsBaseDirectory, "subjects", "*", "structural", "waytotal*.txt")
        NVOXEL_paths_glob = os.path.join(dsBaseDirectory, "subjects", "*", "structural", "nvoxel*.txt")

        _ftypes = {
            "bold": BOLD_paths_glob,
            "cm": CM_paths_glob,
            "len": LEN_paths_glob,
            "waytotal": WAY_paths_glob,
            "nvoxel": NVOXEL_paths_glob,
        }

        for _name, _glob in _ftypes.items():
            fnames = glob.glob(_glob)
            # if there is none of this data type
            if len(fnames) == 0:
                continue
            for f in fnames:
                # dirty
                subject = f.split(os.path.sep)[-3]
                # create subject in dict if not present yet
                if subject not in self.data["subjects"]:
                    self.data["subjects"][subject] = {}

                # if the data for this type is not already loaded
                if _name not in self.data["subjects"][subject]:
                    # bold, cm and len matrices are provided as .mat files
                    if _name in ["bold", "cm", "len"]:
                        filter_subcortical_axis = "both"
                        if _name == "bold":
                            key = "tc"
                            filter_subcortical_axis = 0
                        elif _name == "cm":
                            key = "sc"
                        elif _name == "len":
                            key = "len"
                        # load the data
                        data = self.loadMatrix(f, key=key)
                        if not subcortical:
                            data = filterSubcortical(data, axis=filter_subcortical_axis)
                        self.data["subjects"][subject][_name] = data
                    # waytotal and nvoxel files are .txt files
                    elif _name in ["waytotal", "nvoxel"]:
                        data = np.loadtxt(f)
                        if not subcortical:
                            data = filterSubcortical(data, axis=0)
                        self.data["subjects"][subject][_name] = data

    def loadMatrix(self, matFileName, key="", verbose=False):
        """Function to furiously load .mat files with scipy.io.loadmat.
        Info: More formats are supported but commented out in the code.

        :param matFileName: Filename of matrix to load
        :type matFileName: str
        :param key: .mat file key in which data is stored (example: "sc")
        :type key: str

        :return: Loaded matrix
        :rtype: numpy.ndarray
        """
        if verbose:
            print(f"Loading {matFileName}")
        matrix = scipy.io.loadmat(matFileName)
        if verbose:
            print("\tLoading using scipy.io.loadmat...")
            print(f"Keys: {list(matrix.keys())}")
        if key != "" and key in list(matrix.keys()):
            matrix = matrix[key]
            if verbose:
                print(f'\tLoaded key "{key}"')
        elif type(matrix) is dict:
            raise ValueError(f"Object is still a dict. Here are the keys: {matrix.keys()}")
        return matrix
        return 0
```
Load the empirical data sets that are provided with neurolib.
Right now, datasets work on a per-subject basis. A dataset must be located in the neurolib/data/datasets/ directory. Each subject's dataset must be in the subjects subdirectory of that folder. In each subject folder there is a directory called functional for time series data and a directory called structural for the structural connectivity data.
See loadData.loadSubjectFiles() for more details on which files are being loaded.
The structural connectivity data (accessible via the attribute loadData.Cmat) can be normalized using the normalizeCmats flag. This defaults to "max", which normalizes the Cmat by its maximum. The other options are waytotal and nvoxel, which normalize the Cmat by dividing every row of the matrix by the values in the waytotal or nvoxel files that are provided in the datasets.
Info: the waytotal.txt and the nvoxel.txt are files extracted from the tractography of DTI data using probtrackX from the fsl pipeline.
Individual subject data is provided with the class attributes:

- self.BOLDs: BOLD timeseries of each individual
- self.FCs: Functional connectivity of each BOLD timeseries

Mean data is provided with the class attributes:

- self.Cmat: Structural connectivity matrix (for coupling strengths between areas)
- self.Dmat: Fiber length matrix (for delays)
- self.BOLDs: BOLD timeseries of each area
- self.FCs: Functional connectivity matrices of each BOLD timeseries
Parameters:

- datasetName (str): Name of the dataset to load. Default: None
- normalizeCmats (str): Normalization method for the structural connectivity matrix; normalizationMethods = ["max", "waytotal", "nvoxel"]. Default: 'max'
- fcd (bool): Compute FCD matrices of BOLD data. Default: False
- subcortical (bool): Include subcortical areas from the atlas or not. Default: False

Source code in neurolib/utils/loadData.py
    def __init__(self, datasetName=None, normalizeCmats="max", fcd=False, subcortical=False):
        """
        Load the empirical data sets that are provided with `neurolib`.

        Right now, datasets work on a per-subject base. A dataset must be located
        in the `neurolib/data/datasets/` directory. Each subject's dataset
        must be in the `subjects` subdirectory of that folder. In each subject
        folder there is a directory called `functional` for time series data
        and `structural` the structural connectivity data.

        See `loadData.loadSubjectFiles()` for more details on which files are
        being loaded.

        The structural connectivity data (accessible using the attribute
        loadData.Cmat), can be normalized using the `normalizeCmats` flag.
        This defaults to "max" which normalizes the Cmat by its maxmimum.
        Other options are `waytotal` or `nvoxel`, which normalizes the
        Cmat by dividing every row of the matrix by the waytotal or
        nvoxel files that are provided in the datasets.

        Info: the waytotal.txt and the nvoxel.txt are files extracted from
        the tractography of DTI data using `probtrackX` from the `fsl` pipeline.

        Individual subject data is provided with the class attributes:
        self.BOLDs: BOLD timeseries of each individual
        self.FCs: Functional connectivity of BOLD timeseries

        Mean data is provided with the class attributes:
        self.Cmat: Structural connectivity matrix (for coupling strenghts between areas)
        self.Dmat: Fiber length matrix (for delays)
        self.BOLDs: BOLD timeseries of each area
        self.FCs: Functional connectiviy matrices of each BOLD timeseries

        :param datasetName: Name of the dataset to load
        :type datasetName: str
        :param normalizeCmats: Normalization method for the structural connectivity matrix.
            normalizationMethods = ["max", "waytotal", "nvoxel"]
        :type normalizeCmats: str
        :param fcd: Compute FCD matrices of BOLD data, defaults to False
        :type fcd: bool
        :param subcortical: Include subcortical areas from the atlas or not, defaults to False
        :type subcortical: bool

        """
        self.has_subjects = None
        if datasetName:
            self.loadDataset(datasetName, normalizeCmats=normalizeCmats, fcd=fcd, subcortical=subcortical)
Load data of a certain kind for all subjects of the current dataset.
Parameters:

- name (str): Name of data type, i.e. "bold" or "cm". Required.
- apply (str, optional): Apply function per subject ("single") or on all subjects ("all"). Default: 'single'
- apply_function (function, optional): Apply function on data. Default: None
- apply_function_kwargs (dict, optional): Keyword arguments of function. Default: {}

Returns:

- list[np.ndarray]: Subject-wise data, after function apply

Source code in neurolib/utils/loadData.py
    def getDataPerSubject(
        self,
        name,
        apply="single",
        apply_function=None,
        apply_function_kwargs={},
        normalizeCmats="max",
    ):
        """Load data of a certain kind for all users of the current dataset

        :param name: Name of data type, i.e. "bold" or "cm"
        :type name: str
        :param apply: Apply function per subject ("single") or on all subjects ("all"), defaults to "single"
        :type apply: str, optional
        :param apply_function: Apply function on data, defaults to None
        :type apply_function: function, optional
        :param apply_function_kwargs: Keyword arguments of fuction, defaults to {}
        :type apply_function_kwargs: dict, optional
        :return: Subjectwise data, after function apply
        :rtype: list[np.ndarray]
        """
        values = []
        for subject, value in self.data["subjects"].items():
            assert name in value, f"Data type {name} not found in dataset of subject {subject}."
            val = value[name]
            if apply_function and apply == "single":
                val = apply_function(val, **apply_function_kwargs)
            values.append(val)

        if apply_function and apply == "all":
            values = apply_function(values, **apply_function_kwargs)
        return values
Load data into accessible class attributes.

Parameters:

- datasetName (str): Name of the dataset (must be in the datasets directory). Required.
- normalizeCmats (str, optional): Normalization method for Cmats. Default: 'max'

Raises:

- NotImplementedError: If an unknown normalization method is used

Source code in neurolib/utils/loadData.py
    def loadDataset(self, datasetName, normalizeCmats="max", fcd=False, subcortical=False):
        """Load data into accessible class attributes.

        :param datasetName: Name of the dataset (must be in `datasets` directory)
        :type datasetName: str
        :param normalizeCmats: Normalization method for Cmats, defaults to "max"
        :type normalizeCmats: str, optional
        :raises NotImplementedError: If unknown normalization method is used
        """
        # the base directory of the dataset
        dsBaseDirectory = os.path.join(os.path.dirname(__file__), "..", "data", "datasets", datasetName)
        assert os.path.exists(dsBaseDirectory), f"Dataset {datasetName} not found in {dsBaseDirectory}."
        self.dsBaseDirectory = dsBaseDirectory
        self.data = dotdict({})

        # load all available subject data from disk to memory
        logging.info(f"Loading dataset {datasetName} from {self.dsBaseDirectory}.")
        self._loadSubjectFiles(self.dsBaseDirectory, subcortical=subcortical)
        assert len(self.data) > 0, "No data loaded."
        assert self.has_subjects

        self.Cmats = self._normalizeCmats(self.getDataPerSubject("cm"), method=normalizeCmats)
        self.Dmats = self.getDataPerSubject("len")

        # take the average of all
        self.Cmat = np.mean(self.Cmats, axis=0)

        self.Dmat = self.getDataPerSubject(
            "len",
            apply="all",
            apply_function=np.mean,
            apply_function_kwargs={"axis": 0},
        )
        self.BOLDs = self.getDataPerSubject("bold")
        self.FCs = self.getDataPerSubject("bold", apply_function=func.fc)

        if fcd:
            self.computeFCD()

        logging.info(f"Dataset {datasetName} loaded.")
Function to furiously load .mat files with scipy.io.loadmat. Info: More formats are supported but commented out in the code.
Parameters:

- matFileName (str): Filename of matrix to load. Required.
- key (str): .mat file key in which data is stored (example: "sc"). Default: ''

Returns:

- numpy.ndarray: Loaded matrix

Source code in neurolib/utils/loadData.py
    def loadMatrix(self, matFileName, key="", verbose=False):
        """Function to furiously load .mat files with scipy.io.loadmat.
        Info: More formats are supported but commented out in the code.

        :param matFileName: Filename of matrix to load
        :type matFileName: str
        :param key: .mat file key in which data is stored (example: "sc")
        :type key: str

        :return: Loaded matrix
        :rtype: numpy.ndarray
        """
        if verbose:
            print(f"Loading {matFileName}")
        matrix = scipy.io.loadmat(matFileName)
        if verbose:
            print("\tLoading using scipy.io.loadmat...")
            print(f"Keys: {list(matrix.keys())}")
        if key != "" and key in list(matrix.keys()):
            matrix = matrix[key]
            if verbose:
                print(f'\tLoaded key "{key}"')
        elif type(matrix) is dict:
            raise ValueError(f"Object is still a dict. Here are the keys: {matrix.keys()}")
        return matrix
        return 0
Computes the FCD (functional connectivity dynamics) matrix, as described in Deco's whole-brain model papers. Default parameters are suited for computing FCD matrices of BOLD timeseries: a windowsize of 30 at the BOLD sampling rate of 0.5 Hz equals 60 s, and stepsize = 5 equals 10 s.
Parameters:

- ts (numpy.ndarray): N x t timeseries. Required.
- windowsize (int, optional): Size of each rolling window in timesteps. Default: 30
- stepsize (int, optional): Stepsize between each rolling window. Default: 5

Returns:

- numpy.ndarray: T x T FCD matrix

Source code in neurolib/utils/functions.py
def fcd(ts, windowsize=30, stepsize=5):
    """Computes FCD (functional connectivity dynamics) matrix, as described in Deco's whole-brain model papers.
    Default paramters are suited for computing FCS matrices of BOLD timeseries:
    A windowsize of 30 at the BOLD sampling rate of 0.5 Hz equals 60s and stepsize = 5 equals 10s.

    :param ts: Nxt timeseries
    :type ts: numpy.ndarray
    :param windowsize: Size of each rolling window in timesteps, defaults to 30
    :type windowsize: int, optional
    :param stepsize: Stepsize between each rolling window, defaults to 5
    :type stepsize: int, optional
    :return: T x T FCD matrix
    :rtype: numpy.ndarray
    """
    t_window_width = int(windowsize)  # int(windowsize * 30) # x minutes
    stepsize = stepsize  # ts.shape[1]/N
    corrFCs = []
    try:
        counter = range(0, ts.shape[1] - t_window_width, stepsize)

        for t in counter:
            ts_slice = ts[:, t : t + t_window_width]
            corrFCs.append(np.corrcoef(ts_slice))

        FCd = np.empty([len(corrFCs), len(corrFCs)])
        f1i = 0
        for f1 in corrFCs:
            f2i = 0
            for f2 in corrFCs:
                FCd[f1i, f2i] = np.corrcoef(f1.reshape((1, f1.size)), f2.reshape((1, f2.size)))[0, 1]
                f2i += 1
            f1i += 1

        return FCd
    except:
        return 0
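The sliding-window logic can be illustrated with synthetic data (a minimal numpy sketch of the same computation; the timeseries here is random noise invented for the example):

```python
import numpy as np

np.random.seed(0)
ts = np.random.rand(4, 100)  # 4 regions, 100 timesteps

windowsize, stepsize = 30, 5
# one FC matrix per rolling window
corrFCs = [np.corrcoef(ts[:, t : t + windowsize])
           for t in range(0, ts.shape[1] - windowsize, stepsize)]

# FCD entry (i, j): correlation between the flattened FC matrices of windows i and j
n = len(corrFCs)
FCd = np.empty((n, n))
for i, f1 in enumerate(corrFCs):
    for j, f2 in enumerate(corrFCs):
        FCd[i, j] = np.corrcoef(f1.ravel(), f2.ravel())[0, 1]

print(FCd.shape)  # one row/column per window
```

The diagonal of the FCD matrix is 1 by construction (each window's FC is perfectly correlated with itself), and blocks of high off-diagonal values indicate epochs where the FC pattern is stable over time.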
Returns the mean power spectrum of multiple timeseries.
Parameters:

- activities (np.ndarray): N-dimensional timeseries. Required.
- dt (float): Simulation time step. Required.
- maxfr (int, optional): Maximum frequency in Hz to cut off from the return. Default: 70
- spectrum_windowsize (float, optional): Length of the window used in Welch's method (in seconds). Default: 1.0
- normalize (bool, optional): Maximum power is normalized to 1 if True. Default: False

Returns:

- [np.ndarray, np.ndarray]: Frequencies and the power of each frequency

Source code in neurolib/utils/functions.py
def getMeanPowerSpectrum(activities, dt, maxfr=70, spectrum_windowsize=1.0, normalize=False):
    """Returns the mean power spectrum of multiple timeseries.

    :param activities: N-dimensional timeseries
    :type activities: np.ndarray
    :param dt: Simulation time step
    :type dt: float
    :param maxfr: Maximum frequency in Hz to cutoff from return, defaults to 70
    :type maxfr: int, optional
    :param spectrum_windowsize: Length of the window used in Welch's method (in seconds), defaults to 1.0
    :type spectrum_windowsize: float, optional
    :param normalize: Maximum power is normalized to 1 if True, defaults to False
    :type normalize: bool, optional

    :return: Frquencies and the power of each frequency
    :rtype: [np.ndarray, np.ndarray]
    """

    powers = np.zeros(getPowerSpectrum(activities[0], dt, maxfr, spectrum_windowsize)[0].shape)
    ps = []
    for rate in activities:
        f, Pxx_spec = getPowerSpectrum(rate, dt, maxfr, spectrum_windowsize)
        ps.append(Pxx_spec)
        powers += Pxx_spec
    powers /= len(ps)
    if normalize:
        powers /= np.max(powers)
    return f, powers
Returns a power spectrum using Welch's method.

Parameters:

- activity (np.ndarray): One-dimensional timeseries. Required.
- dt (float): Simulation time step. Required.
- maxfr (int, optional): Maximum frequency in Hz to cut off from the return. Default: 70
- spectrum_windowsize (float, optional): Length of the window used in Welch's method (in seconds). Default: 1.0
- normalize (bool, optional): Maximum power is normalized to 1 if True. Default: False

Returns:

- [np.ndarray, np.ndarray]: Frequencies and the power of each frequency

Source code in neurolib/utils/functions.py
def getPowerSpectrum(activity, dt, maxfr=70, spectrum_windowsize=1.0, normalize=False):
    """Returns a power spectrum using Welch's method.

    :param activity: One-dimensional timeseries
    :type activity: np.ndarray
    :param dt: Simulation time step
    :type dt: float
    :param maxfr: Maximum frequency in Hz to cutoff from return, defaults to 70
    :type maxfr: int, optional
    :param spectrum_windowsize: Length of the window used in Welch's method (in seconds), defaults to 1.0
    :type spectrum_windowsize: float, optional
    :param normalize: Maximum power is normalized to 1 if True, defaults to False
    :type normalize: bool, optional

    :return: Frquencies and the power of each frequency
    :rtype: [np.ndarray, np.ndarray]
    """
    # convert to one-dimensional array if it is an (1xn)-D array
    if activity.shape[0] == 1 and activity.shape[1] > 1:
        activity = activity[0]
    assert len(activity.shape) == 1, "activity is not one-dimensional!"

    f, Pxx_spec = scipy.signal.welch(
        activity,
        1000 / dt,
        window="hann",
        nperseg=int(spectrum_windowsize * 1000 / dt),
        scaling="spectrum",
    )
    f = f[f < maxfr]
    Pxx_spec = Pxx_spec[0 : len(f)]
    if normalize:
        Pxx_spec /= np.max(Pxx_spec)
    return f, Pxx_spec
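As a sanity check, the spectrum of a pure 10 Hz sine should peak near 10 Hz. A sketch using the same scipy.signal.welch call as above (note that dt is in milliseconds, so the sampling rate is 1000 / dt Hz):

```python
import numpy as np
import scipy.signal

dt = 0.1  # timestep in ms -> sampling rate of 1000 / dt = 10000 Hz
t = np.arange(0, 2000, dt)  # 2 seconds of signal, in ms
activity = np.sin(2 * np.pi * 10 * t / 1000.0)  # 10 Hz oscillation

f, Pxx = scipy.signal.welch(
    activity,
    1000 / dt,                       # sampling frequency in Hz
    window="hann",
    nperseg=int(1.0 * 1000 / dt),    # 1-second windows
    scaling="spectrum",
)
maxfr = 70
Pxx = Pxx[f < maxfr]  # cut off at maxfr, as in getPowerSpectrum
f = f[f < maxfr]
peak_freq = f[np.argmax(Pxx)]
```

With 1-second windows the frequency resolution is 1 Hz, so the peak lands on the 10 Hz bin.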
Computes the Kuramoto order parameter of a timeseries, a measure of synchrony. The timeseries can be smoothed if there is noise. Peaks are then detected using a peak finder; from these peaks a phase is derived, and the amount of phase synchrony (the Kuramoto order parameter) is computed.
Parameters:

- traces (numpy.ndarray): Multidimensional timeseries array. Required.
- smoothing (float, optional): Gaussian smoothing strength. Default: 0.0
- distance (int, optional): Minimum distance between peaks in samples. Default: 10
- prominence (int, optional): Vertical distance between the peak and its lowest contour line. Default: 5

Returns:

- numpy.ndarray: Timeseries of the Kuramoto order parameter

Source code in neurolib/utils/functions.py
def kuramoto(traces, smoothing=0.0, distance=10, prominence=5):
    """
    Computes the Kuramoto order parameter of a timeseries which is a measure for synchrony.
    Can smooth timeseries if there is noise.
    Peaks are then detected using a peakfinder. From these peaks a phase is derived and then
    the amount of phase synchrony (the Kuramoto order parameter) is computed.

    :param traces: Multidimensional timeseries array
    :type traces: numpy.ndarray
    :param smoothing: Gaussian smoothing strength
    :type smoothing: float, optional
    :param distance: minimum distance between peaks in samples
    :type distance: int, optional
    :param prominence: vertical distance between the peak and its lowest contour line
    :type prominence: int, optional

    :return: Timeseries of Kuramoto order paramter
    :rtype: numpy.ndarray
    """

    @numba.njit
    def _estimate_phase(maximalist, n_times):
        lastMax = 0
        phases = np.empty((n_times), dtype=np.float64)
        n = 0
        for m in maximalist:
            for t in range(lastMax, m):
                # compute instantaneous phase
                phi = 2 * np.pi * float(t - lastMax) / float(m - lastMax)
                phases[n] = phi
                n += 1
            lastMax = m
        phases[-1] = 2 * np.pi
        return phases

    @numba.njit
    def _estimate_r(ntraces, times, phases):
        kuramoto = np.empty((times), dtype=np.float64)
        for t in range(times):
            R = 1j * 0
            for n in range(ntraces):
                R += np.exp(1j * phases[n, t])
            R /= ntraces
            kuramoto[t] = np.absolute(R)
        return kuramoto

    nTraces, nTimes = traces.shape
    phases = np.empty_like(traces)
    for n in range(nTraces):
        a = traces[n]
        # find peaks
        if smoothing > 0:
            # smooth data
            a = scipy.ndimage.filters.gaussian_filter(traces[n], smoothing)
        maximalist = scipy.signal.find_peaks(a, distance=distance, prominence=prominence)[0]
        maximalist = np.append(maximalist, len(traces[n]) - 1).astype(int)

        if len(maximalist) > 1:
            phases[n, :] = _estimate_phase(maximalist, nTimes)
        else:
            logging.warning("Kuramoto: No peaks found, returning 0.")
            return 0
    # determine kuramoto order paramter
    kuramoto = _estimate_r(nTraces, nTimes, phases)
    return kuramoto
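The order parameter itself is simply the magnitude of the mean phase vector, R(t) = |(1/N) Σ_n exp(i φ_n(t))|. A minimal numpy sketch with invented phases (fully synchronized oscillators give R = 1, unrelated phases give R < 1):

```python
import numpy as np

# Phases of 3 oscillators over 5 timesteps
phases_sync = np.tile(np.linspace(0, 2 * np.pi, 5), (3, 1))  # identical phases
rng = np.random.default_rng(0)
phases_async = rng.uniform(0, 2 * np.pi, size=(3, 5))        # unrelated phases

def order_parameter(phases):
    # R(t) = |mean over oscillators of exp(i * phase)|
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

R_sync = order_parameter(phases_sync)    # fully synchronized -> R = 1 at every t
R_async = order_parameter(phases_async)  # desynchronized -> R < 1
```

The neurolib implementation above does the same computation in `_estimate_r`, after first estimating per-oscillator phases from detected peaks.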
Pearson correlation of the lower triangle of two matrices. The triangular indices are offset by k = 1 in order to ignore the diagonal.
Parameters:

- M1 (numpy.ndarray): First matrix. Required.
- M2 (numpy.ndarray): Second matrix. Required.

Returns:

- float: Correlation coefficient

Source code in neurolib/utils/functions.py
def matrix_correlation(M1, M2):
    """Pearson correlation of the lower triagonal of two matrices.
    The triangular matrix is offset by k = 1 in order to ignore the diagonal line

    :param M1: First matrix
    :type M1: numpy.ndarray
    :param M2: Second matrix
    :type M2: numpy.ndarray
    :return: Correlation coefficient
    :rtype: float
    """
    cc = np.corrcoef(M1[np.triu_indices_from(M1, k=1)], M2[np.triu_indices_from(M2, k=1)])[0, 1]
    return cc
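For instance, comparing a connectivity matrix to itself must give a coefficient of exactly 1, since only the off-diagonal triangle entries enter the comparison (the matrices below are made up for the example):

```python
import numpy as np

M1 = np.array([[1.0, 0.2, 0.4],
               [0.2, 1.0, 0.6],
               [0.4, 0.6, 1.0]])
M2 = np.array([[1.0, 0.3, 0.1],
               [0.3, 1.0, 0.9],
               [0.1, 0.9, 1.0]])

# k=1 skips the diagonal, so the all-ones diagonal does not inflate the correlation
cc_self = np.corrcoef(M1[np.triu_indices_from(M1, k=1)],
                      M1[np.triu_indices_from(M1, k=1)])[0, 1]
cc = np.corrcoef(M1[np.triu_indices_from(M1, k=1)],
                 M2[np.triu_indices_from(M2, k=1)])[0, 1]
```

This is the standard way to compare a simulated FC matrix against an empirical one.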
Computes the Kolmogorov distance between the distributions of lower-triangular entries of two matrices See: https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test#Two-sample_Kolmogorov%E2%80%93Smirnov_test
Parameters:

- m1 (np.ndarray): Matrix 1. Required.
- m2 (np.ndarray): Matrix 2. Required.

Returns:

- float: 2-sample KS statistic

Source code in neurolib/utils/functions.py
def matrix_kolmogorov(m1, m2):
    """Computes the Kolmogorov distance between the distributions of lower-triangular entries of two matrices
    See: https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test#Two-sample_Kolmogorov%E2%80%93Smirnov_test

    :param m1: matrix 1
    :type m1: np.ndarray
    :param m2: matrix 2
    :type m2: np.ndarray
    :return: 2-sample KS statistics
    :rtype: float
    """
    # get the values of the lower triangle
    triu_ind1 = np.triu_indices(m1.shape[0], k=1)
    m1_vals = m1[triu_ind1]

    triu_ind2 = np.triu_indices(m2.shape[0], k=1)
    m2_vals = m2[triu_ind2]

    # return the distance, omit p-value
    return scipy.stats.ks_2samp(m1_vals, m2_vals)[0]
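The KS statistic compares the empirical distributions of the triangle entries rather than their element-wise agreement: identical matrices give a distance of 0, while a shifted distribution gives a clearly positive distance. A sketch with random matrices (invented for the example):

```python
import numpy as np
import scipy.stats

rng = np.random.default_rng(42)
m1 = rng.random((10, 10))
m2 = rng.random((10, 10)) + 0.5  # same shape of distribution, shifted by 0.5

iu = np.triu_indices(10, k=1)  # off-diagonal upper triangle, as in matrix_kolmogorov
d_same = scipy.stats.ks_2samp(m1[iu], m1[iu])[0]  # identical samples -> 0
d_diff = scipy.stats.ks_2samp(m1[iu], m2[iu])[0]  # shifted -> large distance
```

Because only distributions are compared, this measure is insensitive to the ordering of matrix entries, which is exactly what makes it useful for comparing FCD matrices of different simulations.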
Computes the Kolmogorov distance between two timeseries. This is done by first computing two FCD matrices (one for each timeseries) and then measuring the Kolmogorov distance between the upper triangles of these matrices.
Parameters:

- ts1 (np.ndarray): Timeseries 1. Required.
- ts2 (np.ndarray): Timeseries 2. Required.

Returns:

- float: 2-sample KS statistic

Source code in neurolib/utils/functions.py
def ts_kolmogorov(ts1, ts2, **fcd_kwargs):
    """Computes kolmogorov distance between two timeseries.
    This is done by first computing two FCD matrices (one for each timeseries)
    and then measuring the Kolmogorov distance of the upper triangle of these matrices.

    :param ts1: Timeseries 1
    :type ts1: np.ndarray
    :param ts2: Timeseries 2
    :type ts2: np.ndarray
    :return: 2-sample KS statistics
    :rtype: float
    """
    fcd1 = fcd(ts1, **fcd_kwargs)
    fcd2 = fcd(ts2, **fcd_kwargs)

    return matrix_kolmogorov(fcd1, fcd2)
weighted_correlation(x, y, w)
Weighted Pearson correlation of two series.
Parameters:

- x (list, np.array): Timeseries 1. Required.
- y (list, np.array): Timeseries 2, must have the same length as x. Required.
- w (list, np.array): Weight vector, must have the same length as x and y. Required.

Returns:

- float: Weighted correlation coefficient

Source code in neurolib/utils/functions.py
def weighted_correlation(x, y, w):
    """Weighted Pearson correlation of two series.

    :param x: Timeseries 1
    :type x: list, np.array
    :param y: Timeseries 2, must have same length as x
    :type y: list, np.array
    :param w: Weight vector, must have same length as x and y
    :type w: list, np.array
    :return: Weighted correlation coefficient
    :rtype: float
    """

    def weighted_mean(x, w):
        """Weighted Mean"""
        return np.sum(x * w) / np.sum(w)

    def weighted_cov(x, y, w):
        """Weighted Covariance"""
        return np.sum(w * (x - weighted_mean(x, w)) * (y - weighted_mean(y, w))) / np.sum(w)

    return weighted_cov(x, y, w) / np.sqrt(weighted_cov(x, x, w) * weighted_cov(y, y, w))
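With uniform weights this reduces to the ordinary Pearson correlation, which makes for a handy sanity check. A self-contained sketch reusing the formula above (the data points are invented):

```python
import numpy as np

def weighted_mean(x, w):
    return np.sum(x * w) / np.sum(w)

def weighted_cov(x, y, w):
    return np.sum(w * (x - weighted_mean(x, w)) * (y - weighted_mean(y, w))) / np.sum(w)

def weighted_correlation(x, y, w):
    return weighted_cov(x, y, w) / np.sqrt(weighted_cov(x, x, w) * weighted_cov(y, y, w))

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.8])
w = np.ones_like(x)  # uniform weights -> plain Pearson correlation

wc = weighted_correlation(x, y, w)
pearson = np.corrcoef(x, y)[0, 1]
```

Non-uniform weights let individual samples (e.g. more reliable measurements) contribute more to the correlation estimate.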
class ParameterSpace:
    """
    Parameter space
    """

    def __init__(self, parameters, parameterValues=None, kind=None, allow_star_notation=False):
        """
        Initialize parameter space. Parameter space can be initialized in two ways:
        Either a `parameters` is a dictionary of the form `{"parName1" : [0, 1, 2], "parName2" : [3, 4]}`,
        or `parameters` is a list of names and `parameterValues` are values of each parameter.

        :param parameters: parameter dictionary or list of names of parameters e.g. `['x', 'y']`
        :type parameters: `dict, list[str, str]`
        :param parameterValues: list of parameter values (must be floats) e.g. `[[x_min, x_max], [y_min, y_max], ...]`
        :type parameterValues: `list[list[float, float]]`
        :param kind: string describing the kind of parameter space:
            - `point`: a single point in parameter space
            - `bound`: a bound in parameter space, i.e. two values per parameter
            - `grid`: a cartesian product over parameters
            - `sequence`: a sequence of univariate parameter changes - only one will change at the time, other
                parameters will stay as default
            - `explicit`: explicitely define a parameter space, i.e lists of all parameters have to have the same length
            - None: parameterSpace tries to auto-detect the correct kind
        :type kind: str
        :param allow_star_notation: whether to allow star notation in parameter names - MultiModel
        :type allow_star_notation: bool
        """
        assert kind in SUPPORTED_KINDS
        self.kind = kind
        self.parameters = parameters
        self.star = allow_star_notation
        # in case a parameter dictionary was given
        if parameterValues is None:
            assert isinstance(
                parameters, dict
            ), "Parameters must be a dict, if no values are given in `parameterValues`"
        else:
            # check if all names are strings
            assert np.all([isinstance(pn, str) for pn in parameters]), "Parameter names must all be strings."
            # check if all parameter values are lists
            assert np.all([isinstance(pv, (list, tuple)) for pv in parameterValues]), "Parameter values must be a list."
            parameters = self._parameterListsToDict(parameters, parameterValues)

        self.parameters = self._processParameterDict(parameters)
        self.parameterNames = list(self.parameters.keys())
        self.parameterValues = list(self.parameters.values())

        # let's create a named tuple of the parameters
        # Note: evolution.py implementation relies on named tuples
        self.named_tuple_constructor = namedtuple("ParameterSpace", sanitize_dot_dict(parameters))
        self.named_tuple = self.named_tuple_constructor(*self.parameterValues)

        # set attributes of this class to make it accessible
        for i, p in enumerate(self.parameters):
            setattr(self, p, self.parameterValues[i])

    def __str__(self):
        """Print the named_tuple object"""
        return str(self.parameters)

    def __getitem__(self, key):
        return self.parameters[key]

    def __setitem__(self, key, value):
        self.parameters[key] = value
        self._processParameterDict(self.parameters)

    def dict(self):
        """Returns the parameter space as a dicitonary of lists.
        :rtype: dict
        """
        return self.parameters

    def get_parametrization(self):
        assert self.kind is not None
        if self.kind in ["point", "bound", "explicit"]:
            # check same length
            it = iter(self.parameters.values())
            length = len(next(it))
            assert all(len(l) == length for l in it)
            # just return as dict
            return self.parameters
        elif self.kind == "grid":
            # cartesian product
            return pypet.cartesian_product(self.parameters)
        elif self.kind == "sequence":
            # return as sequence
            return self._inflate_to_sequence(self.parameters)

    @staticmethod
    def _inflate_to_sequence(param_dict):
        """
        Inflate dict of parameters to a sequence of same length, using None as
        placeholder when a particular parameter should not change.
        {"a": [1, 2], "b": [3, 4, 5]} ->
        {"a": [1, 2, None, None, None], "b": [None, None, 3, 4, 5]}
        """
        return {
            k: [None] * sum([len(tmp) for tmp in list(param_dict.values())[:i]])
            + v
            + [None] * sum([len(tmp) for tmp in list(param_dict.values())[i + 1 :]])
            for i, (k, v) in enumerate(param_dict.items())
        }

    def getRandom(self, safe=False):
        """This function returns a random single parameter from the whole space
        in the form of { "par1" : 1, "par2" : 2}.

        This function is used by neurolib/optimize/exploarion.py
        to add parameters of the space to pypet (for initialization)

        :param safe: Return a "safe" parameter or the original. Safe refers to
            returning python floats, not, for example numpy.float64 (necessary for pypet).
        ;type safe: bool
        """
        randomPar = {}
        if safe:
            for key, value in self.parameters.items():
                random_value = np.random.choice(value)
                if isinstance(random_value, np.float64):
                    random_value = float(random_value)
                elif isinstance(random_value, np.int64):
                    random_value = int(random_value)
                randomPar[key] = random_value
        else:
            for key, value in self.parameters.items():
                randomPar[key] = np.random.choice(value)
        return randomPar

    @property
    def lowerBound(self):
        """Returns lower bound of all parameters as a list"""
        return [np.min(p) for p in self.parameterValues]

    @property
    def upperBound(self):
        """Returns upper bound of all parameters as a list"""
        return [np.max(p) for p in self.parameterValues]

    @property
    def ndims(self):
        """Number of dimensions (parameters)"""
        return len(self.parameters)

    @staticmethod
    def _validate_single_bound(single_bound):
        """
        Validate single bound.
        :param single_bound: single coordinate bound to validate
        :type single_bound: list|tuple
        """
        assert isinstance(
            single_bound, (list, tuple)
        ), "An error occured while validating the ParameterSpace of kind 'bound': Pass parameter bounds as a list or tuple!"
        assert (
            len(single_bound) == 2
        ), "An error occured while validating the ParameterSpace of kind 'bound': Only two bounds (min and max) are allowed"
        assert (
            single_bound[1] > single_bound[0]
        ), "An error occured while validating the ParameterSpace of kind 'bound': Minimum parameter value can't be larger than the maximum!"

    def _validate_param_bounds(self, param_bounds):
        """
        Validate param bounds.
        :param param_bounds: parameter bounds to validate
        :type param_bounds: list|None
        """
        assert param_bounds is not None
        assert isinstance(param_bounds, (list, tuple))
        # check every single parameter bound
        for single_bound in param_bounds:
            self._validate_single_bound(single_bound)

    def _processParameterDict(self, parameters):
        """Processes all parameters and do checks. Determine the kind of the parameter space.
        :param parameters: parameter dictionary
        :type param: dict

        :retun: processed parameter dictionary
        :rtype: dict
        """

        # convert all parameter arrays into lists
        for key, value in parameters.items():
            if isinstance(value, np.ndarray):
                assert len(value.shape) == 1, f"Parameter {key} is not one-dimensional."
                value = value.tolist()
                parameters[key] = value

        # auto detect the parameter kind
        if self.kind is None:
            # auto detect what kind of space we have
            # kind = "point" is a single point in parameter space, one value only
            # kind = "bound" is a bounded parameter space with 2 values: min and max
            # kind = "grid" is a grid space with as many values on each axis as wished

            # first, we assume grid
            self.kind = "grid"
            parameterLengths = [len(value) for key, value in parameters.items()]
            # if all parameters have the same length
            if parameterLengths.count(parameterLengths[0]) == len(parameterLengths):
                if parameterLengths[0] == 1:
                    self.kind = "point"
                elif parameterLengths[0] == 2:
                    self.kind = "bound"
            logging.info(f'Assuming parameter kind "{self.kind}"')

        # do some kind-specific tests
        if self.kind == "bound":
            # check the boundaries
            self._validate_param_bounds(list(parameters.values()))

        # set all parameters as attributes for easy access
        for key, value in parameters.items():
            setattr(self, key, value)

        return parameters

    def _parameterListsToDict(self, keys, values):
        parameters = {}
        assert len(keys) == len(values), "Names and values of parameters are not same length."
        for key, value in zip(keys, values):
            parameters[key] = value
        return parameters
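The "grid" kind expands to a cartesian product over all parameter lists. The same expansion can be sketched with the standard library (itertools stands in here for the pypet.cartesian_product call used in get_parametrization):

```python
import itertools

parameters = {"a": [0.0, 0.5, 1.0], "b": [1, 2]}

# Cartesian product: every combination of a and b, stored column-wise
# as a dict of equally long lists (one entry per grid point)
combos = list(itertools.product(*parameters.values()))
grid = {k: [c[i] for c in combos] for i, k in enumerate(parameters)}

print(grid["a"])  # each value of a repeated for every value of b
print(grid["b"])
```

A 3-value parameter crossed with a 2-value parameter yields 6 grid points, which is why grid explorations grow multiplicatively with each added parameter.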
Initialize parameter space. A parameter space can be initialized in two ways: either parameters is a dictionary of the form {"parName1" : [0, 1, 2], "parName2" : [3, 4]}, or parameters is a list of names and parameterValues contains the values of each parameter.
Parameters:

- parameters (dict, list[str]): Parameter dictionary or list of parameter names, e.g. ['x', 'y']. Required.
- parameterValues (list[list[float]]): List of parameter values (must be floats), e.g. [[x_min, x_max], [y_min, y_max], ...]. Default: None
- kind (str): String describing the kind of parameter space:
  - point: a single point in parameter space
  - bound: a bound in parameter space, i.e. two values per parameter
  - grid: a cartesian product over parameters
  - sequence: a sequence of univariate parameter changes - only one parameter changes at a time, the others stay at their defaults
  - explicit: explicitly define a parameter space, i.e. the lists of all parameters have to have the same length
  - None: ParameterSpace tries to auto-detect the correct kind
  Default: None
- allow_star_notation (bool): Whether to allow star notation in parameter names (MultiModel). Default: False

Source code in neurolib/utils/parameterSpace.py
    def __init__(self, parameters, parameterValues=None, kind=None, allow_star_notation=False):
        """
        Initialize parameter space. Parameter space can be initialized in two ways:
        Either a `parameters` is a dictionary of the form `{"parName1" : [0, 1, 2], "parName2" : [3, 4]}`,
        or `parameters` is a list of names and `parameterValues` are values of each parameter.

        :param parameters: parameter dictionary or list of names of parameters e.g. `['x', 'y']`
        :type parameters: `dict, list[str, str]`
        :param parameterValues: list of parameter values (must be floats) e.g. `[[x_min, x_max], [y_min, y_max], ...]`
        :type parameterValues: `list[list[float, float]]`
        :param kind: string describing the kind of parameter space:
            - `point`: a single point in parameter space
            - `bound`: a bound in parameter space, i.e. two values per parameter
            - `grid`: a cartesian product over parameters
            - `sequence`: a sequence of univariate parameter changes - only one will change at the time, other
                parameters will stay as default
            - `explicit`: explicitely define a parameter space, i.e lists of all parameters have to have the same length
            - None: parameterSpace tries to auto-detect the correct kind
        :type kind: str
        :param allow_star_notation: whether to allow star notation in parameter names - MultiModel
        :type allow_star_notation: bool
        """
        assert kind in SUPPORTED_KINDS
        self.kind = kind
        self.parameters = parameters
        self.star = allow_star_notation
        # in case a parameter dictionary was given
        if parameterValues is None:
            assert isinstance(
                parameters, dict
            ), "Parameters must be a dict, if no values are given in `parameterValues`"
        else:
            # check if all names are strings
            assert np.all([isinstance(pn, str) for pn in parameters]), "Parameter names must all be strings."
            # check if all parameter values are lists
            assert np.all([isinstance(pv, (list, tuple)) for pv in parameterValues]), "Parameter values must be a list."
            parameters = self._parameterListsToDict(parameters, parameterValues)

        self.parameters = self._processParameterDict(parameters)
        self.parameterNames = list(self.parameters.keys())
        self.parameterValues = list(self.parameters.values())

        # let's create a named tuple of the parameters
        # Note: evolution.py implementation relies on named tuples
        self.named_tuple_constructor = namedtuple("ParameterSpace", sanitize_dot_dict(parameters))
        self.named_tuple = self.named_tuple_constructor(*self.parameterValues)

        # set attributes of this class to make it accessible
        for i, p in enumerate(self.parameters):
            setattr(self, p, self.parameterValues[i])
This function returns a random single parameter from the whole space in the form of { \"par1\" : 1, \"par2\" : 2}.
This function is used by neurolib/optimize/exploration.py to add parameters of the space to pypet (for initialization).
Parameters:
Name Type Description Default safe bool
Return a \"safe\" parameter or the original. Safe refers to returning python floats, not, for example, numpy.float64 (necessary for pypet).
False Source code in neurolib/utils/parameterSpace.py
def getRandom(self, safe=False):\n\"\"\"This function returns a random single parameter from the whole space\n in the form of { \"par1\" : 1, \"par2\" : 2}.\n\n This function is used by neurolib/optimize/exploration.py\n to add parameters of the space to pypet (for initialization)\n\n :param safe: Return a \"safe\" parameter or the original. Safe refers to\n returning python floats, not, for example, numpy.float64 (necessary for pypet).\n :type safe: bool\n \"\"\"\n randomPar = {}\n if safe:\n for key, value in self.parameters.items():\n random_value = np.random.choice(value)\n if isinstance(random_value, np.float64):\n random_value = float(random_value)\n elif isinstance(random_value, np.int64):\n random_value = int(random_value)\n randomPar[key] = random_value\n else:\n for key, value in self.parameters.items():\n randomPar[key] = np.random.choice(value)\n return randomPar\n
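The `safe` casting above can be illustrated without pypet or the `ParameterSpace` class itself. The following is a minimal sketch (the helper name `get_random` is hypothetical, not part of neurolib): draw one value per parameter and, when `safe=True`, cast numpy scalars back to native Python types.

```python
import numpy as np

def get_random(parameters, safe=False):
    # Draw one random value per parameter, mirroring ParameterSpace.getRandom.
    random_par = {}
    for key, values in parameters.items():
        value = np.random.choice(values)
        if safe:
            # pypet needs native Python types, not numpy scalars
            if isinstance(value, np.floating):
                value = float(value)
            elif isinstance(value, np.integer):
                value = int(value)
        random_par[key] = value
    return random_par

space = {"x": [0.0, 1.0, 2.0], "y": [3.0, 4.0]}
draw = get_random(space, safe=True)
```

Note that `np.random.choice` on a list of floats returns `np.float64`, so without the cast pypet would receive a numpy scalar.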
"},{"location":"utils/signal/","title":"Signal","text":"Source code in neurolib/utils/signal.py
class Signal:\n name = \"\"\n label = \"\"\n signal_type = \"\"\n unit = \"\"\n description = \"\"\n _copy_attributes = [\n \"name\",\n \"label\",\n \"signal_type\",\n \"unit\",\n \"description\",\n \"process_steps\",\n ]\n PROCESS_STEPS_KEY = \"process_steps\"\n\n @classmethod\n def from_model_output(cls, model, group=\"\", time_in_ms=True):\n\"\"\"\n Initial Signal from modelling output.\n \"\"\"\n assert isinstance(model, Model)\n return cls(model.xr(group=group), time_in_ms=time_in_ms)\n\n @classmethod\n def from_file(cls, filename):\n\"\"\"\n Load signal from saved file.\n\n :param filename: filename for the Signal\n :type filename: str\n \"\"\"\n if not filename.endswith(NC_EXT):\n filename += NC_EXT\n # load NC file\n xarray = xr.load_dataarray(filename)\n # init class\n signal = cls(xarray)\n # if nc file has attributes, copy them to signal class\n if xarray.attrs:\n process_steps = []\n for k, v in xarray.attrs.items():\n if cls.PROCESS_STEPS_KEY in k:\n idx = int(k[len(cls.PROCESS_STEPS_KEY) + 1 :])\n process_steps.insert(idx, v)\n else:\n setattr(signal, k, v)\n else:\n logging.warning(\"No metadata found, setting empty...\")\n process_steps = [f\"raw {signal.signal_type} signal: {signal.start_time}--\" f\"{signal.end_time}s\"]\n setattr(signal, cls.PROCESS_STEPS_KEY, process_steps)\n return signal\n\n def __init__(self, data, time_in_ms=False):\n\"\"\"\n :param data: data for the signal, assumes time dimension with time in seconds\n :type data: xr.DataArray\n :param time_in_ms: whether time dimension is in ms\n :type time_in_ms: bool\n \"\"\"\n assert isinstance(data, xr.DataArray)\n data = deepcopy(data)\n assert \"time\" in data.dims, \"DataArray must have time axis\"\n if time_in_ms:\n data[\"time\"] = data[\"time\"] / 1000.0\n data[\"time\"] = np.around(data[\"time\"], 6)\n self.data = data\n # assert time dimension is last\n self.data = self.data.transpose(*(self.dims_not_time + [\"time\"]))\n # compute dt and sampling frequency\n self.dt = 
np.around(np.diff(data.time).mean(), 6)\n self.sampling_frequency = 1.0 / self.dt\n self.process_steps = [f\"raw {self.signal_type} signal: {self.start_time}--{self.end_time}s\"]\n\n def __str__(self):\n\"\"\"\n String representation.\n \"\"\"\n return (\n f\"{self.name} representing {self.signal_type} signal with unit of \"\n f\"{self.unit} with user-provided description: `{self.description}`\"\n f\". Shape of the signal is {self.shape} with dimensions \"\n f\"{self.data.dims}. Signal starts at {self.start_time} and ends at \"\n f\"{self.end_time}.\"\n )\n\n def __repr__(self):\n\"\"\"\n Representation.\n \"\"\"\n return self.__str__()\n\n def __eq__(self, other):\n\"\"\"\n Comparison operator.\n\n :param other: other `Signal` to compare with\n :type other: `Signal`\n :return: whether two `Signals` are the same\n :rtype: bool\n \"\"\"\n assert isinstance(other, Signal)\n # assert data are the same\n try:\n xr.testing.assert_allclose(self.data, other.data)\n eq = True\n except AssertionError:\n eq = False\n # check attributes, but if not equal, only warn the user\n for attr in self._copy_attributes:\n if getattr(self, attr) != getattr(other, attr):\n logging.warning(f\"`{attr}` not equal between signals.\")\n return eq\n\n def __getitem__(self, pos):\n\"\"\"\n Get item selects in output dimension.\n \"\"\"\n add_steps = [f\"select `{pos}` output\"]\n return self.__constructor__(self.data.sel(output=pos)).__finalize__(self, add_steps)\n\n def __finalize__(self, other, add_steps=None):\n\"\"\"\n Copy attributes from other to self. 
Used when constructing class\n instance with different data, but same metadata.\n\n :param other: other instance of `Signal`\n :type other: `Signal`\n :param add_steps: add steps to preprocessing\n :type add_steps: list|None\n \"\"\"\n assert isinstance(other, Signal)\n for attr in self._copy_attributes:\n setattr(self, attr, deepcopy(getattr(other, attr)))\n if add_steps is not None:\n self.process_steps += add_steps\n return self\n\n @property\n def __constructor__(self):\n\"\"\"\n Return constructor, so that each child class would initiate a new\n instance of the correct class, i.e. first in the method resolution\n order.\n \"\"\"\n return self.__class__.mro()[0]\n\n def _write_attrs_to_xr(self):\n\"\"\"\n Copy attributes to xarray before saving.\n \"\"\"\n # write attributes to xarray\n for attr in self._copy_attributes:\n value = getattr(self, attr)\n # if list need to unwrap\n if isinstance(value, (list, tuple)):\n for idx, val in enumerate(value):\n self.data.attrs[f\"{attr}_{idx}\"] = val\n else:\n self.data.attrs[attr] = deepcopy(value)\n\n def save(self, filename):\n\"\"\"\n Save signal.\n\n :param filename: filename to save, currently saves to netCDF file, which is natively supported by xarray\n :type filename: str\n \"\"\"\n self._write_attrs_to_xr()\n if not filename.endswith(NC_EXT):\n filename += NC_EXT\n self.data.to_netcdf(filename)\n\n def iterate(self, return_as=\"signal\"):\n\"\"\"\n Return iterator over columns, so univariate measures can be computed\n per column. 
Loops over tuples as (variable name, timeseries).\n\n :param return_as: how to return columns: `xr` as xr.DataArray, `signal` as\n instance of NeuroSignal with the same attributes as the mother signal\n :type return_as: str\n \"\"\"\n try:\n stacked = self.data.stack({\"all\": self.dims_not_time})\n except ValueError:\n logging.warning(\"No dimensions along which to stack...\")\n stacked = self.data.expand_dims(\"all\")\n\n if return_as == \"xr\":\n yield from stacked.groupby(\"all\")\n elif return_as == \"signal\":\n for name_coords, column in stacked.groupby(\"all\"):\n if not isinstance(name_coords, (list, tuple)):\n name_coords = [name_coords]\n name_dict = {k: v for k, v in zip(self.dims_not_time, name_coords)}\n yield name_dict, self.__constructor__(column).__finalize__(self, [f\"select {column.name}\"])\n else:\n raise ValueError(f\"Data type not understood: {return_as}\")\n\n def sel(self, sel_args, inplace=True):\n\"\"\"\n Subselect part of signal using xarray's `sel`, i.e. selecting by actual\n physical index, hence time in seconds.\n\n :param sel_args: arguments you'd give to xr.sel(), i.e. slice of times\n you want to select, in seconds as a len=2 list or tuple\n :type sel_args: tuple|list\n :param inplace: whether to do the operation in place or return\n :type inplace: bool\n \"\"\"\n assert len(sel_args) == 2, \"Must provide 2 arguments\"\n selected = self.data.sel(time=slice(sel_args[0], sel_args[1]))\n add_steps = [f\"select {sel_args[0] or 'x'}:{sel_args[1] or 'x'}s\"]\n if inplace:\n self.data = selected\n self.process_steps += add_steps\n else:\n return self.__constructor__(selected).__finalize__(self, add_steps)\n\n def isel(self, isel_args, inplace=True):\n\"\"\"\n Subselect part of signal using xarray's `isel`, i.e. selecting by index,\n hence integers.\n\n :param loc_args: arguments you'd give to xr.isel(), i.e. 
slice of\n indices you want to select, in seconds as a len=2 list or tuple\n :type loc_args: tuple|list\n :param inplace: whether to do the operation in place or return\n :type inplace: bool\n \"\"\"\n assert len(isel_args) == 2, \"Must provide 2 arguments\"\n selected = self.data.isel(time=slice(isel_args[0], isel_args[1]))\n start = isel_args[0] * self.dt if isel_args[0] is not None else \"x\"\n end = isel_args[1] * self.dt if isel_args[1] is not None else \"x\"\n add_steps = [f\"select {start}:{end}s\"]\n if inplace:\n self.data = selected\n self.process_steps += add_steps\n else:\n return self.__constructor__(selected).__finalize__(self, add_steps)\n\n def rolling(self, roll_over, function=np.mean, dropnans=True, inplace=True):\n\"\"\"\n Return rolling reduction over signal's time dimension. The window is\n centered around the midpoint.\n\n :param roll_over: window to use, in seconds\n :type roll_over: float\n :param function: function to use for reduction\n :type function: callable\n :param dropnans: whether to drop NaNs - will shorten time dimension, or\n not\n :type dropnans: bool\n :param inplace: whether to do the operation in place or return\n :type inplace: bool\n \"\"\"\n assert callable(function)\n rolling = self.data.rolling(time=int(roll_over * self.sampling_frequency), center=True).reduce(function)\n add_steps = [f\"rolling {function.__name__} over {roll_over}s\"]\n if dropnans:\n rolling = rolling.dropna(\"time\")\n add_steps[0] += \"; drop NaNs\"\n if inplace:\n self.data = rolling\n self.process_steps += add_steps\n else:\n return self.__constructor__(rolling).__finalize__(self, add_steps)\n\n def sliding_window(self, length, step=1, window_function=\"boxcar\", lengths_in_seconds=False):\n\"\"\"\n Return iterator over sliding windows with windowing function applied.\n Each window has length `length` and each is translated by `step` steps.\n For no windowing function use \"boxcar\". 
If the last window would have\n the same length as other, it is omitted, i.e. last window does not have\n to end with the final timeseries point!\n\n :param length: length of the window, can be index or time in seconds,\n see `lengths_in_seconds`\n :type length: int|float\n :param step: how much to translate window in the temporal sense, can be\n index or time in seconds, see `lengths_in_seconds`\n :type step: int|float\n :param window_function: windowing function to use, this is passed to\n `get_window()`; see `scipy.signal.windows.get_window` documentation\n :type window_function: str|tuple|float\n :param lengths_in_seconds: if True, `length` and `step` are interpreted\n in seconds, if False they are indices\n :type lengths_in_seconds: bool\n :yield: generator with windowed Signals\n \"\"\"\n if lengths_in_seconds:\n length = int(length / self.dt)\n step = int(step / self.dt)\n assert (\n length < self.data.time.shape[0]\n ), f\"Length must be smaller than time span of the timeseries: {self.data.time.shape[0]}\"\n assert step <= length, \"Step cannot be larger than length, some part of timeseries would be omitted!\"\n current_idx = 0\n add_steps = f\"{str(window_function)} window: \"\n windowing_function = get_window(window_function, Nx=length)\n while current_idx <= (self.data.time.shape[0] - length):\n yield self.__constructor__(\n self.data.isel(time=slice(current_idx, current_idx + length)) * windowing_function\n ).__finalize__(self, [add_steps + f\"{current_idx}:{current_idx + length}\"])\n current_idx += step\n\n @property\n def shape(self):\n\"\"\"\n Return shape of the data. 
Time axis is the first one.\n \"\"\"\n return self.data.shape\n\n @property\n def dims_not_time(self):\n\"\"\"\n Return list of dimensions that are not time.\n \"\"\"\n return [dim for dim in self.data.dims if dim != \"time\"]\n\n @property\n def coords_not_time(self):\n\"\"\"\n Return dict with all coordinates except time.\n \"\"\"\n return {k: v.values for k, v in self.data.coords.items() if k != \"time\"}\n\n @property\n def start_time(self):\n\"\"\"\n Return starting time of the signal.\n \"\"\"\n return self.data.time.values[0]\n\n @property\n def end_time(self):\n\"\"\"\n Return ending time of the signal.\n \"\"\"\n return self.data.time.values[-1]\n\n @property\n def time(self):\n\"\"\"\n Return time vector.\n \"\"\"\n return self.data.time.values\n\n @property\n def preprocessing_steps(self):\n\"\"\"\n Return preprocessing steps done on the data.\n \"\"\"\n return \" -> \".join(self.process_steps)\n\n def pad(self, how_much, in_seconds=False, padding_type=\"constant\", side=\"both\", inplace=True, **kwargs):\n\"\"\"\n Pad signal by `how_much` on given side of given type.\n\n :param how_much: how much we should pad, can be time points, or seconds,\n see `in_seconds`\n :type how_much: float|int\n :param in_seconds: whether `how_much` is in seconds, if False, it is\n number of time points\n :type in_seconds: bool\n :param padding_type: how to pad the signal, see `np.pad` documentation\n :type padding_type: str\n :param side: which side to pad - \"before\", \"after\", or \"both\"\n :type side: str\n :param inplace: whether to do the operation in place or return\n :type inplace: bool\n :kwargs: passed to `np.pad`\n \"\"\"\n if in_seconds:\n how_much = int(np.around(how_much / self.dt))\n if side == \"before\":\n pad_width = (how_much, 0)\n pad_times = np.arange(-how_much, 0) * self.dt + self.data.time.values[0]\n new_times = np.concatenate([pad_times, self.data.time.values], axis=0)\n elif side == \"after\":\n pad_width = (0, how_much)\n pad_times = np.arange(1, 
how_much + 1) * self.dt + self.data.time.values[-1]\n new_times = np.concatenate([self.data.time.values, pad_times], axis=0)\n elif side == \"both\":\n pad_width = (how_much, how_much)\n pad_before = np.arange(-how_much, 0) * self.dt + self.data.time.values[0]\n pad_after = np.arange(1, how_much + 1) * self.dt + self.data.time.values[-1]\n new_times = np.concatenate([pad_before, self.data.time.values, pad_after], axis=0)\n side += \" sides\"\n else:\n raise ValueError(f\"Unknown padding side: {side}\")\n # add padding for other axes than time - zeroes\n pad_width = [(0, 0)] * len(self.dims_not_time) + [pad_width]\n padded = np.pad(self.data.values, pad_width, mode=padding_type, **kwargs)\n # to dataframe\n padded = xr.DataArray(padded, dims=self.data.dims, coords={**self.coords_not_time, \"time\": new_times})\n add_steps = [f\"{how_much * self.dt}s {padding_type} {side} padding\"]\n if inplace:\n self.data = padded\n self.process_steps += add_steps\n else:\n return self.__constructor__(padded).__finalize__(self, add_steps)\n\n def normalize(self, std=False, inplace=True):\n\"\"\"\n De-mean the timeseries. Optionally also standardise.\n\n :param std: normalize by std, i.e. 
to unit variance\n :type std: bool\n :param inplace: whether to do the operation in place or return\n :type inplace: bool\n \"\"\"\n\n def norm_func(x, dim):\n demeaned = x - x.mean(dim=dim)\n if std:\n return demeaned / x.std(dim=dim)\n else:\n return demeaned\n\n normalized = norm_func(self.data, dim=\"time\")\n add_steps = [\"normalize\", \"standardize\"] if std else [\"normalize\"]\n if inplace:\n self.data = normalized\n self.process_steps += add_steps\n else:\n return self.__constructor__(normalized).__finalize__(self, add_steps)\n\n def resample(self, to_frequency, inplace=True):\n\"\"\"\n Resample signal to target frequency.\n\n :param to_frequency: target frequency of the signal, in Hz\n :type to_frequency: float\n :param inplace: whether to do the operation in place or return\n :type inplace: bool\n \"\"\"\n to_frequency = float(to_frequency)\n try:\n from mne.filter import resample\n\n resample_func = partial(\n resample, up=to_frequency, down=self.sampling_frequency, npad=\"auto\", axis=-1, pad=\"edge\"\n )\n except ImportError:\n logging.warning(\"`mne` module not found, falling back to basic scipy's function\")\n\n def resample_func(x):\n return scipy_resample(\n x,\n num=int(round((to_frequency / self.sampling_frequency) * self.data.shape[-1])),\n axis=-1,\n window=\"boxcar\",\n )\n\n resampled = resample_func(self.data.values)\n # construct new times\n new_times = (np.arange(resampled.shape[-1], dtype=float) / to_frequency) + self.data.time.values[0]\n # to dataframe\n resampled = xr.DataArray(resampled, dims=self.data.dims, coords={**self.coords_not_time, \"time\": new_times})\n add_steps = [f\"resample to {to_frequency}Hz\"]\n if inplace:\n self.data = resampled\n self.sampling_frequency = to_frequency\n self.dt = np.around(np.diff(resampled.time).mean(), 6)\n self.process_steps += add_steps\n else:\n return self.__constructor__(resampled).__finalize__(self, add_steps)\n\n def hilbert_transform(self, return_as=\"complex\", inplace=True):\n\"\"\"\n 
Perform hilbert transform on the signal resulting in analytic signal.\n\n :param return_as: what to return\n `complex` will compute only analytical signal\n `amplitude` will compute amplitude, hence abs(H(x))\n `phase_wrapped` will compute phase, hence angle(H(x)), in -pi,pi\n `phase_unwrapped` will compute phase in a continuous sense, hence\n monotonic\n :param inplace: whether to do the operation in place or return\n :type inplace: bool\n \"\"\"\n analytic = hilbert(self.data, axis=-1)\n if return_as == \"amplitude\":\n analytic = np.abs(analytic)\n add_steps = [\"Hilbert - amplitude\"]\n elif return_as == \"phase_unwrapped\":\n analytic = np.unwrap(np.angle(analytic))\n add_steps = [\"Hilbert - unwrapped phase\"]\n elif return_as == \"phase_wrapped\":\n analytic = np.angle(analytic)\n add_steps = [\"Hilbert - wrapped phase\"]\n elif return_as == \"complex\":\n add_steps = [\"Hilbert - complex\"]\n else:\n raise ValueError(f\"Do not know how to return: {return_as}\")\n\n analytic = xr.DataArray(analytic, dims=self.data.dims, coords=self.data.coords)\n if inplace:\n self.data = analytic\n self.process_steps += add_steps\n else:\n return self.__constructor__(analytic).__finalize__(self, add_steps)\n\n def detrend(self, segments=None, inplace=True):\n\"\"\"\n Linearly detrend signal. 
If segments are given, detrending will be\n performed in each part.\n\n :param segments: segments for detrending, if None will detrend whole\n signal, given as indices of the time array\n :type segments: list|None\n :param inplace: whether to do the operation in place or return\n :type inplace: bool\n \"\"\"\n segments = segments or 0\n detrended = detrend(self.data, type=\"linear\", bp=segments, axis=-1)\n detrended = xr.DataArray(detrended, dims=self.data.dims, coords=self.data.coords)\n segments_text = f\" with segments: {segments}\" if segments != 0 else \"\"\n add_steps = [f\"detrend{segments_text}\"]\n if inplace:\n self.data = detrended\n self.process_steps += add_steps\n else:\n return self.__constructor__(detrended).__finalize__(self, add_steps)\n\n def filter(self, low_freq, high_freq, l_trans_bandwidth=\"auto\", h_trans_bandwidth=\"auto\", inplace=True, **kwargs):\n\"\"\"\n Filter data. Can be:\n low-pass (low_freq is None, high_freq is not None),\n high-pass (high_freq is None, low_freq is not None),\n band-pass (l_freq < h_freq),\n band-stop (l_freq > h_freq) filter type\n\n :param low_freq: frequency below which to filter the data\n :type low_freq: float|None\n :param high_freq: frequency above which to filter the data\n :type high_freq: float|None\n :param l_trans_bandwidth: transition band width for low frequency\n :type l_trans_bandwidth: float|str\n :param h_trans_bandwidth: transition band width for high frequency\n :type h_trans_bandwidth: float|str\n :param inplace: whether to do the operation in place or return\n :type inplace: bool\n :**kwargs: possible keywords to `mne.filter.create_filter`:\n `filter_length`=\"auto\",\n `method`=\"fir\",\n `iir_params`=None\n `phase`=\"zero\",\n `fir_window`=\"hamming\",\n `fir_design`=\"firwin\"\n \"\"\"\n try:\n from mne.filter import filter_data\n\n except ImportError:\n logging.warning(\"`mne` module not found, falling back to basic scipy's function\")\n filter_data = scipy_iir_filter_data\n\n filtered 
= filter_data(\n self.data.values, # times has to be the last axis\n sfreq=self.sampling_frequency,\n l_freq=low_freq,\n h_freq=high_freq,\n l_trans_bandwidth=l_trans_bandwidth,\n h_trans_bandwidth=h_trans_bandwidth,\n **kwargs,\n )\n add_steps = [f\"filter: low {low_freq or 'x'}Hz - high {high_freq or 'x'}Hz\"]\n # to dataframe\n filtered = xr.DataArray(filtered, dims=self.data.dims, coords=self.data.coords)\n if inplace:\n self.data = filtered\n self.process_steps += add_steps\n else:\n return self.__constructor__(filtered).__finalize__(self, add_steps)\n\n def functional_connectivity(self, fc_function=np.corrcoef):\n\"\"\"\n Compute and return functional connectivity from the data.\n\n :param fc_function: function which to use for FC computation, should\n take 2D array as space x time and convert it to space x space with\n desired measure\n \"\"\"\n if len(self.data[\"space\"]) <= 1:\n logging.error(\"Cannot compute functional connectivity from one timeseries.\")\n return None\n if self.data.ndim == 3:\n assert callable(fc_function)\n fcs = []\n for output in self.data[\"output\"]:\n current_slice = self.data.sel({\"output\": output})\n assert current_slice.ndim == 2\n fcs.append(fc_function(current_slice.values))\n\n return xr.DataArray(\n np.array(fcs),\n dims=[\"output\", \"space\", \"space\"],\n coords={\"output\": self.data.coords[\"output\"], \"space\": self.data.coords[\"space\"]},\n )\n if self.data.ndim == 2:\n return xr.DataArray(\n fc_function(self.data.values),\n dims=[\"space\", \"space\"],\n coords={\"space\": self.data.coords[\"space\"]},\n )\n\n def apply(self, func, inplace=True):\n\"\"\"\n Apply func for each timeseries.\n\n :param func: function to be applied for each 1D timeseries\n :type func: callable\n :param inplace: whether to do the operation in place or return\n :type inplace: bool\n \"\"\"\n assert callable(func)\n try:\n # this will work for element-wise function that does not reduces dimensions\n processed = xr.apply_ufunc(func, 
 self.data, input_core_dims=[[\"time\"]], output_core_dims=[[\"time\"]])\n add_steps = [f\"apply `{func.__name__}` function over time dim\"]\n if inplace:\n self.data = processed\n self.process_steps += add_steps\n else:\n return self.__constructor__(processed).__finalize__(self, add_steps)\n except ValueError:\n # this works for functions that reduce time dimension\n processed = xr.apply_ufunc(func, self.data, input_core_dims=[[\"time\"]])\n logging.warning(\n f\"Shape changed after operation! Old shape: {self.shape}, new \"\n f\"shape: {processed.shape}; Cannot cast to Signal class, \"\n \"returning as `xr.DataArray`\"\n )\n return processed\n
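The `sliding_window` method above yields windowed, tapered copies of the signal. Its indexing logic can be sketched with numpy and `scipy.signal.get_window` alone (the generator name `sliding_windows` is hypothetical; the real method wraps each window back into a `Signal`):

```python
import numpy as np
from scipy.signal import get_window

def sliding_windows(x, length, step=1, window_function="boxcar"):
    # Yield tapered windows of `length` samples, each translated by `step`.
    # A window that would run past the end of the series is omitted.
    taper = get_window(window_function, Nx=length)
    idx = 0
    while idx <= x.shape[-1] - length:
        yield x[..., idx:idx + length] * taper
        idx += step

x = np.arange(10, dtype=float)
wins = list(sliding_windows(x, length=4, step=2))  # 4 windows, boxcar taper
```

With `"boxcar"` the taper is all ones, so the windows are plain slices; passing e.g. `"hamming"` would taper each window's edges.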
def __eq__(self, other):\n\"\"\"\n Comparison operator.\n\n :param other: other `Signal` to compare with\n :type other: `Signal`\n :return: whether two `Signals` are the same\n :rtype: bool\n \"\"\"\n assert isinstance(other, Signal)\n # assert data are the same\n try:\n xr.testing.assert_allclose(self.data, other.data)\n eq = True\n except AssertionError:\n eq = False\n # check attributes, but if not equal, only warn the user\n for attr in self._copy_attributes:\n if getattr(self, attr) != getattr(other, attr):\n logging.warning(f\"`{attr}` not equal between signals.\")\n return eq\n
Copy attributes from other to self. Used when constructing class instance with different data, but same metadata.
Parameters:
Name Type Description Default other `Signal`
other instance of Signal
required add_steps list|None
add steps to preprocessing
None Source code in neurolib/utils/signal.py
def __finalize__(self, other, add_steps=None):\n\"\"\"\n Copy attributes from other to self. Used when constructing class\n instance with different data, but same metadata.\n\n :param other: other instance of `Signal`\n :type other: `Signal`\n :param add_steps: add steps to preprocessing\n :type add_steps: list|None\n \"\"\"\n assert isinstance(other, Signal)\n for attr in self._copy_attributes:\n setattr(self, attr, deepcopy(getattr(other, attr)))\n if add_steps is not None:\n self.process_steps += add_steps\n return self\n
data for the signal, assumes time dimension with time in seconds
required time_in_ms bool
whether time dimension is in ms
False Source code in neurolib/utils/signal.py
def __init__(self, data, time_in_ms=False):\n\"\"\"\n :param data: data for the signal, assumes time dimension with time in seconds\n :type data: xr.DataArray\n :param time_in_ms: whether time dimension is in ms\n :type time_in_ms: bool\n \"\"\"\n assert isinstance(data, xr.DataArray)\n data = deepcopy(data)\n assert \"time\" in data.dims, \"DataArray must have time axis\"\n if time_in_ms:\n data[\"time\"] = data[\"time\"] / 1000.0\n data[\"time\"] = np.around(data[\"time\"], 6)\n self.data = data\n # assert time dimension is last\n self.data = self.data.transpose(*(self.dims_not_time + [\"time\"]))\n # compute dt and sampling frequency\n self.dt = np.around(np.diff(data.time).mean(), 6)\n self.sampling_frequency = 1.0 / self.dt\n self.process_steps = [f\"raw {self.signal_type} signal: {self.start_time}--{self.end_time}s\"]\n
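The time-axis handling in `__init__` (ms-to-seconds conversion, rounding, `dt`, and sampling frequency) can be reproduced with plain numpy, without the xarray wrapping; this is a sketch assuming a regularly sampled time vector:

```python
import numpy as np

# time vector in milliseconds, 0.1 ms steps, as a neurolib model would produce
time_ms = np.arange(0, 100, 0.1)

# convert to seconds and round, as Signal.__init__ does for time_in_ms=True
time_s = np.around(time_ms / 1000.0, 6)

dt = np.around(np.diff(time_s).mean(), 6)  # sampling step in seconds
sampling_frequency = 1.0 / dt              # in Hz
```

Rounding to 6 decimals avoids floating-point jitter in the time axis, which would otherwise make `dt` (and every time-based selection) slightly inexact.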
def __str__(self):\n\"\"\"\n String representation.\n \"\"\"\n return (\n f\"{self.name} representing {self.signal_type} signal with unit of \"\n f\"{self.unit} with user-provided description: `{self.description}`\"\n f\". Shape of the signal is {self.shape} with dimensions \"\n f\"{self.data.dims}. Signal starts at {self.start_time} and ends at \"\n f\"{self.end_time}.\"\n )\n
def apply(self, func, inplace=True):\n\"\"\"\n Apply func for each timeseries.\n\n :param func: function to be applied for each 1D timeseries\n :type func: callable\n :param inplace: whether to do the operation in place or return\n :type inplace: bool\n \"\"\"\n assert callable(func)\n try:\n # this will work for an element-wise function that does not reduce dimensions\n processed = xr.apply_ufunc(func, self.data, input_core_dims=[[\"time\"]], output_core_dims=[[\"time\"]])\n add_steps = [f\"apply `{func.__name__}` function over time dim\"]\n if inplace:\n self.data = processed\n self.process_steps += add_steps\n else:\n return self.__constructor__(processed).__finalize__(self, add_steps)\n except ValueError:\n # this works for functions that reduce the time dimension\n processed = xr.apply_ufunc(func, self.data, input_core_dims=[[\"time\"]])\n logging.warning(\n f\"Shape changed after operation! Old shape: {self.shape}, new \"\n f\"shape: {processed.shape}; Cannot cast to Signal class, \"\n \"returning as `xr.DataArray`\"\n )\n return processed\n
Linearly detrend signal. If segments are given, detrending will be performed in each part.
Parameters:
Name Type Description Default segments list|None
segments for detrending, if None will detrend whole signal, given as indices of the time array
None inplace bool
whether to do the operation in place or return
True Source code in neurolib/utils/signal.py
def detrend(self, segments=None, inplace=True):\n\"\"\"\n Linearly detrend signal. If segments are given, detrending will be\n performed in each part.\n\n :param segments: segments for detrending, if None will detrend whole\n signal, given as indices of the time array\n :type segments: list|None\n :param inplace: whether to do the operation in place or return\n :type inplace: bool\n \"\"\"\n segments = segments or 0\n detrended = detrend(self.data, type=\"linear\", bp=segments, axis=-1)\n detrended = xr.DataArray(detrended, dims=self.data.dims, coords=self.data.coords)\n segments_text = f\" with segments: {segments}\" if segments != 0 else \"\"\n add_steps = [f\"detrend{segments_text}\"]\n if inplace:\n self.data = detrended\n self.process_steps += add_steps\n else:\n return self.__constructor__(detrended).__finalize__(self, add_steps)\n
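`Signal.detrend` delegates to `scipy.signal.detrend`; its behaviour can be seen directly with scipy, without the `Signal` wrapper (a minimal sketch):

```python
import numpy as np
from scipy.signal import detrend

# a signal that is a pure linear trend
t = np.linspace(0, 1, 500)
trend = 2.0 * t + 1.0

# bp=0 detrends the whole series; a list of breakpoint indices
# (the `segments` argument above) would detrend each piece separately
detrended = detrend(trend, type="linear", bp=0, axis=-1)
```

For a pure line the residual after linear detrending is numerically zero, which is a quick sanity check that the trend was fully removed.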
Filter data. Can be a low-pass (low_freq is None, high_freq is not None), high-pass (high_freq is None, low_freq is not None), band-pass (low_freq < high_freq), or band-stop (low_freq > high_freq) filter.
:**kwargs: possible keywords to mne.filter.create_filter: filter_length=\"auto\", method=\"fir\", iir_params=None, phase=\"zero\", fir_window=\"hamming\", fir_design=\"firwin\"
Parameters:
Name Type Description Default low_freq float|None
frequency below which to filter the data
required high_freq float|None
frequency above which to filter the data
required l_trans_bandwidth float|str
transition band width for low frequency
'auto' h_trans_bandwidth float|str
transition band width for high frequency
'auto' inplace bool
whether to do the operation in place or return
True Source code in neurolib/utils/signal.py
def filter(self, low_freq, high_freq, l_trans_bandwidth=\"auto\", h_trans_bandwidth=\"auto\", inplace=True, **kwargs):\n\"\"\"\n Filter data. Can be:\n low-pass (low_freq is None, high_freq is not None),\n high-pass (high_freq is None, low_freq is not None),\n band-pass (l_freq < h_freq),\n band-stop (l_freq > h_freq) filter type\n\n :param low_freq: frequency below which to filter the data\n :type low_freq: float|None\n :param high_freq: frequency above which to filter the data\n :type high_freq: float|None\n :param l_trans_bandwidth: transition band width for low frequency\n :type l_trans_bandwidth: float|str\n :param h_trans_bandwidth: transition band width for high frequency\n :type h_trans_bandwidth: float|str\n :param inplace: whether to do the operation in place or return\n :type inplace: bool\n :**kwargs: possible keywords to `mne.filter.create_filter`:\n `filter_length`=\"auto\",\n `method`=\"fir\",\n `iir_params`=None\n `phase`=\"zero\",\n `fir_window`=\"hamming\",\n `fir_design`=\"firwin\"\n \"\"\"\n try:\n from mne.filter import filter_data\n\n except ImportError:\n logging.warning(\"`mne` module not found, falling back to basic scipy's function\")\n filter_data = scipy_iir_filter_data\n\n filtered = filter_data(\n self.data.values, # times has to be the last axis\n sfreq=self.sampling_frequency,\n l_freq=low_freq,\n h_freq=high_freq,\n l_trans_bandwidth=l_trans_bandwidth,\n h_trans_bandwidth=h_trans_bandwidth,\n **kwargs,\n )\n add_steps = [f\"filter: low {low_freq or 'x'}Hz - high {high_freq or 'x'}Hz\"]\n # to dataframe\n filtered = xr.DataArray(filtered, dims=self.data.dims, coords=self.data.coords)\n if inplace:\n self.data = filtered\n self.process_steps += add_steps\n else:\n return self.__constructor__(filtered).__finalize__(self, add_steps)\n
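When `mne` is unavailable, `filter` falls back to a scipy-based IIR helper. The effect of a band-pass can be sketched with scipy directly; this is a stand-in using a Butterworth filter, not the exact `scipy_iir_filter_data` implementation:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 1000.0                      # sampling frequency in Hz
t = np.arange(0, 2, 1 / fs)
# 5 Hz component inside the band, 50 Hz component outside it
x = np.sin(2 * np.pi * 5 * t) + np.sin(2 * np.pi * 50 * t)

# band-pass 2-10 Hz; filtering forward and backward gives zero phase,
# analogous to mne's phase="zero"
sos = butter(4, [2, 10], btype="bandpass", fs=fs, output="sos")
filtered = sosfiltfilt(sos, x)
```

Away from the edges, the filtered trace should closely track the 5 Hz component while the 50 Hz component is strongly attenuated.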
Compute and return functional connectivity from the data.
Parameters:
Name Type Description Default fc_function
function which to use for FC computation, should take 2D array as space x time and convert it to space x space with desired measure
np.corrcoef Source code in neurolib/utils/signal.py
def functional_connectivity(self, fc_function=np.corrcoef):\n\"\"\"\n Compute and return functional connectivity from the data.\n\n :param fc_function: function which to use for FC computation, should\n take 2D array as space x time and convert it to space x space with\n desired measure\n \"\"\"\n if len(self.data[\"space\"]) <= 1:\n logging.error(\"Cannot compute functional connectivity from one timeseries.\")\n return None\n if self.data.ndim == 3:\n assert callable(fc_function)\n fcs = []\n for output in self.data[\"output\"]:\n current_slice = self.data.sel({\"output\": output})\n assert current_slice.ndim == 2\n fcs.append(fc_function(current_slice.values))\n\n return xr.DataArray(\n np.array(fcs),\n dims=[\"output\", \"space\", \"space\"],\n coords={\"output\": self.data.coords[\"output\"], \"space\": self.data.coords[\"space\"]},\n )\n if self.data.ndim == 2:\n return xr.DataArray(\n fc_function(self.data.values),\n dims=[\"space\", \"space\"],\n coords={\"space\": self.data.coords[\"space\"]},\n )\n
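For 2D data the method simply applies `fc_function` to the space x time array; with the default `np.corrcoef` this yields a symmetric space x space matrix with ones on the diagonal (a sketch without the xarray wrapping):

```python
import numpy as np

rng = np.random.default_rng(42)
# 3 "brain regions" (space) x 1000 time points
data = rng.standard_normal((3, 1000))
# make region 1 a noisy copy of region 0, so they correlate strongly
data[1] = data[0] + 0.1 * rng.standard_normal(1000)

fc = np.corrcoef(data)  # space x space functional connectivity
```

Any callable with the same 2D-in, 2D-out contract (e.g. a covariance or mutual-information measure) can be passed as `fc_function` instead.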
Perform a Hilbert transform on the signal, resulting in the analytic signal.

Parameters:

- `return_as` (default `'complex'`): what to return; `complex` will compute only the analytic signal, `amplitude` will compute the amplitude, hence abs(H(x)), `phase_wrapped` will compute the phase, hence angle(H(x)), in [-pi, pi], `phase_unwrapped` will compute the phase in a continuous sense, hence monotonic
- `inplace` (bool, default `True`): whether to do the operation in place or return

Source code in `neurolib/utils/signal.py`:

```python
def hilbert_transform(self, return_as="complex", inplace=True):
    """
    Perform hilbert transform on the signal resulting in analytic signal.

    :param return_as: what to return
        `complex` will compute only analytical signal
        `amplitude` will compute amplitude, hence abs(H(x))
        `phase_wrapped` will compute phase, hence angle(H(x)), in -pi,pi
        `phase_unwrapped` will compute phase in a continuous sense, hence
        monotonic
    :param inplace: whether to do the operation in place or return
    :type inplace: bool
    """
    analytic = hilbert(self.data, axis=-1)
    if return_as == "amplitude":
        analytic = np.abs(analytic)
        add_steps = ["Hilbert - amplitude"]
    elif return_as == "phase_unwrapped":
        analytic = np.unwrap(np.angle(analytic))
        add_steps = ["Hilbert - unwrapped phase"]
    elif return_as == "phase_wrapped":
        analytic = np.angle(analytic)
        add_steps = ["Hilbert - wrapped phase"]
    elif return_as == "complex":
        add_steps = ["Hilbert - complex"]
    else:
        raise ValueError(f"Do not know how to return: {return_as}")

    analytic = xr.DataArray(analytic, dims=self.data.dims, coords=self.data.coords)
    if inplace:
        self.data = analytic
        self.process_steps += add_steps
    else:
        return self.__constructor__(analytic).__finalize__(self, add_steps)
```
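The three non-complex `return_as` modes map directly onto numpy/scipy operations. A standalone sketch on a pure tone (the sampling rate and frequency are illustrative):

```python
import numpy as np
from scipy.signal import hilbert

t = np.arange(0, 1, 1e-3)            # 1 s at 1 kHz
x = np.sin(2 * np.pi * 10 * t)       # 10 Hz sine

analytic = hilbert(x, axis=-1)       # complex analytic signal

amplitude = np.abs(analytic)                 # envelope, ~1 for a unit sine
phase_wrapped = np.angle(analytic)           # in (-pi, pi]
phase_unwrapped = np.unwrap(phase_wrapped)   # monotonic for a pure tone
```

For a pure tone, the envelope is flat and the unwrapped phase grows linearly at 2*pi*f per second.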
Subselect part of the signal using xarray's `isel`, i.e. selecting by index, hence integers.

Parameters:

- `isel_args` (tuple|list, required): arguments you'd give to `xr.isel()`, i.e. the slice of indices you want to select, as a len=2 list or tuple
- `inplace` (bool, default `True`): whether to do the operation in place or return

Source code in `neurolib/utils/signal.py`:

```python
def isel(self, isel_args, inplace=True):
    """
    Subselect part of signal using xarray's `isel`, i.e. selecting by index,
    hence integers.

    :param isel_args: arguments you'd give to xr.isel(), i.e. slice of
        indices you want to select, as a len=2 list or tuple
    :type isel_args: tuple|list
    :param inplace: whether to do the operation in place or return
    :type inplace: bool
    """
    assert len(isel_args) == 2, "Must provide 2 arguments"
    selected = self.data.isel(time=slice(isel_args[0], isel_args[1]))
    start = isel_args[0] * self.dt if isel_args[0] is not None else "x"
    end = isel_args[1] * self.dt if isel_args[1] is not None else "x"
    add_steps = [f"select {start}:{end}s"]
    if inplace:
        self.data = selected
        self.process_steps += add_steps
    else:
        return self.__constructor__(selected).__finalize__(self, add_steps)
```
Return an iterator over columns, so univariate measures can be computed per column. Loops over tuples as (variable name, timeseries).

Parameters:

- `return_as` (str, default `'signal'`): how to return columns: `xr` as xr.DataArray, `signal` as instance of NeuroSignal with the same attributes as the mother signal

Source code in `neurolib/utils/signal.py`:

```python
def iterate(self, return_as="signal"):
    """
    Return iterator over columns, so univariate measures can be computed
    per column. Loops over tuples as (variable name, timeseries).

    :param return_as: how to return columns: `xr` as xr.DataArray, `signal` as
        instance of NeuroSignal with the same attributes as the mother signal
    :type return_as: str
    """
    try:
        stacked = self.data.stack({"all": self.dims_not_time})
    except ValueError:
        logging.warning("No dimensions along which to stack...")
        stacked = self.data.expand_dims("all")

    if return_as == "xr":
        yield from stacked.groupby("all")
    elif return_as == "signal":
        for name_coords, column in stacked.groupby("all"):
            if not isinstance(name_coords, (list, tuple)):
                name_coords = [name_coords]
            name_dict = {k: v for k, v in zip(self.dims_not_time, name_coords)}
            yield name_dict, self.__constructor__(column).__finalize__(self, [f"select {column.name}"])
    else:
        raise ValueError(f"Data type not understood: {return_as}")
```
De-mean the timeseries. Optionally also standardise.

Parameters:

- `std` (bool, default `False`): normalize by std, i.e. to unit variance
- `inplace` (bool, default `True`): whether to do the operation in place or return

Source code in `neurolib/utils/signal.py`:

```python
def normalize(self, std=False, inplace=True):
    """
    De-mean the timeseries. Optionally also standardise.

    :param std: normalize by std, i.e. to unit variance
    :type std: bool
    :param inplace: whether to do the operation in place or return
    :type inplace: bool
    """

    def norm_func(x, dim):
        demeaned = x - x.mean(dim=dim)
        if std:
            return demeaned / x.std(dim=dim)
        else:
            return demeaned

    normalized = norm_func(self.data, dim="time")
    add_steps = ["normalize", "standardize"] if std else ["normalize"]
    if inplace:
        self.data = normalized
        self.process_steps += add_steps
    else:
        return self.__constructor__(normalized).__finalize__(self, add_steps)
```
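The `norm_func` above reduces to a one-liner on plain arrays; a minimal numpy sketch of de-meaning and standardising along the time axis (the toy data is illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
# 2 channels x 1000 time points with nonzero mean and non-unit variance.
data = 3.0 + 2.0 * rng.standard_normal((2, 1000))

# De-mean per channel (time is the last axis).
demeaned = data - data.mean(axis=-1, keepdims=True)
# Additionally standardise to unit variance (the std=True branch).
standardized = demeaned / data.std(axis=-1, keepdims=True)
```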
Pad the signal by `how_much` on the given side with the given padding type.

Parameters:

- `how_much` (float|int, required): how much we should pad, can be time points or seconds, see `in_seconds`
- `in_seconds` (bool, default `False`): whether `how_much` is in seconds; if False, it is a number of time points
- `padding_type` (str, default `'constant'`): how to pad the signal, see `np.pad` documentation
- `side` (str, default `'both'`): which side to pad - "before", "after", or "both"
- `inplace` (bool, default `True`): whether to do the operation in place or return
- `**kwargs`: passed to `np.pad`

Source code in `neurolib/utils/signal.py`:

```python
def pad(self, how_much, in_seconds=False, padding_type="constant", side="both", inplace=True, **kwargs):
    """
    Pad signal by `how_much` on given side of given type.

    :param how_much: how much we should pad, can be time points, or seconds,
        see `in_seconds`
    :type how_much: float|int
    :param in_seconds: whether `how_much` is in seconds, if False, it is
        number of time points
    :type in_seconds: bool
    :param padding_type: how to pad the signal, see `np.pad` documentation
    :type padding_type: str
    :param side: which side to pad - "before", "after", or "both"
    :type side: str
    :param inplace: whether to do the operation in place or return
    :type inplace: bool
    :kwargs: passed to `np.pad`
    """
    if in_seconds:
        how_much = int(np.around(how_much / self.dt))
    if side == "before":
        pad_width = (how_much, 0)
        pad_times = np.arange(-how_much, 0) * self.dt + self.data.time.values[0]
        new_times = np.concatenate([pad_times, self.data.time.values], axis=0)
    elif side == "after":
        pad_width = (0, how_much)
        pad_times = np.arange(1, how_much + 1) * self.dt + self.data.time.values[-1]
        new_times = np.concatenate([self.data.time.values, pad_times], axis=0)
    elif side == "both":
        pad_width = (how_much, how_much)
        pad_before = np.arange(-how_much, 0) * self.dt + self.data.time.values[0]
        pad_after = np.arange(1, how_much + 1) * self.dt + self.data.time.values[-1]
        new_times = np.concatenate([pad_before, self.data.time.values, pad_after], axis=0)
        side += " sides"
    else:
        raise ValueError(f"Unknown padding side: {side}")
    # add padding for other axes than time - zeroes
    pad_width = [(0, 0)] * len(self.dims_not_time) + [pad_width]
    padded = np.pad(self.data.values, pad_width, mode=padding_type, **kwargs)
    # to dataframe
    padded = xr.DataArray(padded, dims=self.data.dims, coords={**self.coords_not_time, "time": new_times})
    add_steps = [f"{how_much * self.dt}s {padding_type} {side} padding"]
    if inplace:
        self.data = padded
        self.process_steps += add_steps
    else:
        return self.__constructor__(padded).__finalize__(self, add_steps)
```
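The "both sides" branch above does two things: pads the data array and extends the time vector accordingly. A minimal numpy sketch (dt, duration, and padding amount are illustrative):

```python
import numpy as np

dt = 0.1
data = np.arange(5.0)[None, :]        # 1 channel x 5 time points
times = np.arange(5) * dt             # 0.0 .. 0.4 s

# Pad 0.2 s on both sides -> 2 samples each.
how_much = int(np.around(0.2 / dt))
padded = np.pad(data, [(0, 0), (how_much, how_much)], mode="constant")

# Extend the time vector on both sides to match.
pad_before = np.arange(-how_much, 0) * dt + times[0]
pad_after = np.arange(1, how_much + 1) * dt + times[-1]
new_times = np.concatenate([pad_before, times, pad_after])
```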
Return a rolling reduction over the signal's time dimension. The window is centered around the midpoint.

Parameters:

- `roll_over` (float, required): window to use, in seconds
- `function` (callable, default `np.mean`): function to use for reduction
- `dropnans` (bool, default `True`): whether to drop NaNs, which will shorten the time dimension
- `inplace` (bool, default `True`): whether to do the operation in place or return

Source code in `neurolib/utils/signal.py`:

```python
def rolling(self, roll_over, function=np.mean, dropnans=True, inplace=True):
    """
    Return rolling reduction over signal's time dimension. The window is
    centered around the midpoint.

    :param roll_over: window to use, in seconds
    :type roll_over: float
    :param function: function to use for reduction
    :type function: callable
    :param dropnans: whether to drop NaNs - will shorten time dimension, or
        not
    :type dropnans: bool
    :param inplace: whether to do the operation in place or return
    :type inplace: bool
    """
    assert callable(function)
    rolling = self.data.rolling(time=int(roll_over * self.sampling_frequency), center=True).reduce(function)
    add_steps = [f"rolling {function.__name__} over {roll_over}s"]
    if dropnans:
        rolling = rolling.dropna("time")
        add_steps[0] += "; drop NaNs"
    if inplace:
        self.data = rolling
        self.process_steps += add_steps
    else:
        return self.__constructor__(rolling).__finalize__(self, add_steps)
```
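neurolib delegates the rolling reduction to xarray; for the default `function=np.mean` with `dropnans=True`, the result is equivalent to a centered moving average that keeps only positions where the full window fits. A numpy-only sketch (sampling frequency and window length are illustrative):

```python
import numpy as np

fs = 10.0                        # sampling frequency, Hz
roll_over = 0.5                  # window length in seconds
x = np.arange(20, dtype=float)   # simple ramp signal

window = int(roll_over * fs)     # 5 samples
# Centered rolling mean; "valid" drops the edges where the window
# does not fully fit, mirroring dropnans=True.
rolled = np.convolve(x, np.ones(window) / window, mode="valid")
```

For a ramp, each window mean equals the value at the window center, so the output is again a ramp, shortened by `window - 1` samples.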
- `filename` (str, required): filename to save; currently saves to a netCDF file, which is natively supported by xarray

Source code in `neurolib/utils/signal.py`:

```python
def save(self, filename):
    """
    Save signal.

    :param filename: filename to save, currently saves to netCDF file, which is natively supported by xarray
    :type filename: str
    """
    self._write_attrs_to_xr()
    if not filename.endswith(NC_EXT):
        filename += NC_EXT
    self.data.to_netcdf(filename)
```
Subselect part of the signal using xarray's `sel`, i.e. selecting by the actual physical index, hence time in seconds.

Parameters:

- `sel_args` (tuple|list, required): arguments you'd give to `xr.sel()`, i.e. the slice of times you want to select, in seconds as a len=2 list or tuple
- `inplace` (bool, default `True`): whether to do the operation in place or return

Source code in `neurolib/utils/signal.py`:

```python
def sel(self, sel_args, inplace=True):
    """
    Subselect part of signal using xarray's `sel`, i.e. selecting by actual
    physical index, hence time in seconds.

    :param sel_args: arguments you'd give to xr.sel(), i.e. slice of times
        you want to select, in seconds as a len=2 list or tuple
    :type sel_args: tuple|list
    :param inplace: whether to do the operation in place or return
    :type inplace: bool
    """
    assert len(sel_args) == 2, "Must provide 2 arguments"
    selected = self.data.sel(time=slice(sel_args[0], sel_args[1]))
    add_steps = [f"select {sel_args[0] or 'x'}:{sel_args[1] or 'x'}s"]
    if inplace:
        self.data = selected
        self.process_steps += add_steps
    else:
        return self.__constructor__(selected).__finalize__(self, add_steps)
```
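The difference between `sel` (physical time in seconds) and `isel` (integer indices) boils down to two ways of slicing the same array. A plain numpy sketch, without xarray (dt and the selection bounds are illustrative):

```python
import numpy as np

dt = 0.5
times = np.arange(0, 10, dt)   # time axis in seconds
data = np.sin(times)

# Like `sel`: select the 2 s .. 4 s stretch by physical time (inclusive).
mask = (times >= 2.0) & (times <= 4.0)
by_time = data[mask]

# Like `isel`: select by integer index (end-exclusive slice).
by_index = data[4:8]
```

Note the endpoint semantics: the time-based selection includes both bounds, while the index slice excludes its end.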
Return an iterator over sliding windows with a windowing function applied. Each window has length `length` and each is translated by `step` steps. For no windowing function use "boxcar". If the last window would not have the same length as the others, it is omitted, i.e. the last yielded window does not have to end at the final timeseries point!

Parameters:

- `length` (int|float, required): length of the window, can be an index or time in seconds, see `lengths_in_seconds`
- `step` (int|float, default `1`): how much to translate the window in the temporal sense, can be an index or time in seconds, see `lengths_in_seconds`
- `window_function` (str|tuple|float, default `'boxcar'`): windowing function to use, this is passed to `get_window()`; see `scipy.signal.windows.get_window` documentation
- `lengths_in_seconds` (bool, default `False`): if True, `length` and `step` are interpreted in seconds, if False they are indices
- Yields: generator with windowed Signals

Source code in `neurolib/utils/signal.py`:

```python
def sliding_window(self, length, step=1, window_function="boxcar", lengths_in_seconds=False):
    """
    Return iterator over sliding windows with windowing function applied.
    Each window has length `length` and each is translated by `step` steps.
    For no windowing function use "boxcar". If the last window would not
    have the same length as the others, it is omitted, i.e. the last window
    does not have to end with the final timeseries point!

    :param length: length of the window, can be index or time in seconds,
        see `lengths_in_seconds`
    :type length: int|float
    :param step: how much to translate window in the temporal sense, can be
        index or time in seconds, see `lengths_in_seconds`
    :type step: int|float
    :param window_function: windowing function to use, this is passed to
        `get_window()`; see `scipy.signal.windows.get_window` documentation
    :type window_function: str|tuple|float
    :param lengths_in_seconds: if True, `length` and `step` are interpreted
        in seconds, if False they are indices
    :type lengths_in_seconds: bool
    :yield: generator with windowed Signals
    """
    if lengths_in_seconds:
        length = int(length / self.dt)
        step = int(step / self.dt)
    assert (
        length < self.data.time.shape[0]
    ), f"Length must be smaller than time span of the timeseries: {self.data.time.shape[0]}"
    assert step <= length, "Step cannot be larger than length, some part of timeseries would be omitted!"
    current_idx = 0
    add_steps = f"{str(window_function)} window: "
    windowing_function = get_window(window_function, Nx=length)
    while current_idx <= (self.data.time.shape[0] - length):
        yield self.__constructor__(
            self.data.isel(time=slice(current_idx, current_idx + length)) * windowing_function
        ).__finalize__(self, [add_steps + f"{current_idx}:{current_idx + length}"])
        current_idx += step
```
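The loop above, stripped of the xarray and bookkeeping parts, is a simple windowed generator. A standalone sketch (the function name `sliding_windows` and the example lengths are illustrative, not part of neurolib's API):

```python
import numpy as np
from scipy.signal import get_window

def sliding_windows(x, length, step=1, window_function="boxcar"):
    """Yield consecutive windows of `x` multiplied by a windowing function.

    A trailing remainder shorter than `length` is omitted, as in
    `Signal.sliding_window`.
    """
    win = get_window(window_function, Nx=length)
    idx = 0
    while idx <= x.shape[-1] - length:
        yield x[..., idx:idx + length] * win
        idx += step

# 10 samples, windows of 4 translated by 2 -> windows start at 0, 2, 4, 6.
x = np.ones(10)
windows = list(sliding_windows(x, length=4, step=2, window_function="hann"))
```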
Base class for stimuli consisting of multiple time series, such as summed inputs or concatenated inputs.
Source code in neurolib/utils/stimulus.py
```python
class BaseMultipleInputs(Stimulus):
    """
    Base class for stimuli consisting of multiple time series, such as summed inputs or concatenated inputs.
    """

    def __init__(self, inputs):
        """
        :param inputs: List of Inputs to combine
        :type inputs: list[`Input`]
        """
        assert all(isinstance(input, Input) for input in inputs)
        self.inputs = inputs

    def __len__(self):
        """
        Return number of inputs.
        """
        return len(self.inputs)

    def __getitem__(self, index):
        """
        Return inputs by index. This also allows iteration.
        """
        return self.inputs[index]

    @property
    def n(self):
        n = set([input.n for input in self])
        assert len(n) == 1
        return next(iter(n))

    @n.setter
    def n(self, n):
        for input in self:
            input.n = n

    def get_params(self):
        """
        Get all parameters recursively for all inputs.
        """
        return {
            "type": self.__class__.__name__,
            **{f"input_{i}": input.get_params() for i, input in enumerate(self)},
        }

    def update_params(self, params_dict):
        """
        Update all parameters recursively.
        """
        for i, input in enumerate(self):
            input.update_params(params_dict.get(f"input_{i}", {}))
```
Parameters:

- `inputs` (list[`Input`], required): List of Inputs to combine

Source code in `neurolib/utils/stimulus.py`:

```python
def __init__(self, inputs):
    """
    :param inputs: List of Inputs to combine
    :type inputs: list[`Input`]
    """
    assert all(isinstance(input, Input) for input in inputs)
    self.inputs = inputs
```
```python
def get_params(self):
    """
    Get all parameters recursively for all inputs.
    """
    return {
        "type": self.__class__.__name__,
        **{f"input_{i}": input.get_params() for i, input in enumerate(self)},
    }
```
```python
def update_params(self, params_dict):
    """
    Update all parameters recursively.
    """
    for i, input in enumerate(self):
        input.update_params(params_dict.get(f"input_{i}", {}))
```
Return the concatenation of all stimuli as a numpy array.

Source code in `neurolib/utils/stimulus.py`:

```python
def as_array(self, duration, dt):
    """
    Return concatenation of all stimuli as numpy array.
    """
    # normalize ratios to sum = 1
    ratios = [i / sum(self.length_ratios) for i in self.length_ratios]
    concat = np.concatenate(
        [input.as_array(duration * ratio, dt) for input, ratio in zip(self.inputs, ratios)],
        axis=1,
    )
    length = int(duration / dt)
    # due to rounding errors, the overall length might be longer by a few dt
    return concat[:, :length]
```
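The ratio normalization and final trim above can be illustrated with plain numpy: two pieces whose `length_ratios` split a fixed total duration, concatenated along the time axis and clipped to exactly `duration / dt` samples. The values below (a zero piece followed by a ones piece) are illustrative stand-ins for actual stimuli:

```python
import numpy as np

duration, dt = 10.0, 0.1
length_ratios = [1, 3]                              # relative lengths of the two pieces
ratios = [r / sum(length_ratios) for r in length_ratios]

# Each "stimulus" here is just a constant array of the right length.
pieces = [
    np.zeros((1, int(round(duration * ratios[0] / dt)))),   # first quarter
    np.ones((1, int(round(duration * ratios[1] / dt)))),    # remaining three quarters
]
concat = np.concatenate(pieces, axis=1)[:, : int(duration / dt)]
```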
````python
class Input:
    """
    Generates input to model.

    Base class for other input types.
    """

    def __init__(self, n=1, seed=None):
        """
        :param n: Number of spatial dimensions / independent realizations of the input.
            For deterministic inputs, the array is just copied,
            for stochastic / noisy inputs, this means independent realizations.
        :type n: int
        :param seed: Seed for the random number generator.
        :type seed: int|None
        """
        self.n = n
        self.seed = seed
        # seed the generator
        np.random.seed(seed)
        # get parameter names
        self.param_names = inspect.getfullargspec(self.__init__).args
        self.param_names.remove("self")

    def __add__(self, other):
        """
        Sum two inputs into one SummedStimulus.
        """
        assert isinstance(other, Input)
        assert self.n == other.n
        if isinstance(other, SummedStimulus):
            return SummedStimulus(inputs=[self] + other.inputs)
        else:
            return SummedStimulus(inputs=[self, other])

    def __and__(self, other):
        """
        Concatenate two inputs into ConcatenatedStimulus.
        """
        assert isinstance(other, Input)
        assert self.n == other.n
        if isinstance(other, ConcatenatedStimulus):
            return ConcatenatedStimulus(inputs=[self] + other.inputs, length_ratios=[1] + other.length_ratios)
        else:
            return ConcatenatedStimulus(inputs=[self, other])

    def _reset(self):
        """
        Reset is called after generating an input. Can be used to reset
        intrinsic properties.
        """
        pass

    def get_params(self):
        """
        Return the parameters of the input as dict.
        """
        assert all(hasattr(self, name) for name in self.param_names), self.param_names
        params = {name: getattr(self, name) for name in self.param_names}
        return {"type": self.__class__.__name__, **params}

    def update_params(self, params_dict):
        """
        Update model input parameters.

        :param params_dict: New parameters for this input
        :type params_dict: dict
        """

        def _sanitize(value):
            """
            Change string `None` to actual None - can happen with Exploration or
            Evolution, since `pypet` does None -> "None".
            """
            if value == "None":
                return None
            else:
                return value

        for param, value in params_dict.items():
            if hasattr(self, param):
                setattr(self, param, _sanitize(value))

    def _get_times(self, duration, dt):
        """
        Generate time vector.

        :param duration: Duration of the input, in milliseconds
        :type duration: float
        :param dt: dt of input, in milliseconds
        :type dt: float
        """
        self.times = np.arange(dt, duration + dt, dt)

    def generate_input(self, duration, dt):
        """
        Function to generate input.

        :param duration: Duration of the input, in milliseconds
        :type duration: float
        :param dt: dt of input, in milliseconds
        :type dt: float
        """
        raise NotImplementedError

    def as_array(self, duration, dt):
        """
        Return input as numpy array.

        :param duration: Duration of the input, in milliseconds
        :type duration: float
        :param dt: dt of input, in milliseconds
        :type dt: float
        """
        array = self.generate_input(duration, dt)
        self._reset()
        return array

    def as_cubic_splines(self, duration, dt, shift_start_time=0.0):
        """
        Return as cubic Hermite splines.

        :param duration: Duration of the input, in milliseconds
        :type duration: float
        :param dt: dt of input, in milliseconds
        :type dt: float
        :param shift_start_time: By how much to shift the stimulus start time
        :type shift_start_time: float
        """
        self._get_times(duration, dt)
        splines = CubicHermiteSpline.from_data(self.times + shift_start_time, self.generate_input(duration, dt).T)
        self._reset()
        return splines

    def to_model(self, model):
        """
        Return numpy array of stimuli based on model parameters.

        Example:
        ```
        model.params["ext_exc_input"] = SinusoidalInput(...).to_model(model)
        ```

        :param model: neurolib's model
        :type model: `neurolib.models.Model`
        """
        assert isinstance(model, Model)
        # set number of spatial dimensions as the number of nodes in the brain network
        self.n = model.params["N"]
        return self.as_array(duration=model.params["duration"], dt=model.params["dt"])
````
- `n` (int, default `1`): Number of spatial dimensions / independent realizations of the input. For deterministic inputs, the array is just copied; for stochastic / noisy inputs, this means independent realizations.
- `seed` (int|None, default `None`): Seed for the random number generator.

Source code in `neurolib/utils/stimulus.py`:

```python
def __init__(self, n=1, seed=None):
    """
    :param n: Number of spatial dimensions / independent realizations of the input.
        For deterministic inputs, the array is just copied,
        for stochastic / noisy inputs, this means independent realizations.
    :type n: int
    :param seed: Seed for the random number generator.
    :type seed: int|None
    """
    self.n = n
    self.seed = seed
    # seed the generator
    np.random.seed(seed)
    # get parameter names
    self.param_names = inspect.getfullargspec(self.__init__).args
    self.param_names.remove("self")
```
Source code in `neurolib/utils/stimulus.py`:

```python
def generate_input(self, duration, dt):
    """
    Function to generate input.

    :param duration: Duration of the input, in milliseconds
    :type duration: float
    :param dt: dt of input, in milliseconds
    :type dt: float
    """
    raise NotImplementedError
```
```python
def get_params(self):
    """
    Return the parameters of the input as dict.
    """
    assert all(hasattr(self, name) for name in self.param_names), self.param_names
    params = {name: getattr(self, name) for name in self.param_names}
    return {"type": self.__class__.__name__, **params}
```
Parameters:

- `model` (`neurolib.models.Model`, required): neurolib's model

Source code in `neurolib/utils/stimulus.py`:

````python
def to_model(self, model):
    """
    Return numpy array of stimuli based on model parameters.

    Example:
    ```
    model.params["ext_exc_input"] = SinusoidalInput(...).to_model(model)
    ```

    :param model: neurolib's model
    :type model: `neurolib.models.Model`
    """
    assert isinstance(model, Model)
    # set number of spatial dimensions as the number of nodes in the brain network
    self.n = model.params["N"]
    return self.as_array(duration=model.params["duration"], dt=model.params["dt"])
````
Source code in `neurolib/utils/stimulus.py`:

```python
def update_params(self, params_dict):
    """
    Update model input parameters.

    :param params_dict: New parameters for this input
    :type params_dict: dict
    """

    def _sanitize(value):
        """
        Change string `None` to actual None - can happen with Exploration or
        Evolution, since `pypet` does None -> "None".
        """
        if value == "None":
            return None
        else:
            return value

    for param, value in params_dict.items():
        if hasattr(self, param):
            setattr(self, param, _sanitize(value))
```
- `sigma` (float, required): Standard deviation of the Wiener process, i.e. strength of the noise
- `tau` (float, required): Timescale of the OU process, in ms

Source code in `neurolib/utils/stimulus.py`:

```python
def __init__(
    self,
    mu,
    sigma,
    tau,
    n=1,
    seed=None,
):
    """
    :param mu: Drift of the OU process
    :type mu: float
    :param sigma: Standard deviation of the Wiener process, i.e. strength of the noise
    :type sigma: float
    :param tau: Timescale of the OU process, in ms
    :type tau: float
    """
    self.mu = mu
    self.sigma = sigma
    self.tau = tau
    super().__init__(
        n=n,
        seed=seed,
    )
```
```python
def as_array(self, duration, dt):
    """
    Return sum of all inputs as numpy array.
    """
    return np.sum(
        np.stack([input.as_array(duration, dt) for input in self.inputs]),
        axis=0,
    )
```
Return the sum of all inputs as cubic Hermite splines.

Source code in `neurolib/utils/stimulus.py`:

```python
def as_cubic_splines(self, duration, dt, shift_start_time=0.0):
    """
    Return sum of all inputs as cubic Hermite splines.
    """
    result = self.inputs[0].as_cubic_splines(duration, dt, shift_start_time)
    for input in self.inputs[1:]:
        result.plus(input.as_cubic_splines(duration, dt, shift_start_time))
    return result
```
Stimulus sampled from a Wiener process, i.e. drawn from standard normal distribution N(0, sqrt(dt)).
Source code in neurolib/utils/stimulus.py
```python
class WienerProcess(Input):
    """
    Stimulus sampled from a Wiener process, i.e. drawn from standard normal distribution N(0, sqrt(dt)).
    """

    def generate_input(self, duration, dt):
        self._get_times(duration=duration, dt=dt)
        return np.random.normal(0.0, np.sqrt(dt), (self.n, self.times.shape[0]))
```
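The increments of a Wiener process have standard deviation sqrt(dt), which is exactly what `generate_input` draws. A standalone numpy sketch mirroring it (duration, dt, and n are illustrative):

```python
import numpy as np

np.random.seed(0)
dt, duration, n = 0.1, 1000.0, 2          # ms, ms, realizations

# Number of time steps, computed robustly against float rounding.
steps = int(np.around(duration / dt))
# Wiener increments: N(0, sqrt(dt)) per step, independently per realization.
increments = np.random.normal(0.0, np.sqrt(dt), (n, steps))
```

Cumulatively summing the increments along the time axis would give sample paths of the Wiener process itself.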
No stimulus, i.e. all zeros. Can be used to add a delay between two stimuli.
Source code in neurolib/utils/stimulus.py
```python
class ZeroInput(Input):
    """
    No stimulus, i.e. all zeros. Can be used to add a delay between two stimuli.
    """

    def generate_input(self, duration, dt):
        self._get_times(duration=duration, dt=dt)
        return np.zeros((self.n, self.times.shape[0]))
```
Return a rectified input with exponential decay, i.e. a negative step followed by a slow decay to zero, followed by a positive step and again a slow decay to zero. Can be used for bistability detection.

Parameters:

- `amplitude` (float, required): Amplitude (both negative and positive) for the step
- `n` (int, default `1`): Number of realizations (spatial dimension)

Returns:

- `ConcatenatedStimulus`: Concatenated input which represents the rectified stimulus with exponential decay

Source code in `neurolib/utils/stimulus.py`:

```python
def RectifiedInput(amplitude, n=1):
    """
    Return rectified input with exponential decay, i.e. a negative step followed by a
    slow decay to zero, followed by a positive step and again a slow decay to zero.
    Can be used for bistability detection.

    :param amplitude: Amplitude (both negative and positive) for the step
    :type amplitude: float
    :param n: Number of realizations (spatial dimension)
    :type n: int
    :return: Concatenated input which represents the rectified stimulus with exponential decay
    :rtype: `ConcatenatedStimulus`
    """

    return ConcatenatedStimulus(
        [
            StepInput(step_size=-amplitude, n=n),
            ExponentialInput(inp_max=amplitude, exp_type="rise", exp_coef=12.5, n=n)
            + StepInput(step_size=-amplitude, n=n),
            StepInput(step_size=amplitude, n=n),
            ExponentialInput(amplitude, exp_type="decay", exp_coef=7.5, n=n),
            StepInput(step_size=0.0, n=n),
        ],
        length_ratios=[0.5, 2.5, 0.5, 1.5, 1.0],
    )
```
"}]}
\ No newline at end of file
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Home","text":""},{"location":"#what-is-neurolib","title":"What is neurolib?","text":"
neurolib is a simulation and optimization framework for whole-brain modeling. It allows you to implement your own neural mass models which can simulate fMRI BOLD activity. neurolib helps you to analyse your simulations, to load and handle structural and functional brain data, and to use powerful evolutionary algorithms to tune your model's parameters and fit it to empirical data.
You can choose from different neural mass models to simulate the activity of each brain area. The main implementation is a mean-field model of spiking adaptive exponential integrate-and-fire neurons (AdEx) called ALNModel, where each brain area contains two populations of excitatory and inhibitory neurons. An analysis and validation of the ALNModel model can be found in our paper.
📚 Please read the gentle introduction to neurolib for an overview of the basic functionality and the science behind whole-brain simulations or read the documentation for getting started.
To browse the source code of neurolib visit our GitHub repository.
📝 Cite the following paper if you use neurolib for your own research:
Cakan, C., Jajcay, N. & Obermayer, K. neurolib: A Simulation Framework for Whole-Brain Neural Mass Modeling. Cogn. Comput. (2021).
The figure below shows a schematic of how a brain network is constructed:
Typically, in whole-brain modeling, diffusion tensor imaging (DTI) is used to infer the structural connectivity (the connection strengths) between different brain areas. In a DTI scan, the direction of the diffusion of molecules is measured across the whole brain. Using tractography, this information can yield the distribution of axonal fibers in the brain that connect distant brain areas, called the connectome. Together with an atlas that divides the brain into distinct areas, a matrix can be computed that encodes how many fibers go from one area to another, the so-called structural connectivity (SC) matrix. This matrix defines the coupling strengths between brain areas and acts as an adjacency matrix of the brain network. The fiber length determines the signal transmission delay between all brain areas. Combining the structural data with a computational model of the neuronal activity of each brain area, we can create a dynamical model of the whole brain.
The resulting whole-brain model consists of interconnected brain areas, each with its own internal neural dynamics. The neural activity can also be used to simulate hemodynamic BOLD activity using the Balloon-Windkessel model, which can be compared to empirical fMRI data. Often, BOLD activity is used to compute correlations of activity between brain areas, the so-called resting-state functional connectivity, resulting in a matrix with correlations between each brain area. This matrix can then be fitted to empirical fMRI recordings of the resting-state activity of the brain.
Below is an animation of the neuronal activity of a whole-brain model plotted on a brain.
It is recommended to clone or fork the entire repository since it will also include all examples and tests.

Project layout
```
neurolib/                       # Main module
├── models/                     # Neural mass models
│   ├── model.py                # Base model class
│   └── /.../                   # Implemented mass models
├── optimize/                   # Optimization submodule
│   ├── evolution/              # Evolutionary optimization
│   └── exploration/            # Parameter exploration
├── control/optimal_control/    # Optimal control submodule
│   ├── oc.py                   # Optimal control base class
│   ├── cost_functions.py       # cost functions for OC
│   └── /.../                   # Implemented OC models
├── data/                       # Empirical datasets (structural, functional)
├── utils/                      # Utility belt
│   ├── atlases.py              # Atlases (Region names, coordinates)
│   ├── collections.py          # Custom data types
│   ├── functions.py            # Useful functions
│   ├── loadData.py             # Dataset loader
│   ├── parameterSpace.py       # Parameter space
│   ├── saver.py                # Save simulation outputs
│   ├── signal.py               # Signal processing functions
│   └── stimulus.py             # Stimulus construction
├── examples/                   # Example Jupyter notebooks
├── docs/                       # Documentation
└── tests/                      # Automated tests
```
Example IPython Notebooks on how to use the library can be found in the ./examples/ directory, don't forget to check them out! You can run the examples in your browser using Binder by clicking here or one of the following links:
Example 0.0 - Basic use of the aln model
Example 0.3 - Fitz-Hugh Nagumo model fhn on a brain network
Example 0.6 - Minimal example of how to implement your own model in neurolib
Example 1.2 - Parameter exploration of a brain network and fitting to BOLD data
Example 2.0 - A simple example of the evolutionary optimization framework
Example 5.2 - Example of optimal control of the noise-free Wilson-Cowan model
A basic overview of the functionality of neurolib is also given in the following.
A detailed example is available as an IPython Notebook.
To simulate a whole-brain network model, first we need to load a DTI and a resting-state fMRI dataset. neurolib already provides some example data for you:
from neurolib.utils.loadData import Dataset\n\nds = Dataset(\"gw\")\n
The dataset that we just loaded looks like this:
We initialize a model with the dataset and run it:
model = ALNModel(Cmat = ds.Cmat, Dmat = ds.Dmat)\nmodel.params['duration'] = 5*60*1000 # in ms, simulates for 5 minutes\n\nmodel.run(bold=True)\n
This can take several minutes to compute, since we are simulating 80 brain regions for 5 minutes of real time. Note that we specified bold=True which simulates the BOLD model in parallel to the neuronal model. The resulting firing rates and BOLD functional connectivity look like this:
The quality of the fit of this simulation can be computed by correlating the simulated functional connectivity matrix above to the empirical resting-state functional connectivity for each subject of the dataset. This gives us an estimate of how well the model reproduces inter-areal BOLD correlations. As a rule of thumb, a value above 0.5 is considered good.
We can compute the quality of the fit of the simulated data using func.fc(), which calculates a functional connectivity matrix from N time series (N = number of brain regions). We use func.matrix_correlation() to compare this matrix to empirical data.
scores = [func.matrix_correlation(func.fc(model.BOLD.BOLD[:, 5:]), fcemp) for fcemp in ds.FCs]\n\nprint(\"Correlation per subject:\", [f\"{s:.2}\" for s in scores])\nprint(f\"Mean FC/FC correlation: {np.mean(scores):.2}\")\n
A detailed example of a single-node exploration is available as an IPython Notebook. For an example of a brain network exploration, see this Notebook.
Whenever you work with a model, it is of great importance to know what kind of dynamics it exhibits given a certain set of parameters. It is often useful to get an overview of the state space of a given model of interest. For example, in the case of aln, the dynamics depend strongly on the mean inputs to the excitatory and the inhibitory population. neurolib makes it very easy to quickly explore parameter spaces of a given model:
# create model\nmodel = ALNModel()\n# define the parameter space to explore\nparameters = ParameterSpace({\"mue_ext_mean\": np.linspace(0, 3, 21), # input to E\n \"mui_ext_mean\": np.linspace(0, 3, 21)}) # input to I\n\n# define exploration \nsearch = BoxSearch(model, parameters)\n\nsearch.run() \n
That's it! You can now use the built-in functions to load the simulation results from disk and perform your analysis:
search.loadResults()\n\n# calculate maximum firing rate for each parameter\nfor i in search.dfResults.index:\n search.dfResults.loc[i, 'max_r'] = np.max(search.results[i]['rates_exc'][:, -int(1000/model.params['dt']):])\n
We can plot the results to get something close to a bifurcation diagram!
A detailed example is available as a IPython Notebook.
neurolib also implements evolutionary parameter optimization, which works particularly well with brain networks. In an evolutionary algorithm, each simulation is represented as an individual and the parameters of the simulation, for example coupling strengths or noise level values, are represented as the genes of each individual. An individual is part of a population. In each generation, individuals are evaluated and ranked according to a fitness criterion. For whole-brain network simulations, this could be the fit of the simulated activity to empirical data. Then, individuals with a high fitness value are selected as parents and mate to create offspring. These offspring undergo random mutations of their genes. After all offspring are evaluated, the best individuals of the population are selected to transition into the next generation. This process goes on for a given number of generations until a stopping criterion is reached, such as a predefined maximum number of generations or the discovery of a large enough population with high fitness values.
An example genealogy tree is shown below. You can see the evolution starting at the top and individuals reproducing generation by generation. The color indicates the fitness.
neurolib makes it very easy to set up your own evolutionary optimization and everything else is handled under the hood. You can choose between two implemented evolutionary algorithms: adaptive is a Gaussian mutation and rank selection algorithm with adaptive step size that ensures convergence (a schematic is shown in the image below). nsga2 is an implementation of the popular multi-objective optimization algorithm by Deb et al. 2002.
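A bare-bones sketch of one such adaptive scheme, Gaussian mutation with rank selection and a shrinking step size, is shown below. This is illustrative only (neurolib's actual implementation builds on deap), and it reuses the toy circle fitness from the demonstration that follows:

```python
import numpy as np

# Bare-bones sketch of Gaussian mutation + rank selection with a shrinking
# (adaptive) step size; illustrative only, neurolib builds on deap instead.
rng = np.random.default_rng(42)

def fitness(pop):
    # toy fitness as in the circle demonstration: distance to the unit circle
    return np.abs(np.sum(pop ** 2, axis=1) - 1.0)

pop = rng.uniform(-5.0, 5.0, size=(50, 2))   # 50 individuals with genes (x, y)
sigma = 1.0                                  # mutation step size
for generation in range(40):
    ranks = np.argsort(fitness(pop))         # lower distance = better
    parents = pop[ranks[:25]]                # rank selection: keep the best half
    offspring = parents + rng.normal(0.0, sigma, parents.shape)  # Gaussian mutation
    pop = np.vstack([parents, offspring])    # elitism: parents survive
    sigma *= 0.9                             # adaptive step size shrinks over time
```

After a few dozen generations the population collapses onto the unit circle, mirroring the genealogy-tree intuition described above.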
Of course, if you like, you can dig deeper, define your own selection, mutation and mating operators. In the following demonstration, we will simply evaluate the fitness of each individual as the distance to the unit circle. After a couple of generations of mating, mutating and selecting, only individuals who are close to the circle should survive:
from neurolib.utils.parameterSpace import ParameterSpace\nfrom neurolib.optimize.evolution import Evolution\n\ndef optimize_me(traj):\n ind = evolution.getIndividualFromTraj(traj)\n\n # let's make a circle\n fitness_result = abs((ind.x**2 + ind.y**2) - 1)\n\n # gather results\n fitness_tuple = (fitness_result ,)\n result_dict = {\"result\" : [fitness_result]}\n\n return fitness_tuple, result_dict\n\n# we define a parameter space and its boundaries\npars = ParameterSpace(['x', 'y'], [[-5.0, 5.0], [-5.0, 5.0]])\n\n# initialize the evolution and go\nevolution = Evolution(optimize_me, pars, weightList = [-1.0], POP_INIT_SIZE= 100, POP_SIZE = 50, NGEN=10)\nevolution.run() \n
This gives us a summary of the last generation and plots a distribution of the individuals (and their parameters). Below is an animation of 10 generations of the evolutionary process. As you can see, after a couple of generations, all remaining individuals lie very close to the unit circle.
The optimal control module enables you to compute efficient stimulation signals for your neural model. If you know what your output should look like, this module computes the optimal input. Detailed example notebooks can be found in the examples folder (examples 5.1, 5.2, 5.3, 5.4). In optimal control computations, you trade precision with respect to a target against control strength. You can determine how much each contribution affects the result by setting the weights accordingly.
To compute an optimal control signal, you need to create a model (e.g., an FHN model) and define a target state (e.g., a sine curve with period 2).
You can then create a controlled model and run the iterative optimization to find the most efficient control input. The optimal control and the controlled model activity can be taken from the controlled model.
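The precision-versus-control-strength tradeoff can be illustrated on a toy scalar linear system: we descend the gradient of a cost that combines a precision weight w_p and a control-strength weight w_2. This is a self-contained NumPy sketch of the principle only, not neurolib's OC implementation; all names and values in it are made up:

```python
import numpy as np

# Toy illustration of the precision-vs-control-strength tradeoff, NOT
# neurolib's OC implementation: all names and values here are made up.
# System: x[t+1] = a*x[t] + u[t]; we look for the control u minimizing
# J = w_p * sum((x - target)^2) + w_2 * sum(u^2) by gradient descent.

a, T = 0.9, 50
w_p, w_2 = 1.0, 0.01
target = np.sin(2.0 * np.pi * np.arange(T) / 25.0)

def simulate(u):
    x = np.zeros(T)
    for t in range(T - 1):
        x[t + 1] = a * x[t] + u[t]
    return x

def cost_and_grad(u):
    x = simulate(u)
    J = w_p * np.sum((x - target) ** 2) + w_2 * np.sum(u ** 2)
    # backward (adjoint) pass: lam[t] accumulates dJ_precision/dx[t]
    lam = np.zeros(T)
    lam[-1] = 2.0 * w_p * (x[-1] - target[-1])
    for t in range(T - 2, -1, -1):
        lam[t] = 2.0 * w_p * (x[t] - target[t]) + a * lam[t + 1]
    grad = 2.0 * w_2 * u          # control-strength part of the gradient
    grad[:-1] += lam[1:]          # precision part, since dx[t+1]/du[t] = 1
    return J, grad

u = np.zeros(T)
for _ in range(8000):
    J, grad = cost_and_grad(u)
    u -= 0.003 * grad             # small step size keeps the descent stable
```

With w_p = 1 and w_2 = 0, the recovered control would reproduce the target-generating input; a nonzero w_2 trades a little precision against a weaker control signal.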
For a comprehensive study on optimal control of the Wilson-Cowan model based on the neurolib optimal control module, see Salfenmoser, L. & Obermayer, K. Optimal control of a Wilson\u2013Cowan model of neural population dynamics. Chaos 33, 043135 (2023). https://doi.org/10.1063/5.0144682.
neurolib is built using other amazing open source projects:
pypet - Python parameter exploration toolbox
deap - Distributed Evolutionary Algorithms in Python
numpy - The fundamental package for scientific computing with Python
numba - NumPy aware dynamic Python compiler using LLVM
Jupyter - Jupyter Interactive Notebook
"},{"location":"#how-to-cite","title":"How to cite","text":"
Cakan, C., Jajcay, N. & Obermayer, K. neurolib: A Simulation Framework for Whole-Brain Neural Mass Modeling. Cogn. Comput. (2021). https://doi.org/10.1007/s12559-021-09931-9
@article{cakan2021,\nauthor={Cakan, Caglar and Jajcay, Nikola and Obermayer, Klaus},\ntitle={neurolib: A Simulation Framework for Whole-Brain Neural Mass Modeling},\njournal={Cognitive Computation},\nyear={2021},\nmonth={Oct},\nissn={1866-9964},\ndoi={10.1007/s12559-021-09931-9},\nurl={https://doi.org/10.1007/s12559-021-09931-9}\n}\n
"},{"location":"#get-in-touch","title":"Get in touch","text":"
Caglar Cakan (cakan@ni.tu-berlin.de) Department of Software Engineering and Theoretical Computer Science, Technische Universit\u00e4t Berlin, Germany Bernstein Center for Computational Neuroscience Berlin, Germany
This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) with the project number 327654276 (SFB 1315) and the Research Training Group GRK1589/2.
The optimal control module was developed by Lena Salfenmoser and Martin Kr\u00fcck supported by the DFG project 163436311 (SFB 910).
"},{"location":"contributing/","title":"Contributing to neurolib","text":"
Thank you for your interest in contributing to neurolib. We welcome bug reports through the issues tab and pull requests for fixes or improvements. You are warmly invited to join our development efforts and make brain network modeling easier and more useful for all researchers.
To propose a change to neurolib's code, you should first clone the repository to your own GitHub account. Then, create a branch and make some changes. You can then send a pull request to neurolib's own repository and we will review and discuss your proposed changes.
More information on how to make pull requests can be found in the GitHub help pages.
Please be aware that we have a conservative policy for implementing new functionality. All new features need to be maintained, sometimes forever. We are a small team of developers and can only maintain a limited amount of code. Therefore, ideally, you should also feel responsible for the changes you have proposed and maintain them after they become part of neurolib.
We are using the black code formatter with the additional argument --line-length=120. It's called the \"uncompromising formatter\" because it is completely deterministic and you have literally no control over how your code will look. We like that! We recommend using black directly in your IDE, for example in VSCode.
We are using the sphinx format for commenting code. Comments are incredibly important to us since neurolib is supposed to be a library of user-facing code. It's encouraged to read the code, change it and build something on top of it. Our users are coders. Please write as many comments as you can, including a description of each function and method and its arguments, but also single-line comments for the code itself.
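For example, a sphinx-style docstring for a small helper function could look like this (the function itself is made up; only the comment format matters):

```python
def decimate(signal, q):
    """Return every q-th sample of a one-dimensional signal.

    A made-up toy function; only the sphinx comment format below matters.

    :param signal: input time series
    :type signal: list
    :param q: decimation factor, must be a positive integer
    :type q: int

    :return: decimated time series
    :rtype: list
    """
    # keep every q-th element, starting at index 0
    return signal[::q]
```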
"},{"location":"contributing/#implementing-a-neural-mass-model","title":"Implementing a neural mass model","text":"
You are very welcome to implement your favorite neural mass model and contribute it to neurolib.
The easiest way of implementing a model is to copy a model directory and adapt the relevant parts of it to your own model. Please have a look at how other models are implemented. We recommend having a look at the HopfModel which is a fairly simple model.
All models inherit from the Model base class which can be found in neurolib/models/model.py.
You can also check out the model implementation example to find out how a model is implemented.
All models need to pass tests. Tests are located in the tests/ directory of the project. A model should be added to the test files tests/test_models.py and tests/test_autochunk.py. However, you should also make sure that your model supports as many neurolib features as possible, such as exploration and optimization. If you did everything right, this should be the case.
As of now, models consist of three parts:
The model.py file which contains the class of the model. Here the model specifies attributes like its name, its state variables, its initial value parameters. Additionally, in the constructor (the __init__() method), the model loads its default parameters.
The loadDefaultParams.py file contains a function (loadDefaultParams()) which has the arguments Cmat for the structural connectivity matrix, Dmat for the delay matrix and seed for the seed of the random number generator. This function returns a dictionary (or dotdict, see neurolib/utils/collections.py) with all parameters inside.
The timeIntegration.py file contains a timeIntegration() function which has the argument params coming from the previous step. Here, we need to prepare the numerical integration. We load all relevant parameters from the params dictionary and pass them to the main integration loop. The integration loop is written such that it can be accelerated by numba, which speeds up the integration by a factor of around 1000.
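A minimal sketch of how these parts fit together is given below, with toy linear-decay dynamics standing in for a real model. All names apart from those described above are hypothetical; copy an existing model directory as your actual starting point:

```python
import numpy as np

# Hypothetical skeleton of the model parts described above, with toy
# dynamics; copy an existing model directory for real work.

def loadDefaultParams(Cmat=None, Dmat=None, seed=None):
    # returns a dictionary with all parameters of the model
    params = {}
    params["dt"] = 0.1                    # integration time step [ms]
    params["duration"] = 1000.0           # simulation duration [ms]
    params["Cmat"] = np.zeros((1, 1)) if Cmat is None else Cmat
    params["Dmat"] = np.zeros((1, 1)) if Dmat is None else Dmat
    params["seed"] = seed
    return params

def timeIntegration(params):
    # prepare the numerical integration from the params dictionary
    dt, duration = params["dt"], params["duration"]
    t = np.arange(1, round(duration / dt) + 1) * dt
    N = params["Cmat"].shape[0]
    x = np.ones((N, len(t)))
    # main integration loop (a real model would accelerate this with numba)
    for i in range(1, len(t)):
        x[:, i] = x[:, i - 1] + dt * (-x[:, i - 1])  # toy dynamics: dx/dt = -x
    return t, x
```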
We very much welcome example contributions since they help new users to learn how to make use of neurolib. They can include basic usage examples or tutorials of neurolib's features, or a demonstration of how to solve a specific scientific task using neural mass models or whole-brain networks.
Examples are provided as Jupyter Notebooks in the /examples/ directory of the project repository.
Notebooks should have a brief description of what they are trying to accomplish at the beginning.
It is recommended to change the working directory to the root directory at the very beginning of the notebook (os.chdir('..')).
Notebooks should be structured with different subheadings (Markdown style). Please also describe in words what you are doing in code.
We have a few small datasets already in neurolib so everyone can start simulating right away. If you'd like to contribute more data to the project, please feel invited to do so. We're looking for more structural connectivity matrices and fiber length matrices in the MATLAB matrix .mat format (which can be loaded by scipy.io.loadmat). We also appreciate BOLD data, EEG data, or MEG data. Other modalities could be useful as well. Please be aware that the data has to be in a parcellated form, i.e., the brain areas need to be organized according to an atlas like the AAL2 atlas (or others).
# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-1] == \"examples\":\n os.chdir('..')\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2 \n
try:\n import matplotlib.pyplot as plt\nexcept ImportError:\n import sys\n !{sys.executable} -m pip install matplotlib\n import matplotlib.pyplot as plt\n\nimport numpy as np\nimport scipy\n\n# Let's import the aln model\nfrom neurolib.models.aln import ALNModel\n\n# Some useful functions are provided here\nimport neurolib.utils.functions as func\n\n# a nice color map\nplt.rcParams['image.cmap'] = 'plasma'\n
# Create the model\nmodel = ALNModel()\n\n# Each model comes with a set of default parameters which are a dictionary. \n# Let's change the parameter that controls the duration of a simulation to 10s.\nmodel.params['duration'] = 10.0 * 1000 \n\n# For convenience, we could also use:\nmodel.params.duration = 10.0 * 1000\n\n# In the aln model an Ornstein-Uhlenbeck process is simulated in parallel\n# as the source of input noise fluctuations. Here we can set the variance\n# of the process. \n# For more info: https://en.wikipedia.org/wiki/Ornstein%E2%80%93Uhlenbeck_process \n# Let's add some noise.\nmodel.params['sigma_ou'] = 0.1\n\n# Finally, we run the model\nmodel.run()\n
Accessing the outputs is straightforward. Every model's outputs are stored in the model.outputs attribute. According to the specific name of each of the model's outputs, they can also be accessed as a key of the Model object, i.e. model['rates_exc'].
# Outputs are also available as an xr DataArray\nxr = model.xr()\nprint(xr.dims)\n# outputs can also be accessed via attributes in dot.notation\nprint(\"rates_exc\", model.rates_exc)\n
Bifurcation diagrams can give us an overview of how different parameters of the model affect its dynamics. The simplest method for drawing a bifurcation diagram is to simply change relevant parameters step by step and record the model's behavior in response to these changes. In this example, we want to see how the model's dynamics change with respect to the external input currents to the excitatory population. These input currents could be due to couplings with other nodes in a brain network, or we could model other factors like external electrical stimulation.
Below, you can see a schematic of the aln model. As you can see, a single node consists of one excitatory (red) and one inhibitory population (blue). The parameter that controls the mean input to the excitatory population is \\(\\mu_{E}\\) or model.params[\"mue_ext_mean\"] .
Let's first decrease the duration of a single run so we can scan the parameter space a bit faster and let's also disable the noisy input.
We draw a one-dimensional bifurcation diagram, so it is enough to loop through different values of mue_ext_mean and record the minimum and maximum of the rate for each parameter.
max_rate_e = []\nmin_rate_e = []\n# these are the different input values that we want to scan\nmue_inputs = np.linspace(0, 2, 50)\nfor mue in mue_inputs:\n # Note: this has to be a vector since it is input for all nodes\n # (but we have only one node in this example)\n model.params['mue_ext_mean'] = mue\n model.run()\n # we add the maximum and the minimum of the last second of the \n # simulation to a list\n max_rate_e.append(np.max(model.output[0, -int(1000/model.params['dt']):]))\n min_rate_e.append(np.min(model.output[0, -int(1000/model.params['dt']):]))\n
Let's plot the results!
plt.plot(mue_inputs, max_rate_e, c='k', lw = 2)\nplt.plot(mue_inputs, min_rate_e, c='k', lw = 2)\nplt.title(\"Bifurcation diagram of the aln model\")\nplt.xlabel(\"Input to excitatory population\")\nplt.ylabel(\"Min / max firing rate\")\n
\nText(0, 0.5, 'Min / max firing rate')\n
neurolib comes with some example datasets for exploring its functionality. Please be aware that these datasets are not tested and should not be used for your research, only for experimentation with the software.
A dataset for whole-brain modeling can consist of the following parts:
A structural connectivity matrix capturing the synaptic connection strengths between brain areas, often derived from DTI tractography of the whole brain. The connectome is then typically parcellated using a preferred atlas (for example the AAL2 atlas) and the number of axonal fibers connecting each brain area with every other area is counted. This number serves as an indication of the synaptic coupling strengths between the areas of the brain.
A delay matrix which can be calculated from the average length of the axonal fibers connecting each brain area with another.
A set of functional data that can act as a target for model optimization. Resting-state fMRI offers an easy and fairly unbiased way for calibrating whole-brain models. EEG data could be used as well.
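For illustration, the delay matrix from the second item can be derived from a fiber length matrix and an assumed axonal conduction speed. The numbers and the variable names below are made up for this sketch:

```python
import numpy as np

# Illustrative conversion of fiber lengths into signal delays; the numbers
# and the conduction speed are made up for this sketch.
lengthMat = np.array([[0.0, 120.0],
                      [120.0, 0.0]])   # fiber lengths between two areas [mm]
signalV = 20.0                         # assumed conduction speed [mm/ms] = [m/s]
Dmat = lengthMat / signalV             # delays between the areas [ms]
```

Here a 120 mm fiber at 20 m/s yields a 6 ms delay between the two areas.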
We can load a Dataset by passing the name of it in the constructor.
from neurolib.utils.loadData import Dataset\nds = Dataset(\"gw\")\n
We now create the aln model with a structural connectivity matrix and a delay matrix. In order to achieve a good fit of the BOLD activity to the empirical data, the model has to run for quite a while. As a rule of thumb, a simulation of resting-state BOLD activity should not be shorter than 3 minutes and preferably longer than 5 minutes of real time. If the empirical recordings are, for example, 10 minutes long, ideally a simulation of 10 minutes would be used to compare the output of the model to the resting-state recording.
model = ALNModel(Cmat = ds.Cmat, Dmat = ds.Dmat)\n\nmodel.params['duration'] = 0.2*60*1000 \n# Info: value 0.2*60*1000 is low for testing\n# use 5*60*1000 for real simulation\n
After some optimization to the resting-state fMRI data of the dataset, we found a set of parameters that creates interesting whole-brain dynamics. We set the mean input of the excitatory and the inhibitory population to be close to the E-I limit cycle.
model.params['mue_ext_mean'] = 1.57\nmodel.params['mui_ext_mean'] = 1.6\n# We set an appropriate level of noise\nmodel.params['sigma_ou'] = 0.09\n# And turn on adaptation with a low value of spike-triggered adaptation currents.\nmodel.params['b'] = 5.0\n
Let's have a look at what the data looks like. We can access the data of each model by calling its internal attributes. Here, we plot the structural connectivity matrix by calling model.params['Cmat'] and the fiber length matrix by calling model.params['lengthMat']. Of course, we can also access the dataset using the Dataset object itself. For example, the functional connectivity matrices of the BOLD timeseries in the dataset are given as a list in ds.FCs.
We run the model with bold simulation by using bold=True. This simulates the Balloon-Windkessel BOLD model in parallel to the neural population model in order to estimate the blood oxygen levels of the underlying neural activity. The output of the bold model can be used to compare the simulated data to empirical fMRI data (resting-state fMRI for example).
To save (a lot of) RAM, we can run the simulation in chunkwise mode. In this mode, the model will be simulated for a length of chunksize steps (not time in ms, but actual integration steps!), and the output of that chunk will be used to automatically reinitialize the model with the appropriate initial conditions. This allows for a serial continuation of the model without having to store all the data in memory and is particularly useful for very long simulations or for many parallel runs.
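The serial-continuation idea can be sketched with a toy Euler integrator (pure Python, illustrative only; neurolib's autochunk machinery handles this for real models):

```python
# Toy sketch of chunkwise integration (pure Python): simulate in chunks of
# `chunksize` steps and carry only the last state between chunks, instead of
# keeping the full time series in memory. Not neurolib's actual autochunk code.

def integrate_chunk(x0, n_steps, dt=0.1):
    # forward Euler integration of the toy dynamics dx/dt = -x
    xs = [x0]
    for _ in range(n_steps):
        xs.append(xs[-1] + dt * (-xs[-1]))
    return xs

def run_chunkwise(x0, total_steps, chunksize):
    state, done, last_chunk = x0, 0, None
    while done < total_steps:
        n = min(chunksize, total_steps - done)
        last_chunk = integrate_chunk(state, n)
        state = last_chunk[-1]    # reinitialize the next chunk from the last state
        done += n
    return last_chunk

# a single long run and a chunked run end in exactly the same state
full = integrate_chunk(1.0, 1000)
chunked = run_chunkwise(1.0, 1000, chunksize=100)
```

Because each chunk continues exactly where the previous one stopped, the chunked run reproduces the long run while only ever holding one chunk in memory.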
For convenience, they can also be accessed directly using attributes of the model with the outputs name, like model.rates_exc. The outputs are also available as xr DataArrays as model.xr().
Since we used bold=True to simulate BOLD, we can also access model.BOLD.BOLD for the actual BOLD activity, and model.BOLD.t for the time steps of the BOLD simulation (which are downsampled to 0.5 Hz by default).
# Plot functional connectivity and BOLD timeseries (z-scored)\nfig, axs = plt.subplots(1, 2, figsize=(6, 2), dpi=75, gridspec_kw={'width_ratios' : [1, 2]})\naxs[0].imshow(func.fc(model.BOLD.BOLD[:, 5:]))\naxs[1].imshow(scipy.stats.mstats.zscore(model.BOLD.BOLD[:, model.BOLD.t_BOLD>10000], axis=1), aspect='auto', extent=[model.BOLD.t_BOLD[model.BOLD.t_BOLD>10000][0], model.BOLD.t_BOLD[-1], 0, model.params['N']]);\n\naxs[0].set_title(\"FC\")\naxs[0].set_xlabel(\"Node\")\naxs[0].set_ylabel(\"Node\")\naxs[1].set_xlabel(\"t [ms]\")\n\n# the results of the model are also accessible through an xarray DataArray\nfig, axs = plt.subplots(1, 1, figsize=(6, 2), dpi=75)\nplt.plot(model.xr().time, model.xr().loc['rates_exc'].T);\n
scores = [func.matrix_correlation(func.fc(model.BOLD.BOLD[:, 5:]), fcemp) for fcemp in ds.FCs]\n\nprint(\"Correlation per subject:\", [f\"{s:.2}\" for s in scores])\nprint(f\"Mean FC/FC correlation: {np.mean(scores):.2}\")\n
"},{"location":"examples/example-0-aln-minimal/#the-neural-mass-model","title":"The neural mass model","text":"
In this example, we will learn about the basics of neurolib. We will create a two-population mean-field model of exponential integrate-and-fire neurons called the aln model. We will learn how to create a Model, set some parameters and run a simulation. We will also see how we can easily access the output of each simulation.
"},{"location":"examples/example-0-aln-minimal/#aln-the-adaptive-linear-nonlinear-cascade-model","title":"aln - the adaptive linear-nonlinear cascade model","text":"
The adaptive linear-nonlinear (aln) cascade model is a low-dimensional population model of spiking neural networks. Mathematically, it is a dynamical system of non-linear ODEs. The dynamical variables of the system simulated in the aln model describe the average firing rate and other macroscopic variables of a randomly connected, delay-coupled network of excitatory and inhibitory adaptive exponential integrate-and-fire neurons (AdEx) with non-linear synaptic currents.
Ultimately, the model is the result of various steps of model reduction, starting from the Fokker-Planck equation of the AdEx neuron subject to white noise input for many values of the input mean \(\mu\) and variance \(\sigma\). The resulting mean firing rates and mean membrane potentials are then stored in a lookup table and serve as the nonlinear firing rate transfer function, \(r = \Phi(\mu, \sigma)\).
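The lookup-table mechanism can be sketched with bilinear interpolation over a precomputed grid. The table values below are a made-up stand-in for the real quantities obtained from the Fokker-Planck reduction:

```python
import numpy as np

# Sketch of a tabulated transfer function r = Phi(mu, sigma): precompute r on
# a (mu, sigma) grid, then interpolate at runtime. The tanh table below is a
# made-up stand-in for the real Fokker-Planck-derived rates.
mus = np.linspace(0.0, 4.0, 41)
sigmas = np.linspace(0.5, 3.0, 26)
MU, SIG = np.meshgrid(mus, sigmas, indexing="ij")
table = np.tanh(MU - 1.0) * SIG          # stand-in for the precomputed rates

def phi(mu, sigma):
    # bilinear interpolation in the lookup table
    i = np.clip(np.searchsorted(mus, mu) - 1, 0, len(mus) - 2)
    j = np.clip(np.searchsorted(sigmas, sigma) - 1, 0, len(sigmas) - 2)
    tx = (mu - mus[i]) / (mus[i + 1] - mus[i])
    ty = (sigma - sigmas[j]) / (sigmas[j + 1] - sigmas[j])
    return ((1 - tx) * (1 - ty) * table[i, j] + tx * (1 - ty) * table[i + 1, j]
            + (1 - tx) * ty * table[i, j + 1] + tx * ty * table[i + 1, j + 1])
```

The interpolation makes the transfer function cheap to evaluate inside the integration loop, which is the point of precomputing the table.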
"},{"location":"examples/example-0-aln-minimal/#basic-use","title":"Basic use","text":""},{"location":"examples/example-0-aln-minimal/#simulating-a-single-aln-node","title":"Simulating a single aln node","text":"
To create a single node, we simply instantiate the model without any arguments.
"},{"location":"examples/example-0-aln-minimal/#accessing-the-outputs","title":"Accessing the outputs","text":""},{"location":"examples/example-0-aln-minimal/#bifurcation-diagram","title":"Bifurcation diagram","text":""},{"location":"examples/example-0-aln-minimal/#whole-brain-model","title":"Whole-brain model","text":""},{"location":"examples/example-0-aln-minimal/#run-model","title":"Run model","text":""},{"location":"examples/example-0-aln-minimal/#results","title":"Results","text":"
The outputs of the model can be accessed using the attribute model.outputs
"},{"location":"examples/example-0-aln-minimal/#plot-simulated-activity","title":"Plot simulated activity","text":""},{"location":"examples/example-0-aln-minimal/#correlation-of-simulated-bold-to-empirical-data","title":"Correlation of simulated BOLD to empirical data","text":"
We can compute the element-wise Pearson correlation of the functional connectivity matrices of the simulated data to the empirical data to estimate how well the model captures the inter-areal BOLD correlations found in empirical resting-state recordings.
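A plain-NumPy sketch of this comparison is shown below, assuming that the functional connectivity is the Pearson correlation matrix of the time series and that the matrix comparison correlates the upper triangles (diagonal excluded); see neurolib.utils.functions for the authoritative func.fc and func.matrix_correlation implementations:

```python
import numpy as np

# Plain-NumPy sketch of the FC comparison; the authoritative implementations
# live in neurolib.utils.functions (func.fc, func.matrix_correlation).
def fc(ts):
    # functional connectivity = Pearson correlation between all pairs of time series
    return np.corrcoef(ts)

def matrix_correlation(M1, M2):
    # correlate the upper triangles (diagonal excluded) of two FC matrices
    iu = np.triu_indices(M1.shape[0], k=1)
    return np.corrcoef(M1[iu], M2[iu])[0, 1]

rng = np.random.default_rng(0)
ts = rng.standard_normal((4, 1000))        # toy data: 4 regions, 1000 samples
print(matrix_correlation(fc(ts), fc(ts)))  # close to 1.0 for identical matrices
```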
# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-1] == \"examples\":\n os.chdir('..')\n\n%load_ext autoreload\n%autoreload 2\n
try:\n import matplotlib.pyplot as plt\nexcept ImportError:\n import sys\n !{sys.executable} -m pip install matplotlib\n import matplotlib.pyplot as plt\n\nimport numpy as np\n\n# Let's import the Hopf model\nfrom neurolib.models.hopf import HopfModel\n\n# Some useful functions are provided here\nimport neurolib.utils.functions as func\n\n# a nice color map\nplt.rcParams['image.cmap'] = 'plasma'\n
model = HopfModel()\nmodel.params['duration'] = 1.0*1000\nmodel.params['sigma_ou'] = 0.03\n\nmodel.run()\n
plt.plot(model.t, model.x.T, c='k', lw = 2)\n# alternatively plot the results in the xarray:\n# plt.plot(hopfModel.xr[0, 0].time, hopfModel.xr[0, 0].values)\nplt.xlabel(\"t [ms]\")\nplt.ylabel(\"Activity\")\n
\nText(0, 0.5, 'Activity')\n
model = HopfModel()\nmodel.params['duration'] = 2.0*1000\n
max_x = []\nmin_x = []\n# these are the different input values that we want to scan\na_s = np.linspace(-2, 2, 50)\nfor a in a_s:\n model.params['a'] = a\n model.run()\n # we add the maximum and the minimum of the last second of the \n # simulation to a list\n max_x.append(np.max(model.x[0, -int(1000/model.params['dt']):]))\n min_x.append(np.min(model.x[0, -int(1000/model.params['dt']):]))\n
plt.plot(a_s, max_x, c='k', lw = 2)\nplt.plot(a_s, min_x, c='k', lw = 2)\nplt.title(\"Bifurcation diagram of the Hopf oscillator\")\nplt.xlabel(\"a\")\nplt.ylabel(\"Min / max x\")\n
\nText(0, 0.5, 'Min / max x')\n
from neurolib.utils.loadData import Dataset\n\nds = Dataset(\"hcp\")\n
model = HopfModel(Cmat = ds.Cmat, Dmat = ds.Dmat)\n
scores = [func.matrix_correlation(func.fc(model.x[:, -int(5000/model.params['dt']):]), fcemp) for fcemp in ds.FCs]\nprint(\"Correlation per subject:\", [f\"{s:.2}\" for s in scores])\nprint(\"Mean FC/FC correlation: {:.2f}\".format(np.mean(scores)))\n
# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-1] == \"examples\":\n os.chdir('..')\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2 \n
ds = Dataset(\"gw\")\n# simulates the whole-brain model\naln = ALNModel(Cmat = ds.Cmat, Dmat = ds.Dmat)\n# Resting state fits\naln.params['mue_ext_mean'] = 1.57\naln.params['mui_ext_mean'] = 1.6\naln.params['sigma_ou'] = 0.09\naln.params['b'] = 5.0\naln.params['duration'] = 0.2*60*1000 \n# info: value 0.2*60*1000 is low for testing\n# use 5*60*1000 for real simulation\naln.run(chunkwise=True, bold = True)\n
\nWARNING:root:aln: BOLD simulation is supported only with chunkwise integration. Enabling chunkwise integration.\n\n
Now we can cast the modelling result into our Signal class. Signal is a parent base class for any neuro signal. We also provide three child classes for particular signals: RatesSignal (for firing rates of the populations), VoltageSignal (for average membrane potentials of the populations), and BOLDSignal (for simulated BOLD). They only differ in name, labels and units. Nothing fancy. Of course, you can implement your own class for your particular results very easily as:
from neurolib.utils.signal import Signal\n\n\nclass PostSynapticCurrentSignal(Signal):\n name = \"Population post-synaptic current signal\"\n label = \"I_syn\"\n signal_type = \"post_current\"\n unit = \"mA\"\n
and that's it. All useful methods and attributes are directly inherited from the Signal parent.
# Create Signal out of firing rates\nfr = RatesSignal.from_model_output(aln, group=\"\", time_in_ms=True)\n# optional description\nfr.description = \"Output of the ALN model with default SC and fiber lengths\"\n\n# Create Signal out of BOLD simulated timeseries\nbold = BOLDSignal.from_model_output(aln, group=\"BOLD\", time_in_ms=True)\nbold.description = \"Simulated BOLD of the ALN model with default SC and fiber lengths\"\n
\nPopulation firing rate representing rate signal with unit of Hz with user-provided description: `Output of the ALN model with default SC and fiber lengths`. Shape of the signal is (2, 80, 8831) with dimensions ('output', 'space', 'time').\nPopulation blood oxygen level-dependent signal representing bold signal with unit of % with user-provided description: `Simulated BOLD of the ALN model with default SC and fiber lengths`. Shape of the signal is (1, 80, 7) with dimensions ('output', 'space', 'time').\n\n
Signal automatically computes useful attributes like dt, sampling rate, starting and ending times.
\nInherent attributes:\nPopulation firing rate\nq\nHz\nrate\nOutput of the ALN model with default SC and fiber lengths\n\nComputed attributes:\n0.0001\n10000.0\n0.0\n0.883\n(2, 80, 8831)\n\n
# internal representation of the signal is just xarray's DataArray\nprint(fr.data)\n# xarray is just pandas on steroids, i.e. it supports multi-dimensional arrays, not only 2D\n\n# if you'd need simple numpy array just call .values on signal's data\nprint(type(fr.data.values))\nprint(fr.data.values.shape)\n
Now let's see what Signal can do... Just a side note: all operations can be done in place (everything happens inside the Signal class), or an altered signal is returned with the same attributes as the original one.
# basic operations\nnorm = fr.normalize(std=True, inplace=False)\n# so, are all temporal means close to zero?\nprint(np.allclose(norm.data.mean(dim=\"time\"), 0.))\n# and, are all temporal stds close to 1?\nprint(np.allclose(norm.data.std(dim=\"time\"), 1.0))\nplt.plot(fr[\"rates_exc\"].data.sel({\"space\": 0}), label=\"original\")\nplt.plot(norm[\"rates_exc\"].data.sel({\"space\": 0}), label=\"normalised\")\n\n# you can detrend the signal, all of it, or by segments (as indices within the signal)\n# let's first normalise (so inplace=False), then detrend (we can inplace=True)\ndetrended = fr.normalize(std=True, inplace=False)\ndetrended.detrend(inplace=True)\nplt.plot(detrended[\"rates_exc\"].data.sel({\"space\": 0}), label=\"normalised & detrended\")\ndetrended_segments = fr.detrend(segments=np.arange(20000, 1000), inplace=False)\nplt.legend()\n
\nTrue\nTrue\n\n
\n<matplotlib.legend.Legend at 0x1301a4320>\n
The sampling frequency is too high, so let's resample.
# init again to start fresh\nfr = RatesSignal.from_model_output(aln, group=\"\", time_in_ms=True)\nplt.plot(fr.data.time, fr[\"rates_exc\"].data.sel({\"space\": 0}), label=\"original\")\n\n# first resample\nfr.resample(to_frequency=1000., inplace=True)\n\n# next detrend\nfr.detrend(inplace=True)\nprint(fr.start_time, fr.end_time)\n\n# next pad with 0s for 0.5 seconds in order to suppress edge effect when filtering\npadded = fr.pad(how_much=0.5, in_seconds=True, padding_type=\"constant\", side=\"both\",\n constant_values=0., inplace=False)\nprint(padded.start_time, padded.end_time)\n\n# now filter - by default uses mne, if not installed, falls back to scipy basic IIR filter\npadded.filter(low_freq=8., high_freq=12., inplace=True)\n\n# now cut back the original length\nfiltered = padded.sel([fr.start_time, fr.end_time], inplace=False)\nprint(filtered.start_time, filtered.end_time)\n\nplt.plot(filtered.data.time, filtered[\"rates_exc\"].data.sel({\"space\": 0}), label=r\"filtered $\\alpha$\")\n\n# finally, get phase and amplitude via Hilbert transform\nphase = filtered.hilbert_transform(return_as=\"phase_wrapped\", inplace=False)\nplt.plot(phase.data.time, phase[\"rates_exc\"].data.sel({\"space\": 0}), label=r\"phase $\\alpha$\")\namplitude = filtered.hilbert_transform(return_as=\"amplitude\", inplace=False)\nplt.plot(amplitude.data.time, amplitude[\"rates_exc\"].data.sel({\"space\": 0}), label=r\"amplitude $\\alpha$\")\nplt.legend()\n
\n0.0 0.882\n-0.5 1.382\nSetting up band-pass filter from 8 - 12 Hz\n\nFIR filter parameters\n---------------------\nDesigning a one-pass, zero-phase, non-causal bandpass filter:\n- Windowed time-domain design (firwin) method\n- Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation\n- Lower passband edge: 8.00\n- Lower transition bandwidth: 2.00 Hz (-6 dB cutoff frequency: 7.00 Hz)\n- Upper passband edge: 12.00 Hz\n- Upper transition bandwidth: 3.00 Hz (-6 dB cutoff frequency: 13.50 Hz)\n- Filter length: 1651 samples (1.651 sec)\n\n0.0 0.882\n\n
\n<matplotlib.legend.Legend at 0x1322e6e80>\n
# in case you forget what happened during processing, you can easily check all steps:\nprint(phase.preprocessing_steps)\nprint(amplitude.preprocessing_steps)\n
\nresample to 1000.0Hz -> detrend -> 0.5s constant both sides padding -> filter: low 8.0Hz - high 12.0Hz -> select x:0.882s -> Hilbert - wrapped phase\nresample to 1000.0Hz -> detrend -> 0.5s constant both sides padding -> filter: low 8.0Hz - high 12.0Hz -> select x:0.882s -> Hilbert - amplitude\n\n
# and you can save your signal for future generations! (saved as netCDF file)\nphase.save(\"phase_from_some_experiment\")\n
# and then load it\nphase_loaded = RatesSignal.from_file(\"phase_from_some_experiment\")\n# compare whether it is the same\nprint(phase == phase_loaded)\n# the attributes are saved/loaded as well\nprint(phase_loaded.name)\nprint(phase_loaded.unit)\nprint(phase_loaded.preprocessing_steps)\n# delete file\nos.remove(\"phase_from_some_experiment.nc\")\n
\nTrue\nPopulation firing rate\nHz\nresample to 1000.0Hz -> detrend -> 0.5s constant both sides padding -> filter: low 8.0Hz - high 12.0Hz -> select x:0.882s -> Hilbert - wrapped phase\n\n
# this will iterate over the whole data and return one 1D temporal slice at a time, each slice is a Signal class\nfor name, ts in fr.iterate(return_as=\"signal\"):\n print(name, type(ts), ts.start_time, ts.end_time)\n\n# this will iterate over the whole data and return one 1D temporal slice at a time, each slice is a DataArray\nfor name, ts in fr.iterate(return_as=\"xr\"):\n print(name, type(ts), ts.shape)\n
# sliding window - let's iterate over temporal windows of 0.5 seconds, with 0.1s translation and a boxcar window function\nfor window in fr.sliding_window(length=0.5, step=0.1, window_function=\"boxcar\", lengths_in_seconds=True):\n print(type(window), window.shape, window.start_time, window.end_time)\n
# apply 1D function - Signal supports applying a 1D function per temporal slice\n# both kinds are supported: functions that reduce the temporal dimension (e.g. mean, which reduces a timeseries of length N to one number),\n# and functions that preserve the shape\nfrom functools import partial\n\n# reduce\nmean = fr.apply(partial(np.mean, axis=-1), inplace=False)\n# mean is now an xr.DataArray, not a Signal; but the coordinates except time are preserved\nprint(type(mean), mean.shape, mean.coords)\n\n# preserve shape\nabsolute_value = fr.apply(np.abs, inplace=False)\n# still a Signal\nprint(absolute_value.shape)\n
\nWARNING:root:Shape changed after operation! Old shape: (2, 80, 883), new shape: (2, 80); Cannot cast to Signal class, returing as `xr.DataArray`\n\n
# basic FC from excitatory rates - using correlation\nfc_exc = fr[\"rates_exc\"].functional_connectivity(fc_function=np.corrcoef)\n# the result is a DataArray with space coordinates\nprint(type(fc_exc), fc_exc.shape, fc_exc.coords)\nplt.subplot(1,2,1)\nplt.title(\"Correlation FC\")\nplt.imshow(fc_exc.values)\n\n# FC from covariance\nfc_cov_exc = fr[\"rates_exc\"].functional_connectivity(fc_function=np.cov)\nplt.subplot(1,2,2)\nplt.title(\"Covariance FC\")\nplt.imshow(fc_cov_exc.values)\n\n# so fc_function can be any function that takes a (nodes x time) array and transforms it into a (nodes x nodes) connectivity matrix\n
\nProcessing delta...\nSetting up band-pass filter from 2 - 4 Hz\n\nFIR filter parameters\n---------------------\nDesigning a one-pass, zero-phase, non-causal bandpass filter:\n- Windowed time-domain design (firwin) method\n- Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation\n- Lower passband edge: 2.00\n- Lower transition bandwidth: 2.00 Hz (-6 dB cutoff frequency: 1.00 Hz)\n- Upper passband edge: 4.00 Hz\n- Upper transition bandwidth: 2.00 Hz (-6 dB cutoff frequency: 5.00 Hz)\n- Filter length: 1651 samples (1.651 sec)\n\nresample to 1000.0Hz -> detrend -> 0.5s constant both sides padding -> filter: low 2Hz - high 4Hz -> select x:0.882s\nProcessing theta...\nSetting up band-pass filter from 4 - 8 Hz\n\nFIR filter parameters\n---------------------\nDesigning a one-pass, zero-phase, non-causal bandpass filter:\n- Windowed time-domain design (firwin) method\n- Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation\n- Lower passband edge: 4.00\n- Lower transition bandwidth: 2.00 Hz (-6 dB cutoff frequency: 3.00 Hz)\n- Upper passband edge: 8.00 Hz\n- Upper transition bandwidth: 2.00 Hz (-6 dB cutoff frequency: 9.00 Hz)\n- Filter length: 1651 samples (1.651 sec)\n\nresample to 1000.0Hz -> detrend -> 0.5s constant both sides padding -> filter: low 4Hz - high 8Hz -> select x:0.882s\nProcessing alpha...\nSetting up band-pass filter from 8 - 12 Hz\n\nFIR filter parameters\n---------------------\nDesigning a one-pass, zero-phase, non-causal bandpass filter:\n- Windowed time-domain design (firwin) method\n- Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation\n- Lower passband edge: 8.00\n- Lower transition bandwidth: 2.00 Hz (-6 dB cutoff frequency: 7.00 Hz)\n- Upper passband edge: 12.00 Hz\n- Upper transition bandwidth: 3.00 Hz (-6 dB cutoff frequency: 13.50 Hz)\n- Filter length: 1651 samples (1.651 sec)\n\nresample to 1000.0Hz -> detrend -> 0.5s constant both sides padding -> filter: low 8Hz - high 12Hz -> 
select x:0.882s\nProcessing beta...\nSetting up band-pass filter from 12 - 30 Hz\n\nFIR filter parameters\n---------------------\nDesigning a one-pass, zero-phase, non-causal bandpass filter:\n- Windowed time-domain design (firwin) method\n- Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation\n- Lower passband edge: 12.00\n- Lower transition bandwidth: 3.00 Hz (-6 dB cutoff frequency: 10.50 Hz)\n- Upper passband edge: 30.00 Hz\n- Upper transition bandwidth: 7.50 Hz (-6 dB cutoff frequency: 33.75 Hz)\n- Filter length: 1101 samples (1.101 sec)\n\nresample to 1000.0Hz -> detrend -> 0.5s constant both sides padding -> filter: low 12Hz - high 30Hz -> select x:0.882s\nProcessing low_gamma...\nSetting up band-pass filter from 30 - 60 Hz\n\nFIR filter parameters\n---------------------\nDesigning a one-pass, zero-phase, non-causal bandpass filter:\n- Windowed time-domain design (firwin) method\n- Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation\n- Lower passband edge: 30.00\n- Lower transition bandwidth: 7.50 Hz (-6 dB cutoff frequency: 26.25 Hz)\n- Upper passband edge: 60.00 Hz\n- Upper transition bandwidth: 15.00 Hz (-6 dB cutoff frequency: 67.50 Hz)\n- Filter length: 441 samples (0.441 sec)\n\nresample to 1000.0Hz -> detrend -> 0.5s constant both sides padding -> filter: low 30Hz - high 60Hz -> select x:0.882s\nProcessing high_gamma...\nSetting up band-pass filter from 60 - 1.2e+02 Hz\n\nFIR filter parameters\n---------------------\nDesigning a one-pass, zero-phase, non-causal bandpass filter:\n- Windowed time-domain design (firwin) method\n- Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation\n- Lower passband edge: 60.00\n- Lower transition bandwidth: 15.00 Hz (-6 dB cutoff frequency: 52.50 Hz)\n- Upper passband edge: 120.00 Hz\n- Upper transition bandwidth: 30.00 Hz (-6 dB cutoff frequency: 135.00 Hz)\n- Filter length: 221 samples (0.221 sec)\n\nresample to 1000.0Hz -> detrend -> 0.5s constant both 
sides padding -> filter: low 60Hz - high 120Hz -> select x:0.882s\n\n
# time-varying FC\nfor window in fr.sliding_window(length=0.5, step=0.2, window_function=\"boxcar\", lengths_in_seconds=True):\n fc = window[\"rates_exc\"].functional_connectivity(fc_function=np.corrcoef)\n plt.imshow(fc)\n plt.title(f\"FC: {window.start_time}-{window.end_time}s\")\n plt.show()\n
"},{"location":"examples/example-0.2-basic_analysis/#introduction","title":"Introduction","text":""},{"location":"examples/example-0.2-basic_analysis/#run-the-aln-model","title":"Run the ALN model","text":"
Firstly, let us run a network model given the structural connectivity and fiber lengths.
Let's do a more complete example. Say you run the model and want to extract the phase and amplitude of the \(\alpha\) band (i.e. 8-12Hz) for some phase-amplitude coupling analyses.
Sometimes it is useful to apply or inspect something in a loop. That's why Signal supports both: iterating over space / output variables, and applying a 1D function over the temporal dimension.
A lot of modelling effort goes into fitting the experimental functional connectivity (FC) with the modelled one. That's why the Signal class supports FC computation, and combined with the other methods (like filtering and iterating over temporal windows) we can even compute timeseries of FC or band-specific FC within a couple of lines.
# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-1] == \"examples\":\n os.chdir('..')\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2 \n
try:\n import matplotlib.pyplot as plt\nexcept ImportError:\n import sys\n !{sys.executable} -m pip install matplotlib\n import matplotlib.pyplot as plt\n\nimport numpy as np\n\n# Let's import the fhn model\nfrom neurolib.models.fhn import FHNModel\n\n# Some useful functions are provided here\nimport neurolib.utils.functions as func\n\n# a nice color map\nplt.rcParams['image.cmap'] = 'plasma'\n
model = FHNModel()\nmodel.params['duration'] = 2.0*1000\n
Let's draw a simple one-dimensional bifurcation diagram of this model to orient ourselves in the parameter space
max_x = []\nmin_x = []\n# these are the different input values that we want to scan\nx_inputs = np.linspace(0, 2, 50)\nfor x_ext in x_inputs:\n # Note: this has to be a vector since it is input for all nodes\n # (but we have only one node in this example)\n model.params['x_ext'] = [x_ext]\n model.run()\n # we add the maximum and the minimum of the last second of the \n # simulation to a list\n max_x.append(np.max(model.x[0, -int(1000/model.params['dt']):]))\n min_x.append(np.min(model.x[0, -int(1000/model.params['dt']):]))\n
plt.plot(x_inputs, max_x, c='k', lw = 2)\nplt.plot(x_inputs, min_x, c='k', lw = 2)\nplt.title(\"Bifurcation diagram of the FHN oscillator\")\nplt.xlabel(\"Input to x\")\nplt.ylabel(\"Min / max x\")\n
\nText(0, 0.5, 'Min / max x')\n
In this model, there is a Hopf bifurcation happening at two input values. We can see the oscillatory region at input values from roughly 0.75 to 1.3.
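The min/max curves from the scan above can be turned into an explicit estimate of the oscillatory region by thresholding the oscillation amplitude. This is a minimal sketch on synthetic data; the threshold value and the helper name `oscillatory_mask` are our own choices, not part of neurolib:

```python
import numpy as np

def oscillatory_mask(max_x, min_x, threshold=0.05):
    """Mark inputs as oscillatory where the amplitude (max - min) exceeds a threshold."""
    return (np.asarray(max_x) - np.asarray(min_x)) > threshold

# synthetic stand-in for the scan results: amplitude is nonzero only for intermediate inputs
x_inputs = np.linspace(0, 2, 50)
amplitude = np.where((x_inputs > 0.75) & (x_inputs < 1.3), 1.0, 0.0)
mask = oscillatory_mask(amplitude, np.zeros_like(amplitude))
# the masked inputs delimit the oscillatory region
print(x_inputs[mask].min(), x_inputs[mask].max())
```

In a real scan you would pass the `max_x` and `min_x` lists collected in the loop above instead of the synthetic amplitude.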
from neurolib.utils.loadData import Dataset\n\nds = Dataset(\"hcp\")\n
model = FHNModel(Cmat = ds.Cmat, Dmat = ds.Dmat)\n
model.params['duration'] = 10 * 1000 \n# add some noise\nmodel.params['sigma_ou'] = .01\n# set the global coupling strength of the brain network\nmodel.params['K_gl'] = 1.0\n# let's put all nodes close to the limit cycle such that\n# noise can kick them in and out of the oscillation\n# all nodes get the same constant input\nmodel.params['x_ext'] = [0.72] * model.params['N']\n\nmodel.run(chunkwise=True, append_outputs=True)\n
scores = [func.matrix_correlation(func.fc(model.x[:, -int(5000/model.params['dt']):]), fcemp) for fcemp in ds.FCs]\nprint(\"Correlation per subject:\", [f\"{s:.2}\" for s in scores])\nprint(\"Mean FC/FC correlation: {:.2f}\".format(np.mean(scores)))\n
In this notebook, the basic use of the implementation of the FitzHugh-Nagumo (fhn) model is presented. Usually, the fhn model is used to represent a single neuron (for example in Cakan et al. (2014), \"Heterogeneous delays in neural networks\"). This is due to the difference in timescales of the two equations that define the FHN model: The first equation is often referred to as the \"fast variable\" whereas the second one is the \"slow variable\". This makes it possible to create a model with a very fast spiking mechanism but with a slow refractory period.
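The fast-slow structure described above can be sketched with the classic textbook FitzHugh-Nagumo equations. Note that these are the standard textbook parameter values, not the Kostova-style parameterization that neurolib actually implements:

```python
import numpy as np

# classic FitzHugh-Nagumo: fast variable v (membrane potential), slow variable w (recovery)
def fhn_step(v, w, I, dt, a=0.7, b=0.8, tau=12.5):
    dv = v - v**3 / 3 - w + I          # fast cubic dynamics
    dw = (v + a - b * w) / tau         # slow linear recovery
    return v + dt * dv, w + dt * dw

# for this input the fixed point is unstable and the system settles on a limit cycle
v, w = -1.0, -0.5
vs = []
for _ in range(20000):                 # 200 time units at dt=0.01
    v, w = fhn_step(v, w, I=0.5, dt=0.01)
    vs.append(v)
print(min(vs), max(vs))
```

The relaxation oscillation visits both branches of the cubic nullcline, so `v` swings between roughly -2 and 2 with a fast upstroke and slow recovery.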
In our case, we use a somewhat unusual parameterization of the fhn model. Inspired by the paper by Kostova et al. (2004) \"FitzHugh\u2013Nagumo revisited: Types of bifurcations, periodical forcing and stability regions by a Lyapunov functional.\", the implementation in neurolib produces slowly oscillating dynamics and has the advantage of incorporating an external input term that causes a Hopf bifurcation. This means that the model roughly approximates the behaviour of the aln model: for low input values, there is a low-activity fixed point; for intermediate inputs, there is an oscillatory region; and for high input values, the system is in a high-activity fixed point. Thus, it offers a simple way of exploring the dynamics of a neural mass model with these properties, such as the aln model.
We want to start by producing a bifurcation diagram of a single node. With neurolib, this can be done with a couple of lines of code, as seen further below.
# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-1] == \"examples\":\n os.chdir('..')\n
%load_ext autoreload\n%autoreload 2\n
import matplotlib.pyplot as plt\nimport numpy as np\nimport glob\n\nfrom neurolib.models.wc import WCModel\n\nimport neurolib.utils.loadData as ld\nimport neurolib.utils.functions as func\n\n# a nice color map\nplt.rcParams['image.cmap'] = 'plasma'\n
model = WCModel()\nmodel.params['duration'] = 2.0*1000\n
Let's draw a simple one-dimensional bifurcation diagram of this model to orient ourselves in the parameter space
max_exc = []\nmin_exc = []\n# these are the different input values that we want to scan\nexc_inputs = np.linspace(0, 3.5, 50)\nfor exc_ext in exc_inputs:\n # Note: this has to be a vector since it is input for all nodes\n # (but we have only one node in this example)\n model.params['exc_ext'] = exc_ext\n model.run()\n # we add the maximum and the minimum of the last second of the \n # simulation to a list\n max_exc.append(np.max(model.exc[0, -int(1000/model.params['dt']):]))\n min_exc.append(np.min(model.exc[0, -int(1000/model.params['dt']):]))\n
plt.plot(exc_inputs, max_exc, c='k', lw = 2)\nplt.plot(exc_inputs, min_exc, c='k', lw = 2)\nplt.title(\"Bifurcation diagram of the Wilson-Cowan model\")\nplt.xlabel(\"Input to exc\")\nplt.ylabel(\"Min / max exc\")\n
\nText(0,0.5,'Min / max exc')\n
model = WCModel()\nmodel.params['duration'] = 1.0*1000\nmodel.params['sigma_ou'] = 0.01\n\nmodel.run()\n
scores = [func.matrix_correlation(func.fc(model.exc[:, -int(5000/model.params['dt']):]), fcemp) for fcemp in ds.FCs]\nprint(\"Correlation per subject:\", [f\"{s:.2}\" for s in scores])\nprint(\"Mean FC/FC correlation: {:.2f}\".format(np.mean(scores)))\n
In this notebook, the basic use of the implementation of the Wilson-Cowan (wc) model is presented.
In the wc model, the activity of a particular brain region is defined by a coupled system of excitatory (E) and inhibitory (I) neuronal populations with the mean firing rates of the E and I pools being the dynamic variables, as first described by Wilson and Cowan in 1972 ( H.R. Wilson and J.D. Cowan. Excitatory and inhibitory interactions in localized populations of model neurons. Biophys. J., 12:1\u201324 (1972))
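The coupled E-I system can be sketched directly from the 1972 equations. The sigmoid and parameter values below are illustrative choices in the spirit of the original paper, not neurolib's defaults:

```python
import numpy as np

def sigmoid(x, a=1.5, theta=3.0):
    """Sigmoidal response function, bounded in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

# simplified Wilson-Cowan rate equations; the (1 - E) factor models the refractory fraction
def wc_step(E, I, exc_ext, dt, tau_e=2.5, tau_i=3.75,
            c_ee=16.0, c_ei=12.0, c_ie=15.0, c_ii=3.0):
    dE = (-E + (1.0 - E) * sigmoid(c_ee * E - c_ei * I + exc_ext)) / tau_e
    dI = (-I + (1.0 - I) * sigmoid(c_ie * E - c_ii * I)) / tau_i
    return E + dt * dE, I + dt * dI

E, I = 0.05, 0.05
trace = []
for _ in range(5000):
    E, I = wc_step(E, I, exc_ext=1.0, dt=0.05)
    trace.append(E)
# the rates are fractions of active cells, so they stay within [0, 1]
print(min(trace), max(trace))
```

Because the dynamic variables are population fractions, the bounded sigmoid and refractory term keep them in [0, 1] regardless of the input.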
# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-1] == \"examples\":\n os.chdir('..')\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2 \n
try:\n import matplotlib.pyplot as plt\nexcept ImportError:\n import sys\n !{sys.executable} -m pip install matplotlib\n import matplotlib.pyplot as plt\n\nimport numpy as np\n\n# Let's import the Kuramoto model\nfrom neurolib.models.kuramoto import KuramotoModel\n\n# Some useful functions are provided here\nimport neurolib.utils.functions as func\n
model = KuramotoModel()\nmodel.params['duration'] = 10\nmodel.run()\n
theta = model['theta'].T\ntheta_capped = np.mod(theta, 2*np.pi) # cap theta to [0, 2*pi]\n\nplt.plot(model.t, theta_capped)\nplt.xlabel(\"Time\")\nplt.ylabel(\"Theta\")\nplt.yticks(np.arange(0, 2*np.pi+0.1, np.pi/2), [r\"$0$\", r\"$\\pi/2$\", r\"$\\pi$\", r\"$3\\pi/2$\", r\"$2\\pi$\"]) # y-axis ticks in multiples of pi\nplt.show()\n
Here we simulate networks of oscillators. We will simulate a network of 8 oscillators with a global coupling strength of 0.3. We initialize a connectivity matrix with all-to-all connectivity and then simulate the network for 30 milliseconds (assuming dt is in ms). We will also plot the phase values over time.
theta = network_model['theta'].T\n# cap the phase to be between 0 and 2pi\ntheta_capped = np.mod(theta, 2*np.pi)\n\n# set up the figure\nfig, ax = plt.subplots(1, 1, figsize=(16, 8))\n\nplt.plot(network_model.t, theta_capped)\nplt.xlabel(\"Time [ms]\")\nplt.ylabel(\"Theta\")\nplt.yticks(np.arange(0, 2*np.pi+0.1, np.pi/2), [r\"$0$\", r\"$\\pi/2$\", r\"$\\pi$\", r\"$3\\pi/2$\", r\"$2\\pi$\"]) # y-axis ticks in multiples of pi\nplt.show()\n
We can see that the nodes synchronize only after around 25 ms. This is because the connections between the nodes are relatively weak. Now we will increase the global coupling to 1 to see if synchronization happens faster.
theta = network_model['theta'].T\n# cap the phase to be between 0 and 2pi\ntheta_capped = np.mod(theta, 2*np.pi)\n\n# set up the figure\nfig, ax = plt.subplots(1, 1, figsize=(16, 8))\n\nplt.plot(network_model.t, theta_capped)\nplt.xlabel(\"Time [ms]\")\nplt.ylabel(\"Theta\")\nplt.yticks(np.arange(0, 2*np.pi+0.1, np.pi/2), [r\"$0$\", r\"$\\pi/2$\", r\"$\\pi$\", r\"$3\\pi/2$\", r\"$2\\pi$\"]) # y-axis ticks in multiples of pi\nplt.show()\n
Now synchronization happens after about 7 ms, which is faster than in the previous simulation.
In this notebook, we will simulate the Kuramoto model. The Kuramoto model is defined by the following differential equation: $$ \\frac{d \\theta_i}{dt} = \\omega_i + \\zeta_i + \\frac{K}{N} \\sum_{j=1}^N A_{ij} \\sin(\\theta_j(t - \\tau_{ij}) - \\theta_i(t)) + h_i(t)$$ Here, \(\theta_i\) is the phase of oscillator \(i\), \(\omega_i\) is the natural frequency of oscillator \(i\), \(\zeta_i\) is the noise term, \(K\) is the global coupling strength, \(A\) is the coupling matrix, \(\tau_{ij}\) is the phase lag between oscillators \(i\) and \(j\), and \(h_i(t)\) is the external input to oscillator \(i\).
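The equation above (without delays and noise) can be integrated with a simple Euler scheme, and the synchronization it describes is usually quantified by the order parameter r = |mean(exp(i*theta))|. A minimal sketch with assumed frequency and coupling values, independent of neurolib:

```python
import numpy as np

def simulate_kuramoto(K, N=8, steps=3000, dt=0.01, seed=0):
    """Euler-integrate the Kuramoto model without delays or noise and
    return the final order parameter r = |mean(exp(i*theta))|."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 0.1, N)        # natural frequencies
    theta = rng.uniform(0, 2 * np.pi, N)   # random initial phases
    A = np.ones((N, N))                    # all-to-all coupling
    for _ in range(steps):
        # phase_diff[i, j] = theta_j - theta_i
        phase_diff = theta[None, :] - theta[:, None]
        theta = theta + dt * (omega + (K / N) * np.sum(A * np.sin(phase_diff), axis=1))
    return np.abs(np.mean(np.exp(1j * theta)))

r_uncoupled = simulate_kuramoto(K=0.0)
r_coupled = simulate_kuramoto(K=5.0)
print(r_uncoupled, r_coupled)
```

With strong coupling the phases lock and r approaches 1; without coupling each oscillator drifts at its own frequency.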
The Kuramoto model describes synchronization between oscillators. Nodes in the network are influenced not only by their own natural frequency but also by the other nodes in the network. The strength of this influence is determined by the global coupling and the connectivity matrix. The degree of synchronization depends on the strength of the coupling. The Kuramoto model is relatively simple, mathematically tractable, and easy to understand. It was first described in 1975 by Yoshiki Kuramoto (Y. Kuramoto. Self-entrainment of a population of coupled non-linear oscillators. In International Symposium on Mathematical Problems in Theoretical Physics, H. Araki, Ed. Berlin, Heidelberg: Springer, 1975, pp. 420\u2013422).
Here we will simulate a single node with no noise. We then cap the phase values to be between 0 and 2*pi. We will also plot the phase values over time.
# change to the root directory of the project\nimport os\n\nif os.getcwd().split(\"/\")[-1] in [\"examples\", \"dev\"]:\n os.chdir(\"..\")\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2\n
try:\n import matplotlib.pyplot as plt\nexcept ImportError:\n import sys\n !{sys.executable} -m pip install matplotlib\n import matplotlib.pyplot as plt\n\n# Some useful functions are provided here\nimport neurolib.utils.functions as func\nimport neurolib.utils.stimulus as stim\nimport numpy as np\nimport scipy\n# Let's import the aln model\nfrom neurolib.models.aln import ALNModel\n
# you can also set the start and end times - in ms\ninp = stim.StepInput(\n step_size=1.43, start=1200, end=2400, n=2\n).as_array(duration, dt)\nplt.plot(inp.T);\n
# frequency in Hz; dc_bias=True shifts input by its amplitude\ninp = stim.SinusoidalInput(\n amplitude=2.5, frequency=2.0, start=1200, dc_bias=True\n).as_array(duration, dt)\ninp2 = stim.SinusoidalInput(amplitude=2.5, frequency=2.0).as_array(\n duration, dt\n)\nplt.plot(inp.T)\nplt.plot(inp2.T);\n
# frequency in Hz; dc_bias=True shifts input by its amplitude\ninp = stim.SquareInput(\n amplitude=2.5, frequency=2.0, start=1200, dc_bias=True\n).as_array(duration, dt)\ninp2 = stim.SquareInput(amplitude=2.5, frequency=2.0).as_array(\n duration, dt\n)\nplt.plot(inp.T)\nplt.plot(inp2.T);\n
summed = ou + sq + sin\nplt.plot(summed.as_array(duration, dt).T);\n
# same lengths - use &\nconc = ou & sq & sin\nplt.plot(conc.as_array(duration, dt).T);\n
# can also do different length ratios, but for this you need to call ConcatenatedStimulus directly\nconc = stim.ConcatenatedStimulus([ou, sq, sin], length_ratios=[0.5, 2, 5])\nplt.plot(conc.as_array(duration, dt).T);\n
class PoissonNoiseWithExpKernel(stim.Stimulus):\n\"\"\"\n Poisson noise with exponential kernel.\n By subclassing `stim.Stimulus` we have the option to select `start` and `end`.\n \"\"\"\n\n def __init__(\n self, amp, freq, tau_syn, start=None, end=None, n=1, seed=None\n ):\n # save parameters as attributes\n self.freq = freq\n self.amp = amp\n self.tau_syn = tau_syn\n # pass other params to parent class\n super().__init__(\n start=start, end=end, n=n, seed=seed\n )\n\n def generate_input(self, duration, dt):\n # this is a helper function that creates self.times vector\n self._get_times(duration=duration, dt=dt)\n # do the magic here: prepare output vector\n x = np.zeros((self.n, self.times.shape[0]))\n # compute total number of spikes\n total_spikes = int(self.freq * (self.times[-1] - self.times[0]) / 1000.0)\n # randomly put spikes into the output vector\n spike_indices = np.random.choice(\n x.shape[1], (self.n, total_spikes), replace=True\n )\n x[np.arange(x.shape[0])[:, None], spike_indices] = 1.0\n # create exponential kernel\n time_spike_end = -self.tau_syn * np.log(0.001)\n arg_spike_end = np.argmin(np.abs(self.times - time_spike_end))\n spike_kernel = np.exp(-self.times[:arg_spike_end] / self.tau_syn)\n # convolve over dimensions\n x = np.apply_along_axis(np.convolve, axis=1, arr=x, v=spike_kernel, mode=\"same\")\n # self._trim_stim takes care of trimming the stimulus based on `start` and `end`\n return self._trim_stim(x * self.amp)\n
# sum and concat test\npois = PoissonNoiseWithExpKernel(freq=20.0, amp=1.2, tau_syn=50.0, n=2)\n\nsummed = pois + sin\nplt.plot(summed.as_array(duration, dt).T);\n
concat = pois & sin\nplt.plot(concat.as_array(duration, dt).T);\n
model = ALNModel()\nmodel.params[\"duration\"] = 5 * 1000\nmodel.params[\"sigma_ou\"] = 0.2 # we add some noise\n
After creating a base for the stimulus, we can simply call the to_model(model) function and our stimulus is generated.
The stimulus is then set as an input current parameter to the model. The parameter that models a current that goes to the excitatory population is called ext_exc_current. For the inhibitory population, we can use ext_inh_current. We can also set a firing rate input, that will then be integrated over the synapses using the parameter model.params['ext_exc_rate'].
from neurolib.utils.loadData import Dataset\n\nds = Dataset(\"hcp\")\n
model = ALNModel(Cmat=ds.Cmat, Dmat=ds.Dmat)\n\n# we chose a parameterization in which the brain network oscillates slowly\n# between up- and down-states\n\nmodel.params[\"mue_ext_mean\"] = 2.56\nmodel.params[\"mui_ext_mean\"] = 3.52\nmodel.params[\"b\"] = 4.67\nmodel.params[\"tauA\"] = 1522.68\nmodel.params[\"sigma_ou\"] = 0.40\n\nmodel.params[\"duration\"] = 0.2 * 60 * 1000\n
def plot_output_and_spectrum(model, individual=False, vertical_mark=None):\n\"\"\"A simple plotting function for the timeseries\n and the power spectrum of the activity.\n \"\"\"\n fig, axs = plt.subplots(\n 1, 2, figsize=(8, 2), dpi=150, gridspec_kw={\"width_ratios\": [2, 1]}\n )\n axs[0].plot(model.t, model.output.T, lw=1)\n axs[0].set_xlabel(\"Time [ms]\")\n axs[0].set_ylabel(\"Activity [Hz]\")\n\n frs, powers = func.getMeanPowerSpectrum(model.output, dt=model.params.dt)\n axs[1].plot(frs, powers, c=\"k\")\n\n if individual:\n for o in model.output:\n frs, powers = func.getPowerSpectrum(o, dt=model.params.dt)\n axs[1].plot(frs, powers)\n\n axs[1].set_xlabel(\"Frequency [Hz]\")\n axs[1].set_ylabel(\"Power\")\n\n plt.show()\n
model.run(chunkwise=True)\n
plot_output_and_spectrum(model)\n
neurolib helps you to create a few basic stimuli out of the box using the stimulus classes in neurolib.utils.stimulus.
# construct a stimulus\n# we want a 25Hz input to all the nodes\nac_stimulus = stim.SinusoidalInput(amplitude=0.2, frequency=25.0).to_model(model)\nprint(ac_stimulus.shape)\n\n# the stimulus has the shape (nodes, time), so it is applied to *all nodes*\nmodel.params[\"ext_exc_current\"] = ac_stimulus * 5.0\n
\n(80, 120000)\n\n
model.run(chunkwise=True)\n
plot_output_and_spectrum(model)\n
# now we create a multi-dimensional input of 25Hz\nac_stimulus = stim.SinusoidalInput(amplitude=0.2, frequency=25.0).to_model(model)\nprint(ac_stimulus.shape)\n\n# We set the input to a bunch of nodes to zero.\n# This will have the effect that only nodes 0 to 4 will be stimulated!\nac_stimulus[5:, :] = 0\n\n# multiply the stimulus amplitude\nmodel.params[\"ext_exc_current\"] = ac_stimulus * 5.0\n
\n(80, 120000)\n\n
model.run(chunkwise=True)\n
We can see that the spectrum has a peak at the frequency we stimulated with, but only in a subset of nodes (where we stimulated).
This notebook will demonstrate how to construct stimuli using a variety of different predefined classes in neurolib.
You can then apply them as an input to a whole-brain model. As an example, we will see how to add an external current to the excitatory population of the ALNModel.
neurolib offers a range of external stimuli you can apply to your models. These range from basic noise processes, like a Wiener process or an Ornstein-Uhlenbeck process, to simpler forms of input such as sinusoids, rectified inputs, etc. All stimuli are based on the ModelInput class and are available in the neurolib.utils.stimulus subpackage. In the following, we detail the implemented inputs and also show how to easily implement your own custom stimulus further below.
All inputs are initialized as classes. Three different functions are provided for generating the actual stimulus as a usable input: - as_array(duration, dt) - returns a numpy array. - as_cubic_splines(duration, dt) - returns a CubicHermiteSpline object, a spline representation of the given input - useful for the jitcdde backend in MultiModel. - to_model(model) - the easiest one - infers the duration, dt and number of nodes from the simulated model itself and returns a numpy array of the appropriate shape.
Each stimulus type has its own init function with attributes specific to that kind of stimulus. However, all of them include the attributes n and seed. n controls how many spatial dimensions the stimulus should have; in the case of stochastic inputs, such as a noisy Ornstein-Uhlenbeck process, this controls the number of independent realizations that are returned. For a deterministic stimulus, such as the sinusoidal input, the same signal is simply copied n times.
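The distinction can be illustrated with plain NumPy (a generic sketch of the `n` semantics, not the neurolib classes themselves): a stochastic input yields n independent realizations, while a deterministic input is copied n times:

```python
import numpy as np

# a stochastic input: each of the n rows is an independent realization
def noisy_input(duration_steps, n=1, seed=None):
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, 1.0, (n, duration_steps))

# a deterministic input: the same signal is simply tiled n times
def sine_input(duration_steps, dt, freq, n=1):
    t = np.arange(duration_steps) * dt
    return np.tile(np.sin(2 * np.pi * freq * t), (n, 1))

noise = noisy_input(1000, n=2, seed=42)
sine = sine_input(1000, dt=0.1, freq=1.0, n=2)
# the noise rows differ; the sine rows are identical copies
print(np.allclose(noise[0], noise[1]), np.allclose(sine[0], sine[1]))
```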
"},{"location":"examples/example-0.6-external-stimulus/#zero-input-for-convenience","title":"Zero input - for convenience","text":"
You'll probably never use it, but you know, it's there... Maybe you can use it as a \"pause\" when concatenating two different stimuli.
A mix of inputs that starts with a negative step, followed by an exponential rise and a subsequent decay to zero. Useful for detecting bistability.
"},{"location":"examples/example-0.6-external-stimulus/#operations-on-stimuli","title":"Operations on stimuli","text":"
Sometimes you need to concatenate inputs in the temporal dimension to create a mix of different stimuli. This is easy with neurolib's stimuli. All of them allow two operations: + for a sum of different stimuli and & to concatenate them (one after another). Below, we will show some of the weird combinations you can make.
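On the level of plain arrays, the two operations amount to a pointwise sum and a temporal concatenation. This is a sketch of the semantics only, not neurolib's implementation:

```python
import numpy as np

a = np.sin(np.linspace(0, 2 * np.pi, 100))[None, :]  # one "stimulus", shape (space, time)
b = np.ones((1, 100)) * 0.5                          # another one, same shape

summed = a + b                                  # `+`: pointwise sum, same total length
concatenated = np.concatenate([a, b], axis=1)   # `&`: one stimulus after another in time

print(summed.shape, concatenated.shape)
```

The neurolib stimulus classes wrap this logic and additionally check that the spatial dimensions of the operands match.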
"},{"location":"examples/example-0.6-external-stimulus/#sum","title":"Sum","text":""},{"location":"examples/example-0.6-external-stimulus/#concatenation","title":"Concatenation","text":""},{"location":"examples/example-0.6-external-stimulus/#mixing-the-operations","title":"Mixing the operations","text":"
You should be able to use as many + and & as you want. Go crazy.
"},{"location":"examples/example-0.6-external-stimulus/#creating-a-custom-stimulus","title":"Creating a custom stimulus","text":"
Creating a custom stimulus is very easy, and you can build your own library of stimuli as inputs for your model. There are three necessary steps: 1. Subclass stim.Input for a basic input, or stim.Stimulus to have the option to set start and end times. 2. Define an __init__() method with the necessary parameters of your stimulus and set the appropriate attributes. 3. Define a generate_input(duration, dt) method, which returns a numpy array with shape (space, time), and that's it. Everything else described above is taken care of. Your new input class will also support operations like + and &.
Below we implement a new stimulus class that represents currents caused by a Poisson spike train convolved with an exponential kernel.
"},{"location":"examples/example-0.6-external-stimulus/#using-stimuli-in-neurolib","title":"Using stimuli in neurolib","text":"
First, we initialize a single node.
"},{"location":"examples/example-0.6-external-stimulus/#brain-network-stimulation","title":"Brain network stimulation","text":""},{"location":"examples/example-0.6-external-stimulus/#without-stimulation","title":"Without stimulation","text":""},{"location":"examples/example-0.6-external-stimulus/#constructing-a-stimulus","title":"Constructing a stimulus","text":""},{"location":"examples/example-0.6-external-stimulus/#focal-stimulation","title":"Focal stimulation","text":"
In the previous example, the stimulus was applied to all nodes simultaneously. We can also apply stimulation to a specific set of nodes.
This notebook demonstrates how to implement your own model in neurolib. There are two main parts of each model: its class that inherits from the Model base class and its timeIntegration() function.
# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-2] == \"neurolib\":\n os.chdir('..')\n\n%load_ext autoreload\n%autoreload 2\n
try:\n import matplotlib.pyplot as plt\nexcept ImportError:\n import sys\n !{sys.executable} -m pip install matplotlib\n import matplotlib.pyplot as plt\n
In this example we will implement a linear model with the following equation:
\\(\\frac{d}{dt} x_i(t) = - \\frac{x_i(t)}{\\tau} + \\sum_{j=0}^{N} K G_{ij} x_j(t)\\).
Here, we simulate \\(N\\) nodes that are coupled in a network. \\(x_i\\) are the elements of an \\(N\\)-dimensional state vector, \\(\\tau\\) is the decay time constant, \\(G\\) is the adjacency matrix and \\(K\\) is the global coupling strength.
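Before wrapping this in a neurolib model class, the dynamics themselves can be sketched in a few lines of NumPy. The parameter values here are hypothetical, chosen so that the leak term dominates the coupling:

```python
import numpy as np

def integrate_linear(x0, G, tau=10.0, K=0.02, dt=0.1, steps=1000):
    """Euler integration of dx/dt = -x/tau + K * G @ x."""
    x = x0.copy()
    for _ in range(steps):
        x = x + dt * (-x / tau + K * G @ x)
    return x

N = 3
G = np.ones((N, N)) - np.eye(N)  # simple all-to-all adjacency without self-loops
x_final = integrate_linear(np.ones(N), G)
# with weak coupling (K * lambda_max < 1/tau) the leak dominates and activity decays to zero
print(x_final)
```

If K were large enough that the coupling eigenvalues outweigh the leak 1/tau, the activity would grow instead of decaying; this is the usual stability condition for a linear network model.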
We first create a class for the model called LinearModel which inherits lots of functionality from the Model base class. We define state_vars and default_output so that neurolib knows how to handle the variables of the system. Next, we define init_vars in order to use the autochunk integration scheme, so we can save a lot of RAM when we run very long simulations.
Next we define a simple parameter dictionary called params. In here, we can define all the necessary parameters of the model and change their values later. In this example, we set the timescale \\(\\tau\\), the coupling strength \\(K\\), the integration time step dt (in ms) and the duration to 100 ms.
We are now ready to set up the constructor of our model! This method is supposed to set up the model and prepare it for integration. All the magic happens in the background! We pass the self.timeIntegration function and the parameter dictionary self.params to the base class using super().__init__().
That wasn't too bad, was it? We are finally ready to define the time integration method that prepares all variables and passes them to the last function that will crunch the numbers. Here we prepare the numpy arrays that will hold the simulation results. We have to prepare them before we can execute the numba code.
def timeIntegration(self, p):\n N = p['Cmat'].shape[0]\n t = np.arange(1, p['duration']/p['dt'] + 1) # holds time steps\n x = np.ndarray((N, len(t)+1)) # holds variable x\n
Next, we make use of a neurolib convention to prepare the initial conditions of our model. If you remember, we defined init_vars above in order to use the autochunk feature. The autochunk feature will automatically fill this parameter with the last state of the last simulated chunk so the model integration can be continued without having to remember the entire output and state variables of the model indefinitely. In this line, we check whether x_init is set or not (which it will be, when we use chunkwise integration). If it is not set, we simply use random initial conditions using rand((N, 1)). Remember that the convention for array dimensions is array[space, time], meaning that we only fill in the first time step with the initial condition.
# either use predefined initial conditions or random ones\nx[:, :1] = p.get('x_init') if p.get('x_init') is not None else rand((N, 1))\n
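The array[space, time] convention is easy to see in a standalone snippet (plain numpy, independent of neurolib):

```python
import numpy as np

N = 3
x = np.zeros((N, 11))                 # convention: array[space, time]
x_init = np.random.random((N, 1))     # one initial value per node

# only the first time step receives the initial condition
x[:, :1] = x_init
```
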
We're ready to call our accelerated integration part and return the results \ud83d\ude80!
return njit_integrate(x, t, p['tau'], p['K'], N, p['Cmat'], p['dt'])\n
Remember to put this function outside of the class definition, so we can use numba acceleration to greatly increase the performance of our code. We first have to let numba know which part of the code to precompile. We do this by simply placing the decorator @numba.njit in the line above the integration function. An easy way of getting 100x faster code! \u2764\ufe0f numba!
@numba.njit\ndef njit_integrate(x, t, tau, K, N, Cmat, dt):\n
Next, we do some simple math. We first loop over all time steps. If you have prepared the array t as described above, you can simply loop over its length. In the next line, we calculate the coupling term from the model equation above. However, instead of looping over the sum, we use a little trick and simply compute the dot product between the coupling matrix G and the state vector x. This results in an N-dimensional vector that carries the amount of input each node receives at each time step. Finally, we loop over all nodes to add everything up.
for i in range(1, 1 + len(t)): # loop over time\n inp = Cmat.dot(x[:, i-1]) # input vector\n for n in range(N): # loop over nodes\n
In the next line, we integrate the equation shown above. This integration scheme is called Euler integration and is the simplest way of solving an ODE. The idea is best expressed as x_next = x_before + f(x) * dt, where f(x) is simply the time derivative \\(\\frac{d}{dt} x_i(t)\\) shown above.
x[n, i] = x[n, i-1] + (- x[n, i-1] / tau + K * inp[n]) * dt # model equations\n
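As a quick sanity check, the same Euler step can be run for a single uncoupled node (K = 0) and compared with the exact solution \(x(t) = x_0 e^{-t/\tau}\):

```python
import numpy as np

tau, dt, duration = 10.0, 0.1, 100.0
steps = int(duration / dt)
x = np.zeros(steps + 1)
x[0] = 1.0  # initial condition x_0

for i in range(1, steps + 1):
    x[i] = x[i-1] + (-x[i-1] / tau) * dt  # Euler step without coupling

exact = np.exp(-duration / tau)  # analytical solution at t = duration
print(abs(x[-1] - exact))        # small discretization error
```

The error shrinks proportionally to dt, which is the expected first-order convergence of the Euler scheme.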
We're done! The only thing left to do is to return the data so that neurolib can take over from here on. The outputs of this simulation will be available in the model.outputs attribute. You can see an example time series below.
return t, x\n
import numba\nimport numpy as np\nfrom numpy.random import random as rand\nfrom neurolib.models.model import Model\n\nclass LinearModel(Model):\n state_vars = [\"x\"]\n default_output = \"x\"\n init_vars = [\"x_init\"]\n params = dict(tau=10, K=1e-2, dt=1e-1, duration=100)\n def __init__(self, Cmat=np.zeros((1,1))):\n self.params['Cmat'] = Cmat\n super().__init__(self.timeIntegration, self.params)\n\n def timeIntegration(self, p):\n p['N'] = p['Cmat'].shape[0] # number of nodes\n t = np.arange(1, p['duration']/p['dt'] + 1) # holds time steps\n x = np.ndarray((p['N'], len(t)+1)) # holds variable x\n # either use predefined initial conditions or random ones\n x[:, :1] = p['x_init'] if 'x_init' in p else rand((p['N'], 1))\n return njit_integrate(x, t, p['tau'], p['K'], p['N'], p['Cmat'], p['dt'])\n\n@numba.njit\ndef njit_integrate(x, t, tau, K, N, Cmat, dt):\n for i in range(1, 1 + len(t)): # loop over time\n inp = Cmat.dot(x[:, i-1]) # input vector\n for n in range(N): # loop over nodes\n x[n, i] = x[n, i-1] +\\\n (- x[n, i-1] / tau + K * inp[n]) * dt # model equations\n return t, x\n
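The dot-product coupling trick used in njit_integrate can be verified in isolation (a standalone numpy check, not part of the model):

```python
import numpy as np

np.random.seed(0)
N = 4
Cmat = np.random.random((N, N))  # random coupling matrix
x = np.random.random(N)          # state vector at one time step

# dot-product form used in the integration loop
inp_dot = Cmat.dot(x)

# explicit sum over j, as written in the model equation
inp_sum = np.array([sum(Cmat[i, j] * x[j] for j in range(N)) for i in range(N)])

assert np.allclose(inp_dot, inp_sum)
```
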
We prepare a \"mock\" connectivity matrix, simply consisting of 12x12 random numbers, meaning that we will simulate 12 LinearModel's in a network.
Cmat = rand((12, 12)) # use a random connectivity matrix\nmodel = LinearModel(Cmat) # initialize the model\n
Since we've followed the model implementation guidelines, the model is also compatible with chunkwise integration and can produce a BOLD signal. Let's try it out!
"},{"location":"examples/example-0.7-custom-model/#minimal-model-implementation","title":"Minimal model implementation","text":""},{"location":"examples/example-0.7-custom-model/#model-equations","title":"Model equations","text":""},{"location":"examples/example-0.7-custom-model/#implementation","title":"Implementation","text":""},{"location":"examples/example-0.7-custom-model/#numba-time-integration","title":"Numba time integration","text":""},{"location":"examples/example-0.7-custom-model/#code","title":"Code","text":""},{"location":"examples/example-0.7-custom-model/#running-the-model","title":"Running the model","text":""},{"location":"examples/example-0.7-custom-model/#plot-outputs","title":"Plot outputs","text":""},{"location":"examples/example-0.7-custom-model/#bold-and-autochunk","title":"BOLD and autochunk","text":""},{"location":"examples/example-1-aln-parameter-exploration/","title":"Example 1 aln parameter exploration","text":"
# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-1] == \"examples\":\n os.chdir('..')\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2 \n
try:\n import matplotlib.pyplot as plt\nexcept ImportError:\n import sys\n !{sys.executable} -m pip install matplotlib\n import matplotlib.pyplot as plt\n\nimport numpy as np\n\nfrom neurolib.models.aln import ALNModel\nfrom neurolib.utils.parameterSpace import ParameterSpace\nfrom neurolib.optimize.exploration import BoxSearch\n\n# a nice color map\nplt.rcParams['image.cmap'] = 'plasma'\n
aln = ALNModel()\n
parameters = ParameterSpace({\"mue_ext_mean\": np.linspace(0, 3, 2), \"mui_ext_mean\": np.linspace(0, 3, 2)})\n# info: chose np.linspace(0, 3, 21) or more, values here are low for testing\nsearch = BoxSearch(aln, parameters, filename=\"example-1.hdf\")\n
search.run()\n
search.loadResults()\n
print(\"Number of results: {}\".format(len(search.results)))\n
# Example analysis of the results\n# The .results attribute is a list and can be indexed by the run \n# number (which is also the index of the pandas dataframe .dfResults).\n# Here we compute the maximum firing rate of the node in the last second\n# and add the result (a float) to the pandas dataframe.\nfor i in search.dfResults.index:\n search.dfResults.loc[i, 'max_r'] = np.max(search.results[i]['rates_exc'][:, -int(1000/aln.params['dt']):])\n
plt.imshow(search.dfResults.pivot_table(values='max_r', index = 'mui_ext_mean', columns='mue_ext_mean'), \\\n extent = [min(search.dfResults.mue_ext_mean), max(search.dfResults.mue_ext_mean),\n min(search.dfResults.mui_ext_mean), max(search.dfResults.mui_ext_mean)], origin='lower')\nplt.colorbar(label='Maximum rate [Hz]')\nplt.xlabel(\"Input to E\")\nplt.ylabel(\"Input to I\")\n
# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-1] == \"examples\":\n os.chdir('..')\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2 \n
try:\n import matplotlib.pyplot as plt\nexcept ImportError:\n import sys\n !{sys.executable} -m pip install matplotlib\n import matplotlib.pyplot as plt\n\nimport numpy as np\n\nfrom neurolib.utils.parameterSpace import ParameterSpace\nfrom neurolib.optimize.exploration import BoxSearch\n
def explore_me(traj):\n pars = search.getParametersFromTraj(traj)\n # let's calculate the distance to a circle\n computation_result = abs((pars['x']**2 + pars['y']**2) - 1)\n result_dict = {\"distance\" : computation_result}\n search.saveToPypet(result_dict, traj)\n
parameters = ParameterSpace({\"x\": np.linspace(-2, 2, 2), \"y\": np.linspace(-2, 2, 2)})\n# info: chose np.linspace(-2, 2, 40) or more, values here are low for testing\nsearch = BoxSearch(evalFunction = explore_me, parameterSpace = parameters, filename=\"example-1.1.hdf\")\n
search.run()\n
search.loadResults()\nprint(\"Number of results: {}\".format(len(search.results)))\n
The runs are also ordered in a simple pandas dataframe called search.dfResults. We cycle through all results by calling search.results[i] and load the desired result (here, the distance to the circle) into the dataframe.
for i in search.dfResults.index:\n search.dfResults.loc[i, 'distance'] = search.results[i]['distance']\n\nsearch.dfResults\n
And of course a plot can visualize the results very easily.
plt.imshow(search.dfResults.pivot_table(values='distance', index = 'x', columns='y'), \\\n extent = [min(search.dfResults.x), max(search.dfResults.x),\n min(search.dfResults.y), max(search.dfResults.y)], origin='lower')\nplt.colorbar(label='Distance to the unit circle')\n
This notebook demonstrates a very simple parameter exploration of a custom function that we have defined. It is a simple function that returns the distance to a unit circle, so we expect our parameter exploration to resemble a circle.
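As a minimal illustration of the evaluation logic (plain Python, independent of the exploration machinery):

```python
def distance_to_unit_circle(x, y):
    # same quantity as computed inside the evaluation function
    return abs((x**2 + y**2) - 1)

print(distance_to_unit_circle(1.0, 0.0))  # on the circle -> 0.0
print(distance_to_unit_circle(0.0, 0.0))  # at the origin -> 1.0
print(distance_to_unit_circle(2.0, 2.0))  # far outside -> 7.0
```

Points on the unit circle yield a distance of zero, which is why the low-distance region of the exploration traces out a circle.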
"},{"location":"examples/example-1.1-custom-parameter-exploration/#define-the-evaluation-function","title":"Define the evaluation function","text":"
Here we define a very simple evaluation function. The function needs to take in traj as an argument, which is the pypet trajectory. This is how the function knows what parameters were assigned to it. Using the builtin function search.getParametersFromTraj(traj) we can then retrieve the parameters for this run. They are returned as a dictionary and can be accessed in the function.
In the last step, we use search.saveToPypet(result_dict, traj) to save the results to the pypet trajectory and to an HDF. In between, the computational magic happens!
"},{"location":"examples/example-1.1-custom-parameter-exploration/#define-the-parameter-space-and-exploration","title":"Define the parameter space and exploration","text":"
Here we define which space we want to cover. For this, we use the builtin class ParameterSpace which provides a very easy interface to the exploration. To initialize the exploration, we simply pass the evaluation function and the parameter space to the BoxSearch class.
We can easily obtain the results from pypet. First we call search.loadResults() to make sure that the results are loaded from the hdf file to our instance.
#hide\n# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-1] == \"examples\":\n os.chdir('..')\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2 \n
\nThe autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n\n
#hide\ntry:\n import matplotlib.pyplot as plt\nexcept ImportError:\n import sys\n !{sys.executable} -m pip install matplotlib\n import matplotlib.pyplot as plt\n\nimport numpy as np\n\n# Let's import all the necessary functions for the parameter exploration\nfrom neurolib.models.fhn import FHNModel\nfrom neurolib.utils.parameterSpace import ParameterSpace\nfrom neurolib.optimize.exploration import BoxSearch\n\n# load some utility functions for explorations\nimport neurolib.utils.pypetUtils as pu\nimport neurolib.utils.paths as paths\nimport neurolib.optimize.exploration.explorationUtils as eu\n\n# The brain network dataset\nfrom neurolib.utils.loadData import Dataset\n\n# Some useful functions are provided here\nimport neurolib.utils.functions as func\n\n# a nice color map\nplt.rcParams['image.cmap'] = 'plasma'\n
We load a dataset (in this case the hcp dataset from the Human Connectome Project) and initialize a model to run on each node of the brain network (here the FHNModel, i.e. the FitzHugh-Nagumo model).
Running the model is as simple as entering model.run(chunkwise=True).
We define a parameter range to explore. Our first parameter is x_ext, which is the input to each node of the FHNModel in a brain network. Therefore, this parameter is a list with N entries, one per node. Our next parameter is K_gl, the global coupling strength. Finally, we have the coupling parameter, which defines how each FHNModel is coupled to its adjacent nodes: either additive coupling (activity += input) or diffusive coupling (activity += (input - activity)).
parameters = ParameterSpace({\"x_ext\": [np.ones((model.params['N'],)) * a for a in np.linspace(0, 2, 2)] # testing: 2, original: 41\n ,\"K_gl\": np.linspace(0, 2, 2) # testing: 2, original: 41\n ,\"coupling\" : [\"additive\", \"diffusive\"]\n }, kind=\"grid\")\nsearch = BoxSearch(model=model, parameterSpace=parameters, filename=\"example-1.2.0.hdf\")\n
We run the exploration simply by calling the run() function of the BoxSearch class. We can pass parameters to this function that will be passed on directly to the FHNModel.run() function of the simulated model. This way, we can easily specify to run the simulation chunkwise, without storing all the activity in memory, and to simulate BOLD activity as well.
Note that the default behaviour of the BoxSearch class is to save the default_output of each model and if bold is simulated, then also the BOLD data. If the exploration is initialized with BoxSearch(saveAllModelOutputs=True), the exploration would save all outputs of the model. This can obviously create a lot of data to store, so please use this option at your own discretion.
search.run(chunkwise=True, bold=True)\n
A simple helper function for getting the trajectories of an hdf file created by pypet can be found in pypetUtils.py (aka pu). This way, you can explore which explorations are stored in the file and decide later which one you want to load for analysis.
The default behaviour is to load the latest exploration. Its name is also stored in search.trajectoryName:
search.trajectoryName\n
\n'results-2020-04-08-02H-50M-09S'\n
Now we load all results. As said above, the newest exploration will be loaded by default. You can load results from earlier explorations by adding the argument trajectoryName=results-from-earlier and also choose another hdf file by using the argument filename=/path/to/explorations.hdf.
Remember that using search.loadResults() will load all results into memory. This can consume a lot of RAM, depending on how large the exploration was.
search.loadResults()\n
print(\"Number of results: {}\".format(len(search.results)))\n
One way of loading a result without loading everything else into RAM is to use the builtin function search.getRun(). However, you need to know which runId you're looking for! For this, you can run search.loadDfResults() to create a pandas.DataFrame search.dfResults with all parameters (which also happens when you call search.loadResults()).
After loading the results with search.loadResults() they are now available as a simple list using search.results. Let's look at the time series of one result.
If you remember from before, the external input parameter x_ext is a list of length N (one per node). Since they're all the same in this example, we reduce the parameter to only the first entry of each list.
search.dfResults.x_ext = [a[0] for a in list(search.dfResults.x_ext)]\n
We can use eu.processExplorationResults() from explorationUtils.py (aka eu) to process the results from the simulation and store them in our pandas.DataFrame of all results called search.dfResults:
This finally gives us a dataframe with parameters and respective values from postprocessing the results, which we can access using search.dfResults.
We can use the utility function eu.findCloseResults() to navigate in this DataFrame and find for example the runId of a run for a specific parameter configuration.
To understand what is happening in eu.processExplorationResults(), it helps to see how we could do the postprocessing on the loaded data ourselves. Let's calculate the correlation to empirical functional connectivity using the builtin functions func.fc() and func.matrix_correlation().
mean_corr = np.mean([func.matrix_correlation(func.fc(search.results[rId]['BOLD']), fc) for fc in ds.FCs])\n\nprint(f\"Mean correlation of run {rId} with empirical FC matrices is {mean_corr:.02}\")\n
\nMean correlation of run 3324 with empirical FC matrices is 0.28\n\n
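Conceptually, func.fc() computes the Pearson correlation matrix of the region-wise time series, and func.matrix_correlation() correlates the off-diagonal entries of two such matrices. A rough numpy stand-in (not neurolib's actual code) could look like this:

```python
import numpy as np

def fc(ts):
    # functional connectivity: correlation matrix across regions (rows = regions)
    return np.corrcoef(ts)

def matrix_correlation(M1, M2):
    # correlate the lower-triangular (off-diagonal) entries of two FC matrices
    idx = np.tril_indices_from(M1, k=-1)
    return np.corrcoef(M1[idx], M2[idx])[0, 1]

np.random.seed(0)
ts = np.random.random((5, 200))            # 5 regions, 200 BOLD samples
print(matrix_correlation(fc(ts), fc(ts)))  # identical matrices -> 1.0
```
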
Another useful function is eu.plotExplorationResults(), which helps you visualize the results of the exploration. You can specify which parameters should form the x- and y-axis using the par1=[parameter_name, parameter_label] and par2 arguments, and you can define by which parameter plane the results should be \"sliced\".
We want to find parameters for which the brain network model produces realistic BOLD functional connectivity. For this, we calculated the entry fc in search.dfResults by taking func.fc() of the model.BOLD time series and comparing it to empirical data using func.matrix_correlation.
Below, the average of this value across all subjects of the dataset is plotted. A higher value (brighter color) means a better fit to the empirical data. Observe how the best solutions tend to cluster at the edges of bifurcations, indicating that correlations in the network are generated by multiple nodes undergoing bifurcation together, such as transitioning from the constant activity (fixed point) solution to an oscillation.
"},{"location":"examples/example-1.2-brain-network-exploration/#parameter-exploration-of-a-brain-network-model","title":"Parameter exploration of a brain network model","text":"
This notebook demonstrates how to scan the parameter space of a brain network model using neurolib. We will simulate BOLD activity and compare the results to empirical data to identify optimal parameters of the model.
The steps outlined in this notebook are the following:
We load a DTI and resting-state fMRI dataset (hcp) and set up a brain network using the FHNModel.
We simulate the system for a range of different parameter configurations.
We load the simulated data from disk.
We postprocess the results and obtain the model fit.
Finally, we plot the results in the parameter space of the exploration.
"},{"location":"examples/example-1.2-brain-network-exploration/#1-set-up-a-brain-network","title":"1. Set up a brain network","text":""},{"location":"examples/example-1.2-brain-network-exploration/#2-run-the-exploration","title":"2. Run the exploration","text":""},{"location":"examples/example-1.2-brain-network-exploration/#3-load-results","title":"3. Load results","text":""},{"location":"examples/example-1.2-brain-network-exploration/#4-postprocessing","title":"4. Postprocessing","text":""},{"location":"examples/example-1.2-brain-network-exploration/#5-plot","title":"5. Plot","text":""},{"location":"examples/example-1.2-brain-network-exploration/#bold-functional-connectivity","title":"BOLD functional connectivity","text":""},{"location":"examples/example-1.2.1-brain-exploration-postprocessing/","title":"Example 1.2.1 brain exploration postprocessing","text":"
#hide\n# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-1] == \"examples\":\n os.chdir('..')\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2 \n
#hide\ntry:\n import matplotlib.pyplot as plt\nexcept ImportError:\n import sys\n !{sys.executable} -m pip install matplotlib\n import matplotlib.pyplot as plt\n# a nice color map\nplt.rcParams['image.cmap'] = 'plasma'\n\nimport numpy as np\n\nfrom neurolib.models.aln import ALNModel\nfrom neurolib.utils.parameterSpace import ParameterSpace\nfrom neurolib.optimize.exploration import BoxSearch\nimport neurolib.utils.functions as func\n\nfrom neurolib.utils.loadData import Dataset\nds = Dataset(\"hcp\")\n
model = ALNModel(Cmat = ds.Cmat, Dmat = ds.Dmat) # simulates the whole-brain model in 10s chunks by default if bold == True\n# Resting state fits\nmodel.params['mue_ext_mean'] = 1.57\nmodel.params['mui_ext_mean'] = 1.6\n#model.params['sigma_ou'] = 0.09\nmodel.params['b'] = 5.0\nmodel.params['dt'] = 0.2\nmodel.params['duration'] = 0.2 * 1000 #ms\n# testing: model.params['duration'] = 0.2 * 60 * 1000 #ms\n# real: model.params['duration'] = 1.0 * 60 * 1000 #ms\n
\nMainProcess root INFO aln: Model initialized.\n\n
def evaluateSimulation(traj):\n # get the model from the trajectory using `search.getModelFromTraj(traj)`\n model = search.getModelFromTraj(traj)\n # initiate the model with random initial contitions\n model.randomICs()\n defaultDuration = model.params['duration']\n invalid_result = {\"fc\" : np.nan, \"fcd\" : np.nan}\n\n # -------- STAGEWISE EVALUATION --------\n stagewise = True\n if stagewise:\n # -------- stage wise simulation --------\n\n # Stage 1 : simulate for a few seconds to see if there is any activity\n # ---------------------------------------\n model.params['duration'] = 3*1000.\n model.run()\n\n # check if stage 1 was successful\n amplitude = np.max(model.output[:, model.t > 500]) - np.min(model.output[:, model.t > 500])\n if amplitude < 0.05:\n search.saveToPypet(invalid_result, traj)\n return invalid_result, {}\n\n # Stage 2: simulate BOLD for a few seconds to see if it moves\n # ---------------------------------------\n model.params['duration'] = 30*1000.\n model.run(chunkwise=True, bold = True)\n\n if np.max(np.std(model.outputs.BOLD.BOLD[:, 10:15], axis=1)) < 1e-5:\n search.saveToPypet(invalid_result, traj)\n return invalid_result, {}\n\n # Stage 3: full and final simulation\n # ---------------------------------------\n model.params['duration'] = defaultDuration\n model.run(chunkwise=True, bold = True)\n\n # -------- POSTPROCESSING --------\n # FC matrix correlation to all subject rs-fMRI\n BOLD_TRANSIENT = 10000\n fc_score = np.mean([func.matrix_correlation(func.fc(model.BOLD.BOLD[:, model.BOLD.t_BOLD > BOLD_TRANSIENT]), fc) for fc in ds.FCs])\n\n # FCD to all subject rs-fMRI\n try:\n fcd_score = np.mean([func.ts_kolmogorov(model.BOLD.BOLD[:, model.BOLD.t_BOLD > BOLD_TRANSIENT], ds.BOLDs[i]) for i in range(len(ds.BOLDs))])\n except:\n fcd_score = np.nan\n\n # let's build the results dictionary\n result_dict = {\"fc\" : fc_score, \"fcd\" : fcd_score}\n # we could also save the output of the model by adding to the results_dict like this:\n # 
result_dict = {\"fc\" : fc_score, \"fcd\" : fcd_score, \"outputs\" : model.outputs}\n\n # Save the results to pypet. \n # Remember: This has to be dictionary!\n search.saveToPypet(result_dict, traj)\n
parameters = ParameterSpace({\"mue_ext_mean\": np.linspace(0, 3.0, 2), \"mui_ext_mean\": np.linspace(0.2, 3.0, 2)})\n# info: chose np.linspace(0, 3, 21) or more, values here are low for testing\nsearch = BoxSearch(evalFunction = evaluateSimulation, model=model, parameterSpace=parameters, filename=\"example-1.2.1.hdf\")\n
\nMainProcess root INFO Number of processes: 80\nMainProcess pypet.storageservice.HDF5StorageService INFO I will use the hdf5 file `/mnt/raid/data/cakan/hdf/example-1.2.1.hdf`.\nMainProcess pypet.environment.Environment INFO Environment initialized.\n/home/cakan/anaconda/lib/python3.7/site-packages/pypet/parameter.py:884: FutureWarning: Conversion of the second argument of issubdtype from `str` to `str` is deprecated. In future, it will be treated as `np.str_ == np.dtype(str).type`.\n if np.issubdtype(dtype, np.str):\nMainProcess root INFO Number of parameter configurations: 4\nMainProcess root INFO BoxSearch: Environment initialized.\n\n
search.run()\n
\nMainProcess pypet.environment.Environment INFO I am preparing the Trajectory for the experiment and initialise the store.\nMainProcess pypet.environment.Environment INFO Initialising the storage for the trajectory.\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2020-04-08-01H-16M-48S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2020-04-08-01H-16M-48S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nSTARTING runs of trajectory\n`results-2020-04-08-01H-16M-48S`.\n************************************************************\n\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2020-04-08-01H-16M-48S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2020-04-08-01H-16M-48S`.\nMainProcess pypet.environment.Environment INFO Starting multiprocessing with at most 80 processes running at the same time.\nMainProcess pypet INFO PROGRESS: Finished 0/4 runs [ ] 0.0%\nMainProcess pypet INFO PROGRESS: Finished 1/4 runs [===== ] 25.0%, remaining: 0:00:02\nMainProcess pypet INFO PROGRESS: Finished 2/4 runs [========== ] 50.0%, remaining: 0:00:00\nMainProcess pypet INFO PROGRESS: Finished 3/4 runs [=============== ] 75.0%, remaining: 0:00:09\nMainProcess pypet INFO PROGRESS: Finished 4/4 runs [====================]100.0%\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2020-04-08-01H-16M-48S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2020-04-08-01H-16M-48S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nFINISHED all runs of 
trajectory\n`results-2020-04-08-01H-16M-48S`.\n************************************************************\n\nMainProcess pypet.environment.Environment INFO \n************************************************************\nSTARTING FINAL STORING of trajectory\n`results-2020-04-08-01H-16M-48S`\n************************************************************\n\nMainProcess pypet.storageservice.HDF5StorageService INFO Start storing Trajectory `results-2020-04-08-01H-16M-48S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `config`.\n/home/cakan/anaconda/lib/python3.7/site-packages/pypet/storageservice.py:4597: FutureWarning: Conversion of the second argument of issubdtype from `str` to `str` is deprecated. In future, it will be treated as `np.str_ == np.dtype(str).type`.\n if (np.issubdtype(val.dtype, str) or\n/home/cakan/anaconda/lib/python3.7/site-packages/pypet/storageservice.py:4598: FutureWarning: Conversion of the second argument of issubdtype from `bytes` to `bytes` is deprecated. In future, it will be treated as `np.bytes_ == np.dtype(bytes).type`.\n np.issubdtype(val.dtype, bytes)):\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `parameters`.\n/home/cakan/anaconda/lib/python3.7/site-packages/pypet/storageservice.py:3110: FutureWarning: Conversion of the second argument of issubdtype from `str` to `str` is deprecated. In future, it will be treated as `np.str_ == np.dtype(str).type`.\n np.issubdtype(data.dtype, str)):\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished storing Trajectory `results-2020-04-08-01H-16M-48S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nFINISHED FINAL STORING of trajectory\n`results-2020-04-08-01H-16M-48S`.\n************************************************************\n\nMainProcess pypet.environment.Environment INFO All runs of trajectory `results-2020-04-08-01H-16M-48S` were completed successfully.\n\n
search.loadResults()\nprint(\"Number of results: {}\".format(len(search.results)))\n
\nMainProcess root INFO Loading results from /mnt/raid/data/cakan/hdf/example-1.2.1.hdf\n/mnt/antares_raid/home/cakan/projects/neurolib/neurolib/utils/pypetUtils.py:21: H5pyDeprecationWarning: The default file mode will change to 'r' (read-only) in h5py 3.0. To suppress this warning, pass the mode you need to h5py.File(), or set the global default h5.get_config().default_file_mode, or set the environment variable H5PY_DEFAULT_READONLY=1. Available modes are: 'r', 'r+', 'w', 'w-'/'x', 'a'. See the docs for details.\n hdf = h5py.File(filename)\nMainProcess root INFO Analyzing trajectory results-2020-04-08-01H-16M-48S\nMainProcess pypet.storageservice.HDF5StorageService INFO I will use the hdf5 file `/mnt/raid/data/cakan/hdf/example-1.2.1.hdf`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Loading trajectory `results-2020-04-08-01H-16M-48S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Loading branch `config` in mode `2`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Loading branch `parameters` in mode `2`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Loading branch `results` in mode `1`.\nMainProcess root INFO Creating pandas dataframe ...\nMainProcess root INFO Creating results dictionary ...\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 4/4 [00:00<00:00, 219.06it/s]\nMainProcess root INFO All results loaded.\n\n
\nNumber of results: 4\n\n
for i in search.dfResults.index:\n search.dfResults.loc[i, 'bold_cc'] = np.mean(search.results[i]['fc'])\nsearch.dfResults\n
plt.figure(dpi=150)\nplt.imshow(search.dfResults.pivot_table(values='bold_cc', index = 'mui_ext_mean', columns='mue_ext_mean'), \\\n extent = [min(search.dfResults.mue_ext_mean), max(search.dfResults.mue_ext_mean),\n min(search.dfResults.mui_ext_mean), max(search.dfResults.mui_ext_mean)], origin='lower')\nplt.colorbar(label='Mean correlation to empirical rs-FC')\nplt.xlabel(\"Input to E\")\nplt.ylabel(\"Input to I\")\n
\nText(0, 0.5, 'Input to I')\n
"},{"location":"examples/example-1.2.1-brain-exploration-postprocessing/#parameter-exploration-with-custom-run-function-and-postprocessing","title":"Parameter exploration with custom run function and postprocessing","text":"
This notebook demonstrates how to scan the parameter space of a brain network model using neurolib with a custom evaluation function to quickly find regions of interest. The evaluation function is designed to speed up the exploration by focussing on regions where the simulated dynamics meet certain criteria. For this, the simulation is run in multiple successive stages of increasing duration.
In this scenario, we want to postprocess the simulated data as soon as the simulation is done and before writing the results to the hard disk. After the full simulation has run, the functional connectivity (FC) of the BOLD signal is computed and compared to the empirical FC dataset. The Pearson correlation of the FC matrices is computed and averaged. We then tell pypet to save these postprocessed results along with the model output.
"},{"location":"examples/example-1.2.1-brain-exploration-postprocessing/#set-up-model","title":"Set up model","text":""},{"location":"examples/example-1.2.1-brain-exploration-postprocessing/#define-evaluation-function","title":"Define evaluation function","text":""},{"location":"examples/example-1.2.1-brain-exploration-postprocessing/#set-up-parameter-exploration","title":"Set up parameter exploration","text":""},{"location":"examples/example-1.2.1-brain-exploration-postprocessing/#load-data","title":"Load data","text":""},{"location":"examples/example-1.2.1-brain-exploration-postprocessing/#plot","title":"Plot","text":""},{"location":"examples/example-1.3-aln-bifurcation-diagram/","title":"Example 1.3 aln bifurcation diagram","text":"
# change into the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-1] == \"examples\":\n os.chdir('..')\n
model = ALNModel()\nmodel.params['dt'] = 0.1 # Integration time step, ms\nmodel.params['duration'] = 20 * 1000 # Simulation time, ms\n\nmodel.params['save_dt'] = 10.0 # 10 ms sampling steps for saving data, should be multiple of dt\nmodel.params[\"tauA\"] = 600.0 # Adaptation timescale, ms\n
The aln model has a region of bistability, in which two states are stable at the same time: the low-activity down-state and the high-activity up-state. We can find these states by constructing a stimulus that uncovers the bistable nature of the system: initially, we apply a negative push to make sure the system is in the down-state. We then relax this stimulus slowly and wait for the system to settle. Next, we apply a sharp push to reach the up-state and slowly release the stimulus again. A remaining difference between the two states after the stimulus has relaxed back to zero is a sign of bistability.
# we place the system in the bistable region\nmodel.params['mue_ext_mean'] = 2.5\nmodel.params['mui_ext_mean'] = 2.5\n\n# construct a stimulus\nrect_stimulus = stim.RectifiedInput(amplitude=0.2).to_model(model)\nmodel.params['ext_exc_current'] = rect_stimulus * 5.0 \n\nmodel.run()\n
Let's construct a rather lengthy evaluation function which does exactly that, for every parameter configuration that we want to explore. We will also measure other things like the dominant frequency and amplitude of oscillations and the maximum rate of the excitatory population.
def evaluateSimulation(traj):\n # get the model from the trajectory using `search.getModelFromTraj(traj)`\n model = search.getModelFromTraj(traj)\n # initiate the model with random initial contitions\n model.randomICs()\n defaultDuration = model.params['duration']\n\n # -------- stage wise simulation --------\n\n # Stage 3: full and final simulation\n # --------------------------------------- \n model.params['duration'] = defaultDuration\n\n rect_stimulus = stim.RectifiedInput(amplitude=0.2).to_model(model)\n model.params['ext_exc_current'] = rect_stimulus * 5.0 \n\n model.run()\n\n # up down difference \n state_length = 2000\n last_state = (model.t > defaultDuration - state_length)\n down_window = (defaultDuration/2-state_length<model.t) & (model.t<defaultDuration/2) # time period in ms where we expect the down-state\n up_window = (defaultDuration-state_length<model.t) & (model.t<defaultDuration) # and up state\n up_state_rate = np.mean(model.output[:, up_window], axis=1)\n down_state_rate = np.mean(model.output[:, down_window], axis=1)\n up_down_difference = np.max(up_state_rate - down_state_rate)\n\n # check rates!\n max_amp_output = np.max(\n np.max(model.output[:, up_window], axis=1) \n - np.min(model.output[:, up_window], axis=1)\n )\n max_output = np.max(model.output[:, up_window])\n\n model_frs, model_pwrs = func.getMeanPowerSpectrum(model.output, \n dt=model.params.dt, \n maxfr=40, \n spectrum_windowsize=10)\n max_power = np.max(model_pwrs) \n\n model_frs, model_pwrs = func.getMeanPowerSpectrum(model.output[:, up_window], dt=model.params.dt, maxfr=40, spectrum_windowsize=5)\n domfr = model_frs[np.argmax(model_pwrs)] \n\n result = {\n \"end\" : 3,\n \"max_output\": max_output, \n \"max_amp_output\" : max_amp_output,\n \"max_power\" : max_power,\n #\"model_pwrs\" : model_pwrs,\n #\"output\": model.output[:, ::int(model.params['save_dt']/model.params['dt'])],\n \"domfr\" : domfr,\n \"up_down_difference\" : up_down_difference\n }\n\n search.saveToPypet(result, 
traj)\n return \n
Let's now define the parameter space over which we want to search. We apply a grid search over the mean external input parameters to the excitatory and the inhibitory population, mue_ext_mean and mui_ext_mean, and do this for two values of the spike-frequency adaptation strength \(b\): once without and once with adaptation.
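For illustration, such a grid can be laid out with plain Python. A 41×41 grid over both inputs, crossed with two adaptation values, reproduces the 3362 parameter configurations reported in the log below (the step size and the concrete b values are assumptions, not taken from the notebook):

```python
import itertools

# hypothetical grid: 41 values per input on [0.0, 4.0] in steps of 0.1,
# crossed with two spike-frequency adaptation strengths b
# (the step size and the concrete b values are assumptions)
mue_range = [round(0.1 * i, 1) for i in range(41)]  # mue_ext_mean
mui_range = [round(0.1 * i, 1) for i in range(41)]  # mui_ext_mean
b_range = [0.0, 20.0]                               # without / with adaptation

grid = list(itertools.product(mue_range, mui_range, b_range))
print(len(grid))  # 41 * 41 * 2 = 3362 parameter configurations
```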
\nMainProcess pypet.storageservice.HDF5StorageService INFO I will use the hdf5 file `./data/hdf/example-1.3-aln-bifurcation-diagram.hdf`.\nMainProcess pypet.environment.Environment INFO Environment initialized.\nMainProcess root INFO Number of parameter configurations: 3362\nMainProcess root INFO BoxSearch: Environment initialized.\n\n
search.run()\n
\nMainProcess pypet.environment.Environment INFO I am preparing the Trajectory for the experiment and initialise the store.\nMainProcess pypet.environment.Environment INFO Initialising the storage for the trajectory.\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-06-19-01H-23M-48S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-06-19-01H-23M-48S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nSTARTING runs of trajectory\n`results-2021-06-19-01H-23M-48S`.\n************************************************************\n\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-06-19-01H-23M-48S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-06-19-01H-23M-48S`.\nMainProcess pypet.environment.Environment INFO Starting multiprocessing with at most 8 processes running at the same time.\nMainProcess pypet INFO PROGRESS: Finished 0/3362 runs [ ] 0.0%\nMainProcess pypet INFO PROGRESS: Finished 169/3362 runs [= ] 5.0%, remaining: 0:01:27\nMainProcess pypet INFO PROGRESS: Finished 337/3362 runs [== ] 10.0%, remaining: 0:01:24\nMainProcess pypet INFO PROGRESS: Finished 505/3362 runs [=== ] 15.0%, remaining: 0:01:27\nMainProcess pypet INFO PROGRESS: Finished 673/3362 runs [==== ] 20.0%, remaining: 0:01:26\nMainProcess pypet INFO PROGRESS: Finished 841/3362 runs [===== ] 25.0%, remaining: 0:01:26\nMainProcess pypet INFO PROGRESS: Finished 1009/3362 runs [====== ] 30.0%, remaining: 0:01:24\nMainProcess pypet INFO PROGRESS: Finished 1177/3362 runs [======= ] 35.0%, remaining: 0:01:19\nMainProcess pypet INFO PROGRESS: Finished 1345/3362 runs [======== ] 40.0%, remaining: 0:01:15\nMainProcess pypet INFO PROGRESS: Finished 1513/3362 runs 
[========= ] 45.0%, remaining: 0:01:10\nMainProcess pypet INFO PROGRESS: Finished 1681/3362 runs [========== ] 50.0%, remaining: 0:01:05\nMainProcess pypet INFO PROGRESS: Finished 1850/3362 runs [=========== ] 55.0%, remaining: 0:00:59\nMainProcess pypet INFO PROGRESS: Finished 2018/3362 runs [============ ] 60.0%, remaining: 0:00:55\nMainProcess pypet INFO PROGRESS: Finished 2186/3362 runs [============= ] 65.0%, remaining: 0:00:49\nMainProcess pypet INFO PROGRESS: Finished 2354/3362 runs [============== ] 70.0%, remaining: 0:00:42\nMainProcess pypet INFO PROGRESS: Finished 2522/3362 runs [=============== ] 75.0%, remaining: 0:00:36\nMainProcess pypet INFO PROGRESS: Finished 2690/3362 runs [================ ] 80.0%, remaining: 0:00:29\nMainProcess pypet INFO PROGRESS: Finished 2858/3362 runs [================= ] 85.0%, remaining: 0:00:22\nMainProcess pypet INFO PROGRESS: Finished 3026/3362 runs [================== ] 90.0%, remaining: 0:00:15\nMainProcess pypet INFO PROGRESS: Finished 3194/3362 runs [=================== ] 95.0%, remaining: 0:00:07\nMainProcess pypet INFO PROGRESS: Finished 3362/3362 runs [====================]100.0%\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-06-19-01H-23M-48S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-06-19-01H-23M-48S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nFINISHED all runs of trajectory\n`results-2021-06-19-01H-23M-48S`.\n************************************************************\n\nMainProcess pypet.environment.Environment INFO \n************************************************************\nSTARTING FINAL STORING of trajectory\n`results-2021-06-19-01H-23M-48S`\n************************************************************\n\nMainProcess pypet.storageservice.HDF5StorageService INFO Start storing 
Trajectory `results-2021-06-19-01H-23M-48S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `config`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished storing Trajectory `results-2021-06-19-01H-23M-48S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nFINISHED FINAL STORING of trajectory\n`results-2021-06-19-01H-23M-48S`.\n************************************************************\n\nMainProcess pypet.environment.Environment INFO All runs of trajectory `results-2021-06-19-01H-23M-48S` were completed successfully.\n\n
search.loadResults(all=False)\n
\nMainProcess root INFO Loading results from ./data/hdf/example-1.3-aln-bifurcation-diagram.hdf\nMainProcess root INFO Analyzing trajectory results-2021-06-19-01H-23M-48S\nMainProcess pypet.storageservice.HDF5StorageService INFO I will use the hdf5 file `./data/hdf/example-1.3-aln-bifurcation-diagram.hdf`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Loading trajectory `results-2021-06-19-01H-23M-48S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Loading branch `config` in mode `2`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Loading branch `parameters` in mode `2`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Loading branch `results` in mode `1`.\nMainProcess root INFO Creating `dfResults` dataframe ...\nMainProcess root INFO Aggregating results to `dfResults` ...\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3362/3362 [00:22<00:00, 152.47it/s]\nMainProcess root INFO All results loaded.\n\n
Let's draw the bifurcation diagrams. We will use a white contour for oscillatory areas (measured by max_amp_output) and a green dashed lined for the bistable region (measured by up_down_difference). We can use the function explorationUtils.plotExplorationResults() for this.
"},{"location":"examples/example-1.3-aln-bifurcation-diagram/#bifurcation-diagram-of-the-aln-model","title":"Bifurcation diagram of the aln model","text":"
In this notebook, we will discover how easy it is to draw bifurcation diagrams in neurolib using its powerful BoxSearch class.
Bifurcation diagrams are an important tool for understanding a dynamical system, be it a single neuron model or a whole-brain network. They show how a system behaves when certain parameters of the model are changed: whether the system transitions into an oscillation, for example, or whether it remains in a fixed point (of sustained constant activity).
We will use this to draw a map of the aln model: since the aln model consists of two populations of AdEx neurons, we will change the inputs to the excitatory and to the inhibitory population independently, and do so for two different values of the spike-frequency adaptation strength \(b\). We will measure the activity of the system, identify regions of oscillatory activity, and discover bistable regions, in which the system can be in two different stable states for the same set of parameters.
# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-1] == \"examples\":\n os.chdir('..')\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2 \n
try:\n import matplotlib.pyplot as plt\nexcept ImportError:\n import sys\n !{sys.executable} -m pip install matplotlib seaborn\n import matplotlib.pyplot as plt\n\nimport numpy as np\nimport logging\n\nfrom neurolib.utils.parameterSpace import ParameterSpace\nfrom neurolib.optimize.evolution import Evolution\n\nimport neurolib.optimize.evolution.evolutionaryUtils as eu\nimport neurolib.utils.functions as func\n\ndef optimize_me(traj):\n ind = evolution.getIndividualFromTraj(traj)\n logging.info(\"Hello, I am {}\".format(ind.id))\n logging.info(\"You can also call me {}, or simply ({:.2}, {:.2}).\".format(ind.params, ind.x, ind.y))\n\n # let's make a circle\n computation_result = abs((ind.x**2 + ind.y**2) - 1)\n # DEAP wants a tuple as fitness, ALWAYS!\n fitness_tuple = (computation_result ,)\n\n # we also require a dictionary with at least a single result for storing the results in the hdf\n result_dict = {}\n\n return fitness_tuple, result_dict\n\n\npars = ParameterSpace(['x', 'y'], [[-5.0, 5.0], [-5.0, 5.0]])\nevolution = Evolution(optimize_me, pars, weightList = [-1.0],\n POP_INIT_SIZE=10, POP_SIZE = 6, NGEN=4, filename=\"example-2.0.hdf\")\n# info: chose POP_INIT_SIZE=100, POP_SIZE = 50, NGEN=10 for real exploration, \n# values here are low for testing: POP_INIT_SIZE=10, POP_SIZE = 6, NGEN=4\n\nevolution.run(verbose = True)\n
"},{"location":"examples/example-2-evolutionary-optimization-minimal/#simple-example-of-the-evolutionary-optimization-framework","title":"Simple example of the evolutionary optimization framework","text":"
This notebook provides a simple example for the use of the evolutionary optimization framework built into the library. Under the hood, the implementation of the evolutionary algorithm is powered by deap, while pypet takes care of the parallelization and storage of the simulation data for us.
Here we demonstrate how to fit the parameters of the evaluation function optimize_me, which simply computes the distance of the parameters to the unit circle and returns it as the fitness_tuple that DEAP expects.
"},{"location":"examples/example-2.0.1-save-and-load-evolution/","title":"Example 2.0.1 save and load evolution","text":"
In this example, we will demonstrate how to save an evolutionary optimization on one machine or instance and load the results on another. This is useful when the optimization is carried out on a different computer than the one on which the results are analyzed.
# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-2] == \"neurolib\":\n os.chdir('..')\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2 \n
\nMainProcess root INFO Saving evolution to saved_evolution.dill\n\n
Here, we pretend that we're on a completely new machine. We need to instantiate the Evolution class in order to fill it with the data from the previous optimization. For this, we create a \"mock\" evolution with placeholder parameters and then load the dill file to overwrite the mock values with the real ones.
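The save/load round trip follows the usual serialize-then-overwrite pattern. Here is a minimal sketch using stdlib pickle as a stand-in (neurolib serializes the evolution with dill, which additionally handles lambdas and closures; the state dictionary below is a made-up placeholder for the real Evolution internals):

```python
import os
import pickle
import tempfile

# stand-in for the evolution state (population, parameters, generation counter)
state = {"pop": [(0.74, 0.78), (0.76, 0.17)], "NGEN": 4, "gen": 3}

# "machine 1": serialize the state to disk
path = os.path.join(tempfile.gettempdir(), "saved_evolution.pkl")
with open(path, "wb") as f:
    pickle.dump(state, f)

# "machine 2": build a mock container with fake values, then
# overwrite the mock values with the real ones from disk
mock = {"pop": [], "NGEN": 0, "gen": 0}
with open(path, "rb") as f:
    mock.update(pickle.load(f))

print(mock["gen"])
```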
\nMainProcess root INFO weightList not set, assuming single fitness value to be maximized.\nMainProcess root INFO Trajectory Name: results-2021-02-15-12H-13M-39S\nMainProcess root INFO Storing data to: ./data/hdf/evolution.hdf\nMainProcess root INFO Trajectory Name: results-2021-02-15-12H-13M-39S\nMainProcess root INFO Number of cores: 8\nMainProcess pypet.storageservice.HDF5StorageService INFO I will use the hdf5 file `./data/hdf/evolution.hdf`.\nMainProcess pypet.environment.Environment INFO Environment initialized.\nMainProcess root INFO Evolution: Using algorithm: adaptive\n/Users/caglar/anaconda/lib/python3.7/site-packages/deap/creator.py:141: RuntimeWarning: A class named 'FitnessMulti' has already been created and it will be overwritten. Consider deleting previous creation of that class or rename it.\n RuntimeWarning)\n/Users/caglar/anaconda/lib/python3.7/site-packages/deap/creator.py:141: RuntimeWarning: A class named 'Individual' has already been created and it will be overwritten. Consider deleting previous creation of that class or rename it.\n RuntimeWarning)\nMainProcess root INFO Evolution: Individual generation: <function randomParametersAdaptive at 0x7fd122dfa950>\nMainProcess root INFO Evolution: Mating operator: <function cxBlend at 0x7fd122dcdb70>\nMainProcess root INFO Evolution: Mutation operator: <function gaussianAdaptiveMutation_nStepSizes at 0x7fd122dfad90>\nMainProcess root INFO Evolution: Parent selection: <function selRank at 0x7fd122dfaae8>\nMainProcess root INFO Evolution: Selection operator: <function selBest_multiObj at 0x7fd122dfab70>\n\n
Now, we should be able to do everything we want with the new evolution object.
We can also load the hdf file in which all simulated data was stored (\"random_output\" in the evaluation function above).
evolution_new.loadResults()\n
\nMainProcess root INFO Loading results from ./data/hdf/example-2.0.1.hdf\nMainProcess root INFO Analyzing trajectory results-2021-02-15-12H-13M-24S\nMainProcess pypet.storageservice.HDF5StorageService INFO I will use the hdf5 file `./data/hdf/example-2.0.1.hdf`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Loading trajectory `results-2021-02-15-12H-13M-24S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Loading branch `config` in mode `2`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Loading branch `derived_parameters` in mode `1`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Loading branch `parameters` in mode `2`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Loading branch `results` in mode `1`.\n\n
We can load the output from the hdf file by passing the argument outputs=True to the dfEvolution() method:
\n> Simulation parameters\nHDF file storage: ./data/hdf/example-2.0.1.hdf\nTrajectory Name: results-2021-02-15-12H-13M-24S\nDuration of evaluating initial population 0:00:01.093011\nDuration of evolution 0:00:08.117928\nEval function: <function optimize_me at 0x7fd124ee4840>\nParameter space: {'x': [-5.0, 5.0], 'y': [-5.0, 5.0]}\n> Evolution parameters\nNumber of generations: 4\nInitial population size: 10\nPopulation size: 6\n> Evolutionary operators\nMating operator: <function cxBlend at 0x7fd122dcdb70>\nMating paramter: {'alpha': 0.5}\nSelection operator: <function selBest_multiObj at 0x7fd122dfab70>\nSelection paramter: {}\nParent selection operator: <function selRank at 0x7fd122dfaae8>\nComments: no comments\n--- Info summary ---\nValid: 6\nMean score (weighted fitness): -0.93\nParameter distribution (Generation 3):\nx: mean: 0.4360, std: 1.0159\ny: mean: 0.3560, std: 0.5401\n--------------------\nBest 5 individuals:\nPrinting 5 individuals\nIndividual 0\n Fitness values: 0.16\n Score: -0.16\n Weighted fitness: -0.16\n Stats mean 0.16 std 0.00 min 0.16 max 0.16\n model.params[\"x\"] = 0.74\n model.params[\"y\"] = 0.78\nIndividual 1\n Fitness values: 0.4\n Score: -0.4\n Weighted fitness: -0.4\n Stats mean 0.40 std 0.00 min 0.40 max 0.40\n model.params[\"x\"] = 0.76\n model.params[\"y\"] = 0.17\nIndividual 2\n Fitness values: 0.47\n Score: -0.47\n Weighted fitness: -0.47\n Stats mean 0.47 std 0.00 min 0.47 max 0.47\n model.params[\"x\"] = 0.61\n model.params[\"y\"] = -0.41\nIndividual 3\n Fitness values: 1.19\n Score: -1.19\n Weighted fitness: -1.19\n Stats mean 1.19 std 0.00 min 1.19 max 1.19\n model.params[\"x\"] = 1.48\n model.params[\"y\"] = 0.03\nIndividual 4\n Fitness values: 1.22\n Score: -1.22\n Weighted fitness: -1.22\n Stats mean 1.22 std 0.00 min 1.22 max 1.22\n model.params[\"x\"] = 0.78\n model.params[\"y\"] = 1.27\n--------------------\n\n
\n/Users/caglar/anaconda/lib/python3.7/site-packages/neurolib/optimize/evolution/evolutionaryUtils.py:212: UserWarning: This figure includes Axes that are not compatible with tight_layout, so results might be incorrect.\n plt.tight_layout()\n\n
\nMainProcess root INFO Saving plot to ./data/figures/results-2021-02-15-12H-13M-24S_hist_3.png\n\n
\nThere are 6 valid individuals\nMean score across population: -0.93\n\n
\n<Figure size 432x288 with 0 Axes>\n
"},{"location":"examples/example-2.0.1-save-and-load-evolution/#saving-and-loading-evolution","title":"Saving and loading Evolution","text":""},{"location":"examples/example-2.0.1-save-and-load-evolution/#save-evolution","title":"Save evolution","text":""},{"location":"examples/example-2.0.1-save-and-load-evolution/#load-evolution","title":"Load evolution","text":""},{"location":"examples/example-2.1-evolutionary-optimization-aln/","title":"Example 2.1 evolutionary optimization aln","text":"
# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-1] == \"examples\":\n os.chdir('..')\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2\n
try:\n import matplotlib.pyplot as plt\nexcept ImportError:\n import sys\n !{sys.executable} -m pip install matplotlib seaborn\n import matplotlib.pyplot as plt\n\nimport numpy as np\nimport logging \n\nfrom neurolib.models.aln import ALNModel\nfrom neurolib.utils.parameterSpace import ParameterSpace\nfrom neurolib.optimize.evolution import Evolution\nimport neurolib.utils.functions as func\n\nimport neurolib.optimize.evolution.deapUtils as deapUtils\n\n# a nice color map\nplt.rcParams['image.cmap'] = 'plasma'\n
aln = ALNModel()\n
# Here we define our evaluation function. This function will\n# be called repeatedly and perform a single simulation. The object\n# that is passed to the function, `traj`, is a pypet trajectory\n# and serves as a \"bridge\" to load the parameter set of this \n# particular trajectory and execute a run.\n# Then the power spectrum of the run is computed and its maximum\n# is fitted to the target of 25 Hz peak frequency.\ndef evaluateSimulation(traj):\n # The trajectory id is provided as an attribute\n rid = traj.id\n logging.info("Running run id {}".format(rid))\n # this function provides a model with the particular\n # parameter set for this given run\n model = evolution.getModelFromTraj(traj)\n # parameters can also be modified after loading\n model.params['dt'] = 0.1\n model.params['duration'] = 2*1000.\n # and the simulation is run\n model.run()\n\n # compute power spectrum\n frs, powers = func.getPowerSpectrum(model.rates_exc[:, -int(1000/model.params['dt']):], dt=model.params['dt'])\n # find the peak frequency\n domfr = frs[np.argmax(powers)] \n # fitness evaluation: let's try to find a 25 Hz oscillation\n fitness = abs(domfr - 25) \n # deap needs a fitness *tuple*!\n fitness_tuple = ()\n # more fitness values could be added\n fitness_tuple += (fitness, )\n # we need to return the fitness tuple and the outputs of the model\n return fitness_tuple, model.outputs\n
The evolutionary algorithm tries to find the optimal parameter set that will maximize (or minimize) a certain fitness function.
This is achieved by seeding an initial population of size POP_INIT_SIZE that is randomly initialized in the parameter space parameterSpace. INIT: After simulating the initial population using evalFunction, only a subset of the individuals, defined by POP_SIZE, is kept.
START: Members of the remaining population are chosen based on their fitness (using rank selection) to mate and produce offspring. The offspring's parameters are drawn from a normal distribution centered on the mean of the two parents' parameters. The offspring population is then evaluated and the process loops back to START.
This process is repeated for NGEN generations.
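The INIT/START loop described above can be sketched in plain Python. This is a simplified stand-in, not neurolib's implementation: the toy unit-circle objective, the fixed mutation width of 0.3, and the plain rank weights are assumptions (neurolib uses deap's cxBlend mating and an adaptive Gaussian mutation):

```python
import random

random.seed(0)

POP_INIT_SIZE, POP_SIZE, NGEN = 20, 10, 15

def fitness(ind):
    # toy objective (not the notebook's target): distance to the unit circle
    x, y = ind
    return abs(x * x + y * y - 1.0)

def rank_select(pop):
    # rank selection: better (lower-fitness) individuals get higher weights
    ranked = sorted(pop, key=fitness)
    weights = [len(ranked) - i for i in range(len(ranked))]
    return random.choices(ranked, weights=weights, k=2)

# INIT: sample POP_INIT_SIZE random individuals, keep the best POP_SIZE
pop = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(POP_INIT_SIZE)]
pop = sorted(pop, key=fitness)[:POP_SIZE]
init_best = fitness(pop[0])

for gen in range(NGEN):
    # START: parents chosen by rank; offspring drawn from a normal
    # distribution centered on the mean of the two parents' parameters
    offspring = []
    for _ in range(POP_SIZE):
        (x1, y1), (x2, y2) = rank_select(pop)
        mx, my = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        offspring.append((random.gauss(mx, 0.3), random.gauss(my, 0.3)))
    # evaluate the offspring and keep the best POP_SIZE individuals
    pop = sorted(pop + offspring, key=fitness)[:POP_SIZE]

print(f"best fitness improved from {init_best:.3f} to {fitness(pop[0]):.3f}")
```

Keeping the best of parents plus offspring (elitism) guarantees that the best fitness never degrades from one generation to the next.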
# Here we define the parameters and the range in which we want\n# to perform the evolutionary optimization.\n# Create a `ParameterSpace` \npars = ParameterSpace(['mue_ext_mean', 'mui_ext_mean'], [[0.0, 4.0], [0.0, 4.0]])\n# Initialize the evolution with\n# :evaluateSimulation: The function that returns a fitness, \n# :pars: The parameter space and its boundaries to optimize\n# :model: The model that should be passed to the evaluation function\n# :weightList: A list of optimization weights for the `fitness_tuple`,\n# positive values will lead to a maximization, negative \n# values to a minimization. The length of this list must\n# be the same as the length of the `fitness_tuple`.\n# \n# :POP_INIT_SIZE: The size of the initial population that will be \n# randomly sampled in the parameter space `pars`.\n# Should be higher than POP_SIZE. 50-200 might be a good\n# range to start experimenting with.\n# :POP_SIZE: Size of the population that should evolve. Must be an\n# even number. 20-100 might be a good range to start with.\n# :NGEN: Number of generations to simulate the evolution for. A good\n# range to start with might be 20-100.\n\nweightList = [-1.0]\n\nevolution = Evolution(evalFunction = evaluateSimulation, parameterSpace = pars, model = aln, weightList = weightList,\n POP_INIT_SIZE=4, POP_SIZE = 4, NGEN=2, filename=\"example-2.1.hdf\")\n# info: chose POP_INIT_SIZE=50, POP_SIZE = 20, NGEN=20 for real exploration, \n# values are lower here for testing\n
# Enabling `verbose = True` will print statistics and generate plots \n# of the current population for each generation.\nevolution.run(verbose = False)\n
# the current population is always accessible via\npop = evolution.pop\n# we can also use the functions registered to deap\n# to select the best of the population:\nbest_10 = evolution.toolbox.selBest(pop, k=10)\n# Remember, we performed a minimization so a fitness\n# of 0 is optimal\nprint(\"Best individual\", best_10[0], \"fitness\", best_10[0].fitness)\n
We can look at the current population by calling evolution.dfPop() which returns a pandas dataframe with the parameters of each individual, its id, generation of birth, its outputs, and the fitness (called \"f0\" here).
# a simple overview of the current population (in this case the \n# last one) is given via the `info()` method. This provides \n# a histogram of the score (= mean fitness) and scatterplots\n# and density estimates across orthogonal parameter space cross \n# sections.\nevolution.info(plot=True)\n
\n> Simulation parameters\nHDF file storage: ./data/hdf/example-2.1.hdf\nTrajectory Name: results-2020-07-02-14H-20M-45S\nDuration of evaluating initial population 0:00:29.656935\nDuration of evolution 0:03:50.565418\nModel: <class 'neurolib.models.aln.model.ALNModel'>\nModel name: aln\nEval function: <function evaluateSimulation at 0x10ba8cae8>\nParameter space: {'mue_ext_mean': [0.0, 4.0], 'mui_ext_mean': [0.0, 4.0]}\n> Evolution parameters\nNumber of generations: 20\nInitial population size: 50\nPopulation size: 20\n> Evolutionary operators\nMating operator: <function cxBlend at 0x11dcaf510>\nMating paramter: {'alpha': 0.5}\nSelection operator: <function selBest_multiObj at 0x11f4d9d08>\nSelection paramter: {}\nParent selection operator: <function selRank at 0x11f4d9c80>\nComments: no comments\n--- Info summary ---\nValid: 20\nMean score (weighted fitness): -0.85\nParameter distribution (Generation 19):\nmue_ext_mean: mean: 1.0852, std: 0.1270\nmui_ext_mean: mean: 0.2200, std: 0.1042\n--------------------\nBest 5 individuals:\nPrinting 5 individuals\nIndividual 0\n Fitness values: 0.0\n Score: 0.0\n Weighted fitness: -0.0\n Stats mean 0.00 std 0.00 min 0.00 max 0.00\n model.params[\"mue_ext_mean\"] = 1.18\n model.params[\"mui_ext_mean\"] = 0.30\nIndividual 1\n Fitness values: 0.0\n Score: 0.0\n Weighted fitness: -0.0\n Stats mean 0.00 std 0.00 min 0.00 max 0.00\n model.params[\"mue_ext_mean\"] = 1.11\n model.params[\"mui_ext_mean\"] = 0.24\nIndividual 2\n Fitness values: 0.0\n Score: 0.0\n Weighted fitness: -0.0\n Stats mean 0.00 std 0.00 min 0.00 max 0.00\n model.params[\"mue_ext_mean\"] = 0.91\n model.params[\"mui_ext_mean\"] = 0.08\nIndividual 3\n Fitness values: 1.0\n Score: -1.0\n Weighted fitness: -1.0\n Stats mean 1.00 std 0.00 min 1.00 max 1.00\n model.params[\"mue_ext_mean\"] = 1.19\n model.params[\"mui_ext_mean\"] = 0.36\nIndividual 4\n Fitness values: 1.0\n Score: -1.0\n Weighted fitness: -1.0\n Stats mean 1.00 std 0.00 min 1.00 max 1.00\n 
model.params[\"mue_ext_mean\"] = 1.01\n model.params[\"mui_ext_mean\"] = 0.11\n--------------------\n\n
\nMainProcess root INFO Saving plot to ./data/figures/results-2020-07-02-14H-20M-45S_hist_19.png\n\n
\nThere are 20 valid individuals\nMean score across population: -0.85\n\n
\n<Figure size 432x288 with 0 Axes>\n
neurolib keeps track of all individuals during the evolution. You can see all individuals from each generation by calling evolution.history. The object evolution.tree provides a network description of the genealogy of the evolution: each individual (indexed by its unique .id) is connected to its parents. We can use this object in combination with the network library networkx to plot the tree:
# we put this into a try except block since we don't do testing on networkx\ntry:\n import matplotlib.pyplot as plt\n import networkx as nx\n from networkx.drawing.nx_pydot import graphviz_layout\n\n G = nx.DiGraph(evolution.tree)\n G = G.reverse() # Make the graph top-down\n pos = graphviz_layout(G, prog='dot')\n plt.figure(figsize=(8, 8))\n nx.draw(G, pos, node_size=50, alpha=0.5, node_color=list(evolution.id_score.values()), with_labels=False)\n plt.show()\nexcept:\n print(\"It looks like networkx or pydot are not installed\")\n
\n/Users/caglar/anaconda/lib/python3.7/site-packages/networkx/drawing/nx_pylab.py:579: MatplotlibDeprecationWarning: \nThe iterable function was deprecated in Matplotlib 3.1 and will be removed in 3.3. Use np.iterable instead.\n if not cb.iterable(width):\n/Users/caglar/anaconda/lib/python3.7/site-packages/networkx/drawing/nx_pylab.py:676: MatplotlibDeprecationWarning: \nThe iterable function was deprecated in Matplotlib 3.1 and will be removed in 3.3. Use np.iterable instead.\n if cb.iterable(node_size): # many node sizes\n\n
"},{"location":"examples/example-2.1-evolutionary-optimization-aln/#evolutionary-parameter-search-with-a-single-neural-mass-model","title":"Evolutionary parameter search with a single neural mass model","text":"
This notebook provides a simple example for the use of the evolutionary optimization framework built into the library. Under the hood, the implementation of the evolutionary algorithm is powered by deap, while pypet takes care of the parallelization and storage of the simulation data for us.
We want to optimize for a simple target, namely finding a parameter configuration that produces activity whose power spectrum peaks at 25 Hz.
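The fitness used in evaluateSimulation above is simply the distance of the dominant spectral peak from 25 Hz. A stdlib-only sketch of that computation, with a plain DFT standing in for func.getPowerSpectrum (which uses Welch's method):

```python
import math

dt = 0.005                          # s; n * dt = 1 s -> 1 Hz frequency resolution
n = 200
signal = [math.sin(2 * math.pi * 25 * i * dt) for i in range(n)]  # 25 Hz sine

# power spectrum via a plain DFT (stand-in for Welch's method)
freqs = [k / (n * dt) for k in range(n // 2)]
powers = []
for k in range(n // 2):
    re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
    powers.append(re * re + im * im)

# dominant frequency = frequency bin with maximum power
domfr = freqs[max(range(len(powers)), key=lambda k: powers[k])]
fitness = abs(domfr - 25.0)         # the quantity the evolution minimizes
print(domfr, fitness)
```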
In this notebook, we will also plot the evolutionary genealogy tree, to visualize how the population evolves over generations.
"},{"location":"examples/example-2.1-evolutionary-optimization-aln/#model-definition","title":"Model definition","text":""},{"location":"examples/example-2.1-evolutionary-optimization-aln/#initialize-and-run-evolution","title":"Initialize and run evolution","text":""},{"location":"examples/example-2.1-evolutionary-optimization-aln/#analysis","title":"Analysis","text":""},{"location":"examples/example-2.1-evolutionary-optimization-aln/#population","title":"Population","text":""},{"location":"examples/example-2.1-evolutionary-optimization-aln/#plotting-genealogy-tree","title":"Plotting genealogy tree","text":""},{"location":"examples/example-2.2-evolution-brain-network-aln-resting-state-fit/","title":"Example 2.2 evolution brain network aln resting state fit","text":"
# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-1] == \"examples\":\n os.chdir('..')\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2 \n
try:\n import matplotlib.pyplot as plt\nexcept ImportError:\n import sys\n !{sys.executable} -m pip install matplotlib seaborn\n import matplotlib.pyplot as plt\n\nimport numpy as np\nimport logging \n\nfrom neurolib.models.aln import ALNModel\nfrom neurolib.utils.parameterSpace import ParameterSpace\nfrom neurolib.optimize.evolution import Evolution\nimport neurolib.utils.functions as func\n\nfrom neurolib.utils.loadData import Dataset\nds = Dataset(\"hcp\")\n\n# a nice color map\nplt.rcParams['image.cmap'] = 'plasma'\n
We create a brain network model using the empirical dataset ds:
model = ALNModel(Cmat = ds.Cmat, Dmat = ds.Dmat) # simulates the whole-brain model in 10s chunks by default if bold == True\n# Resting state fits\nmodel.params['mue_ext_mean'] = 1.57\nmodel.params['mui_ext_mean'] = 1.6\nmodel.params['sigma_ou'] = 0.09\nmodel.params['b'] = 5.0\nmodel.params['signalV'] = 2\nmodel.params['dt'] = 0.2\nmodel.params['duration'] = 0.2 * 60 * 1000 #ms\n# testing: model.params['duration'] = 0.2 * 60 * 1000 #ms\n# real: model.params['duration'] = 1.0 * 60 * 1000 #ms\n
Our evaluation function will do the following: first, it simulates the model for a short time to check whether there is sufficient activity. This speeds up the evolution considerably, since large regions of the state space show almost no neuronal activity. Only then do we simulate the model for the full duration and compute the fitness using the empirical dataset.
def evaluateSimulation(traj):\n    rid = traj.id\n    model = evolution.getModelFromTraj(traj)\n    defaultDuration = model.params['duration']\n    invalid_result = (np.nan,) * len(ds.FCs)\n\n    # -------- stage-wise simulation --------\n\n    # Stage 1: simulate for a few seconds to see if there is any activity\n    model.params['duration'] = 3 * 1000.\n    model.run()\n\n    # check if stage 1 was successful: reject silent or diverging activity\n    if np.max(model.output[:, model.t > 500]) > 160 or np.max(model.output[:, model.t > 500]) < 10:\n        return invalid_result, {}\n\n    # Stage 2: full and final simulation\n    model.params['duration'] = defaultDuration\n    model.run(chunkwise=True, bold=True)\n\n    # -------- fitness evaluation --------\n    # one fitness entry per empirical FC matrix\n    scores = []\n    for fc in ds.FCs:\n        fc_score = func.matrix_correlation(func.fc(model.BOLD.BOLD[:, 5:]), fc)\n        scores.append(fc_score)\n\n    fitness_tuple = tuple(scores)\n    return fitness_tuple, {}\n
We specify the parameter space that we want to search.
Note that we choose algorithm='nsga2' when creating the Evolution. This uses the multi-objective optimization algorithm by Deb et al. 2002. Although we have only one type of objective here (namely the FC fit), we could in principle add more objectives, such as a fit to the FCD matrix. For this, we would add these values to the fitness tuple in the evaluation function above and add corresponding weights in the definition of the Evolution. Positive weights mean an objective is maximized, negative weights mean it is minimized. Please refer to the DEAP documentation for more information.
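As a toy illustration of the weight semantics (this is not neurolib API; the names fc_fit and fcd_distance and all values are made up for this sketch), a second objective with a negative weight would be minimized:

```python
# Hypothetical two-objective fitness: maximize the FC fit,
# minimize a distance between simulated and empirical FCD.
fc_fit = 0.55          # correlation to empirical FC -> maximize
fcd_distance = 0.12    # distance between FCD distributions -> minimize

fitness_tuple = (fc_fit, fcd_distance)
weights = (1.0, -1.0)  # positive weight: maximize, negative weight: minimize

# conceptually, selection acts on the weighted fitness values
weighted = tuple(w * f for w, f in zip(weights, fitness_tuple))
```

Both entries of `weighted` are then "larger is better" from the algorithm's point of view.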
We could now save the full evolution object for later analysis using evolution.saveEvolution().
The info() method gives us a useful overview of the evolution, like a summary of the evolution parameters, the statistics of the population and also scatterplots of the individuals in our search space.
"},{"location":"examples/example-2.2-evolution-brain-network-aln-resting-state-fit/#evolutionary-optimization-of-a-whole-brain-model","title":"Evolutionary optimization of a whole-brain model","text":"
This notebook provides an example for using the evolutionary optimization framework built into the library. Under the hood, the evolutionary algorithm is powered by deap, while pypet takes care of the parallelization and storage of the simulation data for us.
We want to optimize a whole-brain network that should produce simulated BOLD activity (fMRI data) similar to the empirical dataset. We measure the fitness of each simulation by computing func.matrix_correlation between the functional connectivity func.fc(model.BOLD.BOLD) and the empirical data ds.FCs. The simulations closest to the empirical data receive a higher fitness and thus have a higher chance of reproduction and survival.
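The two helper functions can be approximated with plain numpy. A minimal sketch (neurolib's actual implementations may differ in details, e.g. which matrix entries enter the correlation):

```python
import numpy as np

def fc(ts):
    # functional connectivity: pairwise Pearson correlation of
    # region time series; ts has shape (regions, time)
    return np.corrcoef(ts)

def matrix_correlation(m1, m2):
    # Pearson correlation between the flattened entries of two matrices
    return np.corrcoef(m1.flatten(), m2.flatten())[0, 1]

rng = np.random.default_rng(0)
ts = rng.normal(size=(5, 200))   # 5 regions, 200 time points
sim_fc = fc(ts)
self_corr = matrix_correlation(sim_fc, sim_fc)
```

A matrix correlated with itself gives a fitness of 1, the upper bound of this measure.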
"},{"location":"examples/example-2.2-evolution-brain-network-aln-resting-state-fit/#analysis","title":"Analysis","text":""},{"location":"examples/example-3-meg-functional-connectivity/","title":"Example 3 meg functional connectivity","text":"
In this example we will learn how to use neurolib to simulate resting state functional connectivity of MEG recordings.
In the first part of the notebook, we will compute the frequency-specific functional connectivity matrix of an exemplary resting-state MEG recording from the YouR-Study (Uhlhaas, P.J., Gajwani, R., Gross, J. et al. The Youth Mental Health Risk and Resilience Study (YouR-Study). BMC Psychiatry 17, 43 (2017)).
To this end we will:
Band-pass filter the signal
Apply the Hilbert transform to extract the signal envelope
Orthogonalize the signal envelopes of two exemplary regions
Low-pass filter the signal envelopes
Compute the pairwise envelope correlations, which yields the functional connectivity matrix.
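The filtering and envelope-extraction steps above can be sketched with scipy (the notebook itself uses neurolib's Signal class, which wraps FIR filtering via MNE; the Butterworth filter orders and the 0.5 Hz low-pass cutoff here are illustrative choices, not the notebook's exact settings):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 100.0                                  # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
# toy signal: a 10 Hz oscillation with a slowly modulated amplitude
x = np.sin(2 * np.pi * 10 * t) * (1 + 0.5 * np.sin(2 * np.pi * 0.2 * t))

# 1) band-pass filter into the alpha band (8-12 Hz)
b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
x_band = filtfilt(b, a, x)

# 2) Hilbert transform -> analytic signal; its magnitude is the envelope
envelope = np.abs(hilbert(x_band))

# 3) low-pass filter the envelope (here at 0.5 Hz)
b_lp, a_lp = butter(4, 0.5 / (fs / 2), btype="low")
envelope_lp = filtfilt(b_lp, a_lp, envelope)
```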
We follow the approach presented in Hipp, J., Hawellek, D., Corbetta, M. et al., Large-scale cortical correlation structure of spontaneous oscillatory activity. Nat Neurosci 15, 884\u2013890 (2012)
In the second part of this notebook, we will use a whole-brain model to simulate brain activity and compute the functional connectivity matrix of the simulated signal envelopes, as was done for the empirical MEG data. The parameters of this model have been previously optimized with neurolib's evolutionary algorithms (not shown here).
Finally, we will compute the fit (Pearson correlation) of the simulated functional connectivity to the empirical MEG data, which was used as a fitting objective in a previous optimization procedure.
# change to the root directory of the project\nimport os\nif os.getcwd().split(\"/\")[-1] == \"examples\":\n os.chdir('..')\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2 \n
import os\nimport numpy as np\nimport xarray as xr\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport ipywidgets as widgets\nfrom IPython.utils import io\nimport warnings\nwarnings.simplefilter(action='ignore', category=FutureWarning)\nimport time\nimport pandas as pd\n
from neurolib.utils.signal import Signal \n\nsignal = Signal.from_file(os.path.join('examples', 'data','rs-meg.nc'))\nregion_labels = signal.data.regions.values\nnr_regions = len(region_labels)\ndisplay(signal.data)\n
Attributes (6): name: rest meg · label: · signal_type: · unit: T · description: MEG recording in AAL2 space · process_steps_0: resample to 100.0Hz
We will now filter the signal into the desired frequency band and apply the Hilbert transform to the band-pass filtered signal. This provides us with the analytic representation of the signal, which we can then use to extract the signal's envelope and its phase.
In the following, we plot each processing step for an example target region that you can choose using the widgets below (default: left Precentral Gyrus). Furthermore, you can choose the frequency range to filter the signal in (default: alpha, 8-12 Hz).
print('Select a region from the AAL2 atlas and a frequency range')\n# Select a Region \ntarget = widgets.Select(options=region_labels, value='PreCG.L', description='Regions', \n tooltips=['Description of slow', 'Description of regular', 'Description of fast'], \n layout=widgets.Layout(width='50%', height='150px'))\ndisplay(target)\n\n# Select Frequency Range\nfreq = widgets.IntRangeSlider(min=1, max=46, description='Frequency (Hz)', value=[8, 12], layout=widgets.Layout(width='80%'), \n style={'description_width': 'initial'})\ndisplay(freq)\n
# Define how many timepoints you'd like to plot\nplot_timepoints = 1000\n\n# Plot unfiltered Signal\nfig, ax = plt.subplots(2, 1, figsize=(12,8), sharex=True)\nsns.lineplot(x=signal.data.time[:plot_timepoints].values, y=signal.data.sel(regions=target.value)[:plot_timepoints].values, \n ax=ax[0], color='k', alpha=0.6)\nax[0].set_title(f'Unfiltered Signal ({target.value})');\n\n# Band Pass Filter the Signal\nsignal.filter(freq.value[0], freq.value[1], inplace=True);\n\n# Apply hilbert-transform to extract the signal envelope\ncomplex_signal = signal.hilbert_transform('complex', inplace=False)\nsignal_env = np.abs(complex_signal.data)\n\n# Plot filtered Signal and Signal Envelope\nsns.lineplot(x=signal.data.time[:plot_timepoints].values, y=signal.data.sel(regions=target.value)[:plot_timepoints].values, \n ax=ax[1], label='Bandpass-Filtered Signal')\nsns.lineplot(x=signal_env.time[:plot_timepoints].values, y=signal_env.sel(regions=target.value)[:plot_timepoints].values, \n ax=ax[1], label='Signal Envelope')\nax[1].set_title(f'Filtered Signal ({target.value})');\nax[1].legend(bbox_to_anchor=(1.2, 1),borderaxespad=0)\nsns.despine(trim=True)\n
\nSetting up band-pass filter from 8 - 12 Hz\n\nFIR filter parameters\n---------------------\nDesigning a one-pass, zero-phase, non-causal bandpass filter:\n- Windowed time-domain design (firwin) method\n- Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation\n- Lower passband edge: 8.00\n- Lower transition bandwidth: 2.00 Hz (-6 dB cutoff frequency: 7.00 Hz)\n- Upper passband edge: 12.00 Hz\n- Upper transition bandwidth: 3.00 Hz (-6 dB cutoff frequency: 13.50 Hz)\n- Filter length: 165 samples (1.650 sec)\n\n\n
print('Select a reference region for the orthogonalization')\n# Select a Region \nreferenz = widgets.Select(options=region_labels, value='PreCG.R', description='Regions',\n tooltips=['Description of slow', 'Description of regular', 'Description of fast'],\n layout=widgets.Layout(width='50%', height='150px'))\ndisplay(referenz)\n
exclude = list(range(40, 46)) + list(range(74, 82))\ntmp = np.delete(fc, exclude, axis=0)\nemp_fc = np.delete(tmp, exclude, axis=1)\n# Exclude regions from the list of region labels\nemp_labels = np.delete(region_labels, exclude)\n
# Let's import the neurolib\nfrom neurolib.models.wc import WCModel\nfrom neurolib.utils.loadData import Dataset\n\n# First we load the structural data set from the Human Connectome Project \nds = Dataset(\"hcp\")\n\n# We initiate the Wilson-Cowan model\nwc = WCModel(Cmat = ds.Cmat, Dmat = ds.Dmat, seed=0)\n
# Let's set the previously defined parameters\n# note: the duration here is kept short for testing:\nwc.params['duration'] = 10*1000 \n\n# use a longer simulation for a real run:\n# wc.params['duration'] = 1*60*1000 \n\nwc.params['K_gl'] = global_coupling.value\nwc.params['exc_ext'] = exc_drive.value\nwc.params['inh_ext'] = inh_drive.value\nwc.params['sigma_ou'] = noise_level.value\n# Run the model\nwc.run()\n
# Create xr DataArray from the simulated excitatory timeseries (keeping the region labels)\nsim_signal = xr.DataArray(wc.exc[:, int(1000/wc.params.dt):], dims=(\"regions\", \"time\"), coords={\"regions\": emp_labels, \"time\": wc.t[int(1000/wc.params.dt):]/1000}, \n attrs={'atlas':'AAL2'})\n\n# Initialize Figure\nfig, ax = plt.subplots(figsize=(12,4))\n\n# Filter signal\nsim_signal = Signal(sim_signal)\nsim_signal.resample(to_frequency=100)\nwith io.capture_output() as captured:\n sim_signal.filter(freq.value[0], freq.value[1], inplace=True);\nsns.lineplot(x=sim_signal.data.time[:plot_timepoints].values, y=sim_signal.data.sel(regions=target.value)[:plot_timepoints].values, ax=ax, label='Filtered Signal')\n\n# Extract signal envelope \nsim_signal.hilbert_transform('amplitude', inplace=True)\nsns.lineplot(x=sim_signal.data.time[:plot_timepoints].values, y=sim_signal.data.sel(regions=target.value)[:plot_timepoints].values, ax=ax, label='Signal Envelope')\n\n# Low-Pass Filter\nwith io.capture_output() as captured:\n sim_signal.filter(low_freq=None, high_freq=low_pass.value, inplace=True)\nsns.lineplot(x=sim_signal.data.time[:plot_timepoints].values, y=sim_signal.data.sel(regions=target.value)[:plot_timepoints].values, ax=ax, label='Low-Pass Signal Envelope')\nax.legend(bbox_to_anchor=(1.2, 1),borderaxespad=0)\nax.set_title(f'Simulated Signal of Target Region Y ({target.value})');\nsns.despine(trim=True)\n
To compute the simulated functional connectivity matrix, we use the fc function from neurolib.
import neurolib.utils.functions as func\n\n# Compute the functional connectivity matrix\nsim_fc = func.fc(sim_signal.data)\n\n# Set diagonal to zero\nnp.fill_diagonal(sim_fc, 0)\n\n# Plot Empirical and simulated connectivity matrix\nfig, ax = plt.subplots(1,2, figsize=(16,10))\nsns.heatmap(emp_fc, square=True, ax=ax[0], cmap='YlGnBu', linewidth=0.005, cbar_kws={\"shrink\": .5})\nax[0].set_title('Empirical FC',pad=10);\nsns.heatmap(sim_fc, square=True, ax=ax[1], cmap='YlGnBu', linewidth=0.005, cbar_kws={\"shrink\": .5})\nax[1].set_title('Simulated FC',pad=10);\nticks = [tick[:-2] for tick in emp_labels[::2]]\nfor ax in ax:\n ax.set_xticks(np.arange(0,80,2)); ax.set_yticks(np.arange(0,80,2)) \n ax.set_xticklabels(ticks, rotation=90, fontsize=8); ax.set_yticklabels(ticks, rotation=0, fontsize=8);\n
# Compare structural and simulated connectivity to the empirical functional connectivity\nstruct_emp = np.corrcoef(emp_fc.flatten(), ds.Cmat.flatten())[0,1]\nsim_emp = np.corrcoef(emp_fc.flatten(), sim_fc.flatten())[0,1]\n\n# Plot\nfig, ax = plt.subplots(figsize=(6,6))\nsplot = sns.barplot(x=['Structural Connectivity', 'Simulated Connectivity'], y=[struct_emp, sim_emp], ax=ax)\nax.set_title('Correlation to Empirical Functional Connectivity', pad=10)\nfor p in splot.patches:\n    splot.annotate(format(p.get_height(), '.2f'), \n                   (p.get_x() + p.get_width() / 2., p.get_height()), \n                   ha = 'center', va = 'center', \n                   size=20, color='white',\n                   xytext = (0, -12), \n                   textcoords = 'offset points')\nsns.despine()\nprint(f\"Parameters: \\tGlobal Coupling: {wc.params['K_gl']}\\n\\t\\tExc. Background Drive: {wc.params['exc_ext']}\")\nprint(f\"\\t\\tNoise Level: {wc.params['sigma_ou']}\")\n
First off, let's load the MEG data using the Signal class from neurolib. Our example data has already been preprocessed and projected into source space using the AAL2 atlas.
"},{"location":"examples/example-3-meg-functional-connectivity/#band-pass-filter-and-hilbert-transform","title":"Band-Pass filter and Hilbert transform","text":""},{"location":"examples/example-3-meg-functional-connectivity/#orthogonalized-signal-envelope","title":"Orthogonalized signal envelope","text":"
Now we are going to address the main methodological issue of MEG when it comes to the analysis of the cortical functional connectivity structure, i.e. its low spatial resolution. The electric field generated by any given neural source spreads widely over the cortex so that the signal captured at the MEG sensors is a complex mixture of signals from multiple underlying neural sources.
To account for the effect of electric field spread on our MEG connectivity measures, we adapted the orthogonalization approach by Hipp, J., Hawellek, D., Corbetta, M. et al. Large-scale cortical correlation structure of spontaneous oscillatory activity. Nat Neurosci 15, 884\u2013890 (2012) link.
The basic idea here is that a signal generated by one neural source and measured at two separate sensors must have exactly the same phase at both sensors, whereas signals from different neural sources have different phases. It is thus possible to eliminate the effect of a reference signal on the target signal by removing the signal component that has the same phase as the reference region.
Formally, this can be expressed as: \(Y_{\perp X}(t,f) = \mathrm{imag}\big(\, Y(t,f)\, \frac{X(t,f)^\star}{|X(t,f)|}\, \big)\), with \(\star\) denoting complex conjugation. Here, \(Y\) represents the analytic signal of the target region, which is orthogonalized with respect to the signal from region \(X\).
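This formula translates directly into numpy. A minimal sketch (the function name orthogonalize is ours, not neurolib's; hilbert builds the analytic signals for the demonstration):

```python
import numpy as np
from scipy.signal import hilbert

def orthogonalize(y, x):
    # Y_orth_X(t) = imag( Y(t) * conj(X(t)) / |X(t)| )
    return np.imag(y * np.conj(x) / np.abs(x))

t = np.linspace(0, 1, 1000)
x = hilbert(np.sin(2 * np.pi * 10 * t))        # analytic reference signal
y_same = 2.0 * x                               # shares x's phase exactly
y_mixed = y_same + hilbert(np.cos(2 * np.pi * 17 * t))

# a component with exactly the reference phase is removed completely,
# while a component from a "different source" survives
residual_same = orthogonalize(y_same, x)
residual_mixed = orthogonalize(y_mixed, x)
```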
Using the widgets below, you can choose the reference region \\(X\\) (default: right Precentral Gyrus)
"},{"location":"examples/example-3-meg-functional-connectivity/#low-pass-filtering-of-the-envelopes","title":"Low-Pass filtering of the envelopes","text":"
As a last step, before calculating the envelope correlations, we need to low-pass filter the signal envelopes since the connectivity measures of (ultra)-low frequency components of the MEG-signal correspond best to the functional connectivity as measured using fMRI.
Below, you can choose the low-pass frequency (default: 0.2 Hz).
"},{"location":"examples/example-3-meg-functional-connectivity/#computing-the-functional-connectivity-matrix","title":"Computing the functional connectivity matrix","text":"
We will now define a function that iterates over each pair of brain regions and performs the previously presented processing steps: it extracts the envelopes, performs the orthogonalization, applies the low-pass filter, and returns the functional connectivity matrix containing the pairwise envelope correlations.
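A simplified sketch of such a function, assuming already band-pass filtered input (this sketch omits the low-pass filtering of the envelopes and takes the magnitude of the orthogonalized component directly, so the notebook's actual implementation will differ in those details):

```python
import numpy as np
from scipy.signal import hilbert

def envelope_fc(band_passed):
    # band_passed: array of shape (regions, time), already band-pass filtered
    analytic = hilbert(band_passed, axis=1)
    n = band_passed.shape[0]
    fc = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # orthogonalize region j with respect to region i,
            # then correlate the two envelopes
            orth = np.imag(analytic[j] * np.conj(analytic[i]) / np.abs(analytic[i]))
            fc[i, j] = np.corrcoef(np.abs(analytic[i]), np.abs(orth))[0, 1]
    # orthogonalization is not symmetric, so average both directions
    return 0.5 * (fc + fc.T)

rng = np.random.default_rng(1)
mat = envelope_fc(rng.normal(size=(4, 500)))
```

The result is a symmetric regions-by-regions matrix with a zero diagonal.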
For the following whole-brain simulation we are only interested in the cortical regions, so we now exclude all subcortical regions: * Hippocampus: 41-44 * Amygdala: 45-46 * Basal Ganglia: 75-80 * Thalamus: 81-82
In this part of the notebook, we will use neurolib to simulate the functional connectivity. We will therefore:
Load structural connectivity matrices from the Human Connectome Project and initiate the whole-brain model using the Wilson-Cowan model to simulate each brain region
Set the global coupling strength, exc. background input, and the noise strength parameters of the model
Run the simulation
Compute the functional connectivity using the signal envelopes
Please refer to the wc-minimal example for an introduction to the Wilson-Cowan model.
You may now choose parameter settings for the global coupling, the excitatory background input, and the noise strength, which will be used when we run the model. The final fit between the simulated and empirical connectivity matrices will depend on the parameters chosen here.
"},{"location":"examples/example-3-meg-functional-connectivity/#run-the-simulation","title":"Run the simulation","text":"
Let's now run the whole-brain model using the defined parameter settings. This may take some time, since for a real run we simulate a complete minute (the testing duration set above is shorter).
We'll now compute the functional connectivity matrix containing the pairwise envelope correlations between all cortical regions of the AAL2 atlas. We thus follow the same processing steps as before: band-pass filter the signal, extract the signal envelopes using the Hilbert transform, low-pass filter the envelopes, and compute the pairwise Pearson correlations. Note that we don't apply the orthogonalization scheme here, since it was only needed to account for the electric field spread in the empirical data.
Lastly, we evaluate the model fit by computing the Pearson correlation between our simulated functional connectivity matrix and the empirical one. As a reference, we also plot the correlation between the structural and the empirical functional connectivity matrices.
# import stuff\ntry:\n    import matplotlib.pyplot as plt\nexcept ImportError:\n    import sys\n    !{sys.executable} -m pip install matplotlib\n    import matplotlib.pyplot as plt\n\nimport numpy as np\nfrom IPython.display import display\n\n# import the ALN single node model and neurolib's wrapper `MultiModel`\nfrom neurolib.models.multimodel import ALNNetwork, ALNNode, MultiModel\n
# create a model and wrap it to `MultiModel`\naln = MultiModel.init_node(ALNNode())\n\n# 5 seconds run\naln.params[\"duration\"] = 5.0 * 1000  # in ms\n# MultiModel offers two integration backends; by default we are using the so-called `jitcdde` backend\n# `jitcdde` is a numerical backend employing an adaptive-dt scheme for DDEs, therefore we do not care\n# about the actual dt (since it is adaptive), only about the sampling dt, which can be larger\n# more about this in example-4.2\naln.params[\"sampling_dt\"] = 1.0  # in ms\n# parametrise the ALN model in the slow limit cycle regime\naln.params[\"*EXC*input*mu\"] = 4.2\naln.params[\"*INH*input*mu\"] = 1.8\n# run\naln.run()\n\n# plot - the default output is firing rates in kHz\nplt.plot(aln[\"t\"], aln[\"r_mean_EXC\"].T, lw=2, c=\"k\")\nplt.xlabel(\"t [ms]\")\nplt.ylabel(\"Rate [kHz]\")\n
\n/Users/nikola/.virtualenvs/neurolib/lib/python3.7/site-packages/jitcdde/_jitcdde.py:1491: UserWarning: Your input past does not begin at t=0 but at t=1.0. Values before the beginning of the past will be extrapolated. You very likely do not want this.\n warn(f\"Your input past does not begin at t=0 but at t={input[0].time}. Values before the beginning of the past will be extrapolated. You very likely do not want this.\")\n\n
As you saw in the previous cell, the internal workings of MultiModel are very similar to core neurolib. For simple runs, you therefore only need to care about the following: * MultiModel: a wrapper class for all models in the MultiModel framework, which gives model objects neurolib powers (meaning .params and .run()). The MultiModel class is initialized as follows: * when initializing with a Node: model = MultiModel.init_node(<init'd Node class>) * when initializing with a Network: model = MultiModel(<init'd Network class>) (see later)
dummy_sc = np.array([[0.0, 1.0], [1.0, 0.0]])\n# init a MultiModel network with 2 ALN nodes, a dummy SC and no delays\nmm_net = ALNNetwork(connectivity_matrix=dummy_sc, delay_matrix=None)\n\nprint(mm_net)\n# each network is a proper python iterator, i.e. len() is defined\nprint(f\"Nodes: {len(mm_net)}\")\n# as well as __get_item__\nprint(mm_net[0])\nprint(mm_net[1])\n# similarly, each node is a python iterator, i.e.\nprint(f\"Masses in 1. node: {len(mm_net[0])}\")\nprint(mm_net[0][0])\nprint(mm_net[0][1])\n\n# in order to navigate through the hierarchy, each mass, node and net\n# has its own name, label and index\n# the index of a node is relative within the network\n# the index of a mass is relative within the node\nprint(f\"This network name: {mm_net.name}\")\nprint(f\"This network label: {mm_net.label}\")\nprint(f\"1st node name: {mm_net[0].name}\")\nprint(f\"1st node label: {mm_net[0].label}\")\nprint(f\"1st node index: {mm_net[0].index}\")\nprint(f\"1st mass in 1st node name: {mm_net[0][0].name}\")\nprint(f\"1st mass in 1st node label: {mm_net[0][0].label}\")\nprint(f\"1st mass in 1st node index: {mm_net[0][0].index}\")\n\n# you can also check the number of variables etc. at all levels of the hierarchy\nprint(f\"ALN EXC num. vars: {mm_net[0][0].num_state_variables}\")\nprint(f\"ALN INH num. vars: {mm_net[0][1].num_state_variables}\")\nprint(f\"ALN node num. vars: {mm_net[0].num_state_variables}\")\nprint(f\"This network num. vars: {mm_net.num_state_variables}\")\n# similarly you can check the number of \"noise variables\", i.e. the number\n# of stochastic variables entering the simulation\nprint(f\"ALN EXC noise vars: {mm_net[0][0].num_noise_variables}\")\n# etc\n\n# not sure what the state variables are? 
no problem!\nprint(f\"ALN EXC state vars: {mm_net[0][0].state_variable_names}\")\nprint(f\"ALN node state vars: {mm_net[0].state_variable_names}\")\nprint(f\"This network state vars: {mm_net.state_variable_names}\")\n\n# if you are unsure what kind of a monster you build in MultiModel,\n# a function `describe()` is available at all three levels -\n# it returns a dictionary with basic info about the model object\n# this is describe of a `NeuralMass`\nprint(\"\")\nprint(\"Mass `describe`:\")\ndisplay(mm_net[0][0].describe())\n# describe of a `Node` recursively describes all masses and some more\nprint(\"\")\nprint(\"Node `describe`:\")\ndisplay(mm_net[0].describe())\n# and finally, describe of a `Network` gives you everything\nprint(\"\")\nprint(\"Network `describe`:\")\ndisplay(mm_net.describe())\n\n# PRO tip: imagine highly heterogeneous network and some long simulation with it;\n# apart from the results you can dump `net.describe()` dictionary into json and\n# never forget what you've done!\n
\nBrain network ALN neural mass network with 2 nodes\nNodes: 2\nNetwork node: ALN neural mass node with 2 neural masses: ALN excitatory neural mass EXC, ALN inhibitory neural mass INH\nNetwork node: ALN neural mass node with 2 neural masses: ALN excitatory neural mass EXC, ALN inhibitory neural mass INH\nMasses in 1. node: 2\nNeural mass: ALN excitatory neural mass with 7 state variables: I_mu, I_A, I_syn_mu_exc, I_syn_mu_inh, I_syn_sigma_exc, I_syn_sigma_inh, r_mean\nNeural mass: ALN inhibitory neural mass with 6 state variables: I_mu, I_syn_mu_exc, I_syn_mu_inh, I_syn_sigma_exc, I_syn_sigma_inh, r_mean\nThis network name: ALN neural mass network\nThis network label: ALNNet\n1st node name: ALN neural mass node\n1st node label: ALNNode\n1st node index: 0\n1st mass in 1st node name: ALN excitatory neural mass\n1st mass in 1st node label: ALNMassEXC\n1st mass in 1st node index: 0\nALN EXC num. vars: 7\nALN INH num. vars: 6\nALN node num. vars: 13\nThis network num. vars: 26\nALN EXC noise vars: 1\nALN EXC state vars: ['I_mu', 'I_A', 'I_syn_mu_exc', 'I_syn_mu_inh', 'I_syn_sigma_exc', 'I_syn_sigma_inh', 'r_mean']\nALN node state vars: [['I_mu_EXC', 'I_A_EXC', 'I_syn_mu_exc_EXC', 'I_syn_mu_inh_EXC', 'I_syn_sigma_exc_EXC', 'I_syn_sigma_inh_EXC', 'r_mean_EXC', 'I_mu_INH', 'I_syn_mu_exc_INH', 'I_syn_mu_inh_INH', 'I_syn_sigma_exc_INH', 'I_syn_sigma_inh_INH', 'r_mean_INH']]\nThis network state vars: [['I_mu_EXC', 'I_A_EXC', 'I_syn_mu_exc_EXC', 'I_syn_mu_inh_EXC', 'I_syn_sigma_exc_EXC', 'I_syn_sigma_inh_EXC', 'r_mean_EXC', 'I_mu_INH', 'I_syn_mu_exc_INH', 'I_syn_mu_inh_INH', 'I_syn_sigma_exc_INH', 'I_syn_sigma_inh_INH', 'r_mean_INH'], ['I_mu_EXC', 'I_A_EXC', 'I_syn_mu_exc_EXC', 'I_syn_mu_inh_EXC', 'I_syn_sigma_exc_EXC', 'I_syn_sigma_inh_EXC', 'r_mean_EXC', 'I_mu_INH', 'I_syn_mu_exc_INH', 'I_syn_mu_inh_INH', 'I_syn_sigma_exc_INH', 'I_syn_sigma_inh_INH', 'r_mean_INH']]\n\nMass `describe`:\n\n
# now let us check the parameters.. for this we initialise MultiModel in neurolib's fashion\naln_net = MultiModel(mm_net)\n# parameters are accessible via .params\naln_net.params\n# as you can see the parameters are flattened nested dictionary which follows this nomenclature\n# {\"<network label>.<node label>_index.<mass label>_index.<param name>: param value\"}\n
# as you can see, there are a lot of parameters for a simple 2-node network of ALN models\n# typically you want to change the parameters of all nodes at the same time\n# fortunately, model.params is not your basic dictionary, it's a special one - we call it a `star` dictionary,\n# because you can do this:\ndisplay(aln_net.params[\"*tau\"])\nprint(\"\")\n# so yes, the star works as a glob identifier: by selecting \"*tau\" I get all parameters named tau\n# (I don't care from which mass or node they come)\n# what if I want to change taus only in EXC masses? easy:\ndisplay(aln_net.params[\"*EXC*tau\"])\nprint(\"\")\n# or maybe I want to change taus only in the first node?\ndisplay(aln_net.params[\"*Node_0*tau\"])\nprint(\"\")\n# of course, you can change a param value with this\naln_net.params[\"*Node_0*tau\"] = 13.2\ndisplay(aln_net.params[\"*Node_0*tau\"])\naln_net.params[\"*Node_0*tau\"] = 5.0\n
# case: I want to change all taus except the \"noise\" input taus\n# this gives all the taus, including the \"noise\" ones\ndisplay(aln_net.params[\"*tau*\"])\nprint(\"\")\n# the pipe symbol filters out unwanted keys - here we keep only the taus whose key does NOT include \"input\"\ndisplay(aln_net.params[\"*tau*|input\"])\n
max_rate_e = []\nmin_rate_e = []\n\n# number kept low for testing:\nmue_inputs = np.linspace(0, 2, 2)\n# use: mue_inputs = np.linspace(0, 2, 20)\n\n# now let's match the ALN parameters to those in example-0-aln-minimal and recreate\n# the 1D bifurcation diagram\naln.params[\"*INH*input*mu\"] = 0.5\naln.params[\"*b\"] = 0.0\naln.params[\"ALNNode_0.ALNMassEXC_0.a\"] = 0.0\nfor mue in mue_inputs:\n    aln.params[\"*EXC*input*mu\"] = mue\n    display(aln.params[\"*EXC*input*mu\"])\n    aln.run()\n    max_rate_e.append(np.max(aln.output[0, -int(1000 / aln.params[\"sampling_dt\"]) :]))\n    min_rate_e.append(np.min(aln.output[0, -int(1000 / aln.params[\"sampling_dt\"]) :]))\n
\n{'ALNNode_0.ALNMassEXC_0.input_0.mu': 0.0}\n
plt.plot(mue_inputs, max_rate_e, c=\"k\", lw=2)\nplt.plot(mue_inputs, min_rate_e, c=\"k\", lw=2)\nplt.title(\"Bifurcation diagram of the aln model\")\nplt.xlabel(\"Input to excitatory population\")\nplt.ylabel(\"Min / max firing rate [kHz]\")\n
# let us start by subclassing the Network\n\n\nclass ALNThalamusMiniNetwork(Network):\n\"\"\"\n Simple thalamocortical motif: 1 cortical node ALN + 1 NMM thalamus.\n \"\"\"\n\n # provide basic attributes as name and label\n name = \"ALN 1 node + Thalamus\"\n label = \"ALNThlmNet\"\n\n # define which variables are used to sync, i.e. what coupling variables our nodes need\n sync_variables = [\n # both nodes are connected via excitatory synapses\n \"network_exc_exc\",\n # ALN requires also squared coupling\n \"network_exc_exc_sq\",\n # and INH mass in thalamus also receives excitatory coupling\n \"network_inh_exc\",\n ]\n\n # lastly, we need to define what is default output of the network (this has to be the\n # variable present in all nodes)\n # for us it is excitatory firing rates\n default_output = f\"r_mean_{EXC}\"\n # define all output vars of any interest to us - EXC and INH firing rates and adaptive current in ALN\n output_vars = [f\"r_mean_{EXC}\", f\"r_mean_{INH}\", f\"I_A_{EXC}\"]\n\n def __init__(self, connectivity_matrix, delay_matrix):\n # self connections are resolved within nodes, so zeroes at the diagonal\n assert np.all(np.diag(connectivity_matrix) == 0.0)\n\n # init ALN node with index 0\n aln_node = ALNNode()\n aln_node.index = 0\n # index where the state variables start - for first node it is always 0\n aln_node.idx_state_var = 0\n # set correct indices for noise input\n for mass in aln_node:\n mass.noise_input_idx = [mass.index]\n\n # init thalamus node with index 1\n thalamus = ThalamicNode()\n thalamus.index = 1\n # thalamic state variables start where ALN state variables end - easy\n thalamus.idx_state_var = aln_node.num_state_variables\n # set correct indices of noise input - one per mass, after ALN noise\n # indices\n for mass in thalamus:\n mass.noise_input_idx = [aln_node.num_noise_variables + mass.index]\n\n # now super.__init__ network with these two nodes:\n super().__init__(\n nodes=[aln_node, thalamus],\n 
connectivity_matrix=connectivity_matrix,\n delay_matrix=delay_matrix,\n )\n\n # done! the only other thing we need to do, is to set the coupling variables\n # thalamus vs. ALN are coupled via their firing rates and here we setup the\n # coupling matrices; the super class `Network` comes with some convenient\n # functions for this\n\n def _sync(self):\n\"\"\"\n Set coupling variables - the ones we defined in `sync_variables`\n _sync returns a list of tuples where the first element in each tuple is the coupling \"symbol\"\n and the second is the actual mathematical expression\n for the ease of doing this, `Network` class contains convenience functions for this:\n - _additive_coupling\n - _diffusive_coupling\n - _no_coupling\n here we use additive coupling only\n \"\"\"\n # get indices of coupling variables from all nodes\n exc_indices = [\n next(\n iter(\n node.all_couplings(\n mass_indices=node.excitatory_masses.tolist()\n )\n )\n )\n for node in self\n ]\n assert len(exc_indices) == len(self)\n return (\n # basic EXC <-> EXC coupling\n # within_node_idx is a list of len 2 (because we have two nodes)\n # with indices of coupling variables within the respective state vectors\n self._additive_coupling(\n within_node_idx=exc_indices, symbol=\"network_exc_exc\"\n )\n # squared EXC <-> EXC coupling (only to ALN)\n + self._additive_coupling(\n within_node_idx=exc_indices,\n symbol=\"network_exc_exc_sq\",\n # square connectivity\n connectivity=self.connectivity * self.connectivity,\n )\n # EXC -> INH coupling (only to thalamus)\n + self._additive_coupling(\n within_node_idx=exc_indices,\n symbol=\"network_inh_exc\",\n connectivity=self.connectivity,\n )\n + super()._sync()\n )\n
# let's check what we have
SC = np.array([[0.0, 0.15], [1.2, 0.0]])
delays = np.array([[0.0, 13.0], [13.0, 0.0]])  # thalamocortical delay = 13ms
thalamocortical = MultiModel(ALNThalamusMiniNetwork(connectivity_matrix=SC, delay_matrix=delays))
# the original `MultiModel` instance is always accessible as `MultiModel.model_instance`
display(thalamocortical.model_instance.describe())
# fix parameters for an interesting regime
thalamocortical.params["*g_LK"] = 0.032  # K-leak conductance in thalamus
thalamocortical.params["ALNThlmNet.ALNNode_0.ALNMassEXC_0.a"] = 0.0  # no firing rate adaptation
thalamocortical.params["*b"] = 15.0  # spike adaptation
thalamocortical.params["*tauA"] = 1000.0  # slow adaptation timescale
thalamocortical.params["*EXC*mu"] = 3.4  # background excitation to ALN
thalamocortical.params["*INH*mu"] = 3.5  # background inhibition to ALN
thalamocortical.params["*ALNMass*input*sigma"] = 0.05  # noise in ALN
thalamocortical.params["*TCR*input*sigma"] = 0.005  # noise in thalamus
thalamocortical.params["*input*tau"] = 5.0  # timescale of OU process

# number low for testing:
thalamocortical.params["duration"] = 2000.
# use: thalamocortical.params["duration"] = 20000.  # 20 seconds simulation
thalamocortical.params["sampling_dt"] = 1.0
thalamocortical.run()
/Users/nikola/.virtualenvs/neurolib/lib/python3.7/site-packages/jitcdde/_jitcdde.py:1491: UserWarning: Your input past does not begin at t=0 but at t=1.0. Values before the beginning of the past will be extrapolated. You very likely do not want this.
We can nicely see the interplay between cortical UP and DOWN states (with the UP state dominant and irregular excursions to the DOWN state) and thalamic spindles.
Combining different models might seem hard at first, but it is actually quite intuitive and works just as if you connected the models with pen and paper. The only necessary steps are to define and initialize the individual nodes in the network (done in the __init__ function) and then specify the type of coupling between these nodes (in the _sync() function). That's it!
For more information on how to build a network, and for a deeper understanding of how exactly MultiModel works, please check out the following example, where we build the Jansen-Rit network from scratch!
Here we showcase the MultiModel framework, a standalone framework within neurolib for creating and simulating heterogeneous brain models. By heterogeneous, we mean that a brain network may consist of nodes with totally different dynamics, coupled via a single variable. Imagine having one population model for the thalamus, a different one for the hippocampus, and yet another for the cortex. Naturally, the parameters, the dynamics, and the equations of these models would be completely different. All of this is possible, and even relatively easy, in MultiModel.
To facilitate your heterogeneous experiments, MultiModel comes with a few population models predefined for you, which can be mixed into a brain network in many ways. We provide:
- aln: the adaptive linear-nonlinear (ALN) population model, a mean-field approximation of a delay-coupled network of excitatory and inhibitory adaptive exponential integrate-and-fire (AdEx) neurons
- fitzhugh_nagumo: the FitzHugh-Nagumo model, a two-dimensional slow-fast system and a simplified version of the famous 4D Hodgkin–Huxley model
- hopf: the Hopf model (sometimes called the Stuart-Landau oscillator), a nonlinear oscillator in one complex variable that serves as the normal form of the Hopf bifurcation in dynamical systems
- thalamus: a conductance-based population rate model of the thalamus; a Jansen-Rit-like population model with current-based voltage evolution that includes adaptation (K-leak), calcium, and rectifying currents
- wilson_cowan: the Wilson-Cowan model, a simple model of interconnected populations of excitatory and inhibitory neurons
- wong_wang: the Wong-Wang model, an approximation of a biophysically based cortical network model. Our implementation comes in two flavors:
  - the original Wong-Wang model with excitatory and inhibitory subtypes
  - the reduced Wong-Wang model with simpler dynamics and no EXC/INH distinction
Moreover, the MultiModel framework is built in such a way that creating and connecting new models (e.g., Jansen-Rit) is easy and intuitive. An example of how to implement a brand new model in MultiModel is provided in the following example notebook (example-4.1-create-new-model.ipynb).
MultiModel relies on the modeling hierarchy that is typically implicit in whole-brain modeling. This hierarchy has three levels:
- NeuralMass: represents a single neural population (typically excitatory, inhibitory, or without a subtype) and is defined by a set of parameters and (possibly delayed, possibly stochastic) differential equations
- Node: represents a single brain node; it is a set of connected neural masses (so, e.g., a single Wilson-Cowan node consists of one excitatory and one inhibitory Wilson-Cowan NeuralMass)
- Network: represents a brain network; it is a set of connected nodes (these can be of any type, as long as the coupling variables are the same)
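The three levels are plain containment, which can be pictured with a toy mirror of the hierarchy (the classes below are illustrative stand-ins, not neurolib's actual base classes):

```python
class NeuralMass:
    # one population: parameters + differential equations live here
    def __init__(self, name):
        self.name = name

class Node:
    # one brain area: a set of connected masses
    def __init__(self, masses):
        self.masses = masses

class Network:
    # whole brain: a set of connected nodes
    def __init__(self, nodes):
        self.nodes = nodes

# e.g. a Wilson-Cowan-like node contains one EXC and one INH mass
net = Network([Node([NeuralMass("EXC"), NeuralMass("INH")])])
```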
Although the magic happens at the level of the NeuralMass (by magic, we mean the dynamics), users can only simulate (integrate) a Node or a Network. In other words, even for models without excitatory/inhibitory subtyping (e.g., Hopf or FitzHugh-Nagumo), we create a Node consisting of one NeuralMass. In the case of, e.g., the Wilson-Cowan, ALN, or original Wong-Wang model, the Node consists of one excitatory and one inhibitory mass. More info on the modeling hierarchy and how it actually works is provided in the following example notebook (example-4.1-create-new-model.ipynb), where we subclass the base classes of this hierarchy to build a new model.
Basic usage in neurolib
(In the following, we expect the reader to be mildly familiar with how neurolib works, e.g. how to run a model, how to change its parameters, and how to get model results.)
Simulating the node

MultiModel parameters and other accessible attributes
Since MultiModel is able to simulate heterogeneous models, the internals of how parameters work are a bit more complex than in core neurolib. Each mass has its own parameters, each node then gathers the parameters of all masses within that node, and finally, the network gathers all parameters from each node in the network. So, hierarchy again. To make it easier to navigate through MultiModel hierarchies, some attributes are implemented at all hierarchy levels.
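The star notation used throughout these examples resolves glob-like patterns against the fully qualified dotted parameter names of this hierarchy. A minimal sketch of the matching idea using the stdlib fnmatch module (the set_star helper is hypothetical, not neurolib's implementation; the parameter names mimic those used in this notebook):

```python
from fnmatch import fnmatchcase

# fully qualified parameter names, as they appear in a MultiModel hierarchy
params = {
    "ALNThlmNet.ALNNode_0.ALNMassEXC_0.input_0.mu": 0.4,
    "ALNThlmNet.ALNNode_0.ALNMassINH_1.input_0.mu": 0.3,
    "ALNThlmNet.ThalamicNode_1.TCR_0.g_LK": 0.024,
}

def set_star(params, pattern, value):
    # hypothetical helper: assign `value` to every parameter whose
    # dotted name matches the glob-style `pattern`
    for name in params:
        if fnmatchcase(name, pattern):
            params[name] = value

set_star(params, "*EXC*mu", 3.4)  # touches only the EXC input mean
set_star(params, "*g_LK", 0.032)  # touches the thalamic K-leak conductance
```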
Connecting two models
So far, we have only shown how to use MultiModel with a single dynamical model (ALN), and that is no fun. I mean, all of this is already possible in core neurolib, and in the core, it is much faster.
However, the real strength of MultiModel is combining different models into one network. Let us build a thalamocortical model using one node of the thalamic population model and one node of ALN, representing the cortex.
import matplotlib.pyplot as plt
import numpy as np
import symengine as se
from IPython.display import display
from jitcdde import input as system_input
from neurolib.models.multimodel import MultiModel, ThalamicNode
from neurolib.models.multimodel.builder.base.constants import LAMBDA_SPEED
from neurolib.models.multimodel.builder.base.network import Network, Node
from neurolib.models.multimodel.builder.base.neural_mass import NeuralMass
from neurolib.utils.functions import getPowerSpectrum
from neurolib.utils.stimulus import Input, OrnsteinUhlenbeckProcess, StepInput
A quick detour before we dive into the model itself. The Jansen-Rit model is typically driven with uniformly distributed noise, as the authors wanted to model nonspecific input (they used the term background spontaneous activity). For this, we quickly create our model input by subclassing the Input class (a tutorial on how to use stimuli in neurolib is given elsewhere).
class UniformlyDistributedNoise(Input):
    """
    Uniformly distributed noise process between two values.
    """

    def __init__(self, low, high, n=1, seed=None):
        # save arguments as attributes for later
        self.low = low
        self.high = high
        # init super
        super().__init__(n=n, seed=seed)

    def generate_input(self, duration, dt):
        # generate time vector
        self._get_times(duration=duration, dt=dt)
        # generate the noise process itself with the correct shape
        # (num. processes x time steps)
        return np.random.uniform(
            self.low, self.high, (self.n, self.times.shape[0])
        )
# let us build a proper hierarchy, i.e. we first build a Jansen-Rit mass
class SingleJansenRitMass(NeuralMass):
    """
    Single Jansen-Rit mass implementing the whole three-population dynamics.

    Reference:
    Jansen, B. H., & Rit, V. G. (1995). Electroencephalogram and visual evoked potential
    generation in a mathematical model of coupled cortical columns. Biological cybernetics,
    73(4), 357-366.
    """

    # all these attributes are compulsory to fill in
    name = "Jansen-Rit mass"
    label = "JRmass"

    num_state_variables = 7  # 6 ODEs + firing rate coupling variable
    num_noise_variables = 1  # single external input
    # NOTE
    # external inputs (so-called noise_variables) are typically the background noise drive in models,
    # however, this can be any type of stimulus - periodic stimulus, step stimulus, square pulse,
    # anything. Therefore you may want to add more stimuli, e.g. for the Jansen-Rit model three, one
    # for each of its populations. Here we do not stimulate our Jansen-Rit model, so we only use the
    # actual noise drive to the excitatory interneuron population.
    # as dictionary {index of state var: its name}
    coupling_variables = {6: "r_mean_EXC"}
    # as list
    state_variable_names = [
        "v_pyr",
        "dv_pyr",
        "v_exc",
        "dv_exc",
        "v_inh",
        "dv_inh",
        # to comply with other `MultiModel` nodes
        "r_mean_EXC",
    ]
    # as list
    # note on parameters C1 - C4: all papers use one C, and C1-C4 are
    # defined as various ratios of C, typically: C1 = C, C2 = 0.8*C,
    # C3 = C4 = 0.25*C, therefore we use only `C` and scale it in the
    # dynamics definition
    required_params = [
        "A",
        "a",
        "B",
        "b",
        "C",
        "v_max",
        "v0",
        "r",
        "lambda",
    ]
    # list of required couplings when part of a `Node` or `Network`
    # `network_exc_exc` is the default excitatory coupling between nodes
    required_couplings = ["network_exc_exc"]
    # here we define the default noise input to the Jansen-Rit model (this can be changed later)
    # for a quick test, we follow the original Jansen and Rit paper and use uniformly distributed
    # noise between 120 - 320 Hz; but we do it in kHz, hence 0.12 - 0.32
    # fix seed for reproducibility
    _noise_input = [UniformlyDistributedNoise(low=0.12, high=0.32, seed=42)]

    def _sigmoid(self, x):
        """
        Sigmoidal transfer function which is the same for all populations.
        """
        # notes:
        # - all parameters are accessible as self.params - it is a dictionary
        # - the mathematical definition (ODEs) is done in symbolic mathematics - all functions have
        #   to be imported from the `symengine` module, hence se.exp, which is a symbolic exponential
        return self.params["v_max"] / (
            1.0 + se.exp(self.params["r"] * (self.params["v0"] - x))
        )

    def __init__(self, params=None, seed=None):
        # init this `NeuralMass` - use passed parameters or default ones
        # parameters are now accessible as self.params, seed as self.seed
        super().__init__(params=params or JR_DEFAULT_PARAMS, seed=seed)

    def _initialize_state_vector(self):
        """
        Initialize state vector.
        """
        np.random.seed(self.seed)
        # random initial average potentials around zero
        self.initial_state = (
            np.random.normal(size=self.num_state_variables)
            # * np.array([10.0, 0.0, 10.0, 0.0, 10.0, 0.0, 0.0])
        ).tolist()

    def _derivatives(self, coupling_variables):
        """
        Here the magic happens: the dynamics are defined using the symbolic maths package symengine.
        """
        # first, we need to unwrap the state vector
        (
            v_pyr,
            dv_pyr,
            v_exc,
            dv_exc,
            v_inh,
            dv_inh,
            firing_rate,
        ) = self._unwrap_state_vector()  # this function does everything for us
        # now we need to write down our dynamics
        # PYR dynamics
        d_v_pyr = dv_pyr
        d_dv_pyr = (
            self.params["A"] * self.params["a"] * self._sigmoid(v_exc - v_inh)
            - 2 * self.params["a"] * dv_pyr
            - self.params["a"] ** 2 * v_pyr
        )
        # EXC dynamics: system input comes into play here
        d_v_exc = dv_exc
        d_dv_exc = (
            self.params["A"]
            * self.params["a"]
            * (
                # system input as a function from jitcdde (also in symengine) with the proper index:
                # in our case we have only one noise input (can be more), so index 0
                system_input(self.noise_input_idx[0])
                # C2 = 0.8*C, C1 = C
                + (0.8 * self.params["C"]) * self._sigmoid(self.params["C"] * v_pyr)
            )
            - 2 * self.params["a"] * dv_exc
            - self.params["a"] ** 2 * v_exc
        )
        # INH dynamics
        d_v_inh = dv_inh
        d_dv_inh = (
            self.params["B"] * self.params["b"]
            # C3 = C4 = 0.25 * C
            * (0.25 * self.params["C"])
            * self._sigmoid((0.25 * self.params["C"]) * v_pyr)
            - 2 * self.params["b"] * dv_inh
            - self.params["b"] ** 2 * v_inh
        )
        # firing rate computation
        # firing rate as a dummy dynamical variable with infinitely fast
        # fixed point dynamics
        firing_rate_now = self._sigmoid(v_exc - v_inh)
        d_firing_rate = -self.params["lambda"] * (firing_rate - firing_rate_now)

        # now just return a list of derivatives in the correct order
        return [d_v_pyr, d_dv_pyr, d_v_exc, d_dv_exc, d_v_inh, d_dv_inh, d_firing_rate]
And we are done with the basics! The only things we really need are to define the attributes (such as how many variables we have, what couplings we have, what the noise looks like, etc.) and the actual dynamics as symbolic expressions. Symbolic expressions are easy: all basic operators like +, -, *, /, and ** are overloaded, which means you can simply use them and not think about it. Functions such as sin, log, or exp must be imported from symengine and used from there. Now we define a default set of parameters. Do not forget - MultiModel defines everything in ms, therefore the parameters need to be in ms, kHz, and similar.
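As a sanity check of that unit scaling, the rate constants published by Jansen and Rit in Hz convert to the millisecond-based values used here by a plain division (simple arithmetic, nothing neurolib-specific):

```python
# original Jansen & Rit (1995) rate constants, in Hz
a_hz, b_hz, v_max_hz = 100.0, 50.0, 5.0

# MultiModel works in milliseconds, so rates become kHz (1 kHz = 1000 Hz)
a_khz = a_hz / 1000.0          # -> 0.1
b_khz = b_hz / 1000.0          # -> 0.05
v_max_khz = v_max_hz / 1000.0  # -> 0.005
```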
JR_DEFAULT_PARAMS = {
    "A": 3.25,  # mV
    "B": 22.0,  # mV
    # `a` and `b` are originally 100Hz and 50Hz
    "a": 0.1,  # kHz
    "b": 0.05,  # kHz
    "v0": 6.0,  # mV
    # v_max is originally 5Hz
    "v_max": 0.005,  # kHz
    "r": 0.56,  # mV^-1
    "C": 135.0,
    # parameter for dummy `r` dynamics
    "lambda": LAMBDA_SPEED,
}
The next step is to create a Node. The Node is the second level in the hierarchy, and it can already be wrapped into MultiModel and treated as any other neurolib model. In our case, creating the Node is really simple: it has only one mass, no delays, and no connectivity.
class JansenRitNode(Node):
    """
    Jansen-Rit node with 1 neural mass representing the 3-population model.
    """

    name = "Jansen-Rit node"
    label = "JRnode"

    # if the Node is integrated in isolation, what network input should we use
    # zero by default = no network input for a one-node model
    default_network_coupling = {"network_exc_exc": 0.0}

    # default output is the firing rate of the pyramidal population
    default_output = "r_mean_EXC"

    # list of all variables that are accessible as outputs
    output_vars = ["r_mean_EXC", "v_pyr", "v_exc", "v_inh"]

    def __init__(self, params=None, seed=None):
        # in the `Node` __init__, the list of masses is created and passed
        jr_mass = SingleJansenRitMass(params=params, seed=seed)
        # each mass has to have an index, in this case it is simply 0
        jr_mass.index = 0
        # call super and properly initialize the Node
        super().__init__(neural_masses=[jr_mass])

        self.excitatory_masses = np.array([0])

    def _sync(self):
        # this function typically defines the coupling between masses
        # within one node, but in our case there is nothing to define
        return []
And we are done. At this point, we can integrate our Jansen-Rit model and see some results.
# init model - fix seed for reproducibility (random init. conditions)
jr_model = MultiModel.init_node(JansenRitNode(seed=42))

# see parameters
print("Parameters:")
display(jr_model.params)
print("")

# see describe
print("Describe:")
display(jr_model.model_instance.describe())
# run the model for 5 seconds - all in ms
jr_model.params["sampling_dt"] = 1.0
jr_model.params["duration"] = 5000
jr_model.params["backend"] = "jitcdde"
jr_model.run()
The results look good! With the same parameters as in the original Jansen and Rit paper, we get \(\alpha\) activity with a spectral peak around 10 Hz. Just as a proof of concept, let us try the second MultiModel backend.
# run the model for 5 seconds - all in ms
jr_model.params["sampling_dt"] = 1.0
jr_model.params["dt"] = 0.1
jr_model.params["duration"] = 5000
jr_model.params["backend"] = "numba"
jr_model.run()
Everything works as it should - we have our Jansen-Rit model! In the next step, we will showcase how the new model can be connected and coupled to other models (similarly to the first example).
# let us start by subclassing the Network


class JansenRitThalamusMiniNetwork(Network):
    """
    Simple thalamocortical motif: 1 cortical Jansen-Rit node + 1 NMM thalamus.
    """

    # provide basic attributes as name and label
    name = "Jansen-Rit 1 node + Thalamus"
    label = "JRThlmNet"

    # define which variables are used to sync, i.e. what coupling variables our nodes need
    sync_variables = [
        # both nodes are connected via excitatory synapses
        "network_exc_exc",
        # and the INH mass in the thalamus also receives excitatory coupling
        "network_inh_exc",
    ]

    # lastly, we need to define the default output of the network (this has to be a
    # variable present in all nodes)
    # for us it is the excitatory firing rate
    default_output = "r_mean_EXC"
    # define all output vars of interest to us - EXC and INH firing rates
    output_vars = ["r_mean_EXC", "r_mean_INH"]

    def __init__(self, connectivity_matrix, delay_matrix, seed=None):
        # self-connections are resolved within nodes, so zeroes on the diagonal
        assert np.all(np.diag(connectivity_matrix) == 0.0)

        # init Jansen-Rit node with index 0
        jr_node = JansenRitNode(seed=seed)
        jr_node.index = 0
        # index where the state variables start - for the first node it is always 0
        jr_node.idx_state_var = 0
        # set correct indices for noise input - in JR we have only one noise source
        jr_node[0].noise_input_idx = [0]

        # init thalamus node with index 1
        thalamus = ThalamicNode()
        thalamus.index = 1
        # thalamic state variables start where the Jansen-Rit state variables end - easy
        thalamus.idx_state_var = jr_node.num_state_variables
        # set correct indices of noise input - one per mass, after the Jansen-Rit noise indices
        for mass in thalamus:
            mass.noise_input_idx = [jr_node.num_noise_variables + mass.index]

        # now super().__init__ the network with these two nodes:
        super().__init__(
            nodes=[jr_node, thalamus],
            connectivity_matrix=connectivity_matrix,
            delay_matrix=delay_matrix,
        )

    # done! the only other thing we need to do is set the coupling variables
    # thalamus and Jansen-Rit are coupled via their firing rates and here we set up the
    # coupling matrices; the super class `Network` comes with some convenience
    # functions for this

    def _sync(self):
        """
        Set the coupling variables - the ones we defined in `sync_variables`.
        _sync returns a list of tuples where the first element in each tuple is the coupling "symbol"
        and the second is the actual mathematical expression.
        For the ease of doing this, the `Network` class contains convenience functions:
        - _additive_coupling
        - _diffusive_coupling
        - _no_coupling
        Here we use additive coupling only.
        """
        # get indices of coupling variables from all nodes
        exc_indices = [
            next(
                iter(
                    node.all_couplings(
                        mass_indices=node.excitatory_masses.tolist()
                    )
                )
            )
            for node in self
        ]
        assert len(exc_indices) == len(self)
        return (
            # basic EXC <-> EXC coupling
            # within_node_idx is a list of len 2 (because we have two nodes)
            # with indices of coupling variables within the respective state vectors
            self._additive_coupling(
                within_node_idx=exc_indices, symbol="network_exc_exc"
            )
            # EXC -> INH coupling (only to thalamus)
            + self._additive_coupling(
                within_node_idx=exc_indices,
                symbol="network_inh_exc",
                connectivity=self.connectivity,
            )
            + super()._sync()
        )
# let's check what we have
# in the ALN-thalamus case the matrix was [[0.0, 0.15], [1.2, 0.0]] - JR produces firing rates
# around 5Hz (5 times lower than ALN)
SC = np.array([[0.0, 0.15], [6., 0.0]])
delays = np.array([[0.0, 13.0], [13.0, 0.0]])  # thalamocortical delay = 13ms
thalamocortical = MultiModel(JansenRitThalamusMiniNetwork(connectivity_matrix=SC, delay_matrix=delays, seed=42))
# the original `MultiModel` instance is always accessible as `MultiModel.model_instance`
display(thalamocortical.model_instance.describe())
# fix parameters for an interesting regime
thalamocortical.params["*g_LK"] = 0.032  # K-leak conductance in thalamus
thalamocortical.params["*TCR*input*sigma"] = 0.005  # noise in thalamus
thalamocortical.params["*input*tau"] = 5.0  # timescale of OU process
thalamocortical.params["duration"] = 20000.  # 20 seconds simulation
thalamocortical.params["sampling_dt"] = 1.0
thalamocortical.run()
So the model works, just like that! Of course, in this case we do not see anything interesting in the modelled dynamics, since that would require a proper investigation of various parameter ranges. Consider this a proof of concept, showing that we can very easily couple two very different population models.
Creating new model from scratch
The real power of the MultiModel framework is in fast prototyping of heterogeneous models. To showcase this, in this notebook we create a brand new model in the framework (the famous Jansen-Rit model), and then create a thalamocortical mini network with one node representing the thalamus and one Jansen-Rit node representing a cortical column.
The Jansen-Rit model is a neural population model of a local cortical circuit. It contains three interconnected neural populations: one for the pyramidal projection neurons and two for excitatory and inhibitory interneurons forming feedback loops.
The equations for the Jansen-Rit model read:
\begin{align}
\ddot{x}_{0} &= Aa\,\mathrm{Sigm}\left(x_{1} - x_{2}\right) - 2a\dot{x}_{0} - a^{2}x_{0} \\
\ddot{x}_{1} &= Aa\left[p + C_{2}\,\mathrm{Sigm}\left(C_{1}x_{0}\right)\right] - 2a\dot{x}_{1} - a^{2}x_{1} \\
\ddot{x}_{2} &= BbC_{4}\,\mathrm{Sigm}\left(C_{3}x_{0}\right) - 2b\dot{x}_{2} - b^{2}x_{2} \\
\mathrm{Sigm}(x) &= \frac{v_{max}}{1 + \mathrm{e}^{r(v_{0} - x)}}
\end{align}
Of course, in order to implement the above equations numerically, the system of three second-order ODEs is rewritten into a system of six first-order ODEs.
The actual implementation will be a bit more involved than simply writing down the above equations. The building block of any proper MultiModel is the NeuralMass. The Jansen-Rit model actually summarises the activity of a cortical column consisting of three populations: a population of pyramidal cells interacting with two populations of interneurons - one excitatory and one inhibitory. Moreover, the \(x_i\) represent average membrane potentials, but typically, neuronal models (at least in neurolib) are coupled via firing rates. For this reason, our main output variable will actually be the firing rate of the main, pyramidal population. The average membrane potential of the pyramidal population is \(x = x_{1} - x_{2}\) and its firing rate is then \(r = \mathrm{Sigm}(x) = \mathrm{Sigm}(x_{1} - x_{2})\). A similar strategy (a sigmoidal transfer function applied to the average membrane potential) is used for the thalamic model.
The coupling variable in MultiModel must be the same across all hierarchical levels. Individual populations in the Jansen-Rit model are coupled via their average membrane potentials \(x_{i}\), \(i\in\{0, 1, 2\}\). However, the "global" coupling variable for the node would be the firing rate of the pyramidal population \(r\), introduced in the paragraph above. To reconcile this, two options exist in MultiModel:
- create a NeuralMass representing the pyramidal population with two coupling variables, \(x_{0}\) and \(r\); the NeuralMass objects representing the interneurons would each have one coupling variable, \(x_{1,2}\)
  - advantages: cleaner implementation, more modular (a J-R Node with more than 3 masses can be created very easily)
  - disadvantages: more code, might be harder to navigate for the beginner
- create one NeuralMass representing all three populations with one coupling variable \(r\); the membrane potentials are then not really coupling variables, since the whole dynamics is contained in one object
  - advantages: less code, easier to grasp
  - disadvantages: less modular, one cannot simply edit the J-R model, since everything is "hardcoded" into one NeuralMass object
In order to build a basic understanding of the building blocks, we follow the second option here (less code, easier to grasp). For interested readers, the first (modular) option will be implemented in MultiModel itself, so you can study the model files.
Our strategy for this notebook is:
1. implement a single NeuralMass object representing all three populations of the Jansen-Rit model, with the single coupling variable \(r\)
2. implement a "dummy" Node with a single NeuralMass (a requirement - one cannot couple a Node to a NeuralMass to build a network)
3. experiment with connecting this Jansen-Rit cortical model to the thalamic population model
One last note: all model dynamics and parameters in MultiModel are defined in milliseconds, therefore we will scale the default parameters accordingly.
Let us start with the imports:
Test and simulate Jansen-Rit model
In order to simulate our newly created model, we just need to wrap it with MultiModel as in the last example and see how it goes.
Couple Jansen-Rit to thalamic model
Here we practically copy the previous example, where we coupled an ALN node with a thalamic node; but instead of ALN representing a cortical column, we use our brand new Jansen-Rit model.
Example 4.2 multimodel backends and optimization
import matplotlib.pyplot as plt
import neurolib.utils.functions as func
import numpy as np
from IPython.display import display
from neurolib.models.multimodel import (
    ALNNode,
    FitzHughNagumoNetwork,
    FitzHughNagumoNode,
    MultiModel,
)
from neurolib.optimize.evolution import Evolution
from neurolib.optimize.exploration import BoxSearch
from neurolib.utils.parameterSpace import ParameterSpace

# a nice color map
plt.rcParams["image.cmap"] = "plasma"
# create a FitzHugh-Nagumo Node
fhn_node = FitzHughNagumoNode()
# necessary attributes when creating a Node from scratch
fhn_node.index = 0
fhn_node.idx_state_var = 0
fhn_node.init_node()

display(fhn_node._derivatives())
display(len(fhn_node._derivatives()), fhn_node.num_state_variables)
As we can see, the _derivatives() function returns a list of equations, in this case of length 2 (which is, of course, equal to fhn.num_state_variables). The current_y(<index>) lingo is taken from jitcdde. As written above, all equations are symbolic, and therefore current_y(<index>) is a symengine Symbol representing the state vector entry with index <index> at the current time t. In other words, current_y(0) is the first variable (in the FitzHugh-Nagumo model, this is \(x\)), while current_y(1) is the second variable (\(y\)). The past_y() lingo is similar, but encodes either the past of the state vector, i.e. delayed interactions, or the external input (noise or stimulus). In this case it represents the external input (you can tell since it is past_y(-external_input...)). Now let us see what it looks like for a network:
Now, since we have 2 nodes, the total number of state variables is 4, and we see the equations for the whole network as a list of 4 symbolic equations. In the network equations we see new symbols: network_x_0 and network_x_1. At this point, these are really just symengine symbols, but they represent the coupling between the nodes. And that is the topic of the next section.
In this particular case, we have 2 coupling variables and 2 nodes, hence 4 coupling terms. The coupling of \(y\) is zero. As for the coupling of the \(x\) variables between nodes, you can now see how it works: network_x_0 just means that we are defining the network coupling of variable x for the first node, and it is 1.43 (from the SC matrix we passed when creating the FHN network) times the state variable with index 2 at time t - 10 milliseconds, minus the current state variable with index 0 (diffusive coupling). Similarly for node 1 (with a different coupling strength and state variable indices, of course).
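Numerically, that diffusive coupling term is simply the coupling strength times the difference between the delayed state of the other node and the node's own current state. A minimal illustrative sketch (the function is hypothetical, not neurolib API; 1.43 is the example coupling strength from the text):

```python
def diffusive_coupling(strength, x_other_delayed, x_self_now):
    # coupling term: strength * (x_j(t - delay) - x_i(t))
    return strength * (x_other_delayed - x_self_now)

# if both nodes sit at the same value, the diffusive coupling vanishes
term = diffusive_coupling(1.43, 0.7, 0.7)  # -> 0.0
```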
Once the symbols from the _sync() function are inserted into _derivatives() at the proper places, we have a full definition of the model. This is exactly what both backends do: they gather the equations (_derivatives()), look up the coupling terms (_sync()), and integrate the model forward in time.
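Conceptually, the numba backend then steps these gathered equations forward with a fixed-step Euler scheme. A minimal self-contained sketch of that idea (illustrative only, not neurolib's actual backend code):

```python
def euler_integrate(rhs, y0, dt, n_steps):
    # rhs(y) returns the list of derivatives, which is what _derivatives() provides
    y = list(y0)
    for _ in range(n_steps):
        dy = rhs(y)
        y = [yi + dt * dyi for yi, dyi in zip(y, dy)]
    return y

# e.g. a simple linear decay dy/dt = -y, 10 Euler steps with dt = 0.1
y_end = euler_integrate(lambda y: [-yi for yi in y], [1.0], 0.1, 10)
```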
fhn_mm = MultiModel(fhn_net)
# 2 second run
fhn_mm.params["duration"] = 2000.0
fhn_mm.params["backend"] = "jitcdde"
# jitcdde works with adaptive dt, you only set the sampling dt
fhn_mm.params["sampling_dt"] = 1.0
fhn_mm.run()

plt.plot(fhn_mm.x.T)
fhn_mm = MultiModel(fhn_net)
# 2 second run
fhn_mm.params["duration"] = 2000.0
fhn_mm.params["backend"] = "numba"
# numba uses the Euler scheme, so dt is important!
fhn_mm.params["dt"] = 0.1
fhn_mm.params["sampling_dt"] = 1.0
fhn_mm.run()

plt.plot(fhn_mm.x.T)
[<matplotlib.lines.Line2D at 0x15a44fad0>,
 <matplotlib.lines.Line2D at 0x15a2a0e10>]
# first init the MultiModel
aln_mm = MultiModel.init_node(ALNNode())
display(aln_mm.params)
# match params to the core ALN model
aln_mm.params["backend"] = "numba"
aln_mm.params["dt"] = 0.1  # ms
aln_mm.params["*c_gl"] = 0.3
aln_mm.params["*b"] = 0.0
aln_mm.params["ALNNode_0.ALNMassEXC_0.a"] = 0.0

# set up the exploration - using star notation

# parameters = ParameterSpace(
#     {
#         "*EXC*input*mu": np.linspace(0, 3, 21),
#         "*INH*input*mu": np.linspace(0, 3, 21),
#     },
#     allow_star_notation=True,
# )

# set up the exploration - using exact parameter names
# in this case (one node, search over noise mu) the two are equivalent!
parameters = ParameterSpace(
    {
        # 2 values per dimension is for quick testing only; for a real
        # exploration use e.g. 21 or rather many more
        "ALNNode_0.ALNMassEXC_0.input_0.mu": np.linspace(0, 3, 2),
        "ALNNode_0.ALNMassINH_1.input_0.mu": np.linspace(0, 3, 2),
        # "ALNNode_0.ALNMassEXC_0.input_0.mu": np.linspace(0, 3, 21),
        # "ALNNode_0.ALNMassINH_1.input_0.mu": np.linspace(0, 3, 21),
    },
    allow_star_notation=True,
)
search = BoxSearch(aln_mm, parameters, filename="example-4.2-exploration.hdf")
MainProcess pypet.storageservice.HDF5StorageService INFO I will use the hdf5 file `./data/hdf/example-4.2-exploration.hdf`.
MainProcess pypet.environment.Environment INFO Environment initialized.
MainProcess root INFO Number of parameter configurations: 441
MainProcess root INFO BoxSearch: Environment initialized.
search.run()
MainProcess pypet.environment.Environment INFO STARTING runs of trajectory `results-2021-07-07-16H-40M-57S`.
MainProcess pypet.environment.Environment INFO Starting multiprocessing with at most 8 processes running at the same time.
MainProcess pypet INFO PROGRESS: Finished 441/441 runs [====================]100.0%
MainProcess pypet.environment.Environment INFO FINISHED all runs of trajectory `results-2021-07-07-16H-40M-57S`.
MainProcess pypet.environment.Environment INFO All runs of trajectory `results-2021-07-07-16H-40M-57S` were completed successfully.
search.loadResults()
MainProcess root INFO Loading results from ./data/hdf/example-4.2-exploration.hdf
MainProcess root INFO Analyzing trajectory results-2021-07-07-16H-40M-57S
MainProcess root INFO Creating `dfResults` dataframe ...
MainProcess root INFO Loading all results to `results` dictionary ...
100%|██████████| 441/441 [00:02<00:00, 219.34it/s]
MainProcess root INFO Aggregating results to `dfResults` ...
100%|██████████| 441/441 [00:00<00:00, 3745.83it/s]
MainProcess root INFO All results loaded.
print(f"Number of results: {len(search.results)}")
Number of results: 441
# Example analysis of the results
# The .results attribute is a list and can be indexed by the run
# number (which is also the index of the pandas dataframe .dfResults).
# Here we compute the maximum firing rate of the node in the last second
# and add the result (a float) to the pandas dataframe.
for i in search.dfResults.index:
    search.dfResults.loc[i, "max_r"] = np.max(
        search.results[i]["r_mean_EXC"][:, -int(1000 / aln_mm.params["dt"]) :]
    )
plt.imshow(
    search.dfResults.pivot_table(
        values="max_r",
        index="ALNNode_0.ALNMassINH_1.input_0.mu",
        columns="ALNNode_0.ALNMassEXC_0.input_0.mu",
    ),
    extent=[
        min(search.dfResults["ALNNode_0.ALNMassEXC_0.input_0.mu"]),
        max(search.dfResults["ALNNode_0.ALNMassEXC_0.input_0.mu"]),
        min(search.dfResults["ALNNode_0.ALNMassINH_1.input_0.mu"]),
        max(search.dfResults["ALNNode_0.ALNMassINH_1.input_0.mu"]),
    ],
    origin="lower",
)
plt.colorbar(label="Maximum rate [kHz]")
plt.xlabel("Input to E")
plt.ylabel("Input to I")
MainProcess root INFO Loading precomputed transfer functions from /Users/nikola/.virtualenvs/neurolib/lib/python3.7/site-packages/neurolib/models/multimodel/builder/../../aln/aln-precalc/quantities_cascade.h5
MainProcess root INFO All transfer functions loaded.
MainProcess root INFO ALNNode: Model initialized.
# use the same loss function as example-2.1
def evaluateSimulation(traj):
    # the trajectory id is provided as an attribute
    rid = traj.id
    # this function provides a model with the particular
    # parameter set for this given run
    model = evolution.getModelFromTraj(traj)
    # parameters can also be modified after loading
    model.params["dt"] = 0.1
    model.params["duration"] = 2 * 1000.0
    # and the simulation is run
    model.run()

    # compute the power spectrum
    frs, powers = func.getPowerSpectrum(
        model.r_mean_EXC[:, -int(1000 / model.params["dt"]) :], dt=model.params["dt"]
    )
    # find the peak frequency
    domfr = frs[np.argmax(powers)]
    # fitness evaluation: let's try to find a 25 Hz oscillation
    fitness = abs(domfr - 25)
    # deap needs a fitness *tuple*!
    fitness_tuple = ()
    # more fitness values could be added
    fitness_tuple += (fitness,)
    # we need to return the fitness tuple and the outputs of the model
    return fitness_tuple, model.outputs
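The fitness above hinges on finding the dominant frequency of the excitatory rate. As a minimal, self-contained sketch of the same idea using a plain FFT (func.getPowerSpectrum is the actual helper used above; dominant_frequency here is a hypothetical stand-in for illustration):

```python
import numpy as np

def dominant_frequency(signal, dt_ms):
    # power spectrum of the demeaned signal; dt is in ms, frequencies in Hz
    freqs = np.fft.rfftfreq(len(signal), d=dt_ms / 1000.0)
    power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    return freqs[np.argmax(power)]

# a 25 Hz test oscillation, 2 s at dt = 0.1 ms
dt = 0.1
t = np.arange(0, 2000.0, dt) / 1000.0  # time in seconds
rate = np.sin(2.0 * np.pi * 25.0 * t)

domfr = dominant_frequency(rate, dt)
fitness = abs(domfr - 25.0)  # the quantity the evolution minimizes
```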
pars = ParameterSpace(
    ["*EXC*input*mu", "*INH*input*mu"],
    [[0.0, 4.0], [0.0, 4.0]],
    allow_star_notation=True,
)

evolution = Evolution(
    evalFunction=evaluateSimulation,
    parameterSpace=pars,
    model=aln_mm,  # use our `MultiModel` here
    weightList=[-1.0],
    # POP_INIT_SIZE=16,  # multiple of the number of cores
    # POP_SIZE=16,
    # low numbers for testing; for a real evolution use many more runs
    POP_INIT_SIZE=4,  # multiple of the number of cores
    POP_SIZE=4,
    NGEN=2,
    filename="example-4.2-evolution.hdf",
)
MainProcess root INFO Trajectory Name: results-2021-07-07-16H-45M-51S
MainProcess root INFO Storing data to: ./data/hdf/example-4.2-evolution.hdf
MainProcess root INFO Number of cores: 8
MainProcess pypet.storageservice.HDF5StorageService INFO I will use the hdf5 file `./data/hdf/example-4.2-evolution.hdf`.
MainProcess pypet.environment.Environment INFO Environment initialized.
MainProcess root INFO Evolution: Using algorithm: adaptive
MainProcess root INFO Evolution: Individual generation: <function randomParametersAdaptive at 0x159a6df80>
MainProcess root INFO Evolution: Mating operator: <function cxBlend at 0x159a43560>
MainProcess root INFO Evolution: Mutation operator: <function gaussianAdaptiveMutation_nStepSizes at 0x159a70440>
MainProcess root INFO Evolution: Parent selection: <function selRank at 0x159a70170>
MainProcess root INFO Evolution: Selection operator: <function selBest_multiObj at 0x159a70200>
evolution.run(verbose=False)
MainProcess root INFO Evaluating initial population of size 16 ...
MainProcess pypet.environment.Environment INFO Starting multiprocessing with at most 8 processes running at the same time.
MainProcess pypet INFO PROGRESS: Finished 16/16 runs [====================]100.0%
MainProcess root INFO Start of evolution
MainProcess root INFO ----------- Generation 1 -----------
MainProcess root INFO Best individual is [1.2026081698319384, 0.7330736886493492, 1.3333333333333333, 1.3333333333333333]
MainProcess root INFO Score: -5.0
MainProcess root INFO Fitness: (5.0,)
MainProcess root INFO --- Population statistics ---
MainProcess root INFO ----------- Generation 2 -----------
MainProcess root INFO Best individual is [1.2026081698319384, 0.7330736886493492, 1.3333333333333333, 1.3333333333333333]
MainProcess root INFO Score: -5.0
MainProcess root INFO Fitness: (5.0,)
MainProcess root INFO --- Population statistics ---
MainProcess root INFO ----------- Generation 3 -----------
MainProcess root INFO Best individual is [1.3018429030776317, 0.4613335264234024, 1.2708417799029919, 0.3261423030366109]
MainProcess root INFO Score: -2.0
MainProcess root INFO Fitness: (2.0,)
MainProcess root INFO --- Population statistics ---
MainProcess root INFO ----------- Generation 4 -----------
MainProcess root INFO Best individual is [1.3018429030776317, 0.4613335264234024, 1.2708417799029919, 0.3261423030366109]
MainProcess root INFO Score: -2.0
MainProcess root INFO Fitness: (2.0,)
MainProcess root INFO --- Population statistics ---
trajectory\n`results-2021-07-07-16H-45M-51S`\n************************************************************\n\nMainProcess pypet.storageservice.HDF5StorageService INFO Start storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `config`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `results`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `derived_parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nFINISHED FINAL STORING of trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.environment.Environment INFO All runs of trajectory `results-2021-07-07-16H-45M-51S` were completed successfully.\nMainProcess pypet.storageservice.HDF5StorageService INFO Start storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `config`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `results`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `derived_parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess root INFO ----------- Generation 5 -----------\nMainProcess root INFO Best individual is [1.3018429030776317, 0.4613335264234024, 1.2708417799029919, 0.3261423030366109]\nMainProcess root INFO Score: -2.0\nMainProcess root INFO Fitness: (2.0,)\nMainProcess root INFO --- Population statistics ---\nMainProcess root INFO 
Replacing 0 invalid individuals.\nMainProcess pypet.environment.Environment INFO I am preparing the Trajectory for the experiment and initialise the store.\nMainProcess pypet.environment.Environment INFO Initialising the storage for the trajectory.\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nSTARTING runs of trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO Starting multiprocessing with at most 8 processes running at the same time.\nMainProcess pypet INFO PROGRESS: Finished 96/112 runs [================= ] 85.7%\nMainProcess pypet INFO PROGRESS: Finished 101/112 runs [================== ] 90.2%, remaining: 0:00:10\nMainProcess pypet INFO PROGRESS: Finished 107/112 runs [=================== ] 95.5%, remaining: 0:00:04\nMainProcess pypet INFO PROGRESS: Finished 112/112 runs [====================]100.0%\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nFINISHED all runs of 
trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.environment.Environment INFO \n************************************************************\nSTARTING FINAL STORING of trajectory\n`results-2021-07-07-16H-45M-51S`\n************************************************************\n\nMainProcess pypet.storageservice.HDF5StorageService INFO Start storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `config`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `results`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `derived_parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nFINISHED FINAL STORING of trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.environment.Environment INFO All runs of trajectory `results-2021-07-07-16H-45M-51S` were completed successfully.\nMainProcess pypet.storageservice.HDF5StorageService INFO Start storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `config`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `results`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `derived_parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess root INFO ----------- Generation 6 -----------\nMainProcess root INFO Best 
individual is [1.3018429030776317, 0.4613335264234024, 1.2708417799029919, 0.3261423030366109]\nMainProcess root INFO Score: -2.0\nMainProcess root INFO Fitness: (2.0,)\nMainProcess root INFO --- Population statistics ---\nMainProcess root INFO Replacing 0 invalid individuals.\nMainProcess pypet.environment.Environment INFO I am preparing the Trajectory for the experiment and initialise the store.\nMainProcess pypet.environment.Environment INFO Initialising the storage for the trajectory.\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nSTARTING runs of trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO Starting multiprocessing with at most 8 processes running at the same time.\nMainProcess pypet INFO PROGRESS: Finished 112/128 runs [================= ] 87.5%\nMainProcess pypet INFO PROGRESS: Finished 116/128 runs [================== ] 90.6%, remaining: 0:00:14\nMainProcess pypet INFO PROGRESS: Finished 122/128 runs [=================== ] 95.3%, remaining: 0:00:05\nMainProcess pypet INFO PROGRESS: Finished 128/128 runs [====================]100.0%\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO 
Finished init or meta data update for `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nFINISHED all runs of trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.environment.Environment INFO \n************************************************************\nSTARTING FINAL STORING of trajectory\n`results-2021-07-07-16H-45M-51S`\n************************************************************\n\nMainProcess pypet.storageservice.HDF5StorageService INFO Start storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `config`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `results`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `derived_parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nFINISHED FINAL STORING of trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.environment.Environment INFO All runs of trajectory `results-2021-07-07-16H-45M-51S` were completed successfully.\nMainProcess pypet.storageservice.HDF5StorageService INFO Start storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `config`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `results`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch 
`derived_parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess root INFO ----------- Generation 7 -----------\nMainProcess root INFO Best individual is [0.8975385309150987, 0.11079193956336597, 0.23750510019720242, 0.2003002444648563]\nMainProcess root INFO Score: -1.0\nMainProcess root INFO Fitness: (1.0,)\nMainProcess root INFO --- Population statistics ---\nMainProcess root INFO Replacing 0 invalid individuals.\nMainProcess pypet.environment.Environment INFO I am preparing the Trajectory for the experiment and initialise the store.\nMainProcess pypet.environment.Environment INFO Initialising the storage for the trajectory.\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nSTARTING runs of trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO Starting multiprocessing with at most 8 processes running at the same time.\nMainProcess pypet INFO PROGRESS: Finished 128/144 runs [================= ] 88.9%\nMainProcess pypet INFO PROGRESS: Finished 130/144 runs [================== ] 90.3%, remaining: 0:00:32\nMainProcess pypet INFO PROGRESS: Finished 137/144 runs [=================== ] 95.1%, remaining: 0:00:07\nMainProcess pypet INFO PROGRESS: Finished 144/144 runs 
[====================]100.0%\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nFINISHED all runs of trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.environment.Environment INFO \n************************************************************\nSTARTING FINAL STORING of trajectory\n`results-2021-07-07-16H-45M-51S`\n************************************************************\n\nMainProcess pypet.storageservice.HDF5StorageService INFO Start storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `config`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `results`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `derived_parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nFINISHED FINAL STORING of trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.environment.Environment INFO All runs of trajectory `results-2021-07-07-16H-45M-51S` were completed successfully.\nMainProcess pypet.storageservice.HDF5StorageService INFO Start storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `config`.\nMainProcess 
pypet.storageservice.HDF5StorageService INFO Storing branch `parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `results`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `derived_parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess root INFO ----------- Generation 8 -----------\nMainProcess root INFO Best individual is [1.2018893263483494, 0.26004390251897785, 0.24989089918822285, 0.08052584592439692]\nMainProcess root INFO Score: 0.0\nMainProcess root INFO Fitness: (0.0,)\nMainProcess root INFO --- Population statistics ---\nMainProcess root INFO Replacing 0 invalid individuals.\nMainProcess pypet.environment.Environment INFO I am preparing the Trajectory for the experiment and initialise the store.\nMainProcess pypet.environment.Environment INFO Initialising the storage for the trajectory.\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nSTARTING runs of trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO Starting multiprocessing with at most 8 processes running at the same time.\nMainProcess pypet INFO PROGRESS: Finished 144/160 runs [================== ] 90.0%\nMainProcess pypet INFO PROGRESS: 
Finished 152/160 runs [=================== ] 95.0%, remaining: 0:00:05\nMainProcess pypet INFO PROGRESS: Finished 160/160 runs [====================]100.0%\nMainProcess pypet.storageservice.HDF5StorageService INFO Initialising storage or updating meta data of Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished init or meta data update for `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nFINISHED all runs of trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.environment.Environment INFO \n************************************************************\nSTARTING FINAL STORING of trajectory\n`results-2021-07-07-16H-45M-51S`\n************************************************************\n\nMainProcess pypet.storageservice.HDF5StorageService INFO Start storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `config`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `results`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `derived_parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.environment.Environment INFO \n************************************************************\nFINISHED FINAL STORING of trajectory\n`results-2021-07-07-16H-45M-51S`.\n************************************************************\n\nMainProcess pypet.environment.Environment INFO All runs of trajectory `results-2021-07-07-16H-45M-51S` were completed successfully.\nMainProcess pypet.storageservice.HDF5StorageService INFO Start storing Trajectory 
`results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `config`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `results`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `derived_parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess root INFO ----------- Generation 9 -----------\nMainProcess root INFO Best individual is [1.2018893263483494, 0.26004390251897785, 0.24989089918822285, 0.08052584592439692]\nMainProcess root INFO Score: 0.0\nMainProcess root INFO Fitness: (0.0,)\nMainProcess root INFO --- Population statistics ---\nMainProcess root INFO --- End of evolution ---\nMainProcess root INFO Best individual is [1.2018893263483494, 0.26004390251897785, 0.24989089918822285, 0.08052584592439692], (0.0,)\nMainProcess root INFO --- End of evolution ---\nMainProcess pypet.storageservice.HDF5StorageService INFO Start storing Trajectory `results-2021-07-07-16H-45M-51S`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `config`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `results`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Storing branch `derived_parameters`.\nMainProcess pypet.storageservice.HDF5StorageService INFO Finished storing Trajectory `results-2021-07-07-16H-45M-51S`.\n\n
```python
evolution.info(plot=True)
```
> Simulation parameters
HDF file storage: ./data/hdf/example-4.2-evolution.hdf
Trajectory Name: results-2021-07-07-16H-45M-51S
Duration of evaluating initial population 0:00:11.681965
Duration of evolution 0:01:31.868614
Model: <class 'neurolib.models.multimodel.model.MultiModel'>
Model name: ALNNode
Eval function: <function evaluateSimulation at 0x15b06a830>
Parameter space: {'*EXC*noise*mu': [0.0, 4.0], '*INH*noise*mu': [0.0, 4.0]}
> Evolution parameters
Number of generations: 10
Initial population size: 16
Population size: 16
> Evolutionary operators
Mating operator: <function cxBlend at 0x159a43560>
Mating parameter: {'alpha': 0.5}
Selection operator: <function selBest_multiObj at 0x159a70200>
Selection parameter: {}
Parent selection operator: <function selRank at 0x159a70170>
Comments: no comments
--- Info summary ---
Valid: 16
Mean score (weighted fitness): -3.6
Parameter distribution (Generation 9):
*EXC*noise*mu: mean: 1.1141, std: 0.2046
*INH*noise*mu: mean: 0.4791, std: 0.2361
--------------------
Best 5 individuals:
Individual 0
  Fitness values: 0.0, Score: 0.0, Weighted fitness: -0.0
  model.params["*EXC*noise*mu"] = 1.20
  model.params["*INH*noise*mu"] = 0.26
Individual 1
  Fitness values: 0.0, Score: 0.0, Weighted fitness: -0.0
  model.params["*EXC*noise*mu"] = 1.01
  model.params["*INH*noise*mu"] = 0.16
Individual 2
  Fitness values: 1.0, Score: -1.0, Weighted fitness: -1.0
  model.params["*EXC*noise*mu"] = 0.90
  model.params["*INH*noise*mu"] = 0.11
Individual 3
  Fitness values: 2.0, Score: -2.0, Weighted fitness: -2.0
  model.params["*EXC*noise*mu"] = 1.30
  model.params["*INH*noise*mu"] = 0.46
Individual 4
  Fitness values: 2.0, Score: -2.0, Weighted fitness: -2.0
  model.params["*EXC*noise*mu"] = 0.96
  model.params["*INH*noise*mu"] = 0.25
--------------------
There are 16 valid individuals
Mean score across population: -3.6
MainProcess root INFO Saving plot to ./data/figures/results-2021-07-07-16H-45M-51S_hist_9.png
Since everything in MultiModel is a class, we will simply subclass ExcitatoryWilsonCowanMass and edit or add only what we need.
Our adaptation current will come in as \(\dot{w} = -\frac{w}{\tau_A} + b\,r\), where \(r\) is the excitatory firing rate.
```python
class AdaptationExcitatoryWilsonCowanMass(ExcitatoryWilsonCowanMass):
    # here we only edit attributes that will change!
    # slightly edit name and label
    name = "Wilson-Cowan excitatory mass with adaptation"
    label = f"WCmass{EXC}_adapt"

    num_state_variables = 2
    # same number of noise variables

    # coupling variables - the same, no need to do anything
    # add w as a variable
    state_variable_names = [f"q_mean_{EXC}", "w"]
    # mass type and couplings are the same

    # add parameters for adaptation current - b and tauA
    required_params = ["a", "mu", "tau", "ext_drive", "b", "tauA"]

    # same input noise

    def __init__(self, params=None, seed=None):
        # edit init and pass default parameters for adaptation
        super().__init__(params=params or WC_ADAPT_EXC_DEFAULT_PARAMS, seed=seed)

    def _initialize_state_vector(self):
        # need to add init for adaptation variable w
        np.random.seed(self.seed)
        self.initial_state = [0.05 * np.random.uniform(0, 1), 0.0]

    def _derivatives(self, coupling_variables):
        # edit derivatives
        [x, w] = self._unwrap_state_vector()
        d_x = (
            -x
            + (1.0 - x)
            * self._sigmoid(
                coupling_variables["node_exc_exc"]
                - coupling_variables["node_exc_inh"]
                + coupling_variables["network_exc_exc"]
                + self.params["ext_drive"]
                - w  # subtract adaptation current
            )
            + system_input(self.noise_input_idx[0])
        ) / self.params["tau"]
        # now define adaptation dynamics
        d_w = -w / self.params["tauA"] + self.params["b"] * x

        return [d_x, d_w]


# define default set of parameters
WC_ADAPT_EXC_DEFAULT_PARAMS = {
    # just copy all default parameters from non-adaptation version
    **WC_EXC_DEFAULT_PARAMS,
    # add adaptation parameters
    "tauA": 500.0,  # ms
    "b": 1.0,
}
```
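To get a feeling for the adaptation timescale, we can integrate the adaptation equation on its own with a simple Euler scheme. This is a standalone sketch, not part of MultiModel; the constant rate r and the step size are illustrative assumptions. For a fixed rate r, w relaxes towards the steady state b * tauA * r:

```python
import numpy as np

# illustrative values matching WC_ADAPT_EXC_DEFAULT_PARAMS above
b, tauA = 1.0, 500.0  # adaptation strength and timescale [ms]
r = 0.05              # assume a constant firing rate for this sketch
dt = 0.1              # Euler integration step [ms]

w = 0.0
for _ in range(int(5000 / dt)):  # integrate for 10 adaptation time constants
    w += dt * (-w / tauA + b * r)  # Euler step of dw/dt = -w/tauA + b*r

print(w)  # close to the steady state b * tauA * r = 25
```

With tauA = 500 ms the adaptation variable needs seconds to build up, which is exactly the slow timescale that later allows slow oscillations.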
That's it! Now we have our shiny excitatory Wilson-Cowan mass with adaptation. Next, we need to create a Node with one excitatory WC mass with adaptation and one good old inhibitory mass without adaptation. The basic inhibitory mass is already implemented, so there is nothing to do there. Below we create our Node with adaptation.
```python
class WilsonCowanNodeWithAdaptation(WilsonCowanNode):
    # start by subclassing the basic WilsonCowanNode and, again,
    # just change what has to be changed

    name = "Wilson-Cowan node with adaptation"
    label = "WCnode_adapt"

    # default coupling and outputs are the same

    def __init__(
        self,
        exc_params=None,
        inh_params=None,
        connectivity=WC_NODE_DEFAULT_CONNECTIVITY,
    ):
        # here we just pass our new `AdaptationExcitatoryWilsonCowanMass` instead of
        # `ExcitatoryWilsonCowanMass`, otherwise it is the same
        excitatory_mass = AdaptationExcitatoryWilsonCowanMass(exc_params)
        excitatory_mass.index = 0
        inhibitory_mass = InhibitoryWilsonCowanMass(inh_params)
        inhibitory_mass.index = 1
        # the only trick is, we want to call super() and init Node class, BUT
        # just calling super().__init__() will actually call parent's init, and in
        # this case, our parent is `WilsonCowanNode`... we need to call grandparent's
        # __init__.. fortunately, this can be done in python no problemo:
        # instead of calling super().__init__(), we need to call
        # super(<current parent>, self).__init__()
        super(WilsonCowanNode, self).__init__(
            neural_masses=[excitatory_mass, inhibitory_mass],
            local_connectivity=connectivity,
            # within W-C node there are no local delays
            local_delays=None,
        )
```
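The super(WilsonCowanNode, self).__init__() trick is plain Python, so we can illustrate it in isolation with toy classes (nothing to do with neurolib): super(Parent, self) starts the method lookup in the MRO *above* Parent, so the grandparent's __init__ runs and the parent's is skipped.

```python
class Node:
    def __init__(self):
        self.init_by = "Node"

class WCNode(Node):
    def __init__(self):
        self.init_by = "WCNode"

class WCAdaptNode(WCNode):
    def __init__(self):
        # skip WCNode.__init__ and call the grandparent Node.__init__ directly:
        # super(WCNode, self) starts the lookup above WCNode in the MRO
        super(WCNode, self).__init__()

print(WCAdaptNode().init_by)  # -> Node
```

Plain super().__init__() inside WCAdaptNode.__init__ would have set init_by to "WCNode" instead.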
And done. Now we can run the WC node with and without adaptation and compare.
MainProcess root INFO WCnode: Model initialized.
MainProcess root INFO WCnode_adapt: Model initialized.
```python
# set parameters
wc_basic.params["*EXC*ext_drive"] = 0.8
wc_basic.params["duration"] = 2000.0
wc_basic.params["sampling_dt"] = 1.0

# higher external input due to adaptation
wc_adapt.params["*EXC*ext_drive"] = 5.5
wc_adapt.params["duration"] = 2000.0
wc_adapt.params["sampling_dt"] = 1.0
wc_adapt.params["*b"] = 0.1
```
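The star-prefixed keys such as "*EXC*ext_drive" are glob-style patterns that select parameters anywhere in the model hierarchy. A minimal sketch of how such matching can work using Python's fnmatch module (the nested parameter names below are made up for illustration, they are not the real MultiModel naming scheme):

```python
from fnmatch import fnmatchcase

# hypothetical flattened parameter names of a two-mass node
params = {
    "WCnode_0.WCmassEXC_0.ext_drive": 0.0,
    "WCnode_0.WCmassINH_1.ext_drive": 0.0,
    "WCnode_0.WCmassEXC_0.tau": 2.5,
}

def set_glob(params, pattern, value):
    """Set every parameter whose full name matches the glob pattern."""
    for key in params:
        if fnmatchcase(key, pattern):
            params[key] = value

set_glob(params, "*EXC*ext_drive", 0.8)  # only the excitatory drive changes
print(params["WCnode_0.WCmassEXC_0.ext_drive"])  # -> 0.8
print(params["WCnode_0.WCmassINH_1.ext_drive"])  # -> 0.0 (unchanged)
```

This is why a single assignment above can address the excitatory mass without spelling out its full path.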
```python
wc_basic.run()
wc_adapt.run()
```
MainProcess root INFO Initialising jitcdde backend...
MainProcess root INFO Setting up the DDE system...
MainProcess root INFO Compiling to C...
MainProcess root INFO Setting past of the state vector...
MainProcess root INFO Integrating for 2000 time steps...
100%|██████████| 2000/2000 [00:00<00:00, 153705.07it/s]
MainProcess root INFO Integration done.
MainProcess root INFO `run` call took 1.24 s
Using default integration parameters.
MainProcess root INFO `run` call took 0.79 s
[logs of the second run and two harmless jitcdde UserWarnings about extrapolated past values truncated]
All done. Now we can study adaptation dynamics in the Wilson-Cowan model, e.g. by running an exploration over the adaptation parameters and eventually an evolution targeting slow oscillations (i.e. oscillations with a frequency of ~1Hz), optimising with respect to parameters such as b, tauA, ext_input, and others. Happy hacking!
In the last two examples we showcased the basics of the MultiModel framework and how to create a new model from scratch. Now we will look at some advanced topics, such as how exactly the integration works, why we have two integration backends, and how to run optimisation and exploration with MultiModel.
"},{"location":"examples/example-4.2-multimodel-backends-and-optimization/#the-tale-of-two-backends","title":"The tale of two backends","text":"
In the current implementation of MultiModel, users may choose from two different integration backends. Before diving into the details of both backends, let us quickly review how exactly MultiModel integrates the model equations.
Almost all whole-brain simulators work the same way: you define the dynamics of a single brain area, and the integration then runs as a double loop, one over time and one over brain areas. In other words, all brain areas have the same dynamics. In pseudo-code it would look something like:
for t in range(time_total):\n for n in range(num_areas):\n x[t, n] = integrate_step(x[t-max_delay:t-1, :])\n
Since all areas are the same, the integrate_step function would simply take the history of the state vector and apply one integration step of any scheme. This won't work in MultiModel, since MultiModel allows building heterogeneous models. The internal workings of MultiModel can be explained in a couple of steps."},{"location":"examples/example-4.2-multimodel-backends-and-optimization/#state-vector","title":"State vector","text":"
Since the inner loop of the pseudocode above is not possible in MultiModel due to heterogeneity, we solve this by concatenating all individual equations into one big state vector (which is also the reason why all NeuralMass and Node objects have indices). When the model is ready for simulation, we iterate over the Nodes and the NeuralMasses within them and stack their equations into a single list. The concatenation is done by the _derivatives() function: - in NeuralMass, _derivatives() implements the actual dynamics as delay differential equations - in Node, _derivatives() stacks all equations from the NeuralMasses within this Node into one list - in Network, _derivatives() stacks all equations from the Nodes within this Network.
As we have seen before, the state vector encodes the whole dynamics of the brain model, but the coupling is encoded as a symbol. To make simulation easier, we had to separate the individual internal dynamics from the coupling terms. The coupling comes in two flavours, matching the levels of the hierarchy: node coupling takes care of the coupling between NeuralMasses within one Node, and network coupling takes care of the coupling between Nodes in one Network. The coupling is implemented in the _sync() function. This function returns a list, where each item is a tuple of length two: (name_of_the_symbol, symbolic_term_representing_the_coupling). The FitzHugh-Nagumo model has only one mass per node, hence there are no node couplings, but we can inspect the network coupling:
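To make the shape of these coupling helpers concrete, here is a toy sketch. Plain strings stand in for the symengine expressions used in MultiModel, and the function and symbol names below are illustrative, not the real neurolib internals:

```python
# Toy sketch only: strings stand in for symengine expressions.

def stack_sync(masses):
    """Stack the coupling helpers of all masses within one node."""
    helpers = []
    for mass in masses:
        helpers.extend(mass["sync"])
    return helpers

# one excitatory mass with a network coupling symbol, one uncoupled mass
mass_exc = {"sync": [("network_x_0", "K_gl * past_y(t - tau_01, 1)")]}
mass_inh = {"sync": []}

# each item is (name_of_the_symbol, symbolic_term_representing_the_coupling)
print(stack_sync([mass_exc, mass_inh]))
```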
The jitcdde backend was the first integration backend in MultiModel. The name stems from the fact that it uses the wonderful jitcdde python package, which employs just-in-time compilation of the symbolic derivatives into C and then integrates with the DDE method proposed by Shampine and Thompson, which in turn employs the Bogacki\u2013Shampine Runge\u2013Kutta pair. This is the reason why the dynamics in MultiModel are defined as symbolic derivatives written in symengine. Since it uses an adaptive dt scheme, it is very useful for stiff problems. Also, if you are implementing a new model and have no idea how stiff the dynamics are, this is the backend to try first. It has reasonable speed, but it is not the best choice for large networks and long simulations.
The internal workings of the jitcdde package served as an inspiration when creating MultiModel. jitcdde naturally works with dynamics defined as symbolic equations (_derivatives()), and it also supports the use of \"helpers\" - the helpers in our case are the coupling terms (_sync()).
Since jitcdde is rather slow, in particular for long runs or large networks, we created a numba backend. This was tricky - the whole code around MultiModel was created with jitcdde in mind, with the symbolic equations, helpers, etc. However, an advantage of symbolic equations is that they are purely symbolic, so you can \"print\" them; by printing, you really just obtain a string with the equations. So what does the numba backend actually do? 1. gather all symbolic equations with _derivatives() 2. substitute the coupling symbols with the functional terms from _sync() 3. substitute the current and past state vector symbols (current_y() and past_y()) with a state vector y with the correct indices and time delays 4. now we have a complete description of dy 5. \"print\" this dy into a prepared function template (a string) inside the time for loop; you can imagine it as
for t in range(1, t_max):\n dy = np.array({dy}) # <- `dy` is printed here\n y[:, t] = y[:, t-1] + dt*dy\n
6. compile the prepared string template into an actual python function (yes, python can do this) 7. wrap the compiled function with numba.njit() to get the speed boost 8. profit
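Steps 6 and 7 can be illustrated with the standard library alone (numba.njit() is left out so the sketch has no dependencies); the toy equation and generated names are made up for illustration:

```python
# Sketch of "print equations into a template, then compile the string".
# dy_printed pretends to be the printed symbolic right-hand side of dx/dt = -x.
dy_printed = "-1.0 * y[t - 1]"

template = f"""
def integrate(y0, dt, n_steps):
    y = [0.0] * (n_steps + 1)
    y[0] = y0
    for t in range(1, n_steps + 1):
        dy = {dy_printed}  # <- `dy` is printed here
        y[t] = y[t - 1] + dt * dy
    return y
"""

namespace = {}
exec(compile(template, "<generated>", "exec"), namespace)
integrate = namespace["integrate"]
# in the real backend: integrate = numba.njit(integrate)

print(integrate(1.0, 0.01, 100)[-1])  # Euler approximation of exp(-1)
```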
And yes, the numba backend simply employs the Euler integration scheme, which means that you need to think about dt.
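Because Euler is a first-order scheme, the integration error shrinks roughly linearly with dt. A self-contained toy comparison (dx/dt = -x against its exact solution, unrelated to any neurolib model) shows why the choice of dt matters:

```python
import math

def euler(dt, T=1.0, x0=1.0):
    """Integrate dx/dt = -x with the explicit Euler scheme."""
    x = x0
    for _ in range(round(T / dt)):
        x += dt * (-x)
    return x

exact = math.exp(-1.0)
for dt in (0.1, 0.01, 0.001):
    print(f"dt={dt}: |error| = {abs(euler(dt) - exact):.5f}")
```

In practice this is why one compares a few numba runs with different dt against an adaptive-step jitcdde run before trusting the results.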
"},{"location":"examples/example-4.2-multimodel-backends-and-optimization/#which-backend-to-use-and-when","title":"Which backend to use and when?","text":"
The numba backend is almost always faster (with the exception of small networks and not-too-long runs, where the two perform similarly). However, for prototyping new models, or for connecting a couple of models into a heterogeneous network, it is always a good idea to run a couple of short simulations with jitcdde first. The reason is that it uses an adaptive dt, so you do not need to worry about setting a correct dt. Once you have an idea of what the dynamics should look like and how fast they are, you can switch to numba, try a couple of different dt values, and select the one whose results are closest to the jitcdde results. For exploration and optimisation with evolution, always use numba. The numba backend compiles a jit'ed function with all the parameters as arguments, hence for an exploration only one compilation is necessary, and the model then runs at high speed even when the parameters change.
"},{"location":"examples/example-4.2-multimodel-backends-and-optimization/#exploration-with-multimodel","title":"Exploration with MultiModel","text":"
So we all love neurolib not only for its speed, efficiency, and ease of use, but also for its built-in exploration and evolution frameworks. These two make studying population models a real breeze! And naturally, MultiModel supports both. First, we will showcase how to explore the parameters of a MultiModel using the BoxSearch class, replicating the ALN exploration from example-1-aln-parameter-exploration.ipynb.
"},{"location":"examples/example-4.2-multimodel-backends-and-optimization/#note-on-parameters-when-exploring-or-optimising","title":"Note on parameters when exploring or optimising","text":"
If you remember, MultiModel has an option for truly heterogeneous parameters, i.e. each brain node can have different parameters. This allows you to explore parameters one-by-one (truly different parameters for each node), or in bulk. As an example, consider this:
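For concreteness, parameter spaces matching the three cases described below could be defined roughly like this (a hedged sketch: the star patterns and names such as *EXC*, *INH*, and ALNNode_0 are illustrative and must match the actual parameter paths of your model):

```python
from neurolib.utils.parameterSpace import ParameterSpace

# 1) homogeneous: one Ke for all nodes and all masses -> 3 runs
parameters1 = ParameterSpace(
    {"*.Ke": [400.0, 800.0, 1200.0]},
    allow_star_notation=True,
)

# 2) heterogeneous within a node: separate Ke for excitatory and inhibitory
#    masses, but identical across nodes -> 3 x 3 = 9 runs
parameters2 = ParameterSpace(
    {"*EXC*.Ke": [400.0, 800.0, 1200.0],
     "*INH*.Ke": [400.0, 800.0, 1200.0]},
    allow_star_notation=True,
)

# 3) heterogeneous across nodes: excitatory Ke differs between the two nodes
parameters3 = ParameterSpace(
    {"ALNNode_0.*EXC*.Ke": [400.0, 800.0, 1200.0],
     "ALNNode_1.*EXC*.Ke": [400.0, 800.0, 1200.0]},
    allow_star_notation=True,
)
```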
In the first example (parameters1), we explore the Ke parameter and set it to the same value for all nodes and all masses. In the end, we run three simulations with a homogeneous Ke for all nodes/masses.
In the second example, we explore Ke individually for the excitatory and inhibitory masses, while keeping the values the same across nodes. In total, we run 9 simulations with a heterogeneous Ke within one node, but the same Ke_exc and Ke_inh values in all nodes.
Finally, in the last example, we explore only the excitatory Ke parameters, but differently for the two nodes, so we can study how the excitatory Ke affects a 2-node network.
Of course, we can always address parameters by their full name (glob path), like:
parameters3 = ParameterSpace(\n {\"ALNNode_0.ALNMassINH_1.Ke\": [400, 800, 1200]},\n allow_star_notation=True # due to \".\" in the param names\n)\n
and we simply explore one particular Ke of the inhibitory mass in the first node of the network. All other Ke parameters remain constant."},{"location":"examples/example-4.2-multimodel-backends-and-optimization/#evolution-with-multimodel","title":"Evolution with MultiModel","text":"
If you are familiar with how Evolution works in core neurolib, and you now know the perks of MultiModel with respect to exploration, then that is all you need for optimisation with evolutionary algorithms in MultiModel. Parameters are passed via ParameterSpace with allow_star_notation=True. Below, we reproduce example-2.1-evolutionary-optimization-aln with the MultiModel version of the ALN model.
"},{"location":"examples/example-4.2-multimodel-backends-and-optimization/#adapting-existing-models-in-multimodel-framework","title":"Adapting existing models in MultiModel framework","text":"
MultiModel comes with a few implemented models, so you can play with them right away. Due to the hierarchical architecture based on class inheritance in python, it is also very easy to adapt existing models. As an example, MultiModel comes with an implementation of the Wilson-Cowan model; in our version, there are excitatory and inhibitory masses in one node. Let's say you want to add an adaptation current to the excitatory mass of the Wilson-Cowan model.
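The inheritance pattern is roughly this. The classes below are deliberately minimal stand-ins, not the actual neurolib NeuralMass API: subclass the mass, extend its state variables, and extend the right-hand side returned by the parent (parameters b and tauA mirror the adaptation parameters mentioned above):

```python
# Schematic stand-ins for MultiModel masses; the real NeuralMass API differs.

class ExcitatoryMass:
    state_vars = ["x"]

    def _derivatives(self, state, params):
        # leaky excitatory population driven by external input
        return {"x": -state["x"] + params["ext_input"]}

class AdaptiveExcitatoryMass(ExcitatoryMass):
    # add a slow adaptation variable `a`
    state_vars = ["x", "a"]

    def _derivatives(self, state, params):
        d = super()._derivatives(state, params)
        d["x"] -= params["b"] * state["a"]                   # adaptation feedback on x
        d["a"] = (state["x"] - state["a"]) / params["tauA"]  # slow relaxation of a
        return d
```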
"},{"location":"examples/example-5.1-oc-phenomenological-model-deterministic/","title":"Example 5.1 oc phenomenological model deterministic","text":"
import matplotlib.pyplot as plt\nimport numpy as np\nimport os\n\nwhile os.getcwd().split(os.sep)[-1] != \"neurolib\":\n os.chdir('..')\n\n# We import the model, stimuli, and the optimal control package\nfrom neurolib.models.fhn import FHNModel\nfrom neurolib.models.hopf import HopfModel\nfrom neurolib.utils.stimulus import ZeroInput\nfrom neurolib.control.optimal_control import oc_fhn\nfrom neurolib.control.optimal_control import oc_hopf\nfrom neurolib.utils.plot_oc import plot_oc_singlenode, plot_oc_network\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 3\n
We stimulate the system with a known control signal, define the resulting activity as target, and compute the optimal control for this target. We define weights such that precision is penalized only (w_p=1, w_2=0). Hence, the optimal control signal should converge to the input signal.
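As a rough sketch (the exact discretization used by neurolib's OC module may differ), the cost being minimized combines a precision term weighted by w_p and a control-energy term weighted by w_2:

```python
import numpy as np

def oc_cost(state, target, control, dt, w_p=1.0, w_2=0.0):
    """Precision cost plus control-energy cost, both integrated over time."""
    precision = 0.5 * w_p * np.sum((state - target) ** 2) * dt
    energy = 0.5 * w_2 * np.sum(control ** 2) * dt
    return precision + energy

# With w_p=1 and w_2=0, only the deviation from the target is penalized: the
# cost vanishes exactly when the controlled state reproduces the target, no
# matter how strong the required control input is.
target = np.zeros((1, 2, 101))   # (nodes, variables, time steps)
control = np.ones((1, 2, 101))
print(oc_cost(target, target, control, dt=0.1))  # 0.0
```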
# We import the model\nmodel = FHNModel()\n# model = HopfModel() # OC can be computed for the Hopf model completely analogously\n\n# Some parameters to define stimulation signals\ndt = model.params[\"dt\"]\nduration = 10.\namplitude = 1.\nperiod = duration/4.\n\n# We define a \"zero-input\", and a sine-input\nzero_input = ZeroInput().generate_input(duration=duration+dt, dt=dt)\ninput = np.copy(zero_input)\ninput[0,1:-2] = np.sin(2.*np.pi*np.arange(0,duration-0.2, dt)/period) # other functions or random values can be used as well\n\n# We set the duration of the simulation and the initial values\nmodel.params[\"duration\"] = duration\nx_init = 0.\ny_init = 0.\nmodel.params[\"xs_init\"] = np.array([[x_init]])\nmodel.params[\"ys_init\"] = np.array([[y_init]])\n
# We set the stimulus in x and y variables, and run the simulation\nmodel.params[\"x_ext\"] = input\nmodel.params[\"y_ext\"] = zero_input\nmodel.run()\n\n# Define the result of the stimulation as target\ntarget = np.concatenate((np.concatenate( (model.params[\"xs_init\"], model.params[\"ys_init\"]), axis=1)[:,:, np.newaxis], np.stack( (model.x, model.y), axis=1)), axis=2)\ntarget_input = np.concatenate( (input,zero_input), axis=0)[np.newaxis,:,:]\n\n# Remove stimuli and re-run the simulation\nmodel.params[\"x_ext\"] = zero_input\nmodel.params[\"y_ext\"] = zero_input\ncontrol = np.concatenate( (zero_input,zero_input), axis=0)[np.newaxis,:,:]\nmodel.run()\n\n# combine initial value and simulation result to one array\nstate = np.concatenate((np.concatenate( (model.params[\"xs_init\"], model.params[\"ys_init\"]), axis=1)[:,:, np.newaxis], np.stack( (model.x, model.y), axis=1)), axis=2)\n\nplot_oc_singlenode(duration, dt, state, target, control, target_input)\n
# We set the external stimulation to zero. This is the \"initial guess\" for the OC algorithm\nmodel.params[\"x_ext\"] = zero_input\nmodel.params[\"y_ext\"] = zero_input\n\n# We load the optimal control class\n# print array (optional parameter) defines, for which iterations intermediate results will be printed\n# Parameters will be taken from the input model\nif model.name == 'fhn':\n model_controlled = oc_fhn.OcFhn(model, target, print_array=np.arange(0,501,25))\nelif model.name == 'hopf':\n model_controlled = oc_hopf.OcHopf(model, target, print_array=np.arange(0,501,25))\n\n# per default, the weights are set to w_p = 1 and w_2 = 0, meaning that energy costs do not contribute. The algorithm will produce a control such that the signal will match the target exactly, regardless of the strength of the required control input.\n# If you want to adjust the ratio of precision and energy weight, you can change the values in the weights dictionary\nmodel_controlled.weights[\"w_p\"] = 1. # default value 1\nmodel_controlled.weights[\"w_2\"] = 0. # default value 0\n\n# We run 500 iterations of the optimal control gradient descent algorithm\nmodel_controlled.optimize(500)\n\nstate = model_controlled.get_xs()\ncontrol = model_controlled.control\n\nplot_oc_singlenode(duration, dt, state, target, control, target_input, model_controlled.cost_history)\n
\nCompute control for a deterministic system\nCost in iteration 0: 0.5533851530971279\nCost in iteration 25: 0.2424229146183965\nCost in iteration 50: 0.1584467235220361\nCost in iteration 75: 0.12000029040838786\nCost in iteration 100: 0.09606458437628636\nCost in iteration 125: 0.07875899052824148\nCost in iteration 150: 0.06567349888722097\nCost in iteration 175: 0.055617171219608186\nCost in iteration 200: 0.04682087916195195\nCost in iteration 225: 0.03978086855629591\nCost in iteration 250: 0.03392391540076884\nCost in iteration 275: 0.028992099916335258\nCost in iteration 300: 0.024790790776996006\nCost in iteration 325: 0.021330380416435698\nCost in iteration 350: 0.018279402174332753\nCost in iteration 375: 0.01576269909191436\nCost in iteration 400: 0.013565848707923062\nCost in iteration 425: 0.011714500580338114\nCost in iteration 450: 0.009981011218383677\nCost in iteration 475: 0.008597600155106654\nCost in iteration 500: 0.007380756958683128\nFinal cost : 0.007380756958683128\n\n
# Do another 100 iterations if you want to.\n# Repeated execution will continue with further 100 iterations.\nmodel_controlled.optimize(100)\nstate = model_controlled.get_xs()\ncontrol = model_controlled.control\nplot_oc_singlenode(duration, dt, state, target, control, target_input, model_controlled.cost_history)\n
\nCompute control for a deterministic system\nCost in iteration 0: 0.007380756958683128\nCost in iteration 25: 0.0063153874519220445\nCost in iteration 50: 0.00541103301473969\nCost in iteration 75: 0.004519862815977447\nCost in iteration 100: 0.003828425847813115\nFinal cost : 0.003828425847813115\n\n
cmat = np.array( [[0., 0.5], [1., 0.]] ) # diagonal elements are zero, connection strength is 1 (0.5) from node 0 to node 1 (from node 1 to node 0)\ndmat = np.array( [[0., 0.], [0., 0.]] ) # no delay\n\nif model.name == 'fhn':\n model = FHNModel(Cmat=cmat, Dmat=dmat)\nelif model.name == 'hopf':\n model = HopfModel(Cmat=cmat, Dmat=dmat)\n\n# we define the control input matrix to enable or disable certain channels and nodes\ncontrol_mat = np.zeros( (model.params.N, len(model.state_vars)) )\ncontrol_mat[0,0] = 1. # only allow inputs in x-channel in node 0\n\nif control_mat[0,0] == 0. and control_mat[1,0] == 0:\n # if x is input channel, high connection strength can lead to numerical issues\n model.params.K_gl = 5. # increase for stronger connectivity, WARNING: too high value will cause numerical problems\n\nmodel.params[\"duration\"] = duration\nzero_input = ZeroInput().generate_input(duration=duration+dt, dt=dt)\ninput = np.copy(zero_input)\ninput[0,1:-3] = np.sin(np.arange(0,duration-0.3, dt)) # other functions or random values can be used as well\nmodel.params[\"xs_init\"] = np.vstack( [x_init, x_init] )\nmodel.params[\"ys_init\"] = np.vstack( [y_init, y_init] )\n\n# We set the stimulus in x and y variables, and run the simulation\ninput_nw = np.concatenate( (np.vstack( [control_mat[0,0] * input, control_mat[0,1] * input] )[np.newaxis,:,:],\n np.vstack( [control_mat[1,0] * input, control_mat[1,1] * input] )[np.newaxis,:,:]), axis=0)\nzero_input_nw = np.concatenate( (np.vstack( [zero_input, zero_input] )[np.newaxis,:,:],\n np.vstack( [zero_input, zero_input] )[np.newaxis,:,:]), axis=0)\n\nmodel.params[\"x_ext\"] = input_nw[:,0,:]\nmodel.params[\"y_ext\"] = input_nw[:,1,:]\n\nmodel.run()\n\n# Define the result of the stimulation as target\ntarget = np.concatenate( (np.concatenate( (model.params[\"xs_init\"], model.params[\"ys_init\"]), axis=1)[:,:, np.newaxis], np.stack( (model.x, model.y), axis=1)), axis=2)\n\n# Remove stimuli and re-run the 
simulation\nmodel.params[\"x_ext\"] = zero_input_nw[:,0,:]\nmodel.params[\"y_ext\"] = zero_input_nw[:,0,:]\nmodel.run()\n\n# combine initial value and simulation result to one array\nstate = np.concatenate( (np.concatenate( (model.params[\"xs_init\"], model.params[\"ys_init\"]), axis=1)[:,:, np.newaxis], np.stack( (model.x, model.y), axis=1)), axis=2)\n\nplot_oc_network(model.params.N, duration, dt, state, target, zero_input_nw, input_nw)\n
# we define the precision matrix to specify, in which nodes and channels we measure deviations from the target\ncost_mat = np.zeros( (model.params.N, len(model.output_vars)) )\ncost_mat[1,0] = 1. # only measure in y-channel in node 1\n\n# We set the external stimulation to zero. This is the \"initial guess\" for the OC algorithm\nmodel.params[\"x_ext\"] = zero_input_nw[:,0,:]\nmodel.params[\"y_ext\"] = zero_input_nw[:,0,:]\n\n# We load the optimal control class\n# print array (optional parameter) defines, for which iterations intermediate results will be printed\n# Parameters will be taken from the input model\nif model.name == 'fhn':\n model_controlled = oc_fhn.OcFhn(model, target, print_array=np.arange(0,501,25), control_matrix=control_mat, cost_matrix=cost_mat)\nelif model.name == 'hopf':\n model_controlled = oc_hopf.OcHopf(model, target, print_array=np.arange(0,501,25), control_matrix=control_mat, cost_matrix=cost_mat)\n\n# We run 500 iterations of the optimal control gradient descent algorithm\nmodel_controlled.optimize(500)\n\nstate = model_controlled.get_xs()\ncontrol = model_controlled.control\n\nplot_oc_network(model.params.N, duration, dt, state, target, control, input_nw, model_controlled.cost_history, model_controlled.step_sizes_history)\n
\nCompute control for a deterministic system\nCost in iteration 0: 0.26634675059119883\nCost in iteration 25: 0.007720097126561841\nCost in iteration 50: 0.0034680947661811417\nCost in iteration 75: 0.0019407060206991053\nCost in iteration 100: 0.0014869014234351792\nCost in iteration 125: 0.0012416880831819742\nCost in iteration 150: 0.001092671530708714\nCost in iteration 175: 0.0009785714578839102\nCost in iteration 200: 0.0008690983607758308\nCost in iteration 225: 0.0007820993626886098\nCost in iteration 250: 0.0007014496869583778\nCost in iteration 275: 0.0006336452348537255\nCost in iteration 300: 0.0005674277634957603\nCost in iteration 325: 0.0005103364437866347\nCost in iteration 350: 0.0004672824975699639\nCost in iteration 375: 0.0004270480894871664\nCost in iteration 400: 0.00038299359917410083\nCost in iteration 425: 0.00033863450743146543\nCost in iteration 450: 0.0002822096745731488\nCost in iteration 475: 0.00025498430139333237\nCost in iteration 500: 0.0002317087704141942\nFinal cost : 0.0002317087704141942\n\n
# Do another 100 iterations if you want to.\n# Repeated execution will continue with further 100 iterations.\nmodel_controlled.optimize(100)\nstate = model_controlled.get_xs()\ncontrol = model_controlled.control\nplot_oc_network(model.params.N, duration, dt, state, target, control, input_nw, model_controlled.cost_history, model_controlled.step_sizes_history)\n
\nCompute control for a deterministic system\nCost in iteration 0: 0.0002317087704141942\nCost in iteration 25: 0.00021249031308297534\nCost in iteration 50: 0.00019830797443039547\nCost in iteration 75: 0.0001844977342872052\nCost in iteration 100: 0.00017230020232441738\nFinal cost : 0.00017230020232441738\n\n
cmat = np.array( [[0., 0.], [1., 0.]] ) # diagonal elements are zero, connection strength is 1 from node 0 to node 1\ndmat = np.array( [[0., 0.], [18, 0.]] ) # distance from 0 to 1, delay is computed by dividing by the signal speed params.signalV\n\nif model.name == 'fhn':\n model = FHNModel(Cmat=cmat, Dmat=dmat)\nelif model.name == 'hopf':\n model = HopfModel(Cmat=cmat, Dmat=dmat)\n\nduration, dt = 2000., 0.1\nmodel.params.duration = duration\nmodel.params.dt = dt\n\n# change coupling parameters for faster and stronger connection between nodes\nmodel.params.K_gl = 1.\n\nmodel.params.x_ext = np.zeros((1,))\nmodel.params.y_ext = np.zeros((1,))\n\nmodel.run()\n\ne0 = model.x[0,-1]\ne1 = model.x[1,-1]\ni0 = model.y[0,-1]\ni1 = model.y[1,-1]\n\nmaxdelay = model.getMaxDelay()\n\nmodel.params[\"xs_init\"] = np.array([[e0] * (maxdelay + 1), [e1] * (maxdelay + 1) ])\nmodel.params[\"ys_init\"] = np.array([[i0] * (maxdelay + 1), [i1] * (maxdelay + 1) ])\n\nduration = 6.\nmodel.params.duration = duration\ntime = np.arange(dt, duration+dt, dt)\n\n# we define the control input matrix to enable or disable certain channels and nodes\ncontrol_mat = np.zeros( (model.params.N, len(model.state_vars)) )\ncontrol_mat[0,0] = 1. # only allow inputs in E-channel in node 0\n\nzero_input = ZeroInput().generate_input(duration=duration+dt, dt=dt)\ninput = np.copy(zero_input)\ninput[0,10] = 1. \ninput[0,20] = 1.\ninput[0,30] = 1. 
# Three pulses as control input\n\ninput_nw = np.concatenate( (np.vstack( [control_mat[0,0] * input, control_mat[0,1] * input] )[np.newaxis,:,:],\n np.vstack( [control_mat[1,0] * input, control_mat[1,1] * input] )[np.newaxis,:,:]), axis=0)\nzero_input_nw = np.concatenate( (np.vstack( [zero_input, zero_input] )[np.newaxis,:,:],\n np.vstack( [zero_input, zero_input] )[np.newaxis,:,:]), axis=0)\n\nmodel.params[\"x_ext\"] = input_nw[:,0,:]\nmodel.params[\"y_ext\"] = input_nw[:,1,:]\n\nmodel.params[\"xs_init\"] = np.array([[e0] * (maxdelay + 1), [e1] * (maxdelay + 1) ])\nmodel.params[\"ys_init\"] = np.array([[i0] * (maxdelay + 1), [i1] * (maxdelay + 1) ])\nmodel.run()\n\n# Define the result of the stimulation as target\ntarget = np.concatenate( (np.stack( (model.params[\"xs_init\"][:,-1], model.params[\"ys_init\"][:,-1]), axis=1)[:,:, np.newaxis], np.stack( (model.x, model.y), axis=1)), axis=2)\n\n# Remove stimuli and re-run the simulation\nmodel.params[\"x_ext\"] = zero_input_nw[:,0,:]\nmodel.params[\"y_ext\"] = zero_input_nw[:,0,:]\nmodel.run()\n\n# combine initial value and simulation result to one array\nstate = np.concatenate( (np.stack( (model.params[\"xs_init\"][:,-1], model.params[\"ys_init\"][:,-1]), axis=1)[:,:, np.newaxis], np.stack( (model.x, model.y), axis=1)), axis=2)\nplot_oc_network(model.params.N, duration, dt, state, target, zero_input_nw, input_nw)\n
# We set the external stimulation to zero. This is the \"initial guess\" for the OC algorithm\nmodel.params[\"x_ext\"] = zero_input_nw[:,0,:]\nmodel.params[\"y_ext\"] = zero_input_nw[:,0,:]\n\n# We load the optimal control class\n# print array (optional parameter) defines, for which iterations intermediate results will be printed\n# Parameters will be taken from the input model\nif model.name == \"fhn\":\n model_controlled = oc_fhn.OcFhn(model, target, print_array=np.arange(0,501,25), control_matrix=control_mat, cost_matrix=cost_mat)\nelif model.name == \"hopf\":\n model_controlled = oc_hopf.OcHopf(model, target, print_array=np.arange(0,501,25), control_matrix=control_mat, cost_matrix=cost_mat)\n\n# We run 500 iterations of the optimal control gradient descent algorithm\nmodel_controlled.optimize(500)\n\nstate = model_controlled.get_xs()\ncontrol = model_controlled.control\nplot_oc_network(model.params.N, duration, dt, state, target, control, input_nw, model_controlled.cost_history, model_controlled.step_sizes_history)\n
\nCompute control for a deterministic system\nCost in iteration 0: 0.0011947065709511494\nCost in iteration 25: 1.8995713965492315e-05\nCost in iteration 50: 1.2661264833225136e-05\nCost in iteration 75: 9.010644155785715e-06\nCost in iteration 100: 6.820944851923922e-06\nCost in iteration 125: 5.474911745391518e-06\nCost in iteration 150: 4.530608100186918e-06\nCost in iteration 175: 3.927022075378679e-06\nCost in iteration 200: 3.506301912798229e-06\nCost in iteration 225: 3.1905412820140275e-06\nCost in iteration 250: 2.9567061175703895e-06\nCost in iteration 275: 2.7741407209279735e-06\nCost in iteration 300: 2.625794937490633e-06\nCost in iteration 325: 2.502192369572658e-06\nCost in iteration 350: 2.3959920314309043e-06\nCost in iteration 375: 2.303282831253012e-06\nCost in iteration 400: 2.220451776797742e-06\nCost in iteration 425: 2.1458248650643056e-06\nCost in iteration 450: 2.0775097671229942e-06\nCost in iteration 475: 2.0119242553645737e-06\nCost in iteration 500: 1.953220604966201e-06\nFinal cost : 1.953220604966201e-06\n\n
# Perform another 100 iterations to improve the result.\n# Repeated execution will continue with further 100 iterations.\nmodel_controlled.optimize(100)\nstate = model_controlled.get_xs()\ncontrol = model_controlled.control\nplot_oc_network(model.params.N, duration, dt, state, target, control, input_nw, model_controlled.cost_history, model_controlled.step_sizes_history)\n
\nCompute control for a deterministic system\nCost in iteration 0: 1.953220604966201e-06\nCost in iteration 25: 1.8983582753730346e-06\nCost in iteration 50: 1.8467668220809676e-06\nCost in iteration 75: 1.798071064385974e-06\nCost in iteration 100: 1.7518998980010873e-06\nFinal cost : 1.7518998980010873e-06\n\n
\n
"},{"location":"examples/example-5.1-oc-phenomenological-model-deterministic/#optimal-control-of-deterministic-phenomenological-models","title":"Optimal control of deterministic phenomenological models","text":"
This notebook shows how to compute the optimal control (OC) signal for phenomenological models (FHN, Hopf) for a simple example task.
"},{"location":"examples/example-5.1-oc-phenomenological-model-deterministic/#network-of-neural-populations-no-delay","title":"Network of neural populations (no delay)","text":"
Let us now study a simple 2-node network of FHN oscillators. We first define the coupling matrix and the distance matrix. We can then initialize the model.
"},{"location":"examples/example-5.1-oc-phenomenological-model-deterministic/#delayed-network-of-neural-populations","title":"Delayed network of neural populations","text":"
We now consider a network topology with delayed signalling between the two nodes.
"},{"location":"examples/example-5.2-oc-wc-model-deterministic/","title":"Example 5.2 oc wc model deterministic","text":"
import matplotlib.pyplot as plt\nimport numpy as np\nimport os\n\nwhile os.getcwd().split(os.sep)[-1] != \"neurolib\":\n os.chdir('..')\n\n# We import the model, stimuli, and the optimal control package\nfrom neurolib.models.wc import WCModel\nfrom neurolib.utils.stimulus import ZeroInput\nfrom neurolib.control.optimal_control import oc_wc\nfrom neurolib.utils.plot_oc import plot_oc_singlenode, plot_oc_network\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2\n
We stimulate the system with a known control signal, define the resulting activity as target, and compute the optimal control for this target. We define weights such that precision is penalized only (w_p=1, w_2=0). Hence, the optimal control signal should converge to the input signal.
# We import the model\nmodel = WCModel()\n\n# Some parameters to define stimulation signals\ndt = model.params[\"dt\"]\nduration = 10.\namplitude = 1.\nperiod = duration /4.\n\n# We define a \"zero-input\", and a sine-input\nzero_input = ZeroInput().generate_input(duration=duration+dt, dt=dt)\ninput = np.copy(zero_input)\ninput[0,1:-1] = amplitude * np.sin(2.*np.pi*np.arange(0,duration-0.1, dt)/period) # other functions or random values can be used as well\n\n# We set the duration of the simulation and the initial values\nmodel.params[\"duration\"] = duration\nx_init = 0.011225367461896877\ny_init = 0.013126741089502588\nmodel.params[\"exc_init\"] = np.array([[x_init]])\nmodel.params[\"inh_init\"] = np.array([[y_init]])\n
# We set the stimulus in x and y variables, and run the simulation\nmodel.params[\"exc_ext\"] = input\nmodel.params[\"inh_ext\"] = zero_input\nmodel.run()\n\n# Define the result of the stimulation as target\ntarget = np.concatenate((np.concatenate( (model.params[\"exc_init\"], model.params[\"inh_init\"]), axis=1)[:,:, np.newaxis],\n np.stack( (model.exc, model.inh), axis=1)), axis=2)\ntarget_input = np.concatenate( (input,zero_input), axis=0)[np.newaxis,:,:]\n\n# Remove stimuli and re-run the simulation\nmodel.params[\"exc_ext\"] = zero_input\nmodel.params[\"inh_ext\"] = zero_input\ncontrol = np.concatenate( (zero_input,zero_input), axis=0)[np.newaxis,:,:]\nmodel.run()\n\n# combine initial value and simulation result to one array\nstate = np.concatenate((np.concatenate( (model.params[\"exc_init\"], model.params[\"inh_init\"]), axis=1)[:,:, np.newaxis],\n np.stack( (model.exc, model.inh), axis=1)), axis=2)\n\nplot_oc_singlenode(duration, dt, state, target, control, target_input)\n
# We set the external stimulation to zero. This is the \"initial guess\" for the OC algorithm\nmodel.params[\"exc_ext\"] = zero_input\nmodel.params[\"inh_ext\"] = zero_input\n\n# We load the optimal control class\n# print array (optional parameter) defines, for which iterations intermediate results will be printed\n# Parameters will be taken from the input model\nmodel_controlled = oc_wc.OcWc(model, target, print_array=np.arange(0,501,25))\nmodel_controlled.weights[\"w_p\"] = 1. # default value 1\nmodel_controlled.weights[\"w_2\"] = 0. # default value 0\n\n# We run 500 iterations of the optimal control gradient descent algorithm\nmodel_controlled.optimize(500)\n\nstate = model_controlled.get_xs()\ncontrol = model_controlled.control\n\nplot_oc_singlenode(duration, dt, state, target, control, target_input, model_controlled.cost_history)\n
\nCompute control for a deterministic system\nCost in iteration 0: 0.00041810554198290294\nCost in iteration 25: 1.0532102454109209e-05\nCost in iteration 50: 3.925315729100555e-06\nCost in iteration 75: 2.1054588334476998e-06\nCost in iteration 100: 1.398320694183479e-06\nCost in iteration 125: 1.0229387100203843e-06\nCost in iteration 150: 7.974333735234386e-07\nCost in iteration 175: 6.521115340266662e-07\nCost in iteration 200: 5.444869100157712e-07\nCost in iteration 225: 4.64536510299819e-07\nCost in iteration 250: 4.017338930501393e-07\nCost in iteration 275: 3.5110841320809306e-07\nCost in iteration 300: 3.096084004886465e-07\nCost in iteration 325: 2.752219772816687e-07\nCost in iteration 350: 2.466122217504442e-07\nCost in iteration 375: 2.2171404739100818e-07\nCost in iteration 400: 2.0072190143053269e-07\nCost in iteration 425: 1.8306021177634902e-07\nCost in iteration 450: 1.6681651877735875e-07\nCost in iteration 475: 1.5334951215981366e-07\nCost in iteration 500: 1.409261374589448e-07\nFinal cost : 1.409261374589448e-07\n\n
# Do another 100 iterations if you want to.\n# Repeated execution will continue with further 100 iterations.\nmodel_controlled.optimize(100)\nstate = model_controlled.get_xs()\ncontrol = model_controlled.control\nplot_oc_singlenode(duration, dt, state, target, control, target_input, model_controlled.cost_history)\n
\nCompute control for a deterministic system\nCost in iteration 0: 1.409261374589448e-07\nCost in iteration 25: 1.3051113114486073e-07\nCost in iteration 50: 1.2069164098268257e-07\nCost in iteration 75: 1.1215971283577606e-07\nCost in iteration 100: 1.0456327452784617e-07\nFinal cost : 1.0456327452784617e-07\n\n
cmat = np.array( [[0., 0.5], [1., 0.]] ) # diagonal elements are zero, connection strength is 1 (0.5) from node 0 to node 1 (from node 1 to node 0)\ndmat = np.array( [[0., 0.], [0., 0.]] ) # no delay\n\nmodel = WCModel(Cmat=cmat, Dmat=dmat)\n\n# we define the control input matrix to enable or disable certain channels and nodes\ncontrol_mat = np.zeros( (model.params.N, len(model.state_vars)) )\ncontrol_mat[0,0] = 1. # only allow inputs in x-channel in node 0\n\nmodel.params.K_gl = 5.\n\nmodel.params[\"duration\"] = duration\nzero_input = ZeroInput().generate_input(duration=duration+dt, dt=dt)\ninput = np.copy(zero_input)\ninput[0,1:-3] = np.sin(np.arange(0,duration-0.3, dt)) # other functions or random values can be used as well\nmodel.params[\"exc_init\"] = np.vstack( [x_init, x_init] )\nmodel.params[\"inh_init\"] = np.vstack( [y_init, y_init] )\n\n\n# We set the stimulus in x and y variables, and run the simulation\ninput_nw = np.concatenate( (np.vstack( [control_mat[0,0] * input, control_mat[0,1] * input] )[np.newaxis,:,:],\n np.vstack( [control_mat[1,0] * input, control_mat[1,1] * input] )[np.newaxis,:,:]), axis=0)\nzero_input_nw = np.concatenate( (np.vstack( [zero_input, zero_input] )[np.newaxis,:,:],\n np.vstack( [zero_input, zero_input] )[np.newaxis,:,:]), axis=0)\n\nmodel.params[\"exc_ext\"] = input_nw[:,0,:]\nmodel.params[\"inh_ext\"] = input_nw[:,1,:]\n\nmodel.run()\n\n# Define the result of the stimulation as target\ntarget = np.concatenate( (np.concatenate( (model.params[\"exc_init\"], model.params[\"inh_init\"]), axis=1)[:,:, np.newaxis], np.stack( (model.exc, model.inh), axis=1)), axis=2)\n\n# Remove stimuli and re-run the simulation\nmodel.params[\"exc_ext\"] = zero_input_nw[:,0,:]\nmodel.params[\"inh_ext\"] = zero_input_nw[:,0,:]\nmodel.run()\n\n# combine initial value and simulation result to one array\nstate = np.concatenate( (np.concatenate( (model.params[\"exc_init\"], model.params[\"inh_init\"]), axis=1)[:,:, np.newaxis], np.stack( (model.exc, 
model.inh), axis=1)), axis=2)\n\nplot_oc_network(model.params.N, duration, dt, state, target, zero_input_nw, input_nw)\n
# we define the precision matrix to specify in which nodes and channels we measure deviations from the target\ncost_mat = np.zeros( (model.params.N, len(model.output_vars)) )\ncost_mat[1,0] = 1. # only measure in x-channel in node 1\n\n# We set the external stimulation to zero. This is the \"initial guess\" for the OC algorithm\nmodel.params[\"exc_ext\"] = zero_input_nw[:,0,:]\nmodel.params[\"inh_ext\"] = zero_input_nw[:,1,:]\n\n# We load the optimal control class\n# print array (optional parameter) defines for which iterations intermediate results will be printed\n# Parameters will be taken from the input model\nmodel_controlled = oc_wc.OcWc(model, target, print_array=np.arange(0,501,25), control_matrix=control_mat, cost_matrix=cost_mat)\n\n# We run 500 iterations of the optimal control gradient descent algorithm\nmodel_controlled.optimize(500)\n\nstate = model_controlled.get_xs()\ncontrol = model_controlled.control\n\nplot_oc_network(model.params.N, duration, dt, state, target, control, input_nw, model_controlled.cost_history, model_controlled.step_sizes_history)\n
\nCompute control for a deterministic system\nCost in iteration 0: 8.117061134315108e-06\nCost in iteration 25: 4.0329637221407195e-07\nCost in iteration 50: 2.133706589679289e-07\nCost in iteration 75: 1.0846418185856119e-07\nCost in iteration 100: 6.237553898673198e-08\nCost in iteration 125: 3.607365058691262e-08\nCost in iteration 150: 2.2496421814207724e-08\nCost in iteration 175: 1.5886138922670738e-08\nCost in iteration 200: 1.1727415781910453e-08\nCost in iteration 225: 9.005487959890062e-09\nCost in iteration 250: 7.191281120908631e-09\nCost in iteration 275: 5.835744371001404e-09\nCost in iteration 300: 4.915806895112334e-09\nCost in iteration 325: 4.206672224203755e-09\nCost in iteration 350: 3.6916483993194285e-09\nCost in iteration 375: 3.2948161905145206e-09\nCost in iteration 400: 2.9837006122863342e-09\nCost in iteration 425: 2.7310136209212046e-09\nCost in iteration 450: 2.5267282859627983e-09\nCost in iteration 475: 2.352356874896669e-09\nCost in iteration 500: 2.2057268519628175e-09\nFinal cost : 2.2057268519628175e-09\n\n
# Do another 100 iterations if you want to.\n# Repeated execution will continue with further 100 iterations.\nmodel_controlled.optimize(100)\nstate = model_controlled.get_xs()\ncontrol = model_controlled.control\nplot_oc_network(model.params.N, duration, dt, state, target, control, input_nw, model_controlled.cost_history, model_controlled.step_sizes_history)\n
\nCompute control for a deterministic system\nCost in iteration 0: 2.2057268519628175e-09\nCost in iteration 25: 2.079569265893922e-09\nCost in iteration 50: 1.969986550217457e-09\nCost in iteration 75: 1.874389888067335e-09\nCost in iteration 100: 1.7855706988225455e-09\nFinal cost : 1.7855706988225455e-09\n\n
cmat = np.array( [[0., 0.], [1., 0.]] ) # diagonal elements are zero, connection strength is 1 from node 0 to node 1\ndmat = np.array( [[0., 0.], [18, 0.]] ) # distance from 0 to 1, delay is computed by dividing by the signal speed params.signalV\n\nmodel = WCModel(Cmat=cmat, Dmat=dmat)\n\nduration, dt = 2000., 0.1\nmodel.params.duration = duration\nmodel.params.dt = dt\nmodel.params.K_gl = 10.\n\nmodel.run()\n\ne0 = model.exc[0,-1]\ne1 = model.exc[1,-1]\ni0 = model.inh[0,-1]\ni1 = model.inh[1,-1]\n\nmaxdelay = model.getMaxDelay()\n\nmodel.params[\"exc_init\"] = np.array([[e0] * (maxdelay + 1), [e1] * (maxdelay + 1) ])\nmodel.params[\"inh_init\"] = np.array([[i0] * (maxdelay + 1), [i1] * (maxdelay + 1) ])\n\nduration = 6.\nmodel.params.duration = duration\nmodel.run()\n\n# we define the control input matrix to enable or disable certain channels and nodes\ncontrol_mat = np.zeros( (model.params.N, len(model.state_vars)) )\ncontrol_mat[0,0] = 1. # only allow inputs in E-channel in node 0\n\nzero_input = ZeroInput().generate_input(duration=duration+dt, dt=dt)\ninput = zero_input.copy()\ninput[0,10] = 1. \ninput[0,20] = 1.\ninput[0,30] = 1. 
# Three pulses as control input\n\ninput_nw = np.concatenate( (np.vstack( [control_mat[0,0] * input, control_mat[0,1] * input] )[np.newaxis,:,:],\n np.vstack( [control_mat[1,0] * input, control_mat[1,1] * input] )[np.newaxis,:,:]), axis=0)\nzero_input_nw = np.concatenate( (np.vstack( [zero_input, zero_input] )[np.newaxis,:,:],\n np.vstack( [zero_input, zero_input] )[np.newaxis,:,:]), axis=0)\n\nmodel.params[\"exc_ext\"] = input_nw[:,0,:]\nmodel.params[\"inh_ext\"] = input_nw[:,1,:]\nmodel.run()\n\n# Define the result of the stimulation as target\ntarget = np.concatenate( (np.stack( (model.params[\"exc_init\"][:,-1], model.params[\"inh_init\"][:,-1]), axis=1)[:,:, np.newaxis], np.stack( (model.exc, model.inh), axis=1)), axis=2)\n\n# Remove stimuli and re-run the simulation\nmodel.params[\"exc_ext\"] = zero_input_nw[:,0,:]\nmodel.params[\"inh_ext\"] = zero_input_nw[:,0,:]\nmodel.run()\n\n# combine initial value and simulation result to one array\nstate = np.concatenate( (np.stack( (model.params[\"exc_init\"][:,-1], model.params[\"inh_init\"][:,-1]), axis=1)[:,:, np.newaxis], np.stack( (model.exc, model.inh), axis=1)), axis=2)\nplot_oc_network(model.params.N, duration, dt, state, target, zero_input_nw, input_nw)\n
# We load the optimal control class\n# print array (optional parameter) defines, for which iterations intermediate results will be printed\n# Parameters will be taken from the input model\nmodel_controlled = oc_wc.OcWc(model, target, print_array=np.arange(0,501,25), control_matrix=control_mat, cost_matrix=cost_mat)\n\n# We run 500 iterations of the optimal control gradient descent algorithm\nmodel_controlled.optimize(500)\n\nstate = model_controlled.get_xs()\ncontrol = model_controlled.control\nplot_oc_network(model.params.N, duration, dt, state, target, control, input_nw, model_controlled.cost_history, model_controlled.step_sizes_history)\n
\nCompute control for a deterministic system\nCost in iteration 0: 1.792835053390993e-07\nCost in iteration 25: 3.224858708247228e-10\nCost in iteration 50: 1.0235990384283723e-10\nCost in iteration 75: 8.627681277851615e-11\nCost in iteration 100: 8.09708890397755e-11\nCost in iteration 125: 6.901547805762654e-11\nCost in iteration 150: 6.563898918059379e-11\nCost in iteration 175: 6.358322097910284e-11\nCost in iteration 200: 5.819126634851626e-11\nCost in iteration 225: 5.598411882794661e-11\nCost in iteration 250: 5.458351655389417e-11\nCost in iteration 275: 5.101837452145287e-11\nCost in iteration 300: 4.9526343719852504e-11\nCost in iteration 325: 4.872279762423021e-11\nCost in iteration 350: 4.599347400927492e-11\nCost in iteration 375: 4.5049466495032303e-11\nCost in iteration 400: 4.32863678958512e-11\nCost in iteration 425: 4.241565430129624e-11\nCost in iteration 450: 4.121896579349796e-11\nCost in iteration 475: 4.036542019862459e-11\nCost in iteration 500: 3.990804399212831e-11\nFinal cost : 3.990804399212831e-11\n\n
# Perform another 100 iterations to improve the result.\n# Repeated execution will continue with further 100 iterations.\n# Convergence to the input stimulus is relatively slow for the WC model.\nmodel_controlled.optimize(100)\nstate = model_controlled.get_xs()\ncontrol = model_controlled.control\nplot_oc_network(model.params.N, duration, dt, state, target, control, input_nw, model_controlled.cost_history, model_controlled.step_sizes_history)\n
\nCompute control for a deterministic system\nCost in iteration 0: 3.990804399212831e-11\nCost in iteration 25: 3.8701660107380814e-11\nCost in iteration 50: 3.8275743610357815e-11\nCost in iteration 75: 3.731362663528545e-11\nCost in iteration 100: 3.694171527929222e-11\nFinal cost : 3.694171527929222e-11\n\n
\n
"},{"location":"examples/example-5.2-oc-wc-model-deterministic/#optimal-control-of-the-wilson-cowan-model","title":"Optimal control of the Wilson-Cowan model","text":"
This notebook shows how to compute the optimal control (OC) signal for the Wilson-Cowan model for a simple example task.
Let us now study a simple 2-node network of model oscillators. We first define the coupling matrix and the distance matrix. We can then initialize the model.
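How the coupling matrix enters the node dynamics can be sketched as follows (a simplified toy coupling term, not the full Wilson-Cowan equations; the convention assumed here, matching the comments in the code, is that `Cmat[i, j]` holds the connection strength from node j to node i, scaled by the global coupling `K_gl`):

```python
import numpy as np

# Toy 2-node coupling sketch: Cmat[i, j] is the connection strength
# from node j to node i, scaled by the global coupling K_gl
cmat = np.array([[0.0, 0.5], [1.0, 0.0]])
K_gl = 5.0
x = np.array([0.2, 0.1])  # current activity of both nodes

# Network input each node receives: node 0 gets 0.5 * x[1], node 1 gets 1.0 * x[0]
coupling_input = K_gl * cmat @ x
```

With this convention, node 1 receives the strongest input because the connection from node 0 to node 1 has weight 1.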
"},{"location":"examples/example-5.2-oc-wc-model-deterministic/#delayed-network-of-neural-populations","title":"Delayed network of neural populations","text":"
We now consider a network topology with delayed signalling between the two nodes.
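The translation of distances into discrete delays can be sketched like this (an assumed convention: delay = distance / signal speed, rounded to integer time steps; see `model.getMaxDelay()` for the actual neurolib implementation, which may differ in rounding details):

```python
import numpy as np

# Hypothetical conversion from distances to integer delay steps
dmat = np.array([[0.0, 0.0], [18.0, 0.0]])  # distance from node 0 to node 1
signalV = 20.0  # assumed signal speed (model parameter params.signalV)
dt = 0.1

delay_ms = dmat / signalV                      # delay in ms
delay_steps = np.around(delay_ms / dt).astype(int)  # delay in time steps
max_delay = int(delay_steps.max())
```

The maximal delay determines how many past time steps of activity the model has to keep, which is why the initial state arrays below have `maxdelay + 1` columns.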
"},{"location":"examples/example-5.3-oc-wc-model-noisy/","title":"Example 5.3 oc wc model noisy","text":"
import matplotlib.pyplot as plt\nimport numpy as np\nimport os\n\nwhile os.getcwd().split(os.sep)[-1] != \"neurolib\":\n os.chdir('..')\n\n# We import the model, stimuli, and the optimal control package\nfrom neurolib.models.wc import WCModel\nfrom neurolib.utils.stimulus import ZeroInput\nfrom neurolib.control.optimal_control import oc_wc\nfrom neurolib.utils.plot_oc import plot_oc_singlenode, plot_oc_network\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2\n
We stimulate the system with a known control signal, define the resulting activity as target, and compute the optimal control for this target. We define weights such that precision is penalized only (w_p=1, w_2=0). Hence, the optimal control signal should converge to the input signal.
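The weights parameterize the assumed quadratic cost functional: a precision term penalizing deviation from the target and an energy term penalizing control strength. A minimal sketch of such a cost (hypothetical simplified form; the exact normalization in neurolib's optimal control package may differ):

```python
import numpy as np

def oc_cost(state, target, control, dt, w_p=1.0, w_2=0.0):
    """Quadratic OC cost: precision term plus control-energy term.

    state, target, control: arrays of shape (nodes, variables, time)
    """
    precision = 0.5 * w_p * np.sum((state - target) ** 2) * dt
    energy = 0.5 * w_2 * np.sum(control ** 2) * dt
    return precision + energy

# With w_p=1 and w_2=0, a control that reproduces the target exactly has zero cost
dt = 0.1
target = np.zeros((1, 2, 100))
state = np.zeros((1, 2, 100))
control = np.ones((1, 2, 100))
cost = oc_cost(state, target, control, dt, w_p=1.0, w_2=0.0)
```

Because only precision is penalized here, the energy of the control signal is unconstrained and the optimum is the input signal itself.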
# We import the model\nmodel = WCModel()\n\n# Set noise strength to zero to define target state\nmodel.params.sigma_ou = 0.\n\n# Some parameters to define stimulation signals\ndt = model.params[\"dt\"]\nduration = 40.\namplitude = 1.\nperiod = duration / 4.\n\n# We define a \"zero-input\", and a sine-input\nzero_input = ZeroInput().generate_input(duration=duration+dt, dt=dt)\ninput = np.copy(zero_input)\ninput[0,1:-1] = amplitude * np.sin(2.*np.pi*np.arange(0,duration-0.1, dt)/period) # other functions or random values can be used as well\n\n# We set the duration of the simulation and the initial values\nmodel.params[\"duration\"] = duration\nx_init = 0.011225367461896877\ny_init = 0.013126741089502588\nmodel.params[\"exc_init\"] = np.array([[x_init]])\nmodel.params[\"inh_init\"] = np.array([[y_init]])\n\n# We set the stimulus in x and y variables, and run the simulation\nmodel.params[\"exc_ext\"] = input\nmodel.params[\"inh_ext\"] = zero_input\nmodel.run()\n\n# Define the result of the stimulation as target\ntarget = np.concatenate((np.concatenate( (model.params[\"exc_init\"], model.params[\"inh_init\"]), axis=1)[:,:, np.newaxis],\n np.stack( (model.exc, model.inh), axis=1)), axis=2)\ntarget_input = np.concatenate( (input,zero_input), axis=0)[np.newaxis,:,:]\n\n# Remove stimuli and re-run the simulation\n# Change sigma_ou_parameter to adjust the noise strength\nmodel.params['sigma_ou'] = 0.1\nmodel.params['tau_ou'] = 1.\nmodel.params[\"exc_ext\"] = zero_input\nmodel.params[\"inh_ext\"] = zero_input\ncontrol = np.concatenate( (zero_input,zero_input), axis=0)[np.newaxis,:,:]\nmodel.run()\n\n# combine initial value and simulation result to one array\nstate = np.concatenate((np.concatenate( (model.params[\"exc_init\"], model.params[\"inh_init\"]), axis=1)[:,:, np.newaxis],\n np.stack( (model.exc, model.inh), axis=1)), axis=2)\n\nplot_oc_singlenode(duration, dt, state, target, control, target_input)\n
The target is a periodic oscillation of the x and y variables (computed in the deterministic, noise-free system).
The noisy system without stimulation fluctuates around zero.
For the optimization, you can now set several new parameters:
- M: the number of noise realizations that the algorithm averages over. Default=1
- M_validation: the number of noise realizations the final cost is computed from. Default=1000
- validate_per_step: if True, the cost for each step is computed by averaging over M_validation instead of M realizations; this takes much longer. Default=False
- method: determines how the noise averages are computed. Results may vary between methods depending on the specific task. Choose from ['3']. Default='3'
Please note:
- A higher number of iterations does not guarantee better results for computations in noisy systems. The cost will level off at some iteration number and start increasing again afterwards. Make sure not to perform too many iterations.
- M and M_validation should increase with the sigma_ou model parameter.
- validate_per_step does not impact the control result.
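The role of M can be illustrated with a toy example (not the neurolib implementation): averaging a noisy cost estimate over more realizations yields a more reliable value, which is why M_validation is chosen larger than M.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_cost(control, sigma_ou=0.1):
    """Toy cost: squared deviation of a noise-corrupted readout from zero."""
    noise = sigma_ou * rng.standard_normal(control.shape)
    return float(np.mean((control + noise) ** 2))

control = np.zeros(100)

# Average over M realizations, as the OC algorithm does per iteration
M = 20
mean_cost_M20 = np.mean([noisy_cost(control) for _ in range(M)])

# A larger M_validation gives a more reliable final estimate
M_validation = 500
mean_cost_M500 = np.mean([noisy_cost(control) for _ in range(M_validation)])
```

Here the expected cost is sigma_ou^2 = 0.01; the estimate from 500 realizations is much closer to it on average than the one from 20.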
Let's first optimize with the following parameters: M=20, iterations=100
# We set the external stimulation to zero. This is the \"initial guess\" for the OC algorithm\nmodel.params[\"exc_ext\"] = zero_input\nmodel.params[\"inh_ext\"] = zero_input\n\n# We load the optimal control class\n# print array (optional parameter) defines, for which iterations intermediate results will be printed\n# Parameters will be taken from the input model\nmodel_controlled = oc_wc.OcWc(model, target, print_array=np.arange(0,101,10),\n M=20, M_validation=500, validate_per_step=True)\n\n# We run 100 iterations of the optimal control gradient descent algorithm\nmodel_controlled.optimize(100)\n\nstate = model_controlled.get_xs()\ncontrol = model_controlled.control\n\nplot_oc_singlenode(duration, dt, state, target, control, target_input, model_controlled.cost_history)\n
\nCompute control for a noisy system\nMean cost in iteration 0: 0.0486299027821106\nMean cost in iteration 10: 0.02795683316984877\nMean cost in iteration 20: 0.027101411958439722\nMean cost in iteration 30: 0.026543919519260453\nMean cost in iteration 40: 0.026707819124178123\nMean cost in iteration 50: 0.026786489900410732\nMean cost in iteration 60: 0.026412584686262147\nMean cost in iteration 70: 0.026425089398826186\nMean cost in iteration 80: 0.026760368474147204\nMean cost in iteration 90: 0.026954163211574594\nMean cost in iteration 100: 0.027106734179733114\nMinimal cost found at iteration 36\nFinal cost validated with 500 noise realizations : 0.02719992592343364\n\n
Let's do the same thing with different parameters: M=100, iterations=30
# We set the external stimulation to zero. This is the \"initial guess\" for the OC algorithm\nmodel.params[\"exc_ext\"] = zero_input\nmodel.params[\"inh_ext\"] = zero_input\n\n# We load the optimal control class\n# print array (optional parameter) defines, for which iterations intermediate results will be printed\n# Parameters will be taken from the input model\nmodel_controlled = oc_wc.OcWc(model, target,print_array=np.arange(0,31,5),\n M=100, M_validation=500, validate_per_step=True)\n\n# We run 30 iterations of the optimal control gradient descent algorithm\nmodel_controlled.optimize(30)\n\nstate = model_controlled.get_xs()\ncontrol = model_controlled.control\n\nplot_oc_singlenode(duration, dt, state, target, control, target_input, model_controlled.cost_history)\n
\nCompute control for a noisy system\nMean cost in iteration 0: 0.044519683319845585\nMean cost in iteration 5: 0.049139417017223554\nMean cost in iteration 10: 0.050857609671347954\nMean cost in iteration 15: 0.04663531486878592\nMean cost in iteration 20: 0.046747345271133535\nMean cost in iteration 25: 0.05112611753258763\nMean cost in iteration 30: 0.04785865829049892\nMinimal cost found at iteration 27\nFinal cost validated with 500 noise realizations : 0.045416281905513174\n\n
Let us now study a simple 2-node network of model oscillators. We first define the coupling matrix and the delay matrix. We can then initialize the model.
cmat = np.array( [[0., 0.5], [1.0, 0.]] ) # diagonal elements are zero, connection strength is 1 (0.5) from node 0 to node 1 (from node 1 to node 0)\ndmat = np.array( [[0., 0.], [0., 0.]] ) # no delay\n\nmodel = WCModel(Cmat=cmat, Dmat=dmat)\n\n# we define the control input matrix to enable or disable certain channels and nodes\ncontrol_mat = np.zeros( (model.params.N, len(model.state_vars)) )\ncontrol_mat[0,0] = 1. # only allow inputs in x-channel in node 0\n\nmodel.params.K_gl = 10.\n\n# Set noise strength to zero to define target state\nmodel.params['sigma_ou'] = 0.\n\nmodel.params[\"duration\"] = duration\nzero_input = ZeroInput().generate_input(duration=duration+dt, dt=dt)\ninput = np.copy(zero_input)\ninput[0,1:-1] = amplitude * np.sin(2.*np.pi*np.arange(0,duration-0.1, dt)/period) # other functions or random values can be used as well\nmodel.params[\"exc_init\"] = np.vstack( [0.01255381969006173, 0.01190300495001282] )\nmodel.params[\"inh_init\"] = np.vstack( [0.013492631513639169, 0.013312224583806076] )\n\n\n# We set the stimulus in x and y variables, and run the simulation\ninput_nw = np.concatenate( (np.vstack( [control_mat[0,0] * input, control_mat[0,1] * input] )[np.newaxis,:,:],\n np.vstack( [control_mat[1,0] * input, control_mat[1,1] * input] )[np.newaxis,:,:]), axis=0)\nzero_input_nw = np.concatenate( (np.vstack( [zero_input, zero_input] )[np.newaxis,:,:],\n np.vstack( [zero_input, zero_input] )[np.newaxis,:,:]), axis=0)\n\nmodel.params[\"exc_ext\"] = input_nw[:,0,:]\nmodel.params[\"inh_ext\"] = input_nw[:,1,:]\n\nmodel.run()\n\n# Define the result of the stimulation as target\ntarget = np.concatenate( (np.concatenate( (model.params[\"exc_init\"], model.params[\"inh_init\"]), axis=1)[:,:, np.newaxis], np.stack( (model.exc, model.inh), axis=1)), axis=2)\n\n# Remove stimuli and re-run the simulation\nmodel.params['sigma_ou'] = 0.03\nmodel.params['tau_ou'] = 1.\nmodel.params[\"exc_ext\"] = zero_input_nw[:,0,:]\nmodel.params[\"inh_ext\"] = 
zero_input_nw[:,0,:]\nmodel.run()\n\n# combine initial value and simulation result to one array\nstate = np.concatenate( (np.concatenate( (model.params[\"exc_init\"], model.params[\"inh_init\"]), axis=1)[:,:, np.newaxis], np.stack( (model.exc, model.inh), axis=1)), axis=2)\n\nplot_oc_network(model.params.N, duration, dt, state, target, zero_input_nw, input_nw)\n
Let's optimize with the following parameters: M=20, iterations=100
# we define the precision matrix to specify in which nodes and channels we measure deviations from the target\ncost_mat = np.zeros( (model.params.N, len(model.output_vars)) )\ncost_mat[1,0] = 1. # only measure in x-channel in node 1\n\n# We set the external stimulation to zero. This is the \"initial guess\" for the OC algorithm\nmodel.params[\"exc_ext\"] = zero_input_nw[:,0,:]\nmodel.params[\"inh_ext\"] = zero_input_nw[:,1,:]\n\n# We load the optimal control class\n# print array (optional parameter) defines for which iterations intermediate results will be printed\n# Parameters will be taken from the input model\nmodel_controlled = oc_wc.OcWc(model,\n target,\n print_array=np.arange(0,101,10),\n control_matrix=control_mat,\n cost_matrix=cost_mat,\n M=20,\n M_validation=500,\n validate_per_step=True)\n\n# We run 100 iterations of the optimal control gradient descent algorithm\nmodel_controlled.optimize(100)\n\nstate = model_controlled.get_xs()\ncontrol = model_controlled.control\n\nplot_oc_network(model.params.N, duration, dt, state, target, control, input_nw, model_controlled.cost_history, model_controlled.step_sizes_history)\n
\nCompute control for a noisy system\nMean cost in iteration 0: 0.0161042019653286\nMean cost in iteration 10: 0.029701202083900886\nMean cost in iteration 20: 0.02055100392146934\nMean cost in iteration 30: 0.01824138412316584\nMean cost in iteration 40: 0.01774943248604246\nMean cost in iteration 50: 0.00938616563892467\nMean cost in iteration 60: 0.013815979179667275\nMean cost in iteration 70: 0.011677029951767951\nMean cost in iteration 80: 0.03103645422939053\nMean cost in iteration 90: 0.018355469642118635\nMean cost in iteration 100: 0.021407393453975455\nMinimal cost found at iteration 67\nFinal cost validated with 500 noise realizations : 0.02038125379192151\n\n
Let's do the same thing with different parameters: M=100, iterations=30
# We set the external stimulation to zero. This is the \"initial guess\" for the OC algorithm\nmodel.params[\"exc_ext\"] = zero_input_nw[:,0,:]\nmodel.params[\"inh_ext\"] = zero_input_nw[:,1,:]\n\n# We load the optimal control class\n# print array (optional parameter) defines for which iterations intermediate results will be printed\n# Parameters will be taken from the input model\nmodel_controlled = oc_wc.OcWc(model,\n target,\n print_array=np.arange(0,31,5),\n control_matrix=control_mat,\n cost_matrix=cost_mat,\n M=100,\n M_validation=500,\n validate_per_step=True)\n\n# We run 30 iterations of the optimal control gradient descent algorithm\nmodel_controlled.optimize(30)\n\nstate = model_controlled.get_xs()\ncontrol = model_controlled.control\n\nplot_oc_network(model.params.N, duration, dt, state, target, control, input_nw, model_controlled.cost_history, model_controlled.step_sizes_history)\n
\nCompute control for a noisy system\nMean cost in iteration 0: 0.01775755329403377\nMean cost in iteration 5: 0.010280452998278504\nMean cost in iteration 10: 0.01594708289308906\nMean cost in iteration 15: 0.028644745813145765\nMean cost in iteration 20: 0.030889247442364865\nMean cost in iteration 25: 0.02629869930972565\nMean cost in iteration 30: 0.017322464091192105\nMinimal cost found at iteration 21\nFinal cost validated with 500 noise realizations : 0.04481574197020663\n\n
\n
"},{"location":"examples/example-5.3-oc-wc-model-noisy/#optimal-control-of-the-noisy-wilson-cowan-odel","title":"Optimal control of the noisy Wilson-Cowan model","text":"
This notebook shows how to compute the optimal control (OC) signal for the noisy WC model for a simple example task.
"},{"location":"examples/example-5.3-oc-wc-model-noisy/#network-case","title":"Network case","text":""},{"location":"examples/example-5.4-oc-aln-model-deterministic/","title":"Example 5.4 oc aln model deterministic","text":"
import matplotlib.pyplot as plt\nimport numpy as np\nimport os\n\nwhile os.getcwd().split(os.sep)[-1] != \"neurolib\":\n os.chdir('..')\n\n# We import the model, stimuli, and the optimal control package\nfrom neurolib.models.aln import ALNModel\nfrom neurolib.utils.stimulus import ZeroInput\nfrom neurolib.control.optimal_control import oc_aln\nfrom neurolib.utils.plot_oc import plot_oc_singlenode, plot_oc_network\n\n# This will reload all imports as soon as the code changes\n%load_ext autoreload\n%autoreload 2\n\n\n# This function reads out the final state of a simulation\ndef getfinalstate(model):\n N = model.params.Cmat.shape[0]\n V = len(model.state_vars)\n T = model.getMaxDelay() + 1\n state = np.zeros((N, V, T))\n for v in range(V):\n if \"rates\" in model.state_vars[v] or \"IA\" in model.state_vars[v]:\n for n in range(N):\n state[n, v, :] = model.state[model.state_vars[v]][n, -T:]\n else:\n for n in range(N):\n state[n, v, :] = model.state[model.state_vars[v]][n]\n return state\n\n\ndef setinitstate(model, state):\n N = model.params.Cmat.shape[0]\n V = len(model.init_vars)\n T = model.getMaxDelay() + 1\n\n for n in range(N):\n for v in range(V):\n if \"rates\" in model.init_vars[v] or \"IA\" in model.init_vars[v]:\n model.params[model.init_vars[v]] = state[:, v, -T:]\n else:\n model.params[model.init_vars[v]] = state[:, v, -1]\n\n return\n\ndef getstate(model):\n state = np.concatenate( ( np.concatenate((model.params[\"rates_exc_init\"][:, np.newaxis, -1],\n model.params[\"rates_inh_init\"][:, np.newaxis, -1],\n model.params[\"IA_init\"][:, np.newaxis, -1], ), axis=1, )[:, :, np.newaxis],\n np.stack((model.rates_exc, model.rates_inh, model.IA), axis=1),),axis=2, )\n\n return state\n
We stimulate the system with a known control signal, define the resulting activity as target, and compute the optimal control for this target. We define weights such that precision is penalized only (w_p=1, w_2=0). Hence, the optimal control signal should converge to the input signal.
We first study current inputs. We will later proceed to rate inputs.
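The optimization loop itself is plain gradient descent on the cost. Schematically (a toy quadratic example, not the adjoint-based gradient computation that the OC package actually performs):

```python
def optimize(control, cost, grad, n_iter=100, step=0.1):
    """Toy gradient descent: repeatedly step against the cost gradient."""
    cost_history = []
    for _ in range(n_iter):
        control = control - step * grad(control)
        cost_history.append(cost(control))
    return control, cost_history

# Minimize f(u) = (u - 3)^2 with gradient 2 * (u - 3)
u_opt, history = optimize(
    10.0,
    cost=lambda u: (u - 3.0) ** 2,
    grad=lambda u: 2.0 * (u - 3.0),
    n_iter=200,
    step=0.1,
)
```

The printed "Cost in iteration ..." lines below correspond to such a monotonically decreasing cost history; neurolib additionally adapts the step size per iteration.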
# We import the model\nmodel = ALNModel()\nmodel.params.duration = 10000\nmodel.params.mue_ext_mean = 2. # up state\nmodel.run()\nsetinitstate(model, getfinalstate(model))\n\n# Some parameters to define stimulation signals\ndt = model.params[\"dt\"]\nduration = 10.\namplitude = 1.\nperiod = duration /4.\n\n# We define a \"zero-input\", and a sine-input\nzero_input = ZeroInput().generate_input(duration=duration+dt, dt=dt)\ninput = np.copy(zero_input)\ninput[0,1:-1] = amplitude * np.sin(2.*np.pi*np.arange(0,duration-0.1, dt)/period) # other functions or random values can be used as well\n\n# We set the duration of the simulation and the initial values\nmodel.params[\"duration\"] = duration\n\n# We set the stimulus in x and y variables, and run the simulation\nmodel.params[\"ext_exc_current\"] = input\nmodel.params[\"ext_inh_current\"] = zero_input\nmodel.params[\"ext_exc_rate\"] = zero_input\nmodel.params[\"ext_inh_rate\"] = zero_input\n\nmodel.run()\n\n# Define the result of the stimulation as target\ntarget = getstate(model)\ntarget_input = np.concatenate( (input, zero_input, zero_input, zero_input), axis=0)[np.newaxis,:,:]\n\n# Remove stimuli and re-run the simulation\nmodel.params[\"ext_exc_current\"] = zero_input\nmodel.params[\"ext_inh_current\"] = zero_input\ncontrol = np.concatenate( (zero_input, zero_input, zero_input, zero_input), axis=0)[np.newaxis,:,:]\nmodel.run()\n\n# combine initial value and simulation result to one array\nstate = getstate(model)\n\nplot_oc_singlenode(duration, dt, state, target, control, target_input)\n
# We load the optimal control class\n# print array (optional parameter) defines, for which iterations intermediate results will be printed\n# Parameters will be taken from the input model\ncontrol_mat = np.zeros((1,len(model.input_vars)))\ncontrol_mat[0,0] = 1.\ncost_mat = np.zeros((1,len(model.output_vars)))\ncost_mat[0,0] = 1.\n\nmodel_controlled = oc_aln.OcAln(model, target, print_array=np.arange(0,501,25), control_matrix=control_mat, cost_matrix=cost_mat)\nmodel_controlled.weights[\"w_p\"] = 1. # default value 1\nmodel_controlled.weights[\"w_2\"] = 0. # default value 0\n\n# We run 500 iterations of the optimal control gradient descent algorithm\nmodel_controlled.optimize(500)\n\nstate = model_controlled.get_xs()\ncontrol = model_controlled.control\n\nplot_oc_singlenode(duration, dt, state, target, control, target_input, model_controlled.cost_history)\n
\nCompute control for a deterministic system\nCost in iteration 0: 314.1247597247194\nCost in iteration 25: 0.13317432824531167\nCost in iteration 50: 0.025934764241784855\nCost in iteration 75: 0.010689714898934012\nCost in iteration 100: 0.006042649711908977\nCost in iteration 125: 0.003852074448389804\nCost in iteration 150: 0.0026454397557471756\nCost in iteration 175: 0.0019048498068881534\nCost in iteration 200: 0.0014175325285176437\nCost in iteration 225: 0.0010832777739798686\nCost in iteration 250: 0.0008270405756069322\nCost in iteration 275: 0.000647747907643482\nCost in iteration 300: 0.0005135789763737352\nCost in iteration 325: 0.00041166220430455887\nCost in iteration 350: 0.00033334319584000865\nCost in iteration 375: 0.0002682483135493626\nCost in iteration 400: 0.00021897331522083166\nCost in iteration 425: 0.0001797951466810639\nCost in iteration 450: 0.0001484385297291106\nCost in iteration 475: 0.00012322292996632452\nCost in iteration 500: 0.0001019978308262297\nFinal cost : 0.0001019978308262297\n\n
# Do another 100 iterations if you want to.\n# Repeated execution will continue with further 100 iterations.\nmodel_controlled.optimize(100)\nstate = model_controlled.get_xs()\ncontrol = model_controlled.control\nplot_oc_singlenode(duration, dt, state, target, control, target_input, model_controlled.cost_history)\n
\nCompute control for a deterministic system\nCost in iteration 0: 0.0001019978308262297\nCost in iteration 25: 8.503577269809191e-05\nCost in iteration 50: 7.113629148054069e-05\nCost in iteration 75: 5.970536946996868e-05\nCost in iteration 100: 5.02763560369055e-05\nFinal cost : 5.02763560369055e-05\n\n
Let us now look at a scenario with rate-type control inputs
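Note the unit convention: the stimulus is defined in Hz, but rate inputs to the model are expected in kHz, hence the factor 1e-3 when assigning ext_exc_rate. A minimal sketch (amplitude, offset, and period values mirror this example):

```python
import numpy as np

# Rate stimulus defined in Hz: offset plus sine, as in this example
amplitude_hz = 40.0
offset_hz = 60.0
period = 2.5  # duration / 4 with duration = 10.
t = np.arange(0.0, 10.0, 0.1)
rate_hz = offset_hz + amplitude_hz * np.sin(2.0 * np.pi * t / period)

# Convert to kHz before passing it to the model
rate_khz = rate_hz * 1e-3
```

Keeping the offset larger than the amplitude ensures the rate input stays positive, as a rate must.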
amplitude = 40.\noffset = 60.\nperiod = duration /4.\n\n# We define a \"zero-input\", and a sine-input\nzero_input = ZeroInput().generate_input(duration=duration+dt, dt=dt)\ninput = np.copy(zero_input)\ninput[0,1:-1] = offset + amplitude * np.sin(2.*np.pi*np.arange(0,duration-0.1, dt)/period) # other functions or random values can be used as well\n\n# We set the stimulus in x and y variables, and run the simulation\nmodel.params[\"ext_exc_current\"] = zero_input\nmodel.params[\"ext_inh_current\"] = zero_input\nmodel.params[\"ext_exc_rate\"] = input * 1e-3 # rate inputs need to be converted to kHz\nmodel.params[\"ext_inh_rate\"] = zero_input\n\nmodel.run()\n\n# Define the result of the stimulation as target\ntarget = getstate(model)\ntarget_input = np.concatenate( (zero_input, zero_input, input, zero_input), axis=0)[np.newaxis,:,:]\n\n# Remove stimuli and re-run the simulation\nmodel.params[\"ext_exc_rate\"] = zero_input\ncontrol = np.concatenate( (zero_input, zero_input, zero_input, zero_input), axis=0)[np.newaxis,:,:]\nmodel.run()\n\n# combine initial value and simulation result to one array\nstate = getstate(model)\n\nplot_oc_singlenode(duration, dt, state, target, control, target_input, plot_control_vars=[2,3])\n
# Control matrix needs to be adjusted for rate inputs\ncontrol_mat = np.zeros((1,len(model.input_vars)))\ncontrol_mat[0,2] = 1.\n\nmodel_controlled = oc_aln.OcAln(model, target, print_array=np.arange(0,501,25), control_matrix=control_mat, cost_matrix=cost_mat)\nmodel_controlled.weights[\"w_p\"] = 1. # default value 1\nmodel_controlled.weights[\"w_2\"] = 0. # default value 0\n\n# We run 500 iterations of the optimal control gradient descent algorithm\nmodel_controlled.optimize(500)\n\nstate = model_controlled.get_xs()\ncontrol = model_controlled.control\n\nplot_oc_singlenode(duration, dt, state, target, control*1e3, target_input, model_controlled.cost_history, plot_control_vars=[2,3])\n
\nCompute control for a deterministic system\nCost in iteration 0: 27.349397232974408\nCost in iteration 25: 0.0006390076320320428\nCost in iteration 50: 0.00014311978667798868\nCost in iteration 75: 8.017957661471726e-05\nCost in iteration 100: 5.679617359217007e-05\nCost in iteration 125: 4.306794192661556e-05\nCost in iteration 150: 3.376433119895472e-05\nCost in iteration 175: 2.7066420641127278e-05\nCost in iteration 200: 2.2059610014723193e-05\nCost in iteration 225: 1.8212160897041168e-05\nCost in iteration 250: 1.5191277735291038e-05\nCost in iteration 275: 1.2778303406474285e-05\nCost in iteration 300: 1.0888696043551817e-05\nCost in iteration 325: 9.243703911351409e-06\nCost in iteration 350: 7.899581967191086e-06\nCost in iteration 375: 6.787562684851147e-06\nCost in iteration 400: 5.859013881863671e-06\nCost in iteration 425: 5.077487368901499e-06\nCost in iteration 450: 4.439379983051779e-06\nCost in iteration 475: 3.85899283693207e-06\nCost in iteration 500: 3.3690715490197364e-06\nFinal cost : 3.3690715490197364e-06\n\n
# Do another 100 iterations if you want to.\n# Repeated execution will continue with a further 100 iterations.\nmodel_controlled.optimize(100)\nstate = model_controlled.get_xs()\ncontrol = model_controlled.control\nplot_oc_singlenode(duration, dt, state, target, control*1e3, target_input, model_controlled.cost_history, plot_control_vars=[2,3])\n
\nCompute control for a deterministic system\nCost in iteration 0: 3.3690715490197364e-06\nCost in iteration 25: 2.9515384676759174e-06\nCost in iteration 50: 2.593417209868494e-06\nCost in iteration 75: 2.2845622320483142e-06\nCost in iteration 100: 2.024231674713015e-06\nFinal cost : 2.024231674713015e-06\n\n
cmat = np.array( [[0., 0.5], [1., 0.]] ) # diagonal elements are zero, connection strength is 1 (0.5) from node 0 to node 1 (from node 1 to node 0)\ndmat = np.array( [[0., 0.], [0., 0.]] ) # no delay\n\nmodel = ALNModel(Cmat=cmat, Dmat=dmat)\nmodel.params.duration = 10000\nmodel.params.mue_ext_mean = 2. # up state\nmodel.params.de = 0.0\nmodel.params.di = 0.0\nmodel.run()\nsetinitstate(model, getfinalstate(model))\n\n# We define the control input matrix to enable or disable certain channels and nodes\ncontrol_mat = np.zeros( (model.params.N, len(model.input_vars)) )\ncontrol_mat[0,0] = 1. # only allow inputs in the E-channel (excitatory current) of node 0\n\namplitude = 1.\nmodel.params[\"duration\"] = duration\nzero_input = ZeroInput().generate_input(duration=duration+dt, dt=dt)\ninput = np.copy(zero_input)\ninput[0,1:-3] = amplitude * np.sin(2.*np.pi*np.arange(0,duration-0.3, dt)/period) # other functions or random values can be used as well\n\n# We stack the stimulus into the network input arrays and run the simulation\ninput_nw = np.concatenate( (np.vstack( [control_mat[0,0] * input, control_mat[0,1] * input, control_mat[0,2] * input, control_mat[0,3] * input] )[np.newaxis,:,:],\n np.vstack( [control_mat[1,0] * input, control_mat[1,1] * input, control_mat[1,2] * input, control_mat[1,3] * input] )[np.newaxis,:,:]), axis=0)\nzero_input_nw = np.concatenate( (np.vstack( [zero_input, zero_input, zero_input, zero_input] )[np.newaxis,:,:],\n np.vstack( [zero_input, zero_input, zero_input, zero_input] )[np.newaxis,:,:]), axis=0)\n\nmodel.params[\"ext_exc_current\"] = input_nw[:,0,:]\nmodel.params[\"ext_inh_current\"] = input_nw[:,1,:]\nmodel.params[\"ext_exc_rate\"] = input_nw[:,2,:]\nmodel.params[\"ext_inh_rate\"] = input_nw[:,3,:]\nmodel.run()\n\n# Define the result of the stimulation as target\ntarget = getstate(model)\n\n# Remove stimuli and re-run the simulation\nmodel.params[\"ext_exc_current\"] = zero_input_nw[:,0,:]\nmodel.params[\"ext_inh_current\"] = zero_input_nw[:,1,:]\nmodel.params[\"ext_exc_rate\"] = zero_input_nw[:,2,:]\nmodel.params[\"ext_inh_rate\"] = zero_input_nw[:,3,:]\nmodel.run()\n\n# Combine initial value and simulation result into one array\nstate = getstate(model)\nplot_oc_network(model.params.N, duration, dt, state, target, zero_input_nw, input_nw)\n
# We define the precision matrix to specify in which nodes and channels deviations from the target are measured\ncost_mat = np.zeros( (model.params.N, len(model.output_vars)) )\ncost_mat[1,0] = 1. # only measure the excitatory rate of node 1\n\n# We set the external stimulation to zero. This is the \"initial guess\" for the OC algorithm\nmodel.params[\"ext_exc_current\"] = zero_input_nw[:,0,:]\nmodel.params[\"ext_inh_current\"] = zero_input_nw[:,1,:]\nmodel.params[\"ext_exc_rate\"] = zero_input_nw[:,2,:]\nmodel.params[\"ext_inh_rate\"] = zero_input_nw[:,3,:]\n\n# We load the optimal control class\n# The optional parameter print_array defines for which iterations intermediate results are printed\n# Parameters will be taken from the input model\nmodel_controlled = oc_aln.OcAln(model, target, print_array=np.arange(0,501,25), control_matrix=control_mat, cost_matrix=cost_mat)\n\n# We run 500 iterations of the optimal control gradient descent algorithm\nmodel_controlled.optimize(500)\n\nstate = model_controlled.get_xs()\ncontrol = model_controlled.control\n\nplot_oc_network(model.params.N, duration, dt, state, target, control, input_nw, model_controlled.cost_history, model_controlled.step_sizes_history)\n
\nCompute control for a deterministic system\nCost in iteration 0: 0.05681899553888795\nCost in iteration 25: 0.009049511507864006\nCost in iteration 50: 0.00385727901608276\nCost in iteration 75: 0.0018622667677526768\nCost in iteration 100: 0.000987085765866294\nCost in iteration 125: 0.000572356512723035\nCost in iteration 150: 0.0003547474327963845\nCost in iteration 175: 0.0002363751625995732\nCost in iteration 200: 0.0001619919185800181\nCost in iteration 225: 0.00011952382655835105\nCost in iteration 250: 9.020890267478555e-05\nCost in iteration 275: 7.169979753138072e-05\nCost in iteration 300: 5.8948947006216384e-05\nCost in iteration 325: 4.953649496402098e-05\nCost in iteration 350: 4.2578616654798227e-05\nCost in iteration 375: 3.721358584763165e-05\nCost in iteration 400: 3.294916084298363e-05\nCost in iteration 425: 2.9490826042506942e-05\nCost in iteration 450: 2.6637122691294857e-05\nCost in iteration 475: 2.418022517349344e-05\nCost in iteration 500: 2.213935579529806e-05\nFinal cost : 2.213935579529806e-05\n\n
# Do another 100 iterations if you want to.\n# Repeated execution will continue with a further 100 iterations.\nmodel_controlled.zero_step_encountered = False\nmodel_controlled.optimize(100)\n\nstate = model_controlled.get_xs()\ncontrol = model_controlled.control\nplot_oc_network(model.params.N, duration, dt, state, target, control, input_nw, model_controlled.cost_history, model_controlled.step_sizes_history)\n
\nCompute control for a deterministic system\nCost in iteration 0: 2.213935579529806e-05\nCost in iteration 25: 2.039986650084248e-05\nCost in iteration 50: 1.890816061870718e-05\nCost in iteration 75: 1.7543052398445186e-05\nCost in iteration 100: 1.6372947909519095e-05\nCost in iteration 125: 1.535146855935076e-05\nCost in iteration 150: 1.4407226990366437e-05\nCost in iteration 175: 1.3578403645605011e-05\nCost in iteration 200: 1.2839061879178726e-05\nCost in iteration 225: 1.215663786521688e-05\nCost in iteration 250: 1.1540904218753432e-05\nCost in iteration 275: 1.098339286406832e-05\nCost in iteration 300: 1.0476920392110899e-05\nCost in iteration 325: 1.001955972944213e-05\nCost in iteration 350: 9.57055264939235e-06\nCost in iteration 375: 9.17392006953542e-06\nCost in iteration 400: 8.809334792766664e-06\nCost in iteration 425: 8.475824235515095e-06\nCost in iteration 450: 8.147435560163446e-06\nCost in iteration 475: 7.852707565165967e-06\nCost in iteration 500: 7.579247993018956e-06\nFinal cost : 7.579247993018956e-06\n\n
cmat = np.array( [[0., 0.], [1., 0.]] ) # diagonal elements are zero, connection strength is 1 from node 0 to node 1\ndmat = np.array( [[0., 0.], [18., 0.]] ) # distance from node 0 to node 1; the delay is computed by dividing by the signal speed params.signalV\n\nmodel = ALNModel(Cmat=cmat, Dmat=dmat)\n\nmodel.params.mue_ext_mean = 2. # up state\nmodel.run()\nsetinitstate(model, getfinalstate(model))\n\nduration = 6.\nmodel.params.duration = duration\nmodel.run()\n\n# We define the control input matrix to enable or disable certain channels and nodes\ncontrol_mat = np.zeros( (model.params.N, len(model.input_vars)) )\ncontrol_mat[0,0] = 1. # only allow inputs in the E-channel (excitatory current) of node 0\n\nzero_input = ZeroInput().generate_input(duration=duration+dt, dt=dt)\ninput = zero_input.copy()\ninput[0,10] = 10.\ninput[0,20] = 10.\ninput[0,30] = 10. # Three pulses as control input\n\ninput_nw = np.concatenate( (np.vstack( [control_mat[0,0] * input, control_mat[0,1] * input, control_mat[0,2] * input, control_mat[0,3] * input] )[np.newaxis,:,:],\n np.vstack( [control_mat[1,0] * input, control_mat[1,1] * input, control_mat[1,2] * input, control_mat[1,3] * input] )[np.newaxis,:,:]), axis=0)\nzero_input_nw = np.concatenate( (np.vstack( [zero_input, zero_input, zero_input, zero_input] )[np.newaxis,:,:],\n np.vstack( [zero_input, zero_input, zero_input, zero_input] )[np.newaxis,:,:]), axis=0)\n\nmodel.params[\"ext_exc_current\"] = input_nw[:,0,:]\nmodel.params[\"ext_inh_current\"] = input_nw[:,1,:]\nmodel.params[\"ext_exc_rate\"] = input_nw[:,2,:]\nmodel.params[\"ext_inh_rate\"] = input_nw[:,3,:]\nmodel.run()\n\n# Define the result of the stimulation as target\ntarget = getstate(model)\n\n# Remove stimuli and re-run the simulation\nmodel.params[\"ext_exc_current\"] = zero_input_nw[:,0,:]\nmodel.params[\"ext_inh_current\"] = zero_input_nw[:,1,:]\nmodel.params[\"ext_exc_rate\"] = zero_input_nw[:,2,:]\nmodel.params[\"ext_inh_rate\"] = zero_input_nw[:,3,:]\nmodel.run()\n\n# Combine initial value and simulation result into one array\nstate = getstate(model)\nplot_oc_network(model.params.N, duration, dt, state, target, zero_input_nw, input_nw)\n
# We load the optimal control class\n# The optional parameter print_array defines for which iterations intermediate results are printed\n# Parameters will be taken from the input model\nmodel.params[\"ext_exc_current\"] = zero_input_nw[:,0,:]\nmodel.params[\"ext_inh_current\"] = zero_input_nw[:,1,:]\nmodel.params[\"ext_exc_rate\"] = zero_input_nw[:,2,:]\nmodel.params[\"ext_inh_rate\"] = zero_input_nw[:,3,:]\nmodel_controlled = oc_aln.OcAln(model, target, print_array=np.arange(0,501,25), control_matrix=control_mat, cost_matrix=cost_mat)\n\n# We run 500 iterations of the optimal control gradient descent algorithm\nmodel_controlled.optimize(500)\n\nstate = model_controlled.get_xs()\ncontrol = model_controlled.control\nplot_oc_network(model.params.N, duration, dt, state, target, control, input_nw, model_controlled.cost_history, model_controlled.step_sizes_history)\n
\nCompute control for a deterministic system\nCost in iteration 0: 0.3591626682338002\nCost in iteration 25: 0.0009615249415297563\nCost in iteration 50: 0.0007333032937119198\nCost in iteration 75: 0.0006259951827765307\nCost in iteration 100: 0.0005505407696329882\nCost in iteration 125: 0.0004885380123600698\nCost in iteration 150: 0.00043735661984762556\nCost in iteration 175: 0.00039467203255346946\nCost in iteration 200: 0.00035575090435742684\nCost in iteration 225: 0.00032290389213762856\nCost in iteration 250: 0.0002955564149879958\nCost in iteration 275: 0.0002706822302509814\nCost in iteration 300: 0.0002481078663686744\nCost in iteration 325: 0.0002287228008388444\nCost in iteration 350: 0.00021138912691190224\nCost in iteration 375: 0.00019614824660540533\nCost in iteration 400: 0.00018255547996069997\nCost in iteration 425: 0.00017091020493998155\nCost in iteration 450: 0.00016022332136043902\nCost in iteration 475: 0.0001503441843619978\nCost in iteration 500: 0.00014206923879279553\nFinal cost : 0.00014206923879279553\n\n
# Perform another 100 iterations to improve the result\n# Repeated execution will add a further 100 iterations\n# Convergence to the input stimulus is relatively slow for the ALN model\nmodel_controlled.optimize(100)\nstate = model_controlled.get_xs()\ncontrol = model_controlled.control\nplot_oc_network(model.params.N, duration, dt, state, target, control, input_nw, model_controlled.cost_history, model_controlled.step_sizes_history)\n
\nCompute control for a deterministic system\nCost in iteration 0: 0.00014206923879279553\nCost in iteration 25: 0.0001344899989412419\nCost in iteration 50: 0.00012771190226165116\nCost in iteration 75: 0.00012170773950612534\nCost in iteration 100: 0.0001161846252066297\nFinal cost : 0.0001161846252066297\n\n
\n
"},{"location":"examples/example-5.4-oc-aln-model-deterministic/#optimal-control-of-the-aln-model","title":"Optimal control of the ALN model","text":"
This notebook shows how to compute the optimal control (OC) signal for the ALN model for a simple example task.
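The optimizer minimizes a cost functional that combines a precision term (weighted by w_p, penalizing deviation from the target) and a control-strength term (weighted by w_2). A minimal numpy sketch of that idea (schematic only, not neurolib's exact discretization; the weights correspond to model_controlled.weights["w_p"] and ["w_2"]):

```python
import numpy as np

# Schematic OC cost: precision term plus control-energy term,
# both integrated over time with step size dt.
def oc_cost(state, target, control, dt, w_p=1.0, w_2=0.0):
    precision = 0.5 * w_p * np.sum((state - target) ** 2) * dt
    energy = 0.5 * w_2 * np.sum(control ** 2) * dt
    return precision + energy

# With w_2 = 0 only the deviation from the target is penalized
state = np.array([[0.0, 0.1, 0.2]])
target = np.zeros_like(state)
control = np.zeros_like(state)
cost = oc_cost(state, target, control, dt=0.1)  # 0.5 * (0.01 + 0.04) * 0.1
```

With w_2 = 0, as in this notebook, the cost vanishes exactly when the controlled trajectory matches the target.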
Let us now study a simple 2-node network of ALN nodes. We first define the coupling matrix and the distance matrix. We can then initialize the model.
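To make the indexing convention explicit: cmat[i, j] holds the connection strength from node j to node i, and the diagonal is zero (no self-coupling). A minimal check, with matrices mirroring those used in the notebook (the commented-out initialization assumes ALNModel is imported as above):

```python
import numpy as np

# cmat[i, j]: connection strength from node j to node i; zero diagonal
cmat = np.array([[0.0, 0.5],
                 [1.0, 0.0]])
dmat = np.zeros((2, 2))  # fiber length matrix; all zeros means no delay

# model = ALNModel(Cmat=cmat, Dmat=dmat)  # initialization as in the notebook
strength_0_to_1 = cmat[1, 0]  # 1.0
strength_1_to_0 = cmat[0, 1]  # 0.5
```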
"},{"location":"examples/example-5.4-oc-aln-model-deterministic/#delayed-network-of-neural-populations","title":"Delayed network of neural populations","text":"
We now consider a network topology with delayed signalling between the two nodes.
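As a rough sketch of how a distance entry in Dmat turns into a signal delay: the distance is divided by the signal speed params.signalV to obtain a delay, which is then discretized into integration steps. The numbers below (signalV value, unit convention) are illustrative assumptions; consult the model documentation for the exact conversion.

```python
import numpy as np

dmat = np.array([[0.0, 0.0],
                 [18.0, 0.0]])   # distance from node 0 to node 1
signalV = 20.0                   # assumed signal speed parameter
dt = 0.1                         # assumed integration step in ms

delay = dmat / signalV                           # delay per connection
delay_steps = np.around(delay / dt).astype(int)  # delay in integration steps
```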
Parameter box search for a given model and a range of parameters.
Source code in neurolib/optimize/exploration/exploration.py
class BoxSearch:\n\"\"\"\n Paremeter box search for a given model and a range of parameters.\n \"\"\"\n\n def __init__(\n self,\n model=None,\n parameterSpace=None,\n evalFunction=None,\n filename=None,\n saveAllModelOutputs=False,\n ncores=None,\n ):\n\"\"\"Either a model has to be passed, or an evalFunction. If an evalFunction\n is passed, then the evalFunction will be called and the model is accessible to the\n evalFunction via `self.getModelFromTraj(traj)`. The parameters of the current\n run are accessible via `self.getParametersFromTraj(traj)`.\n\n If no evaluation function is passed, then the model is simulated using `Model.run()`\n for every parameter.\n\n :param model: Model to run for each parameter (or model to pass to the evaluation function if an evaluation\n function is used), defaults to None\n :type model: `neurolib.models.model.Model`, optional\n :param parameterSpace: Parameter space to explore, defaults to None\n :type parameterSpace: `neurolib.utils.parameterSpace.ParameterSpace`, optional\n :param evalFunction: Evaluation function to call for each run., defaults to None\n :type evalFunction: function, optional\n :param filename: HDF5 storage file name, if left empty, defaults to ``exploration.hdf``\n :type filename: str\n :param saveAllModelOutputs: If True, save all outputs of model, else only default output of the model will be\n saved. 
Note: if saveAllModelOutputs==False and the model's parameter model.params['bold']==True, then BOLD\n output will be saved as well, defaults to False\n :type saveAllModelOutputs: bool\n\n :param ncores: Number of cores to simulate on (max cores default), defaults to None\n :type ncores: int, optional\n \"\"\"\n self.model = model\n if evalFunction is None and model is not None:\n self.evalFunction = self._runModel\n elif evalFunction is not None:\n self.evalFunction = evalFunction\n\n assert (evalFunction is not None) or (\n model is not None\n ), \"Either a model has to be specified or an evalFunction.\"\n\n assert parameterSpace is not None, \"No parameters to explore.\"\n\n if parameterSpace.kind == \"sequence\":\n assert model is not None, \"Model must be defined for sequential explore\"\n\n self.parameterSpace = parameterSpace\n self.exploreParameters = parameterSpace.dict()\n\n # TODO: use random ICs for every explored point or rather reuse the ones that are generated at model\n # initialization\n self.useRandomICs = False\n\n filename = filename or \"exploration.hdf\"\n self.filename = filename\n\n self.saveAllModelOutputs = saveAllModelOutputs\n\n # number of cores\n if ncores is None:\n ncores = multiprocessing.cpu_count()\n self.ncores = ncores\n logging.info(\"Number of processes: {}\".format(self.ncores))\n\n # bool to check whether pypet was initialized properly\n self.initialized = False\n self._initializeExploration(self.filename)\n\n self.results = None\n\n def _initializeExploration(self, filename=\"exploration.hdf\"):\n\"\"\"Initialize the pypet environment\n\n :param filename: hdf filename to store the results in , defaults to \"exploration.hdf\"\n :type filename: str, optional\n \"\"\"\n # create hdf file path if it does not exist yet\n pathlib.Path(paths.HDF_DIR).mkdir(parents=True, exist_ok=True)\n\n # set default hdf filename\n self.HDF_FILE = os.path.join(paths.HDF_DIR, filename)\n\n # initialize pypet environment\n trajectoryName = 
\"results\" + datetime.datetime.now().strftime(\"-%Y-%m-%d-%HH-%MM-%SS\")\n trajectoryfilename = self.HDF_FILE\n\n # set up the pypet environment\n env = pypet.Environment(\n trajectory=trajectoryName,\n filename=trajectoryfilename,\n multiproc=True,\n ncores=self.ncores,\n complevel=9,\n log_config=paths.PYPET_LOGGING_CONFIG,\n )\n self.env = env\n # Get the trajectory from the environment\n self.traj = env.trajectory\n self.trajectoryName = self.traj.v_name\n\n # Add all parameters to the pypet trajectory\n if self.model is not None:\n # if a model is specified, use the default parameter of the\n # model to initialize pypet\n self._addParametersToPypet(self.traj, self.model.params)\n else:\n # else, use a random parameter of the parameter space\n self._addParametersToPypet(self.traj, self.parameterSpace.getRandom(safe=True))\n\n # Tell pypet which parameters to explore\n self.pypetParametrization = self.parameterSpace.get_parametrization()\n # explicitely add all parameters within star notation, hence unwrap star notation into actual params names\n if self.parameterSpace.star:\n assert self.model is not None, \"With star notation, model cannot be None\"\n self.pypetParametrization = unwrap_star_dotdict(self.pypetParametrization, self.model)\n self.nRuns = len(self.pypetParametrization[list(self.pypetParametrization.keys())[0]])\n logging.info(f\"Number of parameter configurations: {self.nRuns}\")\n if self.parameterSpace.kind == \"sequence\":\n # if sequential explore, need to fill-in the default parameters instead of None\n self.pypetParametrization = self._fillin_default_parameters_for_sequential(\n self.pypetParametrization, self.model.params\n )\n self.traj.f_explore(self.pypetParametrization)\n\n # initialization done\n logging.info(\"BoxSearch: Environment initialized.\")\n self.initialized = True\n\n @staticmethod\n def _fillin_default_parameters_for_sequential(parametrization, model_params):\n fresh_dict = {}\n for k, params in parametrization.items():\n 
fresh_dict[k] = [v if v is not None else model_params[k] for v in params]\n return fresh_dict\n\n def _addParametersToPypet(self, traj, params):\n\"\"\"This function registers the parameters of the model to Pypet.\n Parameters can be nested dictionaries. They are unpacked and stored recursively.\n\n :param traj: Pypet trajectory to store the parameters in\n :type traj: `pypet.trajectory.Trajectory`\n :param params: Parameter dictionary\n :type params: dict, dict[dict,]\n \"\"\"\n\n def addParametersRecursively(traj, params, current_level):\n # make dummy list if just string\n if isinstance(current_level, str):\n current_level = [current_level]\n # iterate dict\n for key, value in params.items():\n # if another dict - recurse and increase level\n if isinstance(value, dict):\n addParametersRecursively(traj, value, current_level + [key])\n else:\n param_address = \".\".join(current_level + [key])\n value = \"None\" if value is None else value\n traj.f_add_parameter(param_address, value)\n\n addParametersRecursively(traj, params, [])\n\n def saveToPypet(self, outputs, traj):\n\"\"\"This function takes simulation results in the form of a nested dictionary\n and stores all data into the pypet hdf file.\n\n :param outputs: Simulation outputs as a dictionary.\n :type outputs: dict\n :param traj: Pypet trajectory\n :type traj: `pypet.trajectory.Trajectory`\n \"\"\"\n\n def makeSaveStringForPypet(value, savestr):\n\"\"\"Builds the pypet-style results string from the results\n dictionary's keys.\n \"\"\"\n for k, v in value.items():\n if isinstance(v, dict):\n _savestr = savestr + k + \".\"\n makeSaveStringForPypet(v, _savestr)\n else:\n _savestr = savestr + k\n self.traj.f_add_result(_savestr, v)\n\n assert isinstance(outputs, dict), \"Outputs must be an instance of dict.\"\n value = outputs\n savestr = \"results.$.\"\n makeSaveStringForPypet(value, savestr)\n\n def _runModel(self, traj):\n\"\"\"If not evaluation function is given, we assume that a model will be simulated.\n 
This function will be called by pypet directly and therefore wants a pypet trajectory as an argument\n\n :param traj: Pypet trajectory\n :type traj: `pypet.trajectory.Trajectory`\n \"\"\"\n if self.useRandomICs:\n logging.warn(\"Random initial conditions not implemented yet\")\n # get parameters of this run from pypet trajectory\n runParams = self.getParametersFromTraj(traj)\n if self.parameterSpace.star:\n runParams = flatten_nested_dict(flat_dict_to_nested(runParams)[\"parameters\"])\n\n # set the parameters for the model\n self.model.params.update(runParams)\n\n # get kwargs from Exploration.run()\n runKwargs = {}\n if hasattr(self, \"runKwargs\"):\n runKwargs = self.runKwargs\n # run it\n self.model.run(**runKwargs)\n # save outputs\n self._saveModelOutputsToPypet(traj)\n\n def _saveModelOutputsToPypet(self, traj):\n # save all data to the pypet trajectory\n if self.saveAllModelOutputs:\n # save all results from exploration\n self.saveToPypet(self.model.outputs, traj)\n else:\n # save only the default output\n self.saveToPypet(\n {\n self.model.default_output: self.model.output,\n \"t\": self.model.outputs[\"t\"],\n },\n traj,\n )\n # save BOLD output\n # if \"bold\" in self.model.params:\n # if self.model.params[\"bold\"] and \"BOLD\" in self.model.outputs:\n # self.saveToPypet(self.model.outputs[\"BOLD\"], traj)\n if \"BOLD\" in self.model.outputs:\n self.saveToPypet(self.model.outputs[\"BOLD\"], traj)\n\n def _validatePypetParameters(self, runParams):\n\"\"\"Helper to handle None's in pypet parameters\n (used for random number generator seed)\n\n :param runParams: parameters as returned by traj.parameters.f_to_dict()\n :type runParams: dict of pypet.parameter.Parameter\n \"\"\"\n\n # fix rng seed, which is saved as a string if None\n if \"seed\" in runParams:\n if runParams[\"seed\"] == \"None\":\n runParams[\"seed\"] = None\n return runParams\n\n def getParametersFromTraj(self, traj):\n\"\"\"Returns the parameters of the current run as a (dot.able) 
dictionary\n\n :param traj: Pypet trajectory\n :type traj: `pypet.trajectory.Trajectory`\n :return: Parameter set of the current run\n :rtype: dict\n \"\"\"\n # DO NOT use short names for star notation dicts\n runParams = self.traj.parameters.f_to_dict(short_names=not self.parameterSpace.star, fast_access=True)\n runParams = self._validatePypetParameters(runParams)\n return dotdict(runParams)\n\n def getModelFromTraj(self, traj):\n\"\"\"Return the appropriate model with parameters for this run\n :params traj: Pypet trajectory of current run\n\n :returns model: Model with the parameters of this run.\n \"\"\"\n model = self.model\n runParams = self.getParametersFromTraj(traj)\n # removes keys with None values\n # runParams = {k: v for k, v in runParams.items() if v is not None}\n if self.parameterSpace.star:\n runParams = flatten_nested_dict(flat_dict_to_nested(runParams)[\"parameters\"])\n\n model.params.update(runParams)\n return model\n\n def run(self, **kwargs):\n\"\"\"\n Call this function to run the exploration\n \"\"\"\n self.runKwargs = kwargs\n assert self.initialized, \"Pypet environment not initialized yet.\"\n self._t_start_exploration = datetime.datetime.now()\n self.env.run(self.evalFunction)\n self._t_end_exploration = datetime.datetime.now()\n\n def loadResults(self, all=True, filename=None, trajectoryName=None, pypetShortNames=True, memory_cap=95.0):\n\"\"\"Load results from a hdf file of a previous simulation.\n\n :param all: Load all simulated results into memory, which will be available as the `.results` attribute. Can\n use a lot of RAM if your simulation is large, please use this with caution. 
, defaults to True\n :type all: bool, optional\n :param filename: hdf file name in which results are stored, defaults to None\n :type filename: str, optional\n :param trajectoryName: Name of the trajectory inside the hdf file, newest will be used if left empty, defaults\n to None\n :type trajectoryName: str, optional\n :param pypetShortNames: Use pypet short names as keys for the results dictionary. Use if you are experiencing\n errors due to natural naming collisions.\n :type pypetShortNames: bool\n :param memory_cap: Percentage memory cap between 0 and 100. If `all=True` is used, a memory cap can be set to\n avoid filling up the available RAM. Example: use `memory_cap = 95` to avoid loading more data if memory is\n at 95% use, defaults to 95\n :type memory_cap: float, int, optional\n \"\"\"\n\n self.loadDfResults(filename, trajectoryName)\n\n # make a list of dictionaries with results\n self.results = dotdict({})\n if all:\n logging.info(\"Loading all results to `results` dictionary ...\")\n for rInd in tqdm.tqdm(range(self.nResults), total=self.nResults):\n\n # check if enough memory is available\n if memory_cap:\n assert isinstance(memory_cap, (int, float)), \"`memory_cap` must be float.\"\n assert (memory_cap > 0) and (memory_cap < 100), \"`memory_cap` must be between 0 and 100\"\n # check ram usage with psutil\n used_memory_percent = psutil.virtual_memory()[2]\n if used_memory_percent > memory_cap:\n raise MemoryError(\n f\"Memory use is at {used_memory_percent}% and capped at {memory_cap}. 
Aborting.\"\n )\n\n self.pypetTrajectory.results[rInd].f_load()\n result = self.pypetTrajectory.results[rInd].f_to_dict(fast_access=True, short_names=pypetShortNames)\n result = dotdict(result)\n self.pypetTrajectory.results[rInd].f_remove()\n self.results[rInd] = copy.deepcopy(result)\n\n # Postprocess result keys if pypet short names aren't used\n # Before: results.run_00000001.outputs.rates_inh\n # After: outputs.rates_inh\n if not pypetShortNames:\n for i, r in self.results.items():\n new_dict = dotdict({})\n for key, value in r.items():\n new_key = \"\".join(key.split(\".\", 2)[2:])\n new_dict[new_key] = r[key]\n self.results[i] = copy.deepcopy(new_dict)\n\n self.aggregateResultsToDfResults()\n\n logging.info(\"All results loaded.\")\n\n def aggregateResultsToDfResults(self, arrays=True, fillna=False):\n\"\"\"Aggregate all results in to dfResults dataframe.\n\n :param arrays: Load array results (like timeseries) if True. If False, only load scalar results, defaults to\n True\n :type arrays: bool, optional\n :param fillna: Fill nan results (for example if they're not returned in a subset of runs) with zeros, default\n to False\n :type fillna: bool, optional\n \"\"\"\n nan_value = np.nan\n # defines which variable types will be saved in the results dataframe\n SUPPORTED_TYPES = (float, int, np.ndarray, list)\n SCALAR_TYPES = (float, int)\n ARRAY_TYPES = (np.ndarray, list)\n\n logging.info(\"Aggregating results to `dfResults` ...\")\n for runId, parameters in tqdm.tqdm(self.dfResults.iterrows(), total=len(self.dfResults)):\n # if the results were previously loaded into memory, use them\n if hasattr(self, \"results\"):\n # only if the length matches the number of results\n if len(self.results) == len(self.dfResults):\n result = self.results[runId]\n # else, load results individually from hdf file\n else:\n result = self.getRun(runId)\n # else, load results individually from hdf file\n else:\n result = self.getRun(runId)\n\n for key, value in result.items():\n # 
only save floats, ints and arrays\n if isinstance(value, SUPPORTED_TYPES):\n # save 1-dim arrays\n if isinstance(value, ARRAY_TYPES) and arrays:\n # to save a numpy array, convert column to object type\n if key not in self.dfResults:\n self.dfResults[key] = None\n self.dfResults[key] = self.dfResults[key].astype(object)\n self.dfResults.at[runId, key] = value\n elif isinstance(value, SCALAR_TYPES):\n # save scalars\n self.dfResults.loc[runId, key] = value\n else:\n self.dfResults.loc[runId, key] = nan_value\n # drop nan columns\n self.dfResults = self.dfResults.dropna(axis=\"columns\", how=\"all\")\n\n if fillna:\n self.dfResults = self.dfResults.fillna(0)\n\n def loadDfResults(self, filename=None, trajectoryName=None):\n\"\"\"Load results from a previous simulation.\n\n :param filename: hdf file name in which results are stored, defaults to None\n :type filename: str, optional\n :param trajectoryName: Name of the trajectory inside the hdf file, newest will be used if left empty, defaults\n to None\n :type trajectoryName: str, optional\n \"\"\"\n # chose HDF file to load\n filename = filename or self.HDF_FILE\n self.pypetTrajectory = pu.loadPypetTrajectory(filename, trajectoryName)\n self.nResults = len(self.pypetTrajectory.f_get_run_names())\n\n exploredParameters = self.pypetTrajectory.f_get_explored_parameters()\n\n # create pandas dataframe of all runs with parameters as keys\n logging.info(\"Creating `dfResults` dataframe ...\")\n niceParKeys = [p[11:] for p in exploredParameters.keys()]\n if not self.parameterSpace:\n niceParKeys = [p.split(\".\")[-1] for p in niceParKeys]\n self.dfResults = pd.DataFrame(columns=niceParKeys, dtype=object)\n for nicep, p in zip(niceParKeys, exploredParameters.keys()):\n self.dfResults[nicep] = exploredParameters[p].f_get_range()\n\n @staticmethod\n def _filterDictionaryBold(filt_dict, bold):\n\"\"\"Filters result dictionary: either keeps ONLY BOLD results, or remove\n BOLD results.\n\n :param filt_dict: dictionary to filter 
for BOLD keys\n :type filt_dict: dict\n :param bold: whether to remove BOLD keys (bold=False) or keep only BOLD\n keys (bold=True)\n :return: filtered dict, without or only BOLD keys\n :rtype: dict\n \"\"\"\n filt_dict = copy.deepcopy(filt_dict)\n if bold:\n return {k: v for k, v in filt_dict.items() if \"BOLD\" in k}\n else:\n return {k: v for k, v in filt_dict.items() if \"BOLD\" not in k}\n\n def _getCoordsFromRun(self, run_dict, bold=False):\n\"\"\"Find coordinates of a single run - time, output and space dimensions.\n\n :param run_dict: dictionary with run results\n :type run_dict: dict\n :param bold: whether to do only BOLD or without BOLD results\n :type bold: bool\n :return: dictionary of coordinates for xarray\n :rtype: dict\n \"\"\"\n run_dict = copy.deepcopy(run_dict)\n run_dict = self._filterDictionaryBold(run_dict, bold=bold)\n timeDictKey = \"\"\n if \"t\" in run_dict:\n timeDictKey = \"t\"\n else:\n for k in run_dict:\n if k.startswith(\"t\"):\n timeDictKey = k\n logging.info(f\"Assuming {k} to be the time axis.\")\n break\n assert len(timeDictKey) > 0, \"No time array found (starting with t) in model output.\"\n t = run_dict[timeDictKey].copy()\n del run_dict[timeDictKey]\n return timeDictKey, {\n \"output\": list(run_dict.keys()),\n \"space\": list(range(next(iter(run_dict.values())).shape[0])),\n \"time\": t,\n }\n\n def xr(self, bold=False):\n\"\"\"\n Return `xr.Dataset` from the exploration results.\n\n :param bold: if True, will load and return only BOLD output\n :type bold: bool\n \"\"\"\n\n def _sanitize_nc_key(k):\n return k.replace(\"*\", \"_\").replace(\".\", \"_\").replace(\"|\", \"_\")\n\n assert self.results is not None, \"Run `loadResults()` first to populate the results\"\n assert len(self.results) == len(self.dfResults)\n # create intrisinsic dims for one run\n timeDictKey, run_coords = self._getCoordsFromRun(self.results[0], bold=bold)\n dataarrays = []\n orig_search_coords = self.parameterSpace.get_parametrization()\n for runId, 
run_result in self.results.items():\n # take exploration coordinates for this run\n expl_coords = {k: v[runId] for k, v in orig_search_coords.items()}\n outputs = []\n run_result = self._filterDictionaryBold(run_result, bold=bold)\n for key, value in run_result.items():\n if key == timeDictKey:\n continue\n outputs.append(value)\n # create DataArray for run only - we need to add exploration coordinates\n data_temp = xr.DataArray(\n np.stack(outputs), dims=[\"output\", \"space\", \"time\"], coords=run_coords, name=\"exploration\"\n )\n expand_coords = {}\n # iterate exploration coordinates\n for k, v in expl_coords.items():\n # sanitize keys in the case of stars etc\n k = _sanitize_nc_key(k)\n # if single values, just assign\n if isinstance(v, (str, float, int)):\n expand_coords[k] = [v]\n # if arrays, check whether they can be squeezed into one value\n elif isinstance(v, np.ndarray):\n if np.unique(v).size == 1:\n # if yes, just assign that one value\n expand_coords[k] = [float(np.unique(v))]\n else:\n # if no, sorry - coordinates cannot be array\n raise ValueError(\"Cannot squeeze coordinates\")\n # assing exploration coordinates to the DataArray\n dataarrays.append(data_temp.expand_dims(expand_coords))\n\n # finally, combine all arrays into one\n if self.parameterSpace.kind == \"sequence\":\n # when run in sequence, cannot combine to grid, so just concatenate along new dimension\n combined = xr.concat(dataarrays, dim=\"run_no\", coords=\"all\")\n else:\n # sometimes combining xr.DataArrays does not work, see https://github.com/pydata/xarray/issues/3248#issuecomment-531511177\n # resolved by casting them explicitely to xr.Dataset\n combined = xr.combine_by_coords([da.to_dataset() for da in dataarrays])[\"exploration\"]\n if self.parameterSpace.star:\n # if we explored over star params, unwrap them into attributes\n combined.attrs = {\n _sanitize_nc_key(k): list(self.model.params[k].keys()) for k in orig_search_coords.keys() if \"*\" in k\n }\n return combined\n\n 
def getRun(self, runId, filename=None, trajectoryName=None, pypetShortNames=True):\n\"\"\"Load the simulated data of a run and its parameters from a pypetTrajectory.\n\n :param runId: ID of the run\n :type runId: int\n\n :return: Dictionary with simulated data and parameters of the run.\n :type return: dict\n \"\"\"\n # chose HDF file to load\n filename = self.HDF_FILE or filename\n\n # either use loaded pypetTrajectory or load from HDF file if it isn't available\n pypetTrajectory = (\n self.pypetTrajectory\n if hasattr(self, \"pypetTrajectory\")\n else pu.loadPypetTrajectory(filename, trajectoryName)\n )\n\n # # if there was no pypetTrajectory loaded before\n # if pypetTrajectory is None:\n # # chose HDF file to load\n # filename = self.HDF_FILE or filename\n # pypetTrajectory = pu.loadPypetTrajectory(filename, trajectoryName)\n\n return pu.getRun(runId, pypetTrajectory, pypetShortNames=pypetShortNames)\n\n def getResult(self, runId):\n\"\"\"Returns either a loaded result or reads from disk.\n\n :param runId: runId of result\n :type runId: int\n :return: result\n :rtype: dict\n \"\"\"\n # if hasattr(self, \"results\"):\n # # load result from either the preloaded .result attribute (from .loadResults)\n # result = self.results[runId]\n # else:\n # # or from disk if results haven't been loaded yet\n # result = self.getRun(runId)\n\n # load result from either the preloaded .result attribute (from .loadResults)\n # or from disk if results haven't been loaded yet\n # result = self.results[runId] if hasattr(self, \"results\") else self.getRun(runId)\n return self.results[runId] if hasattr(self, \"results\") else self.getRun(runId)\n\n def info(self):\n\"\"\"Print info about the current search.\"\"\"\n now = datetime.datetime.now().strftime(\"%Y-%m-%d-%HH-%MM-%SS\")\n print(f\"Exploration info ({now})\")\n print(f\"HDF name: {self.HDF_FILE}\")\n print(f\"Trajectory name: {self.trajectoryName}\")\n if self.model is not None:\n print(f\"Model: {self.model.name}\")\n if 
hasattr(self, \"nRuns\"):\n print(f\"Number of runs {self.nRuns}\")\n print(f\"Explored parameters: {self.exploreParameters.keys()}\")\n if hasattr(self, \"_t_end_exploration\") and hasattr(self, \"_t_start_exploration\"):\n print(f\"Duration of exploration: {self._t_end_exploration-self._t_start_exploration}\")\n
Either a model or an evalFunction has to be passed. If an evalFunction is passed, it will be called for every run; inside it, the model is accessible via `self.getModelFromTraj(traj)` and the parameters of the current run via `self.getParametersFromTraj(traj)`.
If no evaluation function is passed, the model is simulated using `Model.run()` for every parameter combination.
Parameters:

- `model` (`neurolib.models.model.Model`, optional): Model to run for each parameter (or model to pass to the evaluation function if an evaluation function is used), defaults to None
- `parameterSpace` (`neurolib.utils.parameterSpace.ParameterSpace`, optional): Parameter space to explore, defaults to None
- `evalFunction` (function, optional): Evaluation function to call for each run, defaults to None
- `filename` (str): HDF5 storage file name; if left empty, defaults to `exploration.hdf`
- `saveAllModelOutputs` (bool): If True, save all outputs of the model, else only the default output of the model will be saved. Note: if `saveAllModelOutputs==False` and the model's parameter `model.params['bold']==True`, then BOLD output will be saved as well. Defaults to False
- `ncores` (int, optional): Number of cores to simulate on (max cores default), defaults to None

Source code in neurolib/optimize/exploration/exploration.py
```python
def __init__(
    self,
    model=None,
    parameterSpace=None,
    evalFunction=None,
    filename=None,
    saveAllModelOutputs=False,
    ncores=None,
):
    """Either a model has to be passed, or an evalFunction. If an evalFunction
    is passed, then the evalFunction will be called and the model is accessible to the
    evalFunction via `self.getModelFromTraj(traj)`. The parameters of the current
    run are accessible via `self.getParametersFromTraj(traj)`.

    If no evaluation function is passed, then the model is simulated using `Model.run()`
    for every parameter.

    :param model: Model to run for each parameter (or model to pass to the evaluation function if an evaluation
        function is used), defaults to None
    :type model: `neurolib.models.model.Model`, optional
    :param parameterSpace: Parameter space to explore, defaults to None
    :type parameterSpace: `neurolib.utils.parameterSpace.ParameterSpace`, optional
    :param evalFunction: Evaluation function to call for each run, defaults to None
    :type evalFunction: function, optional
    :param filename: HDF5 storage file name, if left empty, defaults to ``exploration.hdf``
    :type filename: str
    :param saveAllModelOutputs: If True, save all outputs of model, else only default output of the model will be
        saved. Note: if saveAllModelOutputs==False and the model's parameter model.params['bold']==True, then BOLD
        output will be saved as well, defaults to False
    :type saveAllModelOutputs: bool
    :param ncores: Number of cores to simulate on (max cores default), defaults to None
    :type ncores: int, optional
    """
    self.model = model
    if evalFunction is None and model is not None:
        self.evalFunction = self._runModel
    elif evalFunction is not None:
        self.evalFunction = evalFunction

    assert (evalFunction is not None) or (
        model is not None
    ), "Either a model has to be specified or an evalFunction."

    assert parameterSpace is not None, "No parameters to explore."

    if parameterSpace.kind == "sequence":
        assert model is not None, "Model must be defined for sequential explore"

    self.parameterSpace = parameterSpace
    self.exploreParameters = parameterSpace.dict()

    # TODO: use random ICs for every explored point or rather reuse the ones that are generated at model
    # initialization
    self.useRandomICs = False

    filename = filename or "exploration.hdf"
    self.filename = filename

    self.saveAllModelOutputs = saveAllModelOutputs

    # number of cores
    if ncores is None:
        ncores = multiprocessing.cpu_count()
    self.ncores = ncores
    logging.info("Number of processes: {}".format(self.ncores))

    # bool to check whether pypet was initialized properly
    self.initialized = False
    self._initializeExploration(self.filename)

    self.results = None
```
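The model-vs-evalFunction rule at the top of the constructor can be sketched in isolation. The following is a simplified, hypothetical stand-in (`resolve_eval_function` is not part of neurolib); it only mirrors the dispatch and the assertion shown above:

```python
def resolve_eval_function(model=None, evalFunction=None):
    # a custom evalFunction takes precedence; otherwise the model
    # is simulated directly for every parameter combination
    assert (evalFunction is not None) or (
        model is not None
    ), "Either a model has to be specified or an evalFunction."
    if evalFunction is not None:
        return evalFunction
    return lambda traj: model  # stand-in for BoxSearch._runModel


# passing only a model is enough ...
f = resolve_eval_function(model=object())
assert callable(f)

# ... and a custom evaluation function wins if both are given
g = resolve_eval_function(model=object(), evalFunction=lambda traj: "custom")
assert g(None) == "custom"
```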
Aggregate all results into the `dfResults` dataframe.

Parameters:

- `arrays` (bool, optional): Load array results (like timeseries) if True. If False, only load scalar results. Defaults to True
- `fillna` (bool, optional): Fill nan results (for example if they're not returned in a subset of runs) with zeros. Defaults to False

Source code in neurolib/optimize/exploration/exploration.py
```python
def aggregateResultsToDfResults(self, arrays=True, fillna=False):
    """Aggregate all results into the dfResults dataframe.

    :param arrays: Load array results (like timeseries) if True. If False, only load scalar results, defaults to
        True
    :type arrays: bool, optional
    :param fillna: Fill nan results (for example if they're not returned in a subset of runs) with zeros, defaults
        to False
    :type fillna: bool, optional
    """
    nan_value = np.nan
    # defines which variable types will be saved in the results dataframe
    SUPPORTED_TYPES = (float, int, np.ndarray, list)
    SCALAR_TYPES = (float, int)
    ARRAY_TYPES = (np.ndarray, list)

    logging.info("Aggregating results to `dfResults` ...")
    for runId, parameters in tqdm.tqdm(self.dfResults.iterrows(), total=len(self.dfResults)):
        # if the results were previously loaded into memory, use them
        if hasattr(self, "results"):
            # only if the length matches the number of results
            if len(self.results) == len(self.dfResults):
                result = self.results[runId]
            # else, load results individually from hdf file
            else:
                result = self.getRun(runId)
        # else, load results individually from hdf file
        else:
            result = self.getRun(runId)

        for key, value in result.items():
            # only save floats, ints and arrays
            if isinstance(value, SUPPORTED_TYPES):
                # save 1-dim arrays
                if isinstance(value, ARRAY_TYPES) and arrays:
                    # to save a numpy array, convert column to object type
                    if key not in self.dfResults:
                        self.dfResults[key] = None
                        self.dfResults[key] = self.dfResults[key].astype(object)
                    self.dfResults.at[runId, key] = value
                elif isinstance(value, SCALAR_TYPES):
                    # save scalars
                    self.dfResults.loc[runId, key] = value
            else:
                self.dfResults.loc[runId, key] = nan_value
    # drop nan columns
    self.dfResults = self.dfResults.dropna(axis="columns", how="all")

    if fillna:
        self.dfResults = self.dfResults.fillna(0)
```
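The object-dtype trick used above for storing per-run timeseries in single dataframe cells can be reproduced in isolation (illustrative example; the column and parameter names are made up):

```python
import numpy as np
import pandas as pd

# one row per run, as in dfResults
df = pd.DataFrame({"mue_ext_mean": [0.0, 1.0]})

result_arrays = {0: np.zeros(4), 1: np.ones(4)}
for run_id, arr in result_arrays.items():
    if "rates_exc" not in df:
        # a column must be object-typed before a cell can hold an ndarray
        df["rates_exc"] = None
        df["rates_exc"] = df["rates_exc"].astype(object)
    df.at[run_id, "rates_exc"] = arr  # .at assigns the array to a single cell

assert df.at[1, "rates_exc"].sum() == 4.0
```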
Return the appropriate model with parameters for this run
Parameters:

- `traj`: Pypet trajectory of the current run (required)

Returns:

- Model with the parameters of this run.
Source code in neurolib/optimize/exploration/exploration.py
```python
def getModelFromTraj(self, traj):
    """Return the appropriate model with parameters for this run
    :params traj: Pypet trajectory of current run

    :returns model: Model with the parameters of this run.
    """
    model = self.model
    runParams = self.getParametersFromTraj(traj)
    # removes keys with None values
    # runParams = {k: v for k, v in runParams.items() if v is not None}
    if self.parameterSpace.star:
        runParams = flatten_nested_dict(flat_dict_to_nested(runParams)["parameters"])

    model.params.update(runParams)
    return model
```
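For star parameters, the nested run-parameter dict is flattened back into dotted keys. A minimal sketch of such a flattening helper (`flatten_dict` is a hypothetical stand-in, not neurolib's `flatten_nested_dict`):

```python
def flatten_dict(d, sep="."):
    # recursively join nested keys with a separator
    flat = {}
    for k, v in d.items():
        if isinstance(v, dict):
            for kk, vv in flatten_dict(v, sep).items():
                flat[k + sep + kk] = vv
        else:
            flat[k] = v
    return flat


nested = {"a": 1, "b": {"c": 2, "d": {"e": 3}}}
assert flatten_dict(nested) == {"a": 1, "b.c": 2, "b.d.e": 3}
```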
Returns the parameters of the current run as a (dot.able) dictionary
Parameters:

- `traj` (`pypet.trajectory.Trajectory`): Pypet trajectory (required)

Returns:

- dict: Parameter set of the current run
Source code in neurolib/optimize/exploration/exploration.py
```python
def getParametersFromTraj(self, traj):
    """Returns the parameters of the current run as a (dot.able) dictionary

    :param traj: Pypet trajectory
    :type traj: `pypet.trajectory.Trajectory`
    :return: Parameter set of the current run
    :rtype: dict
    """
    # DO NOT use short names for star notation dicts
    runParams = self.traj.parameters.f_to_dict(short_names=not self.parameterSpace.star, fast_access=True)
    runParams = self._validatePypetParameters(runParams)
    return dotdict(runParams)
```
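The returned "dot.able" dictionary behaves like a plain dict whose keys are also reachable as attributes. A minimal sketch of the idea (simplified; not neurolib's actual implementation):

```python
class dotdict(dict):
    """A dict whose keys are also reachable as attributes (simplified sketch)."""

    __getattr__ = dict.get          # missing keys yield None instead of raising
    __setattr__ = dict.__setitem__
    __delattr__ = dict.__delitem__


params = dotdict({"sigma_ou": 0.1})
assert params.sigma_ou == 0.1       # attribute access
assert params["sigma_ou"] == 0.1    # normal dict access still works
params.duration = 2000
assert params["duration"] == 2000
```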
Returns either a loaded result or reads from disk.
Parameters:

- `runId` (int): runId of result (required)

Returns:

- dict: result
Source code in neurolib/optimize/exploration/exploration.py
```python
def getResult(self, runId):
    """Returns either a loaded result or reads from disk.

    :param runId: runId of result
    :type runId: int
    :return: result
    :rtype: dict
    """
    # load result from either the preloaded .results attribute (from .loadResults)
    # or from disk if results haven't been loaded yet
    return self.results[runId] if hasattr(self, "results") else self.getRun(runId)
```
Load the simulated data of a run and its parameters from a pypetTrajectory.
Parameters:

- `runId` (int): ID of the run (required)

Returns:

- dict: Dictionary with simulated data and parameters of the run.
Source code in neurolib/optimize/exploration/exploration.py
```python
def getRun(self, runId, filename=None, trajectoryName=None, pypetShortNames=True):
    """Load the simulated data of a run and its parameters from a pypetTrajectory.

    :param runId: ID of the run
    :type runId: int

    :return: Dictionary with simulated data and parameters of the run.
    :rtype: dict
    """
    # choose HDF file to load
    filename = self.HDF_FILE or filename

    # either use the loaded pypetTrajectory or load it from the HDF file if it isn't available
    pypetTrajectory = (
        self.pypetTrajectory
        if hasattr(self, "pypetTrajectory")
        else pu.loadPypetTrajectory(filename, trajectoryName)
    )

    return pu.getRun(runId, pypetTrajectory, pypetShortNames=pypetShortNames)
```
Source code in neurolib/optimize/exploration/exploration.py
```python
def info(self):
    """Print info about the current search."""
    now = datetime.datetime.now().strftime("%Y-%m-%d-%HH-%MM-%SS")
    print(f"Exploration info ({now})")
    print(f"HDF name: {self.HDF_FILE}")
    print(f"Trajectory name: {self.trajectoryName}")
    if self.model is not None:
        print(f"Model: {self.model.name}")
    if hasattr(self, "nRuns"):
        print(f"Number of runs {self.nRuns}")
    print(f"Explored parameters: {self.exploreParameters.keys()}")
    if hasattr(self, "_t_end_exploration") and hasattr(self, "_t_start_exploration"):
        print(f"Duration of exploration: {self._t_end_exploration-self._t_start_exploration}")
```
Load results from a previous simulation.

Parameters:

- `filename` (str, optional): hdf file name in which results are stored, defaults to None
- `trajectoryName` (str, optional): Name of the trajectory inside the hdf file; the newest will be used if left empty, defaults to None

Source code in neurolib/optimize/exploration/exploration.py
```python
def loadDfResults(self, filename=None, trajectoryName=None):
    """Load results from a previous simulation.

    :param filename: hdf file name in which results are stored, defaults to None
    :type filename: str, optional
    :param trajectoryName: Name of the trajectory inside the hdf file, newest will be used if left empty, defaults
        to None
    :type trajectoryName: str, optional
    """
    # choose HDF file to load
    filename = filename or self.HDF_FILE
    self.pypetTrajectory = pu.loadPypetTrajectory(filename, trajectoryName)
    self.nResults = len(self.pypetTrajectory.f_get_run_names())

    exploredParameters = self.pypetTrajectory.f_get_explored_parameters()

    # create pandas dataframe of all runs with parameters as keys
    logging.info("Creating `dfResults` dataframe ...")
    niceParKeys = [p[11:] for p in exploredParameters.keys()]
    if not self.parameterSpace:
        niceParKeys = [p.split(".")[-1] for p in niceParKeys]
    self.dfResults = pd.DataFrame(columns=niceParKeys, dtype=object)
    for nicep, p in zip(niceParKeys, exploredParameters.keys()):
        self.dfResults[nicep] = exploredParameters[p].f_get_range()
```
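How the per-run parameter table comes about can be illustrated without pypet: for a grid search, each explored parameter's `f_get_range()` corresponds to the cartesian product of the parameter values, one entry per run (illustrative only; the parameter names are made up):

```python
import itertools

import pandas as pd

space = {"mue_ext_mean": [0.0, 0.5], "sigma_ou": [0.1, 0.2]}

# one tuple per run, mirroring the per-parameter ranges pypet stores
runs = list(itertools.product(*space.values()))
df = pd.DataFrame(runs, columns=list(space.keys()))

assert len(df) == 4  # 2 x 2 grid
assert list(df.columns) == ["mue_ext_mean", "sigma_ou"]
```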
Load results from a hdf file of a previous simulation.
Parameters:

- `all` (bool, optional): Load all simulated results into memory, which will be available as the `.results` attribute. Can use a lot of RAM if your simulation is large, so use this with caution. Defaults to True
- `filename` (str, optional): hdf file name in which results are stored, defaults to None
- `trajectoryName` (str, optional): Name of the trajectory inside the hdf file; the newest will be used if left empty, defaults to None
- `pypetShortNames` (bool): Use pypet short names as keys for the results dictionary. Use if you are experiencing errors due to natural naming collisions. Defaults to True
- `memory_cap` (float, int, optional): Percentage memory cap between 0 and 100. If `all=True` is used, a memory cap can be set to avoid filling up the available RAM. Example: use `memory_cap = 95` to avoid loading more data if memory is at 95% use. Defaults to 95.0

Source code in neurolib/optimize/exploration/exploration.py
```python
def loadResults(self, all=True, filename=None, trajectoryName=None, pypetShortNames=True, memory_cap=95.0):
    """Load results from a hdf file of a previous simulation.

    :param all: Load all simulated results into memory, which will be available as the `.results` attribute. Can
        use a lot of RAM if your simulation is large, please use this with caution, defaults to True
    :type all: bool, optional
    :param filename: hdf file name in which results are stored, defaults to None
    :type filename: str, optional
    :param trajectoryName: Name of the trajectory inside the hdf file, newest will be used if left empty, defaults
        to None
    :type trajectoryName: str, optional
    :param pypetShortNames: Use pypet short names as keys for the results dictionary. Use if you are experiencing
        errors due to natural naming collisions.
    :type pypetShortNames: bool
    :param memory_cap: Percentage memory cap between 0 and 100. If `all=True` is used, a memory cap can be set to
        avoid filling up the available RAM. Example: use `memory_cap = 95` to avoid loading more data if memory is
        at 95% use, defaults to 95
    :type memory_cap: float, int, optional
    """

    self.loadDfResults(filename, trajectoryName)

    # make a list of dictionaries with results
    self.results = dotdict({})
    if all:
        logging.info("Loading all results to `results` dictionary ...")
        for rInd in tqdm.tqdm(range(self.nResults), total=self.nResults):

            # check if enough memory is available
            if memory_cap:
                assert isinstance(memory_cap, (int, float)), "`memory_cap` must be float."
                assert (memory_cap > 0) and (memory_cap < 100), "`memory_cap` must be between 0 and 100"
                # check ram usage with psutil
                used_memory_percent = psutil.virtual_memory()[2]
                if used_memory_percent > memory_cap:
                    raise MemoryError(
                        f"Memory use is at {used_memory_percent}% and capped at {memory_cap}. Aborting."
                    )

            self.pypetTrajectory.results[rInd].f_load()
            result = self.pypetTrajectory.results[rInd].f_to_dict(fast_access=True, short_names=pypetShortNames)
            result = dotdict(result)
            self.pypetTrajectory.results[rInd].f_remove()
            self.results[rInd] = copy.deepcopy(result)

    # Postprocess result keys if pypet short names aren't used
    # Before: results.run_00000001.outputs.rates_inh
    # After:  outputs.rates_inh
    if not pypetShortNames:
        for i, r in self.results.items():
            new_dict = dotdict({})
            for key, value in r.items():
                new_key = "".join(key.split(".", 2)[2:])
                new_dict[new_key] = r[key]
            self.results[i] = copy.deepcopy(new_dict)

    self.aggregateResultsToDfResults()

    logging.info("All results loaded.")
```
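The memory-cap guard inside the loading loop can be isolated into a small helper. This is a simplified stand-in for the psutil-based check above (`check_memory_cap` is not a neurolib function):

```python
def check_memory_cap(used_memory_percent, memory_cap=95.0):
    # refuse to load more runs once RAM usage crosses the cap
    assert isinstance(memory_cap, (int, float)), "`memory_cap` must be float."
    assert 0 < memory_cap < 100, "`memory_cap` must be between 0 and 100"
    if used_memory_percent > memory_cap:
        raise MemoryError(
            f"Memory use is at {used_memory_percent}% and capped at {memory_cap}. Aborting."
        )


check_memory_cap(50.0)  # fine, loading continues
try:
    check_memory_cap(96.0)
except MemoryError:
    pass  # loading would stop here
```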
Source code in neurolib/optimize/exploration/exploration.py
```python
def run(self, **kwargs):
    """
    Call this function to run the exploration
    """
    self.runKwargs = kwargs
    assert self.initialized, "Pypet environment not initialized yet."
    self._t_start_exploration = datetime.datetime.now()
    self.env.run(self.evalFunction)
    self._t_end_exploration = datetime.datetime.now()
```
This function takes simulation results in the form of a nested dictionary and stores all data into the pypet hdf file.
Parameters:

- `outputs` (dict): Simulation outputs as a dictionary. (required)
- `traj` (`pypet.trajectory.Trajectory`): Pypet trajectory (required)

Source code in neurolib/optimize/exploration/exploration.py
```python
def saveToPypet(self, outputs, traj):
    """This function takes simulation results in the form of a nested dictionary
    and stores all data into the pypet hdf file.

    :param outputs: Simulation outputs as a dictionary.
    :type outputs: dict
    :param traj: Pypet trajectory
    :type traj: `pypet.trajectory.Trajectory`
    """

    def makeSaveStringForPypet(value, savestr):
        """Builds the pypet-style results string from the results
        dictionary's keys.
        """
        for k, v in value.items():
            if isinstance(v, dict):
                _savestr = savestr + k + "."
                makeSaveStringForPypet(v, _savestr)
            else:
                _savestr = savestr + k
                self.traj.f_add_result(_savestr, v)

    assert isinstance(outputs, dict), "Outputs must be an instance of dict."
    value = outputs
    savestr = "results.$."
    makeSaveStringForPypet(value, savestr)
```
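The recursive name building in `makeSaveStringForPypet` can be demonstrated standalone by collecting the dotted result names instead of writing them to a trajectory (`make_save_strings` is a hypothetical helper for illustration):

```python
def make_save_strings(value, savestr="results.$."):
    # walk the nested outputs dict and build pypet-style dotted result names
    names = []
    for k, v in value.items():
        if isinstance(v, dict):
            names.extend(make_save_strings(v, savestr + k + "."))
        else:
            names.append(savestr + k)
    return names


outputs = {"t": [0, 1], "rates": {"exc": [0.1], "inh": [0.2]}}
assert make_save_strings(outputs) == [
    "results.$.t",
    "results.$.rates.exc",
    "results.$.rates.inh",
]
```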
Return `xr.Dataset` from the exploration results.

Parameters:

- `bold` (bool, optional): If True, will load and return only BOLD output. Defaults to False

Source code in neurolib/optimize/exploration/exploration.py
```python
def xr(self, bold=False):
    """
    Return `xr.Dataset` from the exploration results.

    :param bold: if True, will load and return only BOLD output
    :type bold: bool
    """

    def _sanitize_nc_key(k):
        return k.replace("*", "_").replace(".", "_").replace("|", "_")

    assert self.results is not None, "Run `loadResults()` first to populate the results"
    assert len(self.results) == len(self.dfResults)
    # create intrinsic dims for one run
    timeDictKey, run_coords = self._getCoordsFromRun(self.results[0], bold=bold)
    dataarrays = []
    orig_search_coords = self.parameterSpace.get_parametrization()
    for runId, run_result in self.results.items():
        # take exploration coordinates for this run
        expl_coords = {k: v[runId] for k, v in orig_search_coords.items()}
        outputs = []
        run_result = self._filterDictionaryBold(run_result, bold=bold)
        for key, value in run_result.items():
            if key == timeDictKey:
                continue
            outputs.append(value)
        # create DataArray for run only - we need to add exploration coordinates
        data_temp = xr.DataArray(
            np.stack(outputs), dims=["output", "space", "time"], coords=run_coords, name="exploration"
        )
        expand_coords = {}
        # iterate exploration coordinates
        for k, v in expl_coords.items():
            # sanitize keys in the case of stars etc
            k = _sanitize_nc_key(k)
            # if single values, just assign
            if isinstance(v, (str, float, int)):
                expand_coords[k] = [v]
            # if arrays, check whether they can be squeezed into one value
            elif isinstance(v, np.ndarray):
                if np.unique(v).size == 1:
                    # if yes, just assign that one value
                    expand_coords[k] = [float(np.unique(v))]
                else:
                    # if no - coordinates cannot be arrays
                    raise ValueError("Cannot squeeze coordinates")
        # assign exploration coordinates to the DataArray
        dataarrays.append(data_temp.expand_dims(expand_coords))

    # finally, combine all arrays into one
    if self.parameterSpace.kind == "sequence":
        # when run in sequence, cannot combine to grid, so just concatenate along a new dimension
        combined = xr.concat(dataarrays, dim="run_no", coords="all")
    else:
        # sometimes combining xr.DataArrays does not work, see https://github.com/pydata/xarray/issues/3248#issuecomment-531511177
        # resolved by casting them explicitly to xr.Dataset
        combined = xr.combine_by_coords([da.to_dataset() for da in dataarrays])["exploration"]
    if self.parameterSpace.star:
        # if we explored over star params, unwrap them into attributes
        combined.attrs = {
            _sanitize_nc_key(k): list(self.model.params[k].keys()) for k in orig_search_coords.keys() if "*" in k
        }
    return combined
```
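The per-run `expand_dims` / `combine_by_coords` pattern can be reproduced with plain xarray (illustrative sketch; assumes numpy and xarray are installed and uses a made-up parameter `a`):

```python
import numpy as np
import xarray as xr

dataarrays = []
for a in [0.1, 0.2]:
    # one DataArray per run; the explored parameter becomes a length-1 dimension
    da = xr.DataArray(
        np.full((1, 3), a), dims=["space", "time"], name="exploration"
    ).expand_dims({"a": [a]})
    dataarrays.append(da)

# combining Datasets (not DataArrays) sidesteps the xarray issue linked above
combined = xr.combine_by_coords([da.to_dataset() for da in dataarrays])["exploration"]

assert combined.dims == ("a", "space", "time")
assert combined.shape == (2, 1, 3)
```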
Models are the core of neurolib. The Model superclass will help you to load, simulate, and analyse models. It also makes it very easy to implement your own neural mass model (see Example 0.6 custom model).
Loading a model
To load a model, we need to import the submodule of a model and instantiate it. This example shows how to load a single node of the ALNModel. See Example 0 aln minimal on how to simulate a whole-brain network using this model.
```python
from neurolib.models.aln import ALNModel  # Import the model

model = ALNModel()  # Create an instance
model.run()  # Run it
```
Model base class methods
The Model base class runs models, manages their outputs, parameters and more. This class should serve as the base class for all implemented models.
Source code in neurolib/models/model.py
class Model:\n\"\"\"The Model base class runs models, manages their outputs, parameters and more.\n This class should serve as the base class for all implemented models.\n \"\"\"\n\n def __init__(self, integration, params):\n if hasattr(self, \"name\"):\n if self.name is not None:\n assert isinstance(self.name, str), f\"Model name is not a string.\"\n else:\n self.name = \"Noname\"\n\n assert integration is not None, \"Model integration function not given.\"\n self.integration = integration\n\n assert isinstance(params, dict), \"Parameters must be a dictionary.\"\n self.params = dotdict(params)\n\n # assert self.state_vars not None:\n assert hasattr(\n self, \"state_vars\"\n ), f\"Model {self.name} has no attribute `state_vars`, which should be alist of strings containing all variable names.\"\n assert np.all([type(s) is str for s in self.state_vars]), \"All entries in state_vars must be strings.\"\n\n assert hasattr(\n self, \"default_output\"\n ), f\"Model {self.name} needs to define a default output variable in `default_output`.\"\n\n assert isinstance(self.default_output, str), \"`default_output` must be a string.\"\n\n # if no output_vars is set, it will be replaced by state_vars\n if not hasattr(self, \"output_vars\"):\n self.output_vars = self.state_vars\n\n # create output and state dictionary\n self.outputs = dotdict({})\n self.state = dotdict({})\n self.maxDelay = None\n self.initializeRun()\n\n self.boldInitialized = False\n\n logging.info(f\"{self.name}: Model initialized.\")\n\n def initializeBold(self):\n\"\"\"Initialize BOLD model.\"\"\"\n self.boldInitialized = False\n\n # function to transform model state before passing it to the bold model\n # Note: This can be used like the parameter \\epsilon in Friston2000\n # (neural efficacy) by multiplying the input with a constant via\n # self.boldInputTransform = lambda x: x * epsilon\n if not hasattr(self, \"boldInputTransform\"):\n self.boldInputTransform = None\n\n self.boldModel = 
bold.BOLDModel(self.params[\"N\"], self.params[\"dt\"])\n self.boldInitialized = True\n # logging.info(f\"{self.name}: BOLD model initialized.\")\n\n def simulateBold(self, t, variables, append=False):\n\"\"\"Gets the default output of the model and simulates the BOLD model.\n Adds the simulated BOLD signal to outputs.\n \"\"\"\n if self.boldInitialized:\n # first we loop through all state variables\n for svn, sv in zip(self.state_vars, variables):\n # the default output is used as the input for the bold model\n if svn == self.default_output:\n bold_input = sv[:, self.startindt :]\n # logging.debug(f\"BOLD input `{svn}` of shape {bold_input.shape}\")\n if bold_input.shape[1] >= self.boldModel.samplingRate_NDt:\n # only if the length of the output has a zero mod to the sampling rate,\n # the downsampled output from the boldModel can correctly appended to previous data\n # so: we are lazy here and simply disable appending in that case ...\n if not bold_input.shape[1] % self.boldModel.samplingRate_NDt == 0:\n append = False\n logging.warn(\n f\"Output size {bold_input.shape[1]} is not a multiple of BOLD sampling length { self.boldModel.samplingRate_NDt}, will not append data.\"\n )\n logging.debug(f\"Simulating BOLD: boldModel.run(append={append})\")\n\n # transform bold input according to self.boldInputTransform\n if self.boldInputTransform:\n bold_input = self.boldInputTransform(bold_input)\n\n # simulate bold model\n self.boldModel.run(bold_input, append=append)\n\n t_BOLD = self.boldModel.t_BOLD\n BOLD = self.boldModel.BOLD\n self.setOutput(\"BOLD.t_BOLD\", t_BOLD)\n self.setOutput(\"BOLD.BOLD\", BOLD)\n else:\n logging.warn(\n f\"Will not simulate BOLD if output {bold_input.shape[1]*self.params['dt']} not at least of duration {self.boldModel.samplingRate_NDt*self.params['dt']}\"\n )\n else:\n logging.warn(\"BOLD model not initialized, not simulating BOLD. 
Use `run(bold=True)`\")\n\n def checkChunkwise(self, chunksize):\n\"\"\"Checks if the model fulfills requirements for chunkwise simulation.\n Checks whether the sampling rate for outputs fits to chunksize and duration.\n Throws errors if not.\"\"\"\n assert self.state_vars is not None, \"State variable names not given.\"\n assert self.init_vars is not None, \"Initial value variable names not given.\"\n assert len(self.state_vars) == len(self.init_vars), \"State variables are not same length as initial values.\"\n\n # throw a warning if the user is nasty\n if int(self.params[\"duration\"] / self.params[\"dt\"]) % chunksize != 0:\n logging.warning(\n f\"It is strongly advised to use a `chunksize` ({chunksize}) that is a divisor of `duration / dt` ({int(self.params['duration']/self.params['dt'])}).\"\n )\n\n # if `sampling_dt` is set, do some checks\n if self.params.get(\"sampling_dt\") is not None:\n # sample_dt checks are required after setting chunksize\n assert (\n chunksize * self.params[\"dt\"] >= self.params[\"sampling_dt\"]\n ), \"`chunksize * dt` must be >= `sampling_dt`\"\n\n # ugly floating point modulo hack: instead of float1%float2==0, we do\n # (float1/float2)%1==0\n assert ((chunksize * self.params[\"dt\"]) / self.params[\"sampling_dt\"]) % 1 == 0, (\n f\"Chunksize {chunksize * self.params['dt']} must be divisible by sampling dt \"\n f\"{self.params['sampling_dt']}\"\n )\n assert (\n (self.params[\"duration\"] % (chunksize * self.params[\"dt\"])) / self.params[\"sampling_dt\"]\n ) % 1 == 0, (\n f\"Last chunk of size {self.params['duration'] % (chunksize * self.params['dt'])} must be divisible by sampling dt \"\n f\"{self.params['sampling_dt']}\"\n )\n\n def setSamplingDt(self):\n\"\"\"Checks if sampling_dt is set correctly and sets self.`sample_every`\n 1) Check if sampling_dt is multiple of dt\n 2) Check if semplind_dt is greater than duration\n \"\"\"\n\n if self.params.get(\"sampling_dt\") is None:\n self.sample_every = 1\n elif 
Source code in neurolib/models/model.py

```python
    def setSamplingDt(self):
        """Checks if sampling_dt is set correctly and sets self.sample_every
        1) Check that sampling_dt is a multiple of dt
        2) Check that sampling_dt is not greater than duration
        """

        if self.params.get("sampling_dt") is None:
            self.sample_every = 1
        elif self.params.get("sampling_dt") > 0:
            assert self.params["sampling_dt"] >= self.params["dt"], "`sampling_dt` needs to be >= `dt`"
            assert (
                self.params["duration"] >= self.params["sampling_dt"]
            ), "`sampling_dt` needs to be lower than `duration`"
            self.sample_every = int(self.params["sampling_dt"] / self.params["dt"])
        else:
            raise ValueError(f"Can't handle `sampling_dt`={self.params.get('sampling_dt')}")

    def initializeRun(self, initializeBold=False):
        """Initialization before each run.

        :param initializeBold: initialize BOLD model
        :type initializeBold: bool
        """
        # get the maxDelay of the system
        self.maxDelay = self.getMaxDelay()

        # length of the initial condition
        self.startindt = self.maxDelay + 1

        # check dt / sampling_dt
        self.setSamplingDt()

        # force bold if params['bold'] == True
        if self.params.get("bold"):
            initializeBold = True
        # set up the bold model, if it didn't happen yet
        if initializeBold and not self.boldInitialized:
            self.initializeBold()

    def run(
        self,
        inputs=None,
        chunkwise=False,
        chunksize=None,
        bold=False,
        append=False,
        append_outputs=None,
        continue_run=False,
    ):
        """
        Main interfacing function to run a model.

        The model can be run in three different ways:
        1) `model.run()` starts a new run.
        2) `model.run(chunkwise=True)` runs the simulation in chunks of length `chunksize`.
        3) `model.run(continue_run=True)` continues the simulation of a previous run.

        :param inputs: list of inputs to the model, must have the same order as model.input_vars.
            Note: no sanity check is performed for performance reasons. Take care of the inputs yourself.
        :type inputs: list[np.ndarray]
        :param chunkwise: simulate model chunkwise or in one single run, defaults to False
        :type chunkwise: bool, optional
        :param chunksize: size of the chunk to simulate in dt, if set will imply chunkwise=True, defaults to 2s
        :type chunksize: int, optional
        :param bold: simulate BOLD signal (only for chunkwise integration), defaults to False
        :type bold: bool, optional
        :param append: append the chunkwise outputs to the outputs attribute, defaults to False
        :type append: bool, optional
        :param continue_run: continue a simulation by using the initial values from a previous simulation
        :type continue_run: bool
        """
        # TODO: legacy argument support
        if append_outputs is not None:
            append = append_outputs

        # if a previous run is not to be continued clear the model's state
        if continue_run is False:
            self.clearModelState()

        self.initializeRun(initializeBold=bold)

        # enable chunkwise if chunksize is set
        chunkwise = chunkwise if chunksize is None else True

        if chunkwise is False:
            self.integrate(append_outputs=append, simulate_bold=bold)
            if continue_run:
                self.setInitialValuesToLastState()

        else:
            if chunksize is None:
                chunksize = int(2000 / self.params["dt"])

            # check if model is safe for chunkwise integration
            # and whether sampling_dt is compatible with duration and chunksize
            self.checkChunkwise(chunksize)
            if bold and not self.boldInitialized:
                logging.warning(f"{self.name}: BOLD model not initialized, not simulating BOLD. Use `run(bold=True)`")
                bold = False
            self.integrateChunkwise(chunksize=chunksize, bold=bold, append_outputs=append)

        # check if there was a problem with the simulated data
        self.checkOutputs()

    def checkOutputs(self):
        # check nans in output
        if np.isnan(self.output).any():
            logging.error("nan in model output!")
        else:
            EXPLOSION_THRESHOLD = 1e20
            if (self.output > EXPLOSION_THRESHOLD).any():
                logging.error("explosion in model output!")

        # check nans in BOLD
        if "BOLD" in self.outputs:
            if np.isnan(self.outputs.BOLD.BOLD).any():
                logging.error("nan in BOLD output!")

    def integrate(self, append_outputs=False, simulate_bold=False):
        """Calls each model's `integration` function and saves the state and the outputs of the model.

        :param append_outputs: append the chunkwise outputs to the outputs attribute, defaults to False
        :type append_outputs: bool, optional
        """
        # run integration
        t, *variables = self.integration(self.params)
        self.storeOutputsAndStates(t, variables, append=append_outputs)

        # force bold if params['bold'] == True
        if self.params.get("bold"):
            simulate_bold = True

        # bold simulation after integration
        if simulate_bold and self.boldInitialized:
            self.simulateBold(t, variables, append=True)

    def integrateChunkwise(self, chunksize, bold=False, append_outputs=False):
        """Repeatedly calls the chunkwise integration for the whole duration of the simulation.
        If `bold==True`, the BOLD model is simulated after each chunk.

        :param chunksize: size of each chunk to simulate in units of dt
        :type chunksize: int
        :param bold: simulate BOLD model after each chunk, defaults to False
        :type bold: bool, optional
        :param append_outputs: append the chunkwise outputs to the outputs attribute, defaults to False
        :type append_outputs: bool, optional
        """
        totalDuration = self.params["duration"]

        dt = self.params["dt"]
        lastT = 0
        while totalDuration - lastT >= dt - 1e-6:
            # Determine the size of the next chunk,
            # accounting for floating point errors
            remainingChunkSize = int(round((totalDuration - lastT) / dt))
            currentChunkSize = min(chunksize, remainingChunkSize)

            self.autochunk(chunksize=currentChunkSize, append_outputs=append_outputs, bold=bold)
            # we save the last simulated time step
            lastT += currentChunkSize * dt
            # or
            # lastT = self.state["t"][-1]

        # set duration back to its original value
        self.params["duration"] = totalDuration

    def clearModelState(self):
        """Clears the model's state to create a fresh one"""
        self.state = dotdict({})
        self.outputs = dotdict({})
        # reinitialize bold model
        if self.params.get("bold"):
            self.initializeBold()

    def storeOutputsAndStates(self, t, variables, append=False):
        """Takes the simulated variables of the integration and stores them in the appropriate model output and state object.

        :param t: time vector
        :type t: list
        :param variables: variables from time integration
        :type variables: numpy.ndarray
        :param append: append output to existing output or overwrite, defaults to False
        :type append: bool, optional
        """
        # save time array
        self.setOutput("t", t, append=append, removeICs=True)
        self.setStateVariables("t", t)
        # save outputs
        for svn, sv in zip(self.state_vars, variables):
            if svn in self.output_vars:
                self.setOutput(svn, sv, append=append, removeICs=True)
            self.setStateVariables(svn, sv)

    def setInitialValuesToLastState(self):
        """Reads the last state of the model and sets the initial conditions to that state for continuing a simulation."""
        for iv, sv in zip(self.init_vars, self.state_vars):
            # if state variables are one-dimensional (in space only)
            if (self.state[sv].ndim == 0) or (self.state[sv].ndim == 1):
                self.params[iv] = self.state[sv]
            # if they are space-time arrays
            else:
                # we set the next initial condition to the last state
                self.params[iv] = self.state[sv][:, -self.startindt :]

    def randomICs(self, min=0, max=1):
        """Generates a new set of uniformly-distributed random initial conditions for the model.

        TODO: All parameters are drawn from the same distribution / range. Allow for independent ranges.

        :param min: Minimum of uniform distribution
        :type min: float
        :param max: Maximum of uniform distribution
        :type max: float
        """
        for iv in self.init_vars:
            if self.params[iv].ndim == 1:
                self.params[iv] = np.random.uniform(min, max, (self.params["N"]))
            elif self.params[iv].ndim == 2:
                self.params[iv] = np.random.uniform(min, max, (self.params["N"], 1))

    def setInputs(self, inputs):
        """Take inputs from a list and store them in the appropriate model parameter for external input.
        TODO: This is not safe yet; checks should be implemented, e.g. whether the model has inputs defined at all.

        :param inputs: list of inputs
        :type inputs: list[np.ndarray, ...]
        """
        for i, iv in enumerate(self.input_vars):
            self.params[iv] = inputs[i].copy()

    def autochunk(self, inputs=None, chunksize=1, append_outputs=False, bold=False):
        """Executes a single chunk of integration, either for a given duration
        or a single timestep `dt`. Gathers all inputs to the model and resets
        the initial conditions as a preparation for the next chunk.

        :param inputs: list of input values, ordered according to self.input_vars, defaults to None
        :type inputs: list[np.ndarray], optional
        :param chunksize: length of a chunk to simulate in dt, defaults to 1
        :type chunksize: int, optional
        :param append_outputs: append the chunkwise outputs to the outputs attribute, defaults to False
        :type append_outputs: bool, optional
        """

        # set the duration for this chunk
        self.params["duration"] = chunksize * self.params["dt"]

        # set inputs
        if inputs is not None:
            self.setInputs(inputs)

        # run integration
        self.integrate(append_outputs=append_outputs, simulate_bold=bold)

        # set initial conditions to last state for the next chunk
        self.setInitialValuesToLastState()

    def getMaxDelay(self):
        """Computes the maximum delay of the model. This function should be overloaded
        if the model has internal delays (additional to the delay between nodes defined by Dmat)
        such as the delay between an excitatory and inhibitory population within each brain area.
        If this function is not overloaded, the maximum delay is assumed to be defined by the
        global delay matrix `Dmat`.

        Note: Maximum delay is given in units of dt.

        :return: maximum delay of the model in units of dt
        :rtype: int
        """
        dt = self.params.get("dt")
        Dmat = self.params.get("lengthMat")

        if Dmat is not None:
            # divide Dmat by signalV
            signalV = self.params.get("signalV") or 0
            if signalV > 0:
                Dmat = Dmat / signalV
            else:
                # if signalV is 0, eliminate delays
                Dmat = Dmat * 0.0

        # only if Dmat and dt exist, a global max delay can be computed
        if Dmat is not None and dt is not None:
            Dmat_ndt = np.around(Dmat / dt)  # delay matrix in multiples of dt
            max_global_delay = int(np.amax(Dmat_ndt))
        else:
            max_global_delay = 0
        return max_global_delay

    def setStateVariables(self, name, data):
        """Saves the model's current state variables.

        TODO: Cut state variables to length of self.maxDelay
        However, this could be a time-memory tradeoff

        :param name: name of the state variable
        :type name: str
        :param data: value of the variable
        :type data: np.ndarray
        """
        # if the data is temporal, cut off initial values
        # NOTE: this should actually check for
        # if data.shape[1] > 1:
        # else: data.copy()
        # there could be (N, 1)-dimensional output; right now
        # it is required to be of shape (N, )
        if data.ndim == 2:
            self.state[name] = data[:, -self.startindt :].copy()
        else:
            self.state[name] = data.copy()

    def setOutput(self, name, data, append=False, removeICs=False):
        """Adds an output to the model, typically a simulation result.

        :param name: Name of the output in dot.notation, e.g. "outputgroup.output"
        :type name: str
        :param data: Output data, can't be a dictionary!
        :type data: `numpy.ndarray`
        """
        assert not isinstance(data, dict), "Output data cannot be a dictionary."
        assert isinstance(name, str), "Output name must be a string."
        assert isinstance(data, np.ndarray), "Output must be a `numpy.ndarray`."

        # remove initial conditions from output
        if removeICs and name != "t":
            if data.ndim == 1:
                data = data[self.startindt :]
            elif data.ndim == 2:
                data = data[:, self.startindt :]
            else:
                raise ValueError(f"Don't know how to truncate data of shape {data.shape}.")

        # subsample to sampling dt
        if data.ndim == 1:
            data = data[:: self.sample_every]
        elif data.ndim == 2:
            data = data[:, :: self.sample_every]
        else:
            raise ValueError(f"Don't know how to subsample data of shape {data.shape}.")

        # if the output is a single name (not dot.separated)
        if "." not in name:
            # append data
            if append and name in self.outputs:
                # special treatment for time data:
                # increment the time by the last recorded duration
                if name == "t":
                    data += self.outputs[name][-1]
                self.outputs[name] = np.hstack((self.outputs[name], data))
            else:
                # save all data into output dict
                self.outputs[name] = data
            # set output as an attribute
            setattr(self, name, self.outputs[name])
        else:
            # build results dictionary and write into self.outputs
            # dot.notation iteration
            keys = name.split(".")
            level = self.outputs  # not a copy, a reference!
            for i, k in enumerate(keys):
                # if it's the last iteration, store data
                if i == len(keys) - 1:
                    # TODO: this needs to be append-aware like above
                    level[k] = data
                # if key is in outputs, then go deeper
                elif k in level:
                    level = level[k]
                    setattr(self, k, level)
                # if it's a new key, create new nested dictionary, set attribute, then go deeper
                else:
                    level[k] = dotdict({})
                    setattr(self, k, level[k])
                    level = level[k]

    def getOutput(self, name):
        """Get an output of a given name (dot.separated).

        :param name: A key, grouped outputs in the form group.subgroup.variable
        :type name: str

        :returns: Output data
        """
        assert isinstance(name, str), "Output name must be a string."
        keys = name.split(".")
        lastOutput = self.outputs.copy()
        for i, k in enumerate(keys):
            assert k in lastOutput, f"Key {k} not found in outputs."
            lastOutput = lastOutput[k]
        return lastOutput

    def __getitem__(self, key):
        """Index outputs with a dictionary-like key, e.g., `model['rates_exc']`."""
        return self.getOutput(key)

    def getOutputs(self, group=""):
        """Get all outputs of an output group. Examples: `getOutputs("BOLD")` or simply `getOutputs()`

        :param group: Group name, subgroups separated by dots. If left empty (default), all outputs of the root group
            are returned.
        :type group: str
        """
        assert isinstance(group, str), "Group name must be a string."

        def filterOutputsFromGroupDict(groupDict):
            """Return a dictionary with the output data of a group, disregarding all other nested dicts.

            :param groupDict: Dictionary of outputs (can include other groups)
            :type groupDict: dict
            """
            assert isinstance(groupDict, dict), "Not a dictionary."
            # make a shallow copy of the dictionary
            returnDict = groupDict.copy()
            for key, value in groupDict.items():
                if isinstance(value, dict):
                    del returnDict[key]
            return returnDict

        # if a group deeper than the root is given, select the last node
        lastOutput = self.outputs.copy()
        if len(group) > 0:
            keys = group.split(".")
            for i, k in enumerate(keys):
                assert k in lastOutput, f"Key {k} not found in outputs."
                lastOutput = lastOutput[k]
                assert isinstance(lastOutput, dict), f"Key {k} does not refer to a group."
        # filter out all output *groups* that might be in this node and return only output data
        return filterOutputsFromGroupDict(lastOutput)

    @property
    def output(self):
        """Returns value of default output as defined by `self.default_output`.
        Note that all outputs are saved in the attribute `self.outputs`.
        """
        assert self.default_output is not None, "Default output has not been set yet. Use `setDefaultOutput()`."
        return self.getOutput(self.default_output)

    def xr(self, group=""):
        """Converts a group of outputs to xarray. Output group needs to contain an
        element that starts with the letter "t" or it will not recognize any time axis.

        :param group: Output group name, example: "BOLD". Leave empty for top group.
        :type group: str
        """
        assert isinstance(group, str), "Group name must be a string."
        # take all outputs of one group; disregard all dictionaries because they are subgroups
        outputDict = self.getOutputs(group)
        # make sure that there is a time array
        timeDictKey = ""
        if "t" in outputDict:
            timeDictKey = "t"
        else:
            for k in outputDict:
                if k.startswith("t"):
                    timeDictKey = k
                    logging.info(f"Assuming {k} to be the time axis.")
                    break
        assert len(timeDictKey) > 0, f"No time array found (starting with t) in output group {group}."
        t = outputDict[timeDictKey].copy()
        del outputDict[timeDictKey]
        outputs = []
        outputNames = []
        for key, value in outputDict.items():
            outputNames.append(key)
            outputs.append(value)

        nNodes = outputs[0].shape[0]
        nodes = list(range(nNodes))
        allOutputsStacked = np.stack(outputs)  # shape: (output, space, time)
        result = xr.DataArray(allOutputsStacked, coords=[outputNames, nodes, t], dims=["output", "space", "time"])
        return result
```
Executes a single chunk of integration, either for a given duration or a single timestep dt. Gathers all inputs to the model and resets the initial conditions as a preparation for the next chunk.
Parameters:
- inputs (list[np.ndarray], optional): list of input values, ordered according to self.input_vars. Defaults to None.
- chunksize (int, optional): length of a chunk to simulate in dt. Defaults to 1.
- append_outputs (bool, optional): append the chunkwise outputs to the outputs attribute. Defaults to False.
Checks if the model fulfills the requirements for chunkwise simulation and whether the output sampling rate fits the chunksize and duration. Raises errors if not.
Source code in neurolib/models/model.py
```python
def checkChunkwise(self, chunksize):
    """Checks if the model fulfills requirements for chunkwise simulation.
    Checks whether the sampling rate for outputs fits to chunksize and duration.
    Throws errors if not."""
    assert self.state_vars is not None, "State variable names not given."
    assert self.init_vars is not None, "Initial value variable names not given."
    assert len(self.state_vars) == len(self.init_vars), "State variables are not same length as initial values."

    # throw a warning if the user is nasty
    if int(self.params["duration"] / self.params["dt"]) % chunksize != 0:
        logging.warning(
            f"It is strongly advised to use a `chunksize` ({chunksize}) that is a divisor of `duration / dt` ({int(self.params['duration']/self.params['dt'])})."
        )

    # if `sampling_dt` is set, do some checks
    if self.params.get("sampling_dt") is not None:
        # sample_dt checks are required after setting chunksize
        assert (
            chunksize * self.params["dt"] >= self.params["sampling_dt"]
        ), "`chunksize * dt` must be >= `sampling_dt`"

        # ugly floating point modulo hack: instead of float1%float2==0, we do
        # (float1/float2)%1==0
        assert ((chunksize * self.params["dt"]) / self.params["sampling_dt"]) % 1 == 0, (
            f"Chunksize {chunksize * self.params['dt']} must be divisible by sampling dt "
            f"{self.params['sampling_dt']}"
        )
        assert (
            (self.params["duration"] % (chunksize * self.params["dt"])) / self.params["sampling_dt"]
        ) % 1 == 0, (
            f"Last chunk of size {self.params['duration'] % (chunksize * self.params['dt'])} must be divisible by sampling dt "
            f"{self.params['sampling_dt']}"
        )
```
Computes the maximum delay of the model. This function should be overloaded if the model has internal delays (additional to delay between nodes defined by Dmat) such as the delay between an excitatory and inhibitory population within each brain area. If this function is not overloaded, the maximum delay is assumed to be defined from the global delay matrix Dmat.
Note: Maximum delay is given in units of dt.
Returns:
- int: maximum delay of the model in units of dt
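The delay arithmetic of getMaxDelay can be reproduced without neurolib. A minimal sketch with plain Python lists standing in for the numpy arrays; the two-node lengthMat and the units are assumptions for illustration:

```python
# Hypothetical 2-node example of the delay computation in getMaxDelay:
# fiber length / signalV -> delay in ms; then divide by dt and round.
dt = 0.1          # integration timestep (assumed: ms)
signalV = 20.0    # signal transmission speed (assumed: mm/ms)
lengthMat = [[0.0, 40.0],
             [40.0, 0.0]]  # fiber lengths between nodes (assumed: mm)

# delay matrix in multiples of dt (round stands in for np.around)
Dmat_ndt = [[round((l / signalV) / dt) for l in row] for row in lengthMat]
max_global_delay = int(max(max(row) for row in Dmat_ndt))
print(max_global_delay)  # 40 mm / 20 mm/ms = 2 ms = 20 steps -> 20
```

The result, 20 steps, is also what determines `startindt = maxDelay + 1`, i.e. the length of the initial condition.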
Get an output of a given name (dot.separated).
Parameters:
- name (str): A key, grouped outputs in the form group.subgroup.variable. Required.
Returns:
- Output data
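The dot-notation lookup simply walks the nested output dictionaries one key at a time. A minimal stdlib sketch, where the outputs dictionary and the get_output helper are hypothetical stand-ins for Model.outputs and getOutput:

```python
# Hypothetical nested outputs dictionary, mimicking Model.outputs:
outputs = {"t": [0.0, 0.1, 0.2], "BOLD": {"BOLD": [1.0, 2.0], "t_BOLD": [0.0, 2.0]}}

def get_output(outputs, name):
    # walk the dot-separated key through the nested dictionaries
    last = outputs
    for k in name.split("."):
        assert k in last, f"Key {k} not found in outputs."
        last = last[k]
    return last

print(get_output(outputs, "BOLD.BOLD"))  # [1.0, 2.0]
```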
Get all outputs of an output group. Examples: getOutputs("BOLD") or simply getOutputs().
Parameters:
- group (str): Group name, subgroups separated by dots. If left empty (default), all outputs of the root group are returned. Defaults to ''.
Initialize BOLD model.
Source code in neurolib/models/model.py

```python
def initializeBold(self):
    """Initialize BOLD model."""
    self.boldInitialized = False

    # function to transform model state before passing it to the bold model
    # Note: This can be used like the parameter \epsilon in Friston2000
    # (neural efficacy) by multiplying the input with a constant via
    # self.boldInputTransform = lambda x: x * epsilon
    if not hasattr(self, "boldInputTransform"):
        self.boldInputTransform = None

    self.boldModel = bold.BOLDModel(self.params["N"], self.params["dt"])
    self.boldInitialized = True
```
Calls each model's integration function and saves the state and the outputs of the model.
Parameters:
- append_outputs (bool, optional): append the chunkwise outputs to the outputs attribute. Defaults to False.
Repeatedly calls the chunkwise integration for the whole duration of the simulation. If bold==True, the BOLD model is simulated after each chunk.
Parameters:
- chunksize (int): size of each chunk to simulate in units of dt. Required.
- bold (bool, optional): simulate BOLD model after each chunk. Defaults to False.
- append_outputs (bool, optional): append the chunkwise outputs to the outputs attribute. Defaults to False.
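The chunk-size bookkeeping of the loop can be illustrated without a model; this sketch reproduces only the duration arithmetic (the values are arbitrary assumptions):

```python
# Sketch of the chunking loop in integrateChunkwise (durations only, no model):
dt = 0.1
totalDuration = 10.0   # 100 steps of dt in total
chunksize = 30         # chunk length in units of dt

chunks = []
lastT = 0.0
while totalDuration - lastT >= dt - 1e-6:
    # round() absorbs floating point error in totalDuration - lastT
    remaining = int(round((totalDuration - lastT) / dt))
    current = min(chunksize, remaining)
    chunks.append(current)
    lastT += current * dt

print(chunks)  # three full chunks of 30 steps and a final chunk of 10
```

The `dt - 1e-6` slack and the `round()` are what keep the last, shorter chunk from being lost (or duplicated) to floating point error.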
Generates a new set of uniformly-distributed random initial conditions for the model.
TODO: All parameters are drawn from the same distribution / range. Allow for independent ranges.
Parameters:
- min (float): Minimum of uniform distribution. Defaults to 0.
- max (float): Maximum of uniform distribution. Defaults to 1.
The model can be run in three different ways: 1) model.run() starts a new run. 2) model.run(chunkwise=True) runs the simulation in chunks of length chunksize. 3) model.run(continue_run=True) continues the simulation of a previous run.
Parameters:
- inputs (list[np.ndarray]): list of inputs to the model, must have the same order as model.input_vars. Note: no sanity check is performed for performance reasons. Take care of the inputs yourself. Defaults to None.
- chunkwise (bool, optional): simulate model chunkwise or in one single run. Defaults to False.
- chunksize (int, optional): size of the chunk to simulate in dt; if set, implies chunkwise=True. Defaults to roughly 2 s of simulation time.
- bold (bool, optional): simulate BOLD signal (only for chunkwise integration). Defaults to False.
- append (bool, optional): append the chunkwise outputs to the outputs attribute. Defaults to False.
- continue_run (bool): continue a simulation by using the initial values from a previous simulation. Defaults to False.
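The default chunksize is expressed in integration steps, not in time: with dt given in milliseconds (dt = 0.1 is an assumption matching the example notebook above), int(2000 / dt) is two seconds' worth of steps:

```python
# Default chunksize in run(): about 2 s of simulation, in steps of dt.
dt = 0.1                    # integration timestep (assumed: ms)
chunksize = int(2000 / dt)  # 2000 ms / 0.1 ms = 20000 steps
print(chunksize)  # -> 20000
```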
Reads the last state of the model and sets the initial conditions to that state for continuing a simulation.
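For space-time state arrays, the trailing startindt samples of each node become the next initial condition. A stdlib sketch with a list-of-lists standing in for the (N, time) numpy array (values assumed):

```python
# The last `startindt` samples of each node's state become the next
# initial condition (mimics self.state[sv][:, -self.startindt:]).
startindt = 3  # maxDelay + 1
state_x = [[0.1, 0.2, 0.3, 0.4, 0.5],
           [1.1, 1.2, 1.3, 1.4, 1.5]]  # hypothetical state, N=2 nodes x 5 steps

xs_init = [row[-startindt:] for row in state_x]
print(xs_init)  # [[0.3, 0.4, 0.5], [1.3, 1.4, 1.5]]
```

Keeping startindt = maxDelay + 1 samples ensures that delayed interactions in the next chunk can still look back far enough into the past.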
Take inputs from a list and store them in the appropriate model parameter for external input. TODO: This is not safe yet; checks should be implemented, e.g. whether the model has inputs defined at all.
Parameters:
- inputs (list[np.ndarray, ...]): list of inputs. Required.
Adds an output to the model, typically a simulation result.
Parameters:
- name (str): Name of the output in dot.notation, e.g. "outputgroup.output". Required.
- data (numpy.ndarray): Output data; can't be a dictionary. Required.
Checks if sampling_dt is set correctly and sets self.sample_every. 1) Check that sampling_dt is a multiple of dt. 2) Check that sampling_dt does not exceed duration.
Source code in neurolib/models/model.py
```python
def setSamplingDt(self):
    """Checks if sampling_dt is set correctly and sets self.sample_every
    1) Check that sampling_dt is a multiple of dt
    2) Check that sampling_dt does not exceed duration
    """

    if self.params.get("sampling_dt") is None:
        self.sample_every = 1
    elif self.params.get("sampling_dt") > 0:
        assert self.params["sampling_dt"] >= self.params["dt"], "`sampling_dt` needs to be >= `dt`"
        assert (
            self.params["duration"] >= self.params["sampling_dt"]
        ), "`sampling_dt` needs to be lower than `duration`"
        self.sample_every = int(self.params["sampling_dt"] / self.params["dt"])
    else:
        raise ValueError(f"Can't handle `sampling_dt`={self.params.get('sampling_dt')}")
```
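The subsampling factor reduces to an integer division of the two time steps. A minimal, self-contained sketch of the same logic (a standalone helper for illustration, not part of neurolib; the duration check is omitted for brevity):

```python
def compute_sample_every(dt, sampling_dt=None):
    """Return the integer subsampling stride implied by `sampling_dt`.

    Mirrors the checks in `setSamplingDt`: `sampling_dt` must be positive
    and at least as large as the integration step `dt`.
    """
    if sampling_dt is None:
        return 1  # keep every integration step
    if sampling_dt <= 0:
        raise ValueError(f"Can't handle `sampling_dt`={sampling_dt}")
    if sampling_dt < dt:
        raise ValueError("`sampling_dt` needs to be >= `dt`")
    return int(sampling_dt / dt)

# e.g. with dt = 0.1 ms and sampling_dt = 1.0 ms, every 10th sample is kept
print(compute_sample_every(0.1, 1.0))  # → 10
```

With `sampling_dt=None` the stride is 1, i.e. the output is stored at full integration resolution.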
TODO: Cut state variables to the length of self.maxDelay. However, this could be a time-memory tradeoff.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | str | name of the state variable | required |
| `data` | np.ndarray | value of the variable | required |

Source code in neurolib/models/model.py
```python
def setStateVariables(self, name, data):
    """Saves the model's current state variables.

    TODO: Cut state variables to the length of self.maxDelay.
    However, this could be a time-memory tradeoff.

    :param name: name of the state variable
    :type name: str
    :param data: value of the variable
    :type data: np.ndarray
    """
    # old
    # self.state[name] = data.copy()

    # if the data is temporal, cut off initial values
    # NOTE: this should actually check for
    # if data.shape[1] > 1:
    # else: data.copy()
    # there could be (N, 1)-dimensional output; right now
    # it is required to be of shape (N, )
    if data.ndim == 2:
        self.state[name] = data[:, -self.startindt :].copy()
    else:
        self.state[name] = data.copy()
```
Gets the default output of the model and simulates the BOLD model. Adds the simulated BOLD signal to outputs.
Source code in neurolib/models/model.py
```python
def simulateBold(self, t, variables, append=False):
    """Gets the default output of the model and simulates the BOLD model.
    Adds the simulated BOLD signal to outputs.
    """
    if self.boldInitialized:
        # first we loop through all state variables
        for svn, sv in zip(self.state_vars, variables):
            # the default output is used as the input for the bold model
            if svn == self.default_output:
                bold_input = sv[:, self.startindt :]
                # logging.debug(f"BOLD input `{svn}` of shape {bold_input.shape}")
                if bold_input.shape[1] >= self.boldModel.samplingRate_NDt:
                    # only if the length of the output is a multiple of the sampling rate
                    # can the downsampled output from the boldModel be correctly appended
                    # to previous data, so: we are lazy here and simply disable appending otherwise ...
                    if not bold_input.shape[1] % self.boldModel.samplingRate_NDt == 0:
                        append = False
                        logging.warn(
                            f"Output size {bold_input.shape[1]} is not a multiple of BOLD sampling length {self.boldModel.samplingRate_NDt}, will not append data."
                        )
                    logging.debug(f"Simulating BOLD: boldModel.run(append={append})")

                    # transform bold input according to self.boldInputTransform
                    if self.boldInputTransform:
                        bold_input = self.boldInputTransform(bold_input)

                    # simulate bold model
                    self.boldModel.run(bold_input, append=append)

                    t_BOLD = self.boldModel.t_BOLD
                    BOLD = self.boldModel.BOLD
                    self.setOutput("BOLD.t_BOLD", t_BOLD)
                    self.setOutput("BOLD.BOLD", BOLD)
                else:
                    logging.warn(
                        f"Will not simulate BOLD if output {bold_input.shape[1] * self.params['dt']} is not at least of duration {self.boldModel.samplingRate_NDt * self.params['dt']}"
                    )
    else:
        logging.warn("BOLD model not initialized, not simulating BOLD. Use `run(bold=True)`")
```
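The append logic in `simulateBold` boils down to two length checks against the BOLD sampling stride. A minimal, self-contained sketch of just that decision (a hypothetical helper for illustration, not part of neurolib):

```python
def bold_append_allowed(n_samples, sampling_stride, append=True):
    """Decide whether a chunk of model output can feed the BOLD model.

    Mirrors the checks in `simulateBold`: the chunk must be at least one
    BOLD sampling stride long, and appending is silently disabled when the
    chunk length is not an exact multiple of the stride (the downsampled
    output would otherwise misalign with previously stored data).
    Returns a tuple (can_simulate, append).
    """
    if n_samples < sampling_stride:
        return False, append  # too short to produce a single BOLD sample
    if n_samples % sampling_stride != 0:
        append = False
    return True, append

print(bold_append_allowed(25, 10))  # → (True, False): simulate, but don't append
```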
Takes the simulated variables of the integration and stores them in the appropriate model output and state objects.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `t` | list | time vector | required |
| `variables` | numpy.ndarray | variables from time integration | required |
| `append` | bool, optional | append output to existing output or overwrite, defaults to False | `False` |

Source code in neurolib/models/model.py
```python
def storeOutputsAndStates(self, t, variables, append=False):
    """Takes the simulated variables of the integration and stores them in the
    appropriate model output and state objects.

    :param t: time vector
    :type t: list
    :param variables: variables from time integration
    :type variables: numpy.ndarray
    :param append: append output to existing output or overwrite, defaults to False
    :type append: bool, optional
    """
    # save time array
    self.setOutput("t", t, append=append, removeICs=True)
    self.setStateVariables("t", t)
    # save outputs
    for svn, sv in zip(self.state_vars, variables):
        if svn in self.output_vars:
            self.setOutput(svn, sv, append=append, removeICs=True)
            self.setStateVariables(svn, sv)
```
Converts a group of outputs to xarray. Output group needs to contain an element that starts with the letter \"t\" or it will not recognize any time axis.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `group` | str | Output group name, example: "BOLD". Leave empty for top group. | `''` |

Source code in neurolib/models/model.py
```python
def xr(self, group=""):
    """Converts a group of outputs to xarray. Output group needs to contain an
    element that starts with the letter "t" or it will not recognize any time axis.

    :param group: Output group name, example: "BOLD". Leave empty for top group.
    :type group: str
    """
    assert isinstance(group, str), "Group name must be a string."
    # take all outputs of one group: disregard all dictionaries because they are subgroups
    outputDict = self.getOutputs(group)
    # make sure that there is a time array
    timeDictKey = ""
    if "t" in outputDict:
        timeDictKey = "t"
    else:
        for k in outputDict:
            if k.startswith("t"):
                timeDictKey = k
                logging.info(f"Assuming {k} to be the time axis.")
                break
    assert len(timeDictKey) > 0, f"No time array found (starting with t) in output group {group}."
    t = outputDict[timeDictKey].copy()
    del outputDict[timeDictKey]
    outputs = []
    outputNames = []
    for key, value in outputDict.items():
        outputNames.append(key)
        outputs.append(value)

    nNodes = outputs[0].shape[0]
    nodes = list(range(nNodes))
    # stack all outputs into one array with a new leading "output" dimension
    allOutputsStacked = np.stack(outputs)
    result = xr.DataArray(allOutputsStacked, coords=[outputNames, nodes, t], dims=["output", "space", "time"])
    return result
```
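The resulting `DataArray` is indexed as `(output, space, time)`. The stacking step can be sketched with plain numpy (the output names below are made up for illustration; xarray itself is omitted):

```python
import numpy as np

# two fake outputs for 3 nodes and 5 time steps (hypothetical names)
outputs = {
    "rate_exc": np.zeros((3, 5)),
    "rate_inh": np.ones((3, 5)),
}

# same stacking as in `xr()`: each output becomes one slice along a
# new leading "output" dimension, so axes are (output, space, time)
stacked = np.stack(list(outputs.values()))
print(stacked.shape)  # → (2, 3, 5)
```

This is why all outputs in a group must share the same `(nodes, time)` shape: `np.stack` requires identically shaped arrays.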
Model parameters in neurolib are stored in `params`, a dictionary-like object that is one of a model's attributes. Changing parameters is straightforward:
```python
from neurolib.models.aln import ALNModel  # Import the model

model = ALNModel()  # Create an instance

model.params['duration'] = 10 * 1000  # in ms
model.run()  # Run it
```
Parameters are `dotdict` objects that can also be accessed using the simpler syntax `model.params.parameter_name = 123` (see Collections).
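A minimal sketch of how such a dot-accessible dictionary can be built (a stand-in for illustration, not neurolib's actual `dotdict` implementation):

```python
class DotDict(dict):
    """Dictionary whose entries can also be read and written as attributes."""

    __getattr__ = dict.__getitem__  # d.key      -> d["key"]
    __setattr__ = dict.__setitem__  # d.key = v  -> d["key"] = v
    __delattr__ = dict.__delitem__  # del d.key  -> del d["key"]

params = DotDict({"duration": 2000})
params.duration = 10 * 1000    # attribute-style write
print(params["duration"])      # → 10000, same entry as key-style access
```

Both access styles hit the same underlying dict entry, which is why `model.params['duration']` and `model.params.duration` are interchangeable.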
The default parameters of a model are defined in loadDefaultParams.py within each model's directory. This function is called by the model.py file upon initialisation and returns all necessary parameters of the model.
Below is an example function that prepares the structural connectivity matrices Cmat and Dmat, all parameters of the model, and its initial values.
```python
def loadDefaultParams(Cmat=None, Dmat=None, seed=None):
    """Load default parameters for a model

    :param Cmat: Structural connectivity matrix (adjacency matrix) of coupling strengths, will be normalized to 1. If not given, then a single node simulation will be assumed, defaults to None
    :type Cmat: numpy.ndarray, optional
    :param Dmat: Fiber length matrix, will be used for computing the delay matrix together with the signal transmission speed parameter `signalV`, defaults to None
    :type Dmat: numpy.ndarray, optional
    :param seed: Seed for the random number generator, defaults to None
    :type seed: int, optional

    :return: A dictionary with the default parameters of the model
    :rtype: dict
    """

    params = dotdict({})

    ### runtime parameters
    params.dt = 0.1  # ms 0.1ms is reasonable
    params.duration = 2000  # Simulation duration (ms)
    np.random.seed(seed)  # seed for RNG of noise and ICs
    # set seed to 0 if None, pypet will complain otherwise
    params.seed = seed or 0

    # make sure that seed=0 remains None
    if seed == 0:
        seed = None

    # ------------------------------------------------------------------------
    # global whole-brain network parameters
    # ------------------------------------------------------------------------

    # the coupling parameter determines how nodes are coupled.
    # "diffusive" for diffusive coupling, "additive" for additive coupling
    params.coupling = "diffusive"

    params.signalV = 20.0
    params.K_gl = 0.6  # global coupling strength

    if Cmat is None:
        params.N = 1
        params.Cmat = np.zeros((1, 1))
        params.lengthMat = np.zeros((1, 1))
    else:
        params.Cmat = Cmat.copy()  # coupling matrix
        np.fill_diagonal(params.Cmat, 0)  # no self connections
        params.N = len(params.Cmat)  # number of nodes
        params.lengthMat = Dmat

    # ------------------------------------------------------------------------
    # local node parameters
    # ------------------------------------------------------------------------

    # external input parameters:
    params.tau_ou = 5.0  # ms Timescale of the Ornstein-Uhlenbeck noise process
    params.sigma_ou = 0.0  # mV/ms/sqrt(ms) noise intensity
    params.x_ou_mean = 0.0  # mV/ms (OU process) [0-5]
    params.y_ou_mean = 0.0  # mV/ms (OU process) [0-5]

    # neural mass model parameters
    params.a = 0.25  # Hopf bifurcation parameter
    params.w = 0.2  # Oscillator frequency, 32 Hz at w = 0.2

    # ------------------------------------------------------------------------

    # initial values of the state variables
    params.xs_init = 0.5 * np.random.uniform(-1, 1, (params.N, 1))
    params.ys_init = 0.5 * np.random.uniform(-1, 1, (params.N, 1))

    # Ornstein-Uhlenbeck noise state variables
    params.x_ou = np.zeros((params.N,))
    params.y_ou = np.zeros((params.N,))

    # values of the external inputs
    params.x_ext = np.zeros((params.N,))
    params.y_ext = np.zeros((params.N,))

    return params
```
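The connectivity handling above can be tried in isolation. A short sketch of the same `Cmat` preparation (a standalone helper for illustration, assuming only numpy):

```python
import numpy as np

def prepare_connectivity(Cmat=None):
    """Mirror the Cmat handling in loadDefaultParams: a missing matrix means
    a single-node simulation; otherwise self-connections are zeroed out and
    the number of nodes is taken from the matrix size."""
    if Cmat is None:
        return 1, np.zeros((1, 1))  # single node, no coupling
    Cmat = Cmat.copy()              # don't modify the caller's matrix
    np.fill_diagonal(Cmat, 0)       # no self connections
    return len(Cmat), Cmat

N, C = prepare_connectivity(np.ones((3, 3)))
print(N)        # → 3
print(C[0, 0])  # → 0.0, diagonal removed
```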
Evolutionary parameter optimization. This class helps you optimize any function or model using an evolutionary algorithm. It uses the package deap and supports its built-in mating and selection functions as well as custom ones.
Source code in neurolib/optimize/evolution/evolution.py
class Evolution:\n\"\"\"Evolutionary parameter optimization. This class helps you to optimize any function or model using an evolutionary algorithm.\n It uses the package `deap` and supports its builtin mating and selection functions as well as custom ones.\n \"\"\"\n\n def __init__(\n self,\n evalFunction,\n parameterSpace,\n weightList=None,\n model=None,\n filename=\"evolution.hdf\",\n ncores=None,\n POP_INIT_SIZE=100,\n POP_SIZE=20,\n NGEN=10,\n algorithm=\"adaptive\",\n matingOperator=None,\n MATE_P=None,\n mutationOperator=None,\n MUTATE_P=None,\n selectionOperator=None,\n SELECT_P=None,\n parentSelectionOperator=None,\n PARENT_SELECT_P=None,\n individualGenerator=None,\n IND_GENERATOR_P=None,\n ):\n\"\"\"Initialize evolutionary optimization.\n :param evalFunction: Evaluation function of a run that provides a fitness vector and simulation outputs\n :type evalFunction: function\n :param parameterSpace: Parameter space to run evolution in.\n :type parameterSpace: `neurolib.utils.parameterSpace.ParameterSpace`\n :param weightList: List of floats that defines the dimensionality of the fitness vector returned from evalFunction and the weights of each component for multiobjective optimization (positive = maximize, negative = minimize). 
If not given, then a single positive weight will be used, defaults to None\n :type weightList: list[float], optional\n :param model: Model to simulate, defaults to None\n :type model: `neurolib.models.model.Model`, optional\n\n :param filename: HDF file to store all results in, defaults to \"evolution.hdf\"\n :type filename: str, optional\n :param ncores: Number of cores to simulate on (max cores default), defaults to None\n :type ncores: int, optional\n\n :param POP_INIT_SIZE: Size of first population to initialize evolution with (random, uniformly distributed), defaults to 100\n :type POP_INIT_SIZE: int, optional\n :param POP_SIZE: Size of the population during evolution, defaults to 20\n :type POP_SIZE: int, optional\n :param NGEN: Numbers of generations to evaluate, defaults to 10\n :type NGEN: int, optional\n\n :param matingOperator: Custom mating operator, defaults to deap.tools.cxBlend\n :type matingOperator: deap operator, optional\n :param MATE_P: Mating operator keyword arguments (for the default crossover operator cxBlend, this defaults `alpha` = 0.5)\n :type MATE_P: dict, optional\n\n :param mutationOperator: Custom mutation operator, defaults to du.gaussianAdaptiveMutation_nStepSizes\n :type mutationOperator: deap operator, optional\n :param MUTATE_P: Mutation operator keyword arguments\n :type MUTATE_P: dict, optional\n\n :param selectionOperator: Custom selection operator, defaults to du.selBest_multiObj\n :type selectionOperator: deap operator, optional\n :param SELECT_P: Selection operator keyword arguments\n :type SELECT_P: dict, optional\n\n :param parentSelectionOperator: Operator for parent selection, defaults to du.selRank\n :param PARENT_SELECT_P: Parent selection operator keyword arguments (for the default operator selRank, this defaults to `s` = 1.5 in Eiben&Smith p.81)\n :type PARENT_SELECT_P: dict, optional\n\n :param individualGenerator: Function to generate initial individuals, defaults to du.randomParametersAdaptive\n \"\"\"\n\n if 
weightList is None:\n logging.info(\"weightList not set, assuming single fitness value to be maximized.\")\n weightList = [1.0]\n\n trajectoryName = \"results\" + datetime.datetime.now().strftime(\"-%Y-%m-%d-%HH-%MM-%SS\")\n logging.info(f\"Trajectory Name: {trajectoryName}\")\n self.HDF_FILE = os.path.join(paths.HDF_DIR, filename)\n trajectoryFileName = self.HDF_FILE\n\n logging.info(\"Storing data to: {}\".format(trajectoryFileName))\n logging.info(\"Trajectory Name: {}\".format(trajectoryName))\n if ncores is None:\n ncores = multiprocessing.cpu_count()\n logging.info(\"Number of cores: {}\".format(ncores))\n\n # initialize pypet environment\n # env = pp.Environment(trajectory=trajectoryName, filename=trajectoryFileName)\n env = pp.Environment(\n trajectory=trajectoryName,\n filename=trajectoryFileName,\n use_pool=False,\n multiproc=True,\n ncores=ncores,\n complevel=9,\n log_config=paths.PYPET_LOGGING_CONFIG,\n )\n\n # Get the trajectory from the environment\n traj = env.traj\n # Sanity check if everything went ok\n assert (\n trajectoryName == traj.v_name\n ), f\"Pypet trajectory has a different name than trajectoryName {trajectoryName}\"\n # trajectoryName = traj.v_name\n\n self.model = model\n self.evalFunction = evalFunction\n self.weightList = weightList\n\n self.NGEN = NGEN\n assert POP_SIZE % 2 == 0, \"Please chose an even number for POP_SIZE!\"\n self.POP_SIZE = POP_SIZE\n assert POP_INIT_SIZE % 2 == 0, \"Please chose an even number for POP_INIT_SIZE!\"\n self.POP_INIT_SIZE = POP_INIT_SIZE\n self.ncores = ncores\n\n # comment string for storing info\n self.comments = \"no comments\"\n\n self.traj = env.traj\n self.env = env\n self.trajectoryName = trajectoryName\n self.trajectoryFileName = trajectoryFileName\n\n self._initialPopulationSimulated = False\n\n # -------- settings\n self.verbose = False\n self.verbose_plotting = True\n self.plotColor = \"C0\"\n\n # -------- simulation\n self.parameterSpace = parameterSpace\n self.ParametersInterval = 
self.parameterSpace.named_tuple_constructor\n self.paramInterval = self.parameterSpace.named_tuple\n\n self.toolbox = deap.base.Toolbox()\n\n # -------- algorithms\n if algorithm == \"adaptive\":\n logging.info(f\"Evolution: Using algorithm: {algorithm}\")\n self.matingOperator = tools.cxBlend\n self.MATE_P = {\"alpha\": 0.5} or MATE_P\n self.mutationOperator = du.gaussianAdaptiveMutation_nStepSizes\n self.selectionOperator = du.selBest_multiObj\n self.parentSelectionOperator = du.selRank\n self.PARENT_SELECT_P = {\"s\": 1.5} or PARENT_SELECT_P\n self.individualGenerator = du.randomParametersAdaptive\n\n elif algorithm == \"nsga2\":\n logging.info(f\"Evolution: Using algorithm: {algorithm}\")\n self.matingOperator = tools.cxSimulatedBinaryBounded\n self.MATE_P = {\n \"low\": self.parameterSpace.lowerBound,\n \"up\": self.parameterSpace.upperBound,\n \"eta\": 20.0,\n } or MATE_P\n self.mutationOperator = tools.mutPolynomialBounded\n self.MUTATE_P = {\n \"low\": self.parameterSpace.lowerBound,\n \"up\": self.parameterSpace.upperBound,\n \"eta\": 20.0,\n \"indpb\": 1.0 / len(self.weightList),\n } or MUTATE_P\n self.selectionOperator = tools.selNSGA2\n self.parentSelectionOperator = tools.selTournamentDCD\n self.individualGenerator = du.randomParameters\n\n else:\n raise ValueError(\"Evolution: algorithm must be one of the following: ['adaptive', 'nsga2']\")\n\n # if the operators are set manually, then overwrite them\n self.matingOperator = self.matingOperator if hasattr(self, \"matingOperator\") else matingOperator\n self.mutationOperator = self.mutationOperator if hasattr(self, \"mutationOperator\") else mutationOperator\n self.selectionOperator = self.selectionOperator if hasattr(self, \"selectionOperator\") else selectionOperator\n self.parentSelectionOperator = (\n self.parentSelectionOperator if hasattr(self, \"parentSelectionOperator\") else parentSelectionOperator\n )\n self.individualGenerator = (\n self.individualGenerator if hasattr(self, 
\"individualGenerator\") else individualGenerator\n )\n\n # let's also make sure that the parameters are set correctly\n self.MATE_P = self.MATE_P if hasattr(self, \"MATE_P\") else {}\n self.PARENT_SELECT_P = self.PARENT_SELECT_P if hasattr(self, \"PARENT_SELECT_P\") else {}\n self.MUTATE_P = self.MUTATE_P if hasattr(self, \"MUTATE_P\") else {}\n self.SELECT_P = self.SELECT_P if hasattr(self, \"SELECT_P\") else {}\n\n self._initDEAP(\n self.toolbox,\n self.env,\n self.paramInterval,\n self.evalFunction,\n weightList=self.weightList,\n matingOperator=self.matingOperator,\n mutationOperator=self.mutationOperator,\n selectionOperator=self.selectionOperator,\n parentSelectionOperator=self.parentSelectionOperator,\n individualGenerator=self.individualGenerator,\n )\n\n # set up pypet trajectory\n self._initPypetTrajectory(\n self.traj,\n self.paramInterval,\n self.POP_SIZE,\n self.NGEN,\n self.model,\n )\n\n # population history: dict of all valid individuals per generation\n self.history = {}\n\n # initialize population\n self.evaluationCounter = 0\n self.last_id = 0\n\n def run(self, verbose=False, verbose_plotting=True):\n\"\"\"Run the evolution or continue previous evolution. 
If evolution was not initialized first\n using `runInitial()`, this will be done.\n\n :param verbose: Print and plot state of evolution during run, defaults to False\n :type verbose: bool, optional\n \"\"\"\n\n self.verbose = verbose\n self.verbose_plotting = verbose_plotting\n if not self._initialPopulationSimulated:\n self.runInitial()\n\n self.runEvolution()\n\n def getIndividualFromTraj(self, traj):\n\"\"\"Get individual from pypet trajectory\n\n :param traj: Pypet trajectory\n :type traj: `pypet.trajectory.Trajectory`\n :return: Individual (`DEAP` type)\n :rtype: `deap.creator.Individual`\n \"\"\"\n # either pass an individual or a pypet trajectory with the attribute individual\n if type(traj).__name__ == \"Individual\":\n individual = traj\n else:\n individual = traj.individual\n ind_id = traj.id\n individual = [p for p in self.pop if p.id == ind_id]\n if len(individual) > 0:\n individual = individual[0]\n return individual\n\n def getModelFromTraj(self, traj):\n\"\"\"Return the appropriate model with parameters for this individual\n :params traj: Pypet trajectory with individual (traj.individual) or directly a deap.Individual\n\n :returns model: Model with the parameters of this individual.\n\n :param traj: Pypet trajectory with individual (traj.individual) or directly a deap.Individual\n :type traj: `pypet.trajectory.Trajectory`\n :return: Model with the parameters of this individual.\n :rtype: `neurolib.models.model.Model`\n \"\"\"\n model = self.model\n # resolve star notation - MultiModel\n individual_params = self.individualToDict(self.getIndividualFromTraj(traj))\n if self.parameterSpace.star:\n individual_params = unwrap_star_dotdict(individual_params, self.model, replaced_dict=BACKWARD_REPLACE)\n model.params.update(individual_params)\n return model\n\n def getIndividualFromHistory(self, id):\n\"\"\"Searches the entire evolution history for an individual with a specific id and returns it.\n\n :param id: Individual id\n :type id: int\n :return: 
Individual (`DEAP` type)\n :rtype: `deap.creator.Individual`\n \"\"\"\n for key, value in self.history.items():\n for p in value:\n if p.id == id:\n return p\n logging.warning(f\"No individual with id={id} found. Returning `None`\")\n return None\n\n def individualToDict(self, individual):\n\"\"\"Convert an individual to a parameter dictionary.\n\n :param individual: Individual (`DEAP` type)\n :type individual: `deap.creator.Individual`\n :return: Parameter dictionary of this individual\n :rtype: dict\n \"\"\"\n return self.ParametersInterval(*(individual[: len(self.paramInterval)]))._asdict().copy()\n\n def _initPypetTrajectory(self, traj, paramInterval, POP_SIZE, NGEN, model):\n\"\"\"Initializes pypet trajectory and store all simulation parameters for later analysis.\n\n :param traj: Pypet trajectory (must be already initialized!)\n :type traj: `pypet.trajectory.Trajectory`\n :param paramInterval: Parameter space, from ParameterSpace class\n :type paramInterval: parameterSpace.named_tuple\n :param POP_SIZE: Population size\n :type POP_SIZE: int\n :param MATE_P: Crossover parameter\n :type MATE_P: float\n :param NGEN: Number of generations\n :type NGEN: int\n :param model: Model to store the default parameters of\n :type model: `neurolib.models.model.Model`\n \"\"\"\n # Initialize pypet trajectory and add all simulation parameters\n traj.f_add_parameter(\"popsize\", POP_SIZE, comment=\"Population size\") #\n traj.f_add_parameter(\"NGEN\", NGEN, comment=\"Number of generations\")\n\n # Placeholders for individuals and results that are about to be explored\n traj.f_add_parameter(\"generation\", 0, comment=\"Current generation\")\n\n traj.f_add_result(\"scores\", [], comment=\"Score of all individuals for each generation\")\n traj.f_add_result_group(\"evolution\", comment=\"Contains results for each generation\")\n traj.f_add_result_group(\"outputs\", comment=\"Contains simulation results\")\n\n # TODO: save evolution parameters and operators as well, MATE_P, 
MUTATE_P, etc..\n\n # if a model was given, save its parameters\n # NOTE: Convert model.params to dict() since it is a dotdict() and pypet doesn't like that\n if model is not None:\n params_dict = dict(model.params)\n # replace all None with zeros, pypet doesn't like None\n for key, value in params_dict.items():\n if value is None:\n params_dict[key] = \"None\"\n traj.f_add_result(\"params\", params_dict, comment=\"Default parameters\")\n\n # todo: initialize this after individuals have been defined!\n traj.f_add_parameter(\"id\", 0, comment=\"Index of individual\")\n traj.f_add_parameter(\"ind_len\", 20, comment=\"Length of individual\")\n traj.f_add_derived_parameter(\n \"individual\",\n [0 for x in range(traj.ind_len)],\n \"An indivudal of the population\",\n )\n\n def _initDEAP(\n self,\n toolbox,\n pypetEnvironment,\n paramInterval,\n evalFunction,\n weightList,\n matingOperator,\n mutationOperator,\n selectionOperator,\n parentSelectionOperator,\n individualGenerator,\n ):\n\"\"\"Initializes DEAP and registers all methods to the deap.toolbox\n\n :param toolbox: Deap toolbox\n :type toolbox: deap.base.Toolbox\n :param pypetEnvironment: Pypet environment (must be initialized first!)\n :type pypetEnvironment: [type]\n :param paramInterval: Parameter space, from ParameterSpace class\n :type paramInterval: parameterSpace.named_tuple\n :param evalFunction: Evaluation function\n :type evalFunction: function\n :param weightList: List of weiths for multiobjective optimization\n :type weightList: list[float]\n :param matingOperator: Mating function (crossover)\n :type matingOperator: function\n :param selectionOperator: Parent selection function\n :type selectionOperator: function\n :param individualGenerator: Function that generates individuals\n \"\"\"\n # ------------- register everything in deap\n deap.creator.create(\"FitnessMulti\", deap.base.Fitness, weights=tuple(weightList))\n deap.creator.create(\"Individual\", list, fitness=deap.creator.FitnessMulti)\n\n # 
initially, each individual has randomized genes\n # need to create a lambda funciton because du.generateRandomParams wants an argument but\n # toolbox.register cannot pass an argument to it.\n toolbox.register(\n \"individual\",\n deap.tools.initIterate,\n deap.creator.Individual,\n lambda: individualGenerator(paramInterval),\n )\n logging.info(f\"Evolution: Individual generation: {individualGenerator}\")\n\n toolbox.register(\"population\", deap.tools.initRepeat, list, toolbox.individual)\n toolbox.register(\"map\", pypetEnvironment.run)\n toolbox.register(\"run_map\", pypetEnvironment.run_map)\n\n def _worker(arg, fn):\n\"\"\"\n Wrapper to get original exception from inner, `fn`, function.\n \"\"\"\n try:\n return fn(arg)\n except Exception as e:\n logging.exception(e)\n raise\n\n toolbox.register(\"evaluate\", partial(_worker, fn=evalFunction))\n\n # Operator registering\n\n toolbox.register(\"mate\", matingOperator)\n logging.info(f\"Evolution: Mating operator: {matingOperator}\")\n\n toolbox.register(\"mutate\", mutationOperator)\n logging.info(f\"Evolution: Mutation operator: {mutationOperator}\")\n\n toolbox.register(\"selBest\", du.selBest_multiObj)\n toolbox.register(\"selectParents\", parentSelectionOperator)\n logging.info(f\"Evolution: Parent selection: {parentSelectionOperator}\")\n toolbox.register(\"select\", selectionOperator)\n logging.info(f\"Evolution: Selection operator: {selectionOperator}\")\n\n def _evalPopulationUsingPypet(self, traj, toolbox, pop, gIdx):\n\"\"\"Evaluate the fitness of the popoulation of the current generation using pypet\n :param traj: Pypet trajectory\n :type traj: `pypet.trajectory.Trajectory`\n :param toolbox: `deap` toolbox\n :type toolbox: deap.base.Toolbox\n :param pop: Population\n :type pop: list\n :param gIdx: Index of the current generation\n :type gIdx: int\n :return: Evaluated population with fitnesses\n :rtype: list\n \"\"\"\n # Add as many explored runs as individuals that need to be evaluated.\n # 
Furthermore, add the individuals as explored parameters.\n # We need to convert them to lists or write our own custom IndividualParameter ;-)\n # Note the second argument to `cartesian_product`:\n # This is for only having the cartesian product\n # between ``generation x (ind_idx AND individual)``, so that every individual has just one\n # unique index within a generation.\n\n # this function is necessary for the NSGA-2 algorithms because\n # some operators return np.float64 instead of float and pypet\n # does not like individuals with mixed types... sigh.\n def _cleanIndividual(ind):\n return [float(i) for i in ind]\n\n traj.f_expand(\n pp.cartesian_product(\n {\n \"generation\": [gIdx],\n \"id\": [x.id for x in pop],\n \"individual\": [list(_cleanIndividual(x)) for x in pop],\n },\n [(\"id\", \"individual\"), \"generation\"],\n )\n ) # the current generation # unique id of each individual\n\n # increment the evaluationCounter\n self.evaluationCounter += len(pop)\n\n # run simulations for one generation\n evolutionResult = toolbox.map(toolbox.evaluate)\n\n # This error can have different reasons but is most likely\n # due to multiprocessing problems. 
        # One possibility is that your evaluation
        # function is not pickleable or that it returns an object that is not pickleable.
        assert len(evolutionResult) > 0, "No results returned from simulations."

        for idx, result in enumerate(evolutionResult):
            runIndex, packedReturnFromEvalFunction = result

            # packedReturnFromEvalFunction is the return from the evaluation function
            # it has length two, the first is the fitness, second is the model output
            assert (
                len(packedReturnFromEvalFunction) == 2
            ), "Evaluation function must return tuple with shape (fitness, output_data)"

            fitnessesResult, returnedOutputs = packedReturnFromEvalFunction

            # store simulation outputs
            pop[idx].outputs = returnedOutputs

            # store fitness values
            pop[idx].fitness.values = fitnessesResult

            # compute score
            pop[idx].fitness.score = np.ma.masked_invalid(pop[idx].fitness.wvalues).sum() / (
                len(pop[idx].fitness.wvalues)
            )
        return pop

    def getValidPopulation(self, pop=None):
        """Returns a list of the valid population.

        :params pop: Population to check, defaults to self.pop
        :type pop: deap population
        :return: List of valid population
        :rtype: list
        """
        pop = pop or self.pop
        return [p for p in pop if np.isfinite(p.fitness.values).all()]

    def getInvalidPopulation(self, pop=None):
        """Returns a list of the invalid population.

        :params pop: Population to check, defaults to self.pop
        :type pop: deap population
        :return: List of invalid population
        :rtype: list
        """
        pop = pop or self.pop
        return [p for p in pop if not np.isfinite(p.fitness.values).all()]

    def _tagPopulation(self, pop):
        """Take a fresh population and add id's and attributes such as parameters that we can use later

        :param pop: Fresh population
        :type pop: list
        :return: Population with tags
        :rtype: list
        """
        for i, ind in enumerate(pop):
            assert not hasattr(ind, "id"), "Individual has an id already, will not overwrite it!"
            ind.id = self.last_id
            ind.gIdx = self.gIdx
            ind.simulation_stored = False
            ind_dict = self.individualToDict(ind)
            for key, value in ind_dict.items():
                # set the parameters as attributes for easy access
                setattr(ind, key, value)
            ind.params = ind_dict
            # increment id counter
            self.last_id += 1
        return pop

    def runInitial(self):
        """Run the first round of evolution with the initial population of size `POP_INIT_SIZE`
        and select the best `POP_SIZE` for the following evolution. This needs to be run before `runEvolution()`
        """
        self._t_start_initial_population = datetime.datetime.now()

        # Create the initial population
        self.pop = self.toolbox.population(n=self.POP_INIT_SIZE)

        ### Evaluate the initial population
        logging.info("Evaluating initial population of size %i ..." % len(self.pop))
        self.gIdx = 0  # set generation index
        self.pop = self._tagPopulation(self.pop)

        # evaluate
        self.pop = self._evalPopulationUsingPypet(self.traj, self.toolbox, self.pop, self.gIdx)

        if self.verbose:
            eu.printParamDist(self.pop, self.paramInterval, self.gIdx)

        # save all simulation data to pypet
        self.pop = eu.saveToPypet(self.traj, self.pop, self.gIdx)

        # reduce initial population to popsize
        self.pop = self.toolbox.select(self.pop, k=self.traj.popsize, **self.SELECT_P)

        self._initialPopulationSimulated = True

        # populate history for tracking
        self.history[self.gIdx] = self.pop  # self.getValidPopulation(self.pop)

        self._t_end_initial_population = datetime.datetime.now()

    def runEvolution(self):
        """Run the evolutionary optimization process for `NGEN` generations."""
        # Start evolution
        logging.info("Start of evolution")
        self._t_start_evolution = datetime.datetime.now()
        for self.gIdx in range(self.gIdx + 1, self.gIdx + self.traj.NGEN):
            # ------- Weed out the invalid individuals and replace them by random new individuals -------- #
            validpop = self.getValidPopulation(self.pop)
            # replace invalid individuals
            invalidpop = self.getInvalidPopulation(self.pop)

            logging.info("Replacing {} invalid individuals.".format(len(invalidpop)))
            newpop = self.toolbox.population(n=len(invalidpop))
            newpop = self._tagPopulation(newpop)

            # ------- Create the next generation by crossover and mutation -------- #
            ### Select parents using rank selection and clone them ###
            offspring = list(
                map(
                    self.toolbox.clone,
                    self.toolbox.selectParents(self.pop, self.POP_SIZE, **self.PARENT_SELECT_P),
                )
            )

            ##### cross-over ####
            for i in range(1, len(offspring), 2):
                offspring[i - 1], offspring[i] = self.toolbox.mate(offspring[i - 1], offspring[i], **self.MATE_P)
                # delete fitness inherited from parents
                del offspring[i - 1].fitness.values, offspring[i].fitness.values
                del offspring[i - 1].fitness.wvalues, offspring[i].fitness.wvalues

                # assign parent IDs to new offspring
                offspring[i - 1].parentIds = offspring[i - 1].id, offspring[i].id
                offspring[i].parentIds = offspring[i - 1].id, offspring[i].id

                # delete id originally set from parents, needs to be deleted here!
                # will be set later in _tagPopulation()
                del offspring[i - 1].id, offspring[i].id

            ##### Mutation ####
            # Apply mutation
            du.mutateUntilValid(offspring, self.paramInterval, self.toolbox, MUTATE_P=self.MUTATE_P)

            offspring = self._tagPopulation(offspring)

            # ------- Evaluate next generation -------- #

            self.pop = offspring + newpop
            self._evalPopulationUsingPypet(self.traj, self.toolbox, offspring + newpop, self.gIdx)

            # log individuals
            self.history[self.gIdx] = validpop + offspring + newpop  # self.getValidPopulation(self.pop)

            # ------- Select surviving population -------- #

            # select next generation
            self.pop = self.toolbox.select(validpop + offspring + newpop, k=self.traj.popsize, **self.SELECT_P)

            # ------- END OF ROUND -------

            # save all simulation data to pypet
            self.pop = eu.saveToPypet(self.traj, self.pop, self.gIdx)

            # select best individual for logging
            self.best_ind = self.toolbox.selBest(self.pop, 1)[0]

            # text log
            next_print = print if self.verbose else logging.info
            next_print("----------- Generation %i -----------" % self.gIdx)
            next_print("Best individual is {}".format(self.best_ind))
            next_print("Score: {}".format(self.best_ind.fitness.score))
            next_print("Fitness: {}".format(self.best_ind.fitness.values))
            next_print("--- Population statistics ---")

            # verbose output
            if self.verbose:
                self.info(plot=self.verbose_plotting, info=True)

        logging.info("--- End of evolution ---")
        logging.info("Best individual is %s, %s" % (self.best_ind, self.best_ind.fitness.values))
        logging.info("--- End of evolution ---")

        self.traj.f_store()  # We switched off automatic storing, so we need to store manually
        self._t_end_evolution = datetime.datetime.now()

        self._buildEvolutionTree()

    def _buildEvolutionTree(self):
        """Builds a genealogy tree that is networkx compatible.

        Plot the tree using:

        import matplotlib.pyplot as plt
        import networkx as nx
        from networkx.drawing.nx_pydot import graphviz_layout

        G = nx.DiGraph(evolution.tree)
        G = G.reverse()  # Make the graph top-down
        pos = graphviz_layout(G, prog='dot')
        plt.figure(figsize=(8, 8))
        nx.draw(G, pos, node_size=50, alpha=0.5, node_color=list(evolution.genx.values()), with_labels=False)
        plt.show()
        """
        self.tree = dict()
        self.id_genx = dict()
        self.id_score = dict()

        for gen, pop in self.history.items():
            for p in pop:
                self.tree[p.id] = p.parentIds if hasattr(p, "parentIds") else ()
                self.id_genx[p.id] = p.gIdx
                self.id_score[p.id] = p.fitness.score

    def info(self, plot=True, bestN=5, info=True, reverse=False):
        """Print and plot information about the evolution and the current population

        :param plot: plot a plot using `matplotlib`, defaults to True
        :type plot: bool, optional
        :param bestN: Print summary of `bestN` best individuals, defaults to 5
        :type bestN: int, optional
        :param info: Print information about the evolution environment
        :type info: bool, optional
        """
        if info:
            eu.printEvolutionInfo(self)
        validPop = self.getValidPopulation(self.pop)
        scores = self.getScores()
        # Text output
        print("--- Info summary ---")
        print("Valid: {}".format(len(validPop)))
        print("Mean score (weighted fitness): {:.2}".format(np.mean(scores)))
        eu.printParamDist(self.pop, self.paramInterval, self.gIdx)
        print("--------------------")
        print(f"Best {bestN} individuals:")
        eu.printIndividuals(self.toolbox.selBest(self.pop, bestN), self.paramInterval)
        print("--------------------")
        # Plotting evolutionary progress
        if plot:
            # hack: during the evolution we need to use reverse=True
            # after the evolution (with evolution.info()), we need False
            try:
                self.plotProgress(reverse=reverse)
            except:
                logging.warning("Could not plot progress, is this a previously saved simulation?")
            eu.plotPopulation(
                self,
                plotScattermatrix=True,
                save_plots=self.trajectoryName,
                color=self.plotColor,
            )

    def plotProgress(self, reverse=False):
        """Plots progress of fitnesses of current evolution run"""
        eu.plotProgress(self, reverse=reverse)

    def saveEvolution(self, fname=None):
        """Save evolution to file using dill.

        :param fname: Filename, defaults to a path in ./data/
        :type fname: str, optional
        """
        import dill

        fname = fname or os.path.join("data/", "evolution-" + self.trajectoryName + ".dill")
        dill.dump(self, open(fname, "wb"))
        logging.info(f"Saving evolution to {fname}")

    def loadEvolution(self, fname):
        """Load evolution from previously saved simulations.

        Example usage:
        ```
        evaluateSimulation = lambda x: x  # the function can be omitted, that's why we define a lambda here
        pars = ParameterSpace(['a', 'b'],  # should be same as previously saved evolution
                              [[0.0, 4.0], [0.0, 5.0]])
        evolution = Evolution(evaluateSimulation, pars, weightList=[1.0])
        evolution = evolution.loadEvolution("data/evolution-results-2020-05-15-00H-24M-48S.dill")
        ```

        :param fname: Filename, defaults to a path in ./data/
        :type fname: str
        :return: Evolution
        :rtype: self
        """
        import dill

        evolution = dill.load(open(fname, "rb"))
        # parameter space is not saved correctly in dill, don't know why
        # that is why we recreate it using the values of
        # the parameter space in the dill
        pars = ParameterSpace(
            evolution.parameterSpace.parameterNames,
            evolution.parameterSpace.parameterValues,
        )

        evolution.parameterSpace = pars
        evolution.paramInterval = evolution.parameterSpace.named_tuple
        evolution.ParametersInterval = evolution.parameterSpace.named_tuple_constructor
        return evolution

    def _outputToDf(self, pop, df):
        """Loads outputs dictionary from evolution from the .outputs attribute
        and writes data into a dataframe.

        :param pop: Population of which to get outputs from.
        :type pop: list
        :param df: Dataframe to which outputs are written
        :type df: pandas.core.frame.DataFrame
        :return: Dataframe with outputs
        :rtype: pandas.core.frame.DataFrame
        """
        # defines which variable types will be saved in the results dataframe
        SUPPORTED_TYPES = (float, int, np.ndarray, list)
        SCALAR_TYPES = (float, int)
        ARRAY_TYPES = (np.ndarray, list)

        assert len(pop) == len(df), "Dataframe and population do not have same length."
        nan_value = np.nan
        # load outputs into dataframe
        for i, p in enumerate(pop):
            if hasattr(p, "outputs"):
                for key, value in p.outputs.items():
                    # only save floats, ints and arrays
                    if isinstance(value, SUPPORTED_TYPES):
                        # save 1-dim arrays
                        if isinstance(value, ARRAY_TYPES):
                            # to save a numpy array, convert column to object type
                            if key not in df:
                                df[key] = None
                            df[key] = df[key].astype(object)
                            df.at[i, key] = value
                        elif isinstance(value, SCALAR_TYPES):
                            # save numbers
                            df.loc[i, key] = value
                    else:
                        df.loc[i, key] = nan_value
        return df

    def _dropDuplicatesFromDf(self, df):
        """Drops duplicates from dfEvolution dataframe.
        Tries vanilla drop_duplicates, which fails if the Dataframe contains
        data objects like numpy.arrays. Tries to drop via key "id" if it fails.

        :param df: Input dataframe with duplicates to drop
        :type df: pandas.core.frame.DataFrame
        :return: Dataframe without duplicates
        :rtype: pandas.core.frame.DataFrame
        """
        try:
            df = df.drop_duplicates()
        except:
            logging.info('Failed to drop_duplicates() without column name. Trying by column "id".')
            try:
                df = df.drop_duplicates(subset="id")
            except:
                logging.warning("Failed to drop_duplicates from dataframe.")
        return df

    def dfPop(self, outputs=False):
        """Returns a `pandas` DataFrame of the current generation's population parameters.
        This object can be further used to easily analyse the population.

        :return: Pandas DataFrame with all individuals and their parameters
        :rtype: `pandas.core.frame.DataFrame`
        """
        # add the current population to the dataframe
        validPop = self.getValidPopulation(self.pop)
        indIds = [p.id for p in validPop]
        popArray = np.array([p[0 : len(self.paramInterval._fields)] for p in validPop]).T

        dfPop = pd.DataFrame(popArray, index=self.parameterSpace.parameterNames).T

        # add more information to the dataframe
        scores = self.getScores()
        dfPop["score"] = scores
        dfPop["id"] = indIds
        dfPop["gen"] = [p.gIdx for p in validPop]

        if outputs:
            dfPop = self._outputToDf(validPop, dfPop)

        # add fitness columns
        # NOTE: when loading an evolution with dill using loadEvolution
        # MultiFitness values disappear and only one is left.
        # See dfEvolution() for a solution using wvalues
        n_fitnesses = len(validPop[0].fitness.values)
        for i in range(n_fitnesses):
            for ip, p in enumerate(validPop):
                column_name = "f" + str(i)
                dfPop.loc[ip, column_name] = p.fitness.values[i]
        return dfPop

    def dfEvolution(self, outputs=False):
        """Returns a `pandas` DataFrame with the individuals of the whole evolution.
        This method can be used after loading an evolution from disk using loadEvolution()

        :return: Pandas DataFrame with all individuals and their parameters
        :rtype: `pandas.core.frame.DataFrame`
        """
        parameters = self.parameterSpace.parameterNames
        allIndividuals = [p for gen, pop in self.history.items() for p in pop]
        popArray = np.array([p[0 : len(self.paramInterval._fields)] for p in allIndividuals]).T
        dfEvolution = pd.DataFrame(popArray, index=parameters).T
        # add more information to the dataframe
        scores = [float(p.fitness.score) for p in allIndividuals]
        indIds = [p.id for p in allIndividuals]
        dfEvolution["score"] = scores
        dfEvolution["id"] = indIds
        dfEvolution["gen"] = [p.gIdx for p in allIndividuals]

        if outputs:
            dfEvolution = self._outputToDf(allIndividuals, dfEvolution)

        # add fitness columns
        # NOTE: have to do this with wvalues and divide by weights later, why?
        # Because after loading the evolution with dill, somehow multiple fitnesses
        # disappear and only the first one is left. However, wvalues still has all
        # fitnesses, and we have access to weightList, so this hack kind of helps
        n_fitnesses = len(self.pop[0].fitness.wvalues)
        for i in range(n_fitnesses):
            for ip, p in enumerate(allIndividuals):
                dfEvolution.loc[ip, f"f{i}"] = p.fitness.wvalues[i] / self.weightList[i]

        # the history keeps all individuals of all generations
        # there can be duplicates (in elitism for example), which we filter
        # out for the dataframe
        dfEvolution = self._dropDuplicatesFromDf(dfEvolution)
        dfEvolution = dfEvolution.reset_index(drop=True)
        return dfEvolution

    def loadResults(self, filename=None, trajectoryName=None):
        """Load results from a hdf file of a previous evolution and store the
        pypet trajectory in `self.traj`

        :param filename: hdf filename of the previous run, defaults to None
        :type filename: str, optional
        :param trajectoryName: Name of the trajectory in the hdf file to load. If not given, the last one will be loaded, defaults to None
        :type trajectoryName: str, optional
        """
        if filename is None:
            filename = self.HDF_FILE
        self.traj = pu.loadPypetTrajectory(filename, trajectoryName)

    def getScores(self):
        """Returns the scores of the current valid population"""
        validPop = self.getValidPopulation(self.pop)
        return np.array([pop.fitness.score for pop in validPop])

    def getScoresDuringEvolution(self, traj=None, drop_first=True, reverse=False):
        """Get the scores of each generation's population.

        :param traj: Pypet trajectory. If not given, the current trajectory is used, defaults to None
        :type traj: `pypet.trajectory.Trajectory`, optional
        :param drop_first: Drop the first (initial) generation. This can be useful because it can have a different size (`POP_INIT_SIZE`) than the succeeding populations (`POP_SIZE`) which can make data handling tricky, defaults to True
        :type drop_first: bool, optional
        :param reverse: Reverse the order of each generation. This is a necessary workaround because loading from an hdf file returns the generations in a reversed order compared to loading each generation from the pypet trajectory in memory, defaults to False
        :type reverse: bool, optional
        :return: Tuple of list of all generations and an array of the scores of all individuals
        :rtype: tuple[list, numpy.ndarray]
        """
        if traj is None:
            traj = self.traj

        generation_names = list(traj.results.evolution.f_to_dict(nested=True).keys())

        if reverse:
            generation_names = generation_names[::-1]
        if drop_first and "gen_000000" in generation_names:
            generation_names.remove("gen_000000")

        npop = len(traj.results.evolution[generation_names[0]].scores)

        gens = []
        all_scores = np.empty((len(generation_names), npop))

        for i, r in enumerate(generation_names):
            gens.append(i)
            scores = traj.results.evolution[r].scores
            all_scores[i] = scores

        if drop_first:
            gens = np.add(gens, 1)

        return gens, all_scores
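The score assigned in `_evalPopulationUsingPypet` above is the weighted fitness averaged over all objectives, with non-finite entries masked out. A pure-Python sketch of the same arithmetic (the `np.ma.masked_invalid` call sums only the finite entries, but the division is by the full vector length):

```python
import math

def score(wvalues):
    # equivalent of np.ma.masked_invalid(wvalues).sum() / len(wvalues):
    # sum only the finite components, but divide by the total number of
    # objectives, so failed (NaN/inf) objectives drag the score down
    return sum(v for v in wvalues if math.isfinite(v)) / len(wvalues)

print(score([1.0, 0.5]))           # 0.75
print(score([1.0, float("nan")]))  # 0.5
```

Dividing by the full length rather than the number of valid entries means an individual cannot improve its score by failing one objective.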
weightList (list[float], optional): List of floats that defines the dimensionality of the fitness vector returned from evalFunction and the weights of each component for multiobjective optimization (positive = maximize, negative = minimize). If not given, a single positive weight is used. Default: None
model (`neurolib.models.model.Model`, optional): Model to simulate. Default: None
filename (str, optional): HDF file to store all results in. Default: "evolution.hdf"
ncores (int, optional): Number of cores to simulate on (all available cores by default). Default: None
POP_INIT_SIZE (int, optional): Size of the first population to initialize the evolution with (random, uniformly distributed). Default: 100
POP_SIZE (int, optional): Size of the population during evolution. Default: 20
NGEN (int, optional): Number of generations to evaluate. Default: 10
matingOperator (deap operator, optional): Custom mating operator, defaults to deap.tools.cxBlend. Default: None
MATE_P (dict, optional): Mating operator keyword arguments (for the default crossover operator cxBlend, alpha defaults to 0.5). Default: None
mutationOperator (deap operator, optional): Custom mutation operator, defaults to du.gaussianAdaptiveMutation_nStepSizes. Default: None
MUTATE_P (dict, optional): Mutation operator keyword arguments. Default: None
selectionOperator (deap operator, optional): Custom selection operator, defaults to du.selBest_multiObj. Default: None
SELECT_P (dict, optional): Selection operator keyword arguments. Default: None
parentSelectionOperator: Operator for parent selection, defaults to du.selRank. Default: None
PARENT_SELECT_P (dict, optional): Parent selection operator keyword arguments (for the default operator selRank, s defaults to 1.5, see Eiben & Smith p. 81). Default: None
individualGenerator: Function to generate initial individuals, defaults to du.randomParametersAdaptive. Default: None

Source code in neurolib/optimize/evolution/evolution.py
def __init__(
    self,
    evalFunction,
    parameterSpace,
    weightList=None,
    model=None,
    filename="evolution.hdf",
    ncores=None,
    POP_INIT_SIZE=100,
    POP_SIZE=20,
    NGEN=10,
    algorithm="adaptive",
    matingOperator=None,
    MATE_P=None,
    mutationOperator=None,
    MUTATE_P=None,
    selectionOperator=None,
    SELECT_P=None,
    parentSelectionOperator=None,
    PARENT_SELECT_P=None,
    individualGenerator=None,
    IND_GENERATOR_P=None,
):
    """Initialize evolutionary optimization.

    :param evalFunction: Evaluation function of a run that provides a fitness vector and simulation outputs
    :type evalFunction: function
    :param parameterSpace: Parameter space to run evolution in.
    :type parameterSpace: `neurolib.utils.parameterSpace.ParameterSpace`
    :param weightList: List of floats that defines the dimensionality of the fitness vector returned from evalFunction and the weights of each component for multiobjective optimization (positive = maximize, negative = minimize). If not given, then a single positive weight will be used, defaults to None
    :type weightList: list[float], optional
    :param model: Model to simulate, defaults to None
    :type model: `neurolib.models.model.Model`, optional

    :param filename: HDF file to store all results in, defaults to "evolution.hdf"
    :type filename: str, optional
    :param ncores: Number of cores to simulate on (max cores default), defaults to None
    :type ncores: int, optional

    :param POP_INIT_SIZE: Size of first population to initialize evolution with (random, uniformly distributed), defaults to 100
    :type POP_INIT_SIZE: int, optional
    :param POP_SIZE: Size of the population during evolution, defaults to 20
    :type POP_SIZE: int, optional
    :param NGEN: Number of generations to evaluate, defaults to 10
    :type NGEN: int, optional

    :param matingOperator: Custom mating operator, defaults to deap.tools.cxBlend
    :type matingOperator: deap operator, optional
    :param MATE_P: Mating operator keyword arguments (for the default crossover operator cxBlend, this defaults `alpha` = 0.5)
    :type MATE_P: dict, optional

    :param mutationOperator: Custom mutation operator, defaults to du.gaussianAdaptiveMutation_nStepSizes
    :type mutationOperator: deap operator, optional
    :param MUTATE_P: Mutation operator keyword arguments
    :type MUTATE_P: dict, optional

    :param selectionOperator: Custom selection operator, defaults to du.selBest_multiObj
    :type selectionOperator: deap operator, optional
    :param SELECT_P: Selection operator keyword arguments
    :type SELECT_P: dict, optional

    :param parentSelectionOperator: Operator for parent selection, defaults to du.selRank
    :param PARENT_SELECT_P: Parent selection operator keyword arguments (for the default operator selRank, this defaults to `s` = 1.5 in Eiben&Smith p.81)
    :type PARENT_SELECT_P: dict, optional

    :param individualGenerator: Function to generate initial individuals, defaults to du.randomParametersAdaptive
    """

    if weightList is None:
        logging.info("weightList not set, assuming single fitness value to be maximized.")
        weightList = [1.0]

    trajectoryName = "results" + datetime.datetime.now().strftime("-%Y-%m-%d-%HH-%MM-%SS")
    logging.info(f"Trajectory Name: {trajectoryName}")
    self.HDF_FILE = os.path.join(paths.HDF_DIR, filename)
    trajectoryFileName = self.HDF_FILE

    logging.info("Storing data to: {}".format(trajectoryFileName))
    logging.info("Trajectory Name: {}".format(trajectoryName))
    if ncores is None:
        ncores = multiprocessing.cpu_count()
    logging.info("Number of cores: {}".format(ncores))

    # initialize pypet environment
    # env = pp.Environment(trajectory=trajectoryName, filename=trajectoryFileName)
    env = pp.Environment(
        trajectory=trajectoryName,
        filename=trajectoryFileName,
        use_pool=False,
        multiproc=True,
        ncores=ncores,
        complevel=9,
        log_config=paths.PYPET_LOGGING_CONFIG,
    )

    # Get the trajectory from the environment
    traj = env.traj
    # Sanity check if everything went ok
    assert (
        trajectoryName == traj.v_name
    ), f"Pypet trajectory has a different name than trajectoryName {trajectoryName}"
    # trajectoryName = traj.v_name

    self.model = model
    self.evalFunction = evalFunction
    self.weightList = weightList

    self.NGEN = NGEN
    assert POP_SIZE % 2 == 0, "Please chose an even number for POP_SIZE!"
    self.POP_SIZE = POP_SIZE
    assert POP_INIT_SIZE % 2 == 0, "Please chose an even number for POP_INIT_SIZE!"
    self.POP_INIT_SIZE = POP_INIT_SIZE
    self.ncores = ncores

    # comment string for storing info
    self.comments = "no comments"

    self.traj = env.traj
    self.env = env
    self.trajectoryName = trajectoryName
    self.trajectoryFileName = trajectoryFileName

    self._initialPopulationSimulated = False

    # -------- settings
    self.verbose = False
    self.verbose_plotting = True
    self.plotColor = "C0"

    # -------- simulation
    self.parameterSpace = parameterSpace
    self.ParametersInterval = self.parameterSpace.named_tuple_constructor
    self.paramInterval = self.parameterSpace.named_tuple

    self.toolbox = deap.base.Toolbox()

    # -------- algorithms
    if algorithm == "adaptive":
        logging.info(f"Evolution: Using algorithm: {algorithm}")
        self.matingOperator = tools.cxBlend
        self.MATE_P = {"alpha": 0.5} or MATE_P
        self.mutationOperator = du.gaussianAdaptiveMutation_nStepSizes
        self.selectionOperator = du.selBest_multiObj
        self.parentSelectionOperator = du.selRank
        self.PARENT_SELECT_P = {"s": 1.5} or PARENT_SELECT_P
        self.individualGenerator = du.randomParametersAdaptive

    elif algorithm == "nsga2":
        logging.info(f"Evolution: Using algorithm: {algorithm}")
        self.matingOperator = tools.cxSimulatedBinaryBounded
        self.MATE_P = {
            "low": self.parameterSpace.lowerBound,
            "up": self.parameterSpace.upperBound,
            "eta": 20.0,
        } or MATE_P
        self.mutationOperator = tools.mutPolynomialBounded
        self.MUTATE_P = {
            "low": self.parameterSpace.lowerBound,
            "up": self.parameterSpace.upperBound,
            "eta": 20.0,
            "indpb": 1.0 / len(self.weightList),
        } or MUTATE_P
        self.selectionOperator = tools.selNSGA2
        self.parentSelectionOperator = tools.selTournamentDCD
        self.individualGenerator = du.randomParameters

    else:
        raise ValueError("Evolution: algorithm must be one of the following: ['adaptive', 'nsga2']")

    # if the operators are set manually, then overwrite them
    self.matingOperator = self.matingOperator if hasattr(self, "matingOperator") else matingOperator
    self.mutationOperator = self.mutationOperator if hasattr(self, "mutationOperator") else mutationOperator
    self.selectionOperator = self.selectionOperator if hasattr(self, "selectionOperator") else selectionOperator
    self.parentSelectionOperator = (
        self.parentSelectionOperator if hasattr(self, "parentSelectionOperator") else parentSelectionOperator
    )
    self.individualGenerator = (
        self.individualGenerator if hasattr(self, "individualGenerator") else individualGenerator
    )

    # let's also make sure that the parameters are set correctly
    self.MATE_P = self.MATE_P if hasattr(self, "MATE_P") else {}
    self.PARENT_SELECT_P = self.PARENT_SELECT_P if hasattr(self, "PARENT_SELECT_P") else {}
    self.MUTATE_P = self.MUTATE_P if hasattr(self, "MUTATE_P") else {}
    self.SELECT_P = self.SELECT_P if hasattr(self, "SELECT_P") else {}

    self._initDEAP(
        self.toolbox,
        self.env,
        self.paramInterval,
        self.evalFunction,
        weightList=self.weightList,
        matingOperator=self.matingOperator,
        mutationOperator=self.mutationOperator,
        selectionOperator=self.selectionOperator,
        parentSelectionOperator=self.parentSelectionOperator,
        individualGenerator=self.individualGenerator,
    )

    # set up pypet trajectory
    self._initPypetTrajectory(
        self.traj,
        self.paramInterval,
        self.POP_SIZE,
        self.NGEN,
        self.model,
    )

    # population history: dict of all valid individuals per generation
    self.history = {}

    # initialize population
    self.evaluationCounter = 0
    self.last_id = 0
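The default parent-selection parameter `s = 1.5` set above for `du.selRank` comes from the linear ranking scheme of Eiben & Smith (p. 81). A sketch of the selection probabilities implied by the textbook formula (this illustrates the formula only, it is not the `du.selRank` source):

```python
def linear_ranking_probs(mu, s=1.5):
    # Eiben & Smith linear ranking:
    #   P(i) = (2 - s)/mu + 2*i*(s - 1) / (mu*(mu - 1))
    # where rank i = 0 is the worst individual and i = mu - 1 the best;
    # 1 < s <= 2 controls the selection pressure towards the best-ranked
    return [(2 - s) / mu + 2 * i * (s - 1) / (mu * (mu - 1)) for i in range(mu)]

probs = linear_ranking_probs(20, s=1.5)
# the probabilities sum to one, and the best-ranked individual is selected
# with probability s/mu, i.e. s times the uniform probability 1/mu
print(abs(sum(probs) - 1.0) < 1e-12)  # True
```

With `s = 2` the worst individual is never selected; `s = 1` recovers uniform selection.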
Returns a pandas DataFrame with the individuals of the whole evolution. This method can be used after loading an evolution from disk using loadEvolution().
Returns:
`pandas.core.frame.DataFrame`: Pandas DataFrame with all individuals and their parameters
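`dfEvolution` recovers the raw fitness values from DEAP's weighted fitness: internally `wvalues[i] = values[i] * weights[i]`, so dividing by the weight undoes the weighting even for minimized (negative-weight) objectives. A minimal sketch with plain lists:

```python
weights = [1.0, -1.0]  # maximize f0, minimize f1
values = [0.8, 0.3]    # raw fitness vector from the evaluation function

# DEAP stores the weighted values internally ...
wvalues = [v * w for v, w in zip(values, weights)]

# ... and dfEvolution divides the weights back out to recover the raw fitness
recovered = [wv / w for wv, w in zip(wvalues, weights)]
print(recovered)  # [0.8, 0.3]
```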
Returns a pandas DataFrame of the current generation's population parameters. This object can be further used to easily analyse the population.
Returns:
`pandas.core.frame.DataFrame`: Pandas DataFrame with all individuals and their parameters
Searches the entire evolution history for an individual with a specific id and returns it.
Parameters:
id (int): Individual id. Required.
Returns:
`deap.creator.Individual`: Individual (DEAP type)
Source code in neurolib/optimize/evolution/evolution.py
def getIndividualFromHistory(self, id):
    """Searches the entire evolution history for an individual with a specific id and returns it.

    :param id: Individual id
    :type id: int
    :return: Individual (`DEAP` type)
    :rtype: `deap.creator.Individual`
    """
    for key, value in self.history.items():
        for p in value:
            if p.id == id:
                return p
    logging.warning(f"No individual with id={id} found. Returning `None`")
    return None
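The lookup is a linear scan over every generation stored in the history dictionary. The same pattern in a self-contained form, with hypothetical stand-ins for the DEAP individuals:

```python
from types import SimpleNamespace

# hypothetical history: generation index -> population of that generation
history = {
    0: [SimpleNamespace(id=0), SimpleNamespace(id=1)],
    1: [SimpleNamespace(id=2), SimpleNamespace(id=3)],
}

def get_individual_from_history(history, id):
    # scan each generation's population until the id matches
    for gen, pop in history.items():
        for p in pop:
            if p.id == id:
                return p
    return None  # the real method also logs a warning here

print(get_individual_from_history(history, 2).id)  # 2
print(get_individual_from_history(history, 99))    # None
```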
Return the appropriate model with parameters for this individual
Parameters:
traj (`pypet.trajectory.Trajectory`): Pypet trajectory with individual (traj.individual) or directly a deap.Individual. Required.
Returns:
`neurolib.models.model.Model`: Model with the parameters of this individual.
Source code in neurolib/optimize/evolution/evolution.py
def getModelFromTraj(self, traj):
    """Return the appropriate model with parameters for this individual

    :param traj: Pypet trajectory with individual (traj.individual) or directly a deap.Individual
    :type traj: `pypet.trajectory.Trajectory`
    :return: Model with the parameters of this individual.
    :rtype: `neurolib.models.model.Model`
    """
    model = self.model
    # resolve star notation - MultiModel
    individual_params = self.individualToDict(self.getIndividualFromTraj(traj))
    if self.parameterSpace.star:
        individual_params = unwrap_star_dotdict(individual_params, self.model, replaced_dict=BACKWARD_REPLACE)
    model.params.update(individual_params)
    return model
Returns the scores of the current valid population
Parameters:
traj (`pypet.trajectory.Trajectory`, optional): Pypet trajectory. If not given, the current trajectory is used. Default: None
drop_first (bool, optional): Drop the first (initial) generation. This can be useful because it can have a different size (POP_INIT_SIZE) than the succeeding populations (POP_SIZE), which can make data handling tricky. Default: True
reverse (bool, optional): Reverse the order of each generation. This is a necessary workaround because loading from an hdf file returns the generations in reversed order compared to loading each generation from the pypet trajectory in memory. Default: False
Returns:
tuple[list, numpy.ndarray]: Tuple of a list of all generations and an array of the scores of all individuals
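The `reverse` and `drop_first` handling in `getScoresDuringEvolution` reduces to simple list operations on the pypet result names. A standalone sketch with hypothetical generation names:

```python
# hypothetical result names, one per generation, as pypet stores them
generation_names = ["gen_000000", "gen_000001", "gen_000002", "gen_000003"]

reverse = True      # hdf-loaded trajectories come back in reversed order
drop_first = True   # the initial generation has a different population size

if reverse:
    generation_names = generation_names[::-1]
if drop_first and "gen_000000" in generation_names:
    generation_names.remove("gen_000000")

print(generation_names)  # ['gen_000003', 'gen_000002', 'gen_000001']
```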
Source code in neurolib/optimize/evolution/evolution.py
def getValidPopulation(self, pop=None):
    """Returns a list of the valid population.

    :param pop: Population to check, defaults to self.pop
    :type pop: deap population
    :return: List of valid population
    :rtype: list
    """
    pop = pop or self.pop
    return [p for p in pop if np.isfinite(p.fitness.values).all()]
Print and plot information about the evolution and the current population
Parameters:
plot (bool, optional): Plot a figure using matplotlib, defaults to True
bestN (int, optional): Print a summary of the bestN best individuals, defaults to 5
info (bool, optional): Print information about the evolution environment, defaults to True

Source code in neurolib/optimize/evolution/evolution.py
def info(self, plot=True, bestN=5, info=True, reverse=False):
    """Print and plot information about the evolution and the current population

    :param plot: Plot a figure using `matplotlib`, defaults to True
    :type plot: bool, optional
    :param bestN: Print summary of `bestN` best individuals, defaults to 5
    :type bestN: int, optional
    :param info: Print information about the evolution environment
    :type info: bool, optional
    """
    if info:
        eu.printEvolutionInfo(self)
    validPop = self.getValidPopulation(self.pop)
    scores = self.getScores()
    # Text output
    print("--- Info summary ---")
    print("Valid: {}".format(len(validPop)))
    print("Mean score (weighted fitness): {:.2}".format(np.mean(scores)))
    eu.printParamDist(self.pop, self.paramInterval, self.gIdx)
    print("--------------------")
    print(f"Best {bestN} individuals:")
    eu.printIndividuals(self.toolbox.selBest(self.pop, bestN), self.paramInterval)
    print("--------------------")
    # Plotting evolutionary progress
    if plot:
        # hack: during the evolution we need to use reverse=True
        # after the evolution (with evolution.info()), we need False
        try:
            self.plotProgress(reverse=reverse)
        except:
            logging.warning("Could not plot progress, is this a previously saved simulation?")
        eu.plotPopulation(
            self,
            plotScattermatrix=True,
            save_plots=self.trajectoryName,
            color=self.plotColor,
        )
evaluateSimulation = lambda x: x  # the function can be omitted, that's why we define a lambda here
pars = ParameterSpace(['a', 'b'],  # should be same as previously saved evolution
                      [[0.0, 4.0], [0.0, 5.0]])
evolution = Evolution(evaluateSimulation, pars, weightList=[1.0])
evolution = evolution.loadEvolution("data/evolution-results-2020-05-15-00H-24M-48S.dill")
Parameters:
fname (str, required): Filename, defaults to a path in ./data/

Returns:
Evolution: The loaded evolution (self)
Source code in neurolib/optimize/evolution/evolution.py
def loadEvolution(self, fname):
    """Load evolution from previously saved simulations.

    Example usage:
    ```
    evaluateSimulation = lambda x: x # the function can be omitted, that's why we define a lambda here
    pars = ParameterSpace(['a', 'b'], # should be same as previously saved evolution
                          [[0.0, 4.0], [0.0, 5.0]])
    evolution = Evolution(evaluateSimulation, pars, weightList = [1.0])
    evolution = evolution.loadEvolution("data/evolution-results-2020-05-15-00H-24M-48S.dill")
    ```

    :param fname: Filename, defaults to a path in ./data/
    :type fname: str
    :return: Evolution
    :rtype: self
    """
    import dill

    evolution = dill.load(open(fname, "rb"))
    # parameter space is not saved correctly in dill, don't know why
    # that is why we recreate it using the values of
    # the parameter space in the dill
    pars = ParameterSpace(
        evolution.parameterSpace.parameterNames,
        evolution.parameterSpace.parameterValues,
    )

    evolution.parameterSpace = pars
    evolution.paramInterval = evolution.parameterSpace.named_tuple
    evolution.ParametersInterval = evolution.parameterSpace.named_tuple_constructor
    return evolution
Load results from an HDF file of a previous evolution and store the pypet trajectory in self.traj
Parameters:
filename (str, optional): HDF filename of the previous run, defaults to None
trajectoryName (str, optional): Name of the trajectory in the HDF file to load. If not given, the last one will be loaded, defaults to None

Source code in neurolib/optimize/evolution/evolution.py
def loadResults(self, filename=None, trajectoryName=None):
    """Load results from an HDF file of a previous evolution and store the
    pypet trajectory in `self.traj`

    :param filename: HDF filename of the previous run, defaults to None
    :type filename: str, optional
    :param trajectoryName: Name of the trajectory in the HDF file to load. If not given, the last one will be loaded, defaults to None
    :type trajectoryName: str, optional
    """
    if filename is None:
        filename = self.HDF_FILE
    self.traj = pu.loadPypetTrajectory(filename, trajectoryName)
Run the evolution or continue previous evolution. If evolution was not initialized first using runInitial(), this will be done.
Parameters:
verbose (bool, optional): Print and plot the state of the evolution during the run, defaults to False

Source code in neurolib/optimize/evolution/evolution.py
def run(self, verbose=False, verbose_plotting=True):
    """Run the evolution or continue previous evolution. If evolution was not initialized first
    using `runInitial()`, this will be done.

    :param verbose: Print and plot state of evolution during run, defaults to False
    :type verbose: bool, optional
    """

    self.verbose = verbose
    self.verbose_plotting = verbose_plotting
    if not self._initialPopulationSimulated:
        self.runInitial()

    self.runEvolution()
Run the evolutionary optimization process for NGEN generations.
Source code in neurolib/optimize/evolution/evolution.py
def runEvolution(self):
    """Run the evolutionary optimization process for `NGEN` generations."""
    # Start evolution
    logging.info("Start of evolution")
    self._t_start_evolution = datetime.datetime.now()
    for self.gIdx in range(self.gIdx + 1, self.gIdx + self.traj.NGEN):
        # ------- Weed out the invalid individuals and replace them by random new individuals -------- #
        validpop = self.getValidPopulation(self.pop)
        # replace invalid individuals
        invalidpop = self.getInvalidPopulation(self.pop)

        logging.info("Replacing {} invalid individuals.".format(len(invalidpop)))
        newpop = self.toolbox.population(n=len(invalidpop))
        newpop = self._tagPopulation(newpop)

        # ------- Create the next generation by crossover and mutation -------- #
        ### Select parents using rank selection and clone them ###
        offspring = list(
            map(
                self.toolbox.clone,
                self.toolbox.selectParents(self.pop, self.POP_SIZE, **self.PARENT_SELECT_P),
            )
        )

        ##### cross-over ####
        for i in range(1, len(offspring), 2):
            offspring[i - 1], offspring[i] = self.toolbox.mate(offspring[i - 1], offspring[i], **self.MATE_P)
            # delete fitness inherited from parents
            del offspring[i - 1].fitness.values, offspring[i].fitness.values
            del offspring[i - 1].fitness.wvalues, offspring[i].fitness.wvalues

            # assign parent IDs to new offspring
            offspring[i - 1].parentIds = offspring[i - 1].id, offspring[i].id
            offspring[i].parentIds = offspring[i - 1].id, offspring[i].id

            # delete id originally set from parents, needs to be deleted here!
            # will be set later in _tagPopulation()
            del offspring[i - 1].id, offspring[i].id

        ##### Mutation ####
        # Apply mutation
        du.mutateUntilValid(offspring, self.paramInterval, self.toolbox, MUTATE_P=self.MUTATE_P)

        offspring = self._tagPopulation(offspring)

        # ------- Evaluate next generation -------- #

        self.pop = offspring + newpop
        self._evalPopulationUsingPypet(self.traj, self.toolbox, offspring + newpop, self.gIdx)

        # log individuals
        self.history[self.gIdx] = validpop + offspring + newpop  # self.getValidPopulation(self.pop)

        # ------- Select surviving population -------- #

        # select next generation
        self.pop = self.toolbox.select(validpop + offspring + newpop, k=self.traj.popsize, **self.SELECT_P)

        # ------- END OF ROUND -------

        # save all simulation data to pypet
        self.pop = eu.saveToPypet(self.traj, self.pop, self.gIdx)

        # select best individual for logging
        self.best_ind = self.toolbox.selBest(self.pop, 1)[0]

        # text log
        next_print = print if self.verbose else logging.info
        next_print("----------- Generation %i -----------" % self.gIdx)
        next_print("Best individual is {}".format(self.best_ind))
        next_print("Score: {}".format(self.best_ind.fitness.score))
        next_print("Fitness: {}".format(self.best_ind.fitness.values))
        next_print("--- Population statistics ---")

        # verbose output
        if self.verbose:
            self.info(plot=self.verbose_plotting, info=True)

    logging.info("--- End of evolution ---")
    logging.info("Best individual is %s, %s" % (self.best_ind, self.best_ind.fitness.values))
    logging.info("--- End of evolution ---")

    self.traj.f_store()  # We switched off automatic storing, so we need to store manually
    self._t_end_evolution = datetime.datetime.now()

    self._buildEvolutionTree()
Run the first round of evolution with the initial population of size POP_INIT_SIZE and select the best POP_SIZE for the following evolution. This needs to be run before runEvolution()
Source code in neurolib/optimize/evolution/evolution.py
def runInitial(self):
    """Run the first round of evolution with the initial population of size `POP_INIT_SIZE`
    and select the best `POP_SIZE` for the following evolution. This needs to be run before `runEvolution()`
    """
    self._t_start_initial_population = datetime.datetime.now()

    # Create the initial population
    self.pop = self.toolbox.population(n=self.POP_INIT_SIZE)

    ### Evaluate the initial population
    logging.info("Evaluating initial population of size %i ..." % len(self.pop))
    self.gIdx = 0  # set generation index
    self.pop = self._tagPopulation(self.pop)

    # evaluate
    self.pop = self._evalPopulationUsingPypet(self.traj, self.toolbox, self.pop, self.gIdx)

    if self.verbose:
        eu.printParamDist(self.pop, self.paramInterval, self.gIdx)

    # save all simulation data to pypet
    self.pop = eu.saveToPypet(self.traj, self.pop, self.gIdx)

    # reduce initial population to popsize
    self.pop = self.toolbox.select(self.pop, k=self.traj.popsize, **self.SELECT_P)

    self._initialPopulationSimulated = True

    # populate history for tracking
    self.history[self.gIdx] = self.pop  # self.getValidPopulation(self.pop)

    self._t_end_initial_population = datetime.datetime.now()
Empirical datasets are stored in the neurolib/data/datasets directory. In each dataset, subject-wise functional and structural data is stored as MATLAB .mat matrices that can be opened in Python using SciPy's loadmat function. Structural data are \\(N \\times N\\), and functional time series are \\(N \\times t\\) matrices, \\(N\\) being the number of brain regions and \\(t\\) the number of time steps. Example datasets are included in neurolib and custom datasets can be added by placing them in the dataset directory.
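Reading one of these `.mat` matrices is a single SciPy call. A minimal round-trip sketch, assuming a scratch directory; the file name and the key `"sc"` mirror the conventions described below but are made up for this demo:

```python
import os
import tempfile

import numpy as np
import scipy.io

# Write a small fake 4x4 "structural connectivity" matrix to a .mat file ...
tmpdir = tempfile.mkdtemp()
fname = os.path.join(tmpdir, "DTI_CM_demo.mat")
scipy.io.savemat(fname, {"sc": np.eye(4)})

# ... and read it back; .mat files store each variable under its key name.
mat = scipy.io.loadmat(fname)
sc = mat["sc"]
print(sc.shape)  # (4, 4)
```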
To simulate a whole-brain network model, we first need to load the structural connectivity matrices from a DTI data set. The matrices are usually the result of processing DTI data and performing fiber tractography using software like FSL or DSIStudio. The handling of the datasets is done by the Dataset class, and the attributes in the following refer to its instances. Upon initialization, the subject-wise data set is loaded from disk. For all examples in this paper, we use freely available data from the ConnectomeDB of the Human Connectome Project (HCP). For a given parcellation of the brain into \\(N\\) brain regions, these matrices are the \\(N \\times N\\) adjacency matrix self.Cmat, i.e. the structural connectivity matrix, which determines the coupling strengths between brain areas, and the fiber length matrix Dmat, which determines the signal transmission delays. The two example datasets currently included in neurolib use the 80 cortical regions of the AAL2 atlas to define the brain areas and are sorted in an LRLR ordering.
The elements of the structural connectivity matrix Cmat are typically the number of reconstructed fibers from DTI tractography. Since the number of fibers depends on the method and the parameters of the (probabilistic or deterministic) tractography, they need to be normalized using one of the three implemented methods. The first method max is to simply divide the entries of Cmat by the largest entry, such that the largest entry becomes 1. The second method waytotal divides the entries of each column of Cmat by the number of fiber tracts generated from the respective brain region during probabilistic tractography in FSL, which is contained in the waytotal.txt file. The third method nvoxel divides the entries of each column of Cmat by the size, e.g., the number of voxels of the corresponding brain area. The last two methods yield an asymmetric connectivity matrix, while the first one keeps Cmat symmetric. All normalization steps are done on the subject-wise matrices Cmats and Dmats. In a final step, all matrices can also be averaged across all subjects to yield one Cmat and Dmat per dataset.
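In numpy terms the three normalizations amount to simple element-wise and column-wise divisions. A sketch with a made-up 3-region fiber-count matrix; the waytotal and nvoxel vectors are illustrative stand-ins for the values read from the corresponding text files:

```python
import numpy as np

# Toy fiber-count matrix (symmetric, as produced by tractography)
Cmat = np.array([[  0., 120.,  30.],
                 [120.,   0.,  60.],
                 [ 30.,  60.,   0.]])

# "max": divide by the largest entry -> largest entry becomes 1, stays symmetric
Cmat_max = Cmat / np.max(Cmat)

# "waytotal": divide each column by that region's total generated tracts -> asymmetric
waytotal = np.array([300., 600., 150.])
Cmat_way = Cmat / waytotal  # broadcasting divides column j by waytotal[j]

# "nvoxel": divide each column by the region size (e.g. number of voxels) -> asymmetric
nvoxel = np.array([50., 80., 20.])
Cmat_nvox = Cmat / nvoxel
```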
Subject-wise fMRI time series must be in a \\((N \\times t)\\)-dimensional format, where \\(N\\) is the number of brain regions and \\(t\\) the length of the time series. Each region-wise time series represents the BOLD activity averaged across all voxels of that region, which can also be obtained from software like FSL. Functional connectivity (FC) captures the spatial correlation structure of the BOLD time series averaged across the entire time of the recording. FC matrices are accessible via the attribute FCs and are generated by computing the Pearson correlation of the time series between all regions, yielding an \\(N \\times N\\) matrix for each subject.
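For an \\(N \\times t\\) BOLD array, the FC matrix is just the region-by-region Pearson correlation. A self-contained sketch with random data standing in for a real BOLD recording:

```python
import numpy as np

rng = np.random.default_rng(42)
N, t = 5, 200                   # 5 regions, 200 time points
bold = rng.normal(size=(N, t))  # stand-in for a region-wise BOLD time series

# np.corrcoef treats each row as one variable -> N x N Pearson correlation matrix
FC = np.corrcoef(bold)
print(FC.shape)  # (5, 5)
```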
To capture the temporal fluctuations of time-dependent FC(t), which are lost when averaging across the entire recording time, functional connectivity dynamics matrices (FCDs) are computed as the element-wise Pearson correlation of time-dependent FC(t) matrices in a moving window across the BOLD time series of a chosen window length of, for example, 1 min. This yields a \\(t_{FCD} \\times t_{FCD}\\) FCD matrix for each subject, with \\(t_{FCD}\\) being the number of steps the window was moved.
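The FCD construction can be sketched as follows: compute FC(t) in a moving window, keep only the upper-triangular entries of each windowed FC, and correlate those vectors between all pairs of windows. Window length and step size below are illustrative, not the package defaults:

```python
import numpy as np

rng = np.random.default_rng(1)
N, t = 5, 600
bold = rng.normal(size=(N, t))  # stand-in BOLD time series

window, step = 120, 20          # window length and step in samples (illustrative)
triu = np.triu_indices(N, k=1)  # off-diagonal FC entries only

# FC(t) in a moving window, each flattened to its upper triangle
fc_stream = np.array(
    [np.corrcoef(bold[:, start:start + window])[triu]
     for start in range(0, t - window + 1, step)]
)

# FCD: Pearson correlation between all pairs of windowed FC matrices
FCD = np.corrcoef(fc_stream)
print(FCD.shape)  # square, t_FCD x t_FCD windows
```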
Source code in neurolib/utils/loadData.py
class Dataset:
    """
    This class is for loading empirical Datasets. Datasets are stored as matrices and can
    include functional (e.g. fMRI) and structural (e.g. DTI) data.

    ## Format

    Empirical datasets are
    stored in the `neurolib/data/datasets` directory. In each dataset, subject-wise functional
    and structural data is stored as MATLAB `.mat` matrices that can be opened in
    Python using SciPy's `loadmat` function. Structural data are
    $N \\times N$, and functional time series are $N \\times t$ matrices, $N$ being the number
    of brain regions and $t$ the number of time steps. Example datasets are included in `neurolib`
    and custom datasets can be added by placing them in the dataset directory.

    ## Structural DTI data

    To simulate a whole-brain network model, first we need to load the structural connectivity
    matrices from a DTI data set. The matrices are usually a result of processing DTI data and
    performing fiber tractography using software like *FSL* or
    *DSIStudio*. The handling of the datasets is done by the
    `Dataset` class, and the attributes in the following refer to its instances.
    Upon initialization, the subject-wise data set is loaded from disk. For all examples
    in this paper, we use freely available data from the ConnectomeDB of the
    Human Connectome Project (HCP). For a given parcellation of the brain
    into $N$ brain regions, these matrices are the $N \\times N$ adjacency matrix `self.Cmat`,
    i.e. the structural connectivity matrix, which determines the coupling strengths between
    brain areas, and the fiber length matrix `Dmat` which determines the signal
    transmission delays. The two example datasets currently included in `neurolib` use the 80
    cortical regions of the AAL2 atlas to define the brain areas and are
    sorted in an LRLR ordering.


    ## Connectivity matrix normalization

    The elements of the structural connectivity matrix `Cmat` are typically the number
    of reconstructed fibers from DTI tractography. Since the number of fibers depends on the
    method and the parameters of the (probabilistic or deterministic) tractography, they need to
    be normalized using one of the three implemented methods. The first method `max` is to
    simply divide the entries of `Cmat` by the largest entry, such that the largest
    entry becomes 1. The second method `waytotal` divides the entries of each column of
    `Cmat` by the number of fiber tracts generated from the respective brain region during
    probabilistic tractography in FSL, which is contained in the `waytotal.txt` file.
    The third method `nvoxel` divides the entries of each column of `Cmat` by the
    size, e.g., the number of voxels of the corresponding brain area. The last two methods yield
    an asymmetric connectivity matrix, while the first one keeps `Cmat` symmetric.
    All normalization steps are done on the subject-wise matrices `Cmats` and
    `Dmats`. In a final step, all matrices can also be averaged across all subjects
    to yield one `Cmat` and `Dmat` per dataset.

    ## Functional MRI data

    Subject-wise fMRI time series must be in a $(N \\times t)$-dimensional format, where $N$ is the
    number of brain regions and $t$ the length of the time series. Each region-wise time series
    represents the BOLD activity averaged across all voxels of that region, which can also be obtained
    from software like FSL. Functional connectivity (FC) captures the spatial correlation structure
    of the BOLD time series averaged across the entire time of the recording. FC matrices are
    accessible via the attribute `FCs` and are generated by computing the Pearson correlation
    of the time series between all regions, yielding an $N \\times N$ matrix for each subject.

    To capture the temporal fluctuations of time-dependent FC(t), which are lost when averaging
    across the entire recording time, functional connectivity dynamics matrices (`FCDs`) are
    computed as the element-wise Pearson correlation of time-dependent FC(t) matrices in a moving
    window across the BOLD time series of a chosen window length of, for example, 1 min. This
    yields a $t_{FCD} \\times t_{FCD}$ FCD matrix for each subject, with $t_{FCD}$ being the number
    of steps the window was moved.

    """

    def __init__(self, datasetName=None, normalizeCmats="max", fcd=False, subcortical=False):
        """
        Load the empirical data sets that are provided with `neurolib`.

        Right now, datasets work on a per-subject basis. A dataset must be located
        in the `neurolib/data/datasets/` directory. Each subject's dataset
        must be in the `subjects` subdirectory of that folder. In each subject
        folder there is a directory called `functional` for time series data
        and `structural` for the structural connectivity data.

        See `loadData.loadSubjectFiles()` for more details on which files are
        being loaded.

        The structural connectivity data (accessible using the attribute
        loadData.Cmat) can be normalized using the `normalizeCmats` flag.
        This defaults to "max", which normalizes the Cmat by its maximum.
        Other options are `waytotal` or `nvoxel`, which normalize the
        Cmat by dividing every row of the matrix by the waytotal or
        nvoxel files that are provided in the datasets.

        Info: the waytotal.txt and the nvoxel.txt are files extracted from
        the tractography of DTI data using `probtrackX` from the `fsl` pipeline.

        Individual subject data is provided with the class attributes:
            self.BOLDs: BOLD timeseries of each individual
            self.FCs: Functional connectivity of BOLD timeseries

        Mean data is provided with the class attributes:
            self.Cmat: Structural connectivity matrix (for coupling strengths between areas)
            self.Dmat: Fiber length matrix (for delays)
            self.BOLDs: BOLD timeseries of each area
            self.FCs: Functional connectivity matrices of each BOLD timeseries

        :param datasetName: Name of the dataset to load
        :type datasetName: str
        :param normalizeCmats: Normalization method for the structural connectivity matrix. normalizationMethods = ["max", "waytotal", "nvoxel"]
        :type normalizeCmats: str
        :param fcd: Compute FCD matrices of BOLD data, defaults to False
        :type fcd: bool
        :param subcortical: Include subcortical areas from the atlas or not, defaults to False
        :type subcortical: bool

        """
        self.has_subjects = None
        if datasetName:
            self.loadDataset(datasetName, normalizeCmats=normalizeCmats, fcd=fcd, subcortical=subcortical)

    def loadDataset(self, datasetName, normalizeCmats="max", fcd=False, subcortical=False):
        """Load data into accessible class attributes.

        :param datasetName: Name of the dataset (must be in `datasets` directory)
        :type datasetName: str
        :param normalizeCmats: Normalization method for Cmats, defaults to "max"
        :type normalizeCmats: str, optional
        :raises NotImplementedError: If unknown normalization method is used
        """
        # the base directory of the dataset
        dsBaseDirectory = os.path.join(os.path.dirname(__file__), "..", "data", "datasets", datasetName)
        assert os.path.exists(dsBaseDirectory), f"Dataset {datasetName} not found in {dsBaseDirectory}."
        self.dsBaseDirectory = dsBaseDirectory
        self.data = dotdict({})

        # load all available subject data from disk to memory
        logging.info(f"Loading dataset {datasetName} from {self.dsBaseDirectory}.")
        self._loadSubjectFiles(self.dsBaseDirectory, subcortical=subcortical)
        assert len(self.data) > 0, "No data loaded."
        assert self.has_subjects

        self.Cmats = self._normalizeCmats(self.getDataPerSubject("cm"), method=normalizeCmats)
        self.Dmats = self.getDataPerSubject("len")

        # take the average of all
        self.Cmat = np.mean(self.Cmats, axis=0)

        self.Dmat = self.getDataPerSubject(
            "len",
            apply="all",
            apply_function=np.mean,
            apply_function_kwargs={"axis": 0},
        )
        self.BOLDs = self.getDataPerSubject("bold")
        self.FCs = self.getDataPerSubject("bold", apply_function=func.fc)

        if fcd:
            self.computeFCD()

        logging.info(f"Dataset {datasetName} loaded.")

    def computeFCD(self):
        logging.info("Computing FCD matrices ...")
        self.FCDs = self.getDataPerSubject("bold", apply_function=func.fcd, apply_function_kwargs={"stepsize": 10})

    def getDataPerSubject(
        self,
        name,
        apply="single",
        apply_function=None,
        apply_function_kwargs={},
        normalizeCmats="max",
    ):
        """Load data of a certain kind for all subjects of the current dataset

        :param name: Name of data type, i.e. "bold" or "cm"
        :type name: str
        :param apply: Apply function per subject ("single") or on all subjects ("all"), defaults to "single"
        :type apply: str, optional
        :param apply_function: Apply function on data, defaults to None
        :type apply_function: function, optional
        :param apply_function_kwargs: Keyword arguments of function, defaults to {}
        :type apply_function_kwargs: dict, optional
        :return: Subject-wise data, after function apply
        :rtype: list[np.ndarray]
        """
        values = []
        for subject, value in self.data["subjects"].items():
            assert name in value, f"Data type {name} not found in dataset of subject {subject}."
            val = value[name]
            if apply_function and apply == "single":
                val = apply_function(val, **apply_function_kwargs)
            values.append(val)

        if apply_function and apply == "all":
            values = apply_function(values, **apply_function_kwargs)
        return values

    def _normalizeCmats(self, Cmats, method="max", FSL_SAMPLES_PER_VOXEL=5000):
        # normalize per subject data
        normalizationMethods = [None, "max", "waytotal", "nvoxel"]
        if method not in normalizationMethods:
            raise NotImplementedError(
                f'"{method}" is not a known normalization method. Use one of these: {normalizationMethods}'
            )
        if method == "max":
            Cmats = [cm / np.max(cm) for cm in Cmats]
        elif method == "waytotal":
            self.waytotal = self.getDataPerSubject("waytotal")
            Cmats = [cm / wt for cm, wt in zip(Cmats, self.waytotal)]
        elif method == "nvoxel":
            self.nvoxel = self.getDataPerSubject("nvoxel")
            Cmats = [cm / (nv[:, 0] * FSL_SAMPLES_PER_VOXEL) for cm, nv in zip(Cmats, self.nvoxel)]
        return Cmats

    def _loadSubjectFiles(self, dsBaseDirectory, subcortical=False):
        """Dirty subject-wise file loader. Depends on the exact naming of all
        files as provided in the `neurolib/data/datasets` directory. Uses `glob.glob()`
        to find all files based on hardcoded file name matching.

        Can filter out subcortical regions from the AAL2 atlas.

        Info: Dirty implementation that assumes a lot of things about the dataset and filenames.

        :param dsBaseDirectory: Base directory of the dataset
        :type dsBaseDirectory: str
        :param subcortical: Filter subcortical regions from files defined by the AAL2 atlas, defaults to False
        :type subcortical: bool, optional
        """
        # check if there are subject files in the dataset
        if os.path.exists(os.path.join(dsBaseDirectory, "subjects")):
            self.has_subjects = True
            self.data["subjects"] = {}

        # data type paths, glob strings, dirty
        BOLD_paths_glob = os.path.join(dsBaseDirectory, "subjects", "*", "functional", "*rsfMRI*.mat")
        CM_paths_glob = os.path.join(dsBaseDirectory, "subjects", "*", "structural", "DTI_CM*.mat")
        LEN_paths_glob = os.path.join(dsBaseDirectory, "subjects", "*", "structural", "DTI_LEN*.mat")
        WAY_paths_glob = os.path.join(dsBaseDirectory, "subjects", "*", "structural", "waytotal*.txt")
        NVOXEL_paths_glob = os.path.join(dsBaseDirectory, "subjects", "*", "structural", "nvoxel*.txt")

        _ftypes = {
            "bold": BOLD_paths_glob,
            "cm": CM_paths_glob,
            "len": LEN_paths_glob,
            "waytotal": WAY_paths_glob,
            "nvoxel": NVOXEL_paths_glob,
        }

        for _name, _glob in _ftypes.items():
            fnames = glob.glob(_glob)
            # if there is none of this data type
            if len(fnames) == 0:
                continue
            for f in fnames:
                # dirty
                subject = f.split(os.path.sep)[-3]
                # create subject in dict if not present yet
                if subject not in self.data["subjects"]:
                    self.data["subjects"][subject] = {}

                # if the data for this type is not already loaded
                if _name not in self.data["subjects"][subject]:
                    # bold, cm and len matrices are provided as .mat files
                    if _name in ["bold", "cm", "len"]:
                        filter_subcotrical_axis = "both"
                        if _name == "bold":
                            key = "tc"
                            filter_subcotrical_axis = 0
                        elif _name == "cm":
                            key = "sc"
                        elif _name == "len":
                            key = "len"
                        # load the data
                        data = self.loadMatrix(f, key=key)
                        if not subcortical:
                            data = filterSubcortical(data, axis=filter_subcotrical_axis)
                        self.data["subjects"][subject][_name] = data
                    # waytotal and nvoxel files are .txt files
                    elif _name in ["waytotal", "nvoxel"]:
                        data = np.loadtxt(f)
                        if not subcortical:
                            data = filterSubcortical(data, axis=0)
                        self.data["subjects"][subject][_name] = data

    def loadMatrix(self, matFileName, key="", verbose=False):
        """Function to furiously load .mat files with scipy.io.loadmat.
        Info: More formats are supported but commented out in the code.

        :param matFileName: Filename of matrix to load
        :type matFileName: str
        :param key: .mat file key in which data is stored (example: "sc")
        :type key: str

        :return: Loaded matrix
        :rtype: numpy.ndarray
        """
        if verbose:
            print(f"Loading {matFileName}")
        matrix = scipy.io.loadmat(matFileName)
        if verbose:
            print("\tLoading using scipy.io.loadmat...")
            print(f"Keys: {list(matrix.keys())}")
        if key != "" and key in list(matrix.keys()):
            matrix = matrix[key]
            if verbose:
                print(f'\tLoaded key "{key}"')
        elif type(matrix) is dict:
            raise ValueError(f"Object is still a dict. Here are the keys: {matrix.keys()}")
        return matrix
Load the empirical data sets that are provided with neurolib.
Right now, datasets work on a per-subject basis. A dataset must be located in the neurolib/data/datasets/ directory. Each subject's dataset must be in the subjects subdirectory of that folder. In each subject folder there is a directory called functional for time series data and structural for the structural connectivity data.
See loadData.loadSubjectFiles() for more details on which files are being loaded.
The structural connectivity data (accessible using the attribute loadData.Cmat) can be normalized using the normalizeCmats flag. This defaults to "max", which normalizes the Cmat by its maximum. Other options are waytotal or nvoxel, which normalize the Cmat by dividing every row of the matrix by the waytotal or nvoxel files that are provided in the datasets.
Info: the waytotal.txt and the nvoxel.txt are files extracted from the tractography of DTI data using probtrackX from the fsl pipeline.
Individual subject data is provided with the class attributes:
self.BOLDs: BOLD timeseries of each individual
self.FCs: Functional connectivity of BOLD timeseries

Mean data is provided with the class attributes:
self.Cmat: Structural connectivity matrix (for coupling strengths between areas)
self.Dmat: Fiber length matrix (for delays)
self.BOLDs: BOLD timeseries of each area
self.FCs: Functional connectivity matrices of each BOLD timeseries
Parameters:
datasetName (str): Name of the dataset to load, defaults to None
normalizeCmats (str): Normalization method for the structural connectivity matrix, one of ["max", "waytotal", "nvoxel"], defaults to "max"
fcd (bool): Compute FCD matrices of BOLD data, defaults to False
subcortical (bool): Include subcortical areas from the atlas or not, defaults to False

Source code in neurolib/utils/loadData.py
def __init__(self, datasetName=None, normalizeCmats="max", fcd=False, subcortical=False):
    """
    Load the empirical data sets that are provided with `neurolib`.

    Right now, datasets work on a per-subject basis. A dataset must be located
    in the `neurolib/data/datasets/` directory. Each subject's dataset
    must be in the `subjects` subdirectory of that folder. In each subject
    folder there is a directory called `functional` for time series data
    and `structural` for the structural connectivity data.

    See `loadData.loadSubjectFiles()` for more details on which files are
    being loaded.

    The structural connectivity data (accessible using the attribute
    loadData.Cmat) can be normalized using the `normalizeCmats` flag.
    This defaults to "max", which normalizes the Cmat by its maximum.
    Other options are `waytotal` or `nvoxel`, which normalize the
    Cmat by dividing every row of the matrix by the waytotal or
    nvoxel files that are provided in the datasets.

    Info: the waytotal.txt and the nvoxel.txt are files extracted from
    the tractography of DTI data using `probtrackX` from the `fsl` pipeline.

    Individual subject data is provided with the class attributes:
        self.BOLDs: BOLD timeseries of each individual
        self.FCs: Functional connectivity of BOLD timeseries

    Mean data is provided with the class attributes:
        self.Cmat: Structural connectivity matrix (for coupling strengths between areas)
        self.Dmat: Fiber length matrix (for delays)
        self.BOLDs: BOLD timeseries of each area
        self.FCs: Functional connectivity matrices of each BOLD timeseries

    :param datasetName: Name of the dataset to load
    :type datasetName: str
    :param normalizeCmats: Normalization method for the structural connectivity matrix. normalizationMethods = ["max", "waytotal", "nvoxel"]
    :type normalizeCmats: str
    :param fcd: Compute FCD matrices of BOLD data, defaults to False
    :type fcd: bool
    :param subcortical: Include subcortical areas from the atlas or not, defaults to False
    :type subcortical: bool

    """
    self.has_subjects = None
    if datasetName:
        self.loadDataset(datasetName, normalizeCmats=normalizeCmats, fcd=fcd, subcortical=subcortical)
Load data of a certain kind for all subjects of the current dataset.

Parameters:

- name (str, required): Name of data type, i.e. "bold" or "cm"
- apply (str, optional): Apply function per subject ("single") or on all subjects ("all"), defaults to "single"
- apply_function (function, optional): Apply function on data, defaults to None
- apply_function_kwargs (dict, optional): Keyword arguments of function, defaults to {}

Returns:

- list[np.ndarray]: Subjectwise data, after function apply
Source code in neurolib/utils/loadData.py
def getDataPerSubject(
    self,
    name,
    apply="single",
    apply_function=None,
    apply_function_kwargs={},
    normalizeCmats="max",
):
    """Load data of a certain kind for all subjects of the current dataset

    :param name: Name of data type, i.e. "bold" or "cm"
    :type name: str
    :param apply: Apply function per subject ("single") or on all subjects ("all"), defaults to "single"
    :type apply: str, optional
    :param apply_function: Apply function on data, defaults to None
    :type apply_function: function, optional
    :param apply_function_kwargs: Keyword arguments of function, defaults to {}
    :type apply_function_kwargs: dict, optional
    :return: Subjectwise data, after function apply
    :rtype: list[np.ndarray]
    """
    values = []
    for subject, value in self.data["subjects"].items():
        assert name in value, f"Data type {name} not found in dataset of subject {subject}."
        val = value[name]
        if apply_function and apply == "single":
            val = apply_function(val, **apply_function_kwargs)
        values.append(val)

    if apply_function and apply == "all":
        values = apply_function(values, **apply_function_kwargs)
    return values
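The per-subject apply pattern above can be sketched without any dataset on disk. Here `subjects` is a stand-in for `self.data["subjects"]`, and a plain `np.corrcoef` replaces `func.fc`; the subject names and array shapes are hypothetical, not part of the library.

```python
import numpy as np

# stand-in for self.data["subjects"]: two subjects with random "bold" timeseries
rng = np.random.default_rng(0)
subjects = {
    "sub-01": {"bold": rng.normal(size=(4, 100))},
    "sub-02": {"bold": rng.normal(size=(4, 100))},
}

def get_data_per_subject(name, apply_function=None):
    """Collect one data type per subject, optionally applying a function to each."""
    values = []
    for subject, value in subjects.items():
        assert name in value, f"Data type {name} not found for {subject}."
        val = value[name]
        if apply_function is not None:
            val = apply_function(val)
        values.append(val)
    return values

bolds = get_data_per_subject("bold")             # raw timeseries, one per subject
fcs = get_data_per_subject("bold", np.corrcoef)  # 4x4 FC matrix per subject
```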
Parameters:

- datasetName (str, required): Name of the dataset (must be in the datasets directory)
- normalizeCmats (str, optional): Normalization method for Cmats, defaults to "max"

Raises:

- NotImplementedError: If an unknown normalization method is used
Source code in neurolib/utils/loadData.py
def loadDataset(self, datasetName, normalizeCmats="max", fcd=False, subcortical=False):
    """Load data into accessible class attributes.

    :param datasetName: Name of the dataset (must be in `datasets` directory)
    :type datasetName: str
    :param normalizeCmats: Normalization method for Cmats, defaults to "max"
    :type normalizeCmats: str, optional
    :raises NotImplementedError: If unknown normalization method is used
    """
    # the base directory of the dataset
    dsBaseDirectory = os.path.join(os.path.dirname(__file__), "..", "data", "datasets", datasetName)
    assert os.path.exists(dsBaseDirectory), f"Dataset {datasetName} not found in {dsBaseDirectory}."
    self.dsBaseDirectory = dsBaseDirectory
    self.data = dotdict({})

    # load all available subject data from disk to memory
    logging.info(f"Loading dataset {datasetName} from {self.dsBaseDirectory}.")
    self._loadSubjectFiles(self.dsBaseDirectory, subcortical=subcortical)
    assert len(self.data) > 0, "No data loaded."
    assert self.has_subjects

    self.Cmats = self._normalizeCmats(self.getDataPerSubject("cm"), method=normalizeCmats)
    self.Dmats = self.getDataPerSubject("len")

    # take the average of all
    self.Cmat = np.mean(self.Cmats, axis=0)

    self.Dmat = self.getDataPerSubject(
        "len",
        apply="all",
        apply_function=np.mean,
        apply_function_kwargs={"axis": 0},
    )
    self.BOLDs = self.getDataPerSubject("bold")
    self.FCs = self.getDataPerSubject("bold", apply_function=func.fc)

    if fcd:
        self.computeFCD()

    logging.info(f"Dataset {datasetName} loaded.")
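The "max" normalization applied to each Cmat amounts to dividing the whole matrix by its largest entry, so all coupling strengths end up in [0, 1]. A minimal sketch with a synthetic matrix (not the neurolib implementation):

```python
import numpy as np

rng = np.random.default_rng(42)
cmat = rng.random((5, 5))        # synthetic structural connectivity matrix
np.fill_diagonal(cmat, 0.0)      # no self-connections

cmat_norm = cmat / np.max(cmat)  # "max" normalization: entries now in [0, 1]
```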
Function to furiously load .mat files with scipy.io.loadmat. Info: More formats are supported but commented out in the code.
Parameters:

- matFileName (str, required): Filename of matrix to load
- key (str): .mat file key in which data is stored (example: "sc"), defaults to ''

Returns:

- numpy.ndarray: Loaded matrix
Source code in neurolib/utils/loadData.py
def loadMatrix(self, matFileName, key="", verbose=False):
    """Function to furiously load .mat files with scipy.io.loadmat.
    Info: More formats are supported but commented out in the code.

    :param matFileName: Filename of matrix to load
    :type matFileName: str
    :param key: .mat file key in which data is stored (example: "sc")
    :type key: str

    :return: Loaded matrix
    :rtype: numpy.ndarray
    """
    if verbose:
        print(f"Loading {matFileName}")
    matrix = scipy.io.loadmat(matFileName)
    if verbose:
        print("\tLoading using scipy.io.loadmat...")
        print(f"Keys: {list(matrix.keys())}")
    if key != "" and key in list(matrix.keys()):
        matrix = matrix[key]
        if verbose:
            print(f'\tLoaded key "{key}"')
    elif type(matrix) is dict:
        raise ValueError(f"Object is still a dict. Here are the keys: {matrix.keys()}")
    return matrix
    return 0
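A quick round-trip illustrates the `key` argument: `scipy.io.loadmat` returns a dict keyed by variable name (plus header entries), and `loadMatrix` extracts one entry from it. The temporary file and the key name "sc" are just for this sketch.

```python
import os
import tempfile

import numpy as np
import scipy.io

sc = np.eye(3)
with tempfile.TemporaryDirectory() as tmp:
    fname = os.path.join(tmp, "sc.mat")
    scipy.io.savemat(fname, {"sc": sc})  # store the matrix under key "sc"
    mat = scipy.io.loadmat(fname)        # dict containing "sc" plus header keys
    loaded = mat["sc"]                   # what loadMatrix(fname, key="sc") returns
```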
Computes the FCD (functional connectivity dynamics) matrix, as described in Deco's whole-brain model papers. Default parameters are suited for computing FCD matrices of BOLD timeseries: a window size of 30 at the BOLD sampling rate of 0.5 Hz equals 60 s, and stepsize = 5 equals 10 s.

Parameters:

- ts (numpy.ndarray, required): Nxt timeseries
- windowsize (int, optional): Size of each rolling window in timesteps, defaults to 30
- stepsize (int, optional): Stepsize between each rolling window, defaults to 5

Returns:

- numpy.ndarray: T x T FCD matrix
Source code in neurolib/utils/functions.py
def fcd(ts, windowsize=30, stepsize=5):
    """Computes FCD (functional connectivity dynamics) matrix, as described in Deco's whole-brain model papers.
    Default parameters are suited for computing FCD matrices of BOLD timeseries:
    A windowsize of 30 at the BOLD sampling rate of 0.5 Hz equals 60s and stepsize = 5 equals 10s.

    :param ts: Nxt timeseries
    :type ts: numpy.ndarray
    :param windowsize: Size of each rolling window in timesteps, defaults to 30
    :type windowsize: int, optional
    :param stepsize: Stepsize between each rolling window, defaults to 5
    :type stepsize: int, optional
    :return: T x T FCD matrix
    :rtype: numpy.ndarray
    """
    t_window_width = int(windowsize)  # int(windowsize * 30) # x minutes
    stepsize = stepsize  # ts.shape[1]/N
    corrFCs = []
    try:
        counter = range(0, ts.shape[1] - t_window_width, stepsize)

        for t in counter:
            ts_slice = ts[:, t : t + t_window_width]
            corrFCs.append(np.corrcoef(ts_slice))

        FCd = np.empty([len(corrFCs), len(corrFCs)])
        f1i = 0
        for f1 in corrFCs:
            f2i = 0
            for f2 in corrFCs:
                FCd[f1i, f2i] = np.corrcoef(f1.reshape((1, f1.size)), f2.reshape((1, f2.size)))[0, 1]
                f2i += 1
            f1i += 1

        return FCd
    except:
        return 0
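The windowing logic can be sketched with plain numpy: compute an FC matrix per rolling window, then correlate every pair of window-FCs. This is a re-implementation for illustration (random data, not the library function), but it mirrors the loop structure above.

```python
import numpy as np

rng = np.random.default_rng(0)
ts = rng.normal(size=(5, 200))  # N=5 regions, t=200 samples

windowsize, stepsize = 30, 5
# one FC matrix per rolling window
fcs = [np.corrcoef(ts[:, t : t + windowsize])
       for t in range(0, ts.shape[1] - windowsize, stepsize)]

# FCD: Pearson correlation between flattened window-FCs
n = len(fcs)
fcd_mat = np.empty((n, n))
for i, f1 in enumerate(fcs):
    for j, f2 in enumerate(fcs):
        fcd_mat[i, j] = np.corrcoef(f1.ravel(), f2.ravel())[0, 1]
```

By construction the result is symmetric with ones on the diagonal, since each window-FC correlates perfectly with itself.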
Returns the mean power spectrum of multiple timeseries.
Parameters:

- activities (np.ndarray, required): N-dimensional timeseries
- dt (float, required): Simulation time step
- maxfr (int, optional): Maximum frequency in Hz to cut off from return, defaults to 70
- spectrum_windowsize (float, optional): Length of the window used in Welch's method (in seconds), defaults to 1.0
- normalize (bool, optional): Maximum power is normalized to 1 if True, defaults to False

Returns:

- [np.ndarray, np.ndarray]: Frequencies and the power of each frequency
Source code in neurolib/utils/functions.py
def getMeanPowerSpectrum(activities, dt, maxfr=70, spectrum_windowsize=1.0, normalize=False):
    """Returns the mean power spectrum of multiple timeseries.

    :param activities: N-dimensional timeseries
    :type activities: np.ndarray
    :param dt: Simulation time step
    :type dt: float
    :param maxfr: Maximum frequency in Hz to cutoff from return, defaults to 70
    :type maxfr: int, optional
    :param spectrum_windowsize: Length of the window used in Welch's method (in seconds), defaults to 1.0
    :type spectrum_windowsize: float, optional
    :param normalize: Maximum power is normalized to 1 if True, defaults to False
    :type normalize: bool, optional

    :return: Frequencies and the power of each frequency
    :rtype: [np.ndarray, np.ndarray]
    """

    powers = np.zeros(getPowerSpectrum(activities[0], dt, maxfr, spectrum_windowsize)[0].shape)
    ps = []
    for rate in activities:
        f, Pxx_spec = getPowerSpectrum(rate, dt, maxfr, spectrum_windowsize)
        ps.append(Pxx_spec)
        powers += Pxx_spec
    powers /= len(ps)
    if normalize:
        powers /= np.max(powers)
    return f, powers
Parameters:

- activity (np.ndarray, required): One-dimensional timeseries
- dt (float, required): Simulation time step
- maxfr (int, optional): Maximum frequency in Hz to cut off from return, defaults to 70
- spectrum_windowsize (float, optional): Length of the window used in Welch's method (in seconds), defaults to 1.0
- normalize (bool, optional): Maximum power is normalized to 1 if True, defaults to False

Returns:

- [np.ndarray, np.ndarray]: Frequencies and the power of each frequency
Source code in neurolib/utils/functions.py
def getPowerSpectrum(activity, dt, maxfr=70, spectrum_windowsize=1.0, normalize=False):
    """Returns a power spectrum using Welch's method.

    :param activity: One-dimensional timeseries
    :type activity: np.ndarray
    :param dt: Simulation time step
    :type dt: float
    :param maxfr: Maximum frequency in Hz to cutoff from return, defaults to 70
    :type maxfr: int, optional
    :param spectrum_windowsize: Length of the window used in Welch's method (in seconds), defaults to 1.0
    :type spectrum_windowsize: float, optional
    :param normalize: Maximum power is normalized to 1 if True, defaults to False
    :type normalize: bool, optional

    :return: Frequencies and the power of each frequency
    :rtype: [np.ndarray, np.ndarray]
    """
    # convert to one-dimensional array if it is an (1xn)-D array
    if activity.shape[0] == 1 and activity.shape[1] > 1:
        activity = activity[0]
    assert len(activity.shape) == 1, "activity is not one-dimensional!"

    f, Pxx_spec = scipy.signal.welch(
        activity,
        1000 / dt,
        window="hann",
        nperseg=int(spectrum_windowsize * 1000 / dt),
        scaling="spectrum",
    )
    f = f[f < maxfr]
    Pxx_spec = Pxx_spec[0 : len(f)]
    if normalize:
        Pxx_spec /= np.max(Pxx_spec)
    return f, Pxx_spec
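Note the unit convention: `dt` is in milliseconds, so the sampling rate passed to Welch's method is 1000/dt Hz. The same call can be checked directly against a pure sine, whose spectral peak should land on the oscillation frequency:

```python
import numpy as np
import scipy.signal

dt = 1.0                                 # timestep in ms, as in the function above
fs = 1000.0 / dt                         # sampling rate in Hz
t = np.arange(0, 2000) * dt / 1000.0     # 2 s of signal, time axis in seconds
activity = np.sin(2 * np.pi * 10.0 * t)  # 10 Hz oscillation

# same Welch parameters as getPowerSpectrum with spectrum_windowsize=1.0
f, pxx = scipy.signal.welch(
    activity, fs, window="hann", nperseg=int(1.0 * fs), scaling="spectrum"
)
peak_freq = f[np.argmax(pxx)]            # expected to sit at ~10 Hz
```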
Computes the Kuramoto order parameter of a timeseries which is a measure for synchrony. Can smooth timeseries if there is noise. Peaks are then detected using a peakfinder. From these peaks a phase is derived and then the amount of phase synchrony (the Kuramoto order parameter) is computed.
Parameters:

- traces (numpy.ndarray, required): Multidimensional timeseries array
- smoothing (float, optional): Gaussian smoothing strength, defaults to 0.0
- distance (int, optional): Minimum distance between peaks in samples, defaults to 10
- prominence (int, optional): Vertical distance between the peak and its lowest contour line, defaults to 5

Returns:

- numpy.ndarray: Timeseries of the Kuramoto order parameter
Source code in neurolib/utils/functions.py
def kuramoto(traces, smoothing=0.0, distance=10, prominence=5):
    """
    Computes the Kuramoto order parameter of a timeseries which is a measure for synchrony.
    Can smooth timeseries if there is noise.
    Peaks are then detected using a peakfinder. From these peaks a phase is derived and then
    the amount of phase synchrony (the Kuramoto order parameter) is computed.

    :param traces: Multidimensional timeseries array
    :type traces: numpy.ndarray
    :param smoothing: Gaussian smoothing strength
    :type smoothing: float, optional
    :param distance: minimum distance between peaks in samples
    :type distance: int, optional
    :param prominence: vertical distance between the peak and its lowest contour line
    :type prominence: int, optional

    :return: Timeseries of Kuramoto order parameter
    :rtype: numpy.ndarray
    """

    @numba.njit
    def _estimate_phase(maximalist, n_times):
        lastMax = 0
        phases = np.empty((n_times), dtype=np.float64)
        n = 0
        for m in maximalist:
            for t in range(lastMax, m):
                # compute instantaneous phase
                phi = 2 * np.pi * float(t - lastMax) / float(m - lastMax)
                phases[n] = phi
                n += 1
            lastMax = m
        phases[-1] = 2 * np.pi
        return phases

    @numba.njit
    def _estimate_r(ntraces, times, phases):
        kuramoto = np.empty((times), dtype=np.float64)
        for t in range(times):
            R = 1j * 0
            for n in range(ntraces):
                R += np.exp(1j * phases[n, t])
            R /= ntraces
            kuramoto[t] = np.absolute(R)
        return kuramoto

    nTraces, nTimes = traces.shape
    phases = np.empty_like(traces)
    for n in range(nTraces):
        a = traces[n]
        # find peaks
        if smoothing > 0:
            # smooth data
            a = scipy.ndimage.filters.gaussian_filter(traces[n], smoothing)
        maximalist = scipy.signal.find_peaks(a, distance=distance, prominence=prominence)[0]
        maximalist = np.append(maximalist, len(traces[n]) - 1).astype(int)

        if len(maximalist) > 1:
            phases[n, :] = _estimate_phase(maximalist, nTimes)
        else:
            logging.warning("Kuramoto: No peaks found, returning 0.")
            return 0
    # determine kuramoto order parameter
    kuramoto = _estimate_r(nTraces, nTimes, phases)
    return kuramoto
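The order parameter itself (the `_estimate_r` step) is simply the magnitude of the mean phase vector across oscillators: identical phases give R = 1, while phases spread evenly around the circle give R ≈ 0. A check of that formula alone, without the peak-detection pipeline:

```python
import numpy as np

def order_parameter(phases):
    """Kuramoto R for one time point: |mean of exp(i * phi_n)| over oscillators."""
    return np.abs(np.mean(np.exp(1j * phases)))

r_sync = order_parameter(np.zeros(8))                                     # fully synchronous
r_spread = order_parameter(np.linspace(0, 2 * np.pi, 8, endpoint=False))  # evenly spread
```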
Pearson correlation of the off-diagonal triangular entries of two matrices. The triangular indices are offset by k = 1 in order to ignore the diagonal.
Parameters:

- M1 (numpy.ndarray, required): First matrix
- M2 (numpy.ndarray, required): Second matrix

Returns:

- float: Correlation coefficient
Source code in neurolib/utils/functions.py
def matrix_correlation(M1, M2):
    """Pearson correlation of the triangular entries of two matrices.
    The triangular matrix is offset by k = 1 in order to ignore the diagonal.

    :param M1: First matrix
    :type M1: numpy.ndarray
    :param M2: Second matrix
    :type M2: numpy.ndarray
    :return: Correlation coefficient
    :rtype: float
    """
    cc = np.corrcoef(M1[np.triu_indices_from(M1, k=1)], M2[np.triu_indices_from(M2, k=1)])[0, 1]
    return cc
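The same computation can be run directly with numpy to build intuition: a matrix compared with itself correlates at exactly 1, and a slightly perturbed copy stays close to 1. The synthetic "FC" matrices below are for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.random((6, 6))
M1 = (a + a.T) / 2                    # symmetric matrix, e.g. a synthetic FC
M2 = M1 + 0.01 * rng.random((6, 6))   # slightly perturbed copy

iu = np.triu_indices_from(M1, k=1)    # off-diagonal entries only (k=1 skips diagonal)
cc = np.corrcoef(M1[iu], M2[iu])[0, 1]
cc_self = np.corrcoef(M1[iu], M1[iu])[0, 1]  # identical matrices correlate at 1
```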
Computes the Kolmogorov distance between the distributions of lower-triangular entries of two matrices. See: https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test#Two-sample_Kolmogorov%E2%80%93Smirnov_test

Parameters:

- m1 (np.ndarray, required): matrix 1
- m2 (np.ndarray, required): matrix 2

Returns:

- float: 2-sample KS statistic
Source code in neurolib/utils/functions.py
def matrix_kolmogorov(m1, m2):
    """Computes the Kolmogorov distance between the distributions of lower-triangular entries of two matrices
    See: https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test#Two-sample_Kolmogorov%E2%80%93Smirnov_test

    :param m1: matrix 1
    :type m1: np.ndarray
    :param m2: matrix 2
    :type m2: np.ndarray
    :return: 2-sample KS statistics
    :rtype: float
    """
    # get the values of the lower triangle
    triu_ind1 = np.triu_indices(m1.shape[0], k=1)
    m1_vals = m1[triu_ind1]

    triu_ind2 = np.triu_indices(m2.shape[0], k=1)
    m2_vals = m2[triu_ind2]

    # return the distance, omit p-value
    return scipy.stats.ks_2samp(m1_vals, m2_vals)[0]
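The KS statistic is the maximum gap between the two empirical CDFs, so it is 0 for identical samples and 1 for samples with disjoint supports. A direct check with `scipy.stats.ks_2samp` on synthetic matrices:

```python
import numpy as np
import scipy.stats

rng = np.random.default_rng(2)
m1 = rng.random((10, 10))
m2 = m1 + 5.0  # same distribution shape, shifted entirely out of [0, 1]

iu = np.triu_indices(10, k=1)
d_same = scipy.stats.ks_2samp(m1[iu], m1[iu])[0]   # identical samples -> distance 0
d_shift = scipy.stats.ks_2samp(m1[iu], m2[iu])[0]  # disjoint supports -> distance 1
```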
Computes the Kolmogorov distance between two timeseries. This is done by first computing two FCD matrices (one for each timeseries) and then measuring the Kolmogorov distance of the upper triangles of these matrices.

Parameters:

- ts1 (np.ndarray, required): Timeseries 1
- ts2 (np.ndarray, required): Timeseries 2

Returns:

- float: 2-sample KS statistic
Source code in neurolib/utils/functions.py
def ts_kolmogorov(ts1, ts2, **fcd_kwargs):
    """Computes kolmogorov distance between two timeseries.
    This is done by first computing two FCD matrices (one for each timeseries)
    and then measuring the Kolmogorov distance of the upper triangle of these matrices.

    :param ts1: Timeseries 1
    :type ts1: np.ndarray
    :param ts2: Timeseries 2
    :type ts2: np.ndarray
    :return: 2-sample KS statistics
    :rtype: float
    """
    fcd1 = fcd(ts1, **fcd_kwargs)
    fcd2 = fcd(ts2, **fcd_kwargs)

    return matrix_kolmogorov(fcd1, fcd2)
"},{"location":"utils/functions/#neurolib.utils.functions.weighted_correlation","title":"weighted_correlation(x, y, w)","text":"
Weighted Pearson correlation of two series.
Parameters:

- x (list, np.array, required): Timeseries 1
- y (list, np.array, required): Timeseries 2, must have same length as x
- w (list, np.array, required): Weight vector, must have same length as x and y

Returns:

- float: Weighted correlation coefficient
Source code in neurolib/utils/functions.py
def weighted_correlation(x, y, w):
    """Weighted Pearson correlation of two series.

    :param x: Timeseries 1
    :type x: list, np.array
    :param y: Timeseries 2, must have same length as x
    :type y: list, np.array
    :param w: Weight vector, must have same length as x and y
    :type w: list, np.array
    :return: Weighted correlation coefficient
    :rtype: float
    """

    def weighted_mean(x, w):
        """Weighted Mean"""
        return np.sum(x * w) / np.sum(w)

    def weighted_cov(x, y, w):
        """Weighted Covariance"""
        return np.sum(w * (x - weighted_mean(x, w)) * (y - weighted_mean(y, w))) / np.sum(w)

    return weighted_cov(x, y, w) / np.sqrt(weighted_cov(x, x, w) * weighted_cov(y, y, w))
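With uniform weights this reduces to the ordinary Pearson correlation, so perfectly linear series must give exactly ±1. A small self-contained check of the formula above:

```python
import numpy as np

def weighted_mean(x, w):
    return np.sum(x * w) / np.sum(w)

def weighted_cov(x, y, w):
    return np.sum(w * (x - weighted_mean(x, w)) * (y - weighted_mean(y, w))) / np.sum(w)

def weighted_corr(x, y, w):
    return weighted_cov(x, y, w) / np.sqrt(weighted_cov(x, x, w) * weighted_cov(y, y, w))

x = np.arange(10, dtype=float)
w = np.ones(10)                         # uniform weights -> plain Pearson correlation
r_pos = weighted_corr(x, 2 * x + 1, w)  # perfect linear relation -> 1
r_neg = weighted_corr(x, -x, w)         # perfect anti-correlation -> -1
```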
class ParameterSpace:
    """
    Parameter space
    """

    def __init__(self, parameters, parameterValues=None, kind=None, allow_star_notation=False):
        """
        Initialize parameter space. Parameter space can be initialized in two ways:
        Either `parameters` is a dictionary of the form `{"parName1" : [0, 1, 2], "parName2" : [3, 4]}`,
        or `parameters` is a list of names and `parameterValues` are values of each parameter.

        :param parameters: parameter dictionary or list of names of parameters e.g. `['x', 'y']`
        :type parameters: `dict, list[str, str]`
        :param parameterValues: list of parameter values (must be floats) e.g. `[[x_min, x_max], [y_min, y_max], ...]`
        :type parameterValues: `list[list[float, float]]`
        :param kind: string describing the kind of parameter space:
            - `point`: a single point in parameter space
            - `bound`: a bound in parameter space, i.e. two values per parameter
            - `grid`: a cartesian product over parameters
            - `sequence`: a sequence of univariate parameter changes - only one will change at a time, other
              parameters will stay as default
            - `explicit`: explicitly define a parameter space, i.e. lists of all parameters have to have the same length
            - None: ParameterSpace tries to auto-detect the correct kind
        :type kind: str
        :param allow_star_notation: whether to allow star notation in parameter names - MultiModel
        :type allow_star_notation: bool
        """
        assert kind in SUPPORTED_KINDS
        self.kind = kind
        self.parameters = parameters
        self.star = allow_star_notation
        # in case a parameter dictionary was given
        if parameterValues is None:
            assert isinstance(
                parameters, dict
            ), "Parameters must be a dict, if no values are given in `parameterValues`"
        else:
            # check if all names are strings
            assert np.all([isinstance(pn, str) for pn in parameters]), "Parameter names must all be strings."
            # check if all parameter values are lists
            assert np.all([isinstance(pv, (list, tuple)) for pv in parameterValues]), "Parameter values must be a list."
            parameters = self._parameterListsToDict(parameters, parameterValues)

        self.parameters = self._processParameterDict(parameters)
        self.parameterNames = list(self.parameters.keys())
        self.parameterValues = list(self.parameters.values())

        # let's create a named tuple of the parameters
        # Note: evolution.py implementation relies on named tuples
        self.named_tuple_constructor = namedtuple("ParameterSpace", sanitize_dot_dict(parameters))
        self.named_tuple = self.named_tuple_constructor(*self.parameterValues)

        # set attributes of this class to make it accessible
        for i, p in enumerate(self.parameters):
            setattr(self, p, self.parameterValues[i])

    def __str__(self):
        """Print the named_tuple object"""
        return str(self.parameters)

    def __getitem__(self, key):
        return self.parameters[key]

    def __setitem__(self, key, value):
        self.parameters[key] = value
        self._processParameterDict(self.parameters)

    def dict(self):
        """Returns the parameter space as a dictionary of lists.
        :rtype: dict
        """
        return self.parameters

    def get_parametrization(self):
        assert self.kind is not None
        if self.kind in ["point", "bound", "explicit"]:
            # check same length
            it = iter(self.parameters.values())
            length = len(next(it))
            assert all(len(l) == length for l in it)
            # just return as dict
            return self.parameters
        elif self.kind == "grid":
            # cartesian product
            return pypet.cartesian_product(self.parameters)
        elif self.kind == "sequence":
            # return as sequence
            return self._inflate_to_sequence(self.parameters)

    @staticmethod
    def _inflate_to_sequence(param_dict):
        """
        Inflate dict of parameters to a sequence of same length, using None as
        placeholder when a particular parameter should not change.
        {"a": [1, 2], "b": [3, 4, 5]} ->
        {"a": [1, 2, None, None, None], "b": [None, None, 3, 4, 5]}
        """
        return {
            k: [None] * sum([len(tmp) for tmp in list(param_dict.values())[:i]])
            + v
            + [None] * sum([len(tmp) for tmp in list(param_dict.values())[i + 1 :]])
            for i, (k, v) in enumerate(param_dict.items())
        }

    def getRandom(self, safe=False):
        """This function returns a random single parameter from the whole space
        in the form of { "par1" : 1, "par2" : 2}.

        This function is used by neurolib/optimize/exploration.py
        to add parameters of the space to pypet (for initialization)

        :param safe: Return a "safe" parameter or the original. Safe refers to
            returning python floats, not, for example numpy.float64 (necessary for pypet).
        :type safe: bool
        """
        randomPar = {}
        if safe:
            for key, value in self.parameters.items():
                random_value = np.random.choice(value)
                if isinstance(random_value, np.float64):
                    random_value = float(random_value)
                elif isinstance(random_value, np.int64):
                    random_value = int(random_value)
                randomPar[key] = random_value
        else:
            for key, value in self.parameters.items():
                randomPar[key] = np.random.choice(value)
        return randomPar

    @property
    def lowerBound(self):
        """Returns lower bound of all parameters as a list"""
        return [np.min(p) for p in self.parameterValues]

    @property
    def upperBound(self):
        """Returns upper bound of all parameters as a list"""
        return [np.max(p) for p in self.parameterValues]

    @property
    def ndims(self):
        """Number of dimensions (parameters)"""
        return len(self.parameters)

    @staticmethod
    def _validate_single_bound(single_bound):
        """
        Validate single bound.
        :param single_bound: single coordinate bound to validate
        :type single_bound: list|tuple
        """
        assert isinstance(
            single_bound, (list, tuple)
        ), "An error occurred while validating the ParameterSpace of kind 'bound': Pass parameter bounds as a list or tuple!"
        assert (
            len(single_bound) == 2
        ), "An error occurred while validating the ParameterSpace of kind 'bound': Only two bounds (min and max) are allowed"
        assert (
            single_bound[1] > single_bound[0]
        ), "An error occurred while validating the ParameterSpace of kind 'bound': Minimum parameter value can't be larger than the maximum!"

    def _validate_param_bounds(self, param_bounds):
        """
        Validate param bounds.
        :param param_bounds: parameter bounds to validate
        :type param_bounds: list|None
        """
        assert param_bounds is not None
        assert isinstance(param_bounds, (list, tuple))
        # check every single parameter bound
        for single_bound in param_bounds:
            self._validate_single_bound(single_bound)

    def _processParameterDict(self, parameters):
        """Processes all parameters and do checks. Determine the kind of the parameter space.
        :param parameters: parameter dictionary
        :type param: dict

        :return: processed parameter dictionary
        :rtype: dict
        """

        # convert all parameter arrays into lists
        for key, value in parameters.items():
            if isinstance(value, np.ndarray):
                assert len(value.shape) == 1, f"Parameter {key} is not one-dimensional."
                value = value.tolist()
            parameters[key] = value

        # auto detect the parameter kind
        if self.kind is None:
            # auto detect what kind of space we have
            # kind = "point" is a single point in parameter space, one value only
            # kind = "bound" is a bounded parameter space with 2 values: min and max
            # kind = "grid" is a grid space with as many values on each axis as wished

            # first, we assume grid
            self.kind = "grid"
            parameterLengths = [len(value) for key, value in parameters.items()]
            # if all parameters have the same length
            if parameterLengths.count(parameterLengths[0]) == len(parameterLengths):
                if parameterLengths[0] == 1:
                    self.kind = "point"
                elif parameterLengths[0] == 2:
                    self.kind = "bound"
            logging.info(f'Assuming parameter kind "{self.kind}"')

        # do some kind-specific tests
        if self.kind == "bound":
            # check the boundaries
            self._validate_param_bounds(list(parameters.values()))

        # set all parameters as attributes for easy access
        for key, value in parameters.items():
            setattr(self, key, value)

        return parameters

    def _parameterListsToDict(self, keys, values):
        parameters = {}
        assert len(keys) == len(values), "Names and values of parameters are not same length."
        for key, value in zip(keys, values):
            parameters[key] = value
        return parameters
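For kind `grid`, `get_parametrization` delegates to pypet's `cartesian_product`, which expands the per-parameter lists into every combination. The effect can be sketched with itertools (a stand-in, not the pypet call; the exact ordering pypet produces is an assumption here):

```python
import itertools

parameters = {"a": [1, 2], "b": [3.0, 4.0, 5.0]}

# all combinations, flattened back into equal-length per-parameter lists
combos = list(itertools.product(*parameters.values()))
grid = {key: [c[i] for c in combos] for i, key in enumerate(parameters)}
# grid["a"] and grid["b"] now each hold 2 * 3 = 6 entries, one per combination
```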
Initialize parameter space. A parameter space can be initialized in two ways: either `parameters` is a dictionary of the form {"parName1" : [0, 1, 2], "parName2" : [3, 4]}, or `parameters` is a list of names and `parameterValues` are the values of each parameter.
Parameters:

- parameters (dict, list[str, str], required): parameter dictionary or list of names of parameters, e.g. ['x', 'y']
- parameterValues (list[list[float, float]], optional): list of parameter values (must be floats), e.g. [[x_min, x_max], [y_min, y_max], ...], defaults to None
- kind (str, optional): string describing the kind of parameter space, defaults to None:
  - point: a single point in parameter space
  - bound: a bound in parameter space, i.e. two values per parameter
  - grid: a cartesian product over parameters
  - sequence: a sequence of univariate parameter changes - only one will change at a time, other parameters will stay as default
  - explicit: explicitly define a parameter space, i.e. lists of all parameters have to have the same length
  - None: ParameterSpace tries to auto-detect the correct kind
- allow_star_notation (bool, optional): whether to allow star notation in parameter names (MultiModel), defaults to False

Source code in neurolib/utils/parameterSpace.py
def __init__(self, parameters, parameterValues=None, kind=None, allow_star_notation=False):
    """
    Initialize parameter space. Parameter space can be initialized in two ways:
    Either `parameters` is a dictionary of the form `{"parName1" : [0, 1, 2], "parName2" : [3, 4]}`,
    or `parameters` is a list of names and `parameterValues` are values of each parameter.

    :param parameters: parameter dictionary or list of names of parameters e.g. `['x', 'y']`
    :type parameters: `dict, list[str, str]`
    :param parameterValues: list of parameter values (must be floats) e.g. `[[x_min, x_max], [y_min, y_max], ...]`
    :type parameterValues: `list[list[float, float]]`
    :param kind: string describing the kind of parameter space:
        - `point`: a single point in parameter space
        - `bound`: a bound in parameter space, i.e. two values per parameter
        - `grid`: a cartesian product over parameters
        - `sequence`: a sequence of univariate parameter changes - only one will change at a time, other
          parameters will stay as default
        - `explicit`: explicitly define a parameter space, i.e. lists of all parameters have to have the same length
        - None: ParameterSpace tries to auto-detect the correct kind
    :type kind: str
    :param allow_star_notation: whether to allow star notation in parameter names - MultiModel
    :type allow_star_notation: bool
    """
    assert kind in SUPPORTED_KINDS
    self.kind = kind
    self.parameters = parameters
    self.star = allow_star_notation
    # in case a parameter dictionary was given
    if parameterValues is None:
        assert isinstance(
            parameters, dict
        ), "Parameters must be a dict, if no values are given in `parameterValues`"
    else:
        # check if all names are strings
        assert np.all([isinstance(pn, str) for pn in parameters]), "Parameter names must all be strings."
        # check if all parameter values are lists
        assert np.all([isinstance(pv, (list, tuple)) for pv in parameterValues]), "Parameter values must be a list."
        parameters = self._parameterListsToDict(parameters, parameterValues)

    self.parameters = self._processParameterDict(parameters)
    self.parameterNames = list(self.parameters.keys())
    self.parameterValues = list(self.parameters.values())

    # let's create a named tuple of the parameters
    # Note: evolution.py implementation relies on named tuples
    self.named_tuple_constructor = namedtuple("ParameterSpace", sanitize_dot_dict(parameters))
    self.named_tuple = self.named_tuple_constructor(*self.parameterValues)

    # set attributes of this class to make it accessible
    for i, p in enumerate(self.parameters):
        setattr(self, p, self.parameterValues[i])
This function returns a random single parameter from the whole space in the form of {"par1": 1, "par2": 2}.

This function is used by neurolib/optimize/exploration.py to add parameters of the space to pypet (for initialization).
Parameters:

- safe (bool, optional): Return a "safe" parameter or the original. Safe refers to returning python floats, not, for example, numpy.float64 (necessary for pypet). Defaults to False.

Source code in neurolib/utils/parameterSpace.py
def getRandom(self, safe=False):
    """This function returns a random single parameter from the whole space
    in the form of { "par1" : 1, "par2" : 2}.

    This function is used by neurolib/optimize/exploration.py
    to add parameters of the space to pypet (for initialization)

    :param safe: Return a "safe" parameter or the original. Safe refers to
        returning python floats, not, for example numpy.float64 (necessary for pypet).
    :type safe: bool
    """
    randomPar = {}
    if safe:
        for key, value in self.parameters.items():
            random_value = np.random.choice(value)
            if isinstance(random_value, np.float64):
                random_value = float(random_value)
            elif isinstance(random_value, np.int64):
                random_value = int(random_value)
            randomPar[key] = random_value
    else:
        for key, value in self.parameters.items():
            randomPar[key] = np.random.choice(value)
    return randomPar
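The `safe=True` branch matters because `np.random.choice` returns numpy scalar types (np.float64, np.int64) rather than Python builtins. A minimal demonstration of that conversion, using hypothetical parameter names:

```python
import numpy as np

parameters = {"sigma_ou": [0.1, 0.2, 0.3], "K_gl": [100, 200]}  # example space

random_par = {}
for key, value in parameters.items():
    v = np.random.choice(value)
    # np.random.choice yields np.float64 / np.int64; pypet needs Python builtins
    if isinstance(v, np.float64):
        v = float(v)
    elif isinstance(v, np.int64):
        v = int(v)
    random_par[key] = v
```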
"},{"location":"utils/signal/","title":"Signal","text":"Source code in neurolib/utils/signal.py
class Signal:
    name = ""
    label = ""
    signal_type = ""
    unit = ""
    description = ""
    _copy_attributes = [
        "name",
        "label",
        "signal_type",
        "unit",
        "description",
        "process_steps",
    ]
    PROCESS_STEPS_KEY = "process_steps"

    @classmethod
    def from_model_output(cls, model, group="", time_in_ms=True):
        """
        Initial Signal from modelling output.
        """
        assert isinstance(model, Model)
        return cls(model.xr(group=group), time_in_ms=time_in_ms)

    @classmethod
    def from_file(cls, filename):
        """
        Load signal from saved file.

        :param filename: filename for the Signal
        :type filename: str
        """
        if not filename.endswith(NC_EXT):
            filename += NC_EXT
        # load NC file
        xarray = xr.load_dataarray(filename)
        # init class
        signal = cls(xarray)
        # if nc file has attributes, copy them to signal class
        if xarray.attrs:
            process_steps = []
            for k, v in xarray.attrs.items():
                if cls.PROCESS_STEPS_KEY in k:
                    idx = int(k[len(cls.PROCESS_STEPS_KEY) + 1 :])
                    process_steps.insert(idx, v)
                else:
                    setattr(signal, k, v)
        else:
            logging.warning("No metadata found, setting empty...")
            process_steps = [f"raw {signal.signal_type} signal: {signal.start_time}--" f"{signal.end_time}s"]
        setattr(signal, cls.PROCESS_STEPS_KEY, process_steps)
        return signal

    def __init__(self, data, time_in_ms=False):
        """
        :param data: data for the signal, assumes time dimension with time in seconds
        :type data: xr.DataArray
        :param time_in_ms: whether time dimension is in ms
        :type time_in_ms: bool
        """
        assert isinstance(data, xr.DataArray)
        data = deepcopy(data)
        assert "time" in data.dims, "DataArray must have time axis"
        if time_in_ms:
            data["time"] = data["time"] / 1000.0
        data["time"] = np.around(data["time"], 6)
        self.data = data
        # assert time dimension is last
        self.data = self.data.transpose(*(self.dims_not_time + ["time"]))
        # compute dt and sampling frequency
        self.dt = np.around(np.diff(data.time).mean(), 6)
        self.sampling_frequency = 1.0 / self.dt
        self.process_steps = [f"raw {self.signal_type} signal: {self.start_time}--{self.end_time}s"]

    def __str__(self):
        """
        String representation.
        """
        return (
            f"{self.name} representing {self.signal_type} signal with unit of "
            f"{self.unit} with user-provided description: `{self.description}`"
            f". Shape of the signal is {self.shape} with dimensions "
            f"{self.data.dims}. Signal starts at {self.start_time} and ends at "
            f"{self.end_time}."
        )

    def __repr__(self):
        """
        Representation.
        """
        return self.__str__()

    def __eq__(self, other):
        """
        Comparison operator.

        :param other: other `Signal` to compare with
        :type other: `Signal`
        :return: whether two `Signals` are the same
        :rtype: bool
        """
        assert isinstance(other, Signal)
        # assert data are the same
        try:
            xr.testing.assert_allclose(self.data, other.data)
            eq = True
        except AssertionError:
            eq = False
        # check attributes, but if not equal, only warn the user
        for attr in self._copy_attributes:
            if getattr(self, attr) != getattr(other, attr):
                logging.warning(f"`{attr}` not equal between signals.")
        return eq

    def __getitem__(self, pos):
        """
        Get item selects in output dimension.
        """
        add_steps = [f"select `{pos}` output"]
        return self.__constructor__(self.data.sel(output=pos)).__finalize__(self, add_steps)

    def __finalize__(self, other, add_steps=None):
        """
        Copy attributes from other to self. Used when constructing class
        instance with different data, but same metadata.

        :param other: other instance of `Signal`
        :type other: `Signal`
        :param add_steps: add steps to preprocessing
        :type add_steps: list|None
        """
        assert isinstance(other, Signal)
        for attr in self._copy_attributes:
            setattr(self, attr, deepcopy(getattr(other, attr)))
        if add_steps is not None:
            self.process_steps += add_steps
        return self

    @property
    def __constructor__(self):
        """
        Return constructor, so that each child class would initiate a new
        instance of the correct class, i.e. first in the method resolution
        order.
        """
        return self.__class__.mro()[0]

    def _write_attrs_to_xr(self):
        """
        Copy attributes to xarray before saving.
        """
        # write attributes to xarray
        for attr in self._copy_attributes:
            value = getattr(self, attr)
            # if list need to unwrap
            if isinstance(value, (list, tuple)):
                for idx, val in enumerate(value):
                    self.data.attrs[f"{attr}_{idx}"] = val
            else:
                self.data.attrs[attr] = deepcopy(value)

    def save(self, filename):
        """
        Save signal.

        :param filename: filename to save, currently saves to netCDF file, which is natively supported by xarray
        :type filename: str
        """
        self._write_attrs_to_xr()
        if not filename.endswith(NC_EXT):
            filename += NC_EXT
        self.data.to_netcdf(filename)

    def iterate(self, return_as="signal"):
        """
        Return iterator over columns, so univariate measures can be computed
        per column.
Loops over tuples as (variable name, timeseries).\n\n :param return_as: how to return columns: `xr` as xr.DataArray, `signal` as\n instance of NeuroSignal with the same attributes as the mother signal\n :type return_as: str\n \"\"\"\n try:\n stacked = self.data.stack({\"all\": self.dims_not_time})\n except ValueError:\n logging.warning(\"No dimensions along which to stack...\")\n stacked = self.data.expand_dims(\"all\")\n\n if return_as == \"xr\":\n yield from stacked.groupby(\"all\")\n elif return_as == \"signal\":\n for name_coords, column in stacked.groupby(\"all\"):\n if not isinstance(name_coords, (list, tuple)):\n name_coords = [name_coords]\n name_dict = {k: v for k, v in zip(self.dims_not_time, name_coords)}\n yield name_dict, self.__constructor__(column).__finalize__(self, [f\"select {column.name}\"])\n else:\n raise ValueError(f\"Data type not understood: {return_as}\")\n\n def sel(self, sel_args, inplace=True):\n\"\"\"\n Subselect part of signal using xarray's `sel`, i.e. selecting by actual\n physical index, hence time in seconds.\n\n :param sel_args: arguments you'd give to xr.sel(), i.e. slice of times\n you want to select, in seconds as a len=2 list or tuple\n :type sel_args: tuple|list\n :param inplace: whether to do the operation in place or return\n :type inplace: bool\n \"\"\"\n assert len(sel_args) == 2, \"Must provide 2 arguments\"\n selected = self.data.sel(time=slice(sel_args[0], sel_args[1]))\n add_steps = [f\"select {sel_args[0] or 'x'}:{sel_args[1] or 'x'}s\"]\n if inplace:\n self.data = selected\n self.process_steps += add_steps\n else:\n return self.__constructor__(selected).__finalize__(self, add_steps)\n\n def isel(self, isel_args, inplace=True):\n\"\"\"\n Subselect part of signal using xarray's `isel`, i.e. selecting by index,\n hence integers.\n\n :param loc_args: arguments you'd give to xr.isel(), i.e. 
slice of\n indices you want to select, in seconds as a len=2 list or tuple\n :type loc_args: tuple|list\n :param inplace: whether to do the operation in place or return\n :type inplace: bool\n \"\"\"\n assert len(isel_args) == 2, \"Must provide 2 arguments\"\n selected = self.data.isel(time=slice(isel_args[0], isel_args[1]))\n start = isel_args[0] * self.dt if isel_args[0] is not None else \"x\"\n end = isel_args[1] * self.dt if isel_args[1] is not None else \"x\"\n add_steps = [f\"select {start}:{end}s\"]\n if inplace:\n self.data = selected\n self.process_steps += add_steps\n else:\n return self.__constructor__(selected).__finalize__(self, add_steps)\n\n def rolling(self, roll_over, function=np.mean, dropnans=True, inplace=True):\n\"\"\"\n Return rolling reduction over signal's time dimension. The window is\n centered around the midpoint.\n\n :param roll_over: window to use, in seconds\n :type roll_over: float\n :param function: function to use for reduction\n :type function: callable\n :param dropnans: whether to drop NaNs - will shorten time dimension, or\n not\n :type dropnans: bool\n :param inplace: whether to do the operation in place or return\n :type inplace: bool\n \"\"\"\n assert callable(function)\n rolling = self.data.rolling(time=int(roll_over * self.sampling_frequency), center=True).reduce(function)\n add_steps = [f\"rolling {function.__name__} over {roll_over}s\"]\n if dropnans:\n rolling = rolling.dropna(\"time\")\n add_steps[0] += \"; drop NaNs\"\n if inplace:\n self.data = rolling\n self.process_steps += add_steps\n else:\n return self.__constructor__(rolling).__finalize__(self, add_steps)\n\n def sliding_window(self, length, step=1, window_function=\"boxcar\", lengths_in_seconds=False):\n\"\"\"\n Return iterator over sliding windows with windowing function applied.\n Each window has length `length` and each is translated by `step` steps.\n For no windowing function use \"boxcar\". 
If the last window would have\n the same length as other, it is omitted, i.e. last window does not have\n to end with the final timeseries point!\n\n :param length: length of the window, can be index or time in seconds,\n see `lengths_in_seconds`\n :type length: int|float\n :param step: how much to translate window in the temporal sense, can be\n index or time in seconds, see `lengths_in_seconds`\n :type step: int|float\n :param window_function: windowing function to use, this is passed to\n `get_window()`; see `scipy.signal.windows.get_window` documentation\n :type window_function: str|tuple|float\n :param lengths_in_seconds: if True, `length` and `step` are interpreted\n in seconds, if False they are indices\n :type lengths_in_seconds: bool\n :yield: generator with windowed Signals\n \"\"\"\n if lengths_in_seconds:\n length = int(length / self.dt)\n step = int(step / self.dt)\n assert (\n length < self.data.time.shape[0]\n ), f\"Length must be smaller than time span of the timeseries: {self.data.time.shape[0]}\"\n assert step <= length, \"Step cannot be larger than length, some part of timeseries would be omitted!\"\n current_idx = 0\n add_steps = f\"{str(window_function)} window: \"\n windowing_function = get_window(window_function, Nx=length)\n while current_idx <= (self.data.time.shape[0] - length):\n yield self.__constructor__(\n self.data.isel(time=slice(current_idx, current_idx + length)) * windowing_function\n ).__finalize__(self, [add_steps + f\"{current_idx}:{current_idx + length}\"])\n current_idx += step\n\n @property\n def shape(self):\n\"\"\"\n Return shape of the data. 
Time axis is the first one.\n \"\"\"\n return self.data.shape\n\n @property\n def dims_not_time(self):\n\"\"\"\n Return list of dimensions that are not time.\n \"\"\"\n return [dim for dim in self.data.dims if dim != \"time\"]\n\n @property\n def coords_not_time(self):\n\"\"\"\n Return dict with all coordinates except time.\n \"\"\"\n return {k: v.values for k, v in self.data.coords.items() if k != \"time\"}\n\n @property\n def start_time(self):\n\"\"\"\n Return starting time of the signal.\n \"\"\"\n return self.data.time.values[0]\n\n @property\n def end_time(self):\n\"\"\"\n Return ending time of the signal.\n \"\"\"\n return self.data.time.values[-1]\n\n @property\n def time(self):\n\"\"\"\n Return time vector.\n \"\"\"\n return self.data.time.values\n\n @property\n def preprocessing_steps(self):\n\"\"\"\n Return preprocessing steps done on the data.\n \"\"\"\n return \" -> \".join(self.process_steps)\n\n def pad(self, how_much, in_seconds=False, padding_type=\"constant\", side=\"both\", inplace=True, **kwargs):\n\"\"\"\n Pad signal by `how_much` on given side of given type.\n\n :param how_much: how much we should pad, can be time points, or seconds,\n see `in_seconds`\n :type how_much: float|int\n :param in_seconds: whether `how_much` is in seconds, if False, it is\n number of time points\n :type in_seconds: bool\n :param padding_type: how to pad the signal, see `np.pad` documentation\n :type padding_type: str\n :param side: which side to pad - \"before\", \"after\", or \"both\"\n :type side: str\n :param inplace: whether to do the operation in place or return\n :type inplace: bool\n :kwargs: passed to `np.pad`\n \"\"\"\n if in_seconds:\n how_much = int(np.around(how_much / self.dt))\n if side == \"before\":\n pad_width = (how_much, 0)\n pad_times = np.arange(-how_much, 0) * self.dt + self.data.time.values[0]\n new_times = np.concatenate([pad_times, self.data.time.values], axis=0)\n elif side == \"after\":\n pad_width = (0, how_much)\n pad_times = np.arange(1, 
how_much + 1) * self.dt + self.data.time.values[-1]\n new_times = np.concatenate([self.data.time.values, pad_times], axis=0)\n elif side == \"both\":\n pad_width = (how_much, how_much)\n pad_before = np.arange(-how_much, 0) * self.dt + self.data.time.values[0]\n pad_after = np.arange(1, how_much + 1) * self.dt + self.data.time.values[-1]\n new_times = np.concatenate([pad_before, self.data.time.values, pad_after], axis=0)\n side += \" sides\"\n else:\n raise ValueError(f\"Unknown padding side: {side}\")\n # add padding for other axes than time - zeroes\n pad_width = [(0, 0)] * len(self.dims_not_time) + [pad_width]\n padded = np.pad(self.data.values, pad_width, mode=padding_type, **kwargs)\n # to dataframe\n padded = xr.DataArray(padded, dims=self.data.dims, coords={**self.coords_not_time, \"time\": new_times})\n add_steps = [f\"{how_much * self.dt}s {padding_type} {side} padding\"]\n if inplace:\n self.data = padded\n self.process_steps += add_steps\n else:\n return self.__constructor__(padded).__finalize__(self, add_steps)\n\n def normalize(self, std=False, inplace=True):\n\"\"\"\n De-mean the timeseries. Optionally also standardise.\n\n :param std: normalize by std, i.e. 
to unit variance\n :type std: bool\n :param inplace: whether to do the operation in place or return\n :type inplace: bool\n \"\"\"\n\n def norm_func(x, dim):\n demeaned = x - x.mean(dim=dim)\n if std:\n return demeaned / x.std(dim=dim)\n else:\n return demeaned\n\n normalized = norm_func(self.data, dim=\"time\")\n add_steps = [\"normalize\", \"standardize\"] if std else [\"normalize\"]\n if inplace:\n self.data = normalized\n self.process_steps += add_steps\n else:\n return self.__constructor__(normalized).__finalize__(self, add_steps)\n\n def resample(self, to_frequency, inplace=True):\n\"\"\"\n Resample signal to target frequency.\n\n :param to_frequency: target frequency of the signal, in Hz\n :type to_frequency: float\n :param inplace: whether to do the operation in place or return\n :type inplace: bool\n \"\"\"\n to_frequency = float(to_frequency)\n try:\n from mne.filter import resample\n\n resample_func = partial(\n resample, up=to_frequency, down=self.sampling_frequency, npad=\"auto\", axis=-1, pad=\"edge\"\n )\n except ImportError:\n logging.warning(\"`mne` module not found, falling back to basic scipy's function\")\n\n def resample_func(x):\n return scipy_resample(\n x,\n num=int(round((to_frequency / self.sampling_frequency) * self.data.shape[-1])),\n axis=-1,\n window=\"boxcar\",\n )\n\n resampled = resample_func(self.data.values)\n # construct new times\n new_times = (np.arange(resampled.shape[-1], dtype=float) / to_frequency) + self.data.time.values[0]\n # to dataframe\n resampled = xr.DataArray(resampled, dims=self.data.dims, coords={**self.coords_not_time, \"time\": new_times})\n add_steps = [f\"resample to {to_frequency}Hz\"]\n if inplace:\n self.data = resampled\n self.sampling_frequency = to_frequency\n self.dt = np.around(np.diff(resampled.time).mean(), 6)\n self.process_steps += add_steps\n else:\n return self.__constructor__(resampled).__finalize__(self, add_steps)\n\n def hilbert_transform(self, return_as=\"complex\", inplace=True):\n\"\"\"\n 
Perform hilbert transform on the signal resulting in analytic signal.\n\n :param return_as: what to return\n `complex` will compute only analytical signal\n `amplitude` will compute amplitude, hence abs(H(x))\n `phase_wrapped` will compute phase, hence angle(H(x)), in -pi,pi\n `phase_unwrapped` will compute phase in a continuous sense, hence\n monotonic\n :param inplace: whether to do the operation in place or return\n :type inplace: bool\n \"\"\"\n analytic = hilbert(self.data, axis=-1)\n if return_as == \"amplitude\":\n analytic = np.abs(analytic)\n add_steps = [\"Hilbert - amplitude\"]\n elif return_as == \"phase_unwrapped\":\n analytic = np.unwrap(np.angle(analytic))\n add_steps = [\"Hilbert - unwrapped phase\"]\n elif return_as == \"phase_wrapped\":\n analytic = np.angle(analytic)\n add_steps = [\"Hilbert - wrapped phase\"]\n elif return_as == \"complex\":\n add_steps = [\"Hilbert - complex\"]\n else:\n raise ValueError(f\"Do not know how to return: {return_as}\")\n\n analytic = xr.DataArray(analytic, dims=self.data.dims, coords=self.data.coords)\n if inplace:\n self.data = analytic\n self.process_steps += add_steps\n else:\n return self.__constructor__(analytic).__finalize__(self, add_steps)\n\n def detrend(self, segments=None, inplace=True):\n\"\"\"\n Linearly detrend signal. 
If segments are given, detrending will be\n performed in each part.\n\n :param segments: segments for detrending, if None will detrend whole\n signal, given as indices of the time array\n :type segments: list|None\n :param inplace: whether to do the operation in place or return\n :type inplace: bool\n \"\"\"\n segments = segments or 0\n detrended = detrend(self.data, type=\"linear\", bp=segments, axis=-1)\n detrended = xr.DataArray(detrended, dims=self.data.dims, coords=self.data.coords)\n segments_text = f\" with segments: {segments}\" if segments != 0 else \"\"\n add_steps = [f\"detrend{segments_text}\"]\n if inplace:\n self.data = detrended\n self.process_steps += add_steps\n else:\n return self.__constructor__(detrended).__finalize__(self, add_steps)\n\n def filter(self, low_freq, high_freq, l_trans_bandwidth=\"auto\", h_trans_bandwidth=\"auto\", inplace=True, **kwargs):\n\"\"\"\n Filter data. Can be:\n low-pass (low_freq is None, high_freq is not None),\n high-pass (high_freq is None, low_freq is not None),\n band-pass (l_freq < h_freq),\n band-stop (l_freq > h_freq) filter type\n\n :param low_freq: frequency below which to filter the data\n :type low_freq: float|None\n :param high_freq: frequency above which to filter the data\n :type high_freq: float|None\n :param l_trans_bandwidth: transition band width for low frequency\n :type l_trans_bandwidth: float|str\n :param h_trans_bandwidth: transition band width for high frequency\n :type h_trans_bandwidth: float|str\n :param inplace: whether to do the operation in place or return\n :type inplace: bool\n :**kwargs: possible keywords to `mne.filter.create_filter`:\n `filter_length`=\"auto\",\n `method`=\"fir\",\n `iir_params`=None\n `phase`=\"zero\",\n `fir_window`=\"hamming\",\n `fir_design`=\"firwin\"\n \"\"\"\n try:\n from mne.filter import filter_data\n\n except ImportError:\n logging.warning(\"`mne` module not found, falling back to basic scipy's function\")\n filter_data = scipy_iir_filter_data\n\n filtered 
= filter_data(\n self.data.values, # times has to be the last axis\n sfreq=self.sampling_frequency,\n l_freq=low_freq,\n h_freq=high_freq,\n l_trans_bandwidth=l_trans_bandwidth,\n h_trans_bandwidth=h_trans_bandwidth,\n **kwargs,\n )\n add_steps = [f\"filter: low {low_freq or 'x'}Hz - high {high_freq or 'x'}Hz\"]\n # to dataframe\n filtered = xr.DataArray(filtered, dims=self.data.dims, coords=self.data.coords)\n if inplace:\n self.data = filtered\n self.process_steps += add_steps\n else:\n return self.__constructor__(filtered).__finalize__(self, add_steps)\n\n def functional_connectivity(self, fc_function=np.corrcoef):\n\"\"\"\n Compute and return functional connectivity from the data.\n\n :param fc_function: function which to use for FC computation, should\n take 2D array as space x time and convert it to space x space with\n desired measure\n \"\"\"\n if len(self.data[\"space\"]) <= 1:\n logging.error(\"Cannot compute functional connectivity from one timeseries.\")\n return None\n if self.data.ndim == 3:\n assert callable(fc_function)\n fcs = []\n for output in self.data[\"output\"]:\n current_slice = self.data.sel({\"output\": output})\n assert current_slice.ndim == 2\n fcs.append(fc_function(current_slice.values))\n\n return xr.DataArray(\n np.array(fcs),\n dims=[\"output\", \"space\", \"space\"],\n coords={\"output\": self.data.coords[\"output\"], \"space\": self.data.coords[\"space\"]},\n )\n if self.data.ndim == 2:\n return xr.DataArray(\n fc_function(self.data.values),\n dims=[\"space\", \"space\"],\n coords={\"space\": self.data.coords[\"space\"]},\n )\n\n def apply(self, func, inplace=True):\n\"\"\"\n Apply func for each timeseries.\n\n :param func: function to be applied for each 1D timeseries\n :type func: callable\n :param inplace: whether to do the operation in place or return\n :type inplace: bool\n \"\"\"\n assert callable(func)\n try:\n # this will work for element-wise function that does not reduce dimensions\n processed = xr.apply_ufunc(func, 
self.data, input_core_dims=[[\"time\"]], output_core_dims=[[\"time\"]])\n add_steps = [f\"apply `{func.__name__}` function over time dim\"]\n if inplace:\n self.data = processed\n self.process_steps += add_steps\n else:\n return self.__constructor__(processed).__finalize__(self, add_steps)\n except ValueError:\n # this works for functions that reduce time dimension\n processed = xr.apply_ufunc(func, self.data, input_core_dims=[[\"time\"]])\n logging.warning(\n f\"Shape changed after operation! Old shape: {self.shape}, new \"\n f\"shape: {processed.shape}; Cannot cast to Signal class, \"\n \"returning as `xr.DataArray`\"\n )\n return processed\n
def __eq__(self, other):\n\"\"\"\n Comparison operator.\n\n :param other: other `Signal` to compare with\n :type other: `Signal`\n :return: whether two `Signals` are the same\n :rtype: bool\n \"\"\"\n assert isinstance(other, Signal)\n # assert data are the same\n try:\n xr.testing.assert_allclose(self.data, other.data)\n eq = True\n except AssertionError:\n eq = False\n # check attributes, but if not equal, only warn the user\n for attr in self._copy_attributes:\n if getattr(self, attr) != getattr(other, attr):\n logging.warning(f\"`{attr}` not equal between signals.\")\n return eq\n
__finalize__

Copy attributes from other to self. Used when constructing a class instance with different data but the same metadata.

Parameters:
- other (`Signal`): other instance of `Signal`. Required.
- add_steps (list|None): add steps to preprocessing. Default: None.

Source code in neurolib/utils/signal.py
def __finalize__(self, other, add_steps=None):\n\"\"\"\n Copy attributes from other to self. Used when constructing class\n instance with different data, but same metadata.\n\n :param other: other instance of `Signal`\n :type other: `Signal`\n :param add_steps: add steps to preprocessing\n :type add_steps: list|None\n \"\"\"\n assert isinstance(other, Signal)\n for attr in self._copy_attributes:\n setattr(self, attr, deepcopy(getattr(other, attr)))\n if add_steps is not None:\n self.process_steps += add_steps\n return self\n
__init__

Parameters:
- data (xr.DataArray): data for the signal; assumes a time dimension with time in seconds. Required.
- time_in_ms (bool): whether the time dimension is in ms. Default: False.

Source code in neurolib/utils/signal.py
def __init__(self, data, time_in_ms=False):\n\"\"\"\n :param data: data for the signal, assumes time dimension with time in seconds\n :type data: xr.DataArray\n :param time_in_ms: whether time dimension is in ms\n :type time_in_ms: bool\n \"\"\"\n assert isinstance(data, xr.DataArray)\n data = deepcopy(data)\n assert \"time\" in data.dims, \"DataArray must have time axis\"\n if time_in_ms:\n data[\"time\"] = data[\"time\"] / 1000.0\n data[\"time\"] = np.around(data[\"time\"], 6)\n self.data = data\n # assert time dimension is last\n self.data = self.data.transpose(*(self.dims_not_time + [\"time\"]))\n # compute dt and sampling frequency\n self.dt = np.around(np.diff(data.time).mean(), 6)\n self.sampling_frequency = 1.0 / self.dt\n self.process_steps = [f\"raw {self.signal_type} signal: {self.start_time}--{self.end_time}s\"]\n
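The constructor derives `dt` and `sampling_frequency` from the time axis. A minimal numpy sketch of that bookkeeping, on a made-up time vector rather than an actual `xr.DataArray`:

```python
import numpy as np

# The constructor's time bookkeeping, sketched on a made-up time vector:
# round times to 6 decimals, take dt as the mean time step (also rounded
# to 6 decimals), and derive the sampling frequency as 1/dt.
time = np.around(np.arange(0.0, 1.0, 0.001), 6)  # 1 s of data in seconds
dt = np.around(np.diff(time).mean(), 6)
sampling_frequency = 1.0 / dt
```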
def __str__(self):\n\"\"\"\n String representation.\n \"\"\"\n return (\n f\"{self.name} representing {self.signal_type} signal with unit of \"\n f\"{self.unit} with user-provided description: `{self.description}`\"\n f\". Shape of the signal is {self.shape} with dimensions \"\n f\"{self.data.dims}. Signal starts at {self.start_time} and ends at \"\n f\"{self.end_time}.\"\n )\n
apply

Apply func for each timeseries.

Source code in neurolib/utils/signal.py
def apply(self, func, inplace=True):\n\"\"\"\n Apply func for each timeseries.\n\n :param func: function to be applied for each 1D timeseries\n :type func: callable\n :param inplace: whether to do the operation in place or return\n :type inplace: bool\n \"\"\"\n assert callable(func)\n try:\n # this will work for element-wise function that does not reduce dimensions\n processed = xr.apply_ufunc(func, self.data, input_core_dims=[[\"time\"]], output_core_dims=[[\"time\"]])\n add_steps = [f\"apply `{func.__name__}` function over time dim\"]\n if inplace:\n self.data = processed\n self.process_steps += add_steps\n else:\n return self.__constructor__(processed).__finalize__(self, add_steps)\n except ValueError:\n # this works for functions that reduce time dimension\n processed = xr.apply_ufunc(func, self.data, input_core_dims=[[\"time\"]])\n logging.warning(\n f\"Shape changed after operation! Old shape: {self.shape}, new \"\n f\"shape: {processed.shape}; Cannot cast to Signal class, \"\n \"returning as `xr.DataArray`\"\n )\n return processed\n
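The two branches of `apply` distinguish element-wise functions (time axis preserved) from reducing functions (time axis collapsed). A numpy-only stand-in for the xarray `apply_ufunc` logic, on synthetic data:

```python
import numpy as np

# A numpy stand-in for the two branches of `apply` (the class uses
# xr.apply_ufunc with "time" as the core dimension): an element-wise
# function keeps the time axis, a reducing one collapses it.
data = np.arange(12, dtype=float).reshape(3, 4)        # space x time

elementwise = np.apply_along_axis(np.tanh, -1, data)   # shape preserved
reduced = np.apply_along_axis(np.mean, -1, data)       # time axis dropped
```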
detrend

Linearly detrend signal. If segments are given, detrending will be performed in each part.

Parameters:
- segments (list|None): segments for detrending, given as indices of the time array; if None, the whole signal is detrended. Default: None.
- inplace (bool): whether to do the operation in place or return. Default: True.

Source code in neurolib/utils/signal.py
def detrend(self, segments=None, inplace=True):\n\"\"\"\n Linearly detrend signal. If segments are given, detrending will be\n performed in each part.\n\n :param segments: segments for detrending, if None will detrend whole\n signal, given as indices of the time array\n :type segments: list|None\n :param inplace: whether to do the operation in place or return\n :type inplace: bool\n \"\"\"\n segments = segments or 0\n detrended = detrend(self.data, type=\"linear\", bp=segments, axis=-1)\n detrended = xr.DataArray(detrended, dims=self.data.dims, coords=self.data.coords)\n segments_text = f\" with segments: {segments}\" if segments != 0 else \"\"\n add_steps = [f\"detrend{segments_text}\"]\n if inplace:\n self.data = detrended\n self.process_steps += add_steps\n else:\n return self.__constructor__(detrended).__finalize__(self, add_steps)\n
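The method delegates to scipy's linear detrend along the last axis. A short sketch of the whole-signal and piecewise cases; the trend and the breakpoint are made-up example values:

```python
import numpy as np
from scipy.signal import detrend

# What Signal.detrend delegates to: scipy's linear detrend along the last
# axis, with optional breakpoints (`bp`) for piecewise fits.
t = np.arange(200, dtype=float)
x = 0.5 * t + np.sin(0.3 * t)                    # linear trend + oscillation

whole = detrend(x, type="linear", bp=0)          # single fit, as segments=None
piecewise = detrend(x, type="linear", bp=[100])  # separate fit per segment
```

After a linear least-squares detrend, the residual has (numerically) zero mean.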
filter

Filter data. Can be:
- low-pass (low_freq is None, high_freq is not None)
- high-pass (high_freq is None, low_freq is not None)
- band-pass (l_freq < h_freq)
- band-stop (l_freq > h_freq)

**kwargs: possible keywords to `mne.filter.create_filter`: `filter_length`="auto", `method`="fir", `iir_params`=None, `phase`="zero", `fir_window`="hamming", `fir_design`="firwin".

Parameters:
- low_freq (float|None): frequency below which to filter the data. Required.
- high_freq (float|None): frequency above which to filter the data. Required.
- l_trans_bandwidth (float|str): transition band width for low frequency. Default: "auto".
- h_trans_bandwidth (float|str): transition band width for high frequency. Default: "auto".
- inplace (bool): whether to do the operation in place or return. Default: True.

Source code in neurolib/utils/signal.py
def filter(self, low_freq, high_freq, l_trans_bandwidth=\"auto\", h_trans_bandwidth=\"auto\", inplace=True, **kwargs):\n\"\"\"\n Filter data. Can be:\n low-pass (low_freq is None, high_freq is not None),\n high-pass (high_freq is None, low_freq is not None),\n band-pass (l_freq < h_freq),\n band-stop (l_freq > h_freq) filter type\n\n :param low_freq: frequency below which to filter the data\n :type low_freq: float|None\n :param high_freq: frequency above which to filter the data\n :type high_freq: float|None\n :param l_trans_bandwidth: transition band width for low frequency\n :type l_trans_bandwidth: float|str\n :param h_trans_bandwidth: transition band width for high frequency\n :type h_trans_bandwidth: float|str\n :param inplace: whether to do the operation in place or return\n :type inplace: bool\n :**kwargs: possible keywords to `mne.filter.create_filter`:\n `filter_length`=\"auto\",\n `method`=\"fir\",\n `iir_params`=None\n `phase`=\"zero\",\n `fir_window`=\"hamming\",\n `fir_design`=\"firwin\"\n \"\"\"\n try:\n from mne.filter import filter_data\n\n except ImportError:\n logging.warning(\"`mne` module not found, falling back to basic scipy's function\")\n filter_data = scipy_iir_filter_data\n\n filtered = filter_data(\n self.data.values, # times has to be the last axis\n sfreq=self.sampling_frequency,\n l_freq=low_freq,\n h_freq=high_freq,\n l_trans_bandwidth=l_trans_bandwidth,\n h_trans_bandwidth=h_trans_bandwidth,\n **kwargs,\n )\n add_steps = [f\"filter: low {low_freq or 'x'}Hz - high {high_freq or 'x'}Hz\"]\n # to dataframe\n filtered = xr.DataArray(filtered, dims=self.data.dims, coords=self.data.coords)\n if inplace:\n self.data = filtered\n self.process_steps += add_steps\n else:\n return self.__constructor__(filtered).__finalize__(self, add_steps)\n
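The class itself delegates to `mne.filter.filter_data` (or a scipy fallback when `mne` is missing). As a conceptual stand-in only, here is a zero-phase band-pass built from scipy's Butterworth design; the sampling rate, the two tones, and the 8-12 Hz band are made-up example values, not anything the library prescribes:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Conceptual stand-in for the band-pass case (low_freq < high_freq):
# keep the band, attenuate everything else, with zero phase shift.
fs = 250.0
t = np.arange(0, 5, 1 / fs)
x = np.sin(2 * np.pi * 2 * t) + np.sin(2 * np.pi * 40 * t)  # 2 Hz + 40 Hz

sos = butter(4, [8.0, 12.0], btype="bandpass", fs=fs, output="sos")
filtered = sosfiltfilt(sos, x)   # zero-phase: forward + backward pass

# Both tones lie outside the 8-12 Hz band, so the output power collapses.
```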
functional_connectivity

Compute and return functional connectivity from the data.

Parameters:
- fc_function (callable): function to use for FC computation; should take a 2D array as space x time and convert it to space x space with the desired measure. Default: np.corrcoef.

Source code in neurolib/utils/signal.py
def functional_connectivity(self, fc_function=np.corrcoef):\n\"\"\"\n Compute and return functional connectivity from the data.\n\n :param fc_function: function which to use for FC computation, should\n take 2D array as space x time and convert it to space x space with\n desired measure\n \"\"\"\n if len(self.data[\"space\"]) <= 1:\n logging.error(\"Cannot compute functional connectivity from one timeseries.\")\n return None\n if self.data.ndim == 3:\n assert callable(fc_function)\n fcs = []\n for output in self.data[\"output\"]:\n current_slice = self.data.sel({\"output\": output})\n assert current_slice.ndim == 2\n fcs.append(fc_function(current_slice.values))\n\n return xr.DataArray(\n np.array(fcs),\n dims=[\"output\", \"space\", \"space\"],\n coords={\"output\": self.data.coords[\"output\"], \"space\": self.data.coords[\"space\"]},\n )\n if self.data.ndim == 2:\n return xr.DataArray(\n fc_function(self.data.values),\n dims=[\"space\", \"space\"],\n coords={\"space\": self.data.coords[\"space\"]},\n )\n
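The default measure is `np.corrcoef`, which maps a 2D space x time array to a space x space correlation matrix — exactly what the method then wraps into an `xr.DataArray`. A small demo on synthetic data:

```python
import numpy as np

# The default FC measure: np.corrcoef over space x time yields space x space.
rng = np.random.default_rng(0)
ts = rng.standard_normal((3, 500))              # 3 regions, 500 time points
ts[2] = ts[0] + 0.1 * rng.standard_normal(500)  # region 2 tracks region 0

fc = np.corrcoef(ts)   # shape (3, 3), ones on the diagonal
```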
hilbert_transform

Perform Hilbert transform on the signal, resulting in the analytic signal.

Parameters:
- return_as (str): what to return; `complex` computes only the analytic signal, `amplitude` computes the amplitude, hence abs(H(x)), `phase_wrapped` computes the phase, hence angle(H(x)), in [-pi, pi], `phase_unwrapped` computes the phase in a continuous sense, hence monotonic. Default: "complex".
- inplace (bool): whether to do the operation in place or return. Default: True.

Source code in neurolib/utils/signal.py
def hilbert_transform(self, return_as=\"complex\", inplace=True):\n\"\"\"\n Perform hilbert transform on the signal resulting in analytic signal.\n\n :param return_as: what to return\n `complex` will compute only analytical signal\n `amplitude` will compute amplitude, hence abs(H(x))\n `phase_wrapped` will compute phase, hence angle(H(x)), in -pi,pi\n `phase_unwrapped` will compute phase in a continuous sense, hence\n monotonic\n :param inplace: whether to do the operation in place or return\n :type inplace: bool\n \"\"\"\n analytic = hilbert(self.data, axis=-1)\n if return_as == \"amplitude\":\n analytic = np.abs(analytic)\n add_steps = [\"Hilbert - amplitude\"]\n elif return_as == \"phase_unwrapped\":\n analytic = np.unwrap(np.angle(analytic))\n add_steps = [\"Hilbert - unwrapped phase\"]\n elif return_as == \"phase_wrapped\":\n analytic = np.angle(analytic)\n add_steps = [\"Hilbert - wrapped phase\"]\n elif return_as == \"complex\":\n add_steps = [\"Hilbert - complex\"]\n else:\n raise ValueError(f\"Do not know how to return: {return_as}\")\n\n analytic = xr.DataArray(analytic, dims=self.data.dims, coords=self.data.coords)\n if inplace:\n self.data = analytic\n self.process_steps += add_steps\n else:\n return self.__constructor__(analytic).__finalize__(self, add_steps)\n
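The four `return_as` modes correspond directly to scipy/numpy calls in the source. Sketched on a synthetic 10 Hz tone (made-up parameters):

```python
import numpy as np
from scipy.signal import hilbert

# The four `return_as` modes: analytic signal H(x), its amplitude |H(x)|,
# and its wrapped/unwrapped phase angle(H(x)).
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * 10 * t)           # unit-amplitude tone

analytic = hilbert(x, axis=-1)           # return_as="complex"
amplitude = np.abs(analytic)             # return_as="amplitude"
wrapped = np.angle(analytic)             # return_as="phase_wrapped"
unwrapped = np.unwrap(wrapped)           # return_as="phase_unwrapped"

# For a pure tone the envelope is ~1 and the unwrapped phase grows linearly.
```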
isel

Subselect part of signal using xarray's `isel`, i.e. selecting by index, hence integers.

Parameters:
- isel_args (tuple|list): arguments you'd give to xr.isel(), i.e. the slice of indices you want to select, as a len=2 list or tuple of integers. Required.
- inplace (bool): whether to do the operation in place or return. Default: True.

Source code in neurolib/utils/signal.py
def isel(self, isel_args, inplace=True):\n\"\"\"\n Subselect part of signal using xarray's `isel`, i.e. selecting by index,\n hence integers.\n\n :param loc_args: arguments you'd give to xr.isel(), i.e. slice of\n indices you want to select, in seconds as a len=2 list or tuple\n :type loc_args: tuple|list\n :param inplace: whether to do the operation in place or return\n :type inplace: bool\n \"\"\"\n assert len(isel_args) == 2, \"Must provide 2 arguments\"\n selected = self.data.isel(time=slice(isel_args[0], isel_args[1]))\n start = isel_args[0] * self.dt if isel_args[0] is not None else \"x\"\n end = isel_args[1] * self.dt if isel_args[1] is not None else \"x\"\n add_steps = [f\"select {start}:{end}s\"]\n if inplace:\n self.data = selected\n self.process_steps += add_steps\n else:\n return self.__constructor__(selected).__finalize__(self, add_steps)\n
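Unlike `sel`, which slices by physical time, `isel` takes plain indices; the process-steps log then converts them back to seconds as index * dt. A numpy sketch of that arithmetic with made-up values:

```python
import numpy as np

# isel-style selection: integer indices pick samples directly, and the
# process-steps entry reports them back in seconds as index * dt.
dt = 0.01                                 # i.e. 100 Hz sampling
time = np.arange(0.0, 2.0, dt)
x = np.sin(2 * np.pi * time)

start_idx, end_idx = 50, 150              # plain integer indices
selected = x[start_idx:end_idx]
start_s, end_s = start_idx * dt, end_idx * dt  # logged as "select 0.5:1.5s"
```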
iterate

Return iterator over columns, so univariate measures can be computed per column. Loops over tuples as (variable name, timeseries).

Parameters:
- return_as (str): how to return columns: `xr` as xr.DataArray, `signal` as an instance of NeuroSignal with the same attributes as the mother signal. Default: "signal".

Source code in neurolib/utils/signal.py
def iterate(self, return_as=\"signal\"):\n\"\"\"\n Return iterator over columns, so univariate measures can be computed\n per column. Loops over tuples as (variable name, timeseries).\n\n :param return_as: how to return columns: `xr` as xr.DataArray, `signal` as\n instance of NeuroSignal with the same attributes as the mother signal\n :type return_as: str\n \"\"\"\n try:\n stacked = self.data.stack({\"all\": self.dims_not_time})\n except ValueError:\n logging.warning(\"No dimensions along which to stack...\")\n stacked = self.data.expand_dims(\"all\")\n\n if return_as == \"xr\":\n yield from stacked.groupby(\"all\")\n elif return_as == \"signal\":\n for name_coords, column in stacked.groupby(\"all\"):\n if not isinstance(name_coords, (list, tuple)):\n name_coords = [name_coords]\n name_dict = {k: v for k, v in zip(self.dims_not_time, name_coords)}\n yield name_dict, self.__constructor__(column).__finalize__(self, [f\"select {column.name}\"])\n else:\n raise ValueError(f\"Data type not understood: {return_as}\")\n
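An xarray-free sketch of what `iterate` yields (the real method stacks the non-time dimensions and loops with `groupby`): one (coords-dict, 1D timeseries) pair per column. The dimension names and values below are made up:

```python
import numpy as np
from itertools import product

# One (coords-dict, 1D timeseries) pair per non-time coordinate combination.
outputs = ["x", "y"]
spaces = [0, 1, 2]
data = np.arange(2 * 3 * 5, dtype=float).reshape(2, 3, 5)  # output x space x time

def iterate(data):
    for (i, out), (j, sp) in product(enumerate(outputs), enumerate(spaces)):
        yield {"output": out, "space": sp}, data[i, j]

pairs = list(iterate(data))   # 6 column timeseries
```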
De-mean the timeseries. Optionally also standardise.

Parameters:

- `std` (bool): normalize by std, i.e. to unit variance. Default: `False`
- `inplace` (bool): whether to do the operation in place or return. Default: `True`

Source code in `neurolib/utils/signal.py`
```python
def normalize(self, std=False, inplace=True):
    """
    De-mean the timeseries. Optionally also standardise.

    :param std: normalize by std, i.e. to unit variance
    :type std: bool
    :param inplace: whether to do the operation in place or return
    :type inplace: bool
    """

    def norm_func(x, dim):
        demeaned = x - x.mean(dim=dim)
        if std:
            return demeaned / x.std(dim=dim)
        else:
            return demeaned

    normalized = norm_func(self.data, dim="time")
    add_steps = ["normalize", "standardize"] if std else ["normalize"]
    if inplace:
        self.data = normalized
        self.process_steps += add_steps
    else:
        return self.__constructor__(normalized).__finalize__(self, add_steps)
```
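As a minimal illustration of what `normalize` computes, here is a plain-numpy sketch of the `norm_func` logic above (the real method operates on an xarray `DataArray` along its `time` dimension; the array below is a stand-in):

```python
import numpy as np

# Sketch of the normalize step: subtract the mean along the time axis,
# optionally divide by the standard deviation for unit variance.
def normalize(x, std=False, axis=-1):
    demeaned = x - x.mean(axis=axis, keepdims=True)
    if std:
        return demeaned / x.std(axis=axis, keepdims=True)
    return demeaned

ts = np.array([[1.0, 2.0, 3.0, 4.0]])  # 1 channel x 4 time points
z = normalize(ts, std=True)
# z now has zero mean and unit variance along time
```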
Pad signal by `how_much` on given side of given type.

Parameters:

- `how_much` (float|int): how much we should pad, can be time points or seconds, see `in_seconds`. Required.
- `in_seconds` (bool): whether `how_much` is in seconds; if False, it is a number of time points. Default: `False`
- `padding_type` (str): how to pad the signal, see `np.pad` documentation. Default: `'constant'`
- `side` (str): which side to pad - "before", "after", or "both". Default: `'both'`
- `inplace` (bool): whether to do the operation in place or return. Default: `True`

Additional `**kwargs` are passed to `np.pad`.

Source code in `neurolib/utils/signal.py`
```python
def pad(self, how_much, in_seconds=False, padding_type="constant", side="both", inplace=True, **kwargs):
    """
    Pad signal by `how_much` on given side of given type.

    :param how_much: how much we should pad, can be time points, or seconds,
        see `in_seconds`
    :type how_much: float|int
    :param in_seconds: whether `how_much` is in seconds, if False, it is
        number of time points
    :type in_seconds: bool
    :param padding_type: how to pad the signal, see `np.pad` documentation
    :type padding_type: str
    :param side: which side to pad - "before", "after", or "both"
    :type side: str
    :param inplace: whether to do the operation in place or return
    :type inplace: bool
    :kwargs: passed to `np.pad`
    """
    if in_seconds:
        how_much = int(np.around(how_much / self.dt))
    if side == "before":
        pad_width = (how_much, 0)
        pad_times = np.arange(-how_much, 0) * self.dt + self.data.time.values[0]
        new_times = np.concatenate([pad_times, self.data.time.values], axis=0)
    elif side == "after":
        pad_width = (0, how_much)
        pad_times = np.arange(1, how_much + 1) * self.dt + self.data.time.values[-1]
        new_times = np.concatenate([self.data.time.values, pad_times], axis=0)
    elif side == "both":
        pad_width = (how_much, how_much)
        pad_before = np.arange(-how_much, 0) * self.dt + self.data.time.values[0]
        pad_after = np.arange(1, how_much + 1) * self.dt + self.data.time.values[-1]
        new_times = np.concatenate([pad_before, self.data.time.values, pad_after], axis=0)
        side += " sides"
    else:
        raise ValueError(f"Unknown padding side: {side}")
    # add padding for other axes than time - zeroes
    pad_width = [(0, 0)] * len(self.dims_not_time) + [pad_width]
    padded = np.pad(self.data.values, pad_width, mode=padding_type, **kwargs)
    # to dataframe
    padded = xr.DataArray(padded, dims=self.data.dims, coords={**self.coords_not_time, "time": new_times})
    add_steps = [f"{how_much * self.dt}s {padding_type} {side} padding"]
    if inplace:
        self.data = padded
        self.process_steps += add_steps
    else:
        return self.__constructor__(padded).__finalize__(self, add_steps)
```
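The time-axis padding logic can be sketched in plain numpy (`dt`, `times`, and `data` are hypothetical stand-ins for the signal's attributes); this mirrors the `side="both"` branch above:

```python
import numpy as np

# Pad the values with np.pad and extend the time vector on both sides.
dt = 0.1
times = np.arange(0, 1.0, dt)          # original time vector, 10 samples
data = np.ones((2, times.size))        # 2 channels x time

how_much = 3                           # pad by 3 time points on each side
padded = np.pad(data, [(0, 0), (how_much, how_much)], mode="constant")
pad_before = np.arange(-how_much, 0) * dt + times[0]
pad_after = np.arange(1, how_much + 1) * dt + times[-1]
new_times = np.concatenate([pad_before, times, pad_after])
```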
Return rolling reduction over signal's time dimension. The window is centered around the midpoint.

Parameters:

- `roll_over` (float): window to use, in seconds. Required.
- `function` (callable): function to use for reduction. Default: `np.mean`
- `dropnans` (bool): whether to drop NaNs, which shortens the time dimension. Default: `True`
- `inplace` (bool): whether to do the operation in place or return. Default: `True`

Source code in `neurolib/utils/signal.py`
```python
def rolling(self, roll_over, function=np.mean, dropnans=True, inplace=True):
    """
    Return rolling reduction over signal's time dimension. The window is
    centered around the midpoint.

    :param roll_over: window to use, in seconds
    :type roll_over: float
    :param function: function to use for reduction
    :type function: callable
    :param dropnans: whether to drop NaNs - will shorten time dimension, or
        not
    :type dropnans: bool
    :param inplace: whether to do the operation in place or return
    :type inplace: bool
    """
    assert callable(function)
    rolling = self.data.rolling(time=int(roll_over * self.sampling_frequency), center=True).reduce(function)
    add_steps = [f"rolling {function.__name__} over {roll_over}s"]
    if dropnans:
        rolling = rolling.dropna("time")
        add_steps[0] += "; drop NaNs"
    if inplace:
        self.data = rolling
        self.process_steps += add_steps
    else:
        return self.__constructor__(rolling).__finalize__(self, add_steps)
```
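A plain-numpy approximation of the same operation - a centered rolling mean with the NaN edges dropped - using `numpy.lib.stride_tricks.sliding_window_view` instead of xarray's `rolling` (the values of `dt` and the data are hypothetical):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# Rolling mean over a 2-second window on a signal sampled every 0.5 s.
dt = 0.5                      # sampling step in seconds
fs = 1.0 / dt                 # sampling frequency
x = np.arange(10, dtype=float)
window = int(2.0 * fs)        # 2-second window -> 4 samples
rolled = sliding_window_view(x, window).mean(axis=-1)
# rolled is shorter than x by window - 1 samples,
# analogous to dropping the NaN-valued edges
```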
Save signal.

Parameters:

- `filename` (str): filename to save; currently saves to a netCDF file, which is natively supported by xarray. Required.

Source code in `neurolib/utils/signal.py`
```python
def save(self, filename):
    """
    Save signal.

    :param filename: filename to save, currently saves to netCDF file, which is natively supported by xarray
    :type filename: str
    """
    self._write_attrs_to_xr()
    if not filename.endswith(NC_EXT):
        filename += NC_EXT
    self.data.to_netcdf(filename)
```
Subselect part of signal using xarray's `sel`, i.e. selecting by actual physical index, hence time in seconds.

Parameters:

- `sel_args` (tuple|list): arguments you'd give to xr.sel(), i.e. slice of times you want to select, in seconds as a len=2 list or tuple. Required.
- `inplace` (bool): whether to do the operation in place or return. Default: `True`

Source code in `neurolib/utils/signal.py`
```python
def sel(self, sel_args, inplace=True):
    """
    Subselect part of signal using xarray's `sel`, i.e. selecting by actual
    physical index, hence time in seconds.

    :param sel_args: arguments you'd give to xr.sel(), i.e. slice of times
        you want to select, in seconds as a len=2 list or tuple
    :type sel_args: tuple|list
    :param inplace: whether to do the operation in place or return
    :type inplace: bool
    """
    assert len(sel_args) == 2, "Must provide 2 arguments"
    selected = self.data.sel(time=slice(sel_args[0], sel_args[1]))
    add_steps = [f"select {sel_args[0] or 'x'}:{sel_args[1] or 'x'}s"]
    if inplace:
        self.data = selected
        self.process_steps += add_steps
    else:
        return self.__constructor__(selected).__finalize__(self, add_steps)
```
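The difference between `isel` (integer index) and `sel` (time in seconds) boils down to dividing time by `dt`; a small numpy sketch with an assumed `dt` and data, not neurolib's Signal class:

```python
import numpy as np

dt = 0.1
times = np.arange(0, 2.0, dt)
data = np.sin(times)

# isel-style: select samples 5..10 by integer position
by_index = data[5:10]

# sel-style: select 0.5 s .. 1.0 s by converting time to index
start, end = int(round(0.5 / dt)), int(round(1.0 / dt))
by_time = data[start:end]
# both select the same five samples here
```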
Return iterator over sliding windows with windowing function applied. Each window has length `length` and each is translated by `step` steps. For no windowing function use "boxcar". If the last window would not have the same length as the others, it is omitted, i.e. the last window does not have to end at the final timeseries point!

Yields: generator with windowed Signals.

Parameters:

- `length` (int|float): length of the window, can be index or time in seconds, see `lengths_in_seconds`. Required.
- `step` (int|float): how much to translate the window in the temporal sense, can be index or time in seconds, see `lengths_in_seconds`. Default: `1`
- `window_function` (str|tuple|float): windowing function to use; this is passed to `get_window()`, see `scipy.signal.windows.get_window` documentation. Default: `'boxcar'`
- `lengths_in_seconds` (bool): if True, `length` and `step` are interpreted in seconds; if False, they are indices. Default: `False`

Source code in `neurolib/utils/signal.py`
```python
def sliding_window(self, length, step=1, window_function="boxcar", lengths_in_seconds=False):
    """
    Return iterator over sliding windows with windowing function applied.
    Each window has length `length` and each is translated by `step` steps.
    For no windowing function use "boxcar". If the last window would not
    have the same length as the others, it is omitted, i.e. the last window
    does not have to end at the final timeseries point!

    :param length: length of the window, can be index or time in seconds,
        see `lengths_in_seconds`
    :type length: int|float
    :param step: how much to translate window in the temporal sense, can be
        index or time in seconds, see `lengths_in_seconds`
    :type step: int|float
    :param window_function: windowing function to use, this is passed to
        `get_window()`; see `scipy.signal.windows.get_window` documentation
    :type window_function: str|tuple|float
    :param lengths_in_seconds: if True, `length` and `step` are interpreted
        in seconds, if False they are indices
    :type lengths_in_seconds: bool
    :yield: generator with windowed Signals
    """
    if lengths_in_seconds:
        length = int(length / self.dt)
        step = int(step / self.dt)
    assert (
        length < self.data.time.shape[0]
    ), f"Length must be smaller than time span of the timeseries: {self.data.time.shape[0]}"
    assert step <= length, "Step cannot be larger than length, some part of timeseries would be omitted!"
    current_idx = 0
    add_steps = f"{str(window_function)} window: "
    windowing_function = get_window(window_function, Nx=length)
    while current_idx <= (self.data.time.shape[0] - length):
        yield self.__constructor__(
            self.data.isel(time=slice(current_idx, current_idx + length)) * windowing_function
        ).__finalize__(self, [add_steps + f"{current_idx}:{current_idx + length}"])
        current_idx += step
```
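The iteration scheme can be sketched without xarray or windowing functions: fixed-length windows translated by `step`, with any trailing partial window omitted (this mirrors the `while` loop above, with an implicit boxcar window):

```python
import numpy as np

# Yield fixed-length windows over the last axis; a trailing window that
# would be shorter than `length` is never produced.
def sliding_windows(x, length, step=1):
    idx = 0
    while idx <= x.shape[-1] - length:
        yield x[..., idx:idx + length]
        idx += step

x = np.arange(10)
wins = list(sliding_windows(x, length=4, step=3))
# windows start at indices 0, 3, 6; index 9 would give a partial window
```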
Base class for stimuli consisting of multiple time series, such as summed inputs or concatenated inputs.
Source code in neurolib/utils/stimulus.py
```python
class BaseMultipleInputs(Stimulus):
    """
    Base class for stimuli consisting of multiple time series, such as summed inputs or concatenated inputs.
    """

    def __init__(self, inputs):
        """
        :param inputs: List of Inputs to combine
        :type inputs: list[`Input`]
        """
        assert all(isinstance(input, Input) for input in inputs)
        self.inputs = inputs

    def __len__(self):
        """
        Return number of inputs.
        """
        return len(self.inputs)

    def __getitem__(self, index):
        """
        Return inputs by index. This also allows iteration.
        """
        return self.inputs[index]

    @property
    def n(self):
        n = set([input.n for input in self])
        assert len(n) == 1
        return next(iter(n))

    @n.setter
    def n(self, n):
        for input in self:
            input.n = n

    def get_params(self):
        """
        Get all parameters recursively for all inputs.
        """
        return {
            "type": self.__class__.__name__,
            **{f"input_{i}": input.get_params() for i, input in enumerate(self)},
        }

    def update_params(self, params_dict):
        """
        Update all parameters recursively.
        """
        for i, input in enumerate(self):
            input.update_params(params_dict.get(f"input_{i}", {}))
```
Parameters:

- `inputs` (list[`Input`]): List of Inputs to combine. Required.

Source code in `neurolib/utils/stimulus.py`
```python
def __init__(self, inputs):
    """
    :param inputs: List of Inputs to combine
    :type inputs: list[`Input`]
    """
    assert all(isinstance(input, Input) for input in inputs)
    self.inputs = inputs
```
```python
def get_params(self):
    """
    Get all parameters recursively for all inputs.
    """
    return {
        "type": self.__class__.__name__,
        **{f"input_{i}": input.get_params() for i, input in enumerate(self)},
    }
```
```python
def update_params(self, params_dict):
    """
    Update all parameters recursively.
    """
    for i, input in enumerate(self):
        input.update_params(params_dict.get(f"input_{i}", {}))
```
Return concatenation of all stimuli as numpy array.
Source code in neurolib/utils/stimulus.py
```python
def as_array(self, duration, dt):
    """
    Return concatenation of all stimuli as numpy array.
    """
    # normalize ratios to sum = 1
    ratios = [i / sum(self.length_ratios) for i in self.length_ratios]
    concat = np.concatenate(
        [input.as_array(duration * ratio, dt) for input, ratio in zip(self.inputs, ratios)],
        axis=1,
    )
    length = int(duration / dt)
    # due to rounding errors, the overall length might be longer by a few dt
    return concat[:, :length]
```
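The ratio normalization and trimming above can be sketched in plain numpy; the two constant-valued segments here are hypothetical stand-ins for the child stimuli:

```python
import numpy as np

# Split the total duration by normalized length ratios, build each segment,
# concatenate, then trim rounding overshoot to exactly duration / dt samples.
duration, dt = 10.0, 0.1
length_ratios = [1, 3]                         # e.g. short onset, long plateau
ratios = [r / sum(length_ratios) for r in length_ratios]
segments = [np.full((1, int(round(duration * r / dt))), v)
            for r, v in zip(ratios, [0.0, 1.0])]
concat = np.concatenate(segments, axis=1)[:, :int(round(duration / dt))]
```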
````python
class Input:
    """
    Generates input to model.

    Base class for other input types.
    """

    def __init__(self, n=1, seed=None):
        """
        :param n: Number of spatial dimensions / independent realizations of the input.
            For deterministic inputs, the array is just copied,
            for stochastic / noisy inputs, this means independent realizations.
        :type n: int
        :param seed: Seed for the random number generator.
        :type seed: int|None
        """
        self.n = n
        self.seed = seed
        # seed the generator
        np.random.seed(seed)
        # get parameter names
        self.param_names = inspect.getfullargspec(self.__init__).args
        self.param_names.remove("self")

    def __add__(self, other):
        """
        Sum two inputs into one SummedStimulus.
        """
        assert isinstance(other, Input)
        assert self.n == other.n
        if isinstance(other, SummedStimulus):
            return SummedStimulus(inputs=[self] + other.inputs)
        else:
            return SummedStimulus(inputs=[self, other])

    def __and__(self, other):
        """
        Concatenate two inputs into ConcatenatedStimulus.
        """
        assert isinstance(other, Input)
        assert self.n == other.n
        if isinstance(other, ConcatenatedStimulus):
            return ConcatenatedStimulus(inputs=[self] + other.inputs, length_ratios=[1] + other.length_ratios)
        else:
            return ConcatenatedStimulus(inputs=[self, other])

    def _reset(self):
        """
        Reset is called after generating an input. Can be used to reset
        intrinsic properties.
        """
        pass

    def get_params(self):
        """
        Return the parameters of the input as dict.
        """
        assert all(hasattr(self, name) for name in self.param_names), self.param_names
        params = {name: getattr(self, name) for name in self.param_names}
        return {"type": self.__class__.__name__, **params}

    def update_params(self, params_dict):
        """
        Update model input parameters.

        :param params_dict: New parameters for this input
        :type params_dict: dict
        """

        def _sanitize(value):
            """
            Change string `None` to actual None - can happen with Exploration or
            Evolution, since `pypet` does None -> "None".
            """
            if value == "None":
                return None
            else:
                return value

        for param, value in params_dict.items():
            if hasattr(self, param):
                setattr(self, param, _sanitize(value))

    def _get_times(self, duration, dt):
        """
        Generate time vector.

        :param duration: Duration of the input, in milliseconds
        :type duration: float
        :param dt: dt of input, in milliseconds
        :type dt: float
        """
        self.times = np.arange(dt, duration + dt, dt)

    def generate_input(self, duration, dt):
        """
        Function to generate input.

        :param duration: Duration of the input, in milliseconds
        :type duration: float
        :param dt: dt of input, in milliseconds
        :type dt: float
        """
        raise NotImplementedError

    def as_array(self, duration, dt):
        """
        Return input as numpy array.

        :param duration: Duration of the input, in milliseconds
        :type duration: float
        :param dt: dt of input, in milliseconds
        :type dt: float
        """
        array = self.generate_input(duration, dt)
        self._reset()
        return array

    def as_cubic_splines(self, duration, dt, shift_start_time=0.0):
        """
        Return as cubic Hermite splines.

        :param duration: Duration of the input, in milliseconds
        :type duration: float
        :param dt: dt of input, in milliseconds
        :type dt: float
        :param shift_start_time: By how much to shift the stimulus start time
        :type shift_start_time: float
        """
        self._get_times(duration, dt)
        splines = CubicHermiteSpline.from_data(self.times + shift_start_time, self.generate_input(duration, dt).T)
        self._reset()
        return splines

    def to_model(self, model):
        """
        Return numpy array of stimuli based on model parameters.

        Example:
        ```
        model.params["ext_exc_input"] = SinusoidalInput(...).to_model(model)
        ```

        :param model: neurolib's model
        :type model: `neurolib.models.Model`
        """
        assert isinstance(model, Model)
        # set number of spatial dimensions as the number of nodes in the brain network
        self.n = model.params["N"]
        return self.as_array(duration=model.params["duration"], dt=model.params["dt"])
````
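The `__add__`/`__and__` operator pattern can be illustrated with a toy class (not neurolib's: real inputs return lazy `SummedStimulus`/`ConcatenatedStimulus` objects that are only evaluated when `as_array` is called, whereas this sketch combines arrays eagerly):

```python
import numpy as np

# Toy stimulus class: `+` sums two stimuli sample-wise,
# `&` concatenates them in time.
class Stim:
    def __init__(self, values):
        self.values = np.asarray(values, dtype=float)

    def __add__(self, other):          # summed stimulus
        return Stim(self.values + other.values)

    def __and__(self, other):          # concatenated stimulus
        return Stim(np.concatenate([self.values, other.values]))

a = Stim([1.0, 1.0])
b = Stim([0.0, 2.0])
summed = a + b          # sample-wise sum
chained = a & b         # a followed by b in time
```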
- `n` (int): Number of spatial dimensions / independent realizations of the input. For deterministic inputs, the array is just copied; for stochastic / noisy inputs, this means independent realizations. Default: `1`
- `seed` (int|None): Seed for the random number generator. Default: `None`

Source code in `neurolib/utils/stimulus.py`
```python
def __init__(self, n=1, seed=None):
    """
    :param n: Number of spatial dimensions / independent realizations of the input.
        For deterministic inputs, the array is just copied,
        for stochastic / noisy inputs, this means independent realizations.
    :type n: int
    :param seed: Seed for the random number generator.
    :type seed: int|None
    """
    self.n = n
    self.seed = seed
    # seed the generator
    np.random.seed(seed)
    # get parameter names
    self.param_names = inspect.getfullargspec(self.__init__).args
    self.param_names.remove("self")
```
- `duration` (float): Duration of the input, in milliseconds. Required.
- `dt` (float): dt of input, in milliseconds. Required.

Source code in `neurolib/utils/stimulus.py`
```python
def generate_input(self, duration, dt):
    """
    Function to generate input.

    :param duration: Duration of the input, in milliseconds
    :type duration: float
    :param dt: dt of input, in milliseconds
    :type dt: float
    """
    raise NotImplementedError
```
```python
def get_params(self):
    """
    Return the parameters of the input as dict.
    """
    assert all(hasattr(self, name) for name in self.param_names), self.param_names
    params = {name: getattr(self, name) for name in self.param_names}
    return {"type": self.__class__.__name__, **params}
```
Parameters:

- `model` (`neurolib.models.Model`): neurolib's model. Required.

Source code in `neurolib/utils/stimulus.py`
````python
def to_model(self, model):
    """
    Return numpy array of stimuli based on model parameters.

    Example:
    ```
    model.params["ext_exc_input"] = SinusoidalInput(...).to_model(model)
    ```

    :param model: neurolib's model
    :type model: `neurolib.models.Model`
    """
    assert isinstance(model, Model)
    # set number of spatial dimensions as the number of nodes in the brain network
    self.n = model.params["N"]
    return self.as_array(duration=model.params["duration"], dt=model.params["dt"])
````
- `params_dict` (dict): New parameters for this input. Required.

Source code in `neurolib/utils/stimulus.py`
```python
def update_params(self, params_dict):
    """
    Update model input parameters.

    :param params_dict: New parameters for this input
    :type params_dict: dict
    """

    def _sanitize(value):
        """
        Change string `None` to actual None - can happen with Exploration or
        Evolution, since `pypet` does None -> "None".
        """
        if value == "None":
            return None
        else:
            return value

    for param, value in params_dict.items():
        if hasattr(self, param):
            setattr(self, param, _sanitize(value))
```
- `sigma` (float): Standard deviation of the Wiener process, i.e. strength of the noise. Required.
- `tau` (float): Timescale of the OU process, in ms. Required.

Source code in `neurolib/utils/stimulus.py`
```python
def __init__(
    self,
    mu,
    sigma,
    tau,
    n=1,
    seed=None,
):
    """
    :param mu: Drift of the OU process
    :type mu: float
    :param sigma: Standard deviation of the Wiener process, i.e. strength of the noise
    :type sigma: float
    :param tau: Timescale of the OU process, in ms
    :type tau: float
    """
    self.mu = mu
    self.sigma = sigma
    self.tau = tau
    super().__init__(
        n=n,
        seed=seed,
    )
```
Return sum of all inputs as numpy array.

Source code in `neurolib/utils/stimulus.py`

```python
def as_array(self, duration, dt):
    """
    Return sum of all inputs as numpy array.
    """
    return np.sum(
        np.stack([input.as_array(duration, dt) for input in self.inputs]),
        axis=0,
    )
```
Return sum of all inputs as cubic Hermite splines.
Source code in neurolib/utils/stimulus.py
```python
def as_cubic_splines(self, duration, dt, shift_start_time=0.0):
    """
    Return sum of all inputs as cubic Hermite splines.
    """
    result = self.inputs[0].as_cubic_splines(duration, dt, shift_start_time)
    for input in self.inputs[1:]:
        result.plus(input.as_cubic_splines(duration, dt, shift_start_time))
    return result
```
Stimulus sampled from a Wiener process, i.e. drawn from standard normal distribution N(0, sqrt(dt)).
Source code in neurolib/utils/stimulus.py
```python
class WienerProcess(Input):
    """
    Stimulus sampled from a Wiener process, i.e. drawn from standard normal distribution N(0, sqrt(dt)).
    """

    def generate_input(self, duration, dt):
        self._get_times(duration=duration, dt=dt)
        return np.random.normal(0.0, np.sqrt(dt), (self.n, self.times.shape[0]))
```
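The increments drawn by `generate_input` can be reproduced in plain numpy (using the modern `default_rng` API rather than the global `np.random.seed` the class uses; duration and dt values are hypothetical):

```python
import numpy as np

# i.i.d. samples from N(0, sqrt(dt)), one row per independent realization.
duration, dt, n = 1000.0, 0.1, 2
rng = np.random.default_rng(42)
n_steps = int(round(duration / dt))
dW = rng.normal(0.0, np.sqrt(dt), size=(n, n_steps))

# cumulative sum of the increments gives sample paths of the Wiener process
W = np.cumsum(dW, axis=1)
```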
No stimulus, i.e. all zeros. Can be used to add a delay between two stimuli.
Source code in neurolib/utils/stimulus.py
```python
class ZeroInput(Input):
    """
    No stimulus, i.e. all zeros. Can be used to add a delay between two stimuli.
    """

    def generate_input(self, duration, dt):
        self._get_times(duration=duration, dt=dt)
        return np.zeros((self.n, self.times.shape[0]))
```
Return rectified input with exponential decay, i.e. a negative step followed by a slow decay to zero, followed by a positive step and again a slow decay to zero. Can be used for bistability detection.

Parameters:

- `amplitude` (float): Amplitude (both negative and positive) for the step. Required.
- `n` (int): Number of realizations (spatial dimension). Default: `1`

Returns:

- `ConcatenatedStimulus`: Concatenated input which represents the rectified stimulus with exponential decay

Source code in `neurolib/utils/stimulus.py`
```python
def RectifiedInput(amplitude, n=1):
    """
    Return rectified input with exponential decay, i.e. a negative step followed by a
    slow decay to zero, followed by a positive step and again a slow decay to zero.
    Can be used for bistability detection.

    :param amplitude: Amplitude (both negative and positive) for the step
    :type amplitude: float
    :param n: Number of realizations (spatial dimension)
    :type n: int
    :return: Concatenated input which represents the rectified stimulus with exponential decay
    :rtype: `ConcatenatedStimulus`
    """

    return ConcatenatedStimulus(
        [
            StepInput(step_size=-amplitude, n=n),
            ExponentialInput(inp_max=amplitude, exp_type="rise", exp_coef=12.5, n=n)
            + StepInput(step_size=-amplitude, n=n),
            StepInput(step_size=amplitude, n=n),
            ExponentialInput(amplitude, exp_type="decay", exp_coef=7.5, n=n),
            StepInput(step_size=0.0, n=n),
        ],
        length_ratios=[0.5, 2.5, 0.5, 1.5, 1.0],
    )
```
"}]}
\ No newline at end of file
diff --git a/sitemap.xml.gz b/sitemap.xml.gz
index 29029a9..d3c4584 100644
Binary files a/sitemap.xml.gz and b/sitemap.xml.gz differ
diff --git a/utils/dataset/index.html b/utils/dataset/index.html
index 9a00d70..4a1eb4b 100644
--- a/utils/dataset/index.html
+++ b/utils/dataset/index.html
@@ -708,6 +708,86 @@
+
+
+
+
+
+