+ gamma and beta models
Mark-Kramer committed Nov 20, 2024
1 parent 8839ece commit 88facd3
Showing 18 changed files with 1,040 additions and 113 deletions.
Binary file modified .DS_Store
Binary file not shown.
125 changes: 125 additions & 0 deletions Bursting_Lab.ipynb
@@ -0,0 +1,125 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "e713df72-6ca2-4213-9278-f718f2fda642",
"metadata": {},
"source": [
"---\n",
"title: Bursting Neuron\n",
"project:\n",
" type: website\n",
"format:\n",
" html:\n",
" code-fold: false\n",
" code-tools: true\n",
"jupyter: python 3\n",
"number-sections: false\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "472ba20d-8461-41fe-9518-50c286f6b552",
"metadata": {},
"source": [
"The goal in this notebook is to update the HH equations to include a new (slow) current and produce bursting activity.\n",
"\n",
"To do so, start with the [HH code available here](https://raw.githubusercontent.com/Mark-Kramer/BU-MA665-MA666/master/HH_functions.py).\n",
"\n",
"Update the HH model to include the **$\\bf{g_K-}$\"M\" current** listed in [**Table A2** of this publication](/Readings/Traub_J_Neurophysiol_2003.pdf).\n",
"\n",
"In the code below, we'll use the variable `B` to define the gate for this current.\n",
"\n",
"## Challenges\n",
"\n",
"#### 1. Plot the steady-state function and time constant for this new current.\n",
"\n",
"*HINT:* In [**Table A2** of this publication](/Readings/Traub_J_Neurophysiol_2003.pdf), the authors provide the forward rate function ($\\alpha[V]$) and backward rate function ($\\beta[V]$) for this current. Use these functions to compute the steady-state function and time constant, and plot both versus V. \n",
"\n",
"#### 2. Update the HH model to include this new current.\n",
"\n",
"*HINT:* Update the HH model to accept three inputs: `HH(I0, T0, gB0)`, where `gB0` is the maximal conductance of the new current.\n",
"\n",
"*HINT:* Update the HH model to return six outputs: `return V,m,h,n,B,t`, where `B` is the gate variable of the new current.\n",
"\n",
"#### 3. Find parameter settings so that the model produces bursting activity.\n",
"\n",
"*HINT:* Fix `I0=10` and `T0=500` and vary the maximal conductance of the new current, `gB0`, until you find a value that supports bursting in the voltage.\n",
"\n",
"*HINT:* Plot the voltage `V` and the new current gate `B` to visualize how the dynamics behave.\n",
"\n",
"#### 4. Compute the spectrum to characterize the dominant rhythms.\n",
"\n",
"*HINT:* Be sure to carefully define `T`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a21a66c6-93b9-43e6-ad92-5bf168cbdd97",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e4af71a7-49d8-4004-8947-6ebddef6cb76",
"metadata": {},
"outputs": [],
"source": [
"def alphaM(V):\n",
" return (2.5-0.1*(V+65)) / (np.exp(2.5-0.1*(V+65)) -1)\n",
"\n",
"def betaM(V):\n",
" return 4*np.exp(-(V+65)/18)\n",
"\n",
"def alphaH(V):\n",
" return 0.07*np.exp(-(V+65)/20)\n",
"\n",
"def betaH(V):\n",
" return 1/(np.exp(3.0-0.1*(V+65))+1)\n",
"\n",
"def alphaN(V):\n",
" return (0.1-0.01*(V+65)) / (np.exp(1-0.1*(V+65)) -1)\n",
"\n",
"def betaN(V):\n",
" return 0.125*np.exp(-(V+65)/80)\n",
"\n",
"def alphaB(V):\n",
" return \"SOMETHING\"\n",
"\n",
"def betaB(V):\n",
" return \"SOMETHING\"\n",
"\n",
"def HHB(I0,T0,gB0):\n",
" \"SOMETHING\""
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.18"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
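For reference, a minimal sketch of one way to complete the stubs in Bursting_Lab.ipynb. The M-current rate functions below are the Traub-style parameterization often quoted for the $g_K$-"M" current; verify them against Table A2 of the linked paper. The conductances, reversal potentials, Euler step, and the starting guess `gB0 = 1.0` are assumptions, not values fixed by the lab, and the sketch reuses the `alphaM` ... `betaN` rate functions from the cell above.

```python
import numpy as np
import matplotlib.pyplot as plt

def alphaB(V):
    # Assumed forward rate for the M-current gate B (check against Traub Table A2).
    return 0.02 / (1 + np.exp((-V - 20) / 5))

def betaB(V):
    # Assumed backward rate for the M-current gate B (check against Traub Table A2).
    return 0.01 * np.exp((-V - 43) / 18)

# Challenge 1: steady state B_inf = alpha/(alpha+beta), time constant tau_B = 1/(alpha+beta).
v = np.arange(-100.0, 0.0, 0.1)
B_inf = alphaB(v) / (alphaB(v) + betaB(v))
tau_B = 1 / (alphaB(v) + betaB(v))
plt.subplot(2, 1, 1); plt.plot(v, B_inf); plt.ylabel('B_inf')
plt.subplot(2, 1, 2); plt.plot(v, tau_B); plt.ylabel('tau_B [ms]'); plt.xlabel('V [mV]')
plt.show()

def HHB(I0, T0, gB0):
    # Challenge 2: HH plus the slow M-current gB0*B*(EK - Vs). Constants and the
    # Euler scheme mirror the course HH code (assumed values).
    dt = 0.01                                  # time step [ms]
    T = int(np.ceil(T0 / dt))                  # number of steps
    gNa0, ENa = 120.0, 115.0                   # shifted units, as in HH_functions.py
    gK0, EK = 36.0, -12.0                      # the M-current reuses EK (assumption)
    gL0, EL = 0.3, 10.6
    t = np.arange(T) * dt
    V, m, h, n, B = (np.zeros(T) for _ in range(5))
    V[0], m[0], h[0], n[0], B[0] = -70.0, 0.05, 0.54, 0.34, 0.0
    for i in range(T - 1):
        Vs = V[i] + 65                         # shift so rest sits near 0, as in the course code
        V[i+1] = V[i] + dt * (gNa0 * m[i]**3 * h[i] * (ENa - Vs)
                              + gK0 * n[i]**4 * (EK - Vs)
                              + gL0 * (EL - Vs)
                              + gB0 * B[i] * (EK - Vs) + I0)
        m[i+1] = m[i] + dt * (alphaM(V[i]) * (1 - m[i]) - betaM(V[i]) * m[i])
        h[i+1] = h[i] + dt * (alphaH(V[i]) * (1 - h[i]) - betaH(V[i]) * h[i])
        n[i+1] = n[i] + dt * (alphaN(V[i]) * (1 - n[i]) - betaN(V[i]) * n[i])
        B[i+1] = B[i] + dt * (alphaB(V[i]) * (1 - B[i]) - betaB(V[i]) * B[i])
    return V, m, h, n, B, t

# Challenge 3: fix I0 and T0, then vary gB0 until the voltage bursts.
V, m, h, n, B, t = HHB(10, 500, 1.0)           # gB0 = 1.0 is only a starting guess
plt.subplot(2, 1, 1); plt.plot(t, V); plt.ylabel('V [mV]')
plt.subplot(2, 1, 2); plt.plot(t, B); plt.ylabel('B'); plt.xlabel('Time [ms]')
plt.show()

# Challenge 4: spectrum of V; T below is the total duration in seconds.
dt_s = (t[1] - t[0]) / 1000                    # sampling interval [s]
Tsec = len(V) * dt_s                           # total duration [s]
xf = np.fft.rfft(V - V.mean())                 # FFT of the mean-subtracted voltage
Sxx = 2 * dt_s**2 / Tsec * np.real(xf * np.conj(xf))   # one-sided spectrum
faxis = np.fft.rfftfreq(len(V), dt_s)          # frequency axis [Hz]
plt.plot(faxis, Sxx); plt.xlim([0, 100])
plt.xlabel('Frequency [Hz]'); plt.ylabel('Power [mV^2/Hz]')
plt.show()
```

If the slow gate works as intended, `B` should ramp up during each spike cluster and decay between clusters, terminating and restarting the bursts.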
2 changes: 1 addition & 1 deletion Gamma_Lab.ipynb
@@ -143,7 +143,7 @@
"metadata": {},
"outputs": [],
"source": [
"gI = 20;\n",
"gI = 'SOMETHING'\n",
"[V,s,t] = ing(I0,gI,tauI,T0);\n",
"plt.subplot(2,1,1)\n",
"plt.plot(t,V); plt.xlabel('Time [ms]'); plt.ylabel('Voltage [mV]');\n",
Binary file added Slides/Beta_Lecture.pdf
Binary file not shown.
Binary file modified Slides/Gamma_Lecture.pdf
Binary file not shown.
Binary file added Slides/MA666_Homework_4_Models.pdf
Binary file not shown.
2 changes: 1 addition & 1 deletion docs/Backpropagation.html
@@ -228,7 +228,7 @@
}

// Store cell data
globalThis.qpyodideCellDetails = [{"id":1,"options":{"message":"true","label":"","context":"interactive","comment":"","fig-width":7,"read-only":"false","out-width":"700px","classes":"","fig-cap":"","output":"true","warning":"true","results":"markup","autorun":"","dpi":72,"out-height":"","fig-height":5},"code":"import numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd"},{"id":2,"options":{"message":"true","label":"","context":"interactive","comment":"","fig-width":7,"read-only":"false","out-width":"700px","classes":"","fig-cap":"","output":"true","warning":"true","results":"markup","autorun":"","dpi":72,"out-height":"","fig-height":5},"code":"df = pd.read_csv(\"https://raw.githubusercontent.com/Mark-Kramer/BU-MA665-MA666/master/Data/backpropagation_example_data.csv\")\n\n# Extract the variables from the loaded data\nin_true = np.array(df.iloc[:,0]) #Get the values associated with the first column of the dataframe\nout_true = np.array(df.iloc[:,1]) #Get the values associated with the second column of the dataframe"},{"id":3,"options":{"message":"true","label":"","context":"interactive","comment":"","fig-width":7,"read-only":"false","out-width":"700px","classes":"","fig-cap":"","output":"true","warning":"true","results":"markup","autorun":"","dpi":72,"out-height":"","fig-height":5},"code":"print(np.transpose([in_true, out_true]))"},{"id":4,"options":{"message":"true","label":"","context":"interactive","comment":"","fig-width":7,"read-only":"false","out-width":"700px","classes":"","fig-cap":"","output":"true","warning":"true","results":"markup","autorun":"","dpi":72,"out-height":"","fig-height":5},"code":"def sigmoid(x):\n return 1/(1+np.exp(-x)) # Define the sigmoid anonymous function.\n\ndef feedforward(w, s0): # Define feedforward solution.\n # ... x1 = activity of first neuron,\n # ... s1 = output of first neuron,\n # ... x2 = activity of second neuron,\n # ... s2 = output of second neuron,\n # ... out = output of neural network.\n return out,s1,s2"},{"id":5,"options":{"message":"true","label":"","context":"interactive","comment":"","fig-width":7,"read-only":"false","out-width":"700px","classes":"","fig-cap":"","output":"true","warning":"true","results":"markup","autorun":"","dpi":72,"out-height":"","fig-height":5},"code":"w = [0.5,0.5] # Choose initial values for the weights.\nalpha = 0.01 # Set the learning constant.\n\nK = np.size(in_true);\nresults = np.zeros([K,3]) # Define a variable to hold the results of each iteration. \n\nfor k in np.arange(K):\n s0 = in_true[k] # Define the input,\n target = out_true[k] # ... and the target output.\n \n #Calculate feedforward solution to get output.\n \n #Update the weights.\n w0 = w[0]; w1 = w[1];\n w[1] = \"SOMETHING\"\n w[0] = \"SOMETHING\"\n \n # Save the results of this step. --------------------------------------\n # Here we save the 3 weights, and the neural network output.\n # results[k,:] = [w[0],w[1], out]\n\n# Plot the NN weights and error during training \n# plt.clf()\n# plt.plot(results[:,1], label='w1')\n# plt.plot(results[:,0], label='w0')\n# plt.plot(results[:,2]-target, label='error')\n# plt.legend() #Include a legend,\n# plt.xlabel('Iteration number'); #... and axis label.\n\n# Print the NN weights\n# print(results[-1,0:2])"}];
globalThis.qpyodideCellDetails = [{"id":1,"code":"import numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd","options":{"results":"markup","warning":"true","out-width":"700px","read-only":"false","fig-height":5,"fig-cap":"","comment":"","message":"true","output":"true","label":"","autorun":"","fig-width":7,"dpi":72,"out-height":"","classes":"","context":"interactive"}},{"id":2,"code":"df = pd.read_csv(\"https://raw.githubusercontent.com/Mark-Kramer/BU-MA665-MA666/master/Data/backpropagation_example_data.csv\")\n\n# Extract the variables from the loaded data\nin_true = np.array(df.iloc[:,0]) #Get the values associated with the first column of the dataframe\nout_true = np.array(df.iloc[:,1]) #Get the values associated with the second column of the dataframe","options":{"results":"markup","warning":"true","out-width":"700px","read-only":"false","fig-height":5,"fig-cap":"","comment":"","message":"true","output":"true","label":"","autorun":"","fig-width":7,"dpi":72,"out-height":"","classes":"","context":"interactive"}},{"id":3,"code":"print(np.transpose([in_true, out_true]))","options":{"results":"markup","warning":"true","out-width":"700px","read-only":"false","fig-height":5,"fig-cap":"","comment":"","message":"true","output":"true","label":"","autorun":"","fig-width":7,"dpi":72,"out-height":"","classes":"","context":"interactive"}},{"id":4,"code":"def sigmoid(x):\n return 1/(1+np.exp(-x)) # Define the sigmoid anonymous function.\n\ndef feedforward(w, s0): # Define feedforward solution.\n # ... x1 = activity of first neuron,\n # ... s1 = output of first neuron,\n # ... x2 = activity of second neuron,\n # ... s2 = output of second neuron,\n # ... out = output of neural network.\n return out,s1,s2","options":{"results":"markup","warning":"true","out-width":"700px","read-only":"false","fig-height":5,"fig-cap":"","comment":"","message":"true","output":"true","label":"","autorun":"","fig-width":7,"dpi":72,"out-height":"","classes":"","context":"interactive"}},{"id":5,"code":"w = [0.5,0.5] # Choose initial values for the weights.\nalpha = 0.01 # Set the learning constant.\n\nK = np.size(in_true);\nresults = np.zeros([K,3]) # Define a variable to hold the results of each iteration. \n\nfor k in np.arange(K):\n s0 = in_true[k] # Define the input,\n target = out_true[k] # ... and the target output.\n \n #Calculate feedforward solution to get output.\n \n #Update the weights.\n w0 = w[0]; w1 = w[1];\n w[1] = \"SOMETHING\"\n w[0] = \"SOMETHING\"\n \n # Save the results of this step. --------------------------------------\n # Here we save the 3 weights, and the neural network output.\n # results[k,:] = [w[0],w[1], out]\n\n# Plot the NN weights and error during training \n# plt.clf()\n# plt.plot(results[:,1], label='w1')\n# plt.plot(results[:,0], label='w0')\n# plt.plot(results[:,2]-target, label='error')\n# plt.legend() #Include a legend,\n# plt.xlabel('Iteration number'); #... and axis label.\n\n# Print the NN weights\n# print(results[-1,0:2])","options":{"results":"markup","warning":"true","out-width":"700px","read-only":"false","fig-height":5,"fig-cap":"","comment":"","message":"true","output":"true","label":"","autorun":"","fig-width":7,"dpi":72,"out-height":"","classes":"","context":"interactive"}}];


</script>
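The stubbed `feedforward` and weight-update lines embedded in the cell data above can be completed along these lines; a minimal sketch, assuming the two-neuron sigmoid chain and squared-error loss implied by the cell's comments (the input `s0 = 1.0` and target `0.7` are illustrative values only):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))       # sigmoid activation used by both neurons

def feedforward(w, s0):
    x1 = w[0] * s0                    # activity of first neuron
    s1 = sigmoid(x1)                  # output of first neuron
    x2 = w[1] * s1                    # activity of second neuron
    s2 = sigmoid(x2)                  # output of second neuron
    out = s2                          # output of the neural network
    return out, s1, s2

# Gradient-descent updates for one (s0, target) pair, from E = 0.5*(out - target)**2
# and the chain rule; w0, w1 hold the pre-update weights, as in the cell above.
w = [0.5, 0.5]; alpha = 0.01
s0, target = 1.0, 0.7                 # hypothetical input and target
out, s1, s2 = feedforward(w, s0)
w0, w1 = w[0], w[1]
w[1] = w1 - alpha * (out - target) * s2 * (1 - s2) * s1
w[0] = w0 - alpha * (out - target) * s2 * (1 - s2) * w1 * s1 * (1 - s1) * s0
```

Note that the `w[0]` update uses the pre-update weight `w1`, which is why the cell saves `w0` and `w1` before modifying `w`.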
