
Commit 15ab9e5

Authored by Rick Gerkin (rgerkin), with contributions from russelljjarvis, mwatts15, and ChihweiLHBird
optimization -> dev (#255) (#256)
* test NU
* test NU squash
* squash circle
* circle squash
* squash circle
* better chance of working
* squash circle
* circle squash
* circle squash
* squash circle
* circle squash
* circle squash
* circle squash
* squash circle ci
* rebase confusion
* Added StaticBackend
* resolve rebase
* Removing duplicate jNeuroMLBackend import (#251)
* fixed "cannot be safely interpreted as integer" linspace
* merge
* Added StaticBackend
* fixup! Added StaticBackend
* Removing duplicate jNeuroMLBackend import (#251)
* Fix mistake in Izhikevich equation markdown
* merge
* plotly added
* plotly added
* circle squash
* squash circle
* squash circle
* squash circle
* squash circle
* squash circle
* modified requirements
* circle squash
* moved allen api from BPO to NU
* refactor
* circle squash
* circle squash
* circle squash
* redirectory circle squash
* will this work
* circle squash
* circle squash
* circle squash
* circle squash
* circle ci squash
* perhaps fix minor travis annoyance on scidash
* try to make scidash travis work too
* Update README.md
* Update README.md
* Update README.md
* Update README.md
* replace some unit testing files
* adding back in important looking tests
* circle squash
* clean up
* brutal clean up but unit tests work
* brutal clean up but unit tests work
* brutal clean up but unit tests work
* easier target
* easier target
* travis squash
* travis squash
* Update README.md
* travis might work now
* setter property methods added to adexp model
* circle ci squash
* better method stacking and encapsulation, removed redundancy
* fix circle squash
* circle squash
* travis circle squash
* circle travis squash
* travis update
* circle squash
* updates before checkout
* delete silly bug
* unit test for rick to check
* unit tests built in to continuous integration for relative difference sciunit debug
* unit tests built in to continuous integration for relative difference sciunit debug
* unit tests built in to continuous integration for relative difference sciunit debug
* unit tests built in to continuous integration for relative difference sciunit debug
* graphical unit test made
* graphical unit test made
* both Relative difference and ZScore working now
* after change to BPO source code where I remove special treatment for models not ADEXP in BPO
* continuous integration updated
* before merge
* before merge branch
* perhaps branch fixed now
* merge into merge
* meaningless
* simplified best model
* now ephys properties and multispiking optimize, as well as allen examples
* better integration of unit testing
* introduced some typing to optimization management, and some code comments; reduced opt_man file size by 50%
* added back in neuroelectro api
* fixing unit tests for neuroelectro
* removed erroneous path from travis build
* removed erroneous path from travis build
* renamed get_neab to neuroelectro_api; added in more typing and documentation; tried to fix broken import paths
* aggressive typing probably broke some methods
* aggressive typing probably broke some methods
* fixed typing issues
* run black over everything
* fix bug caused by refactor of efel_evaluation revealed in continuous integration
* fix typing bug caused by refactor revealed in continuous integration
* fix typing bug caused by refactor revealed in continuous integration
* fix typing bug caused by refactor revealed in continuous integration
* fixed small bug in constructing neuronunit tests from allen data and neuronunit static models
* updated travis script
* shortened CI tests to avoid timeouts
* made a score obs prediction reformatter in data transport container; did more typing; removed more unnecessary methods; used more inheritance in BPO
* reduced cell optimization
* almost ready for pull request take two
* changes
* files changed
* fix skip decorators so that CI works again
* changed circle ci config file to point to BPO circle-ci-branch
* gentle refactor and typing
* ran black over everything
* ran black over everything
* typing accident fixup
* fix ci requirement accident
* fixing ci dependency issue
* merge circle ci
* merge circle ci
* fixed None return type
* refactor unit testing return type
* refactored unit tests
* fixed tab spaces issue
* unit test refactor
* update unit test for refactor rheobase_dtc_tests
* refactor small test
* refactor target current into method
* fixed new method missing argument
* fixed new method missing argument
* fix
* update code
* passing travis tests
* update content
* speed up travis ci unit tests
* speed up travis ci unit tests
* make opt work passable on shorter test duration ci
* reduce travis burden
* reduce travis burden
* make more unit tests pass
* clean up for passing more unit tests, especially import tests
* overall cleaned up unit tests; this commit represents greater test passing than dev branch
* overall cleaned up unit tests; this commit represents greater test passing than dev branch
* update for PR
* update for PR
* applied black to all files again
* Update README.md
* izhi optimization slowed down; checking out what went wrong
* very affected by mutpb, eta, cxptb
* very affected by mutpb, eta, cxptb
* push changes to ci
* update
* update
* last commit before going backwards through reflog
* make stale branch functional again
* Jan 28th end of day
* found new problems with dtc/model param override in dtc class; made variance explained error possible; identified conceptually that brute force is necessary to optimize; made it so that algorithms have a functional time-diminishing eta
* factored out redundant rheobase seeking method
* added in some comments
* improved documentation
* refactor optimization_management; documentation improvements; simplify return value of functions
* removed backends deprecated/not supported
* removed backends deprecated/not supported
* refactor optimization_management; documentation improvements; simplify return value of functions
* refactor optimization_management; documentation improvements; simplify return value of functions
* fixing rheobase solving management code
* applied black; updated methods called in Allen API
* applied black to unit test directory, and made it so recursively importing everything should work in theory over CI
* update code for passing CI
* ran black over all neuronunit test files
* rheobase test on CI
* rheobase test on CI
* update ci
* elitism in bpo via neuronunit flag in constructor
* ci will probably work again now
* ci fix
* update ci unit test passing
* shorter build
* turns out the right efel package is important
* updated travis ci build
* update travis build
* abolished dtc; useful dtc methods inherited directly into model
* ci change dependencies
* optimization -> optimization (#255)
* update
* merger
* first incremental pull request
* basic NeuronUnit dev fork from scidash with minimalist changes to support multispiking optimization
* Removing duplicate jNeuroMLBackend import (#251)
* Update Unit Test Cases and Other Improvements (#249)
* Add test_get_files
* Update .travis.yml
* Drop Python 2 support
* Minor update
* Update dependency requirements.
* Update .travis.yml
* Change allensdk version to 0.16.3 (#1)
* Make it to be setup.cfg-only project
* Make it to be setup.cfg-only project 2
* Update unit tests.
* Update test cases
* fix
* Update parameter
* Update test cases
* Drop support of Python 2
* Import new test cases in __init__.py in unit_test directory
* remove PYTHON_MAJOR_VERSION constant
* Fix error
* Update dependency
* Drop Python 2 support
* Update bluepyopt
* Update unit tests
* Update unit tests
* Improved logic in url_to_path method
* Update unit tests
* Update unit tests
* Update unit tests
* Update unit tests
* Update unit tests
* Update unit tests
* forceobj = true for jit decorator
* add test_geppetto_backend
* Update unit tests
* Requiring Python version >= 3.5
* Clean up `__future__ import something`
* Import deepcopy, and improve coding style.
* Import available_backends
* Make ExternalModel inherit from RunnableModel instead of Model
* Fix warning
* Make ExternalModel call constructor of the parent class
* Improve ReducedModel; make unit test cases for it.
* get pynn and pyneuroml from Github
* Update unit tests
* Update unit tests
* Try to fix a shell command
* Update unit tests
* Update unit tests
* Update unit tests
* Delete useless code
* Update address of BBP microcircuit portal. Add test cases for bbp.py
* Ran bbp.ipynb
* Update unit tests
* Update unit tests
* Update unit tests
* Update unit tests
* Change sciunit.settings to sciunit.config_set
* update
* merger; first incremental pull request; basic NeuronUnit dev fork from scidash with minimalist changes to support multispiking optimization; adding in continuous integration; update
* update circle
* refactor code
* continuous integration plus coverage related deletions
* Removing duplicate jNeuroMLBackend import (#251)
* resolved merge
* rebuild and squash circle ci

Co-authored-by: Mark Watts <[email protected]>
Co-authored-by: Zhiwei <[email protected]>
Co-authored-by: Russell Jarvis <[email protected]>
Co-authored-by: Richard Gerkin <[email protected]>

* merge into optimization branch
* bug fix for neuronunit
* replaced dtc with model almost
* managed to factor out DataTC all but in name
* managed to factor out DataTC all but in name
* factor out datatc
* update files
* updates
* update
* updated unit tests
* update setup config
* refactor setup.cfg
* update
* typing unnecessary in setup
* adding in plt dependency to opt_man
* update
* update for CI
* update CI
* update for CI
* further factor out DataTC
* factoring out
* update
* update
* update
* update
* CI updates
* circle update
* coverage run
* speed up CI
* update
* update ci
* update ci
* update ci
* update ci
* update ci
* update
* update
* more skip for CI
* update ci
* update rm
* factor out DataTC
* update
* update ci
* update ci

Co-authored-by: Russell Jarvis <[email protected]>
Co-authored-by: Mark Watts <[email protected]>
Co-authored-by: Russell Jarvis <[email protected]>
Co-authored-by: Zhiwei <[email protected]>
Co-authored-by: Rick Gerkin <[email protected]>
1 parent: 701c33a · commit: 15ab9e5

File tree

142 files changed (+10665 −78197 lines)


.circleci/config.yml

Lines changed: 65 additions & 0 deletions
@@ -0,0 +1,65 @@
+defaults: &defaults
+  working_directory: ~/markovmodel/PyEMMA
+  docker:
+    - image: continuumio/miniconda3
+
+inst_conda_bld: &inst_conda_bld
+  - run: conda config --add channels conda-forge
+  - run: conda config --set always_yes true
+  - run: conda config --set quiet true
+  - run: conda install conda-build
+
+version: 2
+
+jobs:
+  build:
+    <<: *defaults
+    parallelism: 1
+    steps:
+      - checkout
+      - run: git fetch --unshallow || true
+      - run: apt-get install -y cpp gcc
+      - run: apt-get install -y libx11-6 python-dev git build-essential
+      - run: apt-get install -y autoconf automake gcc g++ make gfortran
+      - run: apt-get install -y python-tables
+      - run: apt-get install -y libhdf5-serial-dev
+
+      - run: conda config --add channels conda-forge
+      - run: conda config --set always_yes true
+      - run: conda config --set quiet true
+      - run: conda install conda-build
+      - run: pip install pip --upgrade;
+      - run: conda install numpy;
+      - run: conda install numba;
+      - run: conda install dask;
+      - run: pip install tables
+      - run: pip install scipy==1.5.4
+      - run: pip install coverage
+      - run: pip install cython
+      - run: pip install asciiplotlib;
+      - run: pip install ipfx
+      - run: pip install streamlit
+      - run: pip install sklearn
+      - run: pip install seaborn
+      - run: pip install frozendict
+      - run: pip install igor
+      #- run: pip install plotly
+      - run: pip install allensdk==0.16.3
+      - run: pip install --upgrade colorama
+      - run: pip install -e .
+      - run: rm -rf /opt/conda/lib/python3.8/site-packages/sciunit
+      - run: git clone -b neuronunit https://github.com/russelljjarvis/jit_hub.git
+      - run: cd jit_hub; pip install -e .; cd ..;
+      - run: git clone -b neuronunit_reduced_cells https://github.com/russelljjarvis/BluePyOpt.git
+      - run: cd BluePyOpt; pip install -e .
+      - run: git clone -b dev https://github.com/russelljjarvis/sciunit.git
+
+      - run: cd sciunit; pip install -e .; cd ..;
+      - run: pip install git+https://github.com/russelljjarvis/eFEL
+      - run: pip install coveralls
+      - run: sh build.sh
+      - run: sh test.sh;
+      #- run: cd neuronunit/unit_test; coveralls -m unittest rheobase_model_test.py; cd -;
+      #- run: cd neuronunit/unit_test; coverage report
+      #- store_artifacts:
+      #    path: htmlcov

.gitignore

Lines changed: 1 addition & 0 deletions
@@ -1,3 +1,4 @@
+*.vscode
 *.py[co]
 *.pkl
 *.p

.travis.yml

Lines changed: 12 additions & 0 deletions
@@ -21,13 +21,25 @@ install:
 - conda info -a
 - pip install -U pip
 - pip install .
+- pip install sklearn
+- pip install seaborn
 - pip install coveralls
 - pip install pylmeasure # required by morphology tests
+- sh build.sh
+
 ######################################################
 
 script:
 - export NC_HOME='.' # NeuroConstruct isn't used but tests need this
   # variable set to pass.
+<<<<<<< HEAD
+- cd neuronunit/unit_test; python -m unittest scores_unit_test.py; cd -;
+- cd neuronunit/unit_test; python -m unittest rheobase_dtc_test.py; cd -;
+#- sh test.sh
+=======
+#- cd neuronunit/unit_test; python -m unittest scores_unit_test.py; cd -;
+#- cd neuronunit/unit_test; python -m unittest rheobase_model_test.py; cd -;
 - sh test.sh
+>>>>>>> 9fb0c2e613a1bf059f38eeeae80582d0cfb11f2f
 after_success:
 - coveralls

README.md

Lines changed: 4 additions & 0 deletions
@@ -1,3 +1,7 @@
+### Circle CI russelljjarvis/optimization build:
+[![Build Status](https://circleci.com/gh/russelljjarvis/neuronunit/tree/optimization.svg?style=svg)](https://app.circleci.com/pipelines/github/russelljjarvis/neuronunit/)
+### Travis CI scidash/optimization build:
+[![Travis](https://travis-ci.org/scidash/neuronunit.svg?branch=optimization)](https://travis-ci.org/scidash/neuronunit?branch=optimization)
 | Master | Dev |
 | ------------- | ------------- |
 | [![Travis](https://travis-ci.org/scidash/neuronunit.svg?branch=master)](https://travis-ci.org/scidash/neuronunit) | [![Travis](https://travis-ci.org/scidash/neuronunit.svg?branch=dev)](https://travis-ci.org/scidash/neuronunit) |

asv.conf.json

Lines changed: 160 additions & 0 deletions
@@ -0,0 +1,160 @@
+{
+    // The version of the config file format. Do not change, unless
+    // you know what you are doing.
+    "version": 1,
+
+    // The name of the project being benchmarked
+    "project": "neuronunit",
+
+    // The project's homepage
+    "project_url": "https://github.com/russelljjarvis/neuronunit",
+
+    // The URL or local path of the source code repository for the
+    // project being benchmarked
+    "repo": ".",
+
+    // The Python project's subdirectory in your repo. If missing or
+    // the empty string, the project is assumed to be located at the root
+    // of the repository.
+    // "repo_subdir": "",
+
+    // Customizable commands for building, installing, and
+    // uninstalling the project. See asv.conf.json documentation.
+    //
+    // "install_command": ["in-dir={env_dir} python -mpip install {wheel_file}"],
+    // "uninstall_command": ["return-code=any python -mpip uninstall -y {project}"],
+    // "build_command": [
+    //     "python setup.py build",
+    //     "PIP_NO_BUILD_ISOLATION=false python -mpip wheel --no-deps --no-index -w {build_cache_dir} {build_dir}"
+    // ],
+
+    // List of branches to benchmark. If not provided, defaults to "master"
+    // (for git) or "default" (for mercurial).
+    // "branches": ["master"], // for git
+    // "branches": ["default"], // for mercurial
+
+    // The DVCS being used. If not set, it will be automatically
+    // determined from "repo" by looking at the protocol in the URL
+    // (if remote), or by looking for special directories, such as
+    // ".git" (if local).
+    // "dvcs": "git",
+
+    // The tool to use to create environments. May be "conda",
+    // "virtualenv" or other value depending on the plugins in use.
+    // If missing or the empty string, the tool will be automatically
+    // determined by looking for tools on the PATH environment
+    // variable.
+    "environment_type": "virtualenv",
+
+    // timeout in seconds for installing any dependencies in environment
+    // defaults to 10 min
+    //"install_timeout": 600,
+
+    // the base URL to show a commit for the project.
+    // "show_commit_url": "http://github.com/owner/project/commit/",
+
+    // The Pythons you'd like to test against. If not provided, defaults
+    // to the current version of Python used to run `asv`.
+    // "pythons": ["2.7", "3.6"],
+
+    // The list of conda channel names to be searched for benchmark
+    // dependency packages in the specified order
+    // "conda_channels": ["conda-forge", "defaults"],
+
+    // The matrix of dependencies to test. Each key is the name of a
+    // package (in PyPI) and the values are version numbers. An empty
+    // list or empty string indicates to just test against the default
+    // (latest) version. null indicates that the package is to not be
+    // installed. If the package to be tested is only available from
+    // PyPi, and the 'environment_type' is conda, then you can preface
+    // the package name by 'pip+', and the package will be installed via
+    // pip (with all the conda available packages installed first,
+    // followed by the pip installed packages).
+    //
+    // "matrix": {
+    //     "numpy": ["1.6", "1.7"],
+    //     "six": ["", null], // test with and without six installed
+    //     "pip+emcee": [""], // emcee is only available for install with pip.
+    // },
+
+    // Combinations of libraries/python versions can be excluded/included
+    // from the set to test. Each entry is a dictionary containing additional
+    // key-value pairs to include/exclude.
+    //
+    // An exclude entry excludes entries where all values match. The
+    // values are regexps that should match the whole string.
+    //
+    // An include entry adds an environment. Only the packages listed
+    // are installed. The 'python' key is required. The exclude rules
+    // do not apply to includes.
+    //
+    // In addition to package names, the following keys are available:
+    //
+    // - python
+    //     Python version, as in the *pythons* variable above.
+    // - environment_type
+    //     Environment type, as above.
+    // - sys_platform
+    //     Platform, as in sys.platform. Possible values for the common
+    //     cases: 'linux2', 'win32', 'cygwin', 'darwin'.
+    //
+    // "exclude": [
+    //     {"python": "3.2", "sys_platform": "win32"}, // skip py3.2 on windows
+    //     {"environment_type": "conda", "six": null}, // don't run without six on conda
+    // ],
+    //
+    // "include": [
+    //     // additional env for python2.7
+    //     {"python": "2.7", "numpy": "1.8"},
+    //     // additional env if run on windows+conda
+    //     {"platform": "win32", "environment_type": "conda", "python": "2.7", "libpython": ""},
+    // ],
+
+    // The directory (relative to the current directory) that benchmarks are
+    // stored in. If not provided, defaults to "benchmarks"
+    // "benchmark_dir": "benchmarks",
+
+    // The directory (relative to the current directory) to cache the Python
+    // environments in. If not provided, defaults to "env"
+    "env_dir": ".asv/env",
+
+    // The directory (relative to the current directory) that raw benchmark
+    // results are stored in. If not provided, defaults to "results".
+    "results_dir": ".asv/results",
+
+    // The directory (relative to the current directory) that the html tree
+    // should be written to. If not provided, defaults to "html".
+    "html_dir": ".asv/html",
+
+    // The number of characters to retain in the commit hashes.
+    // "hash_length": 8,
+
+    // `asv` will cache results of the recent builds in each
+    // environment, making them faster to install next time. This is
+    // the number of builds to keep, per environment.
+    // "build_cache_size": 2,
+
+    // The commits after which the regression search in `asv publish`
+    // should start looking for regressions. Dictionary whose keys are
+    // regexps matching to benchmark names, and values corresponding to
+    // the commit (exclusive) after which to start looking for
+    // regressions. The default is to start from the first commit
+    // with results. If the commit is `null`, regression detection is
+    // skipped for the matching benchmark.
+    //
+    // "regressions_first_commits": {
+    //     "some_benchmark": "352cdf", // Consider regressions only after this commit
+    //     "another_benchmark": null, // Skip regression detection altogether
+    // },
+
+    // The thresholds for relative change in results, after which `asv
+    // publish` starts reporting regressions. Dictionary of the same
+    // form as in ``regressions_first_commits``, with values
+    // indicating the thresholds. If multiple entries match, the
+    // maximum is taken. If no entry matches, the default is 5%.
+    //
+    // "regressions_thresholds": {
+    //     "some_benchmark": 0.01, // Threshold of 1%
+    //     "another_benchmark": 0.5, // Threshold of 50%
+    // },
+}
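Note that asv.conf.json, as shown in the hunk above, is JSON with JavaScript-style `//` comments, which asv's own loader accepts but Python's `json.loads` rejects. A minimal sketch of reading such a file outside asv (the helper name `load_asv_conf` is hypothetical; it assumes every comment occupies its own line, as in this config, and does not handle inline comments or trailing commas):

```python
import json

def load_asv_conf(text: str) -> dict:
    """Parse asv.conf.json-style content by dropping whole-line // comments.

    Assumption: comments occupy whole lines; // inside string values
    (e.g. URLs) is preserved because only comment-leading lines are removed.
    """
    kept = [line for line in text.splitlines()
            if not line.lstrip().startswith("//")]
    return json.loads("\n".join(kept))

# Abbreviated stand-in for the committed config:
sample = """{
    // The version of the config file format.
    "version": 1,
    "project": "neuronunit",
    "project_url": "https://github.com/russelljjarvis/neuronunit",
    "environment_type": "virtualenv"
}"""

conf = load_asv_conf(sample)
print(conf["project"])  # neuronunit
```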

benchmarks/__init__.py

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+
benchmarks/benchmarks.py

Lines changed: 19 additions & 0 deletions
@@ -0,0 +1,19 @@
+# Write the benchmarking functions here.
+# See "Writing benchmarks" in the asv docs for more information.
+from neuronunit.unit_test.opt_ephys_properties import testOptimizationEphysCase
+from neuronunit.unit_test.scores_unit_test import testOptimizationAllenMultiSpike
+from neuronunit.unit_test.rheobase_model_test import testModelRheobase
+
+
+class TimeSuite:
+    """
+    An example benchmark that times the performance of various kinds
+    of iterating over dictionaries in Python.
+    """
+    def speed_check():
+        testModelRheobase.setUp()
+        testModelRheobase.test_opt_1()
+
+class MemSuite:
+    def mem_list(self):
+        return [0] * 256
```
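A caveat on the benchmark file above: asv discovers benchmarks by method-name prefix (`time_*`, `mem_*`, `peakmem_*`, `track_*`) and calls a `setup()` method before each run, so the `speed_check` method, which has neither the prefix nor a `self` parameter, would most likely never be collected. A minimal sketch of the conventional shape (the benchmark bodies here are self-contained placeholders, not the neuronunit calls from the committed file):

```python
# Sketch of asv's naming conventions; bodies are placeholders.

class TimeSuite:
    def setup(self):
        # asv calls setup() before timing each time_* method
        self.data = {i: str(i) for i in range(1000)}

    def time_iter_keys(self):
        # time_* methods are wall-clock benchmarks
        for key in self.data:
            pass

class MemSuite:
    def mem_list(self):
        # mem_* methods report the size of the returned object
        return [0] * 256
```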

build.sh

Lines changed: 36 additions & 0 deletions
@@ -0,0 +1,36 @@
+apt-get install -y cpp gcc
+apt-get install -y libx11-6 python-dev git build-essential
+apt-get install -y autoconf automake gcc g++ make gfortran
+apt-get install -y python-tables
+apt-get install -y libhdf5-serial-dev
+conda install numpy;
+conda install numba;
+conda install dask;
+pip install pip --upgrade;
+pip install tables
+pip install scipy==1.5.4
+pip install -e .
+pip install coverage
+git clone -b neuronunit https://github.com/russelljjarvis/jit_hub.git
+cd jit_hub; pip install -e .; cd ..;
+pip install cython
+pip install asciiplotlib;
+git clone -b neuronunit_reduced_cells https://github.com/russelljjarvis/BluePyOpt.git
+cd BluePyOpt; pip install -e .
+pip install git+https://github.com/russelljjarvis/eFEL
+pip install ipfx
+pip install streamlit
+pip install sklearn
+pip install seaborn
+pip install frozendict
+pip install plotly
+<<<<<<< HEAD
+=======
+pip install igor
+pip install pylmeasure
+>>>>>>> 9fb0c2e613a1bf059f38eeeae80582d0cfb11f2f
+pip install --upgrade colorama
+rm -rf /opt/conda/lib/python3.8/site-packages/sciunit
+git clone -b dev https://github.com/russelljjarvis/sciunit.git
+cd sciunit; pip install -e .; cd ..;
+pip install allensdk==0.16.3
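Note that this hunk, like the .travis.yml hunk above, commits unresolved git conflict markers (`<<<<<<< HEAD` … `>>>>>>> 9fb0c2e…`) straight into the file, so the shell would error when it reaches them. A small sketch of a pre-commit style check that could catch this (the helper name `find_conflict_markers` is hypothetical, not part of this repository):

```python
def find_conflict_markers(text: str) -> list:
    """Return 1-based line numbers of unresolved git conflict markers."""
    markers = ("<<<<<<< ", "=======", ">>>>>>> ")
    return [lineno for lineno, line in enumerate(text.splitlines(), start=1)
            if line.startswith(markers)]

# Excerpt from the build.sh hunk above:
snippet = """pip install plotly
<<<<<<< HEAD
=======
pip install igor
pip install pylmeasure
>>>>>>> 9fb0c2e613a1bf059f38eeeae80582d0cfb11f2f
pip install --upgrade colorama"""

print(find_conflict_markers(snippet))  # [2, 3, 6]
```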

codecov.yml

Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
+coverage:
+  range: "90...100"
+
+  status:
+    project:
+      default:
+        target: "90%"
+        threshold: "5%"
+    patch: false

environment.yml

Lines changed: 12 additions & 4 deletions
@@ -6,7 +6,15 @@ dependencies:
 - pip:
   - neo==0.4
   - elephant
-  - scoop
-  - git+http://github.com/scidash/sciunit@dev#egg=sciunit-1.5.6
-  - git+http://github.com/rgerkin/[email protected]#egg=allensdk-0.12.4.1
-  - git+http://github.com/rgerkin/pyNeuroML@master#egg=pyneuroml-0.2.3
+  - dask
+  - numba
+  - streamlit
+  - sklearn
+  - seaborn
+  - frozendict
+  - plotly
+  - asciiplotlib
+  - ipfx
+  - git+https://github.com/russelljjarvis/jit_jub@neuronunit
+  - git+https://github.com/russelljjarvis/BluePyOpt@neuronunit_reduced_cells
+  - git+https://github.com/russelljjarvis/sciunit@dev

0 commit comments
