testing ijulia notebooks #268

Closed
mlubin opened this issue Jan 31, 2015 · 7 comments
@mlubin
Member

mlubin commented Jan 31, 2015

We're starting to put together a collection of notebooks at https://github.com/JuliaOpt/juliaopt-notebooks and one thing I'm worried about is bit rot in the notebooks. Are there any tools that could be used like a doctest for IJulia notebooks?
Ref JuliaOpt/juliaopt-notebooks#5.

@Carreau
Contributor

Carreau commented Jan 31, 2015

We should have a tool that reruns a notebook and compares the previously stored outputs with the outputs produced by re-running the cells. I think @jhamrick and @ellisonbg are working on that at https://github.com/jupyter/nbgrader (to partially autograde student assignments submitted as notebooks).

They are better placed than I am to say how much of this is usable with non-Python kernels.

Would that suit you?

@jhamrick

nbgrader doesn't compare new outputs to previously stored outputs, so I'm not sure if that's exactly what you want (though in theory it should work with any kernel).

I'm pretty sure there is an existing project that does that, though -- running the notebook and comparing outputs -- but I can't remember what it's called, sorry :-/

There's also a nose plugin that is specific to IPython, but it may be useful as an example of the kinds of things that can be done in this vein: https://github.com/taavi/ipython_nose

@mlubin
Member Author

mlubin commented Jan 31, 2015

Just comparing new outputs to old outputs won't quite work, since some outputs might be system dependent and others should only match up to a numerical tolerance. We'll have to add some metadata to store this logic, though it would be nice if it weren't visible by default. It could just be some hidden cells that run something like @test and @test_approx_eq. Would that make sense?
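
A minimal sketch of what such a hidden cell could look like, using Base.Test from the Julia of that era; the names `solve_status` and `objective_value` and the expected value are hypothetical stand-ins for results computed in earlier, visible cells:

```julia
# Hidden test cell: executed when the notebook is checked, not shown to readers.
using Base.Test

# `solve_status` and `objective_value` are assumed to be defined by earlier visible cells.
@test solve_status == :Optimal
# Check the numeric result up to a tolerance rather than comparing printed output.
@test_approx_eq objective_value 3.75
```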

@Carreau
Contributor

Carreau commented Feb 1, 2015

> I'm pretty sure there is an existing project that does that, though -- running the notebook and comparing outputs -- but I can't remember what it's called, sorry :-/

Argh, sorry, I was sure it was in nbgrader.

I grepped in the IPython core and found that. It can probably be extended into an external command.

Maybe @minrk or @takluyver have a gist with a CLI tool to do that?

@minrk
Contributor

minrk commented Feb 1, 2015

nbgrader uses asserts for checking, rather than comparing outputs. If you want 'real' tests, I think that's much better than comparing output. I put together this proof of concept (https://gist.github.com/minrk/2620735) a long time ago as the basis for running notebooks and comparing outputs. runipy (https://github.com/paulgb/runipy) is a more complete project based on that, which might be useful.

In IPython 3.0, there is an execute preprocessor that runs the notebook (ipython nbconvert --execute --to notebook mynb.ipynb --output=newnb.ipynb). You can use that to produce the new notebook to compare with the original.

I don't like doctests in general, but checking the sanity of your docs by stepping through a notebook and verifying that it doesn't throw exceptions, or possibly checking that you get the right kind of output is sensible (we should be doing this in IPython). If you want to write notebooks that are tests, then explicit asserts, etc. in the code itself is what I would do.
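
On the Julia side, a rough sketch of the "step through the notebook and make sure nothing throws" approach might look like the following. It assumes the nbformat 4 layout (a top-level "cells" array with "source" fragments), the JSON.jl package, and the Julia-0.x-era single-argument include_string; cells are evaluated in the current process rather than through an IJulia kernel:

```julia
# Sketch: execute every code cell of a notebook and error out if any cell throws.
using JSON

function run_notebook(path)
    nb = JSON.parsefile(path)
    for (i, cell) in enumerate(nb["cells"])
        cell["cell_type"] == "code" || continue
        code = join(cell["source"])
        try
            include_string(code)   # evaluate the cell's source in the current module
        catch err
            error("cell $i of $path failed: $err")
        end
    end
end

run_notebook("mynb.ipynb")
```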

@ellisonbg

+1 to all that @minrk says


@stevengj
Member

NBInclude.jl should resolve this.
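
For reference, a minimal sketch of wiring that into a package's test suite, assuming NBInclude's @nbinclude macro (older versions exported an nbinclude function) and a hypothetical notebook path; any cell that throws makes the test run fail:

```julia
# In test/runtests.jl: execute the notebook's code cells; an exception fails the tests.
using NBInclude

@nbinclude("notebooks/example.ipynb")   # hypothetical path to a tracked notebook
```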
