testing ijulia notebooks #268
Comments
We should have a tool to rerun a notebook and compare the previously stored outputs with the results of re-running the cells. I think @jhamrick and @ellisonbg are working on that at https://github.com/jupyter/nbgrader (to partially autograde students' assignments submitted as notebooks). They will be more able than me to tell how much of this is usable with non-Python kernels. Would that suit you?
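[Editor's note] The rerun-and-compare idea above can be sketched in a few lines. This is a minimal illustration, not any project's actual code: it assumes notebooks shaped like nbformat JSON (a `cells` list, where code cells have an `outputs` list of `stream` / `execute_result` entries), and the function names `output_text` and `diff_notebooks` are invented for the example.

```python
def output_text(cell):
    """Concatenate the visible text of a code cell's outputs."""
    parts = []
    for out in cell.get("outputs", []):
        if out.get("output_type") == "stream":
            parts.append("".join(out.get("text", "")))
        elif out.get("output_type") == "execute_result":
            parts.append("".join(out.get("data", {}).get("text/plain", "")))
    return "".join(parts)

def diff_notebooks(stored, rerun):
    """Return indices of code cells whose outputs changed after re-execution."""
    changed = []
    for i, (old, new) in enumerate(zip(stored["cells"], rerun["cells"])):
        if old.get("cell_type") != "code":
            continue  # markdown etc. has no outputs to compare
        if output_text(old) != output_text(new):
            changed.append(i)
    return changed
```

A real tool would first re-execute the notebook through a kernel (Python or Julia) to produce `rerun`, then report the changed cells.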
nbgrader doesn't compare new outputs to previously stored outputs, so I'm not sure that's exactly what you want (though in theory it should work with any kernel). I'm pretty sure there is an existing project that does that -- running the notebook and comparing outputs -- but I can't remember what it's called, sorry :-/ There's also the nose plugin, which is specific to IPython, but may be useful as an example of what can be done in this vein: https://github.com/taavi/ipython_nose
Just comparing new outputs to old outputs won't quite work, since some outputs might be system-dependent, and others should only match up to a numerical tolerance. We'll have to add some metadata to store this logic, though it would be nice if it wasn't visible by default. It could just be some hidden cells that run something like a tolerance-aware comparison.
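[Editor's note] The tolerance point above can be made concrete. This is a hedged sketch with stdlib Python only; the name `outputs_match`, the regex, and the default tolerance are all assumptions for illustration. It treats two output strings as equal if their non-numeric text matches exactly and each embedded number agrees within a relative tolerance.

```python
import math
import re

# Matches integers, decimals, and scientific notation embedded in output text.
NUM = re.compile(r"[-+]?\d+(?:\.\d+)?(?:[eE][-+]?\d+)?")

def outputs_match(old, new, rel_tol=1e-6):
    """Compare output strings, letting numbers differ within rel_tol."""
    old_nums = NUM.findall(old)
    new_nums = NUM.findall(new)
    if len(old_nums) != len(new_nums):
        return False
    # With numbers masked out, the surrounding text must match exactly.
    if NUM.sub("#", old) != NUM.sub("#", new):
        return False
    return all(math.isclose(float(a), float(b), rel_tol=rel_tol)
               for a, b in zip(old_nums, new_nums))
```

The per-cell metadata the comment proposes could then simply store a `rel_tol` override for cells whose output is noisier than the default.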
Ah, sorry, I was sure it was in nbgrader. I grepped in the IPython core and found that. Maybe @minrk or @takluyver have a gist with a CLI tool to do that?
nbgrader uses asserts for checking, rather than comparing outputs. If you want 'real' tests, I think that's much better than comparing output. I put together this proof of concept a long time ago as the basis for running notebooks and comparing outputs; runipy is a more complete project based on that, which might be useful. In IPython 3.0, there is an execute preprocessor that runs the notebook. I don't like doctests in general, but checking the sanity of your docs by stepping through a notebook and verifying that it doesn't throw exceptions, or possibly checking that you get the right kind of output, is sensible (we should be doing this in IPython). If you want to write notebooks that are tests, then explicit asserts, etc. in the code itself is what I would do.
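[Editor's note] The "verify it doesn't throw exceptions" check suggested above can be sketched as a smoke test. This is a toy illustration, not the IPython execute preprocessor: `exec` over a list of source strings stands in for sending each code cell to a real kernel (which is what an actual tool would do for Julia), and `smoke_test` is an invented name.

```python
def smoke_test(cell_sources):
    """Run each code cell in a shared namespace; return (index, error) pairs."""
    ns = {}
    failures = []
    for i, src in enumerate(cell_sources):
        try:
            exec(src, ns)  # stand-in for executing the cell on a kernel
        except Exception as e:
            failures.append((i, repr(e)))
    return failures
```

An empty result means every cell ran cleanly, which is exactly the doctest-like bit-rot check the original issue asks for; asserts written inside the cells themselves then fail the same way any other exception does.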
+1 to all that @minrk says.

Brian E. Granger
NBInclude.jl should resolve this. |
We're starting to put together a collection of notebooks at https://github.com/JuliaOpt/juliaopt-notebooks and one thing I'm worried about is bit rot in the notebooks. Are there any tools that could be used like a doctest for IJulia notebooks?
Ref JuliaOpt/juliaopt-notebooks#5.