write up steps to compile the Mandelbrot simulator #168

Open · wants to merge 4 commits into base: mandelbrot_compile from transform_for_compile
Conversation

@ev-br (Collaborator) commented Jul 19, 2023

No description provided.

@ev-br changed the title from "write up steps to compile the Mandelbrot" to "write up steps to compile the Mandelbrot simulator" on Jul 19, 2023
ev-br added 2 commits July 20, 2023 12:46
$ jupyter nbconvert --to markdown steps_to_compile_mandelbrot.ipynb
Review comment on the converted notebook's output (a long traceback through `~/mambaforge/envs/torch_nightly/lib/python3.8/site-packages/torch/utils/_stats.py`):
Truncate this output with a ... as it is always a pain to scroll through lol.
We won't show all this output in the tutorial as it's rather overwhelming.

Reply (Collaborator):
We'll mention it, but won't show it explicitly of course.

@lezcano (Collaborator) commented Jul 24, 2023

Ok, now I've given it a proper read.

So, at the moment we're touching on:

  1. Linear traces and Dynamo's loop unrolling
  2. Data-dependent code
  3. Complex numbers are not supported
  4. A bug we found
  5. Manual loop unrolling
  6. [marginally] CUDA

This is a bit too all over the place, as some of these points are really important and others are a bit anecdotal. For example, while complex numbers are cool, 99% of people do not use them, and a fairly high percentage of those may even get scared off from the get-go, because they do not have an intuitive understanding of them.

Similarly for data-dependent code. We have a fair amount of engineering in place to support data-dependent code, and it may even be available in torch.compile for PyTorch 2.1. I also expect that bug to be fixed in the following weeks.
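
To make "data-dependent code" concrete, here is a minimal sketch of the pattern in question: a Python branch whose outcome depends on the values in an array, not just its shape or dtype. It assumes a PyTorch version where torch.compile can trace NumPy code (2.1 or later); the function itself is purely illustrative:

```python
import numpy as np
import torch

@torch.compile
def normalize_if_spread_out(x):
    # The `if` below depends on the values in x. With current torch.compile
    # this typically causes a graph break: Dynamo traces up to the condition,
    # evaluates it eagerly, and then continues tracing in a fresh graph.
    if np.std(x) > 1.0:
        x = x / np.std(x)
    return x - np.mean(x)

out = normalize_if_spread_out(np.random.randn(1000).astype(np.float32))
```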

What about a structure as follows:

  1. Introduction. [We want to trace through numpy programs. Do we allow mixed NumPy and PyTorch programs?]
  2. The running example: kmeans.
  3. Linear traces and Dynamo's loop unrolling. [Compile just one iteration. Benchmark. Perhaps peek at the code; see the sketch after this list.]
  4. Manual loop unrolling. [Same same. Benchmark]
  5. Run your code on CUDA! [Benchmark. It's so slow. Implement in PyTorch. It's fast! Profile. Discuss HtD and DtH. @torch.compile(return_ndarray=False)]
  6. Prototyping: Run your NumPy code on PyTorch [Not sure about this section, should we have it? If we do, we should put a thousand warnings at the beginning saying THIS IS GOING TO BE SUPER SLOW. USE THIS JUST TO TEST!!]
  7. Differences between NumPy and PyTorch [Catch-all section. Discuss a number of patterns that are common in NumPy but a no-no in PyTorch and compiled code. E.g. in-place ops, boolean masking, complex numbers, torch.float64 does not vectorise on CPU, etc.]
  8. Conclusion
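
A minimal sketch of what item 3 might look like, assuming PyTorch >= 2.1 (where torch.compile can trace plain NumPy code); the function name, shapes, and data below are illustrative, not the tutorial's final code:

```python
import numpy as np
import torch

def kmeans_assign(X, means):
    # One k-means assignment step, written entirely in NumPy:
    # distance from every point to every centroid, then argmin over centroids.
    return np.argmin(np.linalg.norm(X - means[:, None], axis=2), axis=0)

compiled_assign = torch.compile(kmeans_assign)

X = np.random.randn(10_000, 2).astype(np.float32)
means = X[:3].copy()

labels = compiled_assign(X, means)  # traced on the first call, reused afterwards
```

Benchmarking this against the plain NumPy call, and peeking at the generated code (e.g. with TORCH_LOGS=output_code), would cover the "Benchmark. Perhaps peek at the code" part of item 3.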

@ev-br (Collaborator, Author) commented Jul 25, 2023

The proposed structure indeed sounds great for a PyTorch-centered audience.

This write-up, as discussed, was never meant to be a first text on the topic, and definitely not a PyTorch blog post. It was meant as a "tips, tricks and workarounds" showcase, to come last in a series of topics. It naturally includes bits and pieces from previous, yet-unwritten sections, so it is naturally all over the place. If it served as a starting point for designing the proper PyTorch blog post, great, it has served its main purpose :-).

So I'm happy to shelve this for now, and to move parts of it either into section 7 above or into a follow-up post once the foundational one is available.

All that said, a few notes in no particular order:

  • Maybe better not to frame data-dependent flows, explicit loops etc. as a difference between NumPy and PyTorch. ISTM the difference is that historically NumPy caters to a wider range of workloads, and iterative algorithms are not that uncommon in numerics :-).
  • IMO there's value in discussing an example which is not clustering or regression or DL, and which is a bit more DIY than slapping together several linalg black boxes (see the sketch after this list for the sort of thing meant here). Again, maybe not as a first example and not for a PyTorch technical-user audience.
  • I'm rather hard pressed to believe that complex numbers are that spooky :-). Surely people who have an intuitive understanding of null spaces of vector operators (SVD / PCA etc.) should not have that big a problem with the complex plane :-).
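
For concreteness, here is a minimal sketch of the kind of DIY, iterative example meant in the second bullet: one step of a Mandelbrot escape-time iteration in plain NumPy, kept in two real arrays to sidestep the complex-number limitation from the summary above, and compiled with torch.compile. It assumes PyTorch >= 2.1 NumPy tracing; the names, grid size, and iteration count are illustrative:

```python
import numpy as np
import torch

@torch.compile
def mandelbrot_step(zx, zy, cx, cy, counts, mask):
    # One iteration of z <- z**2 + c, with z and c stored as separate
    # real/imaginary arrays so the trace never touches complex dtypes.
    zx_new = np.where(mask, zx * zx - zy * zy + cx, zx)
    zy_new = np.where(mask, 2.0 * zx * zy + cy, zy)
    mask_new = mask & (zx_new * zx_new + zy_new * zy_new < 4.0)  # still bounded?
    counts_new = counts + mask_new                               # escape-time counter
    return zx_new, zy_new, counts_new, mask_new

# Grid of candidate points c = cx + 1j*cy, represented as two real arrays.
cx, cy = np.meshgrid(np.linspace(-2.0, 1.0, 400), np.linspace(-1.5, 1.5, 400))
zx, zy = np.zeros_like(cx), np.zeros_like(cy)
counts = np.zeros(cx.shape, dtype=np.int64)
mask = np.ones(cx.shape, dtype=bool)

# The outer loop stays in Python: only the per-iteration array work is compiled,
# so Dynamo compiles one step once instead of unrolling all 50 iterations.
for _ in range(50):
    zx, zy, counts, mask = mandelbrot_step(zx, zy, cx, cy, counts, mask)
```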

@lezcano (Collaborator) commented Jul 25, 2023

On your notes:

  1. Sure, we just have to word it correctly
  2. Let's discuss what would be a nice example
  3. You'd be surprised. Even more now that everyone wants to do DL.

@ev-br (Collaborator, Author) commented Jul 25, 2023

> Let's discuss what would be a nice example

For the PyTorch blog, I'm all for the kmeans/clustering example.

> You'd be surprised. Even more now that everyone wants to do DL.

I have counterexamples :-).

@ev-br force-pushed the transform_for_compile branch 10 times, most recently from 495b851 to b104441 on September 17, 2023 18:37
@ev-br force-pushed the transform_for_compile branch from b104441 to 79d263c on September 17, 2023 18:55