jpeg-decoder is slower than libjpeg-turbo #155

Open

Shnatsel opened this issue Jun 20, 2020 · 22 comments

@Shnatsel commented Jun 20, 2020

jpeg_decoder::decoder::Decoder::decode_internal seems to take 50% of the decoding time, or over 75% when using Rayon, since this part is not parallelized. This part alone takes more time than libjpeg-turbo needs to decode the entire image.

It appears that jpeg-decoder reads one byte at a time from the input stream and executes some complex logic for every byte, e.g. in HuffmanDecoder::read_bits and a number of other functions called from decode_internal. I suspect performing a single large read (a few kilobytes), then using something that lowers to memchr calls to find marker boundaries, would be much faster.
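
For illustration, here's roughly what that could look like (a hypothetical sketch assuming the memchr crate, not a drop-in patch for the decoder):

use std::io::Read;

// Scan the stream in large chunks and let memchr's SIMD-accelerated byte
// search locate candidate 0xFF marker bytes, instead of branching per byte.
fn find_marker_candidates<R: Read>(mut input: R) -> std::io::Result<Vec<usize>> {
    let mut buf = [0u8; 16 * 1024]; // one big read replaces thousands of 1-byte reads
    let mut positions = Vec::new();
    let mut base = 0;
    loop {
        let n = input.read(&mut buf)?;
        if n == 0 {
            break;
        }
        let mut start = 0;
        while let Some(i) = memchr::memchr(0xFF, &buf[start..n]) {
            positions.push(base + start + i);
            start += i + 1;
        }
        base += n;
    }
    Ok(positions)
}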

Profiled using this file: https://commons.wikimedia.org/wiki/File:Sun_over_Lake_Hawea,_New_Zealand.jpg via image crate, jpeg-decoder v0.1.19

Single-threaded profile: https://share.firefox.dev/30ZTmks
Parallel profile: https://share.firefox.dev/3dqzE49

@lovasoa commented Jun 20, 2020

Did you use a BufReader for this test?

@Shnatsel commented Jun 20, 2020

Yes. Here's the code used for testing:

fn main() -> std::io::Result<()> {
    // First CLI argument: path of the image to decode
    let path = std::env::args().nth(1).unwrap();
    let _ = image::io::Reader::open(path)?
        .with_guessed_format()
        .unwrap()
        .decode()
        .unwrap();
    Ok(())
}

image::io::Reader::open does require BufRead: https://github.com/image-rs/image/blob/0b21ce8bc8d0b697964820e649fd40127ef404fa/src/io/reader.rs#L124

@Shnatsel:

Initial experiments with buffering are available in the buffered-reads branch but do not demonstrate significantly better results so far.

@Shnatsel:

jpeg_decoder::huffman::HuffmanDecoder::read_bits accounts for 23% of all time spent; it reads a byte at a time and spends most of that time calling std::io::Read::read_exact. It also carries extra, complex logic because it cannot give back a byte it has already consumed from the reader. So that's probably where buffered reads would actually make a difference.

Shnatsel changed the title from "Image reading and marker extraction is slow" to "Decoder::decode_internal is slow" on Jun 25, 2020
@thomcc commented Jun 26, 2020

Came across the link to this on Zulip, but... for what it's worth, there's a very good series on Fabien Giesen's blog about how to do bitwise I/O performantly in compressors, if you haven't seen it before:
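
Roughly, the trick it teaches is to keep the next several bytes of input in a 64-bit buffer and refill it whole bytes at a time, so the per-bit hot path never calls into the reader. A minimal sketch of that pattern (hypothetical code, ignoring JPEG's 0xFF 0x00 byte stuffing):

// A 64-bit bit buffer refilled whole bytes at a time; peek/consume
// never touch the underlying reader.
struct BitReader<'a> {
    data: &'a [u8], // entropy-coded bytes, already read into memory
    pos: usize,     // next byte to consume
    bits: u64,      // bit buffer, most significant bits first
    count: u32,     // number of valid bits in `bits`
}

impl<'a> BitReader<'a> {
    fn refill(&mut self) {
        while self.count <= 56 && self.pos < self.data.len() {
            self.bits |= (self.data[self.pos] as u64) << (56 - self.count);
            self.pos += 1;
            self.count += 8;
        }
    }

    // Look at the next `n` bits (1..=32) without consuming them.
    fn peek(&self, n: u32) -> u64 {
        self.bits >> (64 - n)
    }

    // Discard `n` bits once a Huffman code has matched; the caller must
    // refill() often enough that `count` never underflows.
    fn consume(&mut self, n: u32) {
        self.bits <<= n;
        self.count -= n;
    }
}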

Sorry if this is old news.

Shnatsel changed the title from "Decoder::decode_internal is slow" to "jpeg-decoder is slower than libjpeg-turbo" on Oct 17, 2020
@Shnatsel:

I've done some more profiling and tinkering, and I believe my earlier assumptions are incorrect. In parallel mode most of the time is spent in jpeg_decoder::idct::dequantize_and_idct_block_8x8_inner. Here's a finer-grained profile to back that up.

I've also verified this experimentally by speeding up that function and seeing it reflected in end-to-end performance gain.

This is really good news because the function is self-contained and takes up 75% of the end-to-end execution time, so any optimizations we can make to it will translate to large gains in end-to-end decoding performance. The function can be found here.
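
For context, the dequantize half of that hot loop is just an elementwise multiply over the 8x8 block (the IDCT butterflies follow it). A simplified scalar illustration, not the crate's actual code:

// Elementwise dequantization of one 8x8 block: 64 independent multiplies.
fn dequantize(coefficients: &[i16; 64], quant_table: &[u16; 64]) -> [i32; 64] {
    let mut out = [0i32; 64];
    for i in 0..64 {
        out[i] = coefficients[i] as i32 * quant_table[i] as i32;
    }
    out
}

Loops like this are prime SIMD targets, which is exactly what libjpeg-turbo's hand-written assembly exploits.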

@lovasoa commented Oct 18, 2020

See my pull request that uses SIMD for this function: #146

@Shnatsel commented Oct 18, 2020

After looking at it some more I don't think we can do much here without parallelization and/or SIMD, since the IDCT algorithm appears to be identical to the fallback one in libjpeg-turbo (which normally uses hand-written assembly with SIMD instructions).

@Shnatsel:

After looking at IDCT some more, particularly the threaded worker, there's really no reason why it cannot be made multi-threaded by component. Components are already decoded independently, and 95% of the infrastructure is already there: https://github.com/image-rs/jpeg-decoder/blob/master/src/worker/threaded.rs already does most of the heavy lifting, but doesn't split the image by component. This should be a nearly flat 3x speedup, except for grayscale images.
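
A rough sketch of the shape this could take using scoped threads (hypothetical names, with each component's data modeled as a flat Vec<i16> for simplicity; the real worker infrastructure is more involved):

fn dequantize_and_idct_component(_blocks: &mut [i16]) {
    // Dequantize + IDCT every 8x8 block of this component (body omitted).
}

fn idct_all_components(components: &mut [Vec<i16>]) {
    // Components don't share data during IDCT, so each one can run on its
    // own thread with no locking. std::thread::scope lets the threads
    // borrow `components` without 'static lifetimes.
    std::thread::scope(|s| {
        for component in components.iter_mut() {
            s.spawn(move || dequantize_and_idct_component(component));
        }
    });
}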

@Shnatsel commented Oct 22, 2020

I've opened #168 for parallelizing IDCT. We can combine it with SIMD later to hopefully outperform libjpeg-turbo in the future.

Sadly it doesn't do all that much for performance because we get bottlenecked by the reader thread instead, as described in the original post. Most of the time is now spent in jpeg_decoder::decoder::decode_block.

It's time to dust off those BufReader optimizations that didn't seem to do anything! Nope, the branch buffered-reads still makes no difference. It's slightly worse, if anything.

Profile after IDCT parallelization

@lovasoa commented Oct 22, 2020

Is that the profile for a release build? It contains calls to functions like core::num::wrapping::::sub that I would have expected to be inlined in a production build.


@Shnatsel:

They're inlined! perf is just that good. I'm using this in Cargo.toml:

[profile.release]
debug = true

and profiling with perf record --call-graph=dwarf so that it uses debug info to see into inlined functions.

@willcrichton:

Just another data point: I'm using jpeg-decoder via the image crate in a WASM project. I've noticed that loading JPEGs is very slow, roughly 200ms to decode a 2048 x 2048 image. Here's a screenshot of the Chrome profile of a single load, along with the most common function calls at the bottom.

[Screenshot: Chrome profile of a single JPEG load]

It seems like most of the time is spent in color_convert_line_ycbcr. I don't see that mentioned on the thread, so a different kind of bottleneck for WASM perhaps?
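
For context, color_convert_line_ycbcr applies the standard JFIF YCbCr-to-RGB equations to every pixel. A scalar illustration of the per-pixel work (not the crate's actual implementation):

// Standard JFIF YCbCr -> RGB conversion for one pixel.
fn ycbcr_to_rgb(y: u8, cb: u8, cr: u8) -> [u8; 3] {
    let y = y as f32;
    let cb = cb as f32 - 128.0;
    let cr = cr as f32 - 128.0;
    let r = y + 1.402 * cr;
    let g = y - 0.344136 * cb - 0.714136 * cr;
    let b = y + 1.772 * cb;
    [
        r.clamp(0.0, 255.0) as u8,
        g.clamp(0.0, 255.0) as u8,
        b.clamp(0.0, 255.0) as u8,
    ]
}

Without SIMD, that arithmetic runs one pixel at a time over millions of pixels, which would explain it dominating a WASM profile.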

@lovasoa commented Feb 25, 2021

In what situation would you want to decode a JPEG in wasm? You would have to ship a large wasm JPEG decoder to your users, and it is always going to run slower than the native JPEG decoder in their browser. If you have a project that handles images in wasm, I would suggest handling the image loading and decoding with native browser APIs, and passing only a Uint8Array containing the pixels to your wasm.

@willcrichton:

@lovasoa yes, I could implement all that. It's just significantly more convenient to use image, since it works cross-platform and my app also targets native. If the JPEG decoder were fast enough, I wouldn't bother with platform-specific code.

@HeroicKatora:

@willcrichton This would be a more useful data point if you submitted traces, not screenshots. Spending 30% of the time in memset and memcpy is surely not optimal either, and anyone debugging would want to know where in the call graph they occur.

@willcrichton:

Sure thing, here's the trace. wasm-jpeg-decoder.json.zip

@Shnatsel:

I'm afraid that JPEG decoding will always be significantly slower in WASM than it is in native code. It's very computationally expensive and relies on SIMD and/or parallelization to perform well, and WASM allows neither.

@willcrichton commented Feb 28, 2021

For the record, I implemented a web image loader: https://github.com/willcrichton/learn-opengl-rust/blob/88c0282be6bc855dd52d61e5395c3fa1df2c3fc4/src/io.rs#L54-L107

I haven't done a rigorous benchmark, but based on my observations from the traces:

  • Overall load times improved ~50%.
  • Time spent in the decoder went from max 1000ms per image to 200ms per image.
  • In the web loader, after decoding an image, I spend about 150ms in getImageData.
  • Then there's a mysterious ~50-100ms of work done by the GPU?
  • So a whole decode task takes ~400ms max.

Traces for the interested. traces.zip

@lovasoa commented Feb 28, 2021

@willcrichton: 😎 Cool, this looks very useful; you should publish it as a small crate on crates.io! One small remark: maybe I read too quickly, but it looks like you are waiting for the image to fully load before creating your canvas and context. So your CPU will idle while the image is being downloaded, then be busy exclusively with decoding the image (probably on a single core), and only then create the canvas.

Edit: here is a small demo: http://jsbin.com/xunatebovu/edit

@Shnatsel:

As of version 0.2.6, on a 6200x8200 CMYK image, jpeg-decoder is actually faster than libjpeg-turbo on my 4-core machine!

With the default threaded worker (the rayon feature disabled) it's 700ms for jpeg-decoder vs 800ms for libjpeg-turbo. And according to perf it's only utilizing 1.38 CPU cores, not all 4, so similar gains should be seen on dual-core machines as well.

The rayon feature is not currently usable due to #245, but once it is fixed I expect the decoding time to drop to 600ms.

Even without parallelism jpeg-decoder is within striking distance of libjpeg-turbo: 850ms as opposed to 800ms.

@Shnatsel:

Oops. I fear the celebration has been premature.

Now that I've tested it on a selection of photos, it appears that jpeg-decoder is still considerably slower than libjpeg-turbo even with parallelism: it takes 6 seconds to decode a corpus of photos with libjpeg-turbo and 10 seconds with jpeg-decoder (measured without rayon so far because of #245).

Huffman decoding continues to be the bottleneck. In fact, on 3000x4000 photos, Huffman decoding alone takes about as much time as libjpeg-turbo's entire decoding process.
