Sorry, I can't share the code (it's out of my control), but here's a description; hopefully the issue is reproducible from it. I'm working on getting the code approved for release.
Versions
raptor-code 1.0.5
rust: 1.66.0 (this was actually found a while ago, but I filed the bug with the wrong project)
Linux: Ubuntu 22.04.2 LTS
What I was doing
I took the example code:

```rust
let source_data: Vec<u8> = vec![1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12];
let max_source_symbols = 4;
let nb_repair = 3;

let mut encoder = raptor_code::SourceBlockEncoder::new(&source_data, max_source_symbols);
let n = encoder.nb_source_symbols() + nb_repair;

for esi in 0..n as u32 {
    let encoding_symbol = encoder.fountain(esi);
    // TODO: transfer symbol over the network
    // network_push_pkt(encoding_symbol);
}
```
And I set `source_data` to a 3684-byte file (a random example) and `max_source_symbols` to 19 (chosen to get ~200-byte chunks, which is a requirement for me). This produced a bunch of 194-byte chunks.
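For what it's worth, the 194-byte chunk size matches a ceiling division of the input length by the symbol count. This is my assumption about how the encoder sizes symbols, not something I've verified in the library source:

```rust
fn main() {
    let data_len = 3684usize; // length of my example file
    let max_source_symbols = 19usize;

    // Assumed sizing: each symbol is ceil(data_len / max_source_symbols) bytes
    let symbol_len = (data_len + max_source_symbols - 1) / max_source_symbols;
    assert_eq!(symbol_len, 194);

    // The block the encoder works on is therefore 2 bytes larger than the input
    let padded_len = symbol_len * max_source_symbols;
    assert_eq!(padded_len, 3686);
}
```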
When decoding (I took the example from the same place):

```rust
let encoding_symbol_length = 194;
let source_block_size = 19; // number of source symbols in the source block
let mut n = 0u32;
let mut decoder = raptor_code::SourceBlockDecoder::new(source_block_size);

while !decoder.fully_specified() {
    // TODO: replace the following line with a packet received from the network
    let (encoding_symbol, esi) = (vec![0; encoding_symbol_length], n);
    decoder.push_encoding_symbol(&encoding_symbol, esi);
    n += 1;
}

let source_block_size = encoding_symbol_length * source_block_size;
let source_block = decoder.decode(source_block_size as usize);
```
I set `encoding_symbol_length` to 194 and `source_block_size` to 19 (per above), and ran it. It almost works perfectly. First, of course, the output is two bytes too big; I expected this, since 19 × 194 = 3686. I expected the two extra null bytes to be at the very end, though, where they could simply be truncated. What actually happens is that there is one extra null byte at the end of each of the last two symbols: one at position 3686 and one at position 3492 (194 bytes earlier).
This seems like a bug to me. Surely input whose length is not a multiple of the symbol size should still decode to the correct output?
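To make the observed positions concrete (these are 1-indexed byte positions in the 3686-byte decoded output), the two stray zeros sit exactly at the end of the last two 194-byte symbols:

```rust
fn main() {
    let symbol_len = 194usize;
    let nb_symbols = 19usize;

    // End position (1-indexed) of the last symbol in the decoded block
    let last = nb_symbols * symbol_len; // 3686
    // End position of the second-to-last symbol, 194 bytes earlier
    let second_last = (nb_symbols - 1) * symbol_len; // 3492

    assert_eq!((second_last, last), (3492, 3686));
}
```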
Workaround
I successfully worked around this by padding the input itself to 3686 bytes. I'm not sure whether it needs to be a multiple of 194 or of 19. After doing that, a simple truncation to 3684 bytes produces perfect output.
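A minimal sketch of the workaround, showing only the padding arithmetic (the actual encode/transfer/decode calls are elided, and padding with zeros is my choice; the decoded bytes in the padded region are discarded anyway):

```rust
fn main() {
    let data_len = 3684usize; // real payload length
    let max_source_symbols = 19usize;
    // Assumed symbol size: ceil(data_len / max_source_symbols) = 194
    let symbol_len = (data_len + max_source_symbols - 1) / max_source_symbols;

    // Stand-in for the real file contents
    let mut source: Vec<u8> = vec![0xAB; data_len];

    // Pad up to a whole number of symbols (3686 bytes) before encoding
    source.resize(symbol_len * max_source_symbols, 0);
    assert_eq!(source.len(), 3686);

    // ... encode with SourceBlockEncoder, transfer, decode with SourceBlockDecoder ...

    // After decoding, truncate back to the real length
    source.truncate(data_len);
    assert_eq!(source.len(), 3684);
}
```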