I recently encountered this torrent file in the wild, and it has one invalid tracker URL out of many. Trying to read it raises a metainfo error.

I understand this is an invalid URL, but is it possible to drop invalid metainfo instead of raising an error? Especially for non-critical fields like this one, where there are several other working trackers. This is a perfectly downloadable torrent, and clients like qBittorrent simply report the bad URL as unsupported while continuing to download.
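Reproducing it looks roughly like this (a minimal sketch; `example.torrent` stands in for the actual file, and the exact exception depends on which field fails validation):

```python
import torf

try:
    torrent = torf.Torrent.read("example.torrent")
except torf.TorfError as error:
    # The read fails on the single malformed announce URL even though
    # the rest of the metainfo is fine.
    print(f"read failed: {error}")
```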
Possibly a `strict: bool = True` flag in `read()` and `read_stream()`, where:

- `True` behaves like the current behavior, raising on any metainfo error
- `False` drops non-critical info instead of raising (see the sketch below)
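A hypothetical sketch of how that could be used (the `strict` parameter does not exist in torf today; this illustrates the proposal, not the current API):

```python
import torf

# Hypothetical: strict=False would drop the malformed tracker URL
# (a non-critical field) instead of raising, keeping the valid ones.
torrent = torf.Torrent.read("example.torrent", strict=False)

# The salvaged torrent keeps only the valid announce URLs; torf would
# still raise if the file were unusable even after dropping bad fields.
print(torrent.trackers)
```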
Thank you
Using https://torf.readthedocs.io/en/latest/#torf.Torrent.validate is an option, but it's not exactly the same. `validate=False` no longer raises, but it still leaves you with an invalid torrent file, whereas my proposal means torf would attempt to produce a valid file from an invalid one by dropping non-critical invalid data. torf should still raise an error if the torrent file remains invalid after dropping as much non-critical data as it can.
I've also noticed that, despite `validate=False`, property access still raises:
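Roughly like this (a sketch of the reported behavior; the property that raises depends on which field is invalid):

```python
import torf

# Reading with validate=False succeeds even with the bad tracker URL...
torrent = torf.Torrent.read("example.torrent", validate=False)

# ...but, as reported above, accessing a property that touches the
# invalid field can still raise a torf validation error.
print(torrent.trackers)
```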
I agree that this should be possible, but I don't see a straightforward way to implement it. torf is probably too overengineered by now. I'm afraid implementing it will break something else.