tridonic: fix AssertionFailure #142

Merged
merged 2 commits on Aug 2, 2024

Conversation

@jktjkt (Contributor) commented Jul 26, 2024

Two related commits:

tridonic: log failures from the bus_watcher

These were all logged as debug messages, which means they are not shown by default. This makes debugging failures much harder; see, e.g., #124.
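A minimal sketch of the idea behind this first commit, assuming Python's standard logging module and a logger named "tridonic"; the helper function below is hypothetical, not the actual driver code:

```python
import logging

logger = logging.getLogger("tridonic")


def report_bus_watch_failure(exc: Exception) -> None:
    # Illustrative only: the gist of the commit is promoting bus_watcher
    # failure messages from logger.debug() (hidden at the default WARNING
    # level) to an error-level call, so they show up without debug logging.
    logger.error("bus_watch failure: %s", exc)


if __name__ == "__main__":
    logging.basicConfig()  # root logger defaults to WARNING; debug() would be dropped
    report_bus_watch_failure(RuntimeError("unexpected frame on the bus"))
```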

tridonic: prevent a possible AssertionFailure

I was playing with some commands, and one of my devices apparently just won't send any reply to a DT5 ResetConverterSettings. That's a sendtwice=True command, and when that happens, I was getting an assertion failure later, on line 600, which looks pretty much like bug #124:

15:00:59.333780 D: tridonic: Sending with seq e7
15:00:59.339550 D: tridonic: waiting for outstanding_transmissions=1 {response=}
15:00:59.357560 D: tridonic: _handle_read 12730000C105FFFFE7
15:00:59.363037 D: tridonic: bus_watch message 12730000C105FFFFE7
15:00:59.368941 D: tridonic: Command EnableDeviceType(5), immediate
15:00:59.384012 D: tridonic: remembering device type 5
15:00:59.387287 D: tridonic: Bus watch waiting for data, no timeout
15:00:59.400013 D: tridonic: message mode=12 rtype=73 frame=b'\x00\x00\xc1\x05' interval=ffff seq=e7
15:00:59.403352 D: tridonic: waiting for outstanding_transmissions=0 {response=}
15:00:59.407774 I: dali: EnableDeviceType(5)
15:00:59.431360 D: tridonic: _handle_read 127100000000008AE7
15:00:59.454305 D: tridonic: bus_watch message 127100000000008AE7
15:00:59.457782 D: tridonic: Bus watch waiting for data, no timeout
15:00:59.461983 D: tridonic: message mode=12 rtype=71 frame=b'\x00\x00\x00\x00' interval=008a seq=e7
15:00:59.693383 D: tridonic: Sending with seq e8
15:00:59.698993 D: tridonic: waiting for outstanding_transmissions=1 {response=}
15:00:59.716567 D: tridonic: _handle_read 1273000001E6103BE8
15:00:59.721514 D: tridonic: bus_watch message 1273000001E6103BE8
15:00:59.727816 D: tridonic: Bus watch waiting with timeout
15:00:59.732960 D: tridonic: message mode=12 rtype=73 frame=b'\x00\x00\x01\xe6' interval=103b seq=e8
15:00:59.736298 D: tridonic: waiting for outstanding_transmissions=0 {response=}
15:00:59.741356 D: tridonic: _handle_read 127100000000008AE8
15:00:59.746287 D: tridonic: message mode=12 rtype=71 frame=b'\x00\x00\x00\x00' interval=008a seq=e8
15:00:59.785298 D: tridonic: bus_watch message 127100000000008AE8
15:00:59.788843 D: tridonic: Unexpected response waiting for retransmit of config command
15:00:59.794324 E: tridonic: current_command is not None: ResetConverterSettings(<address (control gear) 0>)

I'm not saying that this is definitely a fix for #124, but that one was happening in an environment where DT8 Tc commands were sent regularly.
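For illustration only, a minimal sketch of the failure mode and a defensive recovery, assuming the driver tracks the outstanding command in a current_command attribute and previously asserted on it; the class, methods, and command strings below are hypothetical, not the actual tridonic driver code:

```python
import logging

logger = logging.getLogger("tridonic")


class CommandTracker:
    """Hypothetical stand-in for the driver's current_command bookkeeping."""

    def __init__(self) -> None:
        self.current_command = None

    def start(self, command: str) -> None:
        if self.current_command is not None:
            # Previously something like `assert self.current_command is None`,
            # which raised once a sendtwice command never received its reply.
            # Log the stale command and clear it so the bus keeps working.
            logger.error("current_command is not None: %s", self.current_command)
            self.current_command = None
        self.current_command = command

    def finish(self) -> None:
        self.current_command = None


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    tracker = CommandTracker()
    # The device never answers the retransmit, so finish() is never called...
    tracker.start("ResetConverterSettings(<address (control gear) 0>)")
    # ...and the next command used to trip the assertion; now it logs instead.
    tracker.start("EnableDeviceType(5)")
```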

sde1000 merged commit c32d673 into sde1000:master on Aug 2, 2024
5 checks passed
@sde1000 (Owner) commented Aug 2, 2024

It does look plausible — after making this change, have you had any further problems?

@jktjkt (Contributor, Author) commented Aug 2, 2024

> It does look plausible — after making this change, have you had any further problems?

No problems so far, but it's been only a week, and in the past I would often get a week-long period of everything working just fine (sometimes I think even longer). I think I'll close #124 in a few months unless I see any other problems.
