Just randomly thinking about an interesting use-case for screen spying: reading poker apps (which have security protocols in place to prevent cheating), so that one can extract all the relevant information (cards, pot size, etc.).
What's interesting to me is that this is a case (much like in finance) where you really, really never want to misinterpret something: acting on a misread card is worse than not acting at all.
More interestingly, in the future you might have companies trying to be adversarial: one way to stop people from spying is to actively change the card design to fool the CV system^[just yesterday there was a link on Hacker News about an ML invisibility cloak: adversarial clothing designed so that object detection algorithms can't recognise you].
All this suggests that, for these kinds of mission-critical applications, ML shouldn't be fully trusted.
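One way to operationalise that distrust is to make the recogniser abstain whenever its best match falls below a confidence threshold, rather than always returning *some* answer. Here's a minimal, dependency-free sketch of that idea; the tiny 2×2 "card templates", the pixel-agreement score, and the function names are all hypothetical stand-ins for a real template-matching pipeline:

```python
# Sketch: confidence-gated card recognition with abstention.
# Patches and templates are tiny grayscale images as lists of lists
# (hypothetical toy data, not a real card-recognition pipeline).

def match_score(patch, template):
    """Fraction of pixels that agree exactly (a crude similarity measure)."""
    total = sum(len(row) for row in patch)
    agree = sum(
        1
        for prow, trow in zip(patch, template)
        for p, t in zip(prow, trow)
        if p == t
    )
    return agree / total

def classify(patch, templates, threshold=0.95):
    """Return the best-matching label, or None (abstain) below threshold."""
    label, score = max(
        ((name, match_score(patch, tmpl)) for name, tmpl in templates.items()),
        key=lambda kv: kv[1],
    )
    return label if score >= threshold else None

templates = {
    "ace_of_spades": [[0, 1], [1, 0]],
    "king_of_hearts": [[1, 1], [0, 0]],
}

print(classify([[0, 1], [1, 0]], templates))  # exact match -> "ace_of_spades"
print(classify([[1, 0], [0, 1]], templates))  # ambiguous -> None (abstain)
```

The `None` branch is the whole point: in a mission-critical setting the system should surface "I don't know" to a human (or fold, in poker terms) instead of guessing, and an adversarially redesigned card deck would show up as a flood of abstentions rather than silent misreads.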