Consider returning to strict universes #189
Comments
Fascinating
(c) Robert Harper. All Rights Reserved.
On Jun 1, 2021, 10:29 -0400, Jonathan Sterling wrote:
I had decided to use cooltt to experiment with weak universes, i.e. universes where the decoding laws hold up to iso. These are fine, but they make certain things harder — including synthesizing motives. The motivation for this design was twofold:
1. I had an axe to grind: I wanted to point out that you could get pretty far without all the crazy strict laws that people ask for, and that elaborators can handle all the coercions without irritating the user. This turned out to be true.
2. There are benefits to not computing the decoding strictly, in that you can store data in type codes that gets deleted by the decoding.
I think (1) is validated and could still scale to harder features with some work, but I have to say that good research has never come from indulging "axes to grind". For (2), this is very true, but I think that it may ultimately be more practical to simply NOT delete information during decoding. This can be achieved by adding additional types that capture certain use patterns of the LF type structure (so that we can easily check what things are Kan and not, etc.).
I think that moving in this direction will make it easier to make improvements to cooltt in the future.
Oh yeah, and something I should add is that one of the reasons I was interested in weak universes was that I was very concerned at the time about (1) the existence of strict universes in various models of type theory, and (2) the applicability of our Synthetic Tait Computability methods to prove theorems about type theory with strict universes. Both of these questions are now resolved to my satisfaction, so I no longer have much interest in weak universes beyond theoretical curiosity.
Is this still something you all (the RedPRL dev team) are interested in doing 🤔 ❓ If so, I may make an attempt at implementing this. It seems like the main difficulty would be the loss of information when decoding, as Jon mentions. For instance, we'd need extension types in the syntax and domain, but presumably we'd still want them to be convertible with Pi types out of the interval, as they are now.
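A toy sketch of the point above (all names here are hypothetical, not cooltt's actual AST or conversion checker): if extension types become first-class syntax, the conversion check can still identify an extension type with an empty boundary with the Pi type out of the interval, so users keep the convertibility they have now.

```python
# Hypothetical, minimal model of "Ext with no boundary ~ Pi over the
# interval" — illustrative only, not cooltt's implementation.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Interval:                     # the dimension "type" I
    ...

@dataclass(frozen=True)
class Pi:
    dom: object
    cod: Callable                   # family of types over the domain

@dataclass(frozen=True)
class Ext:                          # (i : I) -> cod(i) [ boundary ]
    cod: Callable
    boundary: tuple = ()            # () means no boundary constraint

def convertible(a, b, fresh=object()) -> bool:
    """Toy conversion check, comparing families at a generic point.
    An Ext with an empty boundary is identified with a Pi out of the
    interval, in either order."""
    if isinstance(a, Pi) and isinstance(b, Pi):
        return convertible(a.dom, b.dom) and convertible(a.cod(fresh), b.cod(fresh))
    if isinstance(a, Ext) and isinstance(b, Ext):
        return a.boundary == b.boundary and convertible(a.cod(fresh), b.cod(fresh))
    if isinstance(a, Ext) and not a.boundary and isinstance(b, Pi):
        return isinstance(b.dom, Interval) and convertible(a.cod(fresh), b.cod(fresh))
    if isinstance(b, Ext):
        return convertible(b, a)    # symmetric case
    return type(a) == type(b)       # base case, e.g. Interval ~ Interval
```

For example, `convertible(Ext(lambda i: Interval()), Pi(Interval(), lambda i: Interval()))` holds, while an `Ext` with a nonempty boundary is not identified with any Pi type.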
Yes. I just did this in algaett, but I have not entered the dangerous cubical zone.
Clarification @favonia: would the preference be for a strict Tarski universe, or a Russell universe? I thought strict Tarski based on the issue, but I see algaett is using Russell.
With all the strictness, it doesn't really matter in actual implementations. Or, more precisely, the "Russell" universes never make semantic sense and we are always using the Tarski ones in our hearts. You could mentally wrap any code that functions as a type with
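The Tarski reading can be sketched as follows (a toy model with hypothetical names, not cooltt's or algaett's actual code): codes are data living in the universe, and a decoding function maps each code to the type it denotes. Russell-style surface syntax just applies the decoding silently, which is why the distinction washes out in implementations with strict decoding.

```python
# Toy Tarski-style universe: `Code` is the syntax of codes, `Ty` the
# types they denote, and `el` the decoding. Illustrative names only.
from dataclasses import dataclass

class Code: ...            # codes living *in* the universe

@dataclass(frozen=True)
class NatC(Code): ...      # code for the natural numbers

@dataclass(frozen=True)
class PiC(Code):           # code for a dependent function type
    dom: Code
    cod: object            # family of codes over the domain (a callable)

class Ty: ...              # the "actual" types
@dataclass(frozen=True)
class NatT(Ty): ...
@dataclass(frozen=True)
class PiT(Ty):
    dom: Ty
    cod: object            # family of types over the domain

def el(c: Code) -> Ty:
    """Decoding. With strict universes, laws like
    el(PiC(a, b)) = PiT(el(a), fun x -> el(b(x))) hold definitionally;
    with weak universes they hold only up to isomorphism, so the
    elaborator must insert coercions."""
    if isinstance(c, NatC):
        return NatT()
    if isinstance(c, PiC):
        return PiT(el(c.dom), lambda x: el(c.cod(x)))
    raise TypeError(c)
```

In this picture, "Russell" notation means writing a code where a type is expected and having `el` applied implicitly; semantically one is always working with the Tarski decoding.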