Stabilize behavior of createVector() with zero arguments #8203
base: dev-2.0
Conversation

Review thread on src/core/environment.js (marked outdated):
```diff
  const matrix = this._renderer.getWorldToScreenMatrix();

- if (screenPosition.dimensions === 2) {
+ if (origDimension === 2 || screenToWorldArgs === 2) {
```
The logic here is: when we pad the other components of the vector with 0s to prevent NaN, screenToWorld breaks because the component count changes, and it may project the z component as well.
So screenPosition.dimensions always comes out as 3, since we store the other components as 0. Here I am preserving the count from the original arguments, which makes the test pass. Since it's not the actual fix and is more of a workaround until a future release, I think this should work. Let me know if there's any objection @davepagurek
Let me know if I'm understanding the flow here correctly:
- If you pass two numbers into screenToWorld (e.g. screenToWorld(width/2, height/2)), or if you pass in a single vector that was created with two numbers (e.g. screenToWorld(createVector(1, 2))), we want to generate a z value
- We can directly check the arguments length, but when the arg is a single vector, its values get padded to length 3, so we don't know directly how many arguments were manually provided
- We store a global dimension value corresponding to the number of args passed to the last createVector call

That sounds like it works. I wonder if it might be a little more direct, instead of using global state, to edit createVector to set a secret internal property on the vector before returning it, like _origDimension, so then we could check if that exists on any vector?
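The suggestion above might look something like this sketch (the object shape and helper names are assumptions for illustration, not the actual p5.js source):

```javascript
// Hypothetical sketch: createVector pads missing components with 0 but
// records how many coordinates the caller actually provided, so later
// code can distinguish a "real" 3D vector from a zero-padded 2D one.
function createVector(...args) {
  return {
    values: [args[0] ?? 0, args[1] ?? 0, args[2] ?? 0], // always length 3
    _origDimension: args.length // secret internal hint
  };
}

// screenToWorld-style code could then check the hint instead of global state:
function wasCreatedAs2D(v) {
  return v._origDimension === 2;
}
```

With this, no global dimension value needs to survive between the createVector call and the screenToWorld call; the hint travels with the vector itself.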
Yes, you are right. I am using _origDimension now: if screenToWorld(width/2, height/2) is called, we take the dimension from the arguments, and if the first argument is not a number (probably a vector) we fetch its _origDimension; the other logic stays the same. Also, we are no longer using global state. Do you think it's correct for now?
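As a hedged sketch of that dispatch (assumed names; the real screenToWorld signature and vector internals may differ):

```javascript
// If the first argument is a number, the caller passed coordinates
// directly, so the argument count is the dimension. Otherwise we assume
// a vector and read the _origDimension hint, defaulting to 3.
function effectiveDimension(args) {
  if (typeof args[0] === 'number') {
    return args.length; // e.g. screenToWorld(width / 2, height / 2) gives 2
  }
  return args[0]._origDimension ?? 3; // vector input: use the stored hint
}
```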
Hi @perminder-17 and @davepagurek! Thanks so much for working on this. I'm really glad we're trying to figure out how to accommodate the 1.x  Since I've been organizing the  This implementation seems to revert  Here are the key conflicts I've identified:

A Real-Time Example of This Complexity

This PR's discussion of the  To fix the bug this patch created, a new "secret internal" 

Summary

This PR would significantly complicate the entire vector class, for all 2.x users, in order to accommodate a legacy edge case in a single function. Fundamentally, this patch replaces the simpler, more robust 2.x model with the legacy 1.x model. This breaks or blocks improvements that have already been added or are actively being worked on, based on extensive discussions (including the concept of true 1D and 2D vectors, as discussed in #8118).

Proposal: Move this back to #8156 for alternatives

Since this patch appears to have a much larger blast radius than intended, I propose we continue to discuss viable options in #8156, before we merge. I think we need to find a way to patch the 

My sense is that the simplest, lowest-impact solution may be to include a short note in the compatibility README, instructing users how to fix their sketches by supplying explicit  Thanks again for all your work on this!
I think we aren't trying to reach a permanent solution in this, but just unblock sketches that are currently broken. So it's very likely that we undo whatever we do in this PR when making larger changes to broadcasting. That said, do you think it would be reasonable to do a similar kind of padding, but only when operating on another vector for now? So like, temporarily implementing a version of broadcasting where we pad with 0s, but only when you add two vectors. So if you did:

```js
const a = createVector(1, 2) // Dimension is 2
const b = createVector(3, 4, 5) // Dimension is 3
a.add(b) // Maybe this logs an FES message saying you've added two different-sized vectors together
// a's dimension is now 3, its data is [4, 6, 5]
```

This would make 
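A minimal, self-contained sketch of that stopgap padding (plain arrays here instead of p5.Vector, and the warning wording is invented, not an actual FES message):

```javascript
// Pad-with-zeros addition: when the dimensions differ, warn and treat
// missing components of the shorter vector as 0, growing the result.
function padAdd(a, b) {
  const n = Math.max(a.length, b.length);
  if (a.length !== b.length) {
    console.warn('FES-style warning: adding vectors of different dimensions');
  }
  const out = [];
  for (let i = 0; i < n; i++) {
    out.push((a[i] ?? 0) + (b[i] ?? 0)); // missing components default to 0
  }
  return out;
}

padAdd([1, 2], [3, 4, 5]); // → [4, 6, 5], matching the example above
```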
Hi @davepagurek! Thank you so much for your quick reply. I love seeing all these ideas being put forth, and your new idea is really interesting. We're totally in agreement about wanting to fix the current user-facing bugs as quickly as possible. With that in mind, I have a proposal for a different quick fix, a few concerns about the "stopgap" approach, and what I see as a path to immediately unlock quick, permanent fixes.

A truly quick fix

If our goal is to unblock users today with the lowest possible effort and zero technical debt, I think the fastest path is to add a note to the compatibility README and the 2.0 beta documentation. Something like this: "In 1.x, 

This is a 10-minute, no-code change that immediately solves the user's "why is my sketch broken?" problem. Crucially, we can reinforce this with a dedicated Friendly Error System message. That way, the README and reference pages provide advance notice, and users who are still bitten by the bug will find an immediate, simple solution when and where it happens.

Risks of a custom "stopgap"

My main concern with the stopgap you proposed (padding with 0s for 

This rule is precisely the same in every major library for math, machine learning, and, when applicable, creative coding. For example, openFrameworks also follows the same rule. Violating the rule means we'd have to code in warning messages that teach users special behavior that they'll have to unlearn: the permanent solution explicitly disallows this exact behavior. This feels like the "whack-a-mole" problem: we'd be solving the 

An opportunity for quick, permanent fixes

This brings me to what I think is the most exciting and high-leverage opportunity. The permanent solutions for this bug (and many others) are already clearly defined, the key policies like broadcasting have strong community support, and the issues have volunteers ready and waiting to implement them.

The only thing preventing us from fixing these bugs quickly and permanently is that we're blocked on finalizing a few key policy decisions. I believe the most effective use of our time is to unblock the community. If the maintainer team can reach a final decision on these three key issues, it would unleash all of that volunteer energy:

Once these policies are approved, the community can immediately start implementing permanent fixes for these  What do you think of this approach? It seems like a way to get the best of both worlds: a zero-debt immediate fix (the README) and a clear plan to rapidly unlock the permanent fixes.
Hi @GregStanton, thank you for your thoughtful review! After going through this with @davepagurek, here are my takeaways and recommendation for this PR:

While considering this I also referred to Dan Shiffman's comment, and from what I could tell, although the code examples in that comment won't work with this patch, the teaching approach does use constructors with explicit arguments, so this supports keeping the scope of this fix separate from the broadcasting fix. Thanks all for your careful work on this!
Hi @ksen0 and @davepagurek, Thank you both for digging into this so thoroughly and proposing the size-deferred approach. I really appreciate the effort to find a solution that minimizes friction for users migrating from 1.x, especially those working from major references like Nature of Code. I absolutely share the goal of making this transition as smooth as possible. However, after analyzing the size-deferment proposal in more detail, I'm increasingly concerned that this specific automatic fix might inadvertently introduce more friction and complexity than it eliminates, both for users and contributors. While I've tried to remain open to different automatic solutions, my analysis keeps leading back to the conclusion that the simpler approach (guiding users via README + FES message) might actually be the most user-friendly one in this specific case.

Concerns about the size-deferment approach

While extremely clever, this approach seems to introduce several significant drawbacks:

Reconsidering the friction

The primary motivation seems to be avoiding the friction of users manually updating 

In contrast, the README + FES message approach involves a simple, one-time, explicitly guided code change. It leverages the excellent capabilities of the Friendly Error System, turning a potentially breaking error into a helpful, in-context tutorial, guiding the user directly to the simple fix. While it requires a manual update, this transparency respects users and leads to clearer code. Crucially, since the zero-argument 

Recommendation

Given the significant drawbacks, I strongly believe the size-deferment approach, while well-intentioned as a temporary bridge, is ultimately detrimental due to the persistent complexity it introduces. Could we reconsider the simpler path?

This approach seems to offer the best balance: it addresses the immediate compatibility concern clearly and effectively, without compromising the long-term stability, simplicity, and maintainability of 

What are your thoughts on this analysis? I'm eager to find the best solution together.
I think the simplest, lowest-impact solution to fix 1.x sketches that break today is to replace the body of  But the issues we are discussing under #8149 are important and I'm glad we are having the discussions. This simple fix does not address the larger issue of vector dimension mismatches (#8159), so is a quick fix until that is resolved. Making 
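If I'm reading this right, the quick fix would restore the 1.x default of a 3D zero vector for the zero-argument call only; a sketch under that assumption (plain arrays, not the real p5.Vector, and the function name is invented):

```javascript
// Zero-argument compatibility: createVector() behaves like
// createVector(0, 0, 0), as in 1.x; explicit arguments keep their
// given dimension, so other 2.x behavior is unchanged.
function createVectorCompat(...args) {
  if (args.length === 0) {
    return [0, 0, 0]; // 1.x default
  }
  return [...args];
}
```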
Hi @ksen0 and @davepagurek. Thanks, @sidwellr, for that insightful analysis! You've hit on a crucial insight: the 

The permanent fix is just as quick as the temporary fix

Fixing this the temporary way (making  The full, permanent solution isn't a long-term project. It's an extremely simple change:

This implementation is just as fast as the temporary patch, but it has the massive benefit of also unblocking all the other stabilization work for 

The case for the permanent fix

As we've discussed in the broadcasting proposal (#8159), the non-standard pad-by-zero logic creates internal inconsistency and confusion:

My careful review of 100 sketches (50 for 

Seizing the opportunity

Given that the permanent fix is just as fast, has 0% disruption, and aligns with a universal standard that is the only mismatch policy our users actually rely on, it seems to be the most efficient path forward. Why spend time on a temporary patch that is no faster to implement, implements a 0%-used confusing behavior, and will need to be undone? Could we seize this opportunity to implement the quick, permanent fix by approving #8159 and patching 
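For contrast, a sketch of the standard broadcasting rule being argued for here (assumed semantics following the NumPy-style convention referenced in #8159): scalars broadcast to every component, while mismatched vector dimensions are an error rather than being silently padded.

```javascript
function broadcastAdd(a, b) {
  if (typeof b === 'number') {
    return a.map(c => c + b); // scalar broadcasts to every component
  }
  if (a.length !== b.length) {
    throw new Error('Cannot add vectors of different dimensions');
  }
  return a.map((c, i) => c + b[i]); // element-wise for equal dimensions
}

broadcastAdd([1, 2], 5);      // → [6, 7]
broadcastAdd([1, 2], [3, 4]); // → [4, 6]
```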
I agree with @GregStanton that making  So, I don't object to this proposal, but observe that it is not "just as quick" because it would require updating documentation as well as code, and also note it isn't a complete fix; #8159 also requires checking for mismatched dimensions. Do we have enough support for #8159 to approve and implement it?
Thanks, @sidwellr, for your feedback and attention to detail! The good news is that the simplicity of the broadcasting proposal means the fix itself is extremely simple, including the documentation, which would need to be updated in either case (it's currently incomplete, in various ways). I'll address the excellent points you raised.

Sample-size concerns: Abundance of evidence indicates zero disruption

I totally agree that we need to be cautious about breaking changes, and I'm glad you questioned the sample size. It's a critical point, which is why I combined (1) the empirical data from the sketches with (2) strong domain knowledge.

A more rigorous, Bayesian analysis (for the curious)

For those interested in statistical rigor, this is a textbook case for a beta-binomial model. Given the 

This change simplifies documentation

You're also right that this requires a documentation change; thankfully, it's a simplification! The 2.x docs need a significant update anyway. For example, most of the 

Adopting standard broadcasting makes the documentation easier because the rules are simpler, they reflect actual usage, and they make the API consistent. Instead of the complicated and incomplete documentation that we have now, we can provide documentation that is so consistent, it's essentially a matter of copying and pasting:
 
 
 
And yes, we would absolutely add this to the compatibility README.

The full fix is simple, fast, and removes flawed code

The permanent fix (patching 

Simpler Code: The entire implementation is just adapting the simple 

Better Performance: The current 

So, the permanent fix is quick to implement (it's just a copy/paste), easy to document, and results in cleaner, faster-running code. This seems like a clear win-win, allowing us to fix the 

Given this, could we seize this opportunity by approving #8159? This would unblock the community to implement the permanent fix right away and seems like the most efficient path forward for everyone.
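The beta-binomial reasoning mentioned above can be made concrete; this sketch assumes a uniform Beta(1, 1) prior (the comment's exact prior and counts are not shown here, so the numbers are illustrative):

```javascript
// Posterior mean for the true usage rate after observing k "uses" of the
// padding behavior in n sampled sketches, under a Beta(1, 1) prior:
// the posterior is Beta(1 + k, 1 + n - k).
function posteriorMeanUsageRate(n, k) {
  const alpha = 1 + k;
  const beta = 1 + n - k;
  return alpha / (alpha + beta);
}

posteriorMeanUsageRate(100, 0); // → 1/102 ≈ 0.0098
```

With zero observed uses in 100 sketches, the posterior mean usage rate is about 1%, which is the quantitative backing for the "abundance of evidence" claim.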
Hi all! Thanks for your attention on all this and the various vector topics. Maintainers (@limzykenneth, @perminder-17, @dave, and I) discussed this and the related threads. I hope you can understand that we would like to keep this PR narrowly focused on a compatibility fix (not on changing broadcasting policy) to reduce friction migrating 1.x → 2.x. We're not quite ready to support the standard broadcasting proposal in 2.1; this PR is the last code change in that milestone. We are happy to review all the discussion of standard broadcasting for 2.2 (or any 2.1.x patches), but for this minor release it is too last-minute. All those proposals include substantial refactoring, and to the best of my understanding, the proposed fix does not create the backwards compatibility we're discussing. Planned scope of this PR, which should "address" rather than "resolve" the issue (plus tests):

I also acknowledge @sidwellr's and @GregStanton's suggestions (adopting the behaviour of add or mult everywhere) as reasonable targeted options, but these also address broadcasting policy, which we'd like to have more time to consider. For now, we prefer to keep any changes in this PR to addressing usage of zero-argument vectors only. We can then finish discussing and adopt a broadcasting policy with less of a rush for a later release (2.2 or any patch before that), given the 2.1 timeline. I really hope this helps to clarify the reasoning.
Resolves #8156

Changes:

Screenshots of the change:

PR Checklist

- npm run lint passes