new {.allowMissingCases.} pragma to allow growing enums #15646
Conversation
This should apply to single fields of an enum instead. Regardless, this kind of defeats the purpose of the exhaustiveness check.
I agree, but I'm not sure enum members support pragmas?
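For concreteness, here is a commented pseudocode sketch of the two placements under discussion. Neither spelling is confirmed anywhere in this thread, and the pragma does not exist in released Nim, so treat both forms as assumptions.

```nim
# Purely illustrative pseudocode; neither spelling is confirmed by this
# thread and the pragma does not exist in released Nim.
#
# (a) pragma on the enum type, so that `case` statements over it tolerate
#     members added later (roughly what the PR title implies):
#
#   type JsonNodeKind {.allowMissingCases.} = enum
#     JNull, JBool, JInt, JFloat, JString, JObject, JArray
#
# (b) pragma on individual enum fields, as suggested in the review comment
#     above (it is unclear whether enum fields accept pragmas at all):
#
#   type JsonNodeKind = enum
#     JNull, JBool, JInt, JFloat, JString, JObject, JArray,
#     JUint {.allowMissingCases.}   # hypothetical new member
```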
> you can't expect client code (eg …

The idea is that it typically wouldn't. For example, for #15545, previous code would raise when parseJson encountered a number out of range, so as long as only the old enum members are encountered, there is no runtime error. Note that there are caveats; it's a hard problem. That being said, … EDIT: I'm patching nimpy in yglukhov/nimpy#177 with a trailing else.
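A minimal sketch of the kind of trailing `else` being referred to, written as generic client code over json.nim's JsonNodeKind; this is not the actual nimpy patch.

```nim
import std/json

# Client dispatch over JsonNode kinds. The trailing `else` means that if
# json.nim's enum later grows (e.g. a JUint member as proposed in #15545),
# this proc keeps compiling and falls back gracefully instead of breaking.
proc describe(n: JsonNode): string =
  case n.kind
  of JInt: result = "integer " & $n.getInt
  of JFloat: result = "float " & $n.getFloat
  of JString: result = "string " & n.getStr
  else: result = "other kind: " & $n.kind

echo describe(parseJson("42"))       # integer 42
echo describe(parseJson("\"hi\""))   # string hi
```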
This PR is incomplete: Nim's code generators and optimizers rely on the fact that Nim's case statements over enums are exhaustive. Now, before you patch all these places: don't. This is not covered by an RFC and I don't like this feature at all.
What alternative do you suggest for the problem of growing an enum without breaking changes, e.g. #15545? I'm fine with closing this if the stance is …; I'm less fine if the stance is to complicate libraries with …
Code should change when semantics change. What else? This is why breaking changes are breaking, and it's intentional -- I use exhaustive case because I really cannot handle the scenario where a new enumeration value is introduced, not because I'm too lazy to write an else branch.
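A small self-contained illustration of the property being defended here (a toy enum, not json.nim's): with an exhaustive case, growing the enum is a compile-time error in every client, which is exactly the signal wanted.

```nim
type
  Color = enum
    red, green
    # Adding a `blue` member here makes `name` below fail to compile until
    # the new case is handled (roughly: "not all cases are covered").

proc name(c: Color): string =
  case c
  of red: result = "red"
  of green: result = "green"

echo name(green)   # green
```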
You're asking here (#15545 (comment)) and here (#15413 (comment)). I don't see that magical way that fixes things like #15545, involves no breaking change, and doesn't require adding …
Well, you could simply add a new parser procedure and then my code wouldn’t be broken. My concerns are no less valid just because you can’t meet them with your preferred implementation, and while mine may be the only whining voice here, there is a lot of code in the wild that uses JSON. I don’t think it’s a novel concern. But honestly, I’m not afraid to fix my code. So if you want to PR everyone else’s stuff, I’ll handle anything on the @disruptek account. I don’t want to be the bad guy here.
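A minimal sketch of what "add a new parser procedure" could look like in practice; the names `parseJsonExt` and `JsonNodeExt` are invented purely for illustration and are not proposed anywhere in this thread.

```nim
import std/json

type
  JsonNodeExtKind = enum
    jeStd    # value representable with the existing JsonNode kinds
    jeUint   # value that only fits an unsigned 64-bit integer

  JsonNodeExt = object
    case kind: JsonNodeExtKind
    of jeStd: node: JsonNode
    of jeUint: uval: uint64

proc parseJsonExt(s: string): JsonNodeExt =
  # Placeholder body: a real implementation would detect integer literals
  # that overflow int64 and store them in the jeUint branch instead of
  # raising. Existing parseJson callers would be untouched.
  result = JsonNodeExt(kind: jeStd, node: parseJson(s))

let v = parseJsonExt("123")
echo v.kind   # jeStd
```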
I don't understand what you're suggesting; please elaborate. The way I wrote #15545 is much simpler and more practical; there was a single breakage in important_packages, which I've already patched (in nimpy).
...he said, before elaborating on my behalf... The only thing I'd add is that serialization is not a problem AFAICT; am I wrong about that?

Again, it's not my problem that the API was designed poorly. My problem is that programmers are relying upon that broken API. If you want to be practical, present a practical design that removes the problem. Again, I'd support breakage if it's curative.

What I hear you saying is, I could supply a new API, but then the API is a mess. It certainly does seem that way based upon your suggestions. The API is already broken. It's up to you to provide an improvement. A replacement that doesn't move forward is equivalent to moving backwards or not moving at all; I would sooner do nothing than make a change that doesn't improve our position in the future.

But in fact, interoperation isn't difficult. We have converters as a means to grandfather types. I think if you put the same effort into working with the language we have instead of developing new dialects, you'd find that there are solutions to these problems that may not be perfect, but which do objectively move the ball forward without endless bikeshedding and disruption.

Nim is a very powerful language and you're a very talented programmer. I'm sure you can do better than this. Simple solutions are what put us in these positions. More than simplicity, we need foresight to avoid repeating the same simple mistakes.
Sure, ok. For this specific example, there is no need for a new JsonKind at all, as I outlined elsewhere. No idea why it got ignored, but I'll patch json.nim my way.
@Araq it's fine to close this; however:
- it didn't get ignored; it has its own drawbacks. Before merging into devel it should be compared apples to apples with #15545, taking into account all relevant factors.
- the way I've handled uint64 follows what's done in other libraries, e.g. https://dlang.org/phobos/std_json.html
With parsejson's current design there is no performance advantage: we process tokens as strings and parse the numbers afterwards. I still don't care about special-casing uint64; it's not justified by the JSON spec, plenty of language runtimes don't support uint64, and an enum extension breaks json.nim's clients.
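To illustrate the "tokens as strings" point: once the raw literal is available as a string, choosing how to store it is a post-tokenisation decision. A rough sketch (not json.nim's actual code; edge cases such as huge negative literals are ignored):

```nim
import std/strutils

proc classify(tok: string): string =
  # Decide after tokenisation how to interpret a numeric literal.
  if tok.contains({'.', 'e', 'E'}):
    result = "float " & $parseFloat(tok)
  elif tok[0] != '-' and tok.len >= 19:
    # may not fit in int64; a uint64 (or raw-string) fallback could go here
    result = "big unsigned " & tok
  else:
    result = "int " & $parseBiggestInt(tok)

echo classify("3.14")                   # float 3.14
echo classify("18446744073709551615")   # big unsigned 18446744073709551615
echo classify("-7")                     # int -7
```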
Use case: minimize the impact of PRs like #15545 that extend existing enums (refs #15545 (comment)).

No solution is 100% foolproof (it's a hard problem), but this one is particularly simple: it avoids littering the code with `when: ... else` everywhere, which would make it hard to maintain. Client code will get a warning with `-d:workaround15545` (which could be localized in `{.push warningAsError[AllowMissingCases].}` ... `{.pop.}` sections) to help them migrate. So long as code doesn't generate the added enum members (e.g. `JNumber`, `JUint` in that PR example), most existing code would continue to work.
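For contrast, a sketch of the kind of compile-time guard that client code would otherwise need once the enum grows; `declared(JUint)` is just one illustrative way to express such a check, not something prescribed by this PR or by #15545.

```nim
import std/json

proc kindName(n: JsonNode): string =
  when declared(JUint):
    # Stdlib with the grown enum from #15545: handle the new member.
    case n.kind
    of JInt: result = "int"
    of JFloat: result = "float"
    of JUint: result = "unsigned"
    else: result = "other"
  else:
    # Older stdlib without JUint.
    case n.kind
    of JInt: result = "int"
    of JFloat: result = "float"
    else: result = "other"

echo kindName(parseJson("1"))   # int
```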