[Feature] Paimon Spark 2025 Roadmap #4816
Comments
If there are any other features or requirements you would like to see, please comment here, and we can discuss and revise this roadmap together. Thanks.
And if someone wants to take on one or more of these items, please pick them up and let us know.
Can I take on this task? Distributed Planning: [perf] Support distributed planning in the scan phase.
@Aiden-Dong Yes, feel free to take it; you can create an issue for it. Note that this feature actually requires changes in the core first, and then each compute engine will need to support it.
Yes, I understand that we need to extend the functionality of
@Zouxxyy Thank you for raising this; these optimizations are all highly anticipated!
If no one has started on this, I would like to volunteer to take it on. We are currently working to improve write performance by using the V2 write interface RequiresDistributionAndOrdering. In fact, I am close to completing an MVP version locally.
Glad you can take it on. One reminder: please be mindful of supporting the different bucket modes in your implementation, especially dynamic bucket mode. This is the reason we initially compromised and used the V1 write.
Yeah, I haven't found an easy way to support that yet. So far I've only implemented the V2 write for fixed bucket mode. I think we can start by letting the unsupported bucket modes fall back to the V1 write.
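The fallback idea above could be sketched as a simple dispatch on bucket mode: only fixed-bucket tables have a statically known distribution, so only they take the V2 write path, while everything else stays on the V1 write. This is an illustrative sketch, not Paimon's actual API; the enum and class names here are hypothetical.

```java
// Hypothetical sketch of the proposed fallback logic (names are illustrative,
// not Paimon's real classes): use the Spark V2 write path, which can declare
// RequiresDistributionAndOrdering, only when the bucket mode is fixed.
enum BucketMode { FIXED, DYNAMIC, UNAWARE }

class WritePathSelector {
    // Returns true when the V2 write path can safely declare a required
    // distribution up front; other modes fall back to the existing V1 write.
    static boolean useV2Write(BucketMode mode) {
        // Dynamic bucket mode assigns buckets at write time, so the required
        // distribution is not known when planning the write.
        return mode == BucketMode.FIXED;
    }
}
```

With this shape, adding V2 support for another bucket mode later is just a matter of widening the condition, without touching the V1 fallback.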
Motivation
2025 has arrived, and we would like to thank everyone for their contributions over the past year! Here we present the 2025 Paimon Spark roadmap; you are welcome to take ownership of these items or expand upon them!