**TLDR Version**

@DillonJ In principle, I like the idea of a clearer separation between handling the model structure and the model generation. However, I'm not sure a sub-module of SpineOpt would be worth it if the only objective is speed. As far as I understand, Julia can be fast enough; we just need to learn to use it properly and fix our code.

**Extended thoughts on the topic**

This is related to something I've pondered occasionally as an outlet for frustration when dealing with SpineOpt and Backbone (much of which I imagine applies to other large-scale energy system modelling frameworks as well). There's a lot of similar "gruntwork" that goes into building and maintaining these types of models (mostly points 1 & 2), and I can't help but feel it could be better modularised to allow for reuse. To me, there are at least 5 distinct stages of the modelling process that could ideally be modularised:
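As a toy illustration of what "using it properly" can mean (this is not SpineOpt code, just the classic container-typing pitfall):

```julia
# Toy example, not SpineOpt code: an abstractly typed container forces
# dynamic dispatch on every element access.
function sum_all(xs)
    s = 0.0
    for x in xs
        s += x
    end
    return s
end

using BenchmarkTools
slow = Any[rand() for _ in 1:10^6]      # eltype Any: every `+` is dispatched at runtime
fast = Float64[rand() for _ in 1:10^6]  # concrete eltype: compiles to a tight loop
@btime sum_all($slow)
@btime sum_all($fast)
```

The same `sum_all` runs an order of magnitude faster on the concretely typed container, without any change to the function itself.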
---
This sounds like a good idea, however I don't feel we've yet gotten to the bottom of the performance problems. We blame the indexing functions for all our problems, based on an initial profiling, but I have seen that there is much to gain in other parts of the code too. That said, the idea of isolating performance-critical code in a separate package that is easy to compile is a good one, whatever that part of the code turns out to be.
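For reference, a minimal sketch of how one could go beyond that initial profiling with the stdlib `Profile` (the entry point below is a placeholder, not an actual SpineOpt function):

```julia
using Profile

# Placeholder for the real model-building entry point, not actual SpineOpt API.
build_model() = sum(rand(10^7))

build_model()            # warm-up run so compilation doesn't dominate the profile
Profile.clear()
@profile build_model()
Profile.print(format=:flat, sortedby=:count, mincount=10)  # hotspots by sample count
```

ProfileView.jl or PProf.jl make the same samples much easier to browse as a flame graph.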
---
@manuelma thanks for this. In your opinion, what would the next steps be to identify what that code is, i.e. the code most ripe for optimisation?
---
Discussions on the use of PackageCompiler (e.g. #885) seem to indicate that it's of limited use due to the dependence on the underlying data.
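For context, the standard PackageCompiler workflow is roughly the following (the precompile script name is just an assumption). The limitation is that only methods the script happens to exercise get baked into the image; anything specialised on the types that come out of a particular database still compiles at runtime:

```julia
using PackageCompiler

# precompile.jl would run a representative SpineOpt workload so the methods
# it touches get compiled into the image. Anything it doesn't exercise is
# still compiled at runtime, which is the data-dependence problem noted above.
create_sysimage(
    ["SpineOpt"];
    sysimage_path="spineopt.so",
    precompile_execution_file="precompile.jl",
)
# Then: julia --sysimage spineopt.so
```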
Also, investigations and discussions so far suggest that the main culprit for the poor model-building time (even the second-run time, as in Yi's tests vs. PyPSA) is the indexing needed to support flexible temporal resolution and stochastic structures.
However, the stochastic and temporal structures depend on a relatively small amount of data. I wonder whether there would be a benefit in developing a compiled subpackage that generates all the fundamental indices SpineOpt needs. This could be Rust or C or compiled Julia; in the Julia case, it would have to be done in such a way that a reusable system image could be compiled that is minimally dependent on the data (if that is possible at all).
The best tradeoff between effort and utility would probably be to have it sit between SpineInterface / spinedb_api and SpineOpt. I.e. we use SpineInterface and/or spinedb_api to get the data we need to construct the temporal and stochastic structures and fundamental sets, but do all the heavy lifting in a subpackage that creates the sets SpineOpt then uses.
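To make the idea concrete, here's a very rough sketch of what such a subpackage's interface might look like (all names here are hypothetical, not existing SpineOpt code). The point is that the inputs are plain bits types, so compiled methods would be reusable across databases:

```julia
# Hypothetical sketch of an index-generating subpackage: inputs are plain
# bitstypes, so compiled code is reusable regardless of the source database.
module TimeSliceIndices

export TimeSlice, generate_time_slices

struct TimeSlice
    start::Int       # e.g. minutes from model start
    duration::Int    # resolution of this slice
end

# Build a flexible-resolution time axis from (block_length, resolution) pairs.
function generate_time_slices(blocks::Vector{Tuple{Int,Int}})
    slices = TimeSlice[]
    t = 0
    for (len, res) in blocks
        for s in t:res:(t + len - res)
            push!(slices, TimeSlice(s, res))
        end
        t += len
    end
    return slices
end

end # module

# Usage: first day hourly, rest of the week at 6-hour resolution.
using .TimeSliceIndices
slices = generate_time_slices([(24 * 60, 60), (6 * 24 * 60, 360)])
```

The real thing would obviously need scenario paths, overlapping blocks, rolling windows, etc., but if the input types stay this simple, a system image built once should keep paying off.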
Any thoughts @manuelma @abelsiqueira @jkiviluo @Tasqu @suvayu @datejada ?