-
While I'm asking for the moon... here are a few other libraries you might consider adding to your comparisons. These are all branded as mesh generators, so the comparison is probably only valid once you add a refinement algorithm.

- Extremely simple DT with Ruppert's refinement; not sure if it does CDT. No robust predicates (probably an easy upgrade). No license included.
- Some 2D mesh generators:
  - https://github.com/wildmeshing/TriWild
  - https://github.com/vladimir-ch/umeshu (deprecated)

Even if some of these are impossible to add to your tests, some exposition of features and discussion of each would be valuable to someone trying to make a good choice.
-
Thanks for your answers. I would encourage you to add benchmarks for the constrained case -- even if you limit the test to the well-known libraries, it may prove very interesting.

Do you have a simple example of using DLB with constraints? It looks like I might be able to dissect your big test app, but there is a lot going on in there. (I've put the kind of thing I mean in the PS below.)

I fully appreciate the rarity of free time. If you do find some, I would also encourage you to add some sort of refinement algorithm. I believe there are a lot of people out there looking for a replacement for Triangle. They're looking for a clear license -- and a responsive developer (or better yet, a community). They need a tool that is performant (check), robust (check), and that reaches feature parity (Delaunay: check; constrained: check; refinement: not yet). I believe a quality replacement that checks all these boxes would be quickly and broadly adopted.

I am going to go ahead and give DLB a try for my use cases where refinement is not needed. I'll let you know how my experience goes.

Rob
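PS: To be concrete about the "simple example" I'm after, here is a minimal sketch of a constrained run written against the CDT library (artem-ogre/CDT) from your benchmarks, since that's the constrained API I can show from memory -- I'm assuming DLB's flow is similarly compact. Treat the exact calls as illustrative, not authoritative.

```cpp
// Minimal sketch: constrained Delaunay triangulation of a unit square
// with one forced diagonal, using the CDT library (artem-ogre/CDT).
#include <CDT.h>

#include <cstdio>
#include <vector>

int main()
{
    CDT::Triangulation<double> cdt;

    // Four corners of a unit square.
    std::vector<CDT::V2d<double>> pts = {
        CDT::V2d<double>::make(0.0, 0.0),
        CDT::V2d<double>::make(1.0, 0.0),
        CDT::V2d<double>::make(1.0, 1.0),
        CDT::V2d<double>::make(0.0, 1.0),
    };
    cdt.insertVertices(pts);

    // Constraint: force the 0-2 diagonal into the triangulation.
    std::vector<CDT::Edge> constraints = { CDT::Edge(0, 2) };
    cdt.insertEdges(constraints);

    cdt.eraseSuperTriangle(); // drop the helper super-triangle

    std::printf("%zu triangles\n", cdt.triangles.size());
    return 0;
}
```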
-
For these benchmark results, it looks like I should sum the entries in each mini-column to arrive at a total time for each code to complete its test? That is, the two-letter abbreviations are each test broken down into somewhat standardized steps? Thanks for providing these.
-
If you're interested in, let's say, the full processing time for Delaunay with constraints but without the interior-detection and polygonization steps (which are rarely useful), and you are certain there are no vertex duplicates, you should sum TR+CE+ES. If you can't guarantee there are no duplicates, you should add RD to it too.

EDIT: for DLB you should always add RD, as there is no option to skip that check :)
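For example (made-up numbers): with TR = 120 ms, CE = 30 ms and ES = 10 ms, the constrained total is 120 + 30 + 10 = 160 ms; if RD = 15 ms must be counted, it becomes 175 ms -- and for DLB it is always the 175 ms figure.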
-
I am looking for a replacement for Triangle for my C++ project.
I use Triangle two ways -- 1) CDT, usually on very small data sets (~10 points); 2) CDT with refinement, on larger but not huge data sets (~1000 points, outputting ~10000 triangles after refinement).
Triangle has several deficiencies that I'm trying to get past. The license, of course, but perhaps the bigger problem is error trapping. When Triangle detects a failure situation, it calls exit() -- crashing the host program. Although several projects have tried to wrap Triangle for better library etiquette, none is great -- they still crash and bring everything down. There are also undetected errors (overrunning arrays, etc.) that are problematic.
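One common mitigation, for what it's worth, is to quarantine the call in a child process so an exit() or a wild write can only kill that child, never the host. A rough, POSIX-only sketch, with the actual Triangle call stubbed out (nothing here is Triangle's own API):

```cpp
// Rough sketch (POSIX-only): run a crash-prone triangulator in a forked
// child so its exit()/abort()/segfault cannot take down the host app.
// Passing real inputs/outputs would need a pipe or shared memory.
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#include <cstdlib>

bool triangulate_in_child(/* inputs elided */)
{
    pid_t pid = fork();
    if (pid < 0)
        return false; // fork failed
    if (pid == 0) {
        // Child: the call into Triangle would go here. If it exit()s
        // or overruns an array, only this child process dies.
        _exit(EXIT_SUCCESS);
    }
    int status = 0;
    if (waitpid(pid, &status, 0) < 0)
        return false;
    return WIFEXITED(status) && WEXITSTATUS(status) == EXIT_SUCCESS;
}
```

That works, but shuttling results back through a pipe or shared memory is a lot of ceremony for a ~10-point triangulation -- hence the hunt for a library with real error returns.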
I really appreciate the benchmarks you've done and your work to improve CDT as well as Delabella. I value that both libraries are actively developed and have some means for user interaction (this forum). Thanks for everything.
Have you considered adding constrained triangulations to your benchmarks? I suspect that honoring constraints could change the performance behavior of certain algorithms. I am not 100% sure, but it looks like delaunator-cpp does not handle constraints, while the other libraries you test do.
Do you have any plans to add refinement to your tool (Ruppert's or Chew's algorithms)? This would be a huge advantage for me.
Have you considered adding some measure of robustness to your benchmark testing? I'm mostly thinking about cases that include true degeneracies -- duplicate points, collinear points, zero-area triangles: the kinds of situations that are not supposed to exist but, in the real world, come up and cause programs to crash. I'd appreciate even anecdotal experience with respect to robustness in your testing.
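To make that concrete, here is a hypothetical generator for the kind of hostile input I have in mind (not taken from any benchmark suite):

```cpp
// Sketch of deliberately degenerate input: an exact duplicate point, a
// run of collinear points, and a near-zero-area sliver. A robust
// library should reject or absorb these -- not crash.
#include <vector>

struct Pt { double x, y; };

std::vector<Pt> degenerate_cloud()
{
    std::vector<Pt> pts;
    pts.push_back({0.0, 0.0});
    pts.push_back({0.0, 0.0});                        // exact duplicate
    for (int i = 0; i < 10; ++i)
        pts.push_back({static_cast<double>(i), 0.0}); // all collinear
    pts.push_back({5.0, 1e-300});                     // near-degenerate sliver
    return pts;
}
```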
Finally, how does Delabella handle errors, failures, and other problems? This doesn't need to be some heavy-handed exception handling, but some guarantee that the library isn't going to crash the whole application with no opportunity for intervention.
Thanks for all your work.