# delaunay_compare

Small benchmark suite for comparing different (constrained) Delaunay triangulation implementations in Rust.

## Crates under test

For spade: both insertion with a lookup structure ("hierarchy") and without are tested. The lookup structure allows efficient position lookups (e.g. for nearest-neighbor searches) on the resulting triangulation but takes additional time to construct.

For spade and cdt: this benchmark suite also contains a constrained Delaunay triangulation (CDT) benchmark that bulk loads a single real data set (`Europe_coastline.shp`) consisting of 2,912,812 input vertices and 2,837,094 constraint edges.

## Point distributions under test

Two point distributions are tested:

- **uniform**: `f64` coordinates are uniformly distributed within a given interval
- **local insertion**: consecutive input points lie close to each other. Each point is generated by adding a random step (with a fixed maximum length) to the previously inserted point (a random walk). This creates a more skewed input set.
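The two distributions can be sketched as follows. This is a self-contained illustration with hypothetical helper names, not the benchmark's actual generator code; a tiny LCG stands in for a proper RNG crate so the example stays dependency-free.

```rust
// Hypothetical sketch of the two input distributions described above.
struct Lcg(u64);

impl Lcg {
    /// Returns a pseudo-random f64 in [0, 1).
    fn next_f64(&mut self) -> f64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

/// "uniform": both coordinates drawn independently from [-range, range).
fn uniform_points(n: usize, range: f64, rng: &mut Lcg) -> Vec<(f64, f64)> {
    (0..n)
        .map(|_| {
            (
                (rng.next_f64() * 2.0 - 1.0) * range,
                (rng.next_f64() * 2.0 - 1.0) * range,
            )
        })
        .collect()
}

/// "local insertion": a random walk - each point is a bounded random step
/// away from the previously inserted point, producing clustered input.
fn local_insertion_points(n: usize, max_step: f64, rng: &mut Lcg) -> Vec<(f64, f64)> {
    let mut points = Vec::with_capacity(n);
    let (mut x, mut y) = (0.0_f64, 0.0_f64);
    for _ in 0..n {
        x += (rng.next_f64() * 2.0 - 1.0) * max_step;
        y += (rng.next_f64() * 2.0 - 1.0) * max_step;
        points.push((x, y));
    }
    points
}

fn main() {
    let mut rng = Lcg(42);
    let uniform = uniform_points(10_000, 10.0, &mut rng);
    let walk = local_insertion_points(10_000, 0.5, &mut rng);
    println!(
        "generated {} uniform and {} random-walk points",
        uniform.len(),
        walk.len()
    );
}
```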

## How to run

Clone this repository and run `cargo bench` inside the `delaunay_compare` folder.

Results are stored in `<repository_root>/target/criterion`.

For the CDT loading benchmark, run `cargo run --release --example real_data_benchmark`.

## Results

For better comparability, measurements are grouped into point sets with fewer than 14,000 vertices ("small") and more than 50,000 vertices ("big").

Insertion times for small point sets: Measurements on small inputs

Insertion times for big point sets: Measurements on bigger inputs

### CDT bulk loading

The CDT bulk loading uses real world data consisting of roughly 2.9e6 input vertices and 2.8e6 input constraint edges:

European coastline dataset

CDT Loading times

For completeness, the dataset is loaded both with constraint edges and without (as if it were a regular Delaunay triangulation).

The benchmark also includes the stable bulk load variant for spade that keeps the relative vertex order consistent.

| CDT bulk load | spade | cdt crate |
|---|---|---|
| With constraint edges | 3668 ms | 4032 ms |
| Without constraint edges | 2897 ms | 5668 ms |
| With stable vertex order | 6502 ms | - |
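Plain arithmetic on the timings above makes the relative costs easier to compare (the percentages in the comments are computed from the table's milliseconds, nothing more):

```rust
// Ratios derived from the CDT bulk loading timings (milliseconds).
fn main() {
    let spade_with_constraints = 3668.0_f64;
    let spade_without_constraints = 2897.0_f64;
    let spade_stable = 6502.0_f64;
    let cdt_with_constraints = 4032.0_f64;
    let cdt_without_constraints = 5668.0_f64;

    // spade: adding the constraint edges costs roughly 27% extra time.
    let spade_overhead = (spade_with_constraints / spade_without_constraints - 1.0) * 100.0;
    // cdt: dropping the constraints makes loading roughly 41% slower.
    let cdt_anomaly = (cdt_without_constraints / cdt_with_constraints - 1.0) * 100.0;
    // The stable bulk load takes roughly 1.8x as long as spade's regular one.
    let stable_factor = spade_stable / spade_with_constraints;

    println!("spade constraint overhead: {spade_overhead:.0}%");
    println!("cdt slowdown without constraints: {cdt_anomaly:.0}%");
    println!("stable vs. regular bulk load: {stable_factor:.2}x");
}
```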

## Comparison & takeaways

- All libraries are probably fast enough, depending on the use case. For many applications it won't matter whether inserting 14,000 vertices takes 5.8 or 7.3 milliseconds.
- For small triangulations, delaunator and cdt seem to have a slight edge. For big triangulations, spade (without a hierarchy) appears to be favorable.
- Use spade's `HierarchyHintGenerator` only if you really need it - it comes at an additional cost.
- The cdt crate seems to be optimized for loading CDTs with constraints - loading a triangulation without constraints should be faster, not slower!
- Spade's stable bulk load is not yet optimized and comparatively slow - use it with caution!

## Additional feature comparison

Bulk loading efficiency is not everything. The libraries also differ in which other features they support:

| | spade | delaunator | cdt |
|---|---|---|---|
| bulk loading | | | |
| incremental insertion | | | |
| extracting the Voronoi diagram | | | |
| constrained Delaunay triangulations (CDT) | | | |
| bulk loading support for CDTs | | | |
| lookup structure for efficient searches | | | |
| use of robust arithmetic predicates | | | |
| custom types for edges, faces and vertices | | | |
| near-neighbors (points very close to each other) | regularly inserted | near neighbors get merged together | near neighbors get merged together |
| supported coordinate values | `abs(x) == 0` or `2^-142 < abs(x) < 2^201` | any `f64` value (can overflow) | any `f64` value (can overflow) |
| maximum number of vertices | roughly 2^32 / 6 (~716 million) | 2^64 / 6 | 2^32 / 6, or 2^64 / 6 with the `large-indexes` feature |
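The spade coordinate limits from the table can be written as a predicate. This is a hypothetical helper derived only from the table's row; spade's actual validation logic may differ.

```rust
// Sketch of the coordinate-range row above for spade: exactly zero is
// allowed, as is any magnitude strictly between 2^-142 and 2^201.
fn in_spade_range(x: f64) -> bool {
    let magnitude = x.abs();
    x == 0.0 || (magnitude > 2.0_f64.powi(-142) && magnitude < 2.0_f64.powi(201))
}

fn main() {
    assert!(in_spade_range(0.0));
    assert!(in_spade_range(1.0));
    assert!(in_spade_range(-123.456));
    assert!(!in_spade_range(1e-60)); // magnitude below 2^-142
    assert!(!in_spade_range(1e300)); // magnitude above 2^201
    assert!(!in_spade_range(f64::NAN)); // NaN fails both comparisons
    println!("all coordinate range checks passed");
}
```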

## Disclaimer

This comparison has been created by the author of spade and may be biased. Please open an issue if you spot any inaccuracies!

Also, this benchmark should ideally test more than just two different point distributions. It's quite likely that some implementations can handle irregular triangulations better than others.