Choice between map and unordered_map for keys consisting of calculated double values


In order to quickly find coplanar entities among a collection of planar entities in 3D space, I want to create a mapping from 3D planes to the set of entities lying in each plane (estimated maximum of around 1000 planes and 100000 entities).

I can create a custom class to represent 3D-plane keys, but mathematically this class needs four doubles to uniquely identify a plane: three coordinates for the normal vector and one scalar to specify a point on the plane. So we have:

struct Plane3d
{
    double n_x, n_y, n_z;
    double u;   // (u*n_x, u*n_y, u*n_z) is a point on the plane

    // some constructors etc.
};

These four doubles are calculated anew from the entity under consideration each time, so rounding errors and floating-point comparison issues have to be taken into account. Suppose I have calculated a suitable (relative) error tolerance:

const double EPSILON = ...;   // suitable relative tolerance, calculated beforehand

Yet I do not want to test coplanarity for all pairs of entities one by one (categorization in O(n^2) time); instead, I want to create a map to categorize my entities.

Ideally I would use an unordered_map (creation in O(n) time):

unordered_map<Plane3d, vector<Entity>, PlaneHash, PlaneEquality> mapping;

This would require writing two functors: PlaneEquality is no problem (a tolerance-based sketch follows the list below), but...

  • Is it possible to write a hash function for four doubles (or even just a single double) that takes a comparison error tolerance into account?
  • In the specification I've read that the equality functor is only used to distinguish keys inside the same bucket. Is there a way to ensure that equal keys actually end up in the same bucket?
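
For illustration, a tolerance-based equality functor could look like the sketch below. This is only a sketch under the assumptions of the question: it uses the Plane3d struct and EPSILON from above, and treats EPSILON as a relative tolerance with an absolute floor of 1.0 (my own choice).

#include <algorithm>
#include <cmath>

struct PlaneEquality
{
    bool operator()(const Plane3d& a, const Plane3d& b) const
    {
        return close(a.n_x, b.n_x) && close(a.n_y, b.n_y)
            && close(a.n_z, b.n_z) && close(a.u, b.u);
    }

private:
    static bool close(double x, double y)
    {
        // relative comparison; the 1.0 floor keeps values near zero comparable
        return std::fabs(x - y) <= EPSILON * std::max({1.0, std::fabs(x), std::fabs(y)});
    }
};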

The other option is to use a normal map (creation still in O(n log n) time):

map<Plane3d, vector<Entity>, PlaneCompare> mapping; 

The PlaneCompare functor sounds feasible: I could use a lexicographic ordering of the four doubles and check each "less than" using EPSILON (a sketch follows this list). But I still have a few questions:

  • Would this actually work? Are there any pitfalls?
  • The specification says that equality of keys, say p1 and p2, is determined by the test !PlaneCompare(p1,p2) && !PlaneCompare(p2,p1). If I use the lexicographic ordering, this should be equivalent to a direct equality test with error tolerance, but isn't it slower?
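
A minimal sketch of that comparator, assuming the Plane3d struct and EPSILON from above and treating EPSILON as an absolute tolerance for brevity:

struct PlaneCompare
{
    bool operator()(const Plane3d& a, const Plane3d& b) const
    {
        // lexicographic ordering; components within EPSILON are treated as tied
        if (a.n_x < b.n_x - EPSILON) return true;
        if (b.n_x < a.n_x - EPSILON) return false;
        if (a.n_y < b.n_y - EPSILON) return true;
        if (b.n_y < a.n_y - EPSILON) return false;
        if (a.n_z < b.n_z - EPSILON) return true;
        if (b.n_z < a.n_z - EPSILON) return false;
        return a.u < b.u - EPSILON;
    }
};

(Whether a tolerance-based comparator like this satisfies std::map's strict weak ordering requirement is essentially what the first bullet above asks.)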

There are 2 answers

Answer by DJClayworth (accepted):

"is it possible to write a Hash function for four doubles (or even just a regular double) that takes a comparison error tolerance into account."

No, it is not.

That sounds like a very definite statement; how can I be so sure?

Let's assume you want a tolerance of 0.00001. The exact value doesn't matter; we're just using it as an example. That means that for this hash function:

  • hash(1.0) must return the same value as hash(1.00001)

so that they can be considered equal. But it also means:

  • hash(1.00001) must return the same value as hash(1.00002)

for the same reason, and

  • hash(1.00002) must return the same value as hash(1.00003)

...and so on, up to the largest representable double value, effectively ad infinitum. The same is true for values below 1.

So any hash function that allows for tolerance will have to return the same hash for all values, making it useless.
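
Written out as a chain, for any fixed tolerance eps > 0:

hash(x) = hash(x + eps) = hash(x + 2*eps) = ... = hash(x + k*eps)   for every integer k >= 0

so by transitivity the hash must take a single value over the entire range of double.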

P.S. To actually recommend an approach that does work: a four-dimensional analogue of a quadtree (technically something like a sedecimtree) is probably best.

Answer by Andriy Tylychko:

You can round your doubles so that, e.g., all values in the range [n - epsilon/2, n + epsilon/2) are rounded to n, where n is a multiple of epsilon, and then hash the rounded values. That way, close values will usually get the same hash value.

How best to hash the 4 rounded doubles depends on your case, but even summing them can be good enough.
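
A minimal sketch of that idea, assuming the Plane3d struct and EPSILON from the question; the functor name, the use of std::llround, and the hash-combining constant are my own choices:

#include <cmath>
#include <cstddef>
#include <functional>
#include <initializer_list>

// Sketch: quantize each component to the nearest multiple of EPSILON and
// combine the four bucket indices into one hash value. Close values usually
// land in the same bucket, but values straddling a bucket boundary will not.
struct PlaneQuantizedHash
{
    std::size_t operator()(const Plane3d& p) const
    {
        std::size_t h = 0;
        for (double v : { p.n_x, p.n_y, p.n_z, p.u })
        {
            long long bucket = std::llround(v / EPSILON);
            // boost::hash_combine-style mixing
            h ^= std::hash<long long>()(bucket) + 0x9e3779b9 + (h << 6) + (h >> 2);
        }
        return h;
    }
};

For unordered_map to work correctly, the matching equality functor should compare the same quantized buckets rather than apply a raw tolerance test, so that any two keys that compare equal are guaranteed to produce the same hash; this is what addresses the "same bucket" concern in the question.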