What is a Good Geometry Algorithm?
What makes a geometry algorithm good? The fundamental attributes are that it is:

- Relevant: it solves a significant geometric problem for a real-world application
- Correct: it gives an accurate solution for the problem
- Robust: it tolerates small numerical errors and avoids overflow within its constraints
- Efficient: it is fast in practice for typical applications, both small and large
- Conservative: it uses few resources, such as storage space or bandwidth
- Maintainable: it is straightforward to implement and troubleshoot
- Elegant: it is easy to comprehend why it works
Basically, it should give a good practical software solution for a significant real world problem. If an algorithm is the very first or the only one to solve a certain problem, then that alone would make it significant. Otherwise, something else has to distinguish it.
Usually, the primary theoretical criterion used to compare algorithms is their efficiency, especially their asymptotic efficiency (aka "computational complexity") as the size n of the problem becomes increasingly large. But runtime efficiency for the low values of n typical in practice can be even more important. In short, the algorithm ultimately has to be faster than the competition.
After speed, additional secondary criteria for good algorithms include: the amount of storage space they use, their ease of implementation, and the sheer elegance of the algorithm's solution. In some cases, the secondary criteria can outweigh the primary one. For example, in a 1990 tour de force, Chazelle described a linear-time O(n) algorithm for triangulating a simple polygon in the plane. However, to this day that algorithm has never been implemented because it is too complicated! So, other triangulation algorithms that are both reasonably fast and easy to implement remain significant. On the other hand, finding a way to simplify Chazelle's algorithm and actually programming it would be of great significance.
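By way of contrast, ear clipping is one of those triangulation algorithms that is slow in theory but easy to implement. Here is a minimal sketch, assuming a simple polygon given in counterclockwise order (the helper names are illustrative, not from any particular library):

```python
def cross(o, a, b):
    """2D cross product of vectors OA and OB; > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_triangle(p, a, b, c):
    """True if p lies inside (or on the boundary of) ccw triangle abc."""
    return cross(a, b, p) >= 0 and cross(b, c, p) >= 0 and cross(c, a, p) >= 0

def triangulate(polygon):
    """Ear-clipping triangulation of a simple ccw polygon.
    Returns a list of triangles, each a tuple of three vertices."""
    verts = list(polygon)
    triangles = []
    while len(verts) > 3:
        n = len(verts)
        for i in range(n):
            a, b, c = verts[i - 1], verts[i], verts[(i + 1) % n]
            if cross(a, b, c) <= 0:        # reflex vertex: not an ear
                continue
            ear = (a, b, c)
            # The ear is valid only if no other vertex lies inside it.
            if any(point_in_triangle(p, a, b, c)
                   for p in verts if p not in ear):
                continue
            triangles.append(ear)
            del verts[i]                   # clip the ear off the polygon
            break
    triangles.append(tuple(verts))
    return triangles

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(triangulate(square))   # a square splits into two triangles
```

Each pass over the vertices is O(n) and there are n-3 ears to clip, so this sketch runs in O(n^2) time, a far cry from Chazelle's O(n), but it fits on one page.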
The asymptotic speed of an algorithm for large problem size n is measured by specifying a function f(n) such that the execution time E(n) is bounded by a constant factor times f(n); namely, E(n) < C f(n) for some fixed constant C > 0. When f(n) is the smallest function for which this is true, one says that the algorithm is of order f(n), or is an O( f(n) ) algorithm. This is also referred to as the computational complexity of the algorithm. Commonly used bounding functions include: log n = log_{2}(n), n, n^{k}, 2^{n}, a(n) [the inverse Ackermann function], as well as combinations of them. An asymptotic ordering for some of these is:

O( a(n) ) < O( log n ) < O( n ) < O( n log n ) < O( n^{2} ) < O( n^{k} ) < O( 2^{n} )   [for k > 2]
The following table illustrates how rapidly some of these functions increase.
| log n | n        | n log n   | n^{2}      | n^{3}      | n^{4}      | 2^{n}      |
|------:|---------:|----------:|-----------:|-----------:|-----------:|-----------:|
| 0     | 1        | 0         | 1          | 1          | 1          | 2          |
| 1     | 2        | 2         | 4          | 8          | 16         | 4          |
| 2     | 4        | 8         | 16         | 64         | 256        | 16         |
| 3     | 8        | 24        | 64         | 512        | 4096       | 256        |
| 4     | 16       | 64        | 256        | 4096       | 65536      | 65536      |
| 5     | 32       | 160       | 1024       | 32768      | 1048576    | 4294967296 |
| 6     | 64       | 384       | 4096       | 262144     | 16777216   | *          |
| 7     | 128      | 896       | 16384      | 2097152    | 268435456  | *          |
| 8     | 256      | 2048      | 65536      | 16777216   | 4294967296 | *          |
| 9     | 512      | 4608      | 262144     | 134217728  | *          | *          |
| 10    | 1024     | 10240     | 1048576    | 1073741824 | *          | *          |
| 15    | 32768    | 491520    | 1073741824 | *          | *          | *          |
| 20    | 1048576  | 20971520  | *          | *          | *          | *          |
| 25    | 33554432 | 838860800 | *          | *          | *          | *          |

(* = value has 12 or more digits)
This shows that the computation needed for an algorithm with high computational complexity can grow very rapidly as n gets larger. Even for relatively small n, say n = 1000, the amount of computation needed by a high-complexity algorithm may be prohibitive. Because of this, there is also great interest in algorithms that perform well for the small values of n usually encountered in practice, even if their asymptotic behavior is bad.
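The table's entries can be reproduced with a few lines (a sketch; the row key is the value of log n, as in the table):

```python
from math import log2

def growth_row(n):
    """One row of the growth table: (log n, n, n log n, n^2, n^3, n^4, 2^n)."""
    return (int(log2(n)), n, int(n * log2(n)), n**2, n**3, n**4, 2**n)

for n in (1, 2, 4, 8, 16, 32):
    print(growth_row(n))
```

Running it for n = 32 already produces 2^{32} = 4294967296 in the last column, matching the row for log n = 5 above.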
On the other hand, if there is a sublinear algorithm to solve a problem, such as one that is O(log n), the problem becomes practical even for extremely large n. The prototypical O(log n) algorithm is binary search, and many geometry algorithms can be reduced to doing binary searches on sorted lists of geometric objects. The sorting is often done using an O(n log n) quicksort in a preprocessing stage. In such algorithms, the speed is separated into the preprocessing time needed to initialize data structures, and the actual runtime speed with possibly a large number of queries being made using these data structures.
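The preprocess-then-query pattern can be sketched as follows (the point data and query function are hypothetical, with points sorted by x-coordinate standing in for the "sorted list of geometric objects"):

```python
import bisect

points = [(7, 1), (2, 5), (9, 3), (4, 4), (1, 2)]

# Preprocessing: one O(n log n) sort by x-coordinate.
points.sort(key=lambda p: p[0])
xs = [p[0] for p in points]            # sorted keys for binary search

def leftmost_point_right_of(x):
    """First stored point with x-coordinate >= x, or None if there is none."""
    i = bisect.bisect_left(xs, x)      # O(log n) binary search per query
    return points[i] if i < len(points) else None

print(leftmost_point_right_of(3))     # (4, 4)
```

After the one-time sort, any number of such queries can be answered in O(log n) each, which is the payoff of separating preprocessing from runtime.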
Finally, it must be noted that there are intractable problems, such as the "traveling salesperson problem" (TSP), where the only known exact algorithms are O(2^{n}), and it is thought that this is the best one can do (this is the infamous "P=NP?" problem). Since this is too much computation except for very small n, there is also great interest in fast algorithms that find good approximate solutions for this class of problems.
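The classic example of such an approximation is the greedy nearest-neighbor heuristic for TSP. A minimal sketch (the city coordinates are illustrative, and the tour it returns carries no optimality guarantee):

```python
from math import hypot

def nearest_neighbor_tour(cities):
    """Greedy O(n^2) TSP heuristic: always visit the closest unvisited city
    next. Fast, but may return a tour much longer than the optimum."""
    tour = [cities[0]]
    unvisited = set(range(1, len(cities)))
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited,
                  key=lambda i: hypot(cities[i][0] - last[0],
                                      cities[i][1] - last[1]))
        tour.append(cities[nxt])
        unvisited.remove(nxt)
    return tour

cities = [(0, 0), (5, 5), (1, 0), (0, 1)]
print(nearest_neighbor_tour(cities))
```

This runs in O(n^2) time rather than O(2^{n}), trading the guarantee of an optimal tour for speed; better-behaved approximations (e.g. Christofides' algorithm) bound how far the result can be from optimal.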
