Microsoft's Parallel Programming whitepaper describes scenarios that are optimal under various FLOPS thresholds, and uses the FLOPS rate as the decision point for when a particular implementation should be chosen.
How do I measure FLOPS in my application?
FLOPS means FLoating-point Operations Per Second, and measuring it is as simple as counting the number of floating-point operations performed and dividing by the time it takes to perform them. Measuring the time is the easy part. Counting the operations is the tricky part, and it usually depends on the hardware platform and the compiler used.

Simple operations like addition, subtraction and multiplication are very fast. Division is a bit slower, and taking a square root slower still. At the slow end of the spectrum are transcendental functions like sine, cosine, exponentiation and logarithm, which are evaluated by series or polynomial approximations computed iteratively until convergence is achieved. Most current-generation CPUs also support fused multiply-add (FMA), i.e. A*B+C is performed as a single instruction.
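As a crude illustration of the "count and divide" idea, here is a minimal, self-contained C++ sketch that times a dot-product loop performing a known number of multiplications and additions. The loop body, the problem size and the use of `std::chrono` are just assumptions for the example; in a real application you would count the operations issued by your own kernel.

```cpp
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    const std::size_t n = 10'000'000;          // problem size, chosen arbitrarily
    std::vector<double> a(n, 1.5), b(n, 2.5);
    double sum = 0.0;

    auto start = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < n; ++i)
        sum += a[i] * b[i];                    // 1 multiply + 1 add per iteration
    auto stop = std::chrono::steady_clock::now();

    double seconds = std::chrono::duration<double>(stop - start).count();
    double flops = 2.0 * n / seconds;          // we issued exactly 2*n floating-point ops
    std::printf("sum = %f, ~%.2f GFLOPS\n", sum, flops / 1e9);
}
```

Note that this particular loop is memory-bound, so the number it prints says as much about your memory subsystem as about your floating-point units, which is exactly the caveat discussed below.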
Given all that, it is very hard to quote an absolute FLOPS value. If your code performs only simple operations, you will see a high FLOPS count; if it does lots of transcendentals, the count will be much lower (up to 100 times lower). The achieved rate also depends on the fetch/compute ratio, that is, how often you access main memory relative to how much arithmetic you do, and on how good the compiler is at generating code that benefits from latency hiding.
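To see the effect of the operation mix on one machine, you can time the same number of "operations" once with a plain multiply-add and once with a transcendental. This is only a rough sketch with arbitrary inputs; the absolute times are hardware- and compiler-dependent, and only the ratio is interesting.

```cpp
#include <chrono>
#include <cmath>
#include <cstddef>
#include <cstdio>

// Times n calls of f and returns the elapsed seconds.
template <typename F>
double time_loop(F f, std::size_t n) {
    auto start = std::chrono::steady_clock::now();
    double acc = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        acc += f(static_cast<double>(i));
    auto stop = std::chrono::steady_clock::now();
    std::printf("(checksum %g) ", acc);        // keep the result live so the loop is not optimized away
    return std::chrono::duration<double>(stop - start).count();
}

int main() {
    const std::size_t n = 10'000'000;
    double t_simple = time_loop([](double x) { return x * 1.0001 + 0.5; }, n);  // multiply + add
    double t_trans  = time_loop([](double x) { return std::sin(x); }, n);       // transcendental
    std::printf("\nmul+add: %.3f s   sin: %.3f s   ratio ~%.1fx\n",
                t_simple, t_trans, t_trans / t_simple);
}
```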
The standard FLOPS benchmark is the LINPACK test, which solves a dense system of linear equations. It uses only simple arithmetic operations (no transcendentals), and although that is by no means enough to tell how performant a CPU will be on more complex workloads, it is still used to rank the supercomputers in the Top500 list.
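For reference, HPL (the LINPACK implementation behind the Top500) does not count individual instructions at all: it credits a run on an n×n system with a fixed operation count of 2/3·n³ + 2·n² and divides that by the measured solve time. A tiny sketch of that arithmetic, with made-up numbers:

```cpp
#include <cstdio>

int main() {
    // Hypothetical values, just to show the arithmetic:
    double n = 10000.0;      // order of the dense system (assumed)
    double seconds = 42.0;   // measured wall-clock time of the solve (assumed)
    double flop = (2.0 / 3.0) * n * n * n + 2.0 * n * n;   // credited operation count
    std::printf("%.2f GFLOPS\n", flop / seconds / 1e9);
}
```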