Don't read this, it's boring. I sometimes run into bugs dealing with the edges of intervals. Most of these bugs are because some code assumed an interval was inclusive at the top, and some other code assumed it was exclusive.
Most people seem to understand the convention for array sizes (i.e. a function int getSum(const int* a_Array, int a_Count) wouldn't touch a_Array[a_Count], and a three-parameter version getSum(const int* a_Array, int a_Start, int a_Count) would touch a_Array[a_Start] but not a_Array[a_Start + a_Count]), but we don't always apply the same convention to floats / real numbers.
I think the "natural" way to split intervals is half-open, like [a, b): include the bottom, exclude the top, and do it this way for both integers and real numbers. All your "is this number in this range" checks should be of the form fMin <= x && x < fMax, and this should be a convention you can apply without really thinking about it.
The important part is consistency. If different parts of the code assume different conventions, you're boned. There are a few reasons I think the best convention is the half-open interval [min, max). It's the closest to how division and quantization work: an object at 20.0 gets put into bin 2 (with bins 10 units wide), because 20/10 = 2. The logic and math stay consistent no matter how we store positions (meters in integers, floating point numbers, 10s of meters in integers, whatever). The reason the interval has to be half-open is so that exactly one of two adjacent intervals gets a boundary point (this is important for partitioning groups of things).
Most functions that deal with ranges of numbers exclude the top. Random number generators usually return [0, 1), arrays are accessed over [0, count), and quantization works the same way (representing an RGB channel with a char gives you the range [0, 1<<num_bits)). This is so boring, so don't think about it - just use min <= x < max.