Chris

Reputation: 710

How to do bounds-checking in Haskell at compile time?

I am rather new to Haskell and I was wondering how one would go about enforcing bounds using the type system. After reading this question and checking out the linked article on smart constructors, I think I get how you could define a bounded type and write a smart constructor for it, as in the example:

data Probability a = P a deriving (Show, Ord, Eq)

I read in Learn You a Haskell for Great Good that you should

never add typeclass constraints in data declarations

so no type-constraints at this point...

instance (Num a) => Bounded (Probability a) where
  minBound = P 0
  maxBound = P 1     

import Control.Exception (assert)

smartP :: (Floating a, Ord a) => a -> Probability a
smartP x = assert (0 <= x && x <= 1) $ P x

What I am wondering now is whether, and how, it would be possible to do the bounds check at compile time for a type that is not easily represented in terms of Peano numbers (like Double...). And, if that is possible, how one would go about finding an encoding for non-enumerable but bounded types.

Furthermore: probabilities are numbers, and I would like them to behave like numbers (have + - * etc. work on them as you would expect). Is it possible to achieve this behavior by making Probability an instance of the Num type class (via deriving or some other way that doesn't involve writing all the functions by hand in the obvious way)?

Thanks a lot for any explanations.

Upvotes: 1

Views: 512

Answers (1)

jberryman

Reputation: 16645

I'm interpreting your question as "What is the best way to represent probability such that invalid states are not representable?", and riffing on that. I'm not an expert and don't know if this will help you.


Let's assume (maybe incorrectly) you are originally computing probabilities from samples: your successes and tries are both integers in memory. Let's even assume they are arbitrary precision (in Haskell, Integer).

Except for the fact that it is quick to compute with, IEEE floating point is not a good way to represent probability (see here):

  • representable values become less dense as you move from 0 to 1.0, so if you care about precision you cannot freely convert between the probability of a thing happening and of it not happening (demonstrated just below)
  • it's wasteful: most representable values lie outside your 0 - 1 range
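
We can see the first point directly in ghci (a quick demonstration of my own, not from the post): a Double distinguishes a probability of 1e-17 from 0, but the complementary probability is indistinguishable from 1:

Prelude> 1e-17 == 0
False
Prelude> 1 - 1e-17 == 1
True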

From the blog post I linked above, we learn that working with odds instead of probability improves the situation (we double the number of useful values), and in particular log-odds solves the first issue above by "stretching the dense regions of floating point numbers and compressing the sparse regions."

Odds in log space

Most of what I know about log-odds I learned from this nice write-up by Brian Lee and Jacob Sanders.

Log-odds is simply the logarithm, to some base, of the odds: successes over failures, where probability would be successes over tries. We can play around with it in ghci and build an intuition pretty quickly:

Prelude> let logOdds successes tries = logBase 10 (successes / (tries - successes))
Prelude> logOdds 1 2
0.0
Prelude> logOdds 1 10
-0.9542425094393249
Prelude> logOdds 5 1000
-2.2988530764097064
Prelude> logOdds 995 1000
2.2988530764097064
Prelude> logOdds 99995 100000
4.301008280396999
Prelude> logOdds 9999995 10000000
6.3010297785166856

We can see the whole float domain is useful now, and we can convert between likelihood and unlikelihood without losing precision. There are also arguments that it's easier to make decisions based on intuition with these numbers.

So log-odds is a representation, isomorphic to probability, in which invalid states are not representable. We can convert back with:

Prelude> let toProbability lgOdds = 1 - (1 / (1 + (10 ** lgOdds)))
Prelude> toProbability 2.2988530764097064
0.995
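
To tie this back to the smart constructor in the question: since every Double is a meaningful log-odds value, a wrapper type needs no bounds check at all. A minimal sketch (the names LogOdds, fromCounts and toProbability are mine, not from the write-up):

newtype LogOdds = LogOdds Double deriving (Show, Eq, Ord)

-- Every Double is a valid log-odds value, so no bounds check is needed.
-- successes == tries gives Infinity, which round-trips to probability 1.
fromCounts :: Integer -> Integer -> LogOdds
fromCounts successes tries = LogOdds (logBase 10 (s / (t - s)))
  where
    s = fromInteger successes
    t = fromInteger tries

toProbability :: LogOdds -> Double
toProbability (LogOdds x) = 1 - 1 / (1 + 10 ** x)

fromCounts 995 1000 gives back the 2.2988530764097064 we saw above.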

Working with log-probability

I'm not sure how convenient this is or what you're trying to do with your probabilities. The link above mentions that log-odds works naturally in the context of Bayes' theorem.

But I wanted to elaborate on this, because log-odds is an instance of a general technique usually called "working in log space", which can be very helpful when doing calculations that produce very large or very small values. The idea is to use the log laws to transform your calculation, lowering multiplications and divisions to additions and subtractions.
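
A standard illustration (my own example, not from the link): the product of a thousand probabilities of 1e-5 underflows a Double to 0, while the equivalent sum in log space is perfectly representable:

-- Underflows: the true value 1e-5000 is far below the smallest Double.
direct :: Double
direct = product (replicate 1000 1e-5)                     -- 0.0

-- Fine in log space: the same quantity is just -5000 (in base 10).
inLogSpace :: Double
inLogSpace = sum (map (logBase 10) (replicate 1000 1e-5))  -- ~ -5000.0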

I'm out of time and will try to come back to this later, but observe

odds successes tries = successes / (tries - successes)
odds successes tries = 10 ** (logBase 10 (successes / (tries - successes)))
odds successes tries = 10 ** (logBase 10 successes - logBase 10 (tries - successes))

My goal, if I had more time, would be to show how a ratio representation of probability can be used for some calculations you might be interested in, without losing precision.
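
One direction this could take, as a sketch: keep the counts exact with Data.Ratio and only drop into Double at the last step (oddsExact and lgOddsExact are names I made up here):

import Data.Ratio ((%))

-- Odds as an exact ratio of Integers: no rounding anywhere.
oddsExact :: Integer -> Integer -> Rational
oddsExact successes tries = successes % (tries - successes)

-- Dropping into Double only at the very end confines rounding to one step.
lgOddsExact :: Integer -> Integer -> Double
lgOddsExact successes tries = logBase 10 (fromRational (oddsExact successes tries))

lgOddsExact 995 1000 matches the earlier ghci result, but any intermediate arithmetic on the Rational (say, combining evidence with Bayes' theorem) is exact.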

Conclusion

Part of the point of this answer is that we don't usually need type-level computation to make invalid states unrepresentable. When we return Either Foo Bar from a function, we're doing exactly this.

Another example: in a library I wrote, I originally had a comment saying the user should only pass an Int that was a power of two, and anything else was an error. Instead I changed the function to take the exponent directly, as sketched below.
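
Schematically, the change was something like this (a hypothetical reconstruction, not the real library code; Table is a stand-in):

import Data.Bits ((.&.))

newtype Table = Table Int deriving Show   -- stand-in for the real structure

-- Before: the invariant lives in a comment, checked (at best) at runtime.
mkTableChecked :: Int -> Table
mkTableChecked n
  | n > 0 && n .&. (n - 1) == 0 = Table n
  | otherwise                   = error "size must be a power of two"

-- After: take the exponent k instead; every non-negative k is valid.
mkTable :: Int -> Table
mkTable k = Table (2 ^ k)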

Upvotes: 1
