Reputation: 3216
I'm just starting with Haskell, and I thought I'd start by making a random image generator. I looked around a bit and found JuicyPixels, which offers a neat function called generateImage. The example that they give doesn't seem to work out of the box.
Their example:
imageCreator :: String -> IO ()
imageCreator path = writePng path $ generateImage pixelRenderer 250 300
where pixelRenderer x y = PixelRGB8 x y 128
When I try this, I get that generateImage expects an Int -> Int -> PixelRGB8, whereas pixelRenderer is of type Pixel8 -> Pixel8 -> PixelRGB8. The PixelRGB8 constructor has type Pixel8 -> Pixel8 -> Pixel8 -> PixelRGB8, so it makes sense that type inference determines that x and y are of type Pixel8. If I add a type signature asserting that they are of type Int (so the function gets accepted by generateImage), PixelRGB8 complains that it needs Pixel8s, not Ints.
Pixel8 is just a type alias for Word8. After some hair pulling, I discovered that the way to convert an Int to a Word8 is by using fromIntegral.
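With that conversion, the example's renderer can be fixed by converting the Int coordinates before building the pixel. A minimal sketch, using a plain tuple as a stand-in for the PixelRGB8 constructor so it runs without the JuicyPixels dependency (note that coordinates above 255 wrap around modulo 256):

```haskell
import Data.Word (Word8)

-- Stand-in for the example's pixelRenderer: convert the Int coordinates
-- to Word8 (i.e. Pixel8) with fromIntegral before building the pixel.
pixelRenderer :: Int -> Int -> (Word8, Word8, Word8)
pixelRenderer x y = (fromIntegral x, fromIntegral y, 128)

main :: IO ()
main = print (pixelRenderer 10 300)  -- 300 wraps to 44
```

In the real example, pixelRenderer x y = PixelRGB8 (fromIntegral x) (fromIntegral y) 128 type-checks against the Int -> Int -> PixelRGB8 that generateImage expects.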
The type signature for fromIntegral is (Integral a, Num b) => a -> b. It seems to me that the function doesn't actually know what you want to convert to, so it converts to the very generic Num class. So theoretically, the output of this is a value of any type that fits the type class Num (correct me if I'm mistaken here -- as I understand it, classes are kind of like "interfaces", where types are more like classes/primitives in OOP). If I assign a variable:
let n = fromIntegral 5
:t n -- n :: Num b => b
So I'm wondering: what is b? I can use this value as anything, and it will implicitly cast to any numeric type, as it seems. Not only will it implicitly cast to a Word8, it will implicitly cast to a Pixel8, meaning fromIntegral effectively gets turned from (as I understood it) (Integral a, Num b) => a -> b into (Integral a) => a -> Pixel8, depending on context.
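The behaviour can be seen by using one such polymorphic value at several types; each use site picks its own b. A small sketch (in a compiled module the explicit signature on n is needed to keep it polymorphic):

```haskell
import Data.Word (Word8)

-- One polymorphic value; the explicit signature keeps it at Num b => b.
n :: Num b => b
n = fromIntegral (5 :: Int)

main :: IO ()
main = do
  print (n :: Word8)                            -- used as a Word8
  print (n :: Double)                           -- used as a Double
  print (fromIntegral (50000 :: Int) :: Word8)  -- 80, i.e. 50000 mod 256
```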
Can someone please clarify exactly what's happening here? Why can I use a generic Num as any type that fits Num, both mechanically and "ethically"? I don't understand how the implicit conversion is implemented (if I were to create my own class, I feel like I would need to add explicit conversion functions). I also don't really know why this works; here I can use a pretty unsafe type and convert it implicitly to anything else (for example, fromIntegral 50000 gets translated to 80 if I implicitly convert it to a Word8).
Upvotes: 2
Views: 167
Reputation: 116139
A common implementation of type classes such as Num is dictionary-passing. Roughly, when the compiler sees something like
f :: Num a => a -> a
f x = x + 2
it transforms it into something like
f :: (Integer -> a, a -> a -> a) -> a -> a
-- ^-- the "dictionary"
f (dictFromInteger, dictPlus) x = dictPlus x (dictFromInteger 2)
The latter basically says: "pass me an implementation of these Num methods for your type a, and I will use them to produce a function a -> a for you".
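This transformation can be imitated by hand. A sketch with a hypothetical NumDict record standing in for the compiler-generated dictionary (real GHC dictionaries carry all of Num's methods, not just these two):

```haskell
-- Hypothetical stand-in for the compiler-generated Num dictionary.
data NumDict a = NumDict
  { dictFromInteger :: Integer -> a
  , dictPlus        :: a -> a -> a
  }

-- The f above, with the dictionary passed as an ordinary argument.
f :: NumDict a -> a -> a
f d x = dictPlus d x (dictFromInteger d 2)

main :: IO ()
main = do
  print (f (NumDict fromInteger (+)) (40 :: Int))     -- 42
  print (f (NumDict fromInteger (+)) (1.5 :: Double)) -- 3.5
```

The same f works at Int and at Double because each call supplies its own implementations of fromInteger and (+).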
Values such as your n :: Num b => b are no different. They are compiled into something like
n :: (Integer -> b) -> b
n dictFromInteger = dictFromInteger 5 -- roughly
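Run directly, this desugared n picks whichever fromInteger the use site supplies (a sketch; here n really is a function, not a plain value):

```haskell
import Data.Word (Word8)

-- The desugared literal: a function awaiting a fromInteger implementation.
n :: (Integer -> b) -> b
n dictFromInteger = dictFromInteger 5

main :: IO ()
main = do
  print (n fromInteger :: Word8)   -- uses Word8's fromInteger
  print (n fromInteger :: Double)  -- uses Double's fromInteger
```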
As you can see, this turns innocent-looking integer literals into functions, which can (and does) impact performance. However, in many circumstances the compiler can realize that the full polymorphic version is not actually needed, and remove all the dictionaries.
For instance, if you write f 3 but f expects an Int, the "polymorphic" 3 can be converted at compile time. So type inference can aid the optimization phase (and user-written type annotations can greatly help here). Further, some other optimizations can be triggered manually, e.g. using the GHC SPECIALIZE pragma. Finally, the dreaded monomorphism restriction tries hard to force non-functions to remain non-functions after translation, at the cost of some loss of polymorphism. However, the MR is now widely regarded as harmful, since it can cause puzzling type errors in some contexts.
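The MR can be seen directly: without an explicit signature, a binding like the questioner's n stays polymorphic only if the restriction is disabled. A sketch (with the pragma removed, n is defaulted to a single type, Integer, and the Word8 use below becomes a type error):

```haskell
{-# LANGUAGE NoMonomorphismRestriction #-}
import Data.Word (Word8)

-- No type signature: under the MR this binding would be monomorphised;
-- the pragma lets it keep the inferred type Num b => b.
n = fromIntegral (5 :: Int)

main :: IO ()
main = do
  print (n :: Word8)
  print (n :: Integer)
```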
Upvotes: 3