Reputation: 2236
I am looking at the arithmetic coding implementation on https://www.codeabbey.com/index/task_view/adaptive-arithmetic-coding
My understanding is that when you're using floating-point precision, the compressed result will be some floating-point value. However, when using infinite precision with integers and shifting, I thought the output would be binary values. In the above link, there's a function
def remove_first_digit(v):  # returns truncated value and digit which was truncated
    d = v // tail
    v %= tail
    return v, d
and d is appended to the output, which is the compressed result. However, d here is a char. I don't quite understand how this implementation works. Why is there a conversion to char going on here?
Upvotes: 0
Views: 52
Reputation: 112502
Here d is not a "char". It is a base-27 digit, i.e. an integer in the range 0..26. code_to_char() then converts the digit into an upper-case letter or a period to write out.
This is not an arithmetic encoder that produces binary output. It is an arithmetic encoder that produces output consisting of those 27 characters.
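A possible version of that conversion, assuming the period is assigned code 0 and 'A'..'Z' are assigned codes 1..26 (the exact assignment in the linked code may differ):

```python
def code_to_char(d):
    # Map a base-27 digit (0..26) to one of the 27 output characters.
    # Assumption for illustration: 0 -> '.', 1..26 -> 'A'..'Z'.
    return '.' if d == 0 else chr(ord('A') + d - 1)
```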
What is going on there is that the high digits of the low and high values are equal, so that digit is written out and then removed from both the low and high values. Then new digits are shifted in to low and high: a 0 digit into low and a 26 digit into high.
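That digit-shifting step can be sketched as follows. The value of tail and the six-digit state width here are assumptions for illustration, not the site's actual constants; tail just has to be the base raised to one less than the number of digits the state holds:

```python
BASE = 27
TAIL = BASE ** 5  # hypothetical: assumes low/high hold six base-27 digits

def remove_first_digit(v):
    # Split off the most significant base-27 digit, as in the linked code.
    d = v // TAIL
    v %= TAIL
    return v, d

def shift_out_equal_digits(low, high, out):
    # While low and high agree on their leading base-27 digit, emit that
    # digit, then shift a 0 digit into low and a 26 digit into high so
    # both keep the same number of digits.
    while low // TAIL == high // TAIL:
        low, d = remove_first_digit(low)
        high, _ = remove_first_digit(high)
        out.append(d)
        low = low * BASE                  # append a 0 digit to low
        high = high * BASE + (BASE - 1)   # append a 26 digit to high
    return low, high
```

For example, if low and high both start with leading digit 1, one digit is emitted and the interval re-expands to cover the full remaining range.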
Upvotes: 1