Reputation: 3670
I'm working on an application that reads in some strings in Q Number format.
The Java implementation converts a string like this:
int i = Integer.parseInt("00801600", 16);
System.out.println("Number from Integer.parseInt = " + i); // i=8394240
float j = Integer.reverseBytes(i);
System.out.println("After Integer.reverseBytes = " + j); // j=1474560.0
float k = j / 65536; //TWO_POWER_OF_16 = 65536
System.out.println("After Q division = " + k); // k=22.5
I've played with a lot of combinations of Swift functions, and this is (hopefully) pretty close:
let i: Int = Int("00801600", radix: 16) ?? 0
let istr = "Number from Int = \(i)"
let j: Double = Double(i.byteSwapped)
let jstr = "After byte swapping = \(j)"
let k: Double = Double(j) / 65536.0
let kstr = "After Q division = \(k)"
Obviously, Int.byteSwapped isn't what I'm looking for. In my example above, j is where it all goes off the rails. The Java code produces 1474560.0, whereas my Swift code produces 6333186975989760.0.
Upvotes: 0
Views: 158
Reputation: 63369
This is an alternate approach to my main answer. Read that one first.
This is a more protocol-oriented approach. It encodes the numeratorBitWidth at the type level, so each instance only has to have enough memory to store I. Unfortunately, this requires a new struct definition for every kind of Q-encoded integer you might want (there are 16 variants just for 16-bit integers alone: QEncoded1_15, QEncoded2_14, ... QEncoded15_1, QEncoded16_0).
protocol QEncoded {
associatedtype I: BinaryInteger
var i: I { get set }
static var numeratorBitWidth: Int { get } // "m"
static var denominatorBitWidth: Int { get } // "n"
}
extension QEncoded {
static var denominatorBitWidth: Int { return I().bitWidth - Self.numeratorBitWidth }
static var qFormatDescription: String {
let (m, n) = (self.numeratorBitWidth, self.denominatorBitWidth)
return (n == 0) ? "Q\(m)" : "Q\(m).\(n)"
}
var numerator: I {
return i >> Self.denominatorBitWidth
}
var denominator: I {
if Self.denominatorBitWidth == 0 { return 1 }
let denominatorMask: I = ~(~I(0) << Self.denominatorBitWidth) // mask for the low n (denominator) bits; safe even when n == bitWidth
return i & denominatorMask
}
var ratio: Double { return Double(numerator) / Double(denominator) }
}
Example usage:
extension BinaryInteger {
var binaryDescription: String {
var binaryString = ""
var internalNumber = self
var counter = 0
for _ in (1...self.bitWidth) {
binaryString.insert(contentsOf: "\(internalNumber & 1)", at: binaryString.startIndex)
internalNumber >>= 1
counter += 1
if counter % 4 == 0 && counter != self.bitWidth { // group nibbles, but don't add a leading space
binaryString.insert(contentsOf: " ", at: binaryString.startIndex)
}
}
return binaryString
}
}
extension QEncoded {
func test() {
print("\(self.i.binaryDescription) with \(Self.qFormatDescription) encoding is: \(numerator.binaryDescription) (numerator: \(numerator)) / \(denominator.binaryDescription) (denominator: \(denominator)) = \(ratio)")
}
}
struct QEncoded16_0: QEncoded {
static let numeratorBitWidth = 16
var i: UInt16
init(bitPattern: I) { self.i = bitPattern }
}
struct QEncoded8_8: QEncoded {
static let numeratorBitWidth = 8
var i: UInt16
init(bitPattern: I) { self.i = bitPattern }
}
struct QEncoded4_12: QEncoded {
static let numeratorBitWidth = 4
var i: UInt16
init(bitPattern: I) { self.i = bitPattern }
}
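The driver code that produces the output below isn't shown here; presumably it's calls along these lines (an assumption on my part):
QEncoded16_0(bitPattern: 0b0011_1110_0000_1111).test()
QEncoded8_8(bitPattern: 0b0011_1110_0000_1111).test()
QEncoded4_12(bitPattern: 0b0011_1110_0000_1111).test()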
Output:
0011 1110 0000 1111 with Q16 encoding is: 0011 1110 0000 1111 (numerator: 15887) / 0000 0000 0000 0001 (denominator: 1) = 15887.0
0011 1110 0000 1111 with Q8.8 encoding is: 0000 0000 0011 1110 (numerator: 62) / 0000 0000 0000 1111 (denominator: 15) = 4.133333333333334
0011 1110 0000 1111 with Q4.12 encoding is: 0000 0000 0000 0011 (numerator: 3) / 0000 1110 0000 1111 (denominator: 3599) = 0.0008335648791330925
Upvotes: -1
Reputation: 63369
You say that you're trying to implement Q encoded numbers, but the Java code you've shown doesn't really do that. It hard-codes the case of Q16 (by virtue of dividing by 65536, which is 2^16), and frankly, I'm not even sure how it's intended to work. 0x00801600, when Q encoded with a numerator of size 16, represents 0x0080 / 0x1600, which is 128 / 5632, which is equal to ~0.0227. Even if you imagine that your input is swapped, 5632 / 128 is 44, not 22.5. So I don't see any interpretation under which this math works out.
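A quick way to sanity-check that arithmetic in a playground (my own snippet, not part of the answer's code):
// Split 0x00801600 into 16-bit halves and compare both divisions.
let raw: UInt32 = 0x0080_1600
let high = raw >> 16              // 0x0080 = 128
let low = raw & 0xFFFF            // 0x1600 = 5632
print(Double(high) / Double(low)) // ≈ 0.0227
print(Double(low) / Double(high)) // 44.0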
To implement this in Swift (and in Java, for that matter), I would make a new QEncoded data type that stores an integer and the number of bits that count towards the numerator (the number of bits for the denominator can be inferred: the integer's bit width minus the numerator's bits). This approach is the most flexible, but it isn't particularly efficient (since it spends a whole extra Int on the numeratorBitWidth of every instance). If you have so many of these that memory usage is a concern, you can use a more protocol-oriented approach, which I detail in a second answer.
// A QEncoded binary number of the form Qm.n https://en.wikipedia.org/wiki/Q_%28number_format%29
struct QEncoded<I: BinaryInteger> {
var i: I
var numeratorBitWidth: Int // "m"
var denominatorBitWidth: Int { return i.bitWidth - numeratorBitWidth } // "n"
var numerator: I {
return i >> denominatorBitWidth
}
var denominator: I {
if denominatorBitWidth == 0 { return 1 }
let denominatorMask: I = ~(~I(0) << denominatorBitWidth) // mask for the low n (denominator) bits; safe even when n == bitWidth
return i & denominatorMask
}
var ratio: Double { return Double(numerator) / Double(denominator) }
var qFormatDescription: String {
let (m, n) = (self.numeratorBitWidth, self.denominatorBitWidth)
return (n == 0) ? "Q\(m)" : "Q\(m).\(n)"
}
init(bitPattern: I, numeratorBitWidth: Int, denominatorBitWidth: Int) {
assert(numeratorBitWidth + denominatorBitWidth == bitPattern.bitWidth, """
The number of bits in the numerator (\(numeratorBitWidth)) and denominator (\(denominatorBitWidth)) \
must sum to the total number of bits in the integer \(bitPattern.bitWidth)
""")
self.i = bitPattern
self.numeratorBitWidth = numeratorBitWidth
}
// Might be useful to implement something like this:
// init(numerator: I, numeratorBits: Int, denominator: I, denominatorBits: Int) {
//
// }
}
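For what it's worth, the commented-out initializer above could look something like this (a sketch under the same numerator/denominator interpretation; the packing logic is my assumption, not part of the original struct):
extension QEncoded {
    init(numerator: I, numeratorBits: Int, denominator: I, denominatorBits: Int) {
        // Pack the numerator into the high bits and the denominator into the low bits.
        let denominatorMask: I = ~(~I(0) << denominatorBits)
        let bitPattern = (numerator << denominatorBits) | (denominator & denominatorMask)
        self.init(bitPattern: bitPattern, numeratorBitWidth: numeratorBits, denominatorBitWidth: denominatorBits)
    }
}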
Here's a little demo:
extension BinaryInteger {
var binaryDescription: String {
var binaryString = ""
var internalNumber = self
var counter = 0
for _ in (1...self.bitWidth) {
binaryString.insert(contentsOf: "\(internalNumber & 1)", at: binaryString.startIndex)
internalNumber >>= 1
counter += 1
if counter % 4 == 0 && counter != self.bitWidth { // group nibbles, but don't add a leading space
binaryString.insert(contentsOf: " ", at: binaryString.startIndex)
}
}
return binaryString
}
}
extension QEncoded {
func test() {
print("\(self.i.binaryDescription) with \(qFormatDescription) encoding is: \(numerator.binaryDescription) (numerator: \(numerator)) / \(denominator.binaryDescription) (denominator: \(denominator)) = \(ratio)")
}
}
// ↙︎ This common "0_" prefix does nothing; it's just necessary because "0b_..." isn't a valid form.
// The rest of the `_` denote the separation between the numerator and denominator, strictly for human understanding only (it has no impact on the code's behaviour).
QEncoded(bitPattern: 0b0__00111111 as UInt8, numeratorBitWidth: 0, denominatorBitWidth: 8).test()
QEncoded(bitPattern: 0b0_0_0111111 as UInt8, numeratorBitWidth: 1, denominatorBitWidth: 7).test()
QEncoded(bitPattern: 0b0_00_111111 as UInt8, numeratorBitWidth: 2, denominatorBitWidth: 6).test()
QEncoded(bitPattern: 0b0_001_11111 as UInt8, numeratorBitWidth: 3, denominatorBitWidth: 5).test()
QEncoded(bitPattern: 0b0_0011_1111 as UInt8, numeratorBitWidth: 4, denominatorBitWidth: 4).test()
QEncoded(bitPattern: 0b0_00111_111 as UInt8, numeratorBitWidth: 5, denominatorBitWidth: 3).test()
QEncoded(bitPattern: 0b0_001111_11 as UInt8, numeratorBitWidth: 6, denominatorBitWidth: 2).test()
QEncoded(bitPattern: 0b0_0011111_1 as UInt8, numeratorBitWidth: 7, denominatorBitWidth: 1).test()
QEncoded(bitPattern: 0b0_00111111_ as UInt8, numeratorBitWidth: 8, denominatorBitWidth: 0).test()
Which prints:
0011 1111 with Q0.8 encoding is: 0000 0000 (numerator: 0) / 0011 1111 (denominator: 63) = 0.0
0011 1111 with Q1.7 encoding is: 0000 0000 (numerator: 0) / 0011 1111 (denominator: 63) = 0.0
0011 1111 with Q2.6 encoding is: 0000 0000 (numerator: 0) / 0011 1111 (denominator: 63) = 0.0
0011 1111 with Q3.5 encoding is: 0000 0001 (numerator: 1) / 0001 1111 (denominator: 31) = 0.03225806451612903
0011 1111 with Q4.4 encoding is: 0000 0011 (numerator: 3) / 0000 1111 (denominator: 15) = 0.2
0011 1111 with Q5.3 encoding is: 0000 0111 (numerator: 7) / 0000 0111 (denominator: 7) = 1.0
0011 1111 with Q6.2 encoding is: 0000 1111 (numerator: 15) / 0000 0011 (denominator: 3) = 5.0
0011 1111 with Q7.1 encoding is: 0001 1111 (numerator: 31) / 0000 0001 (denominator: 1) = 31.0
0011 1111 with Q8 encoding is: 0011 1111 (numerator: 63) / 0000 0001 (denominator: 1) = 63.0
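Run against the hex value from the question, interpreted as Q16.16 (my own call; as worked out above, this reading gives ~0.0227, not the 22.5 the Java code expects):
QEncoded(bitPattern: 0x0080_1600 as UInt32, numeratorBitWidth: 16, denominatorBitWidth: 16).test()
// prints: "... (numerator: 128) / ... (denominator: 5632) = 0.022727272727272728"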
Upvotes: -1
Reputation: 385920
A Java int is always 32 bits, so Integer.reverseBytes turns 0x00801600 into 0x00168000.
A Swift Int is 32 bits on 32-bit platforms and 64 bits on 64-bit platforms (which is most current platforms). So on a 32-bit platform, i.byteSwapped turns 0x00801600 into 0x00168000, but on a 64-bit platform, i.byteSwapped turns 0x0000000000801600 into 0x0016800000000000.
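You can see the width difference concretely (a quick check of my own, not from the original answer):
let i32: Int32 = 0x00801600
print(String(i32.byteSwapped, radix: 16)) // 168000
let i64: Int64 = 0x00801600
print(String(i64.byteSwapped, radix: 16)) // 16800000000000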
If you want 32 bits, be explicit:
1> let i = Int32("00801600", radix: 16)!
i: Int32 = 8394240
2> let j = Double(i.byteSwapped)
j: Double = 1474560
3> let k = j / 65536
k: Double = 22.5
4>
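And that matches a check by hand: 0x00168000 read as Q16.16 is 0x0016 + 0x8000/0x10000 = 22 + 0.5 = 22.5.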
Upvotes: 6