Porting C endianness & pointers black magic to Swift


I'm trying to translate this snippet:

ntohs(*(UInt16*)VALUE) / 4.0

and some other similar ones from C to Swift. The problem is, I have very little knowledge of Swift and I just can't understand what this snippet does... Here's all I know:

  • ntohs swaps network byte order to host byte order
  • VALUE is a char[32]
  • I just discovered that this Swift expression: (UInt(data.0) << 6) + (UInt(data.1) >> 2) does the same thing. Could someone please explain?
  • I would like to return a Swift UInt (UInt64)

Thanks!


There are 3 answers

Sulthan (BEST ANSWER)
  1. VALUE is a pointer to 32 bytes (char[32]).
  2. The pointer is cast to UInt16 pointer. That means the first two bytes of VALUE are being interpreted as UInt16 (2 bytes).
  3. * dereferences the pointer. We get the first two bytes of VALUE as a 16-bit number. However, it is in network byte order, so we cannot do integer arithmetic on it directly.
  4. ntohs swaps the bytes from network to host byte order, giving a normal integer.
  5. We divide the integer by 4.0.

To do the same in Swift, let's just compose the byte values into an integer.

let host = (UInt(data.0) << 8) | UInt(data.1)

Note that to divide by 4.0 you will have to convert the integer to a floating-point type (Float or Double).
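
For completeness, here is a minimal sketch of the whole expression in Swift, assuming data is the byte tuple from the question, its first two bytes hold the network-order value, and scaled is just a placeholder name:

let host = (UInt(data.0) << 8) | UInt(data.1)  // compose the two big-endian bytes
let scaled = Double(host) / 4.0                // convert explicitly, then divide as in the C

Double is used here to match the C double produced by dividing by 4.0; Float would work the same way.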

phil

It looks like the code is taking the single byte VALUE[0]. This is then dereferenced, which should retrieve a number from a low memory address, 1 to 127 (possibly 255). Whatever number is there is then divided by 4.

I genuinely can't believe my interpretation is correct, and I can't check it because I have no laptop. I really think there may be a typo in your code, because dereferencing like this is not good for portable, reusable code.

I must stress that the string is not converted to a number which is then used.

zwol

The C you quote is technically incorrect, although it will be compiled as intended by most production C compilers.¹ A better way to achieve the same effect, which should also be easier to translate to Swift, is

unsigned int val = ((((unsigned int)VALUE[0]) << 8) |  // ² ³
                    (((unsigned int)VALUE[1]) << 0));  // ⁴

double scaledval = ((double)val) / 4.0;                // ⁵

The first statement reads the first two bytes of VALUE, interprets them as a 16-bit unsigned number in network byte order, and converts them to host byte order (whether or not those byte orders are different). The second statement converts the number to double and scales it.
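
For reference, a rough Swift equivalent of those two statements, assuming the 32 bytes are exposed to Swift as a Foundation Data value and a Swift version with loadUnaligned (5.7 or later) is available; the names value, host and scaledVal are placeholders:

import Foundation

let value = Data([0x12, 0x34] + [UInt8](repeating: 0, count: 30))  // stand-in for VALUE

// Read the first two bytes and let UInt16(bigEndian:) play the role of ntohs,
// i.e. it byte-swaps on little-endian hosts and is a no-op on big-endian ones.
let host = value.withUnsafeBytes { (raw: UnsafeRawBufferPointer) in
    UInt16(bigEndian: raw.loadUnaligned(fromByteOffset: 0, as: UInt16.self))
}

let scaledVal = Double(host) / 4.0  // explicit conversion to Double, then scale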

¹ Specifically, *(UInt16*)VALUE provokes undefined behavior because it violates the type-based aliasing rules, which are asymmetric: a pointer with character type may be used to access an object with any type, but a pointer with any other type may not be used to access an object with (array-of-)character type.

² In C, a cast to unsigned int here is necessary in order to make the subsequent shifting and or-ing happen in an unsigned type. If you cast to uint16_t, which might seem more appropriate, the "usual arithmetic conversions" would then convert it to int, which is signed, before doing the left shift. This would provoke undefined behavior on a system where int was only 16 bits wide (you're not allowed to shift into the sign bit). Swift almost certainly has completely different rules for arithmetic on types with small ranges; you'll probably need to cast to something before the shift, but I cannot tell you what.
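
(For what it's worth, a small Swift sketch of that point: Swift's plain shift operators are fully defined, but a shift performed in UInt8 simply drops the over-shifted bits, so you still have to widen the type before shifting. The variable names are placeholders.)

let b0: UInt8 = 0xAB
let lost = b0 << 8        // 0: the shift happens in UInt8 and the bits fall off
let kept = UInt(b0) << 8  // 0xAB00: widen first, then shift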

³ I have over-parenthesized this expression so that the order of operations will be clear even if you aren't terribly familiar with C.

⁴ Left shifting by zero bits has no effect; it is only included for parallel structure.

⁵ An explicit conversion to double before the division operation is not necessary in C, but it is in Swift, so I have written it that way here.