When the temp is over 0 ˚C it seems OK. When the temp is under 0 ˚C I'm getting weird values:
[0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1]
Parsed:
Humidity:
[0, 0, 0, 0, 0, 0, 1, 1] 3
[0, 1, 1, 1, 1, 0, 0, 1] 121
Temp:
[0, 0, 0, 0, 1, 1, 1, 1] 15
[1, 1, 1, 0, 1, 0, 1, 0] 234
Checksum:
[0, 1, 1, 1, 0, 1, 0, 1] 117
The checksum is OK, and these values give me:
(3 × 256 + 121) / 10 = 88.9 %
(15 × 256 + 234) / 10 = 407.4 ˚C
which is weird when the outside temp is ~ -1.5 ˚C.
According to the specification, when the temperature is under 0 ˚C the first bit of the temperature should be 1, but here the 1 is on the first bit of the second byte of the temperature.
This is the Rust code decoding the readouts:
pub struct Reading {
    pub temperature: f32,
    pub humidity: f32,
}

impl Reading {
    pub fn from_binary_vector(data: &[u8]) -> Result<Self, CreationError> {
        if data.len() != 40 {
            return Err(CreationError::WrongBitsCount);
        }
        let bytes: Result<Vec<u8>, ConversionError> = data
            .chunks(8)
            .map(|chunk| -> Result<u8, ConversionError> { convert(chunk) })
            .collect();
        let bytes = match bytes {
            Ok(this_bytes) => this_bytes,
            Err(_e) => return Err(CreationError::MalformedData),
        };
        // let check_sum: u8 = bytes[..4].iter().sum();
        let check_sum: u8 = bytes[..4]
            .iter()
            .fold(0 as u8, |result, &value| result.overflowing_add(value).0);
        if check_sum != bytes[4] {
            return Err(CreationError::ParityBitMismatch);
        }
        let raw_humidity: u16 = (bytes[0] as u16) * 256 + bytes[1] as u16;
        let raw_temperature: i16 = if bytes[2] >= 128 {
            bytes[3] as i16 * -1
        } else {
            (bytes[2] as i16) * 256 + bytes[3] as i16
        };
        let humidity: f32 = raw_humidity as f32 / 10.0;
        let temperature: f32 = raw_temperature as f32 / 10.0;
        println!("{} {} {} {} {}, {}H, {}T", bytes[0], bytes[1], bytes[2], bytes[3], bytes[4], humidity, temperature);
        if !(-41.0..=81.0).contains(&temperature) {
            return Err(CreationError::OutOfSpecValue);
        }
        if !(0.0..=99.9).contains(&humidity) {
            return Err(CreationError::OutOfSpecValue);
        }
        Ok(Reading {
            temperature,
            humidity,
        })
    }
}
Is the sensor wrong, or does it just send something different from the specification? Thanks in advance.
The temperature seems to use a 12-bit signed value, according to your findings:
With the factor of 10 taken from your source, the following table shows the relation:
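12-bit raw (hex) | signed value | temperature
0x000            |            0 |     0.0 ˚C
0x7FF            |         2047 |   204.7 ˚C
0x800            |        -2048 |  -204.8 ˚C
0xFEA            |          -22 |    -2.2 ˚C
0xFFF            |           -1 |    -0.1 ˚C

Your readout (bytes 15 and 234, i.e. 0x0FEA) therefore decodes to -2.2 ˚C, which fits the ~ -1.5 ˚C outside.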
Therefore, change your code to this and it will show the correct temperature:
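let raw_temperature: i16 = if bytes[2] >= 0x8 {
    // negative: the value is 12-bit two's complement, so pull the high
    // byte down by 0x10 (i.e. the combined value down by 0x1000)
    ((bytes[2] as i16) - 0x10) * 256 + bytes[3] as i16
} else {
    (bytes[2] as i16) * 256 + bytes[3] as i16
};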
Why the subtraction of 0x10 in the case of bytes[2] >= 0x8? Since bytes[2] is multiplied by 256 (0x100), subtracting 0x10 from it subtracts 0x1000 from the combined 12-bit value. This "shifts" the range 0x800..0xFFF down to the range -0x800..-0x001 (0x0800 - 0x1000 = -0x800 and 0x0FFF - 0x1000 = -0x001).
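You can double-check this with the bytes from your readout; here is a minimal standalone sketch with the five parsed bytes hard-coded:

fn main() {
    // parsed bytes from the question; bytes[2..4] are 15 (0x0F) and 234 (0xEA)
    let bytes: [u8; 5] = [3, 121, 15, 234, 117];
    let raw_temperature: i16 = if bytes[2] >= 0x8 {
        // (0x0F - 0x10) * 0x100 + 0xEA = -22
        ((bytes[2] as i16) - 0x10) * 256 + bytes[3] as i16
    } else {
        (bytes[2] as i16) * 256 + bytes[3] as i16
    };
    println!("{} ˚C", raw_temperature as f32 / 10.0); // prints "-2.2 ˚C"
}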