I have a dBase IV database. Each row has a memo field containing an ASCII-encoded string that holds two serialized Borland C++ structures. I can pull the data with OleDb, re-encode the string back to bytes with the ASCIIEncoding class, read those bytes with a BinaryReader, and cast them to my C# struct with Marshal.PtrToStructure. Most of the data comes through correctly, but any float in the database that is too large ends up completely wrong after the cast. For example, a value of 1149.00 comes out as 764.9844, while a value like 64.00 casts fine. I can post some of the code and the structures, but I figured I would keep it short at first. I know that floats are only precise to about 7 significant digits, but I'm confused why I'm seeing this, because these values are well under that limit.
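To rule out float precision as the cause: 1149.0 is exactly representable as a 32-bit float, so the mismatch has to be happening at the byte level somewhere between the memo field and Marshal.PtrToStructure. The following diagnostic is my own sketch, not part of the original code; it prints the exact little-endian bytes a correctly stored 1149.0f occupies, which can be compared against the bytes that actually arrive from the provider.
// Diagnostic sketch: dump the bit patterns of the expected and observed values.
// BitConverter uses the machine's byte order, which on x86 matches the little-endian Borland data.
byte[] expected = BitConverter.GetBytes(1149.0f);
Console.WriteLine(BitConverter.ToString(expected)); // 00-A0-8F-44 on a little-endian machine
byte[] observed = BitConverter.GetBytes(764.9844f);
Console.WriteLine(BitConverter.ToString(observed)); // roughly the pattern the bad cast produced (764.9844 is a rounded display value)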
Edit:
struct cplusplusstruct // from the C++ code
{
int Number;
float P;
float ZP;
float Hours;
int Month;
int Day;
int Year;
int Hour;
int Minute;
int Second;
ULONG UPCTime;
int B;
char Name[21];
float L;
float H;
float S;
}
[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct csharpstruct // the C# struct I created
{
public int Number;
public float Pr;
public float ZP;
public float Hours;
public int Month;
public int Day;
public int Year;
public int Hour;
public int Minute;
public int Second;
public UInt32 UPCTime;
public int B;
[MarshalAsAttribute(UnmanagedType.ByValTStr, SizeConst = 21)]
public string Name;
public float L;
public float H;
public float S;
}
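One sanity check worth running (my own sketch, not from the original code) is to confirm that the marshaled size of csharpstruct matches the number of bytes the Borland code actually serialized. With Pack = 1 the struct above should come out to 81 bytes (nine 4-byte integers, six 4-byte floats, and a 21-byte name); a different number would mean the field offsets are shifted and later fields would decode as garbage.
// Sanity check (sketch): the marshaled size must match the serialized record size.
int marshaledSize = Marshal.SizeOf(typeof(csharpstruct));
Console.WriteLine(marshaledSize); // expect 81 with Pack = 1; anything else means the layout is off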
//OLE DB connection and query ...
//Casting data to struct
ASCIIEncoding encoding = new ASCIIEncoding();
byte[] blob = encoding.GetBytes(memoString); // memo string from OleDb back to raw bytes
MemoryStream memoryStream = new MemoryStream(blob);
BinaryReader binaryReader = new BinaryReader(memoryStream);
int dataSize = Marshal.SizeOf(typeof(csharpstruct));
GCHandle handle = GCHandle.Alloc(binaryReader.ReadBytes(dataSize), GCHandleType.Pinned);
csharpstruct data = (csharpstruct) Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(csharpstruct));
handle.Free(); // unpin the byte array once the struct has been copied out
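Since the eventual conclusion (below) was that the OLE DB provider handed back slightly different data, the quickest way to see that is to hex-dump the decoded bytes at this point and compare them against what xBaseJ or a hex editor shows for the same record. This dump is my own addition, not part of the original code.
// Diagnostic sketch: dump one struct's worth of raw bytes for comparison with xBaseJ / a hex editor.
Console.WriteLine(BitConverter.ToString(blob, 0, Math.Min(blob.Length, dataSize)));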
Edit: The following is Java code that reads the data just fine, without any struct casting.
org.xBaseJ.DBF dbf = new org.xBaseJ.DBF(dbPath);
MemoField m = (MemoField) dbf.getField("MEMOFIELD");
Charset charset = Charset.forName("US-ASCII");
CharsetDecoder decoder = charset.newDecoder();
ByteBuffer trendBytes = ByteBuffer.wrap(m.getBytes()); // raw memo bytes, no string conversion
trendBytes.order(ByteOrder.LITTLE_ENDIAN); // the Borland/x86 data is little-endian
trendBytes.getInt(); // first field: Number
trendBytes.getFloat(); // second field: P
I wasn't able to solve the issue directly. The problem seemed to come from the OLE DB data provider I was using: the data it retrieved from the database was slightly different from what xBaseJ returned. I ended up converting xBaseJ to CLI bytecode using IKVM.NET, which allowed me to replace the OLE DB provider with the xBaseJ reader. The rest of my code remained unchanged.
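For completeness, here is a rough sketch of what the replacement read path can look like from C# once the IKVM-compiled xBaseJ assembly is referenced. IKVM exposes Java packages as .NET namespaces, so the classes keep their original names; the exact namespace of MemoField and the record-iteration calls (read, getRecordCount) are inferred from the xBaseJ usage above and should be treated as assumptions rather than a verified listing.
// Sketch: read the memo bytes through the IKVM-compiled xBaseJ classes instead of OleDb.
org.xBaseJ.DBF dbf = new org.xBaseJ.DBF(dbPath);
var memoField = (org.xBaseJ.fields.MemoField) dbf.getField("MEMOFIELD"); // namespace is an assumption
for (int i = 1; i <= dbf.getRecordCount(); i++)
{
    dbf.read(); // advance to the next record
    byte[] rawBytes = memoField.getBytes(); // raw memo bytes, no string round trip
    // ... feed rawBytes into the same GCHandle / Marshal.PtrToStructure code as before
}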