I'm using the Vimba API for the C language. I'm having trouble accessing the grayscale values of the image that I'm capturing with the camera. I'm also using Visual Studio for the first time, since it was recommended by Vimba; I'm not sure why.
I was updating the SynchronousGrab example provided by Vimba, adding a pixel-analysis step after capturing the image and before writing the image to a bitmap. That analysis consists of finding, in a dark grayscale image, the centroid of the lighter "cloud" of pixels.
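For reference, the centroid I'm computing is the intensity-weighted mean of the bright pixels:

centroid_x = sum(x * I(x,y)) / sum(I(x,y))
centroid_y = sum(y * I(x,y)) / sum(I(x,y))

where I(x,y) is the grayscale value at column x, row y, and the sums run over every pixel at or above a threshold.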
Relevant declarations:
VmbError_t SynchronousGrab( const char* pCameraID, const char* pFileName, int threshold )
{
    VmbFrame_t frame;          // The frame we capture
    VmbUint32_t j = 0;         //CUSTOM
    VmbUint32_t k = 0;         //CUSTOM
    unsigned char* CurSrc = 0; //CUSTOM
    int arraysize = 0;         //CUSTOM
    float* coordY = 0;         //CUSTOM
    float* coordX = 0;         //CUSTOM
    float* intensity = 0;      //CUSTOM
    float centroid_x = -1;     //CUSTOM
    float centroid_y = -1;     //CUSTOM
And the VmbFrame_t struct:
typedef struct
{
    //----- In -----
    void* buffer;                   // Comprises image and ancillary data
    VmbUint32_t bufferSize;         // Size of the data buffer
    void* context[4];               // 4 void pointers that can be employed by the user (e.g. for storing handles)
    //----- Out -----
    VmbFrameStatus_t receiveStatus; // Resulting status of the receive operation
    VmbFrameFlags_t receiveFlags;   // Flags indicating which additional frame information is available
    VmbUint32_t imageSize;          // Size of the image data inside the data buffer
    VmbUint32_t ancillarySize;      // Size of the ancillary data inside the data buffer
    VmbPixelFormat_t pixelFormat;   // Pixel format of the image
    VmbUint32_t width;              // Width of an image
    VmbUint32_t height;             // Height of an image
    VmbUint32_t offsetX;            // Horizontal offset of an image
    VmbUint32_t offsetY;            // Vertical offset of an image
    VmbUint64_t frameID;            // Unique ID of this frame in this stream
    VmbUint64_t timestamp;          // Timestamp set by the camera
} VmbFrame_t;
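Since buffer is only a void*, treating it as one grayscale byte per pixel is only valid for a packed 8-bit mono format. A minimal sanity check right after the frame is received (just a sketch; it assumes the camera delivers VmbPixelFormatMono8, which is what my indexing below expects):

if ( VmbPixelFormatMono8 == frame.pixelFormat
     && frame.imageSize >= frame.width * frame.height )
{
    /* One byte per pixel: pixel (j, k) is at offset k * width + j */
    unsigned char* pixels = (unsigned char*)frame.buffer;
    printf( "Pixel (0,0) = %u\n", (unsigned)pixels[0] );
}
else
{
    printf( "Unexpected pixel format 0x%X or image size %u\n",
            (unsigned)frame.pixelFormat, frame.imageSize );
}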
The code that I added to the example:
err = VmbCaptureFrameWait( cameraHandle, &frame, nTimeout );
if ( VmbErrorSuccess == err ) //Existing SynchronousGrab code
{
    CurSrc = (unsigned char*)frame.buffer;
    // Collect the coordinates of every pixel at or above the threshold,
    // weighted by its intensity
    for (k = 0; k < frame.height; k++) {
        for (j = 0; j < frame.width; j++) {
            if (*(CurSrc + k * frame.width + j) >= threshold) {
                coordY = realloc(coordY, ++arraysize * sizeof(*coordY));
                coordX = realloc(coordX, arraysize * sizeof(*coordX));
                intensity = realloc(intensity, arraysize * sizeof(*intensity));
                coordY[arraysize - 1] = k * (*(CurSrc + k * frame.width + j));
                coordX[arraysize - 1] = j * (*(CurSrc + k * frame.width + j));
                intensity[arraysize - 1] = (*(CurSrc + k * frame.width + j));
            }
        }
    }
    // Centroid = intensity-weighted mean of the collected coordinates
    if (arraysize != 0) {
        centroid_x = getSum(coordX, arraysize) / getSum(intensity, arraysize);
        centroid_y = getSum(coordY, arraysize) / getSum(intensity, arraysize);
        printf("Centroid is in (%.1f,%.1f)\n", centroid_x, centroid_y);
        free(coordX);
        free(coordY);
        free(intensity);
        arraysize = 0;
    }
    if ( VmbFrameStatusComplete == frame.receiveStatus ) //Existing SynchronousGrab code
And the getSum() function I created:
int getSum(float* head, int size) { //CUSTOM FUNCTION
    int i;
    int sum = 0;
    for (i = 0; i < size; i++) {
        sum = sum + head[i];
    }
    return sum;
}
In theory, this should work, since I had already tried the centroid algorithm with an image (with a fixed 1440x1080 size) that I loaded through <stdlib.h> into a C application that only did that.
Now, when adapting that algorithm to a bigger image, through frame.width and frame.height, and cycling through frame.buffer (which I'm not sure exactly what it holds), the centroid coordinates are no longer correct. Not only are they incorrect, they actually oscillate a bit (which might be due to vibrations of the physical camera/table/etc.).
I had an error at the beginning: I thought the image was the same size as the ones I had tried before, so in the coordX and coordY assignments I was using 1440 instead of frame.width. The first time I ran the program after fixing that, the centroid_y value kept coming out negative (which is quite odd, since what I'm assigning to coordY is a product of two positive values), but in the three runs after that it was positive, although with an error of around 1900 pixels (centroid_x has an error of the same magnitude). I don't know if this bit of information helps or further confuses the reader.
I'm guessing that frame.buffer doesn't point to what I think it points to, i.e., a memory array of the grayscale value of every pixel, with length = frame.width * frame.height. But I'm not sure, and I don't know how to get past this.
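One quick way to test that guess would be to print the frame metadata right after VmbCaptureFrameWait() returns and compare imageSize against width * height (a diagnostic sketch, using only the struct fields shown above):

printf( "status=%d format=0x%X size=%ux%u imageSize=%u (expected %u)\n",
        frame.receiveStatus,
        (unsigned)frame.pixelFormat,
        frame.width, frame.height,
        frame.imageSize,
        frame.width * frame.height );

If imageSize is larger than width * height, the lines are probably padded and the k * frame.width + j indexing would drift further off with every row.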
Also note that I have limited access to the place where the camera setup is, so brute-force approaches with excessive memory lookups aren't that feasible. But if it's the only way to go, I'll manage.
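In case the realloc-based version is part of the problem, here is the same threshold-and-centroid scan rewritten with running sums. This is a sketch of my own restructuring (not from the Vimba example), reusing the frame, CurSrc, and threshold variables from above:

/* Same threshold + centroid logic, but with running sums:
   no realloc, no second pass, O(1) extra memory. */
double sumI = 0.0, sumX = 0.0, sumY = 0.0;
for (k = 0; k < frame.height; k++) {
    for (j = 0; j < frame.width; j++) {
        unsigned char v = CurSrc[k * frame.width + j];
        if (v >= threshold) {
            sumI += v;                 /* total intensity       */
            sumX += (double)j * v;     /* intensity-weighted x  */
            sumY += (double)k * v;     /* intensity-weighted y  */
        }
    }
}
if (sumI > 0.0) {
    printf("Centroid is in (%.1f,%.1f)\n", sumX / sumI, sumY / sumI);
}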
EDIT: after posting, I noticed that the check below my code is waiting for the completion of the capture call above my code. I might have messed that up; I should have placed the centroid algorithm below both. I just placed the algorithm after the last use of frame as a whole, but now I'm guessing that this isn't working because I'm looking at an incomplete image in memory.
As @ryyker kindly pointed out, the getSum function and the sum variable inside it should be float. That, and maybe the repositioning of the centroid algorithm (after the status check), solved my problem. It is now working as intended.
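For anyone hitting the same issue, the corrected getSum() looks like this (the only change is int to float for the return type and the accumulator):

float getSum(float* head, int size) { //CUSTOM FUNCTION (fixed)
    int i;
    float sum = 0.0f; // was int, which truncated the weighted sums
                      // and made the centroid division an integer division
    for (i = 0; i < size; i++) {
        sum = sum + head[i];
    }
    return sum;
}

And the centroid block now lives inside the if ( VmbFrameStatusComplete == frame.receiveStatus ) branch, so it only ever runs on a completely received frame.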