write_imagef seems to not change any values


I'm trying to write texture data using OpenCL and display it with OpenGL. The goal is to compare the performance with and without interoperability, but right now I am failing to generate the texture itself. I first wanted to try it without interoperability:

CL image initialization and kernel creation:

this->imageBuffer = new cl::Image2D(*context, CL_MEM_WRITE_ONLY, cl::ImageFormat(CL_RGBA, CL_FLOAT), this->width, this->height); //this->imageBuffer is a cl::Image*

//create kernel
this->kernel = new cl::Kernel(program, "simple_kernel");

//set kernel arguments
this->kernel->setArg(0, *(this->imageBuffer));
this->kernel->setArg(1, this->offset); 
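
The context, program, and queue referenced above are assumed to come from a setup along these lines (device selection and the kernelSource string are placeholders):

//minimal sketch of the assumed setup, using the cl.hpp C++ wrapper
std::vector<cl::Platform> platforms;
cl::Platform::get(&platforms);

std::vector<cl::Device> devices;
platforms[0].getDevices(CL_DEVICE_TYPE_GPU, &devices);

cl::Context *context = new cl::Context(devices);
cl::CommandQueue *queue = new cl::CommandQueue(*context, devices[0]);

//kernelSource is assumed to hold the OpenCL C source as a std::string
cl::Program program(*context, kernelSource);
program.build(devices);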

kernel execution (happening in loop):

cl::size_t<3> origin;
origin[0] = 0; origin[1] = 0; origin[2] = 0;
cl::size_t<3>  range;
range[0] = this->width; range[1] = this->height; range[2] = 1;

//Not necessarily needed, but it shows my point
this->queue->enqueueWriteImage(*(this->imageBuffer), CL_TRUE, origin, range, 0, 0, this->imageOutput);

//enqueue kernel with NDRange
this->queue->enqueueNDRangeKernel(*(this->kernel), cl::NullRange, *(this->global_size), cl::NullRange);

this->queue->enqueueReadImage(*(this->imageBuffer), CL_TRUE, origin, range, 0, 0, this->imageOutput);

this->queue->finish();

std::cout << "fancy output: " << std::endl;;
for(int y = 0; y < this->height; y++) {
    for(int x = 0; x < this->width; x++) {
        std::cout << this->imageOutput[(y * this->width) + x] << ";";
    }
    std::cout << std::endl;
}

OpenCL kernel:

__kernel void simple_kernel(__global __write_only image2d_t texture, float offset) { //offset is not used for testing
    int x = get_global_id(0);
    int y = get_global_id(1);

    int2 pixelPos = (int2)(x, y);
    float4 pixelColor = (float4)(0.5f, 0.0f, 0.0f, 1.0f);

    write_imagef(texture, pixelPos, pixelColor);
};

All solutions I found for similar problems relate to the internal format and format used in glTexImage2D (called at the beginning of the OpenGL rendering part), so it might actually be the same problem here, but I don't see what I am doing wrong.
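
Since the GL side is not shown here: for a CL_RGBA / CL_FLOAT image, the matching upload would be something along these lines (GL_RGBA32F and the textureData pointer are assumptions, not taken from the actual rendering code):

//assumed GL-side upload matching CL_RGBA / CL_FLOAT (4 floats per pixel)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, textureData);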

The expected result would be a red quad, but it only displays the texture as it was initialized (white in this case, black if initialized with 0.0f, grey if initialized with 0.4f). By also enqueuing the writeImage call, I was able to narrow down that the kernel does not seem to change the buffer at all. If writeImage is commented out, it displays a black quad, so reading the buffer seems to work, as it then reads an empty buffer (resulting in the black quad).

This is reinforced by the fact that the fancy output (used at the end of the kernel execution part) only prints the initialized values (e.g. 1s or 0.4s, or 0s when writeImage is not used).

There is 1 answer:

Best answer by Elad Maimoni:

First, your "fancy output" part is wrong. Your image has width * height * 4 float elements. You treat it as having width * height elements.

auto pitch = this->width * 4;
std::cout << "fancy output: " << std::endl;
for(int y = 0; y < this->height; y++) {
    for(int x = 0; x < this->width; x++) {
        auto r = this->imageOutput[(y * pitch) + x * 4 + 0];
        auto g = this->imageOutput[(y * pitch) + x * 4 + 1];
        auto b = this->imageOutput[(y * pitch) + x * 4 + 2];
        auto a = this->imageOutput[(y * pitch) + x * 4 + 3];
        std::cout << r << ' ' << g << ' '<< b << ' ' << a << ';';
    }
    std::cout << '\n';
}
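
The host buffer has to be large enough for this layout as well; a minimal sketch, assuming imageOutput is a plain float array (its allocation is not shown in the question):

//host-side storage for a CL_RGBA / CL_FLOAT image: 4 floats per pixel (assumed allocation)
this->imageOutput = new float[this->width * this->height * 4](); //value-initialized to 0.0f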

Second, your kernel fails to compile on my platform because you mark the image as both __global and __write_only image2d_t. Since image2d_t is an opaque image type, you should omit the __global.
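
With the __global removed, the kernel looks like this:

__kernel void simple_kernel(__write_only image2d_t texture, float offset) {
    int2 pixelPos = (int2)(get_global_id(0), get_global_id(1));
    write_imagef(texture, pixelPos, (float4)(0.5f, 0.0f, 0.0f, 1.0f));
}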

Third is your clEnqueueReadImage call: your row pitch is 0 when it's supposed to be width * 4 * sizeof(float).
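
Applied to the C++ wrapper call from the question, that would look roughly like this:

//explicit row pitch in bytes: width pixels * 4 channels * sizeof(float)
this->queue->enqueueReadImage(*(this->imageBuffer), CL_TRUE, origin, range,
                              this->width * 4 * sizeof(float), 0, this->imageOutput);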