PyOpenCL, failed to set arguments. Invalid values


I have gotten the provided OpenCL kernel to execute in a C environment, but when I try to run it using PyOpenCL with the provided code, I get the following error:

> Traceback (most recent call last):
>  File "integral.py", line 38, in <module>
>    example.execute()
>  File "integral.py", line 26, in execute
>    self.program.integrate_f(self.queue, self.a, None, self.a, self.dest_buf)
>  File "/Library/Python/2.7/site-packages/pyopencl-2013.3-py2.7-macosx-10.9-
> x86_64.egg/pyopencl/__init__.py", line 506, in kernel_call
>    self.set_args(*args)
>  File "/Library/Python/2.7/site-packages/pyopencl-2013.3-py2.7-macosx-10.9-
> x86_64.egg/pyopencl/__init__.py", line 559, in kernel_set_args
>    % (i+1, str(e), advice))
> pyopencl.LogicError: when processing argument #1 (1-based): Kernel.set_arg failed: invalid value -
> invalid kernel argument

So it seems that I'm passing the kernel an invalid argument, but I have no idea why it's complaining about this. Any ideas?

import pyopencl as cl
import numpy

class CL:

    def __init__(self):
        self.ctx = cl.create_some_context()
        self.queue = cl.CommandQueue(self.ctx)

    def loadProgram(self, filename):
        #read in the OpenCL source file as a string
        f = open(filename, 'r')
        fstr = "".join(f.readlines())
        print fstr
        #create the program
        self.program = cl.Program(self.ctx, fstr).build()

    def popCorn(self, n):
        mf = cl.mem_flags

        self.a = int(n)

        #create OpenCL buffers
        self.dest_buf = cl.Buffer(self.ctx, mf.WRITE_ONLY, numpy.empty(self.a).nbytes)

    def execute(self):
        self.program.integrate_f(self.queue, self.a, None, self.a, self.dest_buf)
        c = numpy.empty_like(self.dest_buf)
        cl.enqueue_read_buffer(self.queue, self.dest_buf, c).wait()
        print "a", self.a
        print "c", c


if __name__ == "__main__":
    example = CL()
    example.loadProgram("integrate_f.cl")
    example.popCorn(1024)
    example.execute()
__kernel void integrate_f(const unsigned int n, __global float* c)
{
    unsigned int i = get_global_id(0);
    float x_i = 0 + i*((2*M_PI_F)/(float)n);
    if (x_i != 0 || x_i != 2*M_PI_F)
    {
        c[i] = exp(((-1)*(x_i*x_i))/(4*(M_PI_F*M_PI_F)));
    }
    else c[i] = 0;
}
1 answer

Accepted answer, by jprice:

There are two errors in your kernel invocation. The one that produces your traceback is that self.a is a Python int object, while the kernel expects an OpenCL unsigned int, which is a fixed 32-bit type. You need to explicitly pass a 32-bit integer, for example numpy.int32(self.a). The second error is that the global work size argument needs to be a tuple, not a bare integer.
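As a quick illustration of both points (this snippet is not from the original question, just a numpy-only sanity check): a numpy.int32 scalar has a fixed 4-byte representation matching the kernel's `unsigned int n` parameter, and the global work size is a tuple with one entry per dimension.

```python
import numpy

n = 1024

# A numpy.int32 scalar is exactly 4 bytes wide, matching OpenCL's
# unsigned int; a plain Python int has no fixed-width representation.
arg = numpy.int32(n)
print(arg.nbytes)       # 4

# The global work size must be a tuple (one entry per dimension),
# not a bare integer.
global_size = (n,)
print(global_size)      # (1024,)
```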

So, the correct code for your kernel invocation should be:

self.program.integrate_f(self.queue, (self.a,), None, numpy.int32(self.a), self.dest_buf)