How to set exposure time in Python using mvImpact library


I connected to a Matrix Vision BVS0036 industrial camera from Python using the mvImpact library. I can use the camera now, but I don't know how to change the exposure time or other camera settings. Or rather, I haven't been able to. The code below is the thread class of a PyQt5 program I wrote to view the video in a GUI. Can you help me with this?

import ctypes
import platform

import numpy as np
from PyQt5.QtCore import QThread, pyqtSignal
from mvIMPACT import acquire


class VideoThread(QThread):

    _signal_change_pixmap_signal = pyqtSignal(np.ndarray)    
    def __init__(self):
        super().__init__()
        self._run_flag = True
    
    # The starting point for the thread. After calling start(), the newly created thread calls this function.
    # The default implementation simply calls exec(). You can reimplement this function to facilitate
    # advanced thread management. Returning from this method will end the execution of the thread.
    
    def run(self): 
        try:
            devMgr = acquire.DeviceManager()
            pDev = devMgr.getDevice(0)                          # exampleHelper.getDeviceFromUserInput()
            pDev.open()
            
            isDisplayModuleAvailable = platform.system() == "Windows"
            fi = acquire.FunctionInterface(pDev)
            fi.createSetting("ExposureTime")
            exp = acquire.FullSettingsBase(pDev, "ExposureTime")
            while fi.imageRequestSingle() == acquire.DMR_NO_ERROR:
                print("Buffer queued - Kamera Başlatılıyor...")
            pPreviousRequest = None

            while self._run_flag:
                requestNr = fi.imageRequestWaitFor(10000)
                if fi.isRequestNrValid(requestNr):
                    pRequest = fi.getRequest(requestNr)
                    if pRequest.isOK:
                        cbuf = (ctypes.c_char * pRequest.imageSize.read()).from_address(int(pRequest.imageData.read()))
                        channelType = np.uint16 if pRequest.imageChannelBitDepth.read() > 8 else np.uint8
                        
                        self.cv_img = np.frombuffer(cbuf, dtype= channelType)
                        self.cv_img.shape = (pRequest.imageHeight.read(), pRequest.imageWidth.read(), pRequest.imageChannelCount.read())

                        self._signal_change_pixmap_signal.emit(self.cv_img)
                        global snapshot
                        snapshot = self.cv_img
                        
                    if pPreviousRequest is not None:
                        pPreviousRequest.unlock()
                    pPreviousRequest = pRequest
                    fi.imageRequestSingle()
                    
                else:
                    print("imageRequestWaitFor failed (" + str(requestNr) + ", " + acquire.ImpactAcquireException.getErrorCodeAsString(requestNr) + ")")
        except Exception as e:
            print("Acquisition thread stopped: {}".format(e))
    #list = run()
    def stop(self):
        self._run_flag = False
        self.wait()

1 Answer

Answered by MTTI:

I have spent quite a lot of time over the past months trying to figure out how to make use of mvImpact's advanced features, despite the lack of examples online and the rather basic documentation provided by Matrix Vision.

It is important to understand that all of the Python methods are just wrappers around the underlying C++ methods, and their use is therefore often un-pythonic.
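For example, camera features are not plain attributes: each one is a property object that you access through explicit read(), write() and writeS() calls. A minimal sketch of the pattern, assuming pDev is an already opened device as in the question, and that the 20000.0 value is just an illustration:

# Property objects: values go through read()/write()/writeS(), not attribute assignment.
ac = acquire.AcquisitionControl(pDev)   # pDev is the opened device
print(ac.exposureTime.read())           # read the current exposure time
ac.exposureTime.write(20000.0)          # write a new value (not "ac.exposureTime = 20000.0")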

To change the exposure setting of your camera you need to instantiate the AcquisitionControl class for your device. You can then set the exposure value like this:

ac = acquire.AcquisitionControl(pDev)
ac.exposureTime.write(42) # value is in microseconds
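In the thread from the question this would go right after pDev.open() and before the requests are queued. A minimal sketch reusing the names from the question (the 20000.0 value and switching exposureAuto to "Off" are just illustrative assumptions):

pDev = devMgr.getDevice(0)
pDev.open()

ac = acquire.AcquisitionControl(pDev)
ac.exposureAuto.writeS("Off")       # assumption: disable auto exposure so the manual value is kept
ac.exposureTime.write(20000.0)      # fixed exposure time in microseconds

fi = acquire.FunctionInterface(pDev)
# ... queue the requests and run the acquisition loop as before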

You can also set the camera to auto-expose and then read the value it has settled on after exposing a couple of frames.

ac.exposureAuto.writeS("Continuous")
# start an acquisition and trigger a few frames until the exposureTime converges
# ...

# read the value the camera derived 

print("Automatic exposure value : {}".format(ac.exposureTime.read()))

With mvImpact and its Python wrapper I would generally recommend doing a Ctrl+F search on this page to find a function vaguely related to what you are trying to do and taking it from there. It took me about a week to get an advanced use case working (using the Sequencer Module), but I eventually got the hang of it, and the error messages are clear enough in most cases.