Is the following code correct? I have a 2GB GeForce 750M and am using the PGI Fortran compiler. The program works fine for 4000x4000 arrays, but anything larger fails even though it should not. You can see I have allocated 9000x9000 arrays, yet if I use an n value > 4000 the program throws a runtime error.
program matrix_multiply
  !use openacc
  implicit none
  integer :: i, j, k, n
  real, dimension(9000,9000) :: a, b, c
  real x_scalar
  real x_vector(2)
  n = 5000
  call random_number(b)
  call random_number(a)
  !$acc kernels
  do k = 1, n
    do i = 1, n
      do j = 1, n
        c(i,k) = c(i,k) + a(i,j) * b(j,k)
      enddo
    enddo
  enddo
  !$acc end kernels
end program matrix_multiply
Thanks to Robert Crovella
My guess is that there is some sort of display timeout on the Mac (see also here). As you increase to a larger size, the matrix multiply kernel takes longer, and at some point the display driver timeout in Mac OS resets the GPU. If that is the case, you could work around it by switching to a system/GPU where the GPU is not hosting a display. Both Linux and Windows (TDR) have similar timeout mechanisms.
You have to boot into >console mode in Mac OS and also disable automatic graphics switching; console mode turns off Aqua (the Mac GUI) and is therefore supposed to remove the limitation.
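If moving the work to a GPU that is not driving a display is not an option, one possible mitigation is to break the multiply into several shorter kernel launches so that each launch finishes before the watchdog fires. The sketch below is my own illustration of that idea, not part of the answer above; the chunk size of 500 columns is an arbitrary assumption and would need tuning for your GPU (the sketch also initializes c, which the posted code never does):

program matrix_multiply_chunked
  implicit none
  integer, parameter :: n = 5000, chunk = 500   ! chunk size is a guess; tune it
  integer :: i, j, k, k0
  real, dimension(:,:), allocatable :: a, b, c

  allocate(a(n,n), b(n,n), c(n,n))
  call random_number(a)
  call random_number(b)
  c = 0.0                                       ! the accumulator must start at zero

  ! Keep a, b and c resident on the GPU for the whole computation,
  ! but issue one short kernel per block of columns of c.
  !$acc data copyin(a,b) copy(c)
  do k0 = 1, n, chunk
    !$acc kernels
    do k = k0, min(k0 + chunk - 1, n)
      do i = 1, n
        do j = 1, n
          c(i,k) = c(i,k) + a(i,j) * b(j,k)
        enddo
      enddo
    enddo
    !$acc end kernels
  enddo
  !$acc end data

  print *, 'c(1,1) = ', c(1,1)                  ! touch the result so the work is not optimized away
end program matrix_multiply_chunked

The data region keeps the arrays on the device across the launches, so the extra launches mostly cost kernel-launch latency rather than repeated host-to-device copies.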