I've found that numpy's genfromtxt function in Python is very slow, so I decided to wrap a module with f2py to read my data. The data is a matrix.
subroutine genfromtxt(filename, nx, ny, a)
    implicit none
    character(100) :: filename
    integer :: row, col, ny, nx
    real, dimension(ny,nx) :: a
    !f2py character(100), intent(in) :: filename
    !f2py integer, intent(in) :: nx
    !f2py integer, intent(in) :: ny
    !f2py real, intent(out), dimension(ny,nx) :: a

    ! Open the file
    open(5, file=filename)
    ! Read the data line by line
    do row = 1, ny
        read(5,*) (a(row,col), col = 1, nx)
    end do
    close(5)
end subroutine genfromtxt
The length of the filename is fixed to 100 because f2py can't deal with dynamic string sizes. The code works for filenames shorter than 100 characters; otherwise the Python code crashes.
This is called in Python as:

import Fmodules as modules
w_map = modules.genfromtxt(filename, 100, 50)
How can I do this dynamically, without passing nx and ny as parameters or fixing the filename length to 100?
I think you can just use trim(filename) to deal with filenames shorter than the length of filename (i.e. you could make filename much longer than it needs to be and just trim it when opening the file).

I don't know of any nice clean way to remove the need to pass nx and ny to the Fortran subroutine. Perhaps if you can determine the size and shape of the data file programmatically (e.g. read the first line to find nx, then call some function or do a first pass over the file to count the number of lines), you could allocate your a array after finding those values. That would slow everything down, though, so it may be counterproductive.
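Alternatively, that first pass can happen on the Python side before calling the wrapped routine, so the Fortran code stays unchanged. A minimal sketch, assuming whitespace-separated values (the helper name detect_shape is mine; modules.genfromtxt is the wrapped subroutine from the question):

```python
def detect_shape(filename):
    """One pass over the file: columns from the first line, rows by counting lines."""
    with open(filename) as f:
        first = f.readline()
        nx = len(first.split())      # number of columns in the first row
        ny = 1 + sum(1 for _ in f)   # remaining rows plus the first one
    return nx, ny

# Usage with the wrapped routine from the question:
# nx, ny = detect_shape(filename)
# w_map = modules.genfromtxt(filename, nx, ny)
```

This costs an extra read of the file, but the shape detection is cheap compared to parsing the floats, and it removes the hard-coded dimensions from the call site.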