For various maintenance, stability, and backup reasons I need to replace a 10-node (10 Linux hosts) OCFS2 shared filesystem with something that does not rely on a shared disk. The client applications are PHP in a Linux-only environment.
Right now each PHP client requests a unique id from the database and creates a file named after that id on the shared disk; the database stores all the file metadata. Existing files are accessed the same way, roughly like the sketch below.
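Simplified, the current flow looks something like this (the DSN, table name, mount path and the $originalName/$tmpUpload variables are placeholders for my actual setup):

```php
<?php
// Current approach (simplified): the database hands out the id,
// then the file is written straight onto the shared OCFS2 mount.
$pdo = new PDO('mysql:host=db;dbname=files', 'user', 'pass');

// Insert a metadata row and use the auto-increment id as the file name.
$pdo->prepare('INSERT INTO files (original_name, size) VALUES (?, ?)')
    ->execute([$originalName, filesize($tmpUpload)]);
$id = $pdo->lastInsertId();

// Every node mounts the same OCFS2 volume, so the file is visible everywhere.
copy($tmpUpload, '/mnt/ocfs2/' . $id);

// Reads work the same way in reverse: look up the id, open the path.
$contents = file_get_contents('/mnt/ocfs2/' . $id);
```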
I want to replace the shared-disk solution with putfile(id, '/tmp/path') and getfile(id, '/tmp/path') calls to a file server over the network. Client-side I could work with the files in a tmpfs, and the server would handle compression etc. This would also free me from the PHP client dependency, so I could use the file server directly from other applications as well, such as Windows Delphi applications. The sketch after this paragraph shows the kind of client interface I have in mind.
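Something this thin is what I'm hoping for client-side. putfile()/getfile() are the hypothetical calls I'd like to end up with; the HTTP transport and the fileserver.internal host below are just placeholders to show the shape of the calls, not a real server I have:

```php
<?php
// Hypothetical client wrapper for the file server I'm looking for.
function putfile(int $id, string $localPath): void
{
    $ch = curl_init("http://fileserver.internal/files/$id");
    curl_setopt_array($ch, [
        CURLOPT_PUT            => true,
        CURLOPT_INFILE         => fopen($localPath, 'rb'),
        CURLOPT_INFILESIZE     => filesize($localPath),
        CURLOPT_RETURNTRANSFER => true,
    ]);
    curl_exec($ch);
    curl_close($ch);
}

function getfile(int $id, string $localPath): void
{
    // Work in tmpfs on the client, e.g. $localPath = '/tmp/' . $id
    $fh = fopen($localPath, 'wb');
    $ch = curl_init("http://fileserver.internal/files/$id");
    curl_setopt($ch, CURLOPT_FILE, $fh);
    curl_exec($ch);
    curl_close($ch);
    fclose($fh);
}
```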
In theory even an FTP-based solution could work, though it would probably not perform very well. Or am I wrong to distrust the old FTP protocol? A rough sketch of how that would look follows below.
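For comparison, the same two calls mapped onto PHP's ftp extension would look roughly like this (host and credentials are placeholders); the per-call connect/login overhead is one reason I doubt it would hold up for millions of small files:

```php
<?php
// FTP variant of the same put/get calls, using PHP's ftp extension.
function ftp_putfile(int $id, string $localPath): void
{
    $conn = ftp_connect('ftp.internal');
    ftp_login($conn, 'fileuser', 'secret');
    ftp_pasv($conn, true);                      // passive mode for firewalls/NAT
    ftp_put($conn, (string)$id, $localPath, FTP_BINARY);
    ftp_close($conn);
}

function ftp_getfile(int $id, string $localPath): void
{
    $conn = ftp_connect('ftp.internal');
    ftp_login($conn, 'fileuser', 'secret');
    ftp_pasv($conn, true);
    ftp_get($conn, $localPath, (string)$id, FTP_BINARY);
    ftp_close($conn);
}
```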
I currently have over 30 million file ids, most of them a few KB in size with notable exceptions up to 300 MB, totalling only 320 GB. The PHP client also does some compression and grouping with gzip and tar; it's all very clumsy.
I was hoping to find something fast and simple like memcachedb, but for files. The closest I've found is Hadoop's HDFS, but I don't think that's quite the right solution.
Any recommendations? Something obvious I'm missing?