I just created a VERY large neural net, albeit on very powerful hardware, and imagine my shock and disappointment when I realized that NeuralFit[] from the NeuralNetworks` package only seems to use one core, and not even to its fullest capacity. I was heartbroken. Do I really have to write an entire NN implementation from scratch? Or did I miss something simple?
My net takes 200 inputs through 2 hidden layers of 300 neurons each and produces 100 outputs. I understand we're talking about trillions of calculations, but as long as I know my hardware is the weak point, that can be upgraded. It should handle training of such a net fairly well if left alone for a while (4 GHz, 8-thread machine with 24 GB of 2000 MHz CL7 memory, running RAID-0 SSDs on SATA-III, so I'm fairly sure).
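For context, here is a minimal sketch of how a net of this shape gets set up and trained with the package; the random data, the sample count, and the iteration count below are just placeholders for illustration, not my actual setup:

    Needs["NeuralNetworks`"]

    (* Placeholder data: 1000 examples with 200 inputs and 100 outputs each. *)
    x = RandomReal[{-1, 1}, {1000, 200}];
    y = RandomReal[{-1, 1}, {1000, 100}];

    (* Feedforward net: 200 -> 300 -> 300 -> 100.                          *)
    (* Weights: 200*300 + 300*300 + 300*100 = 180000, plus 700 biases,     *)
    (* so roughly 181k parameters updated on every training iteration.     *)
    net = InitializeFeedForwardNet[x, y, {300, 300}];

    (* Train for a few iterations; this is the step that stays on one core. *)
    {trainedNet, fitRecord} = NeuralFit[net, x, y, 10];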
Ideas? Suggestions? Thanks in advance for your input.
You can contact the author of the package directly; he is a very approachable fellow and might be able to make some suggestions.