Deep Learning is somewhat trendy, and it may still be a while before the cool kids finally shuffle over to the next buzzword. One of the amusements of getting older is seeing how things go in and out of fashion. I first started doing things with neural networks a couple of decades ago, and at that time it was mostly for artificial life simulations, typically involving what were called animats (i.e. simulated animals). I even sometimes talked about neural net stuff in the first programming job interviews I ever had, and back then doing things of this nature was about as uncool as it gets. Maybe talking about pure statistics, k-means and things like that, would have been uncooler, but not by much. My impression is that the reverse is true today.
I've now upgraded the Python API for the libdeep deep learning library to be compatible with Python 3. It's one of those things I had been meaning to do for a while, and now that Python 3 is the default on the desktop operating systems I use, I thought it was about time I bit the bullet.
I know that Python 2 is still used extensively in all sorts of places, and since the differences between the two versions are not all that great, I've added a #define within the code so that you can switch back to Python 2 if you need to. This is documented in the manpage.
Another recent decision on libdeep was to remove the differentiable neural computer code. Exotic types of neural Turing machine are interesting, since they combine the analogue nature of traditional neural nets with the power of a von Neumann architecture, but I'd rather keep libdeep specifically focused on deep learning and not stray into adjacent areas. That is, keep to the Unix philosophy of small tools doing well-defined jobs. If I do anything later with neural Turing machines it will be as a separate project.