Well, well. These days the terms "Machine Learning" and "Deep Learning" seem to be all over the place. But what is all this hype about?

Some 25 years ago, for my university thesis at Pisa University, my teammate and I implemented a functional programming language based on structures called "hypergraphs", which specialized in finding solutions to "hard" problems by directing the computation towards empirically "good" solutions and then applying local optimization techniques. There are still traces of it on the net; here is an article from the director of the CS department at the university: Article by prof. Gallo, Carraresi et al.

We applied these techniques to the problem of forming duty shifts for the public transportation company of a medium-sized Italian city (Florence). This was the end result of a year of work, and part of a bigger project that also involved other students and other universities. The work helped save a substantial amount of money for the public transport company (ATAF Florence), which gave us the data and for which we provided a solution.

We knew what a solution looked like: we supplied the rules for building one, plus some precedence rules. The program picked subsets of the problem almost at random, found local optima, and moved on. The computation could go on for a very long time, but you would get better and better solutions as they were found (a sketch of this loop appears at the end of this post). Isn't this Machine Learning, in a way? The technique was generic enough to be applied to a number of problems.

And about Deep Learning... Please. Neural networks have been around for generations. Having more computing power makes it possible to go deeper, to have more and more layers and more complex structures. But this, alone, is not capturing "knowledge". The knowledge, the criteria for deciding when one solution is better than another, must still be known from the start. The new facts must still be sought out. That is what my renewed learning experience has shown me so far, at least!
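For the curious, here is a minimal Python sketch of the kind of anytime loop I described above: pick a piece of the problem almost at random, optimize it locally, keep every improvement, and report better solutions as they are found. The toy problem (a small travelling-salesman tour improved by reversing random segments) and all the function names are my illustration, not the original hypergraph system or the duty-shift model.

```python
import math
import random

def tour_length(points, tour):
    """Total length of the cycle that visits points in the given order."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anytime_local_search(points, iterations=20_000, seed=0):
    """Anytime improvement loop: repeatedly pick a near-random subset of
    the problem (here, a tour segment), optimize it locally (here, by
    reversing it), and keep the best solution found so far.
    Yields each improvement as it is found, so the caller can stop at
    any time with the best solution seen up to that point."""
    rng = random.Random(seed)
    n = len(points)
    best = list(range(n))
    rng.shuffle(best)
    best_cost = tour_length(points, best)
    for _ in range(iterations):
        i, j = sorted(rng.sample(range(n), 2))                    # subset picked almost at random
        candidate = best[:i] + best[i:j + 1][::-1] + best[j + 1:]  # local optimization move
        cost = tour_length(points, candidate)
        if cost < best_cost:                                       # keep only improvements
            best, best_cost = candidate, cost
            yield best, best_cost                                  # better solutions as found

if __name__ == "__main__":
    rng = random.Random(42)
    pts = [(rng.random(), rng.random()) for _ in range(50)]
    for tour, cost in anytime_local_search(pts):
        print(f"improved tour length: {cost:.3f}")
```

The point of writing it as a generator is exactly the "anytime" property: you can let it run for a very long time and harvest whatever best solution it has reached whenever you decide to stop.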