Data150-Aisling

Transfer Learning, Stacked Generalization, and Keras

It is amazing to see the capabilities of a neural network and how it is able to process complex data and make connections based on different factors. Clare explained a model using a music example in which sonic variables served as the input and the model predicted the genre. The fact that a model can be trained to predict certain artists is surprising, but what is also crucial to this process is comparing that to when the model is asked to predict 20 artists it wasn't trained on.
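To make the music example concrete, here is a minimal sketch of what such a classifier could look like in Keras. This is not Clare's actual model; the number of sonic features, the number of genres, and the random data are all made up for illustration.

```python
# Hypothetical sketch: a small Keras classifier mapping sonic features to a genre.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

num_features = 12   # e.g., tempo, loudness, energy, ... (hypothetical)
num_genres = 5      # hypothetical number of genre classes

model = keras.Sequential([
    layers.Input(shape=(num_features,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(num_genres, activation="softmax"),  # one probability per genre
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Fake data just to show the training call; real data would be labeled songs.
X = np.random.rand(200, num_features)
y = np.random.randint(0, num_genres, size=200)
model.fit(X, y, epochs=3, verbose=0)
```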
I think it is significant to note that the sonic variables do not need to be related to each other, which makes the matrix operations in Keras much simpler. However, that simplicity creates a problem: as Clare put it, "we start to lose the context of where things are relative to each other," and with that we lose predictive power. In systems like this there is always a trade-off. In this case, the model can still identify images when the inputs are treated independently, but it can be close to useless without convolutional neural networks.
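Here is a rough sketch of that trade-off, with made-up image sizes and class counts. Flattening the pixels into one long vector keeps the math simple but throws away which pixels were next to each other, while convolutional layers keep that spatial context.

```python
# Illustrative comparison only; shapes and layer sizes are arbitrary.
from tensorflow import keras
from tensorflow.keras import layers

# Dense-only version: Flatten discards where pixels were relative to each other.
dense_model = keras.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(2, activation="softmax"),
])

# Convolutional version: filters slide over the image, so neighboring pixels
# are processed together and relative position is preserved.
conv_model = keras.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(2, activation="softmax"),
])
```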
Clare talks about how training a Keras model from scratch is quite a long process, which raises the question of whether there is really any net gain from that approach when there are ways it could be done more efficiently. Transfer learning presents a solution to this efficiency problem, and I think it is critical that we constantly try to develop processes and systems to make them faster and better. It is a far better approach because it takes what a model has already learned from the images it was previously trained on and applies that to identify other images. The model's ability to keep learning on top of previous knowledge is hard to comprehend, because you wouldn't expect a machine to have that capacity. This is critical, though, for the growth and development of these models as they're applied to real-world problems.
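A minimal transfer-learning sketch in Keras, assuming a pretrained base such as MobileNetV2 (any pretrained application model would do). The idea is exactly what the lecture described: reuse weights already learned on one image dataset (here ImageNet), freeze them, and only train a small new classification head for the new task. The three output classes are a hypothetical placeholder.

```python
from tensorflow import keras
from tensorflow.keras import layers

base = keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,      # drop the original ImageNet classifier
    weights="imagenet",     # reuse previously learned weights
)
base.trainable = False      # freeze what was already learned

model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(3, activation="softmax"),  # new task's classes (hypothetical)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```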
A study is being conducted at William &amp; Mary to test road quality. Students from a lab called 'Roadrunner' drove around these roads using Android phones to record the quality. They then used satellite images to build a model, with the intention of comparing its predictions to the actual quality of the roads. It was cool to learn about the classifier and how much of an impact that can have on determining a good road versus a bad road. From what I understand, there is great value in reusing weights that were previously learned on other images, because it makes model training so much faster. Essentially, randomly initialized weights would not be as beneficial as pre-trained ones.
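This is not the Roadrunner lab's actual code, but a sketch of the difference that last point describes: the only change between the two bases below is where the starting weights come from, ImageNet pre-training versus random initialization.

```python
from tensorflow import keras

pretrained_base = keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))

random_base = keras.applications.ResNet50(
    include_top=False, weights=None, input_shape=(224, 224, 3))

# Starting from `pretrained_base` typically converges much faster on a small
# labeled set (e.g., road images) than `random_base`, which has to learn
# basic visual features from scratch.
```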
When Clare presented a low-quality picture and asked three people to predict what it was, it was somewhat mind-boggling to hear that machines are better at this kind of prediction than humans. It makes one think about the future and how much more advanced machines will be than humans. The fact that humans can create something that ends up performing more complex tasks than we can is incomprehensible. It is somewhat frightening to think about the potential these machines have and that, even now, the naked eye is just not as accurate. I liked the connection she made between the three people predicting what the image was and how three models could vote on what the image was.
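The "three models voting" idea can be sketched as a simple hard-voting ensemble: each model predicts a class for the same image, and the majority label wins. The model and image names below are placeholders.

```python
import numpy as np
from collections import Counter

def majority_vote(models, image):
    """Hard-voting ensemble: return the class most models agree on."""
    batch = np.expand_dims(image, axis=0)   # model.predict expects a batch
    votes = [int(np.argmax(m.predict(batch, verbose=0))) for m in models]
    return Counter(votes).most_common(1)[0][0]

# predicted_class = majority_vote([model_a, model_b, model_c], image)
```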
The ability of these models to turn their inputs into the best possible prediction is mind-blowing. Not only is the model making a prediction based on its training, it is actually weighing a variety of factors to make the most accurate prediction it can. Code also plays an important role here, because that is where the models are loaded, which is how she was able to get the predictions for good, medium, and bad roads. The concept of stacked generalization is amazing, as it allows several models' outputs to be combined into more accurate predictions.
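A minimal sketch of stacked generalization, under the same assumptions as above: the base models' predicted class probabilities become the input features for a second-level "meta" model, which learns how to combine them. The good/medium/bad classes follow the road example; the model names and data are placeholders.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def stack_features(base_models, images):
    """Concatenate each base model's class probabilities into one feature vector."""
    preds = [m.predict(images, verbose=0) for m in base_models]   # each (n, 3)
    return np.concatenate(preds, axis=1)                          # (n, 3 * len(models))

def build_meta_model(num_base_models, num_classes=3):
    """Small meta-learner that maps stacked predictions to a final class."""
    return keras.Sequential([
        layers.Input(shape=(num_base_models * num_classes,)),
        layers.Dense(16, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

# Typical use (placeholders): train the meta-model on a held-out set so it
# learns which base model to trust for which kinds of roads.
# X_meta = stack_features([model_a, model_b, model_c], holdout_images)
# meta = build_meta_model(num_base_models=3)
# meta.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# meta.fit(X_meta, holdout_labels, epochs=10)
```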