We now enter the field of “deep learning”, which is essentially based on artificial neural networks. To follow a logical thread, it is necessary to revisit some concepts already covered above.
From a conceptual point of view, this is where we stand today, and for the moment progress consists mainly of inventing increasingly sophisticated algorithms built on this concept. Not that this is simple, mind you: the real power of an algorithm lies in the ability to train the neural network, to make it gain experience. How? Let’s explain it in three steps (a brief code sketch covering all three follows the list):
Learning phase: Backpropagation is usually used. An input from an exercise whose correct outcome the network already knows is fed into the network and passed through all the hidden layers to the output; since the system is still “ignorant”, the transitions from one layer to the next are essentially random, and the output will almost certainly be wrong. Knowing what the correct answer should have been, however, the network measures how wrong it was and travels back along the path, adjusting the parameters at each layer so that they move closer and closer to producing the correct result. The more (and the better) the examples “digested” by the network and propagated backwards, the greater the probability that the system will make the right associations and arrive at the correct answer.
Test phase: When the programmer believes the network is sufficiently trained, they feed the system inputs for problems whose solutions they (but not the network) already know. This reveals whether the network is ready to process real problems on new data or whether it still makes mistakes too often and therefore needs further training. If you get the impression that this is relatively simple, you are off the mark: it takes millions of training and testing iterations to put an effective neural network into production.
Put into production: Once the test phase has been passed, the work does not end with putting the network into production. Feedback mechanisms need to be created, because a system that works well today will not necessarily work well tomorrow: contexts, behaviours, and scenarios change, and the network must be able to update itself in near real time; to do this it needs mechanisms that tell it whether it is still heading in the right direction.
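Putting the three phases together, here is a minimal, self-contained sketch in Python (NumPy only). It trains a tiny two-layer network with hand-written backpropagation on synthetic data, evaluates it on a held-out test set, and adds one simple production-time feedback check. Everything in it, from the synthetic make_data task to the network size and the DRIFT_THRESHOLD, is an illustrative assumption, not a reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    """Synthetic binary-classification exercise: label = 1 if x0 + x1 > 0."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)
    return X, y

# --- 1. Learning phase: forward pass, compare with the known answer,
#        then propagate the error backwards and adjust the parameters.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 1.0

def forward(X):
    h = np.tanh(X @ W1 + b1)                  # hidden layer
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))      # output probability
    return h, p

X_train, y_train = make_data(1000)
for epoch in range(500):
    h, p = forward(X_train)
    grad_out = (p - y_train) / len(X_train)   # error at the output...
    grad_W2 = h.T @ grad_out                  # ...propagated backwards
    grad_b2 = grad_out.sum(axis=0)
    grad_h = grad_out @ W2.T * (1 - h ** 2)   # through the hidden layer
    grad_W1 = X_train.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)
    W2 -= lr * grad_W2; b2 -= lr * grad_b2    # adjust the parameters
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

# --- 2. Test phase: inputs whose answers the network has never seen.
def accuracy(X, y):
    _, p = forward(X)
    return ((p > 0.5) == y).mean()

X_test, y_test = make_data(200)
print("test accuracy:", accuracy(X_test, y_test))

# --- 3. Production: keep collecting feedback and flag the model for
#        retraining if accuracy on recent, labelled feedback drops.
DRIFT_THRESHOLD = 0.9                 # illustrative threshold
X_live, y_live = make_data(200)       # stands in for real user feedback
if accuracy(X_live, y_live) < DRIFT_THRESHOLD:
    print("performance degraded -> trigger retraining")
```

In a real deployment, of course, the final check would compare against production metrics collected over time rather than a fresh synthetic batch, but the principle is the same: monitor, detect drift, retrain.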
An important frontier for the evolution of neural networks, in terms of processing capacity, is research in nanotechnology and, in particular, neuromorphic chips.