Neural Network and Iterative Machine Learning

By Moloy De posted Sat April 01, 2023 10:07 PM

  

Many modern-day machine learning platforms and frameworks have implemented the iteration process themselves to build better models; Apache Spark and MapR are two such examples. The two implement iteration in technically different ways, and each has its own merits and limitations.
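To make the pattern concrete, here is a minimal PySpark sketch of an iterative job. This code is not from the blog; the toy data, the ten-pass loop, and the 0.1 learning rate are illustrative assumptions. The point is that caching the dataset lets every pass reuse it in memory instead of re-reading it, which is what makes Spark attractive for iterative workloads.

```python
import numpy as np
from pyspark.sql import SparkSession

# Illustrative iterative job: logistic-regression-style gradient steps over a cached RDD.
spark = SparkSession.builder.appName("iterative-sketch").getOrCreate()
sc = spark.sparkContext

# Toy (feature-vector, label) pairs; cache() keeps them in memory across iterations.
points = sc.parallelize([
    (np.array([1.0, 2.0]), 1.0),
    (np.array([2.0, 1.0]), 0.0),
    (np.array([0.5, 1.5]), 1.0),
]).cache()

w = np.zeros(2)            # model weights, refined on every pass
for step in range(10):     # each pass reuses the cached data instead of re-reading it
    gradient = points.map(
        lambda p: (1.0 / (1.0 + np.exp(-p[0].dot(w))) - p[1]) * p[0]
    ).reduce(lambda a, b: a + b)
    w -= 0.1 * gradient    # assumed learning rate of 0.1

print("weights after 10 iterations:", w)
spark.stop()
```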

Neural networks are electronic networks that simulate the way the human brain works. Every network has an input layer, an output layer and, in between, hidden layers that apply the learned transformations. The input layer receives the initial dataset, and each iteration produces a result that is emitted at the output. This output is then compared with the actual result dataset, and the error is fed back through the network. The error enables the network to correct its weights and move closer and closer to the actual dataset. This process is called training the neural network, and each iteration improves the accuracy. The key difference between the iteration performed here and the iteration performed by Boosting algorithms is that here we don't have to update the classifiers manually; the network adjusts itself based on the error feedback.
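As an illustration of this feedback loop, here is a minimal NumPy sketch of a single hidden-layer network learning XOR. It is not from the original post; the layer sizes, learning rate, epoch count and toy data are all assumptions chosen for brevity.

```python
import numpy as np

# Hypothetical toy example: a one-hidden-layer network learning XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # actual result dataset

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
lr = 0.5                       # assumed learning rate

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    # Forward pass: compute the network's current output.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Error between the prediction and the actual result dataset.
    err = out - y

    # Backward pass: the error is fed back and the weights correct themselves.
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_out
    W1 -= lr * X.T @ grad_h

print(np.round(out, 2))  # predictions move closer to y with each iteration
```

No classifier is updated by hand here: the same update rule, driven only by the error signal, refines the weights on every iteration.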

Neural networks have become the poster child of machine learning because of their accuracy in predictive modelling. Some well-known neural network families are:

1. Convolutional Neural Networks  
2. Boltzmann Machines
3. Recurrent Neural Networks
4. Deep Neural Networks
5. Memory Networks

Artificial neural networks are highly accurate at modelling data mainly because of their iterative learning process. But this process is different from the one we explored earlier for Boosting algorithms: here it is seamless and automatic, and in a way it paves the way for reinforcement learning in AI systems.

The main advantage of this process is obviously the level of accuracy it can achieve on its own. The model is also reusable, because it learns the means to achieve accuracy rather than just giving you a direct result. The flip side is that a model can go badly wrong and deviate in a completely different direction, because the induced iteration takes its own course and needs no human supervision. The Facebook chatbots that deviated from their original goal and started communicating with each other in a language of their own are a case in point. But, as the saying goes, smart things come with their own baggage; it is a risk we have to be ready to tackle if we want to create more accurate models and smarter systems.

QUESTION I: Isn't Gradient Descent an iterative method? 

QUESTION II: What is Back Propagation?

REFERENCE: Blog: Iterative Machine Learning: A Step Towards Model Accuracy
