MACHINE LEARNING FACTS

Machine learning and AI algorithms tend to be quite sophisticated. So instead of going into the mechanics of how they work, we will focus on what the algorithms do conceptually. Let’s start with a simple example: deciding whether a moth is a Luna Moth or an Emperor Moth. This decision-making process is called classification, and an algorithm that performs it is called a classifier.

Although there are techniques that can use raw data for training - such as photos and sounds - many algorithms reduce the complexity of real-world objects and situations into what are called features: values that usefully characterize the things we wish to classify. For our moth example, we will use two features: “wingspan” and “mass”.

In order to train our classifier to make good predictions, we need training data. To gather it, we could send an entomologist into a forest to collect data for both Luna Moths and Emperor Moths. These experts can recognize different moths, so they not only record the feature values, but also label that data with the actual moth species. This is called labeled data. Because we have only two features, it is easy to visualize this data on a scatterplot. Here, I have plotted the data for 100 Emperor Moths in red and 100 Luna Moths in blue. We can see that the species form two groupings, but… there is some overlap in the middle… so it is not entirely clear how best to separate the two. That is what machine learning algorithms do - find optimal separations! I’ll just eyeball it and say that anything with a wingspan of less than 45 millimeters is likely to be an Emperor Moth.
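As a rough sketch of what this looks like in code (the feature values below are illustrative, not the actual measurements from the plot), labeled training data and the eyeballed 45 mm wingspan rule might be written in Python as:

```python
# Each training example: (wingspan_mm, mass, species_label).
# These particular numbers are made up for illustration.
training_data = [
    (38.0, 0.65, "emperor"),
    (42.5, 0.70, "emperor"),
    (48.0, 0.80, "luna"),
    (52.0, 0.95, "luna"),
]

def classify_by_wingspan(wingspan_mm):
    """Eyeballed rule: wingspans under 45 mm are likely Emperor Moths."""
    return "emperor" if wingspan_mm < 45 else "luna"

for wingspan, mass, label in training_data:
    guess = classify_by_wingspan(wingspan)
    print(wingspan, "->", guess, "(actual:", label + ")")
```

On real data, the threshold would be chosen by an algorithm rather than eyeballed, which is exactly what the rest of this piece builds toward.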

We could add another boundary that says the mass should also be less than .75 for us to guess Emperor Moth. These lines that chop up the decision space are called decision boundaries. A closer look at our data reveals that 86 Emperor Moths would correctly end up inside the Emperor decision region, but 14 would end up in Luna Moth territory. On the other hand, 82 Luna Moths would be correct, with 18 falling on the wrong side. A table, like this one, showing where a classifier gets things right and wrong is called a confusion matrix… which would probably also have been a good title for the last two movies in The Matrix trilogy! Note that there is no way for us to draw the lines to achieve 100% accuracy. If we lower our wingspan decision boundary, we misclassify more Emperor Moths as Lunas. If we raise it, we misclassify more Luna Moths. The job of machine learning algorithms, at a high level, is to maximize correct classifications while minimizing errors. On our training data, we get 168 moths correct and 32 moths wrong, for a training accuracy of 84%. Now, using these decision boundaries, if we go out into the forest and encounter an unknown moth, we can measure its features and plot it onto our decision space.
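The bookkeeping behind the confusion matrix and that 84% figure can be sketched like so, using the counts from the example above:

```python
# Confusion matrix counts for the moth example:
# keys are (actual_species, predicted_species).
confusion = {
    ("emperor", "emperor"): 86,  # Emperor Moths classified correctly
    ("emperor", "luna"): 14,     # Emperor Moths misclassified as Luna
    ("luna", "luna"): 82,        # Luna Moths classified correctly
    ("luna", "emperor"): 18,     # Luna Moths misclassified as Emperor
}

# Correct classifications sit on the "diagonal", where actual == predicted.
correct = sum(n for (actual, predicted), n in confusion.items()
              if actual == predicted)
total = sum(confusion.values())
accuracy = correct / total

print(correct, "correct out of", total, "->", accuracy)  # 168 out of 200 -> 0.84
```

The same arithmetic applies no matter how many classes there are; only the table gets bigger.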

This is unlabeled data. Our decision boundaries provide a guess as to what species of moth it is. In this case, we would guess it is a Luna Moth. This simple approach, chopping the decision space into boxes, can be represented by what is called a decision tree, which might look like this pictorially, or could be written in code as a series of if-statements, like this. A machine learning algorithm that produces decision trees needs to choose which features to split on… and, for each of those features, what threshold values to use for the split. Decision trees are just one basic example of a machine learning technique. There are hundreds of algorithms in the computer science literature today, and more are being published all the time. Some algorithms even use many decision trees working together to make a prediction. Computer scientists call those Random Forests… because they contain lots of trees. There are also non-tree-based approaches, such as Support Vector Machines, which essentially slice up the decision space using arbitrary lines. And these do not have to be straight lines; they can be polynomials or other fancy mathematical functions.
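Written out as if-statements, the decision tree for our two boundaries (the 45 mm wingspan and .75 mass thresholds from earlier) looks roughly like this:

```python
def classify_moth(wingspan_mm, mass):
    """Decision tree for the moth example: split on wingspan, then on mass.
    A moth is guessed to be an Emperor only if it falls inside both boundaries."""
    if wingspan_mm < 45:
        if mass < 0.75:
            return "emperor moth"
    return "luna moth"

print(classify_moth(40, 0.6))   # inside both boundaries -> emperor moth
print(classify_moth(50, 0.9))   # outside -> luna moth
```

A real tree-learning algorithm would pick these features and thresholds automatically, by searching for the splits that best separate the training data.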
As before, it is the job of the machine learning algorithm to discover the lines that provide the most accurate decision boundaries. So far, my examples have used only two features, which is simple enough for a human to figure out by hand. If we add a third feature, say, the length of the antennae, then our 2D lines become 3D planes, creating decision boundaries in three dimensions. These planes do not have to be flat, either. Plus, a truly useful classifier would contend with many different species of moth. Now I think you will agree that this is getting too complicated to figure out by hand… But even this is a very basic example - just three features and a handful of species. We can still show it in this 3D scatterplot. Unfortunately, there is no good way to visualize four features at once, or twenty features, let alone hundreds or even thousands of features. Yet that is the reality facing many real-world machine learning problems. Can you imagine trying to figure out the equation for a hyperplane rippling through a thousand-dimensional space? Probably not, but computers, with clever machine learning algorithms, can… Techniques like Decision Trees and Support Vector Machines are strongly rooted in the field of statistics, which has dealt with making confident decisions using data since long before computers existed.
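To make the idea of a non-straight decision boundary concrete, here is a small sketch in which the boundary is a quadratic curve through our two-feature space. The curve's coefficients are made up for illustration, not fitted from any data:

```python
def boundary_mass(wingspan_mm):
    """A made-up quadratic decision boundary: for a given wingspan,
    returns the mass value the curve passes through at that point."""
    return 0.0004 * wingspan_mm ** 2 - 0.01 * wingspan_mm + 0.5

def classify_with_curve(wingspan_mm, mass):
    """Points above the curve are guessed Luna, points below it Emperor."""
    if mass > boundary_mass(wingspan_mm):
        return "luna moth"
    return "emperor moth"

print(classify_with_curve(45, 0.9))  # above the curve -> luna moth
print(classify_with_curve(45, 0.5))  # below the curve -> emperor moth
```

An algorithm like a Support Vector Machine with a polynomial kernel would learn such a curve from the labeled data, rather than having the coefficients hand-picked as here.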

There is a whole constellation of machine learning techniques grounded in statistics, but there are also approaches with no statistical origin. Most notable are artificial neural networks, which are inspired by the neurons in our brains! For a primer on biological neurons, check out the overview linked here, but basically, neurons are cells that process and transmit messages using electrical and chemical signals. They take one or more inputs from other cells, process those signals, and emit their own signal. These form huge interconnected networks that are able to process complicated information. Artificial neurons are very similar. Each takes a series of inputs, combines them, and emits a signal. Rather than being electrical or chemical signals, artificial neurons take numbers in and spit numbers out. They are organized into interconnected layers, forming a network of neurons, hence the name neural network. Let’s return to our moth example to see how neural nets can be used for classification. Our first layer - the input layer - provides data from a single moth needing classification. Again, we will use mass and wingspan.

At the other end, we have an output layer, with two neurons: one for Emperor Moth and one for Luna Moth. The most excited neuron will be our classification decision. In between, we have a hidden layer, which transforms our inputs into outputs and does the hard work of classification. To see how this is done, let’s zoom in on one neuron in the hidden layer. The first thing a neuron does is multiply each of its inputs by a specific weight, say 2.8 for its first input, and .1 for its second input. Then it sums these weighted inputs together, which in this case gives a total of 9.74. The neuron then applies a bias to this result - in other words, it adds or subtracts a fixed value, for example, minus six, for a new value of 3.74. These bias and input weights are initially set to random values when a neural network is created. Then an algorithm kicks in and starts adjusting all those values to train the neural network, using labeled data for training and testing. This happens over many iterations, gradually improving accuracy - a process very similar to human learning. Finally, neurons have an activation function, also called a transfer function, that gets applied to the output, performing a final mathematical modification to the result. For example, limiting the value to a range from negative one to positive one, or setting any negative values to 0. We will use a linear transfer function that passes the value through unchanged, so 3.74 stays 3.74. So for our example neuron, given the inputs .55 and 82, the output would be 3.74. This is just one neuron, but this process of weighting, summing, biasing and applying an activation function is computed for all neurons in a layer, and the values propagate forward through the network, one layer at a time. In this example, the output neuron with the highest value is our decision: Luna Moth. Importantly, the hidden layer does not have to be just one layer… it can be many layers deep.
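The single-neuron computation just described - weight, sum, bias, activate - can be sketched in a few lines, plugging in the numbers from the walkthrough above:

```python
def neuron(inputs, weights, bias, activation=lambda x: x):
    """One artificial neuron: a weighted sum of the inputs, plus a bias,
    passed through an activation function (a linear pass-through by default)."""
    weighted_sum = sum(i * w for i, w in zip(inputs, weights))
    return activation(weighted_sum + bias)

# The example from the text: inputs .55 and 82, weights 2.8 and .1, bias -6.
output = neuron([0.55, 82], [2.8, 0.1], bias=-6)
print(round(output, 2))  # 3.74
```

A full network just repeats this for every neuron in a layer, feeding each layer's outputs forward as the next layer's inputs.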

This is where the term deep learning comes from. Training these deep networks takes a lot of computation and data. Despite the fact that neural networks were invented over fifty years ago, deep neural networks have only become practical recently, thanks to powerful processors, but even more so, to blazingly fast GPUs. So, thank you, gamers, for demanding such silky-smooth framerates! A few years ago, Google and Facebook
