Consider the execution of the instruction MOV CH, [BX-100H], find …
answer:
Physical address = DS * 10H + (BX - 100H) = 3000H * 10H + (2000H - 100H) = 30000H + 1F00H = 31F00H
Since CH is an 8-bit register, only a single byte is read from address 31F00H into CH.
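As a quick check, here is a small Python sketch of the 8086 physical-address arithmetic above; the DS = 3000H and BX = 2000H values are taken from the worked answer, not from a known exam paper.

DS, BX, DISP = 0x3000, 0x2000, 0x100
effective_address = BX - DISP                      # 2000H - 100H = 1F00H
physical_address = (DS << 4) + effective_address   # DS * 10H + EA
print(hex(physical_address))                       # 0x31f00 -> one byte is loaded into CH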
What is Reinforcement Learning?
An RL algorithm learns how to act best through many attempts and failures. This trial-and-error learning is tied to the so-called long-term reward: the ultimate goal the agent learns to pursue while interacting with an environment through numerous trials and errors. Along the way, the algorithm receives short-term rewards that together add up to the cumulative, long-term one.
So, the key goal of reinforcement learning as used today is to find the best sequence of decisions that allows the agent to solve a problem while maximizing the long-term reward. That coherent set of actions is learned through interaction with the environment and observation of the reward in every state (a minimal sketch follows the list of main points below).
Main points in Reinforcement Learning:
Agent: the learner and decision maker.
Environment: everything the agent interacts with.
Action: the choices available to the agent in each state.
Reward: the feedback signal that evaluates each action.
Policy: the mapping from states to actions that the agent learns.
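Below is a minimal tabular Q-learning sketch of this trial-and-error loop in Python. The five-state chain environment, reward values, and hyperparameters are illustrative assumptions, not part of the question.

import random

n_states, n_actions = 5, 2      # states 0..4; actions: 0 = left, 1 = right
alpha, gamma, epsilon = 0.1, 0.9, 0.2
Q = [[0.0] * n_actions for _ in range(n_states)]

for episode in range(500):
    s = 0
    while s != n_states - 1:                          # episode ends at the goal state
        # epsilon-greedy choice: explore sometimes, otherwise act on current estimates
        if random.random() < epsilon:
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda act: Q[s][act])
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0    # short-term reward signal
        # update the estimate of the long-term (cumulative) reward for (s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print(Q)  # after training, action 1 (right) scores highest in every state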
Design document – An Online bookstore is to be implemented. …
For the answer, download the attachment given below:
How do you make sure which Machine Learning Algorithm to …
The important considerations when choosing machine learning algorithms:
Type of problem: It is obvious that algorithms have been designed to solve specific problems. So, it is important to know what type of problem we are dealing with and what kind of algorithm works best for each type of problem. I don’t want to go into much detail, but at a high level, machine learning algorithms can be classified into Supervised, Unsupervised, and Reinforcement learning. Supervised learning by itself can be categorized into Regression, Classification, and Anomaly Detection.
Size of training set: This factor is a big player in our choice of algorithm. For a small training set, high bias/low variance classifiers (e.g., Naive Bayes) have an advantage over low bias/high variance classifiers (e.g., kNN), since the latter will overfit. But low bias/high variance classifiers start to win out as the training set grows (they have lower asymptotic error), since high bias classifiers aren’t powerful enough to provide accurate models [1]. (A code sketch illustrating this trade-off appears after this list.)
Accuracy: Depending on the application, the required accuracy will be different. Sometimes an approximation is adequate, which may lead to a huge reduction in processing time. In addition, approximate methods are very robust to overfitting.
Training time: Various algorithms have different running times. Training time is normally a function of the size of the dataset and the target accuracy.
Linearity: Lots of machine learning algorithms such as linear regression, logistic regression, and support vector machines make use of linearity. These assumptions aren’t bad for some problems, but on others they bring accuracy down. Despite their dangers, linear algorithms are very popular as a first line of attack. They tend to be algorithmically simple and fast to train.
Number of parameters: Parameters affect the algorithm’s behavior, such as error tolerance or number of iterations. Typically, algorithms with large numbers of parameters require the most trial and error to find a good combination. Even though having many parameters typically provides greater flexibility, training time and accuracy of the algorithm can sometimes be quite sensitive to getting just the right settings.
Number of features: The number of features in some datasets can be very large compared to the number of data points. This is often the case with genetics or textual data. The large number of features can bog down some learning algorithms, making training time infeasibly long. Some algorithms, such as Support Vector Machines, are particularly well suited to this case [2,3].
Below is an algorithm cheat sheet provided by scikit-learn (it works as a rule of thumb), which I believe implicitly considers all the above factors when recommending the right algorithm. But it doesn’t work for all situations, and we need a deeper understanding of these algorithms to choose the best one for a unique problem.
Note: For the diagram, download the attachment below.
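For illustration, here is a scikit-learn sketch of the training-set-size trade-off from the list above: it compares a high-bias classifier (Naive Bayes) with a low-bias/high-variance one (kNN) as the training set grows. The digits dataset and the split sizes are arbitrary choices, not a recommendation.

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for n in (50, 200, len(X_train)):          # increasing training-set sizes
    for model in (GaussianNB(), KNeighborsClassifier()):
        model.fit(X_train[:n], y_train[:n])
        score = model.score(X_test, y_test)
        print(n, type(model).__name__, round(score, 3))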
An Online bookstore is to be implemented. This project is …
Answer for the design document of the online bookstore:
Download the attachment below:
How to initialize Weights and Biases in Neural Networks?
Search your question in the search box provided on the homepage before asking it on the website.
or
Directly search questions on Google with the word ‘Sikshapath’ appended at the end.
Example: ‘How to initialize Weights and Biases in Neural Networks sikshapath’.
Write a Signup servlet that enables the user to register …
CODE:
import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;
import java.sql.*;
// Signup servlet that registers a user; the JDBC URL and users table below are assumptions — adjust them to your own database schema.
public class Register extends HttpServlet {
    protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        // read the signup form fields and insert them into the database
        try (Connection con = DriverManager.getConnection("jdbc:mysql://localhost:3306/bookstore", "root", "root");
             PreparedStatement ps = con.prepareStatement("INSERT INTO users(username, password) VALUES (?, ?)")) {
            ps.setString(1, request.getParameter("username"));
            ps.setString(2, request.getParameter("password"));
            ps.executeUpdate();
            out.println("Registration successful");
        } catch (SQLException e) {
            out.println("Registration failed: " + e.getMessage());
        }
    }
}
Where do we use Fuzzy Logic in Artificial Intelligence?
Fuzzy Logic (FL) is a method of reasoning that resembles human reasoning. This approach is similar to how humans perform decision-making. And it involves all intermediate possibilities between YES and NO.
The conventional logic block that a computer understands takes precise input and produces a definite output of TRUE or FALSE, which is equivalent to a human being’s YES or NO. Fuzzy logic was invented by Lotfi Zadeh, who observed that, unlike computers, humans work with a whole range of possibilities between YES and NO, such as:
CERTAINLY YES
POSSIBLY YES
CANNOT SAY
POSSIBLY NO
CERTAINLY NO
Generally, we use the fuzzy logic system for both commercial and practical purposes, such as:
Controlling machines and consumer products (for example, washing machines and air conditioners)
Automotive systems (for example, automatic gearboxes and anti-lock braking)
Decision-support systems, where acceptable (if not perfectly accurate) reasoning is needed from imprecise inputs
So, now that you know about Fuzzy logic in AI and why we actually use it, let’s move on and understand the architecture of this logic.
Fuzzy logic works on levels of possibility of the input to achieve a definite output. Talking about the implementation of this logic, a fuzzy logic system has four main parts:
Fuzzification module: transforms the crisp system inputs into fuzzy sets.
Knowledge base: stores the IF-THEN rules provided by experts.
Inference engine: simulates human reasoning by applying the rules to the fuzzified inputs.
Defuzzification module: transforms the fuzzy result of the inference back into a crisp output value.
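To make this pipeline concrete, here is a small Python sketch of fuzzification, rule-based inference, and defuzzification using triangular membership functions. The fan-speed-from-temperature rules and all numeric ranges are illustrative assumptions.

def tri(x, a, b, c):
    # Triangular membership: rises from a to b, falls from b to c.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fan_speed(temp_c):
    # Fuzzification: degrees of membership in "cold", "warm", and "hot"
    cold = tri(temp_c, -10, 0, 20)
    warm = tri(temp_c, 10, 25, 35)
    hot = tri(temp_c, 25, 40, 60)
    # Inference + defuzzification: weighted average of the rule outputs (% speed)
    rules = {0.0: cold, 50.0: warm, 100.0: hot}
    total = cold + warm + hot
    return sum(speed * weight for speed, weight in rules.items()) / total if total else 0.0

print(fan_speed(30))  # partly "warm", partly "hot" -> an intermediate speed (70.0)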
List down the names of some popular Activation Functions used in Neural Networks
1. Sigmoid Function
In an ANN, the sigmoid function is a non-linear AF used primarily in feedforward neural networks. It is a differentiable real function, defined for real input values, with positive derivatives everywhere and a specific degree of smoothness. The sigmoid function appears in the output layer of deep learning models and is used for predicting probability-based outputs. The sigmoid function is represented as:
f(x) = 1 / (1 + e^(-x))
Generally, the derivatives of the sigmoid function are applied to learning algorithms. The graph of the sigmoid function is ‘S’ shaped.
Some of the major drawbacks of the sigmoid function include gradient saturation, slow convergence, sharp damp gradients during backpropagation from within deeper hidden layers to the input layers, and non-zero centered output that causes the gradient updates to propagate in varying directions.
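A minimal NumPy sketch of the sigmoid and its derivative (the function names are illustrative):

import numpy as np

def sigmoid(x):
    # f(x) = 1 / (1 + e^(-x)); output lies in (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)  # saturates for large |x|, the gradient issue noted above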
2. Hyperbolic Tangent Function (Tanh)
The hyperbolic tangent function, a.k.a. the tanh function, is another type of AF. It is a smoother, zero-centered function with a range between -1 and 1. The output of the tanh function is represented by:
f(x) = (e^x - e^(-x)) / (e^x + e^(-x))
The tanh function is much more extensively used than the sigmoid function since it delivers better training performance for multilayer neural networks. The biggest advantage of the tanh function is that it produces a zero-centered output, thereby supporting the backpropagation process. The tanh function has been mostly used in recurrent neural networks for natural language processing and speech recognition tasks.
However, the tanh function, too, has a limitation – just like the sigmoid function, it cannot solve the vanishing gradient problem. Also, the tanh function can only attain a gradient of 1 when the input value is 0 (x is zero). As a result, the function can produce some dead neurons during the computation process.
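A corresponding NumPy sketch for tanh and its gradient:

import numpy as np

def tanh(x):
    return np.tanh(x)  # zero-centered, range (-1, 1)

def tanh_grad(x):
    return 1.0 - np.tanh(x) ** 2  # attains 1 only at x = 0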
3. Softmax Function
The softmax function is another type of AF used in neural networks to compute a probability distribution from a vector of real numbers. This function generates outputs between 0 and 1, with the sum of the probabilities equal to 1. The softmax function is represented as follows:
f(x_i) = e^(x_i) / Σ_j e^(x_j)
This function is mainly used in multi-class models, where it returns the probability of each class, with the target class having the highest probability. It appears in almost all output layers of deep learning architectures. The primary difference between the sigmoid and softmax AFs is that the former is used in binary classification while the latter is used for multivariate classification.
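A NumPy sketch of the softmax for a 1-D vector of scores; subtracting the maximum is a standard numerical-stability trick, not part of the definition:

import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))  # shift for numerical stability
    return e / e.sum()         # outputs lie in (0, 1) and sum to 1

print(softmax(np.array([2.0, 1.0, 0.1])))  # highest score gets the highest probability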
4. Softsign Function
The softsign function is another AF used in neural network computing. Although it is primarily used in regression computation problems, nowadays it is also applied in DL-based text-to-speech applications. It is represented by:
f(x) = x / (1 + |x|)
Here |x| denotes the absolute value of the input.
The main difference between the softsign function and the tanh function is that unlike the tanh function that converges exponentially, the softsign function converges in a polynomial form.
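A one-line NumPy sketch of the softsign:

import numpy as np

def softsign(x):
    return x / (1.0 + np.abs(x))  # range (-1, 1); converges polynomially, unlike tanh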
5. Rectified Linear Unit (ReLU) Function
One of the most popular AFs in DL models, the rectified linear unit (ReLU) function is a fast-learning AF that promises state-of-the-art performance. Compared to AFs like the sigmoid and tanh functions, the ReLU function offers much better performance and generalization in deep learning. It is a nearly linear function that retains the properties of linear models, which makes it easy to optimize with gradient-descent methods.
The ReLU function performs a threshold operation on each input element, where all values less than zero are set to zero. Thus, the ReLU is represented as:
f(x) = max(0, x)
By rectifying the values of inputs less than zero and setting them to zero, this function avoids, for positive inputs, the vanishing gradient problem observed with the earlier activation functions (sigmoid and tanh).
The most significant advantage of using the ReLU function in computation is that it guarantees faster computation – it does not compute exponentials or divisions, thereby boosting the overall computation speed. Another critical aspect of the ReLU function is that it introduces sparsity in the hidden units by mapping all negative values to zero.
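A NumPy sketch of the ReLU threshold operation:

import numpy as np

def relu(x):
    return np.maximum(0.0, x)  # values below zero are set to zero, introducing sparsity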
6. Exponential Linear Units (ELUs) Function
The exponential linear units (ELUs) function is an AF that is also used to speed up the training of neural networks (just like the ReLU function). The biggest advantage of the ELU function is that it alleviates the vanishing gradient problem by using the identity for positive values, and it improves the learning characteristics of the model.
ELUs have negative values that push the mean unit activation closer to zero, thereby reducing computational complexity and improving the learning speed. The ELU is an excellent alternative to the ReLU – it decreases bias shifts by pushing mean activation towards zero during the training process.
The exponential linear unit function is represented as:
f(x) = x if x > 0, and f(x) = α(e^x - 1) if x ≤ 0
The derivative (gradient) of the ELU equation is:
f′(x) = 1 if x > 0, and f′(x) = f(x) + α if x ≤ 0
Here “α” equals the ELU hyperparameter that controls the saturation point for negative net inputs, which is usually set to 1.0. However, the ELU function has a limitation – it is not zero-centered.
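A NumPy sketch of the ELU and its gradient, with the α hyperparameter defaulting to 1.0 as noted above:

import numpy as np

def elu(x, alpha=1.0):
    # identity for x > 0; alpha * (e^x - 1) otherwise
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def elu_grad(x, alpha=1.0):
    # 1 for x > 0; f(x) + alpha (i.e., alpha * e^x) otherwise
    return np.where(x > 0, 1.0, elu(x, alpha) + alpha)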
Analyze the role of testing tools in maintaining the quality …
Follow the link below for the answer:
https://sikshapath.in/question/analyze-the-role-of-testing-tools-in-maintaining-the-quality/