## What Are Loss Functions And Cost Functions Explain The Key Difference Between Them

When calculating loss for a single data point, we use the term loss function.

When aggregating the error over multiple data points, we use the term cost function. Beyond that, there is no major difference.

In other words, the loss function captures the difference between the actual and predicted values for a single record, whereas the cost function aggregates that difference over the entire training dataset.

The most commonly used loss functions are mean-squared error and hinge loss.

Mean-squared error: In simple words, it measures how far our model's predicted values are from the actual values, by averaging the squared differences.

Hinge loss: It is used to train classifiers such as support vector machines:

L = max(0, 1 - y * ŷ)

Where y = -1 or 1 indicates the two classes and ŷ is the raw output of the classifier. (In the business sense, the most common cost function represents total cost as the sum of fixed and variable costs, in the equation y = mx + b, where b is the fixed cost and mx the variable cost.)
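As a rough sketch (the function names and example values are illustrative, not from the original), both losses can be written in a few lines of NumPy:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean-squared error: average squared difference between actual and predicted values."""
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return np.mean((y_true - y_pred) ** 2)

def hinge_loss(y_true, scores):
    """Hinge loss for labels in {-1, +1} and raw classifier scores: max(0, 1 - y * y_hat)."""
    y_true, scores = np.asarray(y_true, dtype=float), np.asarray(scores, dtype=float)
    return np.mean(np.maximum(0.0, 1.0 - y_true * scores))
```

Note that hinge loss penalizes not only misclassified points but also correctly classified points whose score lies inside the margin (y * ŷ < 1).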

## What Does A Google Machine Learning Engineer Do

A Google machine learning engineer is responsible for **researching, designing, and building artificial intelligence systems** that run on their own to automate predictive models. Some of your responsibilities as a Google machine learning engineer will be to:

- **Develop next-generation technologies** that change how billions of users access and use information
- **Design** major software components, systems, and features
- Design, develop, test, deploy, maintain, and improve **machine learning frameworks and infrastructure**
- **Optimize models** for production deployment
- **Partner** closely with other engineering teams to ensure organizational alignment
- Be versatile, demonstrate **leadership qualities**, and take on new problems enthusiastically across the full stack
- **Review Google's products** to build centralized solutions, break down barriers, and strengthen existing Google systems

## How Do You Handle Missing Or Corrupted Data In A Dataset

One of the easiest ways to handle missing or corrupted data is to drop the affected rows or columns, or to replace the missing entries with some other value.

There are a few useful methods in Pandas:

- isnull() and dropna() will help to find the columns/rows with missing data and drop them
- fillna() will replace the missing values with a placeholder value
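A minimal sketch of those methods (the DataFrame contents are made up for illustration):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [25, np.nan, 40], "score": [0.9, 0.7, np.nan]})

mask = df.isnull()      # boolean mask marking missing cells
dropped = df.dropna()   # drop any row containing a missing value
filled = df.fillna(0)   # replace missing values with a placeholder
```

Dropping is simplest but discards information; filling keeps every row at the cost of injecting an artificial value, so the placeholder should be chosen with the downstream model in mind.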


## What Are The Most Famous Distribution Curves And In Which Scenario Can They Be Applied

*This question may appear in your interview, and it will also test your statistical knowledge. It's important to know the common distribution curves and how to identify each of them.*

*Statistics relies on probability distributions in the same way that computer science relies on databases. If you want to sound like a Data Scientist, you'll need to know these terms. In the same way that you can write a Python program without knowing object-oriented programming, you can run a rudimentary Machine Learning analysis using R or Python without understanding distributions. A lot of the time this leads to problems and tedious debugging, or even worse: incorrect forecasts.*

The most common distribution curves are Bernoulli, Uniform, and Binomial distributions, as well as Poisson and Normal curves.

**Bernoulli Distribution**

There are just two possible outcomes in the Bernoulli Distribution model: SUCCESS or FAILURE.

Answering a multiple-choice question, for example, is either correct or incorrect. A coin toss has two outcomes, heads or tails, and we may model heads as success and tails as failure. Whenever a random event has just two potential outcomes, we may examine its probability using the Bernoulli distribution.

**Uniform Distribution**

When a fair coin is flipped, each outcome, heads or tails, has probability 0.50, so the distribution can be drawn as a straight horizontal line at 0.50 on the y-axis. This is a uniform distribution, in which every outcome is equally likely.

**Normal Distribution**

The normal (Gaussian) distribution is the familiar bell-shaped curve: it is symmetric around its mean and fully described by its mean and standard deviation. Many natural measurements, such as heights or measurement errors, are approximately normally distributed.
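As an illustrative sketch (the variable names and parameters are assumptions, not from the original), each of these distributions can be sampled with NumPy's random generator:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

bernoulli = rng.binomial(n=1, p=0.5, size=1000)   # single coin flips: 0 or 1
uniform   = rng.uniform(low=0.0, high=1.0, size=1000)
binomial  = rng.binomial(n=10, p=0.5, size=1000)  # successes in 10 flips
poisson   = rng.poisson(lam=3.0, size=1000)       # event counts per interval
normal    = rng.normal(loc=0.0, scale=1.0, size=1000)
```

Plotting histograms of these arrays is a quick way to learn to recognize each curve by shape.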

## What Is The Lifecycle Of A Data Science Project

*A structured procedure is simpler to follow in life, more lucrative, and more trustworthy than improvising endlessly all the time. In a data science profession, it is no different. In this case, it's hard to have a globally standardized solution, but having a clearly defined cycle is what sets you apart from the rest of the pack. In your day-to-day work, you will likely not execute every stage of a project yourself. However, being able to act confidently in each of these situations is a turning point.*

**Comprehend the issue and various solutions**

For didactic reasons, it is very common for lectures and publications about Data Science to start with gathering and analyzing data. Whether you are a member of the Data Science team or corporate management, the first big challenge will be the need to comprehend the issue. Without a thorough grasp of the issue, how can we hope to identify the best solution?

Understanding the issue thoroughly is half the answer, so invest time and attention in this phase.

**Data collection and processing**

This stage requires a significant amount of time and effort. When we study Data Science, we usually have ready-to-use databases at our disposal. In reality, the polar opposite is true! Data has various sources and forms. We may analyze tables, photos, audio, and text originating from social networks, websites, databases, surveys, scanned documents, etc.

**Analyze and interpret the data**


## Identify A Core Problem

The first thing you need to do when *applying* for such a role is to imagine yourself *in* that role. To do this, you need to find out as much as possible about the company and position. To organize your research, ask yourself: *What is one core problem I can solve for this company?* Pursuing an answer to this question should excite you, drive you to find out more about the problem (existing approaches, recent developments in that domain), and lead you to a bunch of more specific challenges. If you know what team you are interviewing for, picking an appropriate problem might be easy; otherwise, choose something that is essential for the company. Put another way, think about the challenges facing the company, and then try to determine the questions they're likely asking.

## Explain How The Principal Components Analysis Works

*PCA is one of the most famous dimensionality reduction methods and one of the more straightforward, too. Because of this, it's important to highlight its advantages and disadvantages. Besides deep knowledge of the method, the interviewer wants to see that you know when to use it. People often apply it to every kind of data indiscriminately.*

*So, the how and the when are really important here.*

Principal Component Analysis, or PCA, is a multivariate analysis approach that investigates interrelationships among many variables and explains these variables in terms of their intrinsic dimensions.

The ultimate objective is to condense as much information as possible from several original variables into one or more statistical variables with as little loss as possible.

The number of principal components can be as large as the number of variables included in the study, although usually the first few components are the most relevant, as they explain most of the overall variance.

The covariance matrix is often used to extract principal components; however, the correlation matrix may also be used.

When the covariance matrix is used for extraction, the components are influenced most by the variables with the largest variance. Thus, when there is a large disparity between the variances, the principal components are of limited value, as each component tends to be dominated by one variable.
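A hedged sketch with scikit-learn (the synthetic data is an assumption for illustration). Standardizing the features first is equivalent to extracting components from the correlation matrix rather than the covariance matrix, which avoids the dominance problem described above:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(seed=0)
# Three features where the third is nearly a combination of the first two
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
X = np.column_stack([x1, x2, x1 + x2 + 0.1 * rng.normal(size=200)])

# Standardize so every variable contributes on the same scale
X_std = StandardScaler().fit_transform(X)

pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X_std)          # condensed representation
explained = pca.explained_variance_ratio_     # variance captured per component
```

Because the third feature is almost redundant, the first two components capture nearly all of the variance, which is exactly the condensation of information PCA aims for.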



## What Happens Behind The Scenes

If things go well at your onsite interviews, here is what the final steps of the process look like:

- Interviewers submit feedback
- Senior leader and compensation committee review
- Final executive review
- You get an offer

After your onsite, your interviewers will all submit their feedback usually within two to three days. This feedback will then be reviewed by a hiring committee, along with your resume, internal referrals, and any past work you have submitted. At this stage, the hiring committee will make a recommendation on whether Google should hire you or not.

If the committee recommends that you get hired, you’ll usually start the team matching process. In other words, you’ll talk to hiring managers, and one or several of them will need to be willing to take you onto their team in order for you to get an offer from the company.

In parallel, the hiring committee recommendation will be reviewed and validated by a senior manager and a compensation committee who will decide how much money you are offered. Finally, if you are interviewing for a senior role, a senior Google executive will review a summary of your candidacy and compensation before the offer is sent to you.

As you’ve probably gathered by now, Google goes to great lengths to avoid hiring the wrong candidates. This hiring process with multiple levels of validations helps them scale their teams while maintaining a high caliber of employee. But it also means that the typical process can spread over multiple months.

## Top 10 Machine Learning Interview Questions And Answers


Machine learning is a branch of artificial intelligence through which computers can learn and improve on their own without the need for explicit programming. Machine learning now powers applications in almost every common domain. These machine learning interview questions will help you explore this extremely vast field and prepare you to ace your machine learning interview.

Whether you are a candidate actively looking for a machine learning job or a recruiter looking to hire a machine learning scientist, the following list of machine learning interview questions will be of great use for you.


## Machine Learning Interview Questions Asked At Spotify

1) Explain BFS

2) How will you tell if a song in our catalogue is a duplicate or not?

3) What is your favourite machine learning algorithm and why?

4) Given the song list and metadata, disambiguate artists having the same name.

5) How will you sample a stream of data to match the distribution with real data?


## Explain Correlation And Covariance

Correlation: Correlation tells us how strongly two random variables are related to each other. It takes values between -1 and +1.

Formula to calculate correlation:

Corr(X, Y) = Cov(X, Y) / (σ_X σ_Y)

Covariance: Covariance tells us the direction of the linear relationship between two random variables. It can take any value between -∞ and +∞.

Formula to calculate covariance:

Cov(X, Y) = E[(X - μ_X)(Y - μ_Y)]
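A minimal sketch with NumPy (the data points are illustrative):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])  # y = 2x, a perfect linear relationship

cov = np.cov(x, y)[0, 1]        # sample covariance (off-diagonal entry)
corr = np.corrcoef(x, y)[0, 1]  # Pearson correlation
```

Because correlation divides the covariance by both standard deviations, it is scale-free: here the covariance depends on the units of x and y, while the correlation is exactly 1.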


## What Is A False Positive And False Negative And How Are They Significant

False positives are those cases that wrongly get classified as True but are False.

False negatives are those cases that wrongly get classified as False but are True.

In the term False Positive, the word Positive refers to the Yes row of the predicted value in the confusion matrix. The complete term indicates that the system has predicted it as a positive, but the actual value is negative.

So, looking at the confusion matrix, we get:

False-positive = 3

True positive = 12

Similarly, in the term False Negative, the word Negative refers to the No row of the predicted value in the confusion matrix. And the complete term indicates that the system has predicted it as negative, but the actual value is positive.

So, looking at the confusion matrix, we get:

False Negative = 1
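As a hedged sketch (the label arrays here are hypothetical, not the confusion matrix referenced above), scikit-learn's `confusion_matrix` makes these counts explicit:

```python
from sklearn.metrics import confusion_matrix

# Hypothetical actual and predicted labels (1 = positive, 0 = negative)
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 1, 0, 1, 0]

# sklearn's layout: rows are actual values, columns are predicted values
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
```

Here the one (0, 1) pair is a false positive (predicted positive, actually negative) and the one (1, 0) pair is a false negative (predicted negative, actually positive).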

## Highlight The Differences Between Classical And Bayesian Statistics

*Being flexible regarding statistics techniques is the key to serious and responsible analysis. The interviewer wants to know if you can explain the main differences between these two methods and see if you can work with any of them if necessary.*

A prediction or conclusion about some occurrence, with some degree of confidence attached, is generally the goal of any statistical study. The ideas and approaches used to arrive at that prediction or conclusion vary widely across statistical schools. It's essential to keep in mind that both rely on probability to draw their conclusions.

The initial distinction between the classical (frequentist) approach and the Bayesian method is connected to the information used. Classical statistics draws conclusions from the observed data alone, while Bayesian statistics combines the observed data with preliminary information, encoded as a prior distribution, to reach its conclusions. Classical statistics does not incorporate such a priori information because it is regarded as subjective, personal belief rather than evidence.
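As an illustrative sketch of the prior-information point (the prior and the flip counts are assumptions), a conjugate Beta-Binomial update shows how Bayesian statistics folds a prior into the conclusion, while the classical estimate uses the data alone:

```python
# A Beta(a, b) prior over a coin's heads probability combined with
# observed flips yields a Beta posterior (Beta-Binomial conjugacy).
prior_a, prior_b = 1.0, 1.0            # uniform prior: no preliminary information
heads, tails = 7, 3                    # observed data

post_a = prior_a + heads               # posterior parameters
post_b = prior_b + tails
posterior_mean = post_a / (post_a + post_b)   # Bayesian point estimate

# The classical (frequentist) estimate uses the data alone:
mle = heads / (heads + tails)
```

With a flat prior the two estimates are close; a stronger prior would pull the Bayesian estimate toward the prior belief, which is exactly where the two schools diverge.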


**Don’t Miss: Why This Company Interview Question **

## Explain The Advantages And Disadvantages Of Neural Networks

*The interviewer wants to test whether you truly understand this topic by letting you detail the use of neural networks. Also, explain when to prefer more advanced methods over simple machine learning algorithms. The company they represent doesn't want to spend money without a need, which means that if your problem can be appropriately solved with linear regression, you should not use a neural network.*

Models based on artificial neural networks (ANNs) are the ones that have attracted the most attention in recent years, for being able to handle AI challenges where little progress could be achieved with other approaches. This opens up a lot of potential for creative growth by practitioners of Deep Learning. ANNs are among the most exciting classes of Machine Learning models for many reasons:

- They are straightforward, once we have understood linear models
- They are pretty intuitive, as they allow the interpretation of learning as hierarchical levels of abstraction
- They are very flexible, which makes them ideal for solving the most diverse types of machine learning problems
- They are remarkably effective regarding the quality of the results

However, there are several drawbacks to using artificial neural networks that should be noted. First, ANN-based models tend to be large, requiring considerable energy and processing resources. Second, because of the non-convex structure of the cost function, it can be exceedingly difficult to train an ANN.

## Explain The K Nearest Neighbor Algorithm

K nearest neighbor (KNN) is a classification algorithm in which a new data point is assigned to the neighboring group to which it is most similar.

In K nearest neighbors, K is a positive integer. For every new data point we want to classify, we compute which neighboring group it is closest to.

Let us classify an object using the following example. Consider there are three clusters:

- Football
- Basketball
- Tennis ball

Let the new data point to be classified be a black ball. We use KNN to classify it. Assume K = 5.

Next, we find the K nearest data points, as shown.

Observe that all five selected points do not belong to the same cluster. There are three tennis balls and one each of basketball and football.

When multiple classes are involved, we take the majority vote. Here the majority is with the tennis balls, so the new data point is assigned to the tennis-ball cluster.
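A hedged sketch of this example with scikit-learn (the coordinates and cluster labels are made up for illustration):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical 2-D feature points for the three ball types
X = np.array([[1, 1], [1, 2], [2, 1],                         # football cluster
              [8, 8], [8, 9], [9, 8],                         # basketball cluster
              [1, 8], [2, 8], [1, 9], [2, 9], [1.5, 8.5]])    # tennis-ball cluster
y = ["football"] * 3 + ["basketball"] * 3 + ["tennis"] * 5

knn = KNeighborsClassifier(n_neighbors=5)     # K = 5, as in the example
knn.fit(X, y)

# A new point near the tennis-ball cluster gets the majority label of its 5 neighbors
prediction = knn.predict([[2, 7]])[0]
```

Note that KNN does no training in the usual sense; `fit` just stores the points, and all the work happens at prediction time when distances are computed.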


## What Is Pruning In Decision Trees And How Is It Done

Pruning is a technique in machine learning that reduces the size of decision trees. It reduces the complexity of the final classifier, and hence improves predictive accuracy by the reduction of overfitting.

Pruning can occur in:

- Top-down fashion. It will traverse nodes and trim subtrees starting at the root
- Bottom-up fashion. It will begin at the leaf nodes

There is a popular pruning algorithm called reduced error pruning, in which:

- Starting at the leaves, each node is replaced with its most popular class
- If the prediction accuracy is not affected, the change is kept
- This approach has the advantage of simplicity and speed
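Reduced error pruning itself requires a held-out validation set and is not built into scikit-learn; as a sketch of the same idea (trading tree size for generalization), scikit-learn offers bottom-up cost-complexity pruning via the `ccp_alpha` parameter. The dataset and alpha value here are illustrative assumptions:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

full_tree = DecisionTreeClassifier(random_state=0).fit(X, y)

# ccp_alpha > 0 prunes subtrees whose complexity does not pay for itself
pruned_tree = DecisionTreeClassifier(random_state=0, ccp_alpha=0.02).fit(X, y)

full_nodes = full_tree.tree_.node_count
pruned_nodes = pruned_tree.tree_.node_count   # smaller, simpler classifier
```

Larger `ccp_alpha` values prune more aggressively; in practice the value is chosen by cross-validation so the smaller tree generalizes better than the unpruned one.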

## What Roles Do Purdue University And Ibm Play In Simplilearn’s Ai And Ml Courses

This extensive Post Graduate Program in AI and ML was developed in collaboration with Purdue University and IBM. A Purdue University faculty member serves as your AI ML course adviser, and you’ll have optimal learning throughout the program. Purdue academics and IBM professionals will give masterclasses to assist you in obtaining a more profound knowledge of AI/ML concepts.


## What Can I Anticipate From Purdue’s Ai And Machine Learning Postgraduate Program

You will receive the following as part of this IBM-sponsored Post Graduate Program in AI and Machine Learning:

## How To Prepare For A Machine Learning Interview

Getting ready for a job interview has been likened to everything from preparing for battle, to gearing up to ask someone out on a date, to lining up a putt on the 18th green at The Masters. Meaning, at best, it's nerve-racking, and at worst, it's terrifying! Preparing for a Machine Learning interview is no different. You know you've got something ahead with the potential to be either really great or really terrible. But how do you ensure your result is the great one?

It's all about mindset and preparation.


## Machine Learning Engineer/ Ai Engineer

For this role, you are expected to have strong software engineering skills, meaning data structures and algorithms. Having gone through the selection process at Apple, Google, OpenAI, Yelp, ASOS, World Remit, and many small companies for this role, I can assure you that all of them start with medium-to-difficult coding tests, and you are expected to understand concepts like multithreading, concurrency, distributed computing, code testing, and A/B testing.

Most coding assessments require solutions that use concepts like Object Oriented Programming, Dynamic Programming, Recursion, and data structures like Trees, Graphs, Arrays, etc. In fact, tech companies use the title AI/ML Software Engineer for more product-design-based roles. To build functionality for Siri, for example, you must understand that a long list of words like the English dictionary is better stored in a Trie data structure and not arrays, as searching for a word then takes O(m) time, where m is the length of the word. Compare this to looping through a million words in a list and the characters in each string to check for matches.
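A minimal sketch of the trie idea mentioned above (the class names are illustrative). Each lookup walks one node per character, so searching costs O(m) for a word of length m, regardless of how many words are stored:

```python
class TrieNode:
    def __init__(self):
        self.children = {}   # maps a character to the next node
        self.is_word = False # marks the end of a complete word

class Trie:
    """Stores words so that lookup costs O(m), where m is the word's length."""
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def search(self, word):
        node = self.root
        for ch in word:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return node.is_word
```

Because words sharing a prefix share nodes, a trie also supports prefix queries (for example, autocomplete) that a flat list cannot answer efficiently.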

To prepare for machine learning engineering interviews, I would strongly advise practicing on LeetCode daily, in addition to working on Omdena projects. You should be able to implement algorithms like Back Propagation, Batch Normalization, LSTMs, etc. in raw code without libraries and Omdena gives members access to courses that can help in this regard.