How to Evaluate Candidates After an Interview

 

The importance of structuring the evaluation process around the insights gathered during the interview is often overlooked.

 

A lot of energy and time is invested in the earlier recruitment stages – mostly the design and conduct of the interview – but somehow digesting and processing the resulting information is perceived as the easier task. This perception causes a certain loosening of the quality standards recruiters apply to themselves when it comes to evaluation.

 

“Do not make up your mind about whether a candidate is fit for the role during the interview”


In this brief practical guide, we share some advice and reconnect with previously shared knowledge that, hopefully, will keep you on a straight path in your recruitment process.

 

Start with your notes


For starters, do not make up your mind about whether a candidate is fit for the role during the interview.
Remember the most common biases you may fall prey to. It is not easy to overcome them on the spot. Your initial impressions of the candidate are mediated by those biases, so it is important that you do not turn your initial judgments into a rejection/non-rejection decision.

 

You should review your notes (see our tips for taking notes here) and, as objectively as possible, evaluate all the information available. You can only do this once the interview is over; it is then that you have the time to look at the whole picture from a more removed, detached perspective.

 

 

The ORCE process


To contextualize the evaluation and rating process, it is now time to introduce yet another acronym: ORCE. It stands for Observe, Record, Classify and Evaluate. (1) The acronym serves as a label for a logical, step-by-step process that here we apply for recruitment purposes.

 

So far, during the interview, we have been observing and recording. What is left, after the interview, is to classify and evaluate using the framework we have agreed upon: the scoring key.


When classifying, you must pair the information you collected with the job requirements. That means examining the candidate’s behaviors against the competencies required for the role. To be effective at this, you must be familiar with the competencies and their behavioral indicators. Refer again to our guide on creating a scoring key and check the proposed Interview Assessment Template.


Check also for objections indicating that the candidate does not meet some of the requirements. For example, let’s say the role requires extensive traveling and the candidate says she is open to travel, but only for shorter periods of time. In this case, the candidate meets the requirement only to a certain extent. The fact that the traveling schedule could be adjusted later to better suit the candidate’s needs does not mean that you cannot use the initial objection when, all other things being equal, you have another applicant who made clear during the interview that intensive traveling was not an impediment, or who even regarded it as a positive.


To rate the candidates, use the scoring key that was defined when designing the interview. If, back then, some competencies were deemed more important than others, you will now apply the corresponding weights when evaluating the candidate’s answers. Do this for all interviewed applicants and, at the end of the process, you will (ideally) be left with a shortlist of suitable, strong candidates.
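If it helps to see the mechanics of that last step, here is a minimal sketch in Python – the 0–4 scale, the competency names, the weights and the ratings are all invented for illustration and are not part of any particular scoring key:

```python
# Minimal sketch: rating interviewed candidates against a weighted scoring key.
# Competencies, weights and ratings below are illustrative only (0-4 scale assumed).

SCORING_KEY = {           # competency -> weight agreed when the interview was designed
    "Problem solving": 1.0,
    "Communication": 1.0,
    "Judgment": 1.5,      # deemed more important for this role
}

def overall_score(ratings: dict) -> float:
    """Weighted average of the per-competency ratings."""
    total_weight = sum(SCORING_KEY.values())
    return sum(SCORING_KEY[c] * ratings[c] for c in SCORING_KEY) / total_weight

candidates = {
    "Candidate A": {"Problem solving": 3, "Communication": 2, "Judgment": 4},
    "Candidate B": {"Problem solving": 4, "Communication": 3, "Judgment": 2},
}

# Rank all interviewed applicants; the strongest ones form the shortlist.
for name in sorted(candidates, key=lambda n: overall_score(candidates[n]), reverse=True):
    print(f"{name}: {overall_score(candidates[name]):.2f}")
```

Whatever tool you use – a spreadsheet works just as well – the point is that every candidate is rated against the same key.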

 

How to Create a Scoring Key to Rate Interviews

 

As we pointed out in a previous piece on how to design interview questions, competency-based interviews (CBI) – or structured interviews in general – require interviewers to have a set of scoring keys, or a rating scale, to be able to evaluate each candidate’s competencies as objectively as possible.

 

It is of vital importance that all interviewers share the same scale. The first step is to establish one proficiency-level scale for all competencies (e.g., a range from 0 to 4, or from 0 to 6, where 0 indicates no proficiency at all and 4 or 6 the highest proficiency level possible).

 

After deciding on the scale, it is time to define each of the proficiency levels. For instance, here we present a five-level scale: excellent, good, average, poor, and no evidence. See Figure 1 for more detail:

 

Figure 1: An Example of Rating Scales for Job Interviews

 

 

Other possible scales could look like this:

 

A) Far exceeds requirements (Score 4), Exceeds requirements (3), Meets requirements (2), Less than requirements (1), Misses requirements (0).

 

B) Expert (Score 5), Advanced (4), Intermediate (3), Basic (2), Awareness (1), Not Aware (0).

 

“More than the actual values of the scale itself, what is relevant is that all competencies are measured using the same scale and that all interviewers apply the same rating criteria”

 

Rather than the specific range and labels we assign to the scale – which is more or less a question of organizational preference – what is relevant is that all competencies are measured using the same scale and that all interviewers apply the same rating criteria.

 

Additionally, before the interview, we might want to determine which sorts of answers, for each question, would count as positive points and which would count as negative points.

 

Let’s take the competency Problem-solving skills and its formulation as the following question: “Describe a situation in which you had to address a problem whose cause was not clear to the organization.” For this example, the positive and negative points could be:

 

Positive: recognizes her limitations, uses effective strategies, demonstrates a constructive approach towards the issue, takes ownership.

 

Negative: tries unsuccessfully to fix the situation by herself, uses inappropriate strategies, does not reframe the problem as a challenge, does not take ownership.

 

The positive/negative points might help us better assess interviewees. Imagine, for instance, the case where two candidates are given the same score for a specific question: the positive/negative points observed can help decide which candidate fared better.
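As a purely hypothetical sketch of that tie-break (the candidate names and notes below are invented, not taken from a real scoring key):

```python
# Two candidates receive the same rating on the problem-solving question;
# the positive/negative points recorded during the interview break the tie.
answers = {
    "Candidate A": {"score": 3,
                    "positives": ["takes ownership", "uses effective strategies"],
                    "negatives": []},
    "Candidate B": {"score": 3,
                    "positives": ["takes ownership"],
                    "negatives": ["does not reframe the problem as a challenge"]},
}

def balance(name: str) -> int:
    """Positive indicators observed minus negative ones."""
    return len(answers[name]["positives"]) - len(answers[name]["negatives"])

best = max(answers, key=balance)
print(f"Same score, but {best} showed the stronger balance of indicators.")
```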

 

At this stage, for each competency we should have: the specific question we will use in the interview, together with any follow-up or probing questions to clarify context, action or results (see the practical guide How to Design Interview Questions); the scoring key; and the positive and negative points. With all these pieces we are in a position to put together a question-assessment template (see Figure 2).

Figure 2: Question Assessment Template

 

Finally, if we consider that one competency, or a set of competencies, is more relevant than the others, then competencies might also be assigned a specific weight. For example, given the particular role we are selecting for, we might conclude that Judgment is more relevant than the others because the position requires a great deal of experience in a cross-cultural environment. We could, therefore, determine that this competency is 50% more relevant than the others in the interview.
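As a quick, hypothetical illustration of what that weight does to the overall score (the five competencies and the 0–4 ratings below are invented for the example):

```python
# Hypothetical ratings on a 0-4 scale; Judgment carries a weight of 1.5, the rest 1.0.
ratings = {"Judgment": 4, "Planning": 2, "Communication": 2, "Teamwork": 2, "Initiative": 2}
weights = {c: 1.5 if c == "Judgment" else 1.0 for c in ratings}

unweighted = sum(ratings.values()) / len(ratings)
weighted = sum(weights[c] * ratings[c] for c in ratings) / sum(weights.values())

print(f"Unweighted average: {unweighted:.2f}")  # 2.40
print(f"Weighted average:   {weighted:.2f}")    # 2.55 - the strong Judgment rating counts for more
```

The same adjustment can, of course, be applied directly in the Interview Assessment Template or in a spreadsheet.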

 

Figure 3 shows an example of an Interview Assessment Template, with a summary of all the candidate’s scores, the weight assigned to each competency (optional), and the overall score for the interview.

 

Figure 3: Interview Assessment Template