Competency-Based Interviews: One Criticism


One of the criticisms leveled at competency-based interviews (CBI) is that, by focusing on competencies, the stress is on the past rather than on the future. Consequently, CBI interviews are useful – the criticism goes – for identifying candidates who are looking for a lateral career move: to apply their already acquired competencies in another organization and in a somewhat similar role. However, top talent is not looking for lateral moves but for career progression. And here is where CBI interviews would fall short: their lack of focus on the future opportunity for professional growth.


This is a fair criticism. There is, unfortunately, no silver-bullet assessment tool. However, we believe this neither diminishes the value of CBI interviews nor disqualifies them as a methodology.



“Incentives to grow professionally are not antagonistic to current competencies. There is no continuum that separates the two as if they were opposites”



To put the focus on the future, we could use situational interviews – another structured form of interview – to present hypothetical contexts and measure interviewees’ analytical or problem-solving skills in those situations. Insights gathered in situational interviews can complement CBI. But then again, situational interviews have their own shortcomings: chiefly, there is no way to know with certainty how someone will behave in the future, regardless of how precisely they describe what they would do.


Another available tool, assessment centers, can also come in handy to help recruiters understand whether they are dealing with candidates who can motivate themselves by defining their own goals, or who are a good fit for their organization’s culture.


CBI interviews are designed around the competencies necessary to perform successfully in a particular position. When comparing the required competencies with the actual competency levels candidates bring to the table, it is unlikely that a single candidate will check all the boxes. It is in that gap that a growth opportunity for candidates exists.


This might sound like code for accommodating candidates who are not top-notch, but it is not. A candidate who looks promising based on her ability to cope with challenges thrown at her in the form of hypothetical situations might fall short of expected performance if she is not able to tap into her own pool of experiences when the time comes. That is why it is essential, in CBI interviews, to get specific examples of how candidates behaved in the past.


Incentives to grow professionally are not antagonistic to current competencies. There is no continuum that separates the two as if they were opposites, with candidates moving from one end to the other by trading more of one extreme for less of the other.




Wait, Competency-Based What?

Competency-based Interviews (CBI). In this type of interview, questions are designed to find out whether a candidate possesses a particular set of competencies considered necessary to perform successfully in a specific role.


Taking into consideration the significant costs that wrong recruitment decisions impose on any organization, it is critical that those decisions are as accurate, reliable, and objective as possible.


The cornerstone assumption of CBI is that one of the best predictors of an individual’s future performance is what she did in the past. That is why these interviews are also called behavioral interviews. Still, we prefer to refer to them as competency-based interviews, since competency encompasses not only behaviors but also a set of skills and a stock of knowledge.


As the above may have suggested, questions in CBIs are designed to elicit from candidates past work experiences and the decisions they took to achieve a particular outcome, in circumstances and situations similar to those they will face in the prospective role. Recruiters then use the candidates’ answers to obtain a better understanding of their capabilities, including how they go about articulating thoughts and presenting arguments, if that is also relevant for the position. The ultimate goal for the recruiter is to have a more reliable, valid indicator of how the candidate will act in the future.


The first challenge that CBI interviews pose is having a proper definition of competencies. How can we define competency? Let’s start with one of the most straightforward descriptions: according to the Cambridge Dictionary, competency (or competence) is “the ability to do something well”. The dictionary also offers a definition more appropriate for the purposes of this article: “an important skill that is needed to do a job”. Skills can be broken down into hard (technical) skills, such as the ability to write code or to operate a particular machine, and soft skills like communication, leadership, the ability to work in a team setting, interpersonal skills, or creativity (combining facts and information – knowledge – in innovative ways).


However, referring to competency solely as a skill seems a somewhat limited definition. The impression is that competency encompasses something larger than skill. We have mentioned knowledge. If knowledge refers to the cognitive ability an individual has to retrieve facts and information acquired through a theoretical or practical understanding of a particular subject, then it sure looks like knowledge can also play a role in defining what competency is.


The OECD defines competency as “something more than just knowledge and skills. It involves the ability to meet complex demands, by drawing on and mobilizing psychosocial resources (…) in a particular context”. Psychosocial resources are motivations, desires, values, attitudes, and even skills. An individual drawing on and mobilizing those resources is what, in the eye of the beholder, translates into the observation of a behavior or a specific action.


Combining behaviors, knowledge, hard technical skills, and interpersonal skills in the context of this article leads us to the following definition:


Competency is the result of the combination of knowledge, behaviors, and the technical and interpersonal skills an individual needs to perform well in a specific job role


In the same way that an enterprise is more than the sum of its parts (i.e., 30 engineers will do more working together than each working individually), the combination of knowledge, behaviors, and skills results in a competency that is greater than the sum of its components.


“The basis for designing an interview should always be the specific job description to ensure interviewers aim at the relevant core competencies for a particular role”


A recruiter wants to build the interview around the questions that will help her understand whether candidates possess the particular combination of behaviors, knowledge, and skills that will enable them to perform in the specific role. By focusing on how candidates handled specific, relevant situations in the past, the interviewer can gather the evidence she needs to conclude whether the candidates have the required competencies. That is, in a nutshell, what competency-based interviews are about.


What is Special About CBI?


We summarize the advantages of CBI in three main points:

A) It uses the job description (analysis) as the source from which the recruiter can derive most of the questions, keeping the interview questions relevant to the position;

B) It uses the same questions for all applicants;

C) It uses standardized scoring keys to evaluate the answers.


The basis for designing an interview should always be the specific job description, to ensure interviewers aim at the relevant core competencies for a particular role. In turn, the job description is an outcome of the collaboration between recruiters and hiring managers.



CBI questions should always be related to the job. Let’s suppose that one of the requirements is “leadership.” Interviewers should then make sure that the interview includes at least one question aimed at assessing this competency: for instance, “Can you provide an example from your role as X at company Y when you had to push others and yourself to achieve a certain goal?”.


Using the job description to design the interview has the added benefit of providing interviewers with a consistent structure that later facilitates comparisons between the performance of different applicants. Inconsistencies in the questioning (a lack of structure in the interviews performed across a set of candidates) are, on the one hand, unfair to applicants and, on the other, make it very difficult to compare who did best on equal terms.


Finally, because CBI questions focus on facts from real situations, and because CBI questioning requires that interviewers design a set of scoring keys, the result is a more objective evaluation – further removed from biases and snap judgments. Developing a rating scale requires a great deal of time and work in the preparations leading up to the interviews. It often feels burdensome for recruiters and hiring managers. However, the benefits of such an investment become apparent further down the recruitment process when, thanks to a clear set of rules, assessments are based on objective, unbiased observations from the interview. It should be clear, though, that CBI questions by themselves cannot completely prevent bias. Interviewers will still need to make a conscious effort to avoid falling for heuristics or stereotyping.
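To make the idea of a scoring key concrete, here is a minimal sketch of what one might look like in practice. The competencies, behavioral anchors, and 1–5 scale below are purely illustrative assumptions, not a standard instrument:

```python
# A hypothetical scoring key: each competency maps rating levels to
# behaviorally anchored descriptions agreed on before the interviews.
SCORING_KEY = {
    "leadership": {
        1: "No example of guiding others toward a goal",
        3: "Coordinated a team effort with some prompting",
        5: "Set a goal, rallied others, and delivered the outcome",
    },
    "problem_solving": {
        1: "Described the problem but offered no resolution",
        3: "Resolved the problem with significant outside help",
        5: "Diagnosed the root cause and resolved it independently",
    },
}


def score_candidate(ratings: dict) -> float:
    """Validate per-competency ratings against the key and average them."""
    for competency, rating in ratings.items():
        if competency not in SCORING_KEY:
            raise ValueError(f"Unknown competency: {competency}")
        if rating not in range(1, 6):
            raise ValueError(f"Rating out of the 1-5 scale: {rating}")
    return sum(ratings.values()) / len(ratings)


overall = score_candidate({"leadership": 4, "problem_solving": 3})  # 3.5
```

Because every interviewer rates against the same anchors, two candidates (or two interviewers) can be compared on the same footing, which is exactly what an unstructured interview cannot offer.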


The Biases That Fooled Us

When interviewing candidates, we want to adhere to standard, consistent evaluations across the board. However, we often fall into biases – consciously or unconsciously – that result in potentially poor hiring decisions as well as in unequal opportunities for candidates.


Biases undermine the consistency and fairness intended with structured interviews. Orchestras in the U.S. were mostly composed of men in the 1970s, with the top five having fewer than 5% female musicians. To address this gender imbalance, orchestras started holding blind auditions in which musicians would play hidden behind a screen. Consequently, gender, race, personal connections, and reputation stopped counting for anything in auditions. The only thing that mattered was the music that came from behind the screen. The proportion of women playing in the larger orchestras has grown by a factor of five since blind auditions became the norm – a great leap, though they still make up only one-quarter of the musicians.


Rating errors, on the other hand, include, for instance, giving all candidates high ratings or giving all of them low ratings.


Most Common Biases and Rating Mistakes in a Recruitment Interview


Attribution bias

Categorizing people helps individuals navigate our social world more efficiently. To prevent cognitive overload, we make sense of a large number of stimuli by sorting them into buckets, into categories. This categorizing frees up mental resources for other tasks. For example, our stereotype of the elderly is what causes us to speak loudly in their company, even if a particular individual is not hard of hearing.


In the context of an interview, the risk for the interviewer is to end up paying more attention to actions that are consistent with the stereotype than to actions that contradict it, resulting in less accurate evaluations (and an increased risk of false positives).


“Similar to me” / Confirmation bias

We have a natural tendency to like others who are similar to us in various ways, whether it is because they studied at the same institutions we did, support the same basketball team, or have similar interests.


In the context of an interview, the risk is to award candidates who appear similar to the interviewer higher ratings than they actually deserve.


Halo effect

The tendency to like (dislike) everything about a person based on very limited information. From just one small piece of information we build unfounded associations, including about things we have not observed. Because someone shows she is a great (poor) speaker, we assume she might also be a great (poor) creative or a great (poor) problem solver.


In the context of an interview, the halo effect increases the weight of first impressions, causing the interviewer to build associations that are not founded on observations. It might induce higher ratings on, let’s say, Problem Solving just because the candidate scored high in verbal communication (Interpersonal Skills), irrespective of the candidate’s actual performance on Problem Solving. It also increases the likelihood of dismissing information that might contradict the first impression.


“Whether it is attribution, “similar to me”, or halo-effect bias, the best course of action is to avoid jumping to conclusions too fast”


Rating mistakes: Strictness, Leniency, and Central Tendency

Rather than mistakes per se, the following tendencies strip evaluations of any discriminating value, since they all represent a propensity to give all interviewees similar ratings.


Strictness refers to the tendency to give low ratings to all candidates. On the opposite extreme, we have leniency: a tendency to give high ratings to all candidates.


As opposed to leniency and strictness – which are about extreme ratings – interviewers might present a tendency to rate all competencies at the middle of the scoring scale. This rating mistake is referred to as central tendency.


How to Minimize Them?


How to minimize biases? Whether it is attribution, “similar to me”, or halo-effect bias, the best course of action is to avoid jumping to conclusions too fast. When evaluating, interviewers should concentrate on the responses given by the candidate rather than on the candidate’s outward characteristics or personality, and hold back from considering any non-performance-related factors. Re-examining the candidate’s scores against the handwritten notes interviewers took during the interview might also help reduce biases.


Another way to tackle biases is to conduct interviews with a panel of interviewers rather than a single one. This is a way to even out individual judgments. Yes, there may be individual errors, but as long as all panelists share a common basis, when all judgments are averaged the result is usually accurate. However, a panel might not work if all panelists share the same bias and/or if panelists’ evaluations influence one another – that is, if assessments from different panelists are correlated.
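The arithmetic behind panel averaging can be illustrated with a toy simulation (all the numbers here are made-up assumptions): independent rating errors shrink as the panel grows, while a bias shared by every panelist survives the averaging untouched.

```python
import random

TRUE_SCORE = 3.6   # the candidate's "true" competency level (hypothetical)
NOISE = 1.0        # spread of each panelist's individual rating error


def panel_average(n_panelists: int, shared_bias: float = 0.0) -> float:
    """Average of n ratings, each = true score + shared bias + own noise.

    shared_bias models an error common to every panelist (e.g., the
    same stereotype), which averaging cannot remove.
    """
    ratings = [TRUE_SCORE + shared_bias + random.gauss(0, NOISE)
               for _ in range(n_panelists)]
    return sum(ratings) / len(ratings)


def mean_abs_error(n_panelists: int, trials: int = 2000,
                   shared_bias: float = 0.0) -> float:
    """Average distance between the panel's score and the true score."""
    return sum(abs(panel_average(n_panelists, shared_bias) - TRUE_SCORE)
               for _ in range(trials)) / trials


random.seed(7)
solo_error = mean_abs_error(1)                      # one interviewer
panel_error = mean_abs_error(7)                     # seven independent panelists
biased_error = mean_abs_error(7, shared_bias=1.0)   # seven panelists, same bias
```

Under these assumptions, the seven-person panel lands noticeably closer to the true score than a single interviewer, but when all seven share the same bias the panel’s error stays large no matter how big the panel gets.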


How to minimize rating errors? By understanding the competencies being assessed and comparing the behaviors observed in the interview with the behaviors used to establish the proficiency-level ratings for each competency.


When in doubt about whether to award a high (low) score, interviewers need to understand that such a score does not indicate perfect (or a complete lack of) performance. A high score means the interviewee demonstrated more of the competency than is generally common; equally, a low score means she did not show much of the competency in her responses.



A Brief History of Assessments and Interviews

Assessments in China have a long history. The use of interviews for recruitment purposes stretches back millennia. The civil service examination system (科举, kējǔ) in Imperial China can be taken as a form of interviewing and may be the first documented record of the use of selection tests. Imperial exams existed as early as the Han dynasty (206 BC–220 AD), though it was not until the Song dynasty (960–1279) that the exams were institutionalized as a means of recruitment for government office. (1)



During the Song dynasty, the widespread use of one technological advancement was fundamental to the expansion of education and, by extension, of the imperial examination system: the printed book. Although printing had been developed centuries earlier, it was China’s Song society that first made widespread use of printed books. Despite initial efforts to control all printing, by the 1020s the government was encouraging the opening of schools, awarding land endowments to schools that enrolled students. (2) The civil service examination system prevailed until 1905, when it was discontinued as a result of pressure from reformers looking to develop a national school system and other modernization measures. In 1915 Western psychological testing was introduced in China, but it was not until the 1980s that it became popular, following China’s 1978 opening-up policies, which favored the country’s active participation in the world economy over a development model based on self-sufficiency.



The “Modern” Job Interview


The invention of the first productive steam engine (1712) was the harbinger of the Industrial Revolution in Europe and the US. The steam engine brought the factories and the railway, and with those came a radical change in the structuring of labor. A new labor market developed, away from the traditional (often hereditary) master–apprentice structure that had been at the core of production systems up until the 18th century. Initially, a job “interview” was as simple as showing up at the gate of a factory and hoping to get picked for a job.


A brief history of assessments: “Edison Questions Stir Up Storm”


Interviews became more demanding as the pace of technological development increased, requiring more capable, better-educated workers. “Edison Questions Stir Up a Storm”: so went a headline of a New York Times piece on May 11, 1921. It referred to a questionnaire designed by Thomas A. Edison to select college graduates applying for an executive position at the first commercial central power plant ever built: the Pearl Street Station (named after, you guessed right, the street the plant sat on, at 255–257 Pearl Street in Manhattan).


The interview consisted of a list of 141 questions that, according to the interviewees – featured as “victims” by the newspaper – only “a walking encyclopedia” could answer. Some of the questions, based on the recollections of the interviewees, were: “What country makes the best optical lenses and what city?”, “Who invented the cotton gin?”, “What is the weight of air in a room 20 x 30 x 10?”. The article’s main criticism was that the questions tested only someone’s memory, not their knowledge, intelligence, and reasoning abilities.


Rather than an interview, those 141 questions were more like a written test, with Mr. Edison present in the room, going about his business and waiting for candidates to finish. Those who passed would then sit down for an interview with Mr. Edison. Today’s equivalent would be nothing short of having Jack Ma or Elon Musk in the room with you, waiting for you to finish the test – stressful, to say the least, for novice candidates, we would assume.


Many scholars take the Edison interview as the origin of the modern employment interview. After Edison’s, other industrial leaders followed suit and began developing their own interview processes. Thus began what would eventually become the employment interview methods we use today – and, in time, a whole industry. According to the most recent data, the 2018 World Employment Confederation Economic Report, the employment industry generated USD 471 billion globally (in sales revenue) in 2016, with five countries accounting for the majority of the revenue (US, Japan, UK, Germany, and China).


Although almost 100 years have passed since Edison’s employment interview, and despite the further advancement and sophistication of the job application processes and techniques available, the goal of recruitment remains the same: reduce the uncertainty around the future performance of a pool of prospective hires and choose the best ones. By best we mean those who will achieve results consistently, regardless of the environment or circumstances, in a way that is sustainable for the organization.


As data scientist Cathy O’Neil writes “How a candidate would actually perform at the company (…) is in the future, and therefore unknown”. (3) So the recruitment process needs to settle for proxies when trying to predict the future. From 1921 until today a whole body of research has emerged to develop and measure the effectiveness of those proxies (See Figure 1).


The Role of Interviews in the Selection Process


Interviews are the most frequently used selection tool in organizations. They play a very prominent role in the overall selection process: final hiring decisions are often based entirely on the interviews.



Figure 1: Predictive Validity of Selection Methods


Most of the time, interviews are unstructured. They resemble a conversation: their content is discretionary, they follow a loose framework, there are no predefined standards to evaluate candidates’ performance, and it is pretty much up to the recruiter which questions to pose.


As research has shown (see Figure 1), unstructured interviews are not particularly effective predictors of job performance and, due to their non-standard nature, they leave more room for biases on the recruiter’s side and do not allow for equitable assessments of the performance of several candidates (there is no consistency in the ratings across interviewers).


In contrast to unstructured interviews, structured interviews are a more reliable predictor of job performance. These can take the form of:


Situational interviews: based on questions presenting interviewees with hypothetical situations, similar to those they might encounter on the job, with the aim of observing hypothetical behaviors. These interviews are designed to measure analytical and problem-solving skills on the spot.

Competency-based interviews (CBI): sometimes referred to as performance-based interviewing. Questions here are designed to assess whether the candidate possesses the required competencies to perform on the job. The interviews are designed to gather evidence that the interviewees had used those competencies in the past.


There are other assessment methods besides interviews: GMA tests, assessment centers, or job knowledge tests, to name only a few. They act more as gatekeepers: their role is not so much to find the perfect candidate as to weed out candidates who are not the right fit for one reason or another (i.e., not the right organizational fit, not motivated enough, or lacking the required body of knowledge).


“Looking to target as many candidates as possible to increase the likelihood of finding that needle in a haystack is no longer a sustainable strategy. Quality of hire is now, more than ever, the name of the game.”


Most of these assessment methods are the legacy of the Industrial Society, where the supply of talent was higher than the demand, an era of talent surpluses. In our present time that is no longer the case. In the Information and Knowledge Society talent is a scarce resource, and recruiting is no longer a game of large numbers. Looking to target as many candidates as possible to increase the likelihood of finding that needle in a haystack is no longer a sustainable strategy. Quality of hire is now, more than ever, the name of the game.


These remarks are not meant to diminish the value of these tools. After all, interviews are a legacy of the Industrial Society as well. No, the point here is to emphasize the need for these tools to be adjusted regularly to the circumstances and times in which they are deployed. If we assume Asimov’s simple paradox that change is the only constant, and that critical thinking, communication, collaboration, and creativity (the four C’s) trump technical skills, then we have a compass for how we should be applying these tools.