According to Harvard Business Review, competence is not just a checklist of skills but also includes an individual’s unique perspective on their job, which shapes how they perform.
Competency assessments evaluate these dimensions systematically, enabling organizations to make more accurate hiring decisions, identify true high-potential talent, and develop targeted programs that drive measurable results. Without such an approach, recruitment and promotion decisions are essentially guesses.
This article guides you through what competency assessments are, effective methods, and an adaptable step-by-step process, including practical tools like proficiency scales and rubrics to help capture both the science and nuance of job competence.
TL;DR — Quick Summary
Competency Assessment Definition: A structured process that evaluates whether an individual demonstrates the behaviors, knowledge, and skills required for role success, measured against defined standards.
Key Applications: Used in hiring, performance reviews, promotions and succession planning, learning and development, and organizational change.
Assessment Methods: 360-degree feedback, behavioral interviews, work samples and simulations, psychometric tests, assessment centers, self-assessments, and on-the-job observations.
Scoring Approach: A 1-5 proficiency scale with behavioral anchors, weighted and aggregated into an overall competency rating.
Process and Fairness: A five-step process, from defining roles and 6 to 10 key competencies through turning results into action, kept fair through rater training, calibration sessions, and multiple evidence sources.
What Is a Competency Assessment (and How Does It Work)?
A competency assessment is a structured process that evaluates whether an individual demonstrates the specific behaviors, knowledge, and skills necessary to perform effectively in a role.
It measures proficiency against defined standards, using:
Rating scales
Multiple data sources
Observable behavioral evidence
What's the Difference Between Competencies, Skills, and KPIs?
These terms get used interchangeably, but they measure different things.
| Concept | Definition | Example | Key Distinctions |
| --- | --- | --- | --- |
| Competencies | Combination of observable behaviors, knowledge, and skill application in a specific work context. | "Strategic thinking": understanding business drivers, analyzing data, and balancing short- and long-term goals through decision-making. | Describes how someone works by integrating skills with knowledge and behavior in real situations. |
| Skills | Specific capabilities that can be learned and demonstrated through training or practice. | Writing Python code, conducting a performance review. | Building blocks of competencies; measurable and task-specific, but don't reflect effective application on their own. |
| KPIs | Quantitative measures of outcomes and results achieved by an individual or team. | Revenue per sales rep, customer satisfaction scores, project completion rates. | Show what was achieved but not how; they measure results, not the behaviors or capabilities behind success. |
A complete employee evaluation needs all three. Competency frameworks provide the "how," employee skills matrices track the "what," and KPIs measure the "results." When you spot areas of improvement at work through competency assessment, you can build targeted skill development that actually improves performance metrics.
When Should You Run a Competency Assessment?
Competency evaluation isn't something you do once and forget. Different situations call for different assessment approaches.
Hiring and Selection: Use competency assessments early in recruitment to predict job performance beyond resumes. Behavioral interviews and work samples reduce mis-hires and speed time-to-productivity.
Performance Reviews: Assess competency growth at regular review cycles. Moving beyond vague feedback, competency ratings define where employees stand and guide targeted development. This can be facilitated by employee development software like Teamflect, which shows exactly where someone stands on a defined scale and what behaviors would move them to the next level.
Promotion and Succession Planning: Evaluate leadership competencies required for new roles, not just current performance. Rigorous assessments support better promotion decisions and reduce derailments.
Learning and Development: Identify competency gaps before training to focus on real needs. Use post-training assessments to measure if gaps are closed effectively.
Organizational Change: Following restructures or new strategies, assess if teams have needed competencies to execute changes. This informs whether to develop, hire, or reorganize talent.
Match Method to Decision: Use rigorous, multi-method assessments for high-stakes decisions like promotions, and simpler methods like self-assessments plus manager input for developmental discussions.
Competency assessment should be an ongoing process, tailored to each organizational decision to maximize its impact.
What Are the Best Competency Assessment Methods (and When Should You Use Each)?
No single competency assessment method works for every situation. Each approach has strengths and limitations. The best competency assessments combine multiple methods to build a complete picture.
360-Degree Feedback
In this method, managers, peers, direct reports, and sometimes external stakeholders rate the individual on defined competencies, often using a standardized scale. The aggregated feedback shows how someone's behavior is perceived across different working relationships.
Pros:
Provides a comprehensive view that one person can't offer.
Reduces individual bias through multiple raters.
Reveals blind spots between self-perception and how others experience someone's work.
Particularly valuable for assessing interpersonal and leadership competencies.
Cons:
Time-intensive to coordinate.
Rater bias still exists, just distributed across more people.
Feedback quality depends on rater familiarity and honesty.
Can create political dynamics if not handled carefully.
Requires clear behavioral anchors to ensure raters assess the same things.
Best for:
Leadership competency assessment
Mid-year development check-ins
Identifying coaching priorities for managers and senior individual contributors
Implementation tip: Use rater training to explain the scale and reduce common biases like leniency or recency effects. Talent management software like Teamflect, which includes 360-degree feedback, makes administration and anonymity management much simpler.
Behavioral Interviews
Structured interviews ask candidates to describe specific past situations where they demonstrated target competencies. The STAR method (Situation, Task, Action, Result) helps interviewers probe for concrete behavioral evidence rather than hypothetical responses or general claims.
Pros:
Strong predictive validity when structured properly.
Relatively cost-effective.
Can be used for both internal and external candidates.
Probes actual behavior rather than theoretical knowledge.
Easy to train interviewers on a standardized approach.
Cons:
Candidates can prepare rehearsed stories.
Recall bias affects which examples come to mind.
Requires skilled interviewers to probe effectively and spot inconsistencies.
Less effective for assessing competencies that require real-time demonstration.
Best for:
Hiring decisions
Internal mobility assessments
Promotion conversations where you need specific evidence of past competency demonstration
Particularly strong for evaluating problem-solving, leadership, communication, and adaptability competencies.
Implementation tip: Create interview guides with 3 to 4 questions per competency, each with suggested follow-up probes. This ensures consistency across candidates and reduces interviewer bias.
Work Samples and Simulations
These assessments ask individuals to perform tasks or respond to scenarios that mirror actual job demands. For example, a management simulation might present an inbox exercise with competing priorities and stakeholder emails.
Pros:
Highest predictive validity for job performance because you're directly sampling the work.
Harder to fake than interview responses.
Provides observable evidence of competency in action.
Candidates see the role more realistically, improving self-selection.
Cons:
Time and resource intensive to design and administer.
Requires subject matter expertise to create realistic scenarios.
Some competencies are hard to simulate in controlled settings.
Scoring requires trained assessors to ensure reliability.
Best for:
Technical roles where competencies must be demonstrated in real time (engineering, design, data analysis).
Customer-facing roles where interpersonal competencies matter (sales, customer success, nursing).
Leadership roles where judgment and prioritization are critical.
Implementation tip: Design simulations that reflect your actual work environment and challenges. Generic exercises have lower predictive value than contextualized ones.
Psychometric, Cognitive, and Personality Assessments
Standardized tests measure cognitive abilities (reasoning, problem-solving, learning agility) or personality traits that underlie competency development. These tools compare individuals against norm groups and provide objective scoring.
Pros:
Standardized and scientifically validated.
Reduce certain types of bias when used appropriately.
Efficient to administer at scale.
Useful for predicting learning potential and cultural fit.
Can identify risks or development needs that interviews miss.
Cons:
Require proper interpretation by qualified professionals.
Can introduce adverse impact if not validated for your population.
Measure potential more than demonstrated competency.
Some candidates find them intrusive or irrelevant.
Costs add up with per-assessment licensing fees.
Best for:
High-volume hiring where you need efficient screening.
Leadership pipeline development to identify high-potential employees.
Situations where cognitive ability or personality traits strongly predict competency development in your context.
Implementation tip: Never use psychometric tests as the sole decision criterion. Combine them with behavioral evidence. Work with qualified professionals to ensure tests are validated for your use case and don't create unintended bias.
Assessment Centers
Multi-exercise programs where candidates complete several activities over hours or days, observed by trained assessors. Exercises might include group discussions, presentations, role-plays, case analyses, and in-basket simulations. Each exercise is designed to elicit specific competencies.
Pros:
Highest reliability and validity when well-designed.
Multiple exercises and multiple assessors reduce individual biases.
Simulates real job complexity and competing demands.
Provides rich developmental feedback.
Strong face validity with participants and stakeholders.
Cons:
Expensive and time-consuming.
Requires significant expertise to design and operate.
Assessor training is critical and ongoing.
Can be overwhelming for participants.
Logistically challenging to coordinate.
Best for:
Senior leadership selection
High-potential identification programs
Graduate hiring programs where investment per candidate is justified
Most valuable when consequences of a wrong decision are significant.
Implementation tip:
Clearly link each exercise to target competencies.
Use multiple trained assessors per candidate to ensure reliability.
Provide developmental feedback even to unsuccessful candidates to build your employer brand.
Self-Assessments
Individuals rate their own proficiency on defined competencies, often as part of performance reviews or development planning. Self-assessments typically use the same rubric as other methods to enable comparison.
Pros:
Promotes reflection and ownership of development.
Quick and inexpensive to administer.
Provides insight into self-awareness when compared with other ratings.
Useful starting point for development conversations.
Cons:
Self-ratings often inflate actual competency levels.
Cultural factors affect willingness to self-promote or self-critique.
Limited value as standalone assessment.
Can't replace observation or evidence-based methods.
Best for:
Identifying employee strengths they want to build on
Understanding where someone sees their own areas of improvement at work
Most effective when combined with manager or peer assessment
Implementation tip: Use self-assessment to open dialogue, not to make decisions. An individual development plan tool like Teamflect, which includes self-assessment alongside manager ratings, helps employees see gaps between perception and reality.
On-the-Job Observations and Checklists
Direct observation of performance in real work settings, often using structured checklists that list specific behaviors or competency indicators. Observers watch the individual perform tasks and mark whether competencies are demonstrated.
Pros:
Captures actual job performance, not simulated or retrospective accounts.
Particularly effective for technical or procedural competencies.
Can be done repeatedly over time to track improvement.
Relatively quick once checklists are developed.
Cons:
Observation can change behavior (Hawthorne effect).
Requires trained observers who understand both the competencies and the work.
Not practical for all roles or competencies.
Scheduling observations can be disruptive.
Best for:
Clinical roles (nursing, medical)
Manufacturing and quality control
Customer service, retail, and any other role where competencies are observable in day-to-day work.
Excellent for assessing technical competencies and adherence to procedures.
Implementation tip:
Create behavioral checklists that clearly define what "good" looks like.
Train observers to rate what they see, not what they think someone could do.
Conduct multiple observations to account for day-to-day variation.
Sample 1-5 Proficiency Scale and Scoring
Most competency assessments use a 1 to 5 rating scale to measure proficiency. This scale offers enough granularity to differentiate performance without overwhelming raters. The key to effective scoring is behavioral anchors: specific, observable indicators that define each level.
The 1-5 Scale with Behavioral Anchors
A well-designed scale describes what competency demonstration looks like at each level. Here's a general framework that can be adapted to any competency:
1 - Novice: Minimal competency; requires step-by-step help; applies knowledge only in simple situations; performance is inconsistent and heavily supervised.
2 - Developing: Basic skills with coaching; somewhat independent in routine tasks; outcomes inconsistent; learning to work alone.
3 - Proficient: Reliable and independent; consistently meets role expectations; adapts in common situations; baseline for success.
4 - Advanced: Strong, consistent performance; handles complex or ambiguous situations; exceeds role expectations; coaches and supports others.
5 - Expert: Mastery and innovation; sets standards; handles complex tasks; recognized as a go-to authority and leader.
Some roles may not require all five levels; leadership roles often expect levels 4-5 in key competencies.
Example Rubric: Stakeholder Communication
Here's how behavioral anchors work for a specific competency. This example shows what "Stakeholder Communication" looks like at each proficiency level.
Level 1 - Novice
Shares information when directly asked but rarely proactively
Messages lack clarity or contain errors that create confusion
Struggles to tailor communication style to different audiences
Misses or ignores important stakeholder concerns
Requires manager review before most communications
Level 2 - Developing
Communicates routine updates with some prompting
Basic messages are clear but lack depth or context
Attempts to adjust tone for different stakeholders with mixed results
Sometimes addresses stakeholder concerns but misses nuances
Needs guidance on sensitive or complex communications
Level 3 - Proficient
Proactively shares relevant information with appropriate stakeholders
Writes and speaks clearly, organizing ideas logically
Adjusts communication approach based on audience needs
Acknowledges and addresses stakeholder concerns constructively
Handles most communications independently with good outcomes
Level 4 - Advanced
Anticipates stakeholder information needs and communicates proactively
Crafts compelling messages that drive understanding and action
Adapts style, detail level, and medium effectively for any audience
Navigates difficult conversations and conflicting stakeholder priorities
Builds credibility through consistent, transparent communication
Coaches others on stakeholder communication approaches
Level 5 - Expert
Shapes organizational communication standards and best practices
Communicates complex, sensitive information that influences major decisions
Manages communication across senior leaders and external stakeholders
Turns resistant or skeptical audiences into supporters
Recognized across the organization for communication excellence
Develops communication frameworks others use
Using this rubric, an assessor can objectively rate someone's stakeholder communication competency by identifying which behavioral description best matches observed performance. The key is having specific, observable indicators rather than vague adjectives.
Weighting and Aggregation
Not all competencies matter equally for every role. Sales roles might weight "customer focus" at 25% while weighting "strategic thinking" at 10%. Engineering leadership roles might do the opposite. Weighting ensures your overall competency score reflects what actually drives success in the role.
Here's a typical weighting approach for a management role:
Leadership & People Development: 30%
Execution & Results Delivery: 25%
Collaboration & Influence: 20%
Customer Impact: 15%
Continuous Learning & Adaptability: 10%
To calculate a weighted score:
Rate each specific competency on the 1 to 5 scale
Average competencies within each category
Multiply each category average by its weight
Sum the weighted scores for an overall competency rating
Example:
If someone scores 3.5 on Leadership competencies, 4.0 on Execution, 3.0 on Collaboration, 4.5 on Customer Impact, and 3.5 on Continuous Learning, the overall score is calculated as: (3.5 × 0.30) + (4.0 × 0.25) + (3.0 × 0.20) + (4.5 × 0.15) + (3.5 × 0.10) = 3.675
This weighted score provides a single number for comparison and tracking while preserving the nuance of strengths and development areas within the full profile.
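If you export ratings from a spreadsheet or an HR system, this aggregation is easy to automate. Below is a minimal Python sketch of the calculation above; the category names, weights, and per-competency ratings are illustrative placeholders, not a required schema.

```python
# A minimal sketch of the weighted scoring described above.
# Category names, weights, and per-competency ratings are illustrative.

CATEGORY_WEIGHTS = {
    "Leadership & People Development": 0.30,
    "Execution & Results Delivery": 0.25,
    "Collaboration & Influence": 0.20,
    "Customer Impact": 0.15,
    "Continuous Learning & Adaptability": 0.10,
}

# 1-5 ratings for the individual competencies within each category.
ratings = {
    "Leadership & People Development": [3, 4],     # averages to 3.5
    "Execution & Results Delivery": [4, 4],        # averages to 4.0
    "Collaboration & Influence": [3, 3],           # averages to 3.0
    "Customer Impact": [4, 5],                     # averages to 4.5
    "Continuous Learning & Adaptability": [3, 4],  # averages to 3.5
}


def overall_competency_score(ratings, weights):
    """Average the competencies within each category, multiply each
    category average by its weight, and sum into one overall rating."""
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("Category weights must sum to 1.0")
    return sum(
        (sum(scores) / len(scores)) * weights[category]
        for category, scores in ratings.items()
    )


score = overall_competency_score(ratings, CATEGORY_WEIGHTS)
print(f"Overall competency rating: {score:.3f}")  # -> 3.675
```

Checking that the weights sum to 1.0 before scoring catches a common spreadsheet error, since a mistyped weight would otherwise silently skew every score.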
5 Steps to Conduct a Competency Assessment
Running an effective competency-based assessment requires planning and structure. Here's how to do it right, from scoping to follow-up action.
Step 1: Define Roles and Competencies
Start by identifying which roles you're assessing and what competencies matter for success. This sounds simple, but it's where many organizations stumble. Resist the urge to create exhaustive lists. Focus on the 6 to 10 competencies that genuinely differentiate high performers from average ones in each role.
Best Practices:
Involve role incumbents in defining competencies so frameworks reflect actual work.
Use clear, jargon-free language that everyone understands the same way.
Validate competencies with data where possible (what predicts performance or retention in your organization?).
Review and update competencies annually as roles and business needs change.
Keep competency lists focused; more isn't better if it dilutes attention from what truly matters.
Mistakes to Avoid:
Copying generic competency lists from the internet without customizing to your context.
Including too many competencies, which makes assessment unwieldy and results unusable.
Using vague competency definitions that different assessors interpret differently.
Forgetting to differentiate competencies by level (entry vs. mid vs. senior expectations differ).
Defining competencies that sound impressive but don't actually predict success in your roles.
Step 2: Design the Rubric and 1-5 Anchors
Once competencies are defined, you need to describe what each proficiency level looks like. Generic scales like "below expectations" to "exceeds expectations" create inconsistency because raters interpret them differently.
Best Practices:
Write 3 to 5 specific behavioral indicators per level, not just reworded adjectives.
Make level differences clear and meaningful (what changes between 3 and 4?).
Include both positive indicators (what's present) and negative indicators (what's absent).
Field-test rubrics with actual raters before full deployment.
Provide reference examples that show what each level looks like in practice.
Build rubrics collaboratively so stakeholders buy into the standards.
Mistakes to Avoid:
Using vague descriptors like "good" or "strong" that mean different things to different people.
Making levels too similar, which reduces inter-rater reliability.
Creating standards that are either impossibly high or too easy to achieve.
Writing rubrics that emphasize style over substance (how someone communicates vs. whether they get results).
Forgetting to specify what "meets expectations" looks like (usually level 3), which is the most important anchor point.
Step 3: Select Methods and Tools
Choose assessment methods based on what you're trying to decide, the competencies you're measuring, and practical constraints like time and budget. High-stakes decisions warrant more rigorous multi-method approaches. Developmental assessments can use simpler methods.
Best Practices:
Use multiple methods for important decisions (triangulate evidence).
Train all assessors on the rubric, common rating biases, and how to gather behavioral evidence.
Pilot new assessment methods on a small scale before full rollout.
Build buffer time into timelines; good assessment can't be rushed.
Use talent management software to automate scheduling, reminders, and data aggregation.
Create assessor guides with example questions, observation tips, and rating criteria.
Schedule calibration sessions before finalizing ratings.
Mistakes to Avoid:
Relying on a single method, which increases error and bias.
Skipping assessor training because "it's intuitive" (it's not).
Using tools that weren't validated for your specific use case or population.
Letting recency bias dominate (only considering recent performance vs. the full assessment period).
Asking untrained managers to conduct complex competency assessments without support.
Forgetting that method selection affects candidate experience and your employer brand.
Step 4: Conduct the Assessment
Execute your assessment plan consistently across all participants. Consistency is what makes competency assessment fair and defensible. If some people get rigorous multi-rater evaluations while others get a quick manager review, your data won't be comparable or credible.
Best Practices:
Follow the same process for every participant to ensure fairness.
Gather specific behavioral examples to support ratings, not just impressions.
Use structured guides and checklists to reduce bias and ensure coverage.
Take notes during assessments so you remember evidence and can provide examples.
Schedule adequate time for thorough assessment; rushing creates errors.
Calibrate ratings across assessors before finalizing scores.
Keep assessment data confidential and stored securely.
Mistakes to Avoid:
Letting one strong or weak competency create a halo/horn effect on others.
Rating based on potential rather than demonstrated performance.
Comparing people to each other instead of to the rubric standards (forced ranking pitfall).
Allowing personal relationships to influence professional competency ratings.
Skipping the evidence gathering step and just assigning gut-feeling numbers.
Announcing results before calibration, which locks assessors into initial ratings.
Step 5: Turn Results Into Action
Competency assessment data is worthless unless you act on it. The final step is converting assessment results into concrete development plans, talent decisions, and organizational capability building.
Best Practices:
Deliver feedback within two weeks of assessment while it's still relevant.
Focus development plans on 1 to 2 priorities; spreading too thin reduces impact.
Connect competency results directly to performance reviews and career conversations.
Use assessment data to design learning programs that address real gaps.
Track competency growth over time to measure development effectiveness.
Tie competency development to actual business objectives and team OKRs.
Celebrate competency growth to reinforce that development matters.
Mistakes to Avoid:
Conducting assessments but never following up with feedback or action.
Creating generic development plans that don't target specific competency gaps.
Treating assessment as a one-time event instead of an ongoing process.
Failing to connect competency data to talent decisions, which signals it doesn't matter.
Overloading development plans with too many priorities.
Ignoring organizational patterns in competency gaps that point to systemic issues.
Using competency data punitively instead of developmentally, which kills psychological safety.
Build a Stronger Team Through Competency Assessments With Teamflect
Identifying employee strengths and areas of improvement at work isn't guesswork when you have structured competency assessment in place. The difference between average and exceptional team performance often comes down to knowing exactly where each person stands and building targeted development from there.
If you're using Microsoft Teams for collaboration, Teamflect brings competency assessment directly into your workflow. Create custom competency frameworks, conduct 360-degree feedback, design individual development plans, and track competency growth over time without leaving Teams.
Frequently Asked Questions

What is a competency assessment?
A competency assessment measures whether someone demonstrates the specific behaviors, knowledge, and skills needed to perform a role effectively. It uses defined standards and rating scales to provide objective data for hiring, promotion, and development decisions.
What are the pros and cons of competency assessments?
Pros: They offer objective criteria, reduce bias, identify individual development needs, support succession planning, improve feedback, and build organizational capability.
Cons: They need upfront design investment, risk bureaucracy if overdone, require ongoing training, may feel unfair if poorly managed, and don’t fully capture performance results.
What are the main types of competency assessments?
Common types include 360-degree feedback, behavioral interviews, work samples and simulations, psychometric tests, assessment centers, self-assessments, and on-the-job observations.
How do you conduct a competency assessment, step by step?
Define roles and 6 to 10 key competencies, create a scoring rubric with behavioral anchors, choose suitable assessment methods, conduct assessments consistently with trained raters, and use results for development and talent decisions.
How do you keep competency assessments unbiased and equitable?
Use clear behavioral rubrics, train assessors to avoid biases, gather evidence from multiple sources, hold calibration sessions, document specific examples, audit results for fairness, and use validated methods.
What are best practices for leadership competency assessment?
Use multiple methods including 360-degree feedback, assess both results and behaviors, focus on future-level competencies, use simulations for key decisions, give development feedback regularly, tie results to succession planning, and reassess as needs evolve.
Which tools or software can be used for competency assessments?
Platforms like Teamflect integrate competency assessments with performance reviews and development planning. Talent management and specialized platforms offer rating workflows, behavioral interviewing, simulations, or psychometric testing. The choice depends on scale, tech stack, and integration needs.