The 10 Best Practices for Assessments
Last Updated: September 2022
The expert guide for evaluating technical candidates with online assessments using code challenges, spreadsheet challenges, multiple choice questions, open-ended questions, and video responses
On Coderbyte, an assessment consists of one or more challenges and questions that a candidate answers asynchronously (i.e. independently). Assessments can be recorded within an online environment or completed unrecorded in the candidate's own environment and uploaded afterward; they can be timed or untimed, with or without a deadline; and grading can be automated, manual, or both.
Coderbyte is primarily used for junior, mid, and senior technical roles across software development, data science, and data analysis. However, Coderbyte is increasingly used to assess candidates for other software-adjacent roles, including go-to-market, product management, and account management.
In terms of workflow, assessments are sometimes used early on to replace resume reviews or immediately after initial phone screens. They can also include rigorous manually-graded take-home assignments at the end of a recruiting process to help identify the best candidate amongst finalists. In either case, the primary goal for recruiting teams should be maximizing completion rates, which can be accomplished by following the best practices below. Here are 10 best practices for evaluating candidates with online assessments:
- Give every candidate a shot. The entire purpose of assessments is to facilitate equitable, data-driven recruiting decisions. If you hand-pick which candidates get assessed, you’re introducing bias at the top of the funnel. Use assessment software that includes unlimited candidates, or a very high cap at an affordable price, so that you can give every applicant a shot at an interview.
- Use assessments as filters, not selectors. Organizations must understand a nuanced but critical point: assessments with automated grading can identify unqualified candidates (i.e. who not to hire), but they cannot identify qualified candidates (i.e. who to hire). In other words, if you design an effective assessment, you will be able to filter out candidates who clearly can’t do the job. But the candidates who pass the assessment aren’t necessarily qualified for the job. There is no assessment in the world that can automatically qualify candidates for interpersonal skills, collaborative abilities, passion and enthusiasm, etc. Don’t use assessments for the purpose of identifying your top candidate. Instead, use assessments to filter out clearly-unqualified candidates and prioritize your pipeline accordingly.
- Keep assessments as short as possible. It’s tempting to make your assessments as robust as possible, with challenges and questions that evaluate every skill listed on the job description. If you’re one of the top technology companies in the world, maybe you can get away with it. But top candidates won’t have the time to take longer assessments because they’re being recruited by many other companies, some of which will be willing to bypass the assessment phase altogether. The opposite is true for the worst candidates, who will have plenty of time to spend taking assessments.
Even a very straightforward challenge will filter out 60%-80% of unqualified candidates, which is an ideal target range. Any higher than that and you’re probably inadvertently filtering out more good candidates than bad candidates. The benefit of screening for additional skills is thoroughly outweighed by the diminishing returns of lengthy assessments. Drop-off rates begin increasing once your assessment reaches 30 minutes, and dramatically so after 60 minutes, at which point the assessment is completely counterproductive. Assessments aren’t meant to be perfect — they accelerate prioritization, but your team still needs to conduct actual in-person or remote interviews.
Companies sometimes legitimately need to administer lengthier “real-world” manually-graded projects (2+ hours) at the end of a recruiting process to identify the right candidate with the right qualifications for a specific role. A brief white-boarding exercise or challenge simply isn’t enough. Compensating candidates for doing a lengthier take-home project can increase your completion rates. Further, it can help your employer brand avoid the negative sentiment generated by essentially asking candidates to do “unpaid labor.”
- Keep assessments focused on real-world skills. You need to ensure that the platform you select has a library of challenges and questions that match the skills and experience you’re hiring for. For example, if you’re creating assessments for full-stack software developers, make sure to find a solution that offers real-world challenges for modern languages and frameworks like React, and not just algorithmic challenges for legacy languages.
- Prepare candidates for success. As the employer, you have full context about the assessment and recruiting process. But for candidates who genuinely want to come in fresh and prepared, the process can vary greatly from company to company, causing a lot of stress. It’s important for the candidate to receive clear information about how long the assessment should take, which skills they’ll be tested on, and what the next steps are after completing the assessment.
- Avoid invasive cheating prevention techniques. Cheating and bad actors are unfortunately rampant in all forms of standardized testing and credentialing, so it’s reasonable to want to mitigate cheating on assessments! The issue is that many of today’s most common ‘cheating detection’ mechanisms are lazy and poorly considered. They lead to assessment experiences that are frustrating and unrealistic, not to mention discriminatory and unethical.
Tools like HackerEarth offer ‘advanced proctoring’ which creates an invasive and completely unrealistic coding environment for candidates. Candidates have to turn their webcams on to ensure it’s really them, while basic functionality like copy and paste or navigating the browser is blocked. The whole point of assessments is to make hiring as unbiased and meritocratic as possible, but webcam recordings directly invite discrimination. Further, self-respecting developers will feel uncomfortable participating in such an assessment, while desperate ones can still easily cheat by simply having a second computer or phone right next to them.
Most cheating detection techniques will reflect poorly on your employer brand, while repelling qualified candidates and inviting sophisticated bad actors. If cheating is so rampant amongst your candidates, you might want to reconsider your channels for sourcing. Regardless, there are superior approaches to detecting cheating and identifying strong candidates.
At Coderbyte, we take a more principled approach to product management and have discovered straightforward and innovative ways to detect cheating without sacrificing the candidate experience.
- Use time-tested challenges and questions. It's enticing to create custom assessments, challenges, and questions to test candidates for the requisite skills. But creating unbiased assessments is truly difficult, and the only way to really test them is to do so at scale and get feedback over a long period of time from both candidates and hiring managers.
A decade of engineering experience does not qualify your VP of Engineering to create assessments, because it’s a completely different skill set. Our challenges on Coderbyte have been iterated on for years based on feedback from millions of candidates. Issues related to readability, test cases, edge cases, and time constraints have been ironed out. The carefully crafted challenge that your engineering team thinks is so novel probably doesn’t work well in an automated assessment where the candidate can’t ask questions to get clarity.
Consider just a few common mistakes you’ve probably already made. You may have asked a question that assumes cultural knowledge, like how many innings are in a baseball game; that fact isn’t common knowledge amongst people from all backgrounds, who are then put at a disadvantage in your assessment process. Or your team may be looking only for a specific answer, having never considered an alternative but equally acceptable answer, which is then automatically graded as wrong.
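That second mistake often comes down to how the grader compares answers. A minimal sketch of a more forgiving check (a hypothetical grader for illustration, not Coderbyte’s actual implementation) that accepts any of several equivalent answers and normalizes away incidental differences:

```python
def grade(candidate_output, accepted_answers):
    """Return True if the candidate's output matches ANY accepted answer.

    Normalizing (strip whitespace, lowercase) before comparing avoids
    failing candidates for incidental differences like casing or padding.
    """
    normalize = lambda s: s.strip().lower()
    normalized = normalize(candidate_output)
    return any(normalized == normalize(ans) for ans in accepted_answers)

# "1,2,3" and "[1, 2, 3]" could both be reasonable ways to return the
# same list, so the grader enumerates every form it will accept.
accepted = ["1,2,3", "[1, 2, 3]"]
print(grade("  [1, 2, 3] ", accepted))  # True
print(grade("1,2,3", accepted))         # True
print(grade("1,2", accepted))           # False
```

The key design choice is enumerating equivalent answers up front (or comparing canonicalized values) rather than string-matching a single “official” solution.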
- Anonymize candidate information for decision-making. Mitigate bias in your interview process by separating candidate performance from personally identifiable information. With Coderbyte, you can activate anonymization for candidate reports, so that the candidate's information will be anonymized for read-only admins.
- Give candidates reasonable expectations about next steps. The best talent acquisition teams are transparent, over-communicate rather than under-communicate, set reasonable expectations, and ensure the candidate and employer are on the same page. Thoughtfully send rejections rather than ghosting candidates. Don’t burn bridges, as today’s candidate might be tomorrow’s customer, partner, referrer, or a candidate again in the future!
Remember, it’s a candidate’s market, and will be for the foreseeable future. Today’s candidates are in demand and desire transparency, flexibility, and authenticity. That might mean texting rather than emailing if that’s how the candidate prefers to communicate. Or it might mean bringing a sales rep into the interview process to give the candidate an actual demo of the product they’ll be building, selling, or marketing. Whatever you do, think about how it connects the candidate with what makes you unique as an employer, and then be straightforward and candidate-driven.
- Integrate assessments seamlessly into your recruiting process. When it comes to recruiting, speed is a competitive advantage. Integrate your assessment software with your recruiting management tool or ATS to automate assessment invites based on candidate stage, reporting upon assessment completion, and scheduling next steps.
You can integrate Coderbyte with 100+ tools including Slack, Workable, and Greenhouse. Use a tool like Growhire to begin automation from the moment a candidate applies to a job.
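This kind of automation typically hinges on stage-change events from the ATS. As an illustration only (the event and payload field names below are invented for this sketch and do not correspond to Coderbyte’s, Greenhouse’s, or any vendor’s actual webhook API), an invite trigger might look like:

```python
import json

def handle_stage_change(payload):
    """Decide whether an ATS stage-change event should trigger an
    assessment invite. Returns the invite details, or None to ignore.

    Field names ("new_stage", "candidate", "job") are hypothetical.
    """
    if payload.get("new_stage") != "assessment":
        return None  # only act when a candidate enters the assessment stage
    return {
        "candidate_email": payload["candidate"]["email"],
        "assessment_id": payload["job"]["assessment_id"],
    }

# Simulate the JSON body an ATS webhook might POST to your endpoint.
event = json.loads(
    '{"new_stage": "assessment",'
    ' "candidate": {"email": "dev@example.com"},'
    ' "job": {"assessment_id": "fullstack-screen"}}'
)
invite = handle_stage_change(event)
print(invite["candidate_email"])  # dev@example.com
```

In practice the returned details would be passed to the assessment platform’s invite endpoint; the point of the sketch is that gating on the candidate’s stage keeps the invite fully automatic and immediate.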