The people on our team are what enables our company to deliver energy savings that are 10x more cost-effective than traditional energy service companies. We’ve found opportunities in hundreds of buildings and grown revenue and impact faster than Moore’s Law. Without incredible teammates, Carbon Lighthouse would have no chance of stopping climate change. Thanks to current and future teammates and the work of other fantastic organizations, we are on a path to creating a more profitable, cleaner planet.
We take hiring extremely seriously.
When done intelligently, hiring is a science, not an art. There is more than a century of peer-reviewed academic research exploring which elements of hiring processes work, which do not, and comparing the hypotheses to real-world data.
In establishing its hiring practices, the Carbon Lighthouse leadership team read and analyzed 400+ pages of academic research in the field of Personnel Selection. The short version of our process, the outcome of our analysis of the academic literature, is below.
Importantly, the process we developed has been met with resounding success. Our average annual turnover rate is substantially below industry norms of 19%.
Carbon Lighthouse’s hiring process works as follows:
- Online Application consisting of a cover letter, resume, and a few short-answer questions
- Phone Interview
- Sample Project
- In-person Interview
- Reference Checks
This process ideally feels both rigorous and straightforward. If it doesn’t, please let us know! We’re always looking for feedback and ways to improve.
The process is designed to offer significant opportunities for our team to get to know our candidates, and for candidates to get to know our team and ask any questions they may have.
Detailed Process: Why We Hire The Way We Hire
The full answer is long. We will attempt to explain in detail what we do during our hiring process, so candidates know what to expect as well as why we do it the way we do.
We’ll start with one hundred years of academic research.
Background Part 1: Research Grounding
The academic field of Personnel Selection offers some of the richest datasets available in the business world. Information collected about candidates before they are hired – interview scores, years of work experience, educational background, reference check results, work samples, etc. – can be compared to performance and retention years later. Companies interview, hire, promote, and terminate millions of people every year, providing a robust set of data on which to run statistics.
Academics in the field of Personnel Selection have been performing statistical analyses of this ever-growing dataset for more than a century, and have published thousands of peer-reviewed papers.
And the results of the research are striking:
1. The conclusions are remarkably consistent.
2. There is a 75% correlation between future performance and how candidates perform in certain key areas that can be incorporated into a hiring process.
What are these magical predictive activities? They are relatively simple: structured interviews and tests of general mental ability and integrity.
In a structured interview, every candidate receives the same set of questions, and responses are graded in the same way.
Tests of general mental ability are examinations of one’s ability to think critically and accurately in a time-constrained environment.
Unstructured interviews, years of educational experience, GPA, brainteasers, and years of work experience (beyond the first 5 years) do not significantly contribute to predicting performance.
We built our hiring process around these research conclusions.
Background Part 2: The Bias Menace
Bias is a pervasive problem in hiring. We are predisposed to think favorably of, and hence select, people who are similar to us.
How problematic are our natural biases in hiring? Here’s one example that has been recreated more than a dozen times across the world with similar results:
- University of Chicago Booth School of Business professors submitted thousands of identical resumes in response to job ads placed in major US cities. The only non-identical element of the resumes was the name: half the resumes had the names “Greg” or “Emily” at the top. The other half of the resumes had the names “Lakisha” or “Jamal” at the top.
- The result: Greg and Emily received 50% more calls than Lakisha and Jamal did.
This evidence of how strong biases can be is just one of many examples, and resumes are just one place bias comes up. Where jobs get posted creates bias. Inconsistent evaluation of candidates creates bias. Biases can be explicit and conscious, or implicit and subconscious, even among people who are conscientiously trying to avoid them.
Throughout our hiring process, we have taken steps to eliminate or reduce the effects of these biases. We believe that doing so is not only the ethical course of action, but also that it pays dividends for the company by helping make sure we continue to attract and select the very best people.
Defining Success: Trait Selection
The first step to success in hiring is defining what the job requires.
To state an obvious truth that frequently gets missed in many hiring searches: to find what you’re looking for, you first must know what you’re looking for. We start by writing down the traits we believe the successful candidate for the role should possess.
Some examples of traits we might look for in a job include: attention to detail, capability for self-management, effective working in groups, desire to learn, proven track record managing complex projects, etc.
We find this exercise often yields ten or more traits and skill sets that would help candidates be successful, but if we created roles that required people to excel in a dozen different areas then we’d be creating an impossible expectation. Anyone we put in such a role would perform poorly, and would probably be miserable, too.
So it’s important to limit the job scope to the five absolute most important traits. If we cannot choose only five traits that are needed for success, we know we need to rethink what is actually required for the role. Perhaps multiple roles are needed instead.
One key trait we are always looking for in new team members is strong verbal and written communication skills. The ability to communicate effectively with others is important throughout Carbon Lighthouse. Our sales are complex, multi-party transactions that require clear communication. Our engineers are often client-facing and spend weeks interacting with individuals in the client organization. Our organization is rapidly growing and clear, effective internal communications across teams and within teams is vital for effective growth.
After the desired traits are determined, we then determine where they can be measured in the hiring process. For example, if one trait is “proven experience as an external sales representative,” then this is identifiable on a resume: candidates can list how many years they have beaten quota on a resume and we can verify it. On the other hand, for a trait like “quickly adopts and incorporates feedback,” the resume is not the best place to look. Instead, we’ll try to assess this trait during an in-person interview by asking the candidate to pitch a product, receive feedback, and then repeat the pitch. This cycle might repeat through several iterations. Then we’d analyze how quickly and comprehensively the candidate incorporated the feedback.
These traits aren’t a secret. Just the opposite. We put them directly in the job ad in the Requirements section. If a candidate knows what the key traits are, they can focus solely on demonstrating excellence in those traits. We do our best to help team members perform at their best once they are members of the Carbon Lighthouse team, and we believe in extending this approach during interviews to make it as easy as possible for people to perform at their best.
Posting the Job

Once the traits and job ad are finalized, the next strategic decision is where to post.
Nothing contributes more to a successful search than starting with a great pool of candidates. And the best way to build that pool is to make sure as many people as possible have heard about, and are excited about, the job.
So we post our job opportunities far and wide wherever they might be relevant. Team members also share the job with their personal networks. Through these efforts, the job goes out to dozens of professional networks, alumni networks, job boards and interest groups.
Posting broadly also helps reduce bias. If we post a role only to Ivy League schools, we will only receive Ivy League candidates. If we post only to male-dominated engineering lists, we will only receive male engineering applicants. Posting everywhere, and spending the slight extra effort to find and post to lists and job boards targeting underrepresented groups, protects against creating an applicant pool that is narrower than ideal. This helps remove bias from our search while also helping us reach the most talented possible applicants.
Online Application Review: Resumes, Cover Letter, & Questionnaires
The first screen of candidates is from their resumes, cover letters, and online form questionnaires. We use this information to try to quickly identify which candidates might do well in the job and therefore should move on to a phone interview.
A lot of great, basic information is available from a resume, which is why most companies request them as part of their hiring process. Levels of experience, past job roles, educational history, etc. are all useful.
To reduce bias from our resume review we create a strict scoring rubric: we are looking for specific items in resumes, and we establish a clear grading scale for each role. For example, if experience is important for a particular role, part of the resume rubric would include a scoring system of 0 points for < 2 years of relevant experience, 1 point for 2 – 4 years of relevant experience, and 2 points for 5+ years of experience. Research shows that additional experience in a role beyond 5 years of experience does not greatly contribute to performance.
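As a concrete illustration, an experience band like the one above can be encoded as a simple scoring function. This is a hypothetical sketch, not Carbon Lighthouse’s actual tooling; the function names and the other rubric items are invented for the example.

```python
def score_experience(years: float) -> int:
    """Score relevant experience per the banded rubric described above:
    0 points for < 2 years, 1 point for 2-4 years, 2 points for 5+ years."""
    if years < 2:
        return 0
    if years < 5:
        return 1
    # Research suggests experience beyond 5 years adds little predictive
    # value, so the rubric caps the score rather than rewarding it further.
    return 2

def score_resume(rubric_scores: dict[str, int]) -> int:
    """Total a candidate's per-item rubric scores into one number."""
    return sum(rubric_scores.values())

# Hypothetical candidate with three rubric items.
candidate = {
    "relevant_experience": score_experience(3.5),  # 1 point
    "clear_writing": 2,
    "attention_to_detail": 1,
}
print(score_resume(candidate))  # 4
```

Because every reviewer applies the same bands, two reviewers scoring the same resume should land on the same (or nearly the same) total, which is what makes the screen consistent.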
We take a similar approach to cover letters and online questionnaires. We want to see how you write, what you think, and how passionate you are about our mission. These are reviewed the same way we review resumes: with a strict rubric keyed to look for specific items, such as clear, concise writing or an absence of typos and formatting problems (a signal of attention to detail).
The short-answer application questions provide guidance on what we’re looking for. This extra step is helpful since cover letters may go in many different directions. And we take the responses to the questions we ask online seriously: if we ask you why you’re excited about Carbon Lighthouse, it’s because we want to see and feel your excitement. If you give us a five-word answer, you will be unlikely to progress. Environmental passion is key for every role we hire for because the problems we face in stopping climate change are daunting and there is an enormous amount of work to get done over many decades. We want people who are committed to our mission for the long term.
Resumes and cover letters are reviewed by two people. The reviewers compare notes frequently during the review process to ensure they agree on how to interpret the rubric. If the two scores are more than one or two points apart, a third person may review that candidate. Candidates who exceed the passing score threshold move on to a phone interview.
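The double-review rule can be sketched as two small checks (illustrative only; the disagreement threshold and passing score vary by role and are invented here):

```python
def needs_third_review(score_a: int, score_b: int, max_gap: int = 2) -> bool:
    """Flag a candidate for a third reviewer when the two reviewers'
    rubric totals diverge by more than max_gap points."""
    return abs(score_a - score_b) > max_gap

def advances(score_a: int, score_b: int, passing_score: int) -> bool:
    """A candidate moves on to a phone interview when the average of the
    two reviewers' totals clears the passing threshold."""
    return (score_a + score_b) / 2 >= passing_score

print(needs_third_review(7, 10))  # True: a 3-point gap triggers a third review
```

Averaging two independent scores and escalating disagreements is a common way to keep any single reviewer’s bias from deciding the outcome.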
Phone Interview

Candidates with great applications progress to a phone interview. Why start with a phone interview? It saves time for both Carbon Lighthouse and our candidates. There’s no travel required, and we can keep phone interviews very brief, often only 10 minutes. We have a structured set of questions we ask, and concision is appreciated. If it’s not a fit, we can quickly move on.
Phone interviews serve a couple of purposes. First, they allow us to screen for people whose interpersonal skills might be a hazard (e.g. if a candidate is insulting or callous, we’re glad to stop the process then and there). Second, phone interviews allow us to begin to test for some of the traits that can’t be determined from a resume such as intellectual curiosity.
Typically, questions asked in a phone interview will be focused on a particular trait. For example, for a marketing role where a key trait is “proven experience launching new products” we might ask “tell us about a time you helped launch a new product. How did you go about doing it? How did you measure your success?”
As with resumes, answers to interview questions are evaluated against a pre-determined scoring rubric. The rubric is simple; the interviewer gives each answer a score from 0 to 3:
- 0 for no answer
- 1 for an answer with weak or no supporting examples or experiences
- 2 for a good answer that includes examples and makes an effort to tie the candidate’s experience to Carbon Lighthouse
- 3 for an excellent answer that includes examples and draws a strong connection between the candidate’s experience and background, Carbon Lighthouse, and the role
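Structured scoring can be sketched in a few lines: the same questions in the same order for every candidate, each answer graded 0-3 per the rubric above. The question wording comes from the examples in the text; everything else is hypothetical.

```python
# The fixed question list is what makes the interview "structured":
# every candidate answers the same questions in the same order.
QUESTIONS = [
    "Tell us about a time you helped launch a new product.",
    "Describe a time when you faced a problem that felt insurmountable.",
]

def total_score(answer_scores: list[int]) -> int:
    """Sum a candidate's per-question scores, enforcing the structure."""
    if len(answer_scores) != len(QUESTIONS):
        raise ValueError("every candidate must answer every question")
    if any(not 0 <= s <= 3 for s in answer_scores):
        raise ValueError("each score must follow the 0-3 rubric")
    return sum(answer_scores)

print(total_score([2, 3]))  # 5
```

Because every candidate is scored on an identical scale, totals are directly comparable across candidates, which is the property that makes structured interviews more predictive.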
For some roles we might conduct a second, longer phone interview.
Depending on the role we’re hiring for, a successful phone interview leads to either a sample project or an in-person interview. Which comes first depends on which traits we are looking for and which are the most important: we do first whichever activity better reveals the most important traits.
For example, if the most important trait is “quickly adopts and incorporates feedback,” a sample project will not help us understand the candidate’s abilities nearly as well as an in-person interview with role-play can. We would do an in-person interview first. On the other hand, if the most important untested trait is “ability to apply machine learning to unfamiliar data sets,” then a sample project will better reveal how a candidate manages data than an interview.
In-Person Interview

Interviews at Carbon Lighthouse are done in four sessions of 30-60 minutes each. They are generally conducted in person, though sometimes via video conference.
A recurring theme throughout our hiring process is that the questions during our interviews are structured. We ask the same questions in the same order to every candidate. Ideally, we even interview all candidates in the same room and with the same interviewers. The interviewers then grade responses against a clear and simple rubric, just like with the phone interview. All of these steps reduce bias and improve candidate selection.
Academic research shows that structured interviews have a 50%+ correlation with performance. By contrast, performance in unstructured interviews has only a 38% correlation with job performance.
Not only are structured interviews more predictive of performance on their own, they also combine better with other elements of our hiring process. When structured interviews are combined with a sample project, predictive ability increases to 75%.
The types of questions matter as well. There are generally two types of interview questions: Situational Judgment questions and Behavioral questions.
Situational Judgment questions are “what would you do if…” and Behavioral questions are “tell me about a time in the past that X happened. What did you do?” A typical Situational Judgment question we will ask is, “If you had to start modeling a building’s energy use, what do you think the biggest challenges would be? How would you start?” A typical Behavioral question we will ask is, “Describe a time when you faced a problem that felt insurmountable. What did you do?”
While research indicates both types of questions can be effective, it is easier and faster to be effective with Behavioral questions. So if you’re coming in for an interview, get excited to tell us stories about your past! Behavioral questions are also easier to verify than Situational Judgment ones: we can corroborate stories when we talk to references.
Interviews at Carbon Lighthouse also frequently involve role plays or other exercises designed to mimic the real job function. In general we want to mimic the actual job as much as possible through our hiring process. If you excel at and enjoy the interview questions and the sample projects, you will likely excel at and enjoy the actual job at Carbon Lighthouse. Which is the whole point!
Sample Project

Before or after the in-person interview, you will be given a sample project. Our sample projects are designed to achieve two goals: 1) help us better assess candidates, and 2) expose candidates to as much of the real job experience as possible.
A typical project for a sales development representative role would involve a handful of exercises: qualifying customers, researching to identify a prospective additional customer, drafting an engaging email for reaching out to a decision maker, and creating a compelling value proposition pitch for a particular target customer based on their unique needs.
The sample projects are designed to help us uncover, in the most reasonable way possible, a trait that can be difficult to measure but is highly correlated with success in every type of job: intelligence.
One trait the literature highlights over and over again as predictive for success in every single job function, regardless of skill level, is “general mental ability.”
Selecting people who are objectively intelligent is challenging. This is why we use sample projects. We give every candidate a hard sample project that mimics the actual work entailed for the job. People who do well on the sample project have a very high likelihood of doing well in the role. As importantly, people who enjoy the sample project have a very high likelihood of enjoying the work.
To keep the evaluation of sample projects as fair as possible we enforce a strict time limit. Candidates who have more time on their hands would generally outperform those who do not, but by putting a time limit on sample projects we reduce that risk. We also want to be respectful of candidates’ time and we believe that setting time limits furthers that goal.
Sample projects have proven to be a highly effective part of our hiring process. Success on sample projects has translated directly into success at Carbon Lighthouse, and has been predictive of specific aspects of performance. For example, people who were exceptionally organized in their Excel modeling during the sample project were exceptionally organized in other modeling later on. People who struggled in sales role-plays have benefitted from coaching later on.
Many candidates, including those who received job offers and those who did not, have shared that they really enjoy the challenge of the sample projects. Which is good since they’re designed to be as close to the real work as possible.
Reference Checks

We ask for three to four references. Our interviewing team will reach out to these folks to confirm that you will excel in the role in the way we hope. A typical question we might ask a reference is, “Can you tell me about a time when this person received constructive criticism. How did they react? Were they able to incorporate it?”
Reference checks are not just box checking. There are certain things we cannot learn from candidates directly and that are best learned from a third party. We try to be respectful of everyone’s time, so we will only request references if we are very excited about a candidate.
At the end of our reference call, we ask each reference to provide an additional reference or two. We ask for these additional references because we know the people you pick as references will be extremely supportive. When we hear the opinion of someone who knows the candidate’s work, but who wasn’t hand-picked as a reference, we can expect to gain new insight about the candidate. It’s also an opportunity to potentially catch red flags about interpersonal or other issues that may exist which the proffered references did not wish to mention. You can always ask us not to talk to specific people.
All in all we typically talk to 6-8 references for each candidate.
If you haven’t told some or all of your existing co-workers that you’re considering leaving, please let us know so we can avoid accidentally talking to them. We’ll remind you of this again when we ask for references.
References are the last stage of the process. Not everyone who provides references receives an offer, but by the time references are requested the odds for the candidate become favorable.
Our hiring process is detailed and thorough and requires us to spend much more time in hiring than most other companies do. It also requires some additional time from candidates compared to most other firms’ hiring processes (an extra 2 to 4 hours to complete the sample project, for example).
Taking this additional time to be rigorous up front, however, has worked great for Carbon Lighthouse and for everyone who has become a part of it. Our team member turnover is extremely low. Hiring carefully and thoughtfully is also critical given what comes next. While our hiring process is thorough, our training process is even more rigorous. Once people are on board, we pour resources into training and developing our team members.
Depending on their role, new teammates receive up to 200 hours of integrated lecture and field-based education, including role-plays and hands-on activities, before they begin contributing to Carbon Lighthouse. We also provide ongoing education and training. Additionally, team members set professional development goals and have budgets to pursue external education and professional development opportunities they cannot obtain from inside Carbon Lighthouse.
None of this would make financial sense if team members did not stay at Carbon Lighthouse for a long period of time. The rigor of our hiring process is the first step in building a long-lasting relationship between team members and the organization. Our processes are designed to ensure our ranks are filled with well-qualified, well-challenged, and well-supported team members. This greatly minimizes team member turnover and thus the total time spent hiring. And it ensures we can pour more resources into all team members.
To stop climate change we need a team of people who are not only a great fit for helping build Carbon Lighthouse, but who also will be happy and enjoy working here for decades to come.
Thus far, the thoroughness of our hiring process has led to a highly effective team that busts carbon and has a great time doing it. We hope you will join us!
Further Reading

Want to read the research that formed the foundation of our hiring process? Below is a list of the resources we found to be most helpful, along with very brief notes on each.
Validity and Utility of Selection Methods in Personnel Psychology: Practical and Theoretical Implications of 85 Years of Research Findings. Schmidt and Hunter. 1998.
- Meta-analysis and summary of the last century of personnel hiring research.
Implications of Methodological Advances for the Practice of Personnel Selection: How Practitioners Benefit from Meta-Analysis. Huy Le, In-Sue Oh, Jonathan Shaffer, Frank Schmidt. 2007.
- Similar to Schmidt and Hunter, but more recent and less descriptive.
Implicit Measurement of Attitudes, Stereotypes, and Self-Concepts in Organizations. Haines and Sumner. 2006.
- Provides background and context on the IAT (Implicit Association Test) and bias, and indications on how biases in personnel selection can be mitigated.
Interview with Adam Bryant, 2013.
- Easy-to-read piece on what Google has found works and what doesn’t in personnel selection. Note: Google has moved away from its historic brainteaser questions like “How many ping pong balls can fit in a bus?”, and its current hiring processes now closely match the academic literature.
The employment interview: A review of current studies and directions for future research. Therese Macan. 2009.
- Overview of the current state of research on employment interviews.
Half a Minute. Predicting Teacher Evaluations from Thin Slices of Nonverbal Behavior and Physical Attractiveness. Ambady and Rosenthal. 1993.
- Looks at the close correlation between first impressions and long-term impressions.
The Look of a Leader. 2014. The Economist.
- Easy-to-read piece on the failure to remove bias from selection processes within the Fortune 500 and other large global firms.
Personnel Selection. Paul Sackett, Filip Lievens. 2007.
- Detailed overview of academic research progress since 2000.
Racial Cognition and the Ethics of Implicit Bias. 2008.
- Easy-to-read piece citing more evidence of the impacts of bias in hiring processes.
The Validity and Incremental Validity of Knowledge Tests, Low-Fidelity Simulations, and High-Fidelity Simulations for Predicting Job Performance in Advanced-Level High-Stakes Selection. Lievens and Patterson. 2011.
- A study explaining and comparing the benefits of Situational Judgment Tests (SJTs) and High-Fidelity Simulations.
More On Mitigating Bias & Other Helpful Information
Combining Biodata Test and Interview Information: Predicting Decisions and Performance Criteria. Anthony Dalessio. Todd Silverhart. 1994.
- More detailed overview of the predictive performance of resume data.
Combining Predictors to Achieve Optimal Trade-Offs between Selection Quality and Adverse Impact. Sackett and Lievens. 2007.
- Focuses on different combinations of prediction methods and how to create the ideal combination of predictors.
Comprehensive Meta-Analysis of Integrity Test Validities. Ones et al. 1993.
- In depth meta-analysis of Integrity Tests.
Constraints and Triggers: Situational mechanics of Gender in Negotiation. Bowles, Babcock, McGinn. 2005.
- Article on factors that influence women’s performance in negotiations.
Interaction of Recruiter and Applicant Gender in Resume Evaluation: A Field Study. Cole et al. 2004.
- Looks at 2,000 job applicants, as reviewed by 40 recruiters, and examines the relationship between the gender of the reviewer and the gender of the applicant.
Sex Discrimination in Simulated Employment Contexts: A Meta-analytic Investigation. Davison and Burke. 2000.
- Another meta-analysis looking at the effects of gender in hiring decisions.
Stereotypes, Bias, and Personnel Decisions: Strange and Stranger. Landy. 2008.
- An article demonstrating disagreements in the academic community about bias and its impact in the workplace.
Understanding and Using the Implicit Association Test: Meta-Analysis of Predictive Validity. Greenwald et al. 2009.
- A meta-analysis of IAT.
The Use of Person-Organization Fit in Employment Decision Making: An Assessment of Its Criterion-Related Validity. Bell et al. 2006.
- Examines why not to use P-O fit for personnel selection.