
Postcard CV


As a hiring manager it'll often be days between engagements with a candidate. I am not renowned for my memory but, even if I were, it'd be hard to remember all the relevant details of all the relevant candidates during a recruitment drive. Over the years I've developed a way of taking notes which I find helps me to cheaply review and keep track of what I think of a candidate and why, and which gives me the data I need at each stage.

When we open recruitment for a new position I'll start a new directory, and each applicant will get a text file in it as I read their CV. I use very simple markup to record my thoughts in the file and to leave notes for myself to pick up when I come back.

I have a handful of key requirements:
  • I want my notes on each candidate to be in one place (for ease of consumption).
  • I want my notes across candidates to be consistent (for ease of comparison and navigation).
  • I want my notes to be put down in real time (for efficiency).

Let's have an example. Imagine a candidate, call him Rupert Rowling. When Rupert's CV arrives, I'll start a new file, Rupert Rowling.txt, and in it I'll type:
== Details ==

== CV ==
I'll paste any relevant details from the agent, a cover letter, or our HR staff into the top section, and then add my thoughts as I read the CV into the bottom. For Rupert, it might contain this kind of thing:
== Details ==
Currently a teacher, but extremely keen to move into testing. 
Lives in Stoke. Would relocate for the role. 

== CV ==
- testing 
+ tech support role
? ... but 15 years ago
+ open source development
The annotations represent potential negative (-), potential positive (+), potential concern or query (?), a sub-thought (...), and very occasionally a WTF (!). You'll notice that I say "potential"; at this point, all the evidence is from written materials which are, at best, a biased representation of the candidate filtered through my own prejudices.
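Because the annotations are so regular, they're easy to process mechanically if you ever want a quick tally of how a CV read. Here's a minimal sketch of that idea; the marks follow the conventions above, but the `tally` function and the example text are illustrative, not tooling I actually use:

```python
from collections import Counter

# Annotation marks as described above; checked in order, longest-prefix
# cases (like "...") listed after the single-character marks they can't
# collide with.
MARKS = {"-": "negative", "+": "positive", "?": "query",
         "...": "sub-thought", "!": "WTF"}

def tally(notes: str) -> Counter:
    """Count annotation marks at the start of each line of a notes file."""
    counts = Counter()
    for line in notes.splitlines():
        line = line.strip()
        for mark, label in MARKS.items():
            if line.startswith(mark + " "):
                counts[label] += 1
                break
    return counts

example = """== CV ==
- testing
+ tech support role
? ... but 15 years ago
+ open source development
"""
print(dict(tally(example)))  # counts per annotation type
```

This stays deliberately dumb: a line starting `? ...` counts as a query, because the leading mark carries the headline judgement and the `...` is subordinate to it.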

Here I'm comparing the candidate to the job advert, noting things that stand out, wondering about questions that I would ask if they get to the next stage. This section is rarely more than 10 lines long. I think of it as a kind of telegram, or a postcard of my opinion of the candidate's CV. Once I'm done, I'll add a final line which says what I think I'd do:
No testing, but tech support is often a good fit. Let's interview.
If I'm recruiting with someone else I'll add a subsection with their thoughts:
=== Sheila ===
Worth interviewing.
I like Rupert's obvious enthusiasm for testing.
His experience with foreign languages could be useful. 
If there's no agreement, we'll talk until there is. When there's agreement, I'll copy-paste data from my notes to the place I have to log decisions inside the formal company HR process.

At any point, if I think we'd interview the candidate, I will also make the next section and put any questions I thought of while reading the CV into it. For Rupert this might include:
== Phone ==
> why is now the time for a career change?
> what does teaching give you that would be valuable as a tester?
Notice another annotation (>). I use it in this section to represent my input: either questions I asked during a phone conversation or questions I planned to ask beforehand.

I don't generally note down generic questions in this section in advance. I use a separate checklist for those, and sometimes refer to it during the call. For particular specialist roles, I will make up a set of questions designed to exercise the candidate's knowledge and experience and, again, keep that in a separate place, pulling questions from it during the interview, based on the direction the conversation has taken.

In this example, Sheila and I both think Rupert has potential, so a phone interview is set up. Shortly before it, I'll re-skim the CV, review my notes, and add any more questions that come to mind. During the call, I'll record questions and answers as we go. Here's Rupert's answer to one of my prepared questions, and then my follow-up:
> what does teaching give you that would be valuable as a tester?
* prioritisation, time-boxing: content fits to a lesson
* balance with opportunism: follow kids' interest

> when to take the opportunity?
* experience
* rules of thumb: 
* ... class mood; % covered; proximity to the curriculum
The annotation this time (*) is a standard bullet point, where I'll aim to make each a distinct thought from the candidate.

During the call, I'll create a summary sub-section and, when I spot something notable or a pattern, I'll write it in using the annotations from before:
=== Summary ===
+ spoke very confidently
+ good reasons for testing
+ ... appreciated testers on his OS projects
? OK at coming up with test ideas 
- tech support was on Vic 20 network
At some point after the call I'll debrief with any other interviewers, compare to other candidates, and decide whether we'd like the candidate to proceed. If needed, I'll tweak the summary to include those conversations and, as before, it'll form the basis of what I put into the HR machinery.

The next phase in the Linguamatics process is a technical exercise. Again, I'll enter my assessment into the text file as I go through the candidate's response, with the same annotation conventions we've already seen. My review will have a Summary section, and also thoughts on questions I might like to ask at face-to-face interview.

In this made-up story, Rupert's exercise is good enough for someone with no testing experience, so Sheila and I decide we'll ask him to come into the office to talk to us, and some more of the team.

At this point, my personal process switches to being paper-based. Interviews are hard enough for candidates without the additional distraction of me typing, and the possible perception that I'm not paying attention to them. Later on, I'll summarise aspects of the face-to-face interview back into my notes, but those are usually conclusions of the interview panel rather than anything detailed about the interview.

So that's my current note-taking approach, tuned by me for my requirements, and enabled by other things I do, such as:
  • Setting things up so that making a new file is just a handful of keystrokes. 
  • Learning to touch type.
  • Developing my annotation conventions.
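To give a flavour of that first point, here's a minimal sketch of a "handful of keystrokes" setup: a small function that creates a candidate's notes file pre-filled with the section skeleton used above. The function name and template are illustrative assumptions, not my actual setup:

```python
from pathlib import Path

# The empty skeleton every candidate file starts from, as shown earlier.
TEMPLATE = "== Details ==\n\n== CV ==\n"

def new_candidate(name: str, directory: Path = Path(".")) -> Path:
    """Create <name>.txt in the recruitment directory, ready for notes."""
    path = directory / f"{name}.txt"
    if path.exists():                      # don't clobber existing notes
        raise FileExistsError(path)
    path.write_text(TEMPLATE, encoding="utf-8")
    return path

# e.g. new_candidate("Rupert Rowling", Path("tester-role"))
```

Bound to a shell alias or editor shortcut, something like this keeps the cost of starting a new candidate file close to zero, which is what makes real-time note-taking sustainable.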
Have you got a custom system? I'd love to hear about it.
Image: Discogs
