TAL .015: How to Use Writer Scoring Sheets to Improve Output

In this letter, I’m revisiting writer scoring (something I talked about in issue .008 of The Arete Letter).

I’ve got a brand-new template I can’t wait to share with you.

1. The Problem

‘Consistency’ is a word that gets thrown around a lot in marketing.

But, too often, the feedback we give our team isn’t consistent.

Especially when it comes to writing.

Most of the time, writing feedback is project-specific: in-line markup and comments.

Many teams don’t have clear quality indicators that translate across different projects.

Which makes it hard for writers to track their progress over time.

And no long-term tracking means improvement is much harder.

2. The Solution

In TAL .008, I said I was trialling a ‘project scorecard’ and an accompanying spreadsheet.

While I’m not sold on the effectiveness of project scorecards (yet), I do think a sheet for ‘writer scoring’ is very useful.

These are the metrics I recommend tracking (I’ve sketched the column layout in code just after the list):

  • Overall Marketing Effectiveness, out of 10 (1–5 is chaff content that’s not fit for purpose, and 6–10 is kernel content that is fit for purpose)
  • Technical Writing Quality, out of 10 (how good, at a sentence level, is the writing?)
  • Structure and Sequencing, out of 10 (how well is the deliverable structured in terms of logical sequencing and layout?)
  • Brief Conformance, out of 5 (how well does the deliverable conform to the brief?)
  • Brand Conformance, out of 5 (how well does the deliverable conform to brand voice and messaging requirements?)
  • Style Conformance, out of 5 (how well does the deliverable conform to your textual style guide?)
  • Format Conformance, out of 5 (how well does the deliverable conform to your format requirements?)
  • Total Writing Minutes (how much time, in minutes, did the writer spend on the deliverable before publication?)
  • Total Editing Minutes (how much total editing time, in minutes, did the deliverable require before publication? Note: this is time spent by your reviewer/editor, not time spent self-editing.)
  • Number of Versions (how many versions of the deliverable did the writer submit before it was approved for publication?)
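
If it helps to see all 10 columns in one place, here’s a minimal sketch of a single scoring-sheet row as a Python dataclass. The field names are mine, not part of the template; rename them to match your own sheet.

```python
from dataclasses import dataclass

# A minimal sketch of one scoring-sheet row, mirroring the columns above.
# Field names are illustrative, not part of the Airtable template.
@dataclass
class WriterScore:
    writer: str
    project: str
    marketing_effectiveness: int  # out of 10 (1-5 chaff, 6-10 kernel)
    technical_quality: int        # out of 10
    structure_sequencing: int     # out of 10
    brief_conformance: int        # out of 5
    brand_conformance: int        # out of 5
    style_conformance: int        # out of 5
    format_conformance: int       # out of 5
    writing_minutes: int
    editing_minutes: int
    versions: int
```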

Here’s why I like each of those metrics.

Overall Marketing Effectiveness tells you whether your writers are producing something usable on their first draft or not. If you have someone on your team who’s consistently producing chaff content, that’s something you need to address.

Technical Writing Quality and Structure and Sequencing are both good indicators of writing skill. If someone keeps falling short in these areas, investing in writing training can be a good idea. Article/page templates and the use of tools like Verbatim can also help.

Brief, Brand, Style and Format Conformance all relate to staying inside the boundaries. If someone has good Technical Writing Quality and Structure and Sequencing scores but consistently low conformance scores, it’s probably a sign that a) they have trouble understanding/navigating the brief/style guide, or b) they don’t take complying with those documents seriously.

Total Writing and Editing Minutes are both good gauges of overall efficiency. For example, if someone writes fast and requires only a little editing, they’re probably the type of writer you want to keep on board.

If a writer rushes through drafts, though, and leaves the editor to take up the slack, that’s a problem (your editor’s time will generally be more valuable than your writer’s). For slow writers who require lots of editing time, think about coaching options and resources/tools to help them improve.
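
One rough way to spot this pattern from the sheet is an editing-to-writing ratio. The ratio isn’t part of the template, and the numbers below are made up; it’s just a quick illustration.

```python
# Illustrative only: compare editor time to writer time for a deliverable.
# A ratio well above 1 means the editor spent more time than the writer did.
def edit_to_write_ratio(writing_minutes: int, editing_minutes: int) -> float:
    if writing_minutes == 0:
        return float("inf")
    return editing_minutes / writing_minutes

print(edit_to_write_ratio(120, 20))  # ~0.17 -> solid writer, light editing
print(edit_to_write_ratio(45, 90))   # 2.0  -> editor picking up the slack
```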

Number of Versions is a good way to see how editorial feedback is being implemented. If a writer has a consistently high number of versions (4+), it probably means a) they’re not implementing feedback properly, b) the editor is being overly picky, or c) there’s some kind of miscommunication going on.

Together, these 10 metrics can help you get a clear picture of exactly what’s going on inside your writing teams.

Once you can track what’s happening, you can diagnose problems and devise solutions.

That means your writers get the support they need to excel – and you’ll stop wasting budget on easily fixed inefficiencies.

3. Implementation

Tech Needed: Spreadsheet software like Airtable, Google Sheets, or Excel

Ease of Uptake: Easy

  1. Go to this Airtable Universe link.
  2. If you’re using Airtable, click ‘Use Template’.
    1. If you’re using another spreadsheet tool, click ‘Explore the Base’ and then copy the column layout into your spreadsheet of choice.
  3. Strip out the three existing examples.
  4. In the first row, enter data for an exemplar piece – a content asset written by your team that scores close to the top for all subjective metrics. This gives editors a benchmark to work off and helps ensure that scoring is consistent.
  5. From now on, track each writing project that you and your team deliver.
    1. Your editor(s) should be the only people entering scores; your writers shouldn’t have access to the sheet.
    2. You should be able to get Total Writing Minutes and Total Editing Minutes from your team’s timesheets.
  6. At the end of each quarter, analyse the average scores for each writer and take action as needed (see the analysis sketch after these steps).
    1. You can also use project-specific scores for post-mortem analyses – for example, to identify what went wrong if a particular project went hugely over budget.
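
If you export the sheet to CSV at quarter’s end, a short script can pull those averages for you. This is just a sketch: the column headers below are assumptions, so rename them to match whatever your sheet actually uses.

```python
import csv
from collections import defaultdict

# Sketch: average each numeric column per writer from a CSV export of the sheet.
# The header names here are assumptions; edit them to match your own columns.
METRICS = [
    "Marketing Effectiveness", "Technical Writing Quality",
    "Structure and Sequencing", "Brief Conformance", "Brand Conformance",
    "Style Conformance", "Format Conformance",
    "Writing Minutes", "Editing Minutes", "Versions",
]

def quarterly_averages(csv_path: str) -> dict:
    totals = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(int)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            writer = row["Writer"]
            counts[writer] += 1
            for metric in METRICS:
                totals[writer][metric] += float(row[metric])
    return {
        writer: {m: round(totals[writer][m] / counts[writer], 1) for m in METRICS}
        for writer in totals
    }

# Hypothetical usage:
# print(quarterly_averages("writer_scores_q3.csv"))
```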

By Duncan Croker

Duncan is a copywriter with a background in editing and storytelling. He loves collaborating with brands big and small, and thrives on the challenges of hard marketing.