After completing two translation tests for a prospective customer, I was given some feedback. It was not what I wanted to hear. From ‘translator is not quite familiar with the industry terminology’ to ‘needs supervision’, the comments were stinging. Why would I feel bothered by an anonymous critique, you might ask? For the same reason it bothers you when a stranger tells you that you don’t know how to run your business.
I wrote back to my prospective customer and expressed my frustration at the mismatch between the severity of the criticism and the kind of “errors” found in my translation tests. The main point I tried to make was that many of the “errors” were merely the preferences of the translators or editors who checked my translations. Weeks later, I received an email expressing concern, approval of my vendor status and an offer to do better at communicating. I replied to my prospective customer by saying that, apparently, she takes translation test results as only one of many factors in deciding whom to hire as a freelancer. Her message reads as follows:
I definitely do not just use the errors in the sample to determine the approval of a translator. I take into account many different things. I even take into account the tone and wording of the e-mails and telephone conversations in general. That tells me a lot about a person. A sample of 350 words is hardly enough to base my entire judgment on.
What bearing does this have on translation quality control? It shows that error counting does nothing to tell you what you need to know about a freelance translator. I’ve been thinking about the whole business case for implementing translation quality standards, and I think that some in the industry are so focused on finding errors that they can’t see the forest for the trees.
For QA to work in any field, it has to offer practical, cost-effective instruments to measure things. But first you need to find things that are measurable. Languages are not like math or geography or archeology. How do you measure a language? How do you even measure whether a document is well written? By counting the typos or syntax errors? And then how do you measure style?
I posit that none of these things can be measured in any meaningful way. Instead, I propose a different way to ‘measure’ translation quality: effectiveness.
Now you’ll tell me, ‘But effectiveness cannot be measured!’ And you might be right…to a point. Consider marketing campaigns. An effective marketing campaign is one that increases sales and name recognition and gets people talking about your company and your product. A similar strategy can be employed for translation effectiveness. The beauty of it is that the focus is on business results.
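To make the idea concrete, here is a minimal sketch in Python of what an effectiveness-based check might look like. Everything in it, the metric names, the figures, the notion of comparing a source-market campaign with its translated counterpart, is a hypothetical illustration of the approach described above, not a real client’s data or an established methodology.

```python
# Hypothetical sketch: judging a translation by business outcomes rather than error counts.
# All names and figures below are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class CampaignMetrics:
    visits: int       # visits to the campaign's landing page
    conversions: int  # sign-ups, purchases, inquiries, etc.
    mentions: int     # times the product was mentioned or shared

    @property
    def conversion_rate(self) -> float:
        return self.conversions / self.visits if self.visits else 0.0


def effectiveness_lift(before: CampaignMetrics, after: CampaignMetrics) -> float:
    """Relative change in conversion rate once the translated campaign is live."""
    if before.conversion_rate == 0:
        return float("inf") if after.conversion_rate > 0 else 0.0
    return (after.conversion_rate - before.conversion_rate) / before.conversion_rate


# Example: the source-language campaign vs. its translation in the target market.
source_market = CampaignMetrics(visits=10_000, conversions=250, mentions=40)
target_market = CampaignMetrics(visits=8_000, conversions=260, mentions=55)

print(f"Conversion lift: {effectiveness_lift(source_market, target_market):+.1%}")
```

The point of the sketch is not the arithmetic but the inputs: instead of tallying typos, it asks whether the translated material does the job the client paid for.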
This is an ongoing analysis and a work in progress. I am not claiming to have found the ultimate solution to measuring translation, but my experience strongly suggests that we are going about it the wrong way. Go ahead, measure words and errors all you want. You will end up empty-handed.