The most viewed article I have ever written, by far, was “So how many words do you think it was?”, which I wrote back in 2012, almost ten years ago. I revised it once in 2015, and whilst I could revise it again based on the current versions of Trados Studio, I don’t really see the point. The real value of that article was understanding how the content can influence a word-count and why there could be differences between different applications, or versions of the same application, when analysing a text. But I do think it’s worth revisiting in the context of MT (machine translation), which is often measured in characters as opposed to words… and oh yes, another long article warning!
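Just to illustrate why the numbers never quite agree, here’s a small Python sketch, entirely my own toy example and not the logic of any particular tool, showing how the same sentence can produce different word-counts depending on how hyphens, numbers and punctuation are tokenised, and how the character-count changes depending on whether you include spaces:

```python
import re

text = "Re-analysing costs €1,500.00 in post-editing (MT) projects."

# One counting rule: anything separated by whitespace is a word.
words_whitespace = re.findall(r"\S+", text)        # 7 "words"

# Another rule: split on hyphens, punctuation and currency symbols too.
words_strict = re.findall(r"[^\W_]+", text)        # 11 "words"

# Character counts, as often used for MT pricing: with or without spaces?
chars_with_spaces = len(text)                      # 59 characters
chars_without_spaces = len(text.replace(" ", ""))  # 53 characters

print(len(words_whitespace), len(words_strict),
      chars_with_spaces, chars_without_spaces)
```

Neither count is “wrong”; they simply reflect different rules, which is exactly why two applications can disagree over the same file.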
What’s in a name?
“What’s in a name? That which we call a rose
By any other name would smell as sweet.”
In this soliloquy from Shakespeare’s Romeo and Juliet, Juliet isn’t allowed to be with Romeo because his family name is Montague… the Montagues being sworn enemies of the Capulet family. Of course she doesn’t care about his name; he’d still be everything she wanted irrespective of what he was called. The rose would still smell as sweet irrespective of what it was called. “Trados”, “SDL” and “RWS” have endured, or enjoyed, a feuding history as competitors in the same industry. Our names are our brand, and now that they’re changing, do we still smell as sweet? Sadly things don’t end well for poor Romeo and Juliet… but in our story we fare a little better!
Voice or Machine Translation?
Post Survey Note: Thank you to all those who completed the survey. It’s no longer live, but you can see the final results in the article.
For the last couple of years I’ve been enjoying the TCLoc Master’s degree at the University of Strasbourg. It’s been a really interesting time, helping me to fill in a lot of gaps, widening my technical knowledge around localization, and introducing me to the world of Technical Communication in general. This latter part was particularly interesting because half of our business at SDL relates to it; so having spent my time since 2006 working with our localization products, it’s been an eye-opener in many ways. I have done this in my own time and not as part of my job, but TCLoc does look like a course that’s tailor-made for SDL employees!
Badass…
“The badass is an uncommon man of supreme style. He does what he wants, when he wants, where he wants.” (Urban Dictionary, by dougdougdoug). There are in fact many definitions of what a badass is, but I like this part of this one because it really reflects what this article is about and why it’s needed. No clues so far… but let’s think anonymization!
AdaptiveMT… what’s the score?
AdaptiveMT was released with Studio 2017, introducing the ability for users to adapt the SDL Language Cloud machine translation to their own preferred style on the fly. Potentially this is a really powerful feature, since it means that over time you should be able to improve the results you see from your SDL Language Cloud machine translation and reduce the amount of post-editing you have to do. But in order to unlock this potential you need to know a few things about getting started. Once you get started you may also wonder what the analysis results are referring to when you see values appearing against the AdaptiveMT rows in your Studio analysis report. So in this article I want to try and walk through the things you need to know from start to finish… quite a long article, but I’ve tried to cover the things I see people asking about, so I hope it’s useful.
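To make the general principle a little more concrete, here’s a minimal Python sketch of the idea of learning from a post-edit and re-applying the correction to later output. It’s a deliberately naive toy of my own, remembering only one-word substitutions, and bears no relation to how AdaptiveMT is actually implemented:

```python
import difflib

class ToyAdaptiveMT:
    """Toy illustration only: remembers single-word corrections made
    during post-editing and re-applies them to later raw MT output."""

    def __init__(self):
        self.corrections = {}  # raw MT word -> preferred word

    def learn(self, mt_output: str, post_edit: str) -> None:
        mt_words, pe_words = mt_output.split(), post_edit.split()
        for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(
                None, mt_words, pe_words).get_opcodes():
            # Only remember unambiguous one-word-for-one-word swaps.
            if tag == "replace" and i2 - i1 == 1 and j2 - j1 == 1:
                self.corrections[mt_words[i1]] = pe_words[j1]

    def apply(self, mt_output: str) -> str:
        return " ".join(self.corrections.get(w, w) for w in mt_output.split())

engine = ToyAdaptiveMT()
engine.learn("You must colorize the image first.",
             "You must colourise the image first.")
print(engine.apply("Do not colorize scanned images."))
# -> "Do not colourise scanned images."
```

The real engine adapts to far more than individual words of course, but the sketch shows why the benefit only appears over time: something has to be learned before there is anything to re-use.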
Spot the difference!
I don’t know if you recall these games from when you were a kid? I used to spend hours trying to find all the differences between the image on the left and the one on the right. I never once thought about how that might become a useful skill in later life… although in some cases it’s a skill I’d rather not have to develop!
You may be wondering where I’m going with this, so I’ll explain. Last weekend the SFÖ held a conference in Umeå, Sweden… I wasn’t there, but I did get an email from one of my colleagues asking how you could see what changes had been made in your bilingual files as a result of post-editing Machine Translation. The easy answer, of course, is to do the post-editing with track changes switched on; then it’s easy to spot the difference. That is useful, but it’s not going to help with measurement, or give you something useful to discuss with your client. It’s also not going to help if you didn’t work with tracked changes in the first place, because you’d need some serious spot-the-difference skills to evaluate your work!
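As a rough illustration of what such a comparison involves, here’s a small Python sketch using the standard-library difflib module to compare a raw MT target segment with its post-edited version. The sentences are invented, and this is not how Post-Edit Compare itself works; it’s just the general idea of turning “spot the difference” into something you can see and score:

```python
import difflib

mt_target = "The user must click the OK button to confirm the settings."
post_edited = "Click OK to confirm your settings."

# Show exactly which words were removed (-), added (+) or kept.
print("\n".join(difflib.ndiff(mt_target.split(), post_edited.split())))

# A single similarity score: 1.0 means untouched, lower means more editing.
ratio = difflib.SequenceMatcher(None, mt_target, post_edited).ratio()
print(f"Similarity: {ratio:.2f}")
```

A score per segment is the kind of thing you can aggregate into a report and actually discuss with a client, rather than eyeballing two versions of a file.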
Qualitivity… measuring quality and productivity
In the last year or so I’ve had the pleasure of watching Patrick Hartnett use the SDL OpenExchange (now RWS AppStore) APIs and SDK to develop SDLXLIFF Compare, then Post-Edit Compare, then the Studio Timetracker, and finally a productivity tool that combined the first three into one, introduced a host of productivity metrics, and added a mechanism for scoring the quality of a translation using the Multidimensional Quality Metrics (MQM) framework. This last application was never released, not because it wasn’t good, but because it keeps on growing!
Then last month I got to attend the TAUS QE Summit in Dublin, where the idea was to present some of the work Patrick had done with his productivity plugin, get involved in the workshop-style discussions, and also learn a little about the sort of things users wanted metrics for, so we could improve the reporting available out of the box. At the same time TAUS were working on an implementation around their Dynamic Quality Framework (DQF) and were going to share a little during the event about their new DQF dashboard, which would also have an API for developers to connect to.
Continue reading “Qualitivity… measuring quality and productivity”
The ins and outs of AutoSuggest
The AutoSuggest feature in Studio has been around since the launch of Studio 2009, and based on the questions I see from time to time, I think it’s a feature that could use a little explanation of what it’s all about. In simple terms it’s a mechanism for prompting you as you type with suggested target text that is based on the source text of the document you are translating. So sometimes it might be a translation of some or all of the text in the source segment, and sometimes it might simply provide an easy way to replicate the source text into the target. You enter a character via the keyboard, and Studio then suggests suitable text that can be applied with a single keystroke. In terms of productivity this is a great feature, and given how many other translation tools have copied it in one form or another, I think it’s clear it really works too!
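If you’ve never seen AutoSuggest in action, this toy Python sketch shows the general typing-triggered idea and nothing more: you type a character or two and only the phrases starting with what you’ve typed are offered. The candidate terms are made-up examples, and the real feature obviously does a great deal more than a simple prefix match:

```python
# Candidate phrases might come from a translation memory, a termbase or an
# AutoSuggest dictionary; these German terms are just invented examples.
candidates = [
    "Benutzerhandbuch",
    "Benutzeroberfläche",
    "Betriebssystem",
    "Sicherheitshinweise",
]

def suggest(typed: str, phrases=candidates, limit=3):
    """Return phrases starting with what the user has typed so far."""
    typed = typed.lower()
    return [p for p in phrases if p.lower().startswith(typed)][:limit]

print(suggest("Be"))   # ['Benutzerhandbuch', 'Benutzeroberfläche', 'Betriebssystem']
print(suggest("Ben"))  # ['Benutzerhandbuch', 'Benutzeroberfläche']
```

The point is simply that the suggestions narrow as you type, so one more keystroke is usually all it takes to get the phrase you wanted.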
AutoSuggest suggestions come from a number of different sources, some available out of the box with every version of the product, and some requiring a specific license. The ability to create resources for AutoSuggest is also controlled by license for some things, but not for all. When you purchase Studio, any version at all, you have the ability to use the AutoSuggest resources out of the box from three places: Continue reading “The ins and outs of AutoSuggest”
Language Cloud… word-counts… best practice?
Best practice! This is a phrase I’ve had a love/hate relationship with over the course of my entire career… or maybe it’s just a love to hate! The phrase is something that should perhaps be called “Best Suggestions” and not “Best Practice”, because all too often I think it’s used to describe the way someone wants you to work, as opposed to anything that represents the views of a majority of users over a long period of time, or anything that takes into account the way different people want to work. In fact, with new technology, how can it be “Best Practice” when it hasn’t been around long enough in the first place? I think for a clearly defined and well-established process “Best Practice” has its place… but otherwise it’s often the easy answer to a more complex problem, or just a problem that is considered too hard to address.
Continue reading “Language Cloud… word-counts… best practice?”
Solving the Post Edit puzzle
It would be very arrogant of me to suggest that I have the solution for measuring the effort that goes into post-editing translations, wherever they originated from, but in particular machine translation. So let’s set that aside right away, because there are many ways to measure, and pay for, post-editing work, and I’m not going to suggest a single answer to suit everyone.
But I think I can safely say that finding a way to measure, and pay for, post-edited translations in a consistent way, one that provides good visibility into how many changes have been made and allows you to build a cost model you can be happy with, is something many companies and translators are still investigating.
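For what it’s worth, here’s one minimal way the measuring part could look in Python: score each segment by how much it changed between the raw MT and the final version, then map that score to a payment band. The thresholds and band names are invented purely for illustration; real rates are a commercial decision between translator and client, which is rather the point:

```python
import difflib

def post_edit_effort(mt: str, final: str) -> float:
    """0.0 means no edits were needed, 1.0 means completely rewritten."""
    return 1.0 - difflib.SequenceMatcher(None, mt, final).ratio()

def payment_band(effort: float) -> str:
    # Invented thresholds, purely for illustration.
    if effort < 0.10:
        return "light post-editing rate"
    if effort < 0.40:
        return "standard post-editing rate"
    return "full translation rate"

mt = "The device may not be used in wet rooms."
final = "Do not use the device in wet rooms."
effort = post_edit_effort(mt, final)
print(f"{effort:.0%} edited -> {payment_band(effort)}")
```

Consistency is the real win here: however you set the bands, applying the same measurement to every job gives both sides something concrete to negotiate over.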