Back in 2015 I wrote an article called “Good bugs… bad bugs!” which was all about the unintended positive side effects of computer software not working as intended. I’d actually forgotten about this article until this weekend, when I was pondering my own behaviour in responding to a post in the RWS Community. In fact it was my wife who got me thinking, as I’d allowed the community thread to frustrate me because I couldn’t understand why some users can’t see reason… my reason! I had comfortably created two buckets in my mind: either they are simply incapable of understanding and I’m talking to a brick wall, or they won’t understand because they don’t want to listen since it doesn’t suit their own agenda. It didn’t help that none of my suggestions were even acknowledged, but nonetheless it took my wife to remind me that perhaps I wasn’t listening to them properly!
The most viewed article I have ever written, by far, is “So how many words do you think it was?” which I wrote in 2012, almost ten years ago. I revised it once in 2015, and whilst I could revise it again based on the current versions of Trados Studio I don’t really see the point. The real value of that article was in understanding how the content can influence a word count, and why there could be differences between different applications, or versions of the same application, when analysing a text. But I do think it’s worth revisiting in the context of MT (machine translation), which is often measured in characters as opposed to words… and oh yes, another long article warning!
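To see why two applications can disagree on a count of the very same text, it helps to remember that “what is a word” is a tokenisation decision. The short sketch below (the text and the two counting rules are my own illustrative choices, not how any particular CAT tool actually counts) shows how hyphenated words, numbers and dates swing a word count, while a character count sidesteps the question entirely, which is one reason MT pricing often uses characters:

```python
import re

text = "The order of 1,500 items ships on 2022-03-01 to the co-founder's office."

# Rule A: split on whitespace only - "1,500", "2022-03-01" and
# "co-founder's" each count as a single word
count_a = len(text.split())

# Rule B: split on any run of non-alphanumeric characters -
# "co-founder's" now becomes three tokens (co, founder, s)
count_b = len(re.findall(r"[A-Za-z0-9]+", text))

# Character count, the unit many MT providers price by
chars = len(text)

print(count_a, count_b, chars)
```

Same sentence, two defensible word counts, and neither is “wrong” — which is exactly why analyses from different tools rarely match to the word.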
There have been a few ups and downs getting SDL Analyse off the ground, but it’s finally there and it’s worth it! If you have no idea what I’m referring to then perhaps review this article first for a little history. This app was actually released as the 200th app on the SDL AppStore in February this year, but in addition to the applause it received for its functionality there have been some less positive aspects for some users that needed to be addressed.
But first, what does it do? Quite simply, it allows you to get an analysis of your files without even having to start Studio, or without having to create a Project in Studio. If you’re a regular reader of this blog you may recall I wrote an article in 2014, and in 2011 before that, on how to do an analysis in Studio by using a dummy project. In all that time there has been only one app on the AppStore that supports the analysis of files without having to use Studio, and that is goAnalyze from Kaleidoscope. In fact goAnalyze can do a lot more than SDL Analyse, but there is one significant difference between these apps that makes this one pretty interesting… you don’t require the Professional version of Studio to use it. But it’s also this difference that has been the cause of the ups and downs for some users since SDL Analyse was released. In order to avoid the Project Automation API, which needs the Professional version of Studio, the app had to use a Windows service that was hooked into Studio. For the technically minded we had a few things to resolve:
CAT tools typically calculate wordcounts based on the source material. The reason, of course, is that this way you can give your clients an idea of the cost before you start the work… which seems a sensible approach, as you need to base your estimate on something. You can estimate the target wordcount by applying an expansion factor to the source words, and this is a principle we see with pseudo-translate in Studio, where you can set the expansion per language to give you some idea of the DTP costs for the finished document before you even start translating. But what you can’t do, at least what you have never been able to do in all the Trados versions right up to the current SDL Trados Studio, is generate a target wordcount for those customers who pay you for work after the translation is complete and are happy to base this on the words you have actually translated. Continue reading “Target Wordcounts…”
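The expansion-factor idea is just arithmetic, and a tiny sketch makes it concrete. All the figures below are hypothetical (the word count, the per-language factors and the rate are made up for illustration; they are not values from Studio itself), but the principle mirrors what pseudo-translate lets you set per language:

```python
# Hypothetical figures: a source word count and per-language expansion
# factors, similar to the percentages you can set per language in
# Studio's pseudo-translate settings.
source_words = 10_000
expansion = {"de-DE": 1.30, "fr-FR": 1.20, "ja-JP": 0.90}
rate_per_word = 0.12  # hypothetical per-word rate

# Estimated target words = source words x expansion factor
estimates = {lang: round(source_words * factor) for lang, factor in expansion.items()}

for lang, words in estimates.items():
    print(f"{lang}: ~{words} target words, est. cost {words * rate_per_word:.2f}")
```

Useful for quoting up front — but, as the article notes, it is still only an estimate derived from the source, not a count of the words you actually ended up translating.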
Everyone knows, I think, that an SDL Trados Studio package (*.sdlppx) is just a zip file containing all the files needed to create your Studio project with all the settings your customer intended. At least it’ll work this way if you use Studio to open the package… quite a few other translation tools these days can open a package and extract the files inside, but not a single one can help you work with the project in the way it was originally set up. One or two tools do a pretty good job of retaining the integrity of the bilingual files most of the time, so they can normally be returned safely; others (like SmartCAT for example… based on a few tests that verified this quite easily) do a very poor job and should be used with caution.
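You can verify the “it’s just a zip” claim yourself with nothing more than the standard library. The sketch below builds a stand-in package in memory so it’s self-contained — the file names and folder layout inside are illustrative assumptions, not the exact contents of a real package — but a genuine *.sdlppx from a customer would be opened the same way with `zipfile.ZipFile(path)`:

```python
import io
import zipfile

# Build a stand-in package in memory just to demonstrate the structure;
# a real *.sdlppx would be opened with zipfile.ZipFile("customer.sdlppx").
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as ppx:
    ppx.writestr("MyProject.sdlproj", "<Project/>")          # project settings
    ppx.writestr("en-US/sample.docx.sdlxliff", "<xliff/>")   # source bilingual file
    ppx.writestr("de-DE/sample.docx.sdlxliff", "<xliff/>")   # target bilingual file

# List what is inside, exactly as you could with any zip archive
with zipfile.ZipFile(buf) as ppx:
    names = ppx.namelist()
    for name in names:
        print(name)
```

Extracting the files is easy, which is why other tools can get at the *.sdlxliff files — but the project settings live in the *.sdlproj, and honouring those is the part the other tools don’t do.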
… and hundreds or thousands of heads are better than two!!
I wrote an article a little while back called “Vote now… or have no say!” which was a follow-up to the AppStore competition SDL ran for a few months. I wanted to remind everyone to go and vote if they wanted an opportunity to see an app developed that would be useful for them. Well, the competition is over now and we have a winner, so we can move on to the task of creating it.
The winning idea from Marta, a Spanish freelance translator, was the “Quick Wordcount” idea, and we have encouraged all users to contribute to this so it’s as useful as we can make it for as many users as possible, whilst ensuring we deliver the intent of the original idea.
Studio provides a variety of reports, ranging from content to help you analyse how much work you have to do, through data designed to help you prepare quotes and invoices, to reports that record the number of corrections you had to make when reviewing your own work, or that of others. In fact it’s quite interesting to look at the many different reports available:
- Wordcount: Counts the number of words occurring in the files
- Translation count: Counts the number of words translated in the files
- Analysis report: Analyses the files against the translation memory, producing statistics on the leverage to be expected during translation
- Update TM report: Provides statistics on what was added to the translation memory from the contents of translated bilingual files
- Verification report: Verifies the contents of translatable files, reporting errors based on your verification settings
- Translation Quality Assessment: Presents the translation quality assessments occurring in the files (Studio 2015 onwards)
In the last year or so I’ve had the pleasure of watching Patrick Hartnett use the SDL OpenExchange (now RWS AppStore) APIs and SDK to develop SDLXLIFF Compare, then Post-Edit Compare, the Studio Timetracker, and a productivity tool that combined all of the first three into one and introduced a host of productivity metrics and a mechanism for scoring the quality of a translation using the Multidimensional Quality Metrics (MQM) framework. This last application was never released; not because it wasn’t good, but because it keeps on growing!
Then last month I got to attend the TAUS QE Summit in Dublin, where we had an idea to present some of the work Patrick had done with his productivity plugin, get involved in the workshop-style discussions, and also learn a little about the sort of things users wanted metrics for, so we could improve the reporting available out of the box. At the same time TAUS were working on an implementation around their Dynamic Quality Framework (DQF) and were going to share a little during the event about their new DQF dashboard, which would also have an API for developers to connect to.
If this title sounds familiar to you it’s probably because I wrote an article three years ago on the SDL blog with the very same title. It’s such a good title (in my opinion ;-)) I decided to keep it and write the same article again, but refreshed and enhanced a little for SDL Trados Studio 2014.
Something I only occasionally hear these days is “When I used Workbench or SDLX it was simple to create a quick analysis of my files. Now I have to create a Project in Studio and it takes so long to do the same thing.” I do think this is something you’re more likely to hear from experienced users of the older products, because they initially find that getting a quick report out of Studio is a far more onerous process than it used to be. What they might not think of is how you can use the Projects concept to make this easy for you once you become just as experienced with the new tools.
It would be very arrogant of me to suggest that I have the solution for measuring the effort that goes into post-editing translations, wherever they originated from, but in particular machine translation. So let’s table that right away because there are many ways to measure, and pay for, post-editing work and I’m not going to suggest a single answer to suit everyone.
But I think I can safely say that finding a way to measure, and pay for, post-editing in a consistent way that provides good visibility into how many changes have been made, and allows you to build a cost model you can be happy with, is something many companies and translators are still investigating.
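One common building block for that kind of visibility — used, for example, in metrics related to post-edit distance — is plain Levenshtein edit distance between the raw MT output and the post-edited result. The sketch below is a minimal illustration of the idea, not any particular tool’s scoring method; the example sentences and the normalisation by the longer string’s length are my own assumptions:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions and
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

mt_output = "The cat sat on the mat"
post_edit = "The cat sits on a mat"

dist = levenshtein(mt_output, post_edit)
# Normalise by the longer string to get a rough "edit effort" ratio
effort = dist / max(len(mt_output), len(post_edit))
print(dist, round(effort, 2))
```

A character-level distance like this is only one ingredient — word-level measures, time tracking and quality scoring all tell you different things — which is precisely why no single cost model has emerged that suits everyone.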