CAT tools typically calculate wordcounts based on the source material. The reason, of course, is that this way you can give your clients an idea of the cost before you start the work… which seems a sensible approach, as you need to base your estimate on something. You can estimate the target wordcount by applying an expansion factor to the source words, and this is a principle we see with pseudo-translate in Studio, where you can set the expansion per language to give you some idea of the DTP costs for the finished document before you even start translating. But what you can’t do, at least what you have never been able to do in any Trados version right up to the current SDL Trados Studio, is generate a target wordcount for those customers who pay you after the translation is complete and are happy to base this on the words you have actually translated.
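The expansion-factor estimate described above is simple arithmetic. As a rough sketch (the factors below are illustrative assumptions, not values taken from Studio’s pseudo-translate settings):

```python
# Sketch: estimating a target wordcount from a source wordcount
# using per-language expansion factors. The factor values here
# are made-up examples, not Studio defaults.

EXPANSION_FACTORS = {
    "de-DE": 1.20,  # German text often expands from English
    "fi-FI": 0.85,  # Finnish text often contracts
    "es-ES": 1.15,
}

def estimate_target_words(source_words: int, language: str) -> int:
    """Apply a per-language expansion factor to a source wordcount."""
    return round(source_words * EXPANSION_FACTORS[language])

print(estimate_target_words(10_000, "de-DE"))  # 12000
```

The point is only that any such number is an estimate made before translation starts, which is exactly why some customers prefer to pay on the actual target wordcount afterwards.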
Everyone knows, I think, that an SDL Trados Studio package (*.sdlppx) is just a zip file containing all the files needed to create your Studio project with all the settings your customer intended. At least it’ll work this way if you use Studio to open the package… quite a few other translation tools these days can open a package and extract the files inside, but not a single one can help you work with the project in the way it was originally set up. One or two tools do a pretty good job of retaining the integrity of the bilingual files most of the time, so they can normally be returned safely; others (like SmartCAT, for example… based on a few tests that verified this quite easily) do a very poor job and should be used with caution.
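Because the package really is a standard zip archive, you can verify this for yourself. A minimal sketch using Python’s standard library (the package path is hypothetical; the member names will be whatever your package actually contains):

```python
# Sketch: list the contents of an SDL Trados Studio package
# (*.sdlppx), which is an ordinary zip archive under the hood.
# Works on any zip file; "project.sdlppx" is an assumed path.
import zipfile

def list_package_contents(path: str) -> list[str]:
    """Return the names of all files stored in the package."""
    with zipfile.ZipFile(path) as pkg:
        return pkg.namelist()

# for name in list_package_contents("project.sdlppx"):
#     print(name)
```

Extracting the files is easy; preserving the project settings and returning intact bilingual files is the part other tools struggle with.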
… and hundreds or thousands of heads are better than two!!
I wrote an article a little while back called “Vote now… or have no say!”, which was a follow-up to the competition SDL ran on the SDL AppStore for a few months. I wanted to remind everyone to go and vote if they wanted an opportunity to see an app developed that would be useful for them. Well, the competition is over now and we have a winner, so we can move on to the task of creating it.
The winning idea, from Marta, a Spanish freelance translator, was “Quick Wordcount”, and we have encouraged all users to contribute to it so it’s as useful as we can make it for as many users as possible, whilst ensuring we deliver the intent of the original idea.
Studio provides a variety of reports: content to help you analyse how much work you have to do, data designed to help you prepare quotes and invoices, and reports that record the number of corrections you had to make when reviewing your own work or that of others. In fact it’s quite interesting to look at the many different reports available:
- Wordcount: Counts the number of words occurring in the files
- Translation count: Counts the number of words translated in the files
- Analysis report: Analyses the files against the translation memory, producing statistics on the leverage to be expected during translation
- Update TM report: Provides statistics on what was added to the translation memory from the contents of the translated bilingual files
- Verification report: Verifies the contents of the translatable files, reporting errors based on your verification settings
- Translation Quality Assessment: Presents the translation quality assessments for the files (Studio 2015 onwards)
In the last year or so I’ve had the pleasure of watching Patrick Hartnett use the SDL OpenExchange APIs and SDK to develop SDLXLIFF Compare, then Post-Edit Compare, the Studio Timetracker, and a productivity tool that combined the first three into one, introducing a host of productivity metrics and a mechanism for scoring the quality of a translation using the Multidimensional Quality Metrics (MQM) framework. This last application was never released, not because it wasn’t good, but because it kept on growing!
Then last month I got to attend the TAUS QE Summit in Dublin, where the plan was to present some of the work Patrick had done with his productivity plugin, get involved in the workshop-style discussions, and also learn a little about the sort of things users wanted metrics for, so we could improve the reporting available out of the box. At the same time, TAUS were working on an implementation around their Dynamic Quality Framework (DQF) and were going to share a little during the event about their new DQF dashboard, which would also have an API for developers to connect to.
If this title sounds familiar, it’s probably because I wrote an article three years ago on the SDL blog with the very same title. It’s such a good title (in my opinion ;-)) that I decided to keep it and write the same article again, refreshed and enhanced a little for SDL Trados Studio 2014.
Something I only occasionally hear these days is: “When I used Workbench or SDLX it was simple to create a quick analysis of my files. Now I have to create a Project in Studio and it takes so long to do the same thing.” I think you’re more likely to hear this from experienced users of the older products, because they initially find that getting a quick report out of Studio is a far more onerous process than it used to be. What they might not realise is how you can use the Projects concept to make this just as easy once you become equally experienced with the new tools.
It would be very arrogant of me to suggest that I have the solution for measuring the effort that goes into post-editing translations, wherever they originated from, but in particular machine translation. So let’s set that aside right away: there are many ways to measure, and pay for, post-editing work, and I’m not going to suggest a single answer to suit everyone.
But I think I can safely say that many companies and translators are still searching for a way to measure, and pay for, post-edited translations consistently: one that provides good visibility into how many changes have been made and allows you to build a cost model you can be happy with.
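One common building block for that “how many changes have been made” question is an edit distance between the machine output and the post-edited result. A minimal word-level Levenshtein sketch, offered only as one possible metric among the many mentioned above, not a recommendation:

```python
# Sketch: word-level Levenshtein distance between machine output
# and its post-edited version -- one common way (among many) to
# quantify how much a post-editor changed.

def word_edit_distance(mt: str, post_edited: str) -> int:
    """Minimum number of word insertions, deletions and
    substitutions needed to turn mt into post_edited."""
    a, b = mt.split(), post_edited.split()
    # prev[j] holds the distance between a[:i-1] and b[:j]
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        curr = [i]
        for j, wb in enumerate(b, 1):
            cost = 0 if wa == wb else 1
            curr.append(min(prev[j] + 1,          # delete a word
                            curr[j - 1] + 1,      # insert a word
                            prev[j - 1] + cost))  # substitute
        prev = curr
    return prev[-1]

print(word_edit_distance("the quick brown fox", "the fast brown fox"))  # 1
```

Whether you then pay per edit, per edited segment, or on a banded discount scale is exactly the cost-model question that remains open.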