Qualitivity… measuring quality and productivity

In the last year or so I’ve had the pleasure of watching Patrick Hartnett use the SDL OpenExchange (now RWS AppStore) APIs and SDK to develop SDLXLIFF Compare, then Post-Edit Compare, the Studio Timetracker, and a productivity tool that combined the first three into one, introduced a host of productivity metrics and added a mechanism for scoring the quality of a translation using the Multidimensional Quality Metrics (MQM) framework.  This last application was never released, not because it wasn’t good, but because it keeps on growing!

Then last month I got to attend the TAUS QE Summit in Dublin, where the plan was to present some of the work Patrick had done with his productivity plugin, get involved in the workshop-style discussions, and also learn a little about the sort of things users wanted metrics for so we could improve the reporting available out of the box.  At the same time TAUS were working on an implementation around their Dynamic Quality Framework (DQF) and were going to share a little during the event about their new DQF Dashboard, which would also have an API for developers to connect to.

For Patrick this was another challenge, and before the event he added quality metrics for SAE J2450 and TAUS DQF.  In less than a week after the event he added the old LISA QA metric as well, since we do still have customers using this model, along with the ability to create your own metrics and add them to the tool.  Then, amazingly, he built an integration to the TAUS DQF Dashboard that was launched at LocWorld in Berlin the week before last.  This part required considerable work to make sure the application was capable of mirroring the required workflows whilst still slotting neatly into Studio 2014, and of course Studio 2015, which is due in the next few weeks.

You may be getting the feeling that I really like this application and that I’m very impressed by the work Patrick has done, and you’d be right.  The capability this brings within the normal working environment for Studio is quite amazing, and it makes it possible for users to collect a level of detail that I don’t think has been achievable in this way until now.  To give you a quick summary of the sort of capability available I’ve listed a few things below:

  • The Translation Memory or Machine Translation suggestions that were used by the translator
  • Whether the translation was then adapted by the translator before finishing the translation
  • Every action that qualifies as an addition, deletion or adaptation of any of the properties in a segment
  • The time taken to translate the complete file
  • The time taken to translate each segment (segment by segment)
  • The keystrokes typed by the translator/reviewer (optional)
  • The number of words (source) in the segments that were updated
  • The Quality Metric applied to a segment during a quality review
  • The PEM% (Post Edit Modification) rating and associated cost analysis (a sketch of how such a score could be calculated follows this list)
  • The Track Changes related to what was modified in the segment
  • The flat in-context comparison of what was modified in the segment
  • Cascading comparison results when a segment is modified more than once (i.e. if a translator visits a segment more than once)
  • Ability to quality review a document with your chosen metrics and report on a pass/fail along with a record of every result
  • Export the metrics as XML or to Excel where you can easily manipulate the data as you see fit
  • Export a full project report for your client containing all the metrics and reports in a way they can use them
  • Ability to create TAUS DQF projects from within Studio
  • Ability for a translator to import DQF project settings and work on a project, finally updating the TAUS dashboard themselves
  • and much much more…
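Since the PEM% rating in the list above is essentially a measure of how far the final translation has moved away from the original TM or MT suggestion, here’s a minimal sketch of how such a score could be calculated.  The character-based Levenshtein distance and the normalisation against the longer of the two strings are assumptions made purely for illustration, and not necessarily the exact formula Qualitivity uses.

```python
# Illustrative only: a hedged sketch of a Post-Edit Modification (PEM%) score.
# The character-level edit distance and the normalisation used here are
# assumptions for demonstration purposes; Qualitivity's own formula may differ.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if not a:
        return len(b)
    if not b:
        return len(a)
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def pem_percent(suggestion: str, final_translation: str) -> float:
    """Percentage of the suggested translation modified by the post-editor."""
    if not suggestion and not final_translation:
        return 0.0
    distance = levenshtein(suggestion, final_translation)
    return 100.0 * distance / max(len(suggestion), len(final_translation))

if __name__ == "__main__":
    mt = "The cat sat in the mat."
    edited = "The cat sat on the mat."
    print(f"PEM%: {pem_percent(mt, edited):.1f}")  # one character changed -> low score
```

A word-based distance, or normalising against the length of the suggestion alone, would give slightly different numbers; the point is simply that a single percentage can summarise how much post-editing effort went into a segment, which is what makes the associated cost analysis possible.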

The best thing of all is that all of this is pretty much managed from within Studio without the translator having to do anything more than just work.  So for any organisation working collaboratively with their translators to improve the quality of their Machine Translation engines, or to help identify what can be done to improve the training for post-editors or translators, this is the perfect platform to do it.  Note that I say “collaboratively”, which is the key word here.  The idea of an application like this is not to beat anyone with a stick, and that means anyone, because the granularity provided can be useful for translators, reviewers, post-editors and their clients… it’s not just there to provide a one-sided view.  It’s all about working together in an informed way to improve the use of Translation Environment Tools, and the processes around them, for the translator and the client.

Now, there is so much in this application I don’t even know where to start with a single blog article, so I thought the best idea would be to keep it to the basic summary I provided above and then create a few videos covering the processes and what’s possible.  So I’ve created four videos dealing with the following:

  1. Getting started as the Project manager and how to set up the TAUS DQF workflow if you wish to use it (not mandatory)
  2. Working as a translator/reviewer with Qualitivity using Studio as a standalone tool, then applying this to a TAUS DQF workflow
  3. Working with the Quality Metrics and scoring the translations
  4. Taking a look at the reports… built in, exported and TAUS DQF

I think these give a pretty good overview of what’s possible, and might even be useful for anyone testing the tools themselves so they are clear how to use them in the workflows I have described.  It’s probably also worth mentioning that we have created a project group in the SDL Community for this development, as there has been quite a bit of interest from users looking for these types of features.  It was used during the development of the toolset, but if you’re interested in this kind of application please feel free to join the community now and share your experiences there.  It’s called the Studio Time Tracker Professional project because the early incarnation of the tool started as an enhanced version of the Timetracker plugin that is available on the OpenExchange.  Patrick renamed it Qualitivity because it’s a lot more than a time tracker now!  You can also find some handy videos created by Patrick on the Qualitivity YouTube pages which cover the functionality in nice bite-sized chunks.


Getting started as the Project manager and how to (optionally) set up a TAUS DQF workflow

This video takes you through the creation of a Project and what you need to set up in order to start working with the Qualitivity application in Studio.  It also covers the setting up of a TAUS DQF project and explains what’s needed to engage the editors so they are updating it as they work… from a technical enablement perspective.

https://youtu.be/1lwaX0PKrxY


Working as a translator/reviewer with Qualitivity using Studio as a standalone tool, then (optionally) applying this to a TAUS DQF workflow

This video starts with the project package created in the previous video, explains how to use the various settings in the application and, once the post-edit is complete, how you would then get your data into the TAUS project where appropriate.  It’s important to note that the application is fully functional without using TAUS at all, but the TAUS integration does have the potential to offer some unique insights into what everyone else is doing too!

https://youtu.be/kptrXirUqLg


Working with the Quality Metrics and scoring the translations

This is all about using the Quality features in the application.  Qualitivity comes with four recognised standards preconfigured for use and an easily customisable interface if you want to use your own.  The video shows how to set this up, and also how to use it during the review process.  Finally it briefly covers the Quality reporting built into the interface.
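To make the scoring idea a little more concrete, below is a minimal sketch of how a weighted error-category model, loosely in the spirit of the LISA QA or MQM style frameworks mentioned earlier, could turn review results into a pass/fail decision.  The category names, severity weights and threshold are hypothetical examples, not the preconfigured values that ship with Qualitivity.

```python
# Illustrative only: a hedged sketch of weighted error-category scoring with a
# pass/fail threshold, loosely in the spirit of LISA QA / MQM style models.
# Category names, severity weights and the threshold are hypothetical examples.

from dataclasses import dataclass

SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

@dataclass
class ErrorRecord:
    category: str  # e.g. "Accuracy", "Terminology", "Style"
    severity: str  # "minor", "major" or "critical"

def quality_score(errors: list[ErrorRecord], word_count: int) -> float:
    """Penalty points normalised per 1000 source words (lower is better)."""
    points = sum(SEVERITY_WEIGHTS[e.severity] for e in errors)
    return 1000.0 * points / max(word_count, 1)

def passes(errors: list[ErrorRecord], word_count: int, threshold: float = 25.0) -> bool:
    """Pass if the normalised penalty stays at or below the agreed threshold."""
    return quality_score(errors, word_count) <= threshold

if __name__ == "__main__":
    review = [ErrorRecord("Terminology", "minor"),
              ErrorRecord("Accuracy", "major")]
    print(quality_score(review, word_count=1200))  # 5.0 penalty points per 1000 words
    print(passes(review, word_count=1200))         # True with the example threshold
```

The preconfigured standards differ in their categories, weightings and normalisation, but the basic mechanic of counting weighted penalties against an agreed threshold is broadly similar, and the customisable interface lets you define your own variation of it.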


Taking a look at the reports… built in, exported and TAUS DQF

This is what it’s all about!  You’ve got your data, now what are you going to do with it?  The video runs through the possible reporting options, starting with the built-in UI features, but ending with the fantastic reporting hand-off packages which top off the collaborative principles of this application quite nicely.  If all you wanted to know was what value you can get from using this application then just watch this one!


That’s it in a nutshell!  That was easy to say, but this is by far the most comprehensive OpenExchange plugin I have seen developed to date, and it covers such a wide range of features that are only made possible through the extensibility of the Studio platform and its APIs… plus of course the ingenuity of the developer!!  If you’d like to know what’s possible, this website has all the information to point developers in the right direction… and it’s free!

Oh yes… where can you find this application?  It’s on the SDL OpenExchange (now RWS AppStore) under QUALITIVITY.



Comments on “Qualitivity… measuring quality and productivity”

  1. My compliments to you, Paul, on the release of this plugin; I really feel that it wouldn’t have been possible without your drive and commitment keeping this project on track. It has been a pleasure working with you and I must admit that I have never worked with anyone so dedicated, always with an optimistic view and an encouraging outlook, ensuring that people understand the important aspects of ideas and pushing them through the mix.

    In my experience you have always proven to be someone who excels in their position and is always looking for ways to improve a process or functionality, always questioning the outcome and then testing it in real time, just for good measure :-). You are an asset not only to the development of this project but also in the commitment you have shown to all developers like me who are working to deliver these types of tools to the OpenExchange.

    I take my hat off to you, sir, and I am looking forward to working with you on future challenges.

    Patrick.

  2. Translation quality has many aspects, and, of course, translators play a crucial role here. So, Patrick Hartnett’s great new Qualitivity is definitely very useful. We at Kaleidoscope are happy to see that quality frameworks such as TAUS’ DQF are becoming more and more important. Even more so, since our in-country-review solution, globalReview, incorporates both TAUS’ DQF and the QT21 Launchpad, enabling, e.g., content profiling and error typology:
    • Content profiling takes into account that not all content in a company needs to be treated equally in terms of quality: Clients and translation providers agree on certain content profiles and define to what extent these need to be reviewed. For instance, while emotional, brand-relevant public content might need to be reviewed 100%, for internal content with a short shelf-life a 10% sample might suffice.
    • Quality evaluation in turn suggests that not all errors are equally sensitive, either. A meaning error in a prominent spot is more severe than a stylistic error in a support article. Combining these error type assessments with content profiling, a certain text can efficiently be evaluated in terms of quality and subsequent measures can be decided, ranging from instant publication to re-translation.

    At the same time, these quality scores can be tracked over time, thus providing business intelligence insights into the translation process. This enables you to recognize what kind of quality process certain vendors have implemented, which languages or resources are an issue, and whether certain quality improvement measures have been successful or not.

    So, many thanks to Paul for your post on Patrick’s Qualitivity and for making us aware of his approach. It’s always good to know that quality is in high demand ;-)

    Arnold

  3. Indeed, I am utterly impressed with this application. There really are a multitude of use cases for it. May I add one to the list? I’ve often been in heated discussions with customers who don’t want to pay for 100% matches. My argument has always been that translators do revise them quite often. With this app, I could prove my case and even measure exactly how much the revision of 100% matches actually costs.

    /Andreas

    1. Excellent use case… I like this because there are many uses for this and, if you take the thought process past PEMT and past the “Big Brother” mindset, I think it’s a good application for many.
