TGIF (Translators’ Guide to Intelligent Frameworks)
As I was wondering how to introduce an article on MCP servers to non-technical translators, my mind started wandering as I got lost in the explanations that might be needed to help these linguistic experts understand the concepts. I started to think about the dizzying change we’re all experiencing and the ever-growing terminology that comes with it. Then, and don’t ask me why, I started wondering how long it takes for these terms to become standardised in local languages. As I researched this a little, it seems to come down to the language itself and the maturity of the domain.
Terms that almost always stay in English worldwide: The core technical vocabulary – API, LLM, GPU, transformer, fine-tuning, prompt, token, RAG, MCP, CLI, Docker, OAuth, JSON, REST, embedding, benchmark. These are used in English even in German, French, Japanese, Chinese, and Korean technical writing. Developers and engineers rarely translate them.
Terms that do get translated in some languages: Broader, older concepts tend to have established translations. German is a good example – “Künstliche Intelligenz” (AI), “Maschinelles Lernen” (ML), “Überwachtes Lernen” (supervised learning), “Neuronales Netz” (neural network). In fact, now I’m wondering why I used “AI” for “Künstliche Intelligenz” and not “KI” – another fascinating linguistic twist that probably comes down to who is talking and what they are trying to sell. In Germany, which one you hear tends to depend on the speaker and the audience:
- KI remains the default, natural choice in most German-language contexts: news, politics, education, internal company communication, and everyday conversation.
- AI is more common in internationally-facing contexts: marketing materials, product names for global audiences, startup communication, and English-heavy technical documentation.
French, Spanish, Chinese, and Japanese similarly have native equivalents for these foundational terms. Chinese in particular has well-established translations for most ML concepts.
The newer agentic terminology is almost entirely English-only right now. Terms like vibe coding, cooking/weaving, schema bloat, context pollution, prompt injection, guardrails, hallucination (in the AI sense), RLHF – these are used in English even in non-English publications. They’re too new and too niche for translations to have settled.
The practical reality is that most technical writing worldwide uses a hybrid… native language grammar with English terminology dropped in. A German AI paper might say “ein fine-getuntes LLM mit RAG-Pipeline” rather than attempting full translation.
I then started to look for comprehensive glossaries that explained all of this in multilingual format and after a short while (I don’t have a lot of patience…) I came to the conclusion there isn’t one… at least there isn’t one that was easy to find. So I decided to create one, convert it into a MultiTerm termbase and share it with you here:

It’s surely not complete, and it’s arguably not 100% accurate, but it does contain 137 terms across 11 languages (inc. English). A few notes on the translations:
- Terms that remain in English across all languages (like Transformer, LoRA, Docker, REST, OAuth, SKILL.md, Vibe Coding, Cooking/Weaving) are kept as-is rather than forced into awkward translations.
- Chinese and Japanese have the most comprehensive native terminology – nearly every foundational and NLP term has an established translation.
- Arabic and Hebrew have good coverage for the foundational and established ML terms but tend to keep newer agentic terminology in English.
- Romanian, Hungarian, and the Romance languages generally translate the older terms well but default to English for the recent protocol and tooling vocabulary.
I should flag that the AR, HE, JA, and ZH translations would benefit from review by native speakers, particularly for the newer agentic terms where usage may not yet be settled. The foundational terms will hopefully be solid.
I’d love to see it get updated over time, mostly to see how many of the terms that are used predominantly in English today find translations into the languages I’ve covered, or perhaps even to have more languages added. Maybe this is something that would suit an academic project, if there isn’t such a thing already?
But I digress 😉 I wanted to write about MCP servers, and seeing as this isn’t Friday and this article is called “Translators’ Guide to Intelligent Frameworks” I’ll pick up this story where I wanted to go in the first place.
Model Context Protocol: A Standard Interface for AI Agents
Even if you’re not a technical person, it must be quite hard to have avoided reading or hearing people talking about MCP in the context of AI Agents. In the screenshot above you can see the definition is “An open protocol created by Anthropic that provides a standardised way for AI agents to connect to external tools and data sources.” And what’s an AI Agent… referring to my termbase it would be “An AI system that can autonomously perceive its environment, make decisions, and take actions to achieve goals, often using tools and interacting with external systems.”
Or to put this more clearly, AI assistants like ChatGPT or Claude are clever, but on their own they’re stuck in a box – they can only work with what’s in front of them. They can’t check your calendar, look up a customer record, or send an email unless someone builds a specific connection for each of those things. MCP is an attempt to solve that problem by creating one universal way for AI to plug into other software – rather than needing a custom-built connection for every single tool. The analogy that gets used a lot is USB-C: before it existed, every phone had a different charger. USB-C gave us one plug that works with everything. MCP is trying to do the same thing, but for AI connecting to the tools and data it needs to be useful.
It’s still early days – the standard for this was only published in late 2025 – and there’s plenty of debate about whether it’s the right approach. In fact, when I decided I needed to learn more about this technology, one of our product managers pointed me to someone else you might have heard of, Peter Steinberger (he created the OpenClaw project that has had a lot of attention of late), who takes essentially this position (not a direct quote, but a fair summary):
Steinberger calls MCP a “crutch” and “silly”, arguing it causes context pollution by forcing models to ingest excessive irrelevant data. His alternative is an “army of CLIs” where agents use standard Unix tools to filter data before it hits the context window. He built a tool called “mcporter” specifically to convert MCP servers into CLIs. He argues MCP is rigid and prevents command chaining, while CLIs allow scripting complex sequences in single lines.
After reading this, I decided to create CLI (Command Line Interface – a text-based interface for interacting with software through typed commands) support for an application I vibe coded called SDLXLIFF Refiner.
Command Line Interface
Well, that wasn’t the only reason as I had a couple of users of my app asking for CLI support so I decided to do that in preference to the MCP I was initially thinking about. This worked pretty well and allows the user to do cool things like this with SDLXLIFF files in a project:
```
# Understand the file
sdlxliff-refiner info --file doc.sdlxliff

# See what formatting exists
sdlxliff-refiner list-tags --file doc.sdlxliff

# Change yellow highlighting to bold
sdlxliff-refiner replace-tag --file doc.sdlxliff --find-tag "cf highlight=yellow" --replace-tag "cf bold"

# Translate only empty segments
sdlxliff-refiner translate --file doc.sdlxliff --provider anthropic --model claude-sonnet-4-20250514 --empty-only --prompt-project
```
For a developer or a localisation engineer, this is pretty useful, and I created a 13-page manual with all the cool stuff you can do, including being able to use an AI agent such as Claude Code that can drive the CLI autonomously, using --json output to read file content and make decisions, then using the standard commands to apply changes. This is essentially what the CLI advocates are arguing for.
Steinberger and the CLI advocates are optimising for the agent’s efficiency – fewer tokens, faster execution, more composable. That’s valid engineering. But they’re building for a world where the user is a developer who’s comfortable piping grep into jq. The moment you put a non-technical person in front of that workflow, it collapses.
For a non-technical person, a CLI is genuinely intimidating. It’s a blank screen with a blinking cursor, no visual cues, no buttons, no guidance on what to type. You need to know the exact command, the right flags, the correct syntax. One wrong character and you get a cryptic error message. Even experienced localisation engineers who use command-line tools daily would probably agree that it’s a learned skill, not an intuitive one. MCP, by contrast, is designed to be invisible to the end user. When it works properly the user just talks to their AI assistant and the assistant handles the connection behind the scenes. You say “check my calendar for next Tuesday” and the AI uses an MCP connection to your calendar service without you ever seeing a terminal, a command, or a JSON blob. The whole point is that the plumbing disappears.
To be fair, even coding environments allow the developer to speak in natural language, with less need to know all the syntax. Steinberger himself works with three key concepts:
- “Cooking” – letting agents spend 10-60 minutes reading and understanding system architecture before writing any code, rather than the “trigger-happy” generation of reading three files and starting.
- “Weaving” – integrating features organically into existing codebases rather than bolting them on.
- “Mechanical Verification” – agents must autonomously pass automated test gates before work is considered complete.
He also advocates optimising codebases for AI consumption rather than human readability, using bootstrap files for agent initialisation, and treating documentation as architectural steering for agents rather than human reference material. He suggests this resembles a return to waterfall-style upfront planning.
Even developers are moving away from reading/writing code and concentrating more on the technical design and architecture of the projects… but nonetheless, they still like to work in an environment that isn’t very friendly for your average user.
So the very next thing I did was to create (vibe coded of course!) an MCP server for the SDLXLIFF Refiner and I called this multifariousMCP. If you skim through the documentation at that link you’ll get the idea as I have screenshots of the Claude chat interface that make this pretty clear.
Moving from CLI to MCP
I’m not going to try to explain the detail here, because it would be truly disingenuous of me to claim I’m clever enough to do this by myself, but the core idea is straightforward… the CLI already does all the hard work. It parses files, runs operations, handles backups, and returns structured JSON. The MCP server is just a thin relay layer that sits between Claude and the CLI. It’s this simplicity that I want to get across, because it literally only took me an hour or two to have a working version in Claude Chat, and that included learning how to install it.
The process boiled down to three things (explained nicely courtesy of Claude):
- One tool per CLI command. Each CLI command (info, list-segments, translate, etc.) becomes an MCP tool with the same parameters. The tool definitions are just Zod schemas that mirror the CLI flags (Zod is a TypeScript validation library; a Zod schema defines the shape, types, and constraints of a data structure and validates input against it at runtime. In the MCP context it declares what parameters each tool accepts: when you register a tool on the MCP server, you pass a Zod schema that tells the SDK, and by extension Claude, what arguments the tool expects, their types, and whether they’re required or optional). There’s a direct one-to-one mapping – no new logic, no reimplementation – and you can see the shape of this in the sketch after this list.
- A single executor function. Every tool calls the same runCli() function, which shells out to the CLI executable via execFile, always appending --json so the output is machine-parseable. It captures stdout, parses the JSON, and hands it back to Claude. That’s the entire bridge – about 15 lines of code.
- Parameter translation. MCP uses JSON conventions (underscores), the CLI uses hyphenated flags. A trivial replace(/_/g, "-") converts dry_run to --dry-run. Booleans become flags, strings become flag-value pairs.
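To make these three pieces a little more concrete, here is a minimal sketch of the pattern. This is not the actual multifariousMCP source – it assumes the official TypeScript MCP SDK and Zod, and the tool description and schema details are only illustrative – but it shows how one tool, the shared runCli() executor and the flag translation fit together:

```typescript
// A minimal sketch of the relay pattern described above – not the actual
// multifariousMCP source. It assumes the official TypeScript MCP SDK and Zod;
// the tool description text and schema details are illustrative.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { execFile } from "node:child_process";
import { promisify } from "node:util";
import { z } from "zod";

const exec = promisify(execFile);
const server = new McpServer({ name: "sdlxliff-refiner-mcp", version: "0.1.0" });

// The single executor: shell out to the CLI, always appending --json,
// translate underscore parameters into hyphenated flags, and return parsed JSON.
async function runCli(command: string, params: Record<string, unknown>) {
  const args = [command, "--json"];
  for (const [key, value] of Object.entries(params)) {
    if (value === undefined) continue;
    const flag = "--" + key.replace(/_/g, "-"); // dry_run -> --dry-run
    if (typeof value === "boolean") {
      if (value) args.push(flag);               // booleans become bare flags
    } else {
      args.push(flag, String(value));           // everything else is flag + value
    }
  }
  const { stdout } = await exec("sdlxliff-refiner", args);
  return JSON.parse(stdout);
}

// One tool per CLI command, with a Zod schema mirroring the CLI flags.
server.tool(
  "info",
  "Summarise an SDLXLIFF file (segments, languages, tags).",
  { file: z.string().describe("Path to the .sdlxliff file") },
  async ({ file }) => {
    const result = await runCli("info", { file });
    return { content: [{ type: "text", text: JSON.stringify(result, null, 2) }] };
  }
);

await server.connect(new StdioServerTransport());
```

Adding the remaining commands is just more calls to server.tool() with their own schemas – the executor and the flag translation never change.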
Having the CLI available first is what makes this simple. Without it, I’d be reimplementing all the SDLXLIFF parsing, tag processing, backup logic, and translation provider integration inside the MCP server itself – thousands of lines of .NET code rewritten in TypeScript. Instead, the MCP server is about 200 lines of glue code with zero business logic. The CLI is the product; the MCP server just gives Claude a way to call it.
So having a CLI to start with is a great idea!
What about the PowerShell Toolkits?
The SDLXLIFF Refiner story is a nice clean example because I built both the CLI and the MCP server. But the same principle works when you don’t own the CLI at all. If a tool already has a command-line interface that returns structured output, you can wrap it in an MCP server and give Claude access to it.
This is where the Trados PowerShell toolkits come in. RWS publishes three of them – one for Trados Studio itself, one for GroupShare, and one for Trados Cloud (aka Language Cloud). These are PowerShell cmdlets that let you automate things like creating projects, managing translation memories, uploading files, and running batch tasks. They’re already CLIs in the sense that matters: you invoke them from a terminal, pass parameters, and get structured output back. Whilst they’re not as complete as they could be, they are still quite comprehensive and a real timesaver for many things. For example, we use a neat CLI script to set up new Academic Partners with some basic resources, and it runs in a minute. To give you an idea of what this looks like in practice:
- Authentication – Imports a custom LanguageCloudToolkit module and authenticates against the Language Cloud API using stored credentials.
- Customer hierarchy creation – Creates a top-level customer called “RWS Campus Tour” under the “Customers” location, then creates three child customers representing teaching years (Year 1 through Year 3), each with progressively advanced themes (Foundations, Professional, Industry).
- Translation Memory setup (Year 1 only) – Opens a file dialog (pre-pointed at a known folder) for a TMX file, extracts source/target languages from it, retrieves Year 1’s default language processing rule and field template, creates a multilingual Translation Memory, and imports the TMX content.
- Termbase setup (Year 1 only) – Opens file dialogs for a MultiTerm XML file and an XDT structure file, creates a Termbase under Year 1, imports the terminology, and polls until the import completes.
- Throughout, it uses retry loops with delays when waiting for newly created resources to become available via the API, and includes error handling at each stage.
Having to do this manually would take a fair amount of time! But it does mean having to know how to work with PowerShell commands in a CLI, and that is beyond the skillsets of many users who would benefit from being able to access these capabilities through natural language queries.
Could these be converted into an MCP? Of course!
Trados Powershell MCP
We already have the CLI, so I started by providing the URLs for each of the open-sourced toolkits to Claude and asked it to create a Technical Design Document for me. Claude did this in a couple of minutes and then, without asking me, proceeded to build the complete MCP server. A few minutes later it was done. So I’d love to stop there and just say that in five minutes I had an MCP server supporting all the features of PowerShell for Trados Studio, GroupShare and Trados Cloud. But it wasn’t quite that straightforward. When it created the project and TDD, it was clear Claude assumed that similarly named modules were actually the same files, and it also missed out quite a lot of functionality, including an extended authentication module I built for the PowerShell toolkits I use for GroupShare and Trados Cloud.
This is why creating a Technical Design Document to start with is key (Steinberger’s ‘Cooking’ approach matters), and why you should make sure it is fully accurate before allowing Claude to code anything. So I changed tack a little and, instead of the GitHub repositories, I gave Claude the .psm1 files for each of the modules. These files contain reusable functions, variables, and cmdlets that can be imported into other scripts with Import-Module. Each is essentially a library bundling all the functions for interacting with the API (authentication, creating customers, managing TMs, etc.) – the commands we would use in the CLI if we were typing them in ourselves.
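Just to give an idea of the shape this takes (the module path shown is a placeholder and the cmdlet name is illustrative rather than a real toolkit command), the MCP server can treat PowerShell the same way the earlier sketch treated the CLI: shell out to pwsh, import the module, run a cmdlet, and return JSON for Claude to read:

```typescript
// Illustrative only – not the actual Trados-Powershell-MCP code. It is the same
// relay pattern as the earlier sketch, except the "CLI" is now a PowerShell
// pipeline: import the toolkit module, run a cmdlet, and serialise the result.
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const exec = promisify(execFile);

// Run a PowerShell pipeline with the toolkit module loaded and parse its JSON output.
// The module path below is a placeholder for wherever the .psm1 files live.
async function runToolkit(cmdlet: string): Promise<unknown> {
  const script = [
    "Import-Module ./Modules/LanguageCloudToolkit.psm1",
    `${cmdlet} | ConvertTo-Json -Depth 5`,
  ].join("; ");
  const { stdout } = await exec("pwsh", ["-NoProfile", "-Command", script]);
  return JSON.parse(stdout);
}

// Registered on the MCP server exactly like the earlier sketch – for example an
// lc_list_customers tool whose handler simply returns:
//   await runToolkit("Get-LCCustomers")   // the cmdlet name here is illustrative
```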
In the end this took me around 4-5 hours to have a complete working MCP server inside Claude that I could talk to. You can find a description of the Trados Powershell MCP in the L10N-X pages and you can also download the compiled installer for Claude from there. But since the PowerShell Toolkits themselves are all open source I have done the same with this, so this may be the most interesting part for anyone playing with this stuff:
- https://github.com/paulfilkin/Trados-Powershell-MCP/tree/main – the main page for the application source code
- https://github.com/paulfilkin/Trados-Powershell-MCP/blob/main/TradosPowershell-MCP-Server-TDD.md – the Technical Design Document updated as I went along
- https://github.com/paulfilkin/Trados-Powershell-MCP/issues – for bug reports
- https://github.com/paulfilkin/Trados-Powershell-MCP/discussions – to ask questions and get help
I’d like to think this will be useful for the Trados community at large and also that I’ll see more activity and contributions from developers working with these tools. But just in case you’re still not sure how this can help or what it looks like in practice, I created a short video of how it might work in practice:
Length: 31 minutes, 36 seconds.
Teaching Claude How to Use the Tools
Building the MCP server gets you the tools. But having tools and knowing how to use them properly are two different things. This is something I discovered fairly quickly when testing… Claude would guess at IDs instead of looking them up, try to create projects without checking whether the folder was empty (Studio loves that!), or tell me credentials were missing without actually checking whether they were configured.
The problem is that MCP tool definitions only describe what each tool does and what parameters it accepts. They don’t describe the workflows… which tools need to be called first, what order things should happen in, or what to do when something goes wrong. Claude is pretty good at figuring some of this out, but “some” isn’t really going to be good enough if you’re working with a live GroupShare server with 3,000 organisations on it.
The solution is to use project instructions – a set of rules you attach to a Claude project that tell it how to behave when working with these specific tools. Think of it as the difference between handing someone a toolbox and handing them a toolbox with a manual. The tools are the same; the outcome is most likely going to be very different.
I created a set of project instructions for the demo project I used to demonstrate the MCP server. The video showed that in practice and you can see Claude following these rules as it works through real workflows against a live GroupShare server and Language Cloud tenant.
I put the full instructions here as I thought they might also be interesting to see. All of these were created by me just jotting down my thoughts and needs as I went along and then asking Claude to write them up in a way that would work best for Claude. I find this approach useful for prompts and many other things as well… and over time I hope I get better at it too:
You are working in a demo environment for the Trados PowerShell MCP Server. Follow these rules in every conversation within this project.
## Data anonymisation
This is a demo project. All personal and organisational data returned by MCP tools must be anonymised before being shown in the chat. This applies to:
– User names, display names, and email addresses
– Organisation and customer names
– Tenant IDs, server URLs, and credential file names
– Project names that contain client or personal identifiers
Replace real values with plausible anonymised equivalents (e.g. “Jane Smith” becomes “User A”, “Acme Corp” becomes “Organisation Alpha”, etc.). Use consistent replacements within a conversation so references remain trackable.
Do not anonymise:
– Language codes (en-GB, de-DE, etc.)
– Technical identifiers that carry no personal meaning (status values, tool names, language pair labels)
– Counts, statistics, and structural metadata
## Resource awareness
Before stating that something is unavailable or missing, check what is actually configured:
1. Check the MCP tools list to confirm which tool groups (studio_*, gs_*, lc_*) are registered.
2. For credential issues, call gs_list_credentials or lc_list_credentials before concluding that credentials are missing.
3. For any “not found” scenario, verify by calling the relevant list or get tool first.
Never tell the user that environment variables are missing or that a tool group is unavailable without first confirming this through the tools themselves.
## Product name mapping
“Trados Cloud” and “Language Cloud” refer to the same product. In this project, both names map to the lc_* tools provided by the Language Cloud PowerShell Toolkit. If the user says “Trados Cloud”, treat it as a Language Cloud request – do not ask for clarification.
## Studio project creation
Templates are the default path for project creation. When the user asks to create a project:
1. Call studio_list_project_templates to discover available templates.
2. Match a template to the request based on language pair. If multiple match, list them and ask. If none match, fall back to manual language/TM specification.
3. When a template is selected, use its projectLocation as the suggested output_path (if non-empty). If empty, call studio_list_projects to discover common project locations and suggest one.
4. Suggest a project name based on the source filename (e.g. source file “Annual_Report.docx” suggests “Annual Report”). Let the user confirm or change.
5. ALWAYS confirm project name and project location with the user before calling studio_new_project.
If the user provides a file path as the source, use only that file – do not assume the entire parent folder should be used.
## TM resolution
When a TM is referenced by name, call studio_list_tms with no folder argument first to auto-discover available TMs. Match using case-insensitive substring/contains matching – do not require an exact match.
If auto-discovery does not find a match, search the source file’s parent folder as a fallback before asking the user for a path.
## Project naming
When suggesting a project name from a filename, clean it up: remove the file extension, strip common technical prefixes and language codes (e.g. “OJ_L_”, “_ES_”, “_TXT”), and convert underscores to spaces. Aim for something a human would recognise. Present the suggestion and let the user refine it.
## Credential activation
GroupShare and Language Cloud credentials are session-scoped. At the start of any gs_* or lc_* workflow, check whether a credential is active by calling the relevant list tool. If multiple credentials exist and none has been activated, ask the user which one to use before proceeding.
## Tool sequencing
Before calling any tool that requires an ID or path from another resource, call the appropriate list or get tool first to resolve it. Do not guess or fabricate IDs, paths, or names. Examples:
– Before gs_new_project, call gs_list_project_templates and gs_list_organizations
– Before gs_import_tmx, call gs_list_tms to find the TM
– Before lc_new_project, call lc_list_project_templates
## Error recovery
If a tool call fails, read the error message carefully before retrying. Common patterns:
– “No active credential” – call gs_set_credential or lc_set_credential
– “Folder not found” – verify the path with the user
– “already exists and contains items” – the project subfolder already exists; ask the user how to proceed
Do not retry the same call with the same parameters.
It’s probably worth explaining why I needed all this and why each of them mattered.
Data anonymisation exists purely because this is a demo environment connected to a real server. Without it, every conversation I show publicly would expose real names, email addresses, and organisation structures. The rule about consistent replacements within a conversation is important – if Claude anonymises “Acme Corp” as “Organisation Alpha” in one response, it needs to use the same label throughout or the conversation becomes impossible to follow.
Resource awareness stops Claude from giving up too early. The natural tendency when something isn’t immediately obvious is to say “I don’t have access to that” or “the credentials aren’t configured”. The instruction forces Claude to actually check before making that claim. This matters because the credential store design means credentials exist as files in a folder – they’re not visible until you call the list tool.
Product name mapping is a small thing but it prevents a genuinely confusing interaction. If a user says “check my Trados Cloud projects” and Claude responds with “I don’t have any Trados Cloud tools available”, that’s technically accurate (the tools are called lc_*) but practically useless. One line in the instructions fixes it.
Studio project creation is the longest section because it encodes the most hard-won lessons. The template-first approach, the confirmation step before creating, the rule about not assuming a whole folder should be used when only a file path was given – each of these came from real mistakes during testing. The confirmation step in particular exists because studio_new_project with a non-empty output folder used to create a recursive cascade of over a thousand nested folders that needed a PowerShell script to clean up… the first time I ever had that problem!
Tool sequencing addresses Claude’s tendency to skip preparation steps. When you ask it to import a TMX file into a GroupShare TM, it needs the TM’s internal identifier – not the name you gave it. Without the instruction to call gs_list_tms first, Claude will either ask you for the ID (which you shouldn’t need to know) or worse, fabricate one that looks plausible but doesn’t exist.
Error recovery gives Claude a playbook for the most common failures. “No active credential” is the one you hit most often because credentials are session-scoped – they don’t persist between conversations. Without this instruction, Claude treats it as a configuration problem and tells you to check your environment variables, which is the wrong diagnosis entirely.
None of these rules are complicated individually. But collectively they transform the experience from “powerful but unpredictable” to “powerful and more reliable”. The video demonstrates this in practice – you can see Claude following the sequencing rules, anonymising data on the fly, and recovering from credential issues without being told what to do.
I’d also note, just to finish off, that had I spent a lot more time designing the MCP server carefully instead of building it in the 4-5 hours it took, some of what’s in the instructions could have been pushed into the server itself – but not all of it. Tool sequencing, error recovery, product name mapping and the non-empty folder guard, for example, could all be moved into the server. Data anonymisation, the credential activation workflow, project naming heuristics, and confirmation before destructive actions are probably better left as project instructions. The realistic middle ground is what Steinberger would call “cooking” – the better your technical design document and your tool descriptions are, the less Claude needs to be told separately. Rich tool descriptions with usage notes, well-structured error responses, and sensible defaults reduce the instruction burden significantly. But anything that depends on the user’s environment, preferences, or the conversational context will always need to live outside the server.
For this project specifically, I’d guess I could have eliminated maybe 40-50% of the instructions through better tool design, but the remainder would still be needed because they’re about how Claude should behave, not about what the tools should do.
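To make that last point a bit more concrete, here is a hedged sketch of what pushing behaviour into the server might look like. It continues the earlier sketches (reusing the server and runToolkit defined there), and the cmdlet, its parameters and the wording are illustrative rather than the real implementation: the tool description carries the sequencing rule, the schema carries usage notes, and the handler enforces the non-empty folder guard as a structured error.

```typescript
// Continues the earlier sketches: `server` and `runToolkit` are as defined there.
// The cmdlet name and its parameters below are illustrative placeholders.
import { z } from "zod";
import { readdir } from "node:fs/promises";

server.tool(
  "studio_new_project",
  "Create a Trados Studio project. Call studio_list_project_templates first and pass a " +
    "template_id. Always confirm project_name and output_path with the user before calling.",
  {
    template_id: z.string().describe("From studio_list_project_templates"),
    project_name: z.string(),
    output_path: z.string().describe("Must be a new or empty folder"),
    source_file: z.string().describe("A single file – never assume the whole parent folder"),
  },
  async ({ template_id, project_name, output_path, source_file }) => {
    // Non-empty folder guard: refuse with a structured error rather than letting
    // the toolkit create a cascade of nested project folders.
    const existing = await readdir(output_path).catch(() => [] as string[]);
    if (existing.length > 0) {
      return {
        isError: true,
        content: [{ type: "text", text: `"${output_path}" already exists and contains items. Ask the user how to proceed.` }],
      };
    }
    const result = await runToolkit(
      `New-StudioProject -Name "${project_name}" -Template "${template_id}" ` +
        `-Location "${output_path}" -SourceFile "${source_file}"` // illustrative cmdlet
    );
    return { content: [{ type: "text", text: JSON.stringify(result, null, 2) }] };
  }
);
```

Every rule baked in like this is one less line of project instructions – but anything about the user’s own preferences or context still has to stay on the instructions side.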
2 Comments
Michael Beijer
Hi Paul,
This is VERY interesting, thanks!
I managed to set up your system in Claude Desktop in only a matter of minutes, and I am already testing out what I can ask it. Since it’s open source, I hope you don’t mind if I borrow aspects of it for my own tools?
I have built many different things over the last couple of months (an Evernote replacement, a word counting tool, a website with a full blog system and admin panel, etc.), but this one is the most relevant to you:
– https://supervertaler.com/trados/ (website)
– https://github.com/Supervertaler/Supervertaler-for-Trados (repo)
– https://supervertaler.gitbook.io/trados (help system)
Incidentally, I read a very interesting blog article yesterday about Andrej Karpathy’s LLM Knowledge Base architecture: https://venturebeat.com/data/karpathy-shares-llm-knowledge-base-architecture-that-bypasses-rag-with-an.
And in a single day, I managed to implement a self-organising, AI-maintained translation knowledge base in Supervertaler for Trados! That actually works!
see:
https://supervertaler.gitbook.io/trados/features/supermemory (help system)
https://github.com/Supervertaler/Supervertaler-SuperMemory (repo)
https://github.com/Supervertaler/Supervertaler-for-Trados/blob/main/CHANGELOG.md
I had initially created a vector-based database system for Supervertaler Workbench (the original Python-based version of Supervertaler), but I found it very heavy and not particularly useful. However, my new SuperMemory system is already proving useful. It’s just a folder with markdown files on my computer (which I can display as a fancy graph in Obsidian); I can dump stuff in it and have the AI classify everything into a usable knowledge base. Supervertaler for Trados can use this knowledge base to augment its already remarkably powerful automatic prompt-generation system.
~
We are definitely living in very interesting times. Most of my technically minded colleagues are currently vibe-coding their own tools. I said to a friend only weeks ago that I predict anyone who is fairly good at fiddling around with their computer will have built their own CAT tool in the next year or so. I wonder how this will change the CAT tool landscape.
Paul Filkin
Hi Michael, you’re doing some interesting stuff. I think anyone who enjoys fiddling around will have built something that supplements the way they work rather than a complete CAT tool. These kinds of projects will probably appeal most to individual users who like working in a particular way and have always questioned why things work the way they do. I doubt they’ll reshape the CAT tool landscape, but they’ll be fun for those using them for as long as they can.
The KB architecture was interesting, but I don’t think it’s that useful for me in practice. I always start with a Technical Design Document and maintain it as I work through each project… and I think that’s arguably more reliable than letting an LLM rewrite my project knowledge autonomously. I know what matters for my work; an automated linting pass might not. I also don’t let any LLM work directly on my files… that’s a step too far. As much as I enjoy playing with this stuff, I like being in control! Working with Claude has been genuinely fun though, and I’ve built over a dozen applications I use actively… both personally and for work. The common theme is they all make it easier for me to do what I need, keep track of what I’m doing, and use AI where it actually helps, which is normally the drudgery around data gathering and analysis.
It is a fun time. And I think anyone with a career still ahead of them needs to make sure they understand this stuff as well as they can, and find how to make it work for them.