How do I Prompt an AI-Powered TMS?

You don’t. Plus, can AI translate gender-neutral pronouns?

There are two very common questions I get about Bureau Works. First, a question of confirmation:

“Bureau Works, you guys translate with AI, right?”

Kind of, but also not really. Once I explain that we don’t use Generative AI for the translation itself, and that we instead use it to analyze and improve the translation and the translation process, question two comes.

“How do I prompt it?”

That’s the best part: you don’t have to.

Promptless

Very few people know what “AI” looks like on the backend. And, unless you are a machine learning engineer, your understanding of how AI works is probably limited (I know mine is). The result is that most of us understand AI in terms of the most popular front-end interface: ChatGPT. We type a prompt, and ChatGPT responds.

This is one version of generative AI with Large Language Models, but it is only the beginning.

LLMs can do a lot more when it comes to analyzing and producing language. However, because of the way we have been taught to interact with AI (ChatGPT), we run into two big problems.

  1. We underestimate and misunderstand what LLMs do.
  2. We overestimate our ability to communicate our own thoughts.

Underestimation

Part of our underestimation of LLMs comes from the ways in which they are most popularly used, and part of it comes directly from the name “Large Language Model.”

As I mentioned above, most of us are accustomed to the “chatbot” versions of LLMs that we use to write cover letters or get recipes for Pad Thai. These applications are genuinely useful and are a great representation of some of the capabilities of LLMs. They can receive instructions, process them, and produce a response in accordance with those instructions.

This alone is amazing.

And, it fits perfectly into actions that are easily inferred from the name “Large Language Model.” We describe what we want using natural language, and the model returns what we want in language. Language in, language out. But, that isn’t all they do.

LLMs can also analyze vast amounts of text and detect patterns in semantics, sentiments, and tone. This is where their other applications become clear. They summarize text, mimic the style of other documents, identify awkward or inaccurate language, and compare specific samples to the vast bodies of text that they are trained on or results they pull in from RAG. They can also change the parameters of their analysis based on new information.

All of this means that an LLM can provide far deeper insights about a text than any previous writing or translation technology. They may be built on language, but their capabilities extend far beyond the “stochastic parroting” (simple repetition) of words they are taught.

I am not arguing that they understand the text, which is more of a philosophical discussion than a technical one, but simply pointing out that the utility of LLMs goes far beyond a 1:1, text-in/text-out relationship.

How can an LLM-Powered TMS be helpful?

An LLM can analyze vast amounts of text, detect patterns in semantics, sentiments, and tone, mimic the style of other writing, identify awkward or inaccurate language, and compare text samples. All of that combines to create an incredibly powerful translation tool.

In a traditional machine-assisted translation scenario, you run the source text through the engine and the translator is left to sort through the remains. The translator not only needs to check the translation against the source but also needs to reference it against knowledge bases, research terms, and verify context. The translator has to identify not only every instance where the machine translation is incorrect, but also every place where it is merely imperfect. Then, to correct those imperfections, the translator is left alone to sift through the confusion and arrive at the perfect translation.

As every translator knows, this is hard. All of the capabilities of an AI-powered TMS that I mentioned above can make it easier and more rewarding.

In an augmented translation environment, the LLM compares the machine translation output to the source and analyzes opportunities for improvement. It curates its suggestions based on all of the knowledge available to it: glossaries, termbases, translation memories, machine translation output, LLM training text, retrieved context, and translator suggestions. This last source of knowledge is perhaps the most important.
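To make the curation idea concrete, here is a toy sketch of picking one suggestion from several knowledge sources. The priority order and the source names are my own assumptions for illustration; they are not Bureau Works' actual ranking logic, which is driven by an LLM rather than a simple lookup.

```python
# Toy sketch: curate a suggestion by consulting knowledge sources in a
# (hypothetical) priority order. Translator feedback outranks everything.
PRIORITY = ["translator_feedback", "glossary", "translation_memory", "machine_translation"]

def curate_suggestion(term, sources):
    """Return the first available suggestion and the source it came from."""
    for name in PRIORITY:
        suggestion = sources.get(name, {}).get(term)
        if suggestion:
            return suggestion, name
    return None, None

# Example: the MT engine and the translator disagree about a phrase.
sources = {
    "machine_translation": {"ze's eyes": "los ojos de ze"},
    "translator_feedback": {"ze's eyes": "sus ojos"},
}
print(curate_suggestion("ze's eyes", sources))  # translator feedback wins
```

The point of the sketch is the precedence itself: whatever the translator has already decided should override every other source of knowledge.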

When a translator reviews the MT output and the suggestions provided by the augmented engine (in our case, this is our Context-Sensitive Translate feature), they will accept good suggestions, reject inadequate ones, and make changes to arrive at the final translation. The most powerful part of our Context-Sensitive Translate is how it responds to these actions. Every time you confirm, reject, or change a segment, CST updates its suggestions for the rest of the text based on what it learns.

For example, if you change a gendered pronoun at the beginning of a text, Context-Sensitive Translate will apply that change further along in the text. But in any place where there is potential for confusion, the engine will flag that possibility.
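The propagate-and-flag behavior described above can be sketched as a toy loop. This is purely illustrative: the real engine reasons with an LLM over context, not with string replacement, and the flag message here is invented.

```python
# Toy sketch of the feedback loop: a confirmed edit becomes a learned
# substitution that is applied to later, unconfirmed segments, and each
# re-translated segment is flagged so the translator can review it.
def propagate_edit(segments, confirmed_index, old, new):
    """Apply a translator's replacement to all segments after the confirmed one."""
    updated, flags = [], []
    for i, seg in enumerate(segments):
        if i > confirmed_index and old in seg:
            updated.append(seg.replace(old, new))
            flags.append((i, f"re-translated using learned edit: {old!r} -> {new!r}"))
        else:
            updated.append(seg)
    return updated, flags

segments = [
    "cerrando los ojos de ze",        # segment the translator edits by hand
    "ze sonrió",
    "abrió los ojos de ze de nuevo",  # later segment with the same pattern
]
new_segments, flags = propagate_edit(segments, 0, "los ojos de ze", "sus ojos")
```

After the call, the third segment reads “abrió sus ojos de nuevo” and carries a flag, mirroring how a single confirmed change ripples through the rest of the document.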

To show this, I had ChatGPT generate me a story where the protagonist uses “ze” as their pronoun. Although this is an increasingly common neopronoun in English, it is not widely used in Spanish (as far as I know). There are different gender-neutral neopronouns in Spanish.


In the story, ChatGPT decided to make things even more complicated by naming the character Ze as well as giving them the pronoun ze, but it was nothing that Context-Sensitive Translate couldn’t handle! Here are some highlights from the translation process.

One of the first points of interest came in segment 17 (see photo), where the machine translation result for “ze’s eyes” was “los ojos de ze.” This is not the most common structure for referring to someone’s own eyes. And, in both English and Spanish, this segment could have been translated in a gender-neutral form by saying “closing their eyes” (“cerrando sus ojos”). While there is certainly a broad discussion to be had about how to translate specific pronouns, I want to focus on what Context-Sensitive Translate did with this challenge.

Immediately, CST points out that “ze” is uncommon in Spanish and that the translator should consider another gender-neutral pronoun. It also recognizes the potential for confusion between “ze” (the pronoun) and “Ze” (the name). Nothing groundbreaking yet, but it is a cool example of how CST references the broader knowledge base of its training data and/or actively retrieved information in order to “understand” what “ze” means. The capitalization inconsistency is a nice flag as well, even though it shows that the engine has not yet grasped that “ze” is a pronoun while “Ze” is a name.

I ended up changing “los ojos de ze” to “sus ojos” because I felt like it still respected the gender-neutral nature of the text. Again, there can be disagreement about whether that is a good translation or not, but the important thing is how the engine responds.

Below is the segment in question, both before and after my change.

With the change to “sus ojos” made in segment 17, I want to show what happens further along in the text in segment 19 (see below). The first version shown here is before I made the change, and we can see that MT has repeated the translation of “ze’s eyes” as “los ojos de ze.”

After I made the change in segment 17, segment 19 was re-translated to the second version shown below. The engine learned from my change in segment 17 and applied it to segment 19. Also, pay attention to the two “translation smells” flags in the first segment. Two useful notes, with the reasoning explained. That is a great way to highlight the capabilities of an LLM.

Another amazing example of an LLM’s capability to analyze language is the “smell” flagged in the second version of segment 19. After Context-Sensitive Translate applied my change to “sus ojos,” the feature highlighted that “sus” is not a neopronoun and that the character of the translation is therefore different.

I am not changing my translation based on that note, but it’s a good note.


Finally, I want to point out the last flag that the LLM gave me. It picks up that the original translation output has deviated from the gender-neutral tone of the original and explains that “guiándola” is a feminine form.

I didn’t change it, as this was just an example, but it was great to see that the translation smells feature had my back until the end.

So, in these 21 segments, the LLM that powers Context-Sensitive Translate has identified:

  • Pronoun Agreement
  • Neopronouns vs. Traditional Pronouns
  • Redundancy and Awkwardness
  • Cultural Context of Gender-Neutral Pronouns

All without me asking it to…

Replacing the Prompt

So, getting back to the original question: how can I prompt an LLM to do all of this?

You could theoretically write a massive, very detailed prompt to capture all of these differences between an original piece of content and the translation. But with an AI-augmented TMS like Bureau Works, you don’t need to. The “prompt” is inferred from the actions you take as a translator, continuously and at runtime. It is better that way, because prompting is complex.

As I list above, the second problem with prompt-based LLM interactions is that we overestimate our own ability to communicate what we want in a way that the LLM will be able to use. This is evident to anyone who has asked ChatGPT for one thing and gotten something either insufficient or unexpected. It is also why “prompt engineering” is not simply writing in English. Although LLMs process natural language, the way they process it is not completely analogous to how we understand language.

So, if a translator had to build a prompt for an LLM to analyze a translation, they would be using up all the time and effort that they theoretically were saving by using technology, and their results would likely be inferior. If you are going to spend the time thinking of prompts and tweaking them to be perfect, you may as well just translate the document without any help.

The true beauty of an AI-augmented TMS is that the prompts are replaced with the translator's actions. Changing a pronoun represents a change in perspective; accepting a formal tone dictates the tone of the whole text; using colloquial language indicates a colloquial translation.
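The action-to-instruction mapping described above can be sketched as a tiny lookup. The mapping is illustrative only; the real engine infers far richer, context-dependent signals than a fixed table ever could.

```python
# Sketch: translator actions stand in for written prompts. Each editing
# action implies an instruction the engine carries forward on its own.
ACTION_IMPLICATIONS = {
    "changed_pronoun": "carry the new pronoun through the rest of the text",
    "accepted_formal_tone": "keep a formal register in later segments",
    "used_colloquialism": "prefer colloquial phrasing going forward",
}

def implied_instructions(actions):
    """Turn a sequence of editing actions into the 'prompt' they imply."""
    return [ACTION_IMPLICATIONS[a] for a in actions if a in ACTION_IMPLICATIONS]

print(implied_instructions(["changed_pronoun", "accepted_formal_tone"]))
```

The translator never writes any of these instructions down; they fall out of the editing actions themselves.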

Instead of saying “Change the translation to use X pronoun, Y tone, and Z type of language,” you just start translating and the engine will follow your lead. LLMs are incredibly powerful, and the best thing we can do in order to use them to their full potential is simply show them the way.

The best versions and applications of LLMs don’t need to be prompted, they already know how to learn from the best.

Curious about what I discuss here? Give us a visit: bureauworks.com

Topics: machine translation, AI, ChatGPT, LLMs, Bureau Works, Context-Sensitive, Gender-Neutral Pronouns, Language Models, AI-generated

Gabriel Fairman

Written by Gabriel Fairman

Gabriel Fairman is the Founder and CEO of Bureau Works, a cloud-based TMS that leverages generative AI to enhance the human authorship and translation experience. Gabriel has been translating professionally for 20 years. To hear more about AI and translation, follow Gabriel on LinkedIn, Substack, and on the Merging Minds podcast.
