Dive into the dynamic world of language services with Gabriel Fairman as he tackles the contentious topic of AI. Where do you stand on technological advancement? Weigh in and draw your own conclusions!
When discussing controversial topics, we become infatuated with our own ideas and paralyzed by our preferences. We stay rooted to the spot when what we need is to be agile and responsive. Unsurprisingly, I am talking about AI. Many voices in our community idealize the principle of never using AI, and this stance puts our industry at greater risk of being made irrelevant. Not using AI does not produce an absence of attacks on our industry; it produces only an absence of defenses.
AI is a tool, and tools can be used to promote principles or to attack them. It is up to us to use AI to defend our principles and work towards our mission, not to treat it as an excuse for abandoning them. Having a mission keeps us on track as things change, but a mission can also make us stubborn. We have to be careful that idealized versions of our mission don’t get in the way of the mission as a whole.
In our industry, I often see four types of “mission confusion” that are well-intentioned but create greater liabilities. I will outline them below and give my thoughts on how we can approach these principles more flexibly and productively.
The Mission of Quality
The first principle that many people use to disparage the entry of AI into language services is quality. The idea is that AI or machine translation (MT) can never produce translations of the same quality as human work. In some instances, this is true. In others, it is not. But the actual quality of AI and MT translations is not what concerns me most about this principle. Neither is the fact that “quality” is measured very differently from a business standpoint than from a linguistic one.
What concerns me about this principle is the reaction it provokes. If it is true that AI and MT produce inferior translations, why should the next logical step be to not use them at all? Shouldn’t we be searching for a solution that provides both efficiency and accuracy?
The argument that I often hear is that MTPE is often slower AND less accurate than translating “by hand”. If that is true, it still doesn’t mean we should stop looking for ways to work faster and more accurately, especially when we know the industry will keep demanding faster deliveries and lower costs. In fact, MTPE falling short on both efficiency and quality is an argument for developing better tech solutions like augmented translation, not an argument for turning back the clock on tech.
Failing to develop the right tech to keep up with the demands of the industry is what produces poor compensation models and translator exploitation. If we create the right tech, we can deliver what the buyer needs and earn more at the same time. With purely legacy methods, whether MTPE or translating from scratch, we are going to fall behind industry demands.
The way forward is not going back to old methods; it is mastering new ones.
The Mission of Security
A second principle that comes up in discussions of AI is data security. Our industry sometimes deals with sensitive materials like medical, legal, and political documents, so it makes sense to ask how secure the information we process through any given program really is.
What doesn’t make sense, however, is questioning the security of AI platforms alone. We should ask how safe our data is with OpenAI, but also how safe it is in Google Drive, in Microsoft Office, or even sitting on our own desktops. Then, if we identify security weaknesses in these platforms, we should take an active part in the discussions of how to improve them for our use cases.
Opting not to use specific platforms for specific jobs based on security concerns makes sense, but writing off an entire category of tools does not. It doesn’t make sense given the actual security standards of AI platforms, it doesn’t make sense compared with other tools we use without a second thought (like email), and it doesn’t make sense for the majority of translation jobs, which do not require extreme security.
The question we need to be asking is “How can we make this work?”, not “How can we stop this?” Otherwise, the conversation about how to make it work will go on without us, and the outcome may not be to our liking.
The Environmental Mission
A third objection I have been hearing recently concerns the environmental cost of AI computing. This is certainly a real issue. However, I can’t think of a time in history when environmental progress was driven by those who opted out. Electric cars were invented and perfected by people who drive, not by those who don’t. Once an issue is presented, we need to navigate through it. There is no way around it.
When it comes to the environmental impact of new technology, we need to be engaged in reducing that impact as active users of the tech. We need to be part of this conversation from the inside; otherwise, we will be ignored. We cannot let our mission of protecting the environment become an excuse to stand aside while others destroy it; we need to dive into the mess and help clean it up.
The Ethical Copyright Mission
The final common principle that I find holds us back as an industry is the idea that AI models were trained unethically or illegally. On the legal question, that remains to be seen: ongoing lawsuits will determine whether these practices were lawful from a copyright standpoint. Ethically, I certainly understand that taking this data and training models on it for profit makes some people uneasy. However, much like the environmental concern, the change that critics seek in this process needs to be guided from the inside.
If this practice is going to change, it will change because leaders will represent communities of AI users who are pushing for these changes. Non-users will not be able to lobby as effectively for change because AI companies will not see them as consumers to be kept in the market. So, boycotting from the sidelines will result in being ignored.
If we are going to pursue the practical ends of all of these missions, we can’t afford to be distracted by idealized versions of how we achieve them. If quality translation, fair employment, data security, environmental justice, and ethical tech are the missions that drive us, we need to fight for them in whatever form we can. Right now, that form is understanding AI, learning to use it, and pushing for change from the companies whose products we use. This is how we can exert influence; opting out in order to adhere perfectly to our principles is not.
Turning the clock back on AI isn’t possible, and recusing ourselves from the responsible development and use of AI isn’t an expression of principle. It is a dodge that wears the armor of cynicism and isolates itself with pessimism. It is an entire philosophy born of “I won’t” rather than “we can”.
Having principles to guide us is important, but having the flexibility to work in their service is essential. Each time we face a challenging situation, we must ask ourselves how to improve it in light of our mission. If we refuse to confront the challenge in the name of our mission, we are effectively abandoning the mission itself. Standing steadfast may be noble, but it is hardly helpful. Working towards our mission should be messy, flawed, and complex. That’s how we know it is working.