Electronic Thesis and Dissertation Repository

Thesis Format

Monograph

Degree

Master of Engineering Science

Program

Electrical and Computer Engineering

Collaborative Specialization

Artificial Intelligence

Supervisor

Ouda, Abdelkader

2nd Supervisor

Abusharkh, Mohamed

Affiliation

Ferris State University

Abstract

During today’s age of unparalleled connectivity, language and data have become powerful tools enabling effective communication and cross-cultural collaboration. Neural machine translation (NMT) models are especially capable of leveraging linguistic knowledge and parallel corpora to increase global connectivity and act as a vehicle for the transmission of knowledge. In this thesis, we apply a data-based domain adaptation technique to fine-tune three pre-existing NMT transformer models with attention mechanisms for the task of patent translation from English to Japanese. Language, especially in the context of patents, can be highly nuanced: a clear understanding of the intended meaning requires comprehensive domain knowledge and expert linguistic ability, which can be expensive and time-consuming to obtain. Automating translation is helpful; however, commercially available NMT models perform poorly on this task because they are not trained on highly technical terms whose meaning may depend on the domain in which they are used. Our aim is to enhance the performance of translation models on highly technical inputs through a series of essential steps, focusing on data-based domain adaptation. Together, these steps improve the NMT model's performance, yielding a 41.22% increase over the baseline BLEU score.
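To make the BLEU-score improvement concrete, the sketch below computes a simplified sentence-level BLEU (uniform n-gram weights with a brevity penalty) and the relative improvement over a baseline. The function names `bleu` and `pct_increase` are illustrative assumptions, not the thesis's actual evaluation code; published evaluations typically use a standard implementation such as sacreBLEU.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of modified
    n-gram precisions (n = 1..max_n) times a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = ngrams(cand, n)
        ref_counts = ngrams(ref, n)
        overlap = sum((cand_counts & ref_counts).values())  # clipped matches
        total = max(sum(cand_counts.values()), 1)
        # Epsilon-smooth zero precisions so the log is defined.
        p = overlap / total if overlap > 0 else 1e-9
        log_precisions.append(math.log(p))
    # Brevity penalty: penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)

def pct_increase(baseline, improved):
    """Relative improvement over a baseline score, in percent."""
    return (improved - baseline) / baseline * 100
```

For example, a baseline BLEU of 20.0 improved to 28.244 corresponds to a 41.22% relative increase, the form of improvement reported above (the specific scores here are made up for illustration).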

Summary for Lay Audience

In an age of innovation driven by technology, the number of patent applications filed worldwide is growing, according to the World Intellectual Property Organization (WIPO). In 2022, the UN reported that patent applications rose to more than 278,000, and according to Carsten Fink, chief economist at WIPO, 2022 "represents the 13th year of uninterrupted growth" [1]. With patent applications and global connections increasing at a steady rate, the need for patent translation using machine translation (MT) systems also grows. Translation in any field is a complex problem that requires deep knowledge of the natural language pair involved. Simply applying language rules to translate text does not produce accurate or acceptable translations, because language is a complex, nuanced system shaped by cultural, social, and historical factors. Automated translation is even more difficult for patent documents, which are highly technical and often contain legal terminology. To translate a patent, the translator must be well versed in the relevant legal jargon and technical domain.

Due to advancements in the field of MT, especially in neural machine translation (NMT), patent translation has seen growing interest. Currently, the most common NMT architectures are the transformer model, recurrent neural networks (RNNs), and the encoder-decoder architecture, or variants of the three. This study uses a pre-trained transformer model with an encoder-decoder architecture and attention mechanisms. A further challenge stems from low-resource domains, where the translation model is unable to learn highly technical words and phrases. One technique that aims to remedy this is data-based domain adaptation, which leverages the value of data to train the model to perform well in its target domain.
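A common first step in data-based domain adaptation is selecting in-domain sentence pairs from a larger parallel corpus before fine-tuning. The stdlib-only sketch below illustrates one simple heuristic: scoring each source sentence by its vocabulary overlap with a small seed corpus of in-domain (e.g., patent) text. The function names and the overlap heuristic are assumptions for illustration, not the thesis's actual method; real pipelines often use stronger measures such as language-model cross-entropy difference.

```python
from collections import Counter

def domain_score(sentence, domain_vocab):
    """Fraction of a sentence's tokens that appear in the in-domain vocabulary."""
    tokens = sentence.lower().split()
    if not tokens:
        return 0.0
    return sum(t in domain_vocab for t in tokens) / len(tokens)

def select_in_domain(parallel_pairs, seed_corpus, threshold=0.5):
    """Keep (source, target) pairs whose source side sufficiently
    overlaps the vocabulary of a seed in-domain corpus."""
    domain_vocab = set()
    for sent in seed_corpus:
        domain_vocab.update(sent.lower().split())
    return [(src, tgt) for src, tgt in parallel_pairs
            if domain_score(src, domain_vocab) >= threshold]
```

The selected pairs would then be used to fine-tune the pre-trained transformer so it learns the technical and legal terminology of the patent domain.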

References:

[1] United Nations. (2022). Patent filings hit a record high in 2022, UN agency reveals. Retrieved April 30, 2023, from https://news.un.org
