by Globalization Partners International
The modern history of automated translation begins primarily in the post-World War II and Cold War era, when the race for information and technology motivated researchers and scholars to find a way to translate information quickly.
In 1949, the American mathematician and scientist Warren Weaver circulated a memorandum to his peers outlining his belief that a computer could render one language into another using logic, cryptography, frequencies of letter combinations, and linguistic patterns. Fueled by the exciting potential of this concept, universities launched research programs in MT, work that eventually gave rise to the field of computational linguistics.
In the 1950s, one such research program, at Georgetown University, teamed up with IBM for an MT experiment. In January 1954, they demonstrated their technology to a keenly interested public. Even though the machine translated only a few dozen sentences from Russian into English, the project was hailed as a success, as it showcased the possibilities of fully automated MT and provoked interest in, and funding of, MT research worldwide.
The optimism of the Georgetown-IBM experiment gave way to pessimism in the 1960s, as researchers and scholars grew frustrated at the lack of progress in computational linguistics despite heavy funding. In 1966, a special committee formed by the United States government, the Automatic Language Processing Advisory Committee (ALPAC), reported that MT could not and would not ever be comparable to human translation, and was therefore an expensive venture that would never yield usable results.