Development of an English to Yorùbá Machine Translator


I.J. Modern Education and Computer Science, 2016, 11, 8-19
Published Online November 2016 in MECS (http://www.mecs-press.org/)
DOI: 10.5815/ijmecs.2016.11.02
Copyright 2016 MECS

Development of an English to Yorùbá Machine Translator

Safiriyu I. Eludiora
Obafemi Awolowo University, Department of Computer Science & Engineering, Ile-Ife, 220005, Nigeria.
Email: sieludiora@oauife.edu.ng or safiriyue@yahoo.com

Odetunji A. Odejobi
Obafemi Awolowo University, Department of Computer Science & Engineering, Ile-Ife, 220005, Nigeria.
Email: oodejobi@oauife.edu.ng or oodejobi@yahoo.com

Abstract—The study formulated a computational model for the English to Yorùbá text translation process. The modelled translation process was designed, implemented and evaluated, with a view to addressing the challenge of building an English to Yorùbá text machine translator. The machine translator can translate modified and non-modified simple sentences (subject verb object (SVO)). Digital resources in English and their equivalents in Yorùbá were collected using home domain terminologies and lexical corpus construction techniques. The English to Yorùbá translation process was modelled using phrase structure grammar and re-write rules. The re-write rules were designed and tested using the Natural Language Toolkit (NLTK). Parse tree and automata theory based techniques were used to analyse the formulated model. The Unified Modeling Language (UML) was used for the software design. The Python programming language and PyQt4 tools were used to implement the model. The developed machine translator was tested with simple sentences. The results for the Basic Subject-Verb-Object (BSVO) and Modified SVO (MSVO) sentence translations show that the total Experimental Subject Respondents (ESRs), machine translator and human expert average scores for word syllable, word orthography, and sentence syntax accuracies were 66.7 percent, 82.3 percent, and 100 percent, respectively. The system translation accuracies were close to those of a human expert.

Index Terms—Yorùbá language, simple sentences, orthography, experimental subject respondents, human expert, Africa

I. INTRODUCTION

Yorùbá is one of the major languages spoken in Africa. Other languages in this category include Fulfulde, Hausa, Lingala, Swahili, and Zulu. Yorùbá has a speaker population of about 30 million (South West Nigeria only) according to the 2006 population census conducted by the National Population Commission of Nigeria [1]. The Yorùbá language has many dialects, but all speakers can communicate effectively using standard Yorùbá (SY), which is the language of education, mass media, and everyday communication [2].

Yorùbá is a tonal language with three phonologically contrastive tones: High (H), Mid (M) and Low (L). Phonetically, however, there are two additional allotones or tone variants, namely rising (R) and falling (F) [3] and [4]. The Yorùbá alphabet has twenty-five letters comprising eighteen consonants and seven vowels. There are five nasalised vowels in the language and two pure syllabic nasal vowels [3] and [5].

Yorùbá has a well-established orthography which has been in use for over ten decades (since around 1843). Yorùbá is relatively well studied when compared with other African languages, and there is literature on the grammar of the language. The present work is one of the works that have examined machine translation systems in the context of text to text translation technology.

A. Machine Translation Evaluation Techniques

Machine translation system output can be evaluated along numerous dimensions: the intended use of the translation, the characteristics of the MT software, and the nature of the translation process. There are various means of evaluating the performance of machine translation systems. The oldest is the use of human judges to assess a translation's quality. Though human evaluation is time consuming, it is still the most reliable way to compare different MT systems developed using different translation approaches, such as rule-based and statistical approaches. Automated means of evaluation include the Bilingual Evaluation Understudy (BLEU), National Institute of Standards and Technology (NIST) and Metric for Evaluation of Translation with Explicit Ordering (METEOR) metrics [6].

"Reference [7]" explains that "machine translation at its best automates the easier part of a translator's job; the harder, and more time-consuming, part usually involves doing extensive research to resolve ambiguities in the source text, which the grammatical and lexical exigencies of the target language require to be resolved". Such research is a prelude to the pre-editing needed to provide input for machine-translation software such that the output is not meaningless.
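Automatic metrics such as BLEU score a candidate translation against one or more reference translations by n-gram overlap. As a rough, dependency-free illustration (this is a hypothetical sketch, not the evaluation method used in this study, which relied on human judges), a unigram-only BLEU with brevity penalty might look like this:

```python
import math
from collections import Counter

def bleu1(candidate, reference):
    """Unigram precision with BLEU's brevity penalty.

    Simplified sketch: full BLEU combines precisions for n-grams
    of orders 1-4 and supports multiple references.
    """
    cand, ref = candidate.split(), reference.split()
    if not cand:
        return 0.0
    # Clipped unigram matches between candidate and reference
    overlap = sum((Counter(cand) & Counter(ref)).values())
    precision = overlap / len(cand)
    # Penalise candidates shorter than the reference
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

print(bleu1("Adé jẹ oúnjẹ náà", "Adé jẹ oúnjẹ náà"))  # 1.0
print(bleu1("Ade je ounje", "Adé jẹ oúnjẹ náà"))      # 0.0 (no exact matches)
```

Note that exact string matching makes the metric sensitive to Yorùbá diacritics: a word with a missing tone mark or under-dot does not count as a match.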

In certain applications, for example, product descriptions written in a controlled language, a dictionary-based machine-translation system has produced satisfactory translations that require no human intervention [8].

B. Turing Tests of Intelligence

Alan Turing considered the question "Can a machine think?" too meaningless to deserve discussion. He proceeded to reduce the problem to a test of intelligence, in which any device that passes the test can be considered intelligent. It is important to note that Turing believed it possible to develop a computing device that will achieve intelligence and the ability to think. He gave criteria that such a computing device or machine must meet; these criteria are called the "Turing Test of Intelligence" [9].

Underlying the E-YMT system evaluation was the Turing test for intelligence. The purpose of this evaluation was to compare the translations from the system (the English to Yorùbá machine translator) with those of literate Yorùbá speakers in terms of translation accuracy. The E-YMT system performance evaluation was based on word syllable accuracy, sentence orthography, and syntax correctness accuracy. The E-YMT system and ESR translated sentences were compared with those of the Human Expert (a speaker of Yorùbá and professional translator).

Section I introduces the study; related work is discussed in Section II. The system design framework is presented in Section III. Section IV describes the software design and implementation. The system performance evaluation and results discussion are presented in Sections V and VI.

II. RELATED WORK

In this section, the contributions of different researchers are discussed. The methodologies used and the results obtained were considered while reviewing the studies.

"Reference [10]" experiments with the machine translation process from English text to Yorùbá text. The study provided a machine translator that can translate simple sentences.
The simple sentences can be basic subject verb object (SVO) or modified SVO. A rule-based approach was used, and context-free grammar was used for the grammar modelling. Re-write rules were designed and implemented for the two languages.

Translation processes for translating ambiguous English verbs are proposed by [11]. A machine translation system was developed for this purpose. Context-free grammar and phrase structure grammar were used. The rule-based approach was used for the translation process. The re-write rules were designed for the translation of the source language to the target language. The MT system was implemented and tested. For example, "Ade saw the saw" translates to "Adé rí ayùn náà" [11].

"Reference [12]" experiments with the concept of verbs' tone changing. For instance, "Ade entered the house" translates to "Adé wọ ilé". In this case, the dictionary meaning of "enter" in Yorùbá is wọ. This verb takes a low tone, but in the sentence above it takes a mid-tone. The authors designed different re-write rules that can address the various verbs that share these characteristics. The machine translator was designed, implemented and tested with some sentences.

"Reference [13]" researches split verbs as one of the issues in an English to Yorùbá machine translation system. Context-free grammars and phrase structure grammar are used for the modelling. The authors used a rule-based approach and designed re-write rules for the translation process. The re-write rules are meant for split-verb sentences, which the machine translator can translate. For instance, "Tolu cheated Taiwo" translates to "Tolú rẹ́ Táíwò jẹ".

"Reference [14]" proposes alternatives to the use of Ó for he/she/it (the English third person singular) in an English to Yorùbá machine translation system. The Yorùbá language is not gender sensitive, and the authors observed the problem that arises when the identity of the doer/speaker cannot be determined in the target language. The authors proposed different representations for he/she/it.
Kùnrin was proposed for he, Bìnrin was proposed for she, and ǹkan was proposed for it.

"Reference [15]" proposes a rule-based approach for an English to Yorùbá machine translation system. There are three approaches to the machine translation process. The authors reviewed these approaches and settled on the rule-based approach for the translation process. According to the authors, only a limited corpus is available for the Yorùbá language, and this informs the choice of the rule-based approach.

"Reference [16]" proposes a system that can assist in the teaching and learning of Hausa, Igbo, and Yorùbá. The study considers human body part identification, plant identification, and animal names. The English to Yorùbá machine translation and Yorùbá number counting systems are part of the main system. The model was designed to build a system for learners of the three languages. It is on-going research work.

"Reference [17]" proposes a web-based English to Yorùbá machine translation system. The authors considered a data-driven approach to design the translation process. Context-free grammar was considered for the grammar modelling. The Yorùbá language orthography was not properly used in that study.

"Reference [18]" proposes a hybrid approach to translating English to Yorùbá. The paper only itemised the steps the authors will take in the development of the proposed system. The study is on-going.

"Reference [19]" proposes a web-based English to Yorùbá machine translation system for noun phrases. The research work conducted by "Reference [10]" considered simple sentence translation.

"Reference [20]" proposes an English to Yorùbá machine translation system for noun phrases. According to the authors, a rule-based approach and automata theory are used to analyse the production rules. The system is able to translate some noun phrases.

"Reference [21]" proposes four methods to evaluate some systems sponsored by DARPA and some other external systems. The MT systems are compared with human translators' translations. The four methods used are a comprehension test, a quality panel, pre-test adequacy and pre-test fluency. The MT systems evaluated by "Reference [21]" used different approaches (statistical and rule-based) in their development, and the systems' outputs are compared with one another. "Reference [10]" used three different metrics: syllable accuracy, word orthography, and syntax accuracy.

"Reference [22]" proposes manual and automatic evaluation of MT systems. The evaluation was based on some European systems developed using statistical and rule-based approaches. The manual evaluation metrics used are based on adequacy and fluency; the automatic evaluation of the systems is based on the bilingual evaluation understudy (BLEU) approach. "Reference [22]" reported that there are no significant differences in the performance of the systems under BLEU. The authors established that short sentences produce better results than long sentences. In the "Reference [10]" study, short sentences were used and the system was manually evaluated.

III. SYSTEM DESIGN FRAMEWORK

Generally, an MT system translates between text in one natural language (the source language, SL) and text in another natural language (the target language, TL). The translation may be speech-to-speech, speech-to-text, text-to-speech, or text-to-text. The diagram in Figure 1 illustrates the translation paradigms.

Fig.1. The machine translation paradigms

A. The Conceptualisation of the E-Y MT System

Figure 2 illustrates the conceptualisation of the translation process, which includes three individuals, namely: (1) an English speaker, (2) a translator and (3) a Yorùbá speaker. The English speaker is assumed to understand English only, and his aim is to communicate his idea to a Yorùbá speaker. The Yorùbá speaker understands Yorùbá only. The job of the translator is to communicate the idea of the English speaker to the Yorùbá speaker. The E-Y MT system developed mimics a human translator.

Fig.2. Conceptualisation of the English to Yorùbá Machine Translator

B. Abstraction of the English to Yorùbá Machine Translator Model

Figure 3 explains the step by step English to Yorùbá translation process. Figure 3(a) is the English (source language) sentence to be translated. Figure 3(b) is the intermediate representation of Figure 3(a); this shows how the sentence is re-arranged. The word "is" is removed in Figure 3(c). Figure 3(d) is the target language sentence. The word "playing" is two words (ń and gbá) in the target language; they can be written together as ńgbá or separately as ń gbá. The word ń marks the present continuous tense.

Fig.3. English to Yorùbá translation process

C. Re-write Rules' Design

The English and Yorùbá re-write rules for the above sentences are presented below. Word swapping does occur in the target language: rules 2) and 3) are swapped in the target language, because Yorùbá is head first, unlike the English language, which is head last.

1) S → NP VP
2) NP → DET N
3) NP → N DET
4) ADJP → ADJ
5) NP → ADJP N
6) NP → N ADJP
7) NP → N
8) NP → PRN
9) VP → V
10) VP → V NP

Figures 4 and 5 show the state diagrams of the entire translation process. The machine translator can accept a modified subject and object. There is swapping of words in the TL: the determiner and noun can be swapped, and the adjective and noun can be swapped in the Yorùbá language.

Fig.4. State diagram for the English translation process
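Rules of this shape can be checked mechanically. The study used NLTK for this; as a self-contained illustration (a hypothetical sketch, not the authors' implementation), a small CYK-style recogniser over POS-tag sequences can verify which tag sequences the English rules above derive:

```python
from itertools import product

# Binary and unary productions from the English re-write rules above;
# the "terminals" here are POS tags rather than words.
BINARY = {
    ("NP", "VP"): "S",     # rule 1
    ("DET", "N"): "NP",    # rule 2
    ("ADJP", "N"): "NP",   # rule 5
    ("V", "NP"): "VP",     # rule 10
}
UNARY = {
    "ADJ": "ADJP",  # rule 4
    "N": "NP",      # rule 7
    "PRN": "NP",    # rule 8
    "V": "VP",      # rule 9
}

def unary_closure(symbols):
    """Add every symbol reachable via unary rules (e.g. N -> NP)."""
    out = set(symbols)
    changed = True
    while changed:
        changed = False
        for s in list(out):
            parent = UNARY.get(s)
            if parent and parent not in out:
                out.add(parent)
                changed = True
    return out

def recognises(tags):
    """CYK-style recognition: True if the POS-tag sequence derives S."""
    n = len(tags)
    # chart[i][j] holds the non-terminals spanning tags[i:j+1]
    chart = [[set() for _ in range(n)] for _ in range(n)]
    for i, t in enumerate(tags):
        chart[i][i] = unary_closure({t})
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):
                for a, b in product(chart[i][k], chart[k + 1][j]):
                    parent = BINARY.get((a, b))
                    if parent:
                        chart[i][j] |= unary_closure({parent})
    return "S" in chart[0][n - 1]

# "Ade eats the food" -> N V DET N
print(recognises(["N", "V", "DET", "N"]))              # True
# "The man bought a big house" -> DET N V ADJ N (no article kept)
print(recognises(["DET", "N", "V", "ADJ", "N"]))       # True
# A verb phrase alone is not a sentence
print(recognises(["V", "N"]))                          # False
```

The Yorùbá recogniser would differ only in the swapped binary rules, e.g. ("N", "DET") and ("N", "ADJP") on the left-hand sides.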

Fig.5. State diagram for the Yorùbá translation process

D. Parsing

The re-write rules of the two languages were designed and tested using the NLTK tool. The sentences were analysed based on their POS. Re-write rules were designed to translate different sentences. An example used to prove the re-write rules is shown in Figure 6.

Fig.6. Sample of the tested re-write rules using NLTK

E. The E-Y MT System Database Design

This section discusses the E-Y MT system library design. This library can be described as the lexicon or database of the E-Y MT system. There is a need to structure the library from inception to facilitate easy access. Sentences were collected from the home environment for the system's library or database. The library is designed to store the different words collected from the sentences. The POS categories are verbs, nouns, adjectives, adverbs, pronouns and prepositions. The words from the SL sentences are transcribed to their TL equivalents and placed side by side. The lexemes are organised in two ways in the database. A verb in the SL can be regular or irregular. If it is irregular, the word may deviate from the root word, e.g., "go" and "went". In the case of Yorùbá, what changes is the ń (present continuous) or ti (past tense) that can accompany the root word. For example, go → lọ, going → ń lọ, has gone → ti lọ, and went → lọ, as shown in Figure 7. Nouns, adjectives, prepositions, etc. are non-inflecting (regular); the noun mapping is one-to-one, as shown in Figure 8.

Fig.7. Verb database
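The verb table described above can be modelled as a mapping from an English surface form to an aspect marker plus the Yorùbá root, while nouns map one-to-one. A minimal sketch (the entries here are illustrative examples, not drawn from the system's actual database):

```python
# Verb table: English surface form -> (aspect marker, Yorùbá root).
# Only the marker changes; the root lọ ("go") stays fixed.
VERBS = {
    "go":       ("",    "lọ"),
    "going":    ("ń ",  "lọ"),
    "has gone": ("ti ", "lọ"),
    "went":     ("",    "lọ"),
}

# Noun table: one-to-one, no inflection (illustrative entries).
NOUNS = {
    "house": "ilé",
    "food":  "oúnjẹ",
}

def verb_to_yoruba(form):
    """Build the Yorùbá verb form as aspect marker + root."""
    marker, root = VERBS[form]
    return marker + root

print(verb_to_yoruba("going"))     # ń lọ
print(verb_to_yoruba("has gone"))  # ti lọ
print(NOUNS["house"])              # ilé
```

Storing the marker separately from the root keeps each irregular English form aligned with a single regular Yorùbá lexeme, which is the organisation the verb database in Figure 7 describes.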

Fig.8. Noun database

IV. SOFTWARE DESIGN AND IMPLEMENTATION

This section discusses the software design and implementation.

A. Software Design

In this section, the software design, which includes the system flowcharts, is reported. The flowcharts describe the procedures for the software development and explain the step by step software coding. Tagged words are stored in the database. New sentences can be typed; the system checks whether the words are in the database, and if they are, it sends the sentence to the parser module, otherwise the translation process is terminated. The system administrator can add new word(s) to the database. The re-write rules designed for the two languages are used to determine the correctness of the sentence. The system parses the sentence if it agrees with the rules; otherwise, the translation is terminated. Sentences are translated using the SVO sentence structure pattern. The system considers the words that need to be swapped, re-groups the words and translates the sentence into the TL. The system generates value errors when it cannot translate the sentence. New sentences can be typed when the reset button is pressed; otherwise, the application can be closed. The translation process involves all the steps shown in the flowcharts in Figure 9.

Fig.9. The E-YMT system flowcharts

B. Software Implementation

In this section, the software implementation is reported. The Natural Language Toolkit (NLTK), Python, PyQt4, and a file-based custom database are the tools used for the software implementation. NLTK (a natural language processing library for the Python programming language) was used for the sentence parsing and to determine the correctness of the implemented re-write rules. The phrase structure grammar model was used for the design of the rules. PyQt4 (Python bindings for the Qt GUI framework) was used for the implementation of the graphical user interface (GUI). The GUI has three panes.
The first pane is where the English text is typed. The second pane shows the word for word transcription. The third pane shows the translated sentence(s).
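The translation flow in the design above — look up each word in the lexicon, tag it, swap the determiner and noun into the head-first target order, and emit the Yorùbá words — can be sketched as follows (the lexicon entries are illustrative assumptions, not the system's database):

```python
# Hypothetical word-for-word lexicon: English word -> (Yorùbá word, POS).
LEXICON = {
    "ade":  ("Adé", "N"),
    "eats": ("jẹ", "V"),
    "the":  ("náà", "DET"),
    "food": ("oúnjẹ", "N"),
}

def translate(sentence):
    """Word-for-word substitution followed by DET/N swapping
    (re-write rules 2 and 3): a minimal sketch of the pipeline."""
    # Lookup raises KeyError for unknown words, mirroring the system's
    # behaviour of stopping when a word is not in the database.
    tagged = [LEXICON[w.lower()] for w in sentence.split()]
    out = list(tagged)
    i = 0
    while i < len(out) - 1:
        # Yorùbá is head first: the determiner follows its noun.
        if out[i][1] == "DET" and out[i + 1][1] == "N":
            out[i], out[i + 1] = out[i + 1], out[i]
            i += 2
        else:
            i += 1
    return " ".join(word for word, tag in out)

print(translate("Ade eats the food"))  # Adé jẹ oúnjẹ náà
```

A fuller version would first validate the tag sequence against the re-write rules before re-ordering, as the flowcharts in Figure 9 require.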

C. The E-Y MT System Outputs

The user types the SL sentence into the first pane and clicks the translate button. If some of the words are not in the library, the system generates errors and asks the user whether he or she wants to add the new words to the library or database, though only the administrator can add new words to the various lexeme databases. Figures 10 and 11 show sample E-Y MT system inputs and outputs. Figure 10 shows a basic subject verb object (BSVO) sentence. Figure 11 shows a modified subject verb object (MSVO) sentence; it has a subject, a verb and an object (the object was modified).

Fig.10. System sample outputs

Fig.11. System sample outputs

V. EVALUATION

The evaluation process involves the questionnaire design, the Expert and Experimental Subject Respondent (ESR) profiles, and the questionnaire administration.

A. Questionnaire Design

The sentences in the instrument (questionnaire) are subdivided into two groups: basic SVO (BSVO) and modified SVO (MSVO) sentences. The BSVO sentences are basic sentences (subject, verb, and object) designed to test the Experimental Subject Respondents (ESRs) on their understanding of the translation of basic sentences. The subject and object can be a noun or pronoun, while the verb can be present or past tense. Ten BSVO sentences are used in the questionnaire. A sentence such as "Ade eats the food" is an example of a sentence in the BSVO instrument.

The MSVO sentences are likewise designed to test the understanding of the ESRs. For example, in "The tall Professor bought the big house", the word "tall" tells more about the Professor and "big" tells more about the house. Ten MSVO sentences are used in the questionnaire. The sentences are designed to determine the understanding of the ESRs in terms of the word syllables, sentence orthography, and syntax of the translated Yorùbá sentences. The Appendix shows the questionnaire details.

B. Expert and Experimental Subject Respondents' Profiles

The age range, sex, hometown, state, educational level, knowledge of Yorùbá orthography and knowledge of machine translation systems are filled in the questionnaire. The returned instruments provided the needed information about each ESR. The ESRs are postgraduate, graduate and undergraduate students. The majority of the ESRs are male and hail from Oyo, Ogun, Ondo, Osun, Ekiti and Lagos states. Their ages range between 20 and 50 years. Their Yorùbá orthography knowledge ranges between adequate and excellent. Some have used a machine translation system before and some have not. The ESRs and the Human Expert are literate Yorùbá speakers.

The human expert (HE) has a Ph.D. in Yorùbá and is a trained translator. He teaches Yorùbá and linguistics. The human expert's translations are used to moderate the scores of the machine and ESR translations during the evaluation.

C. Questionnaire Administration

The instrument comprises twenty (20) sentences administered to thirty (30) ESRs. The distribution was done within Obafemi Awolowo University, Ile-Ife, Nigeria. This environment was suitable because literate Yorùbá speakers were needed to do the translations. The questionnaires were distributed among the Yorùbá ethnic group only. Fifteen ESRs returned the questionnaires. The evaluation was carried out on the 20 translated sentences for the fifteen ESRs, excluding the machine translated sentences.

D. The E-Y MT System Evaluation

We evaluated the system by considering some SVO sentences, comparing the ESR and machine translated sentences. The expert translations are used to control the choice of words between the ESRs and the machine. The three criteria used for the evaluation are word syllable accuracy, word orthography, and sentence syntax correctness. Word syllable accuracy means that a word in a sentence is properly tone marked and under-dotted. For example, the word Tolú has two syllables: To, with a mid-tone on o, and lú, with a high tone on u. We evaluated each word using this approach and scored the ESRs and the machine based on their ability to identify these syllables.

The word orthography accuracy was evaluated by considering each word's orthography within the sentence. The tone marks of some words change within the sentence. For example, in Kúnlé wọ ilé, the verb wọ (enter) changes its tone mark within the sentence. These and some other Yorùbá writing conventions were considered during the evaluation, and the ESR and machine translated sentences were scored accordingly. The sentence syntax correctness was evaluated based on the sentence structure of the target language (TL). We considered the position of each word in a sentence; a well-positioned word attracts a good score.

VI. RESULTS DISCUSSION

This section discusses the results of the evaluation based on three criteria: word syllable accuracy, word orthography accuracy, and sentence syntax accuracy. The BSVO and MSVO translated sentences are evaluated.

A. BSVO Sentences

Translated sentences 1 to 10 are BSVO sentences. The results of the evaluation are discussed based on word syllable accuracy, word orthography accuracy, and sentence syntax correctness.

1) BSVO Word Syllable Accuracy

Figure 12 shows the results of word syllable accuracy for sentences 1 to 10 (BSVO).
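Because a single wrong tone mark or missing under-dot makes a word differ from the expert's form, the per-word scoring described above can be approximated by exact string comparison against the expert reference. A hypothetical sketch (the manual scoring in the study was finer-grained, working syllable by syllable):

```python
def word_accuracy(candidate, reference):
    """Fraction of reference positions where the candidate word matches
    the expert reference exactly, diacritics included. A simplified
    stand-in for the per-word orthography scoring described above."""
    cand, ref = candidate.split(), reference.split()
    if not ref:
        return 0.0
    hits = sum(c == r for c, r in zip(cand, ref))
    return hits / len(ref)

# A missing tone mark or under-dot counts as a miss:
print(word_accuracy("Adé jẹ oúnjẹ náà", "Adé jẹ oúnjẹ náà"))  # 1.0
print(word_accuracy("Ade je oúnjẹ náà", "Adé jẹ oúnjẹ náà"))  # 0.5
```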
Syllable accuracy means using appropriate tone marks on a word.

Fig.12. BSVO word syllable accuracy

Here, each word is evaluated for its syllable accuracy. For example, the word Tolú has two syllables, that is, To (mid-tone) and lú (high tone). It is observed that the machine translator average score is higher than the experimental subject respondents' (ESRs') average score. There were three instances where the machine translator score was equal to that of the Human Expert, as shown in Figure 12, and no instances where the ESRs' average score was equal to that of the Human Expert. The respondents' average score is 44.8 percent, the machine translator average score is 75.8 percent and the Human Expert average score is 100 percent.

2) BSVO Word Orthography Accuracy

Figure 13 shows the results of word orthography accuracy for sentences 1 to 10 (BSVO). The word orthography evaluation considers the correctness of the tone and under-dot marking. It is observed that the machine score is higher than that of the experimental subject respondents (ESRs). There are three instances where the machine's scores are equal to those of the Human Expert, as shown in the graph, and no instance where the ESRs' average score is equal to that of the Human Expert. The respondents' average score is 68.7 percent, the machine translator average is 81.9 percent and the Human Expert average score is 100 percent.

Fig.13. BSVO word orthography accuracy

3) BSVO Sentence Syntax Accuracy

Figure 14 shows the results of syntax correctness for sentences 1 to 10 (BSVO). It is observed that the machine translator scores are the same as those of the Human Expert, and the experimental subject respondents' (ESRs') average scores are lower than those of the machine and the Human Expert. There is no instance where the ESRs' scores equal those of the Human Expert, as shown in Figure 14. The ESRs' average score is 86.5 percent, while the machine translator average and the Human Expert average score are both 100 percent.

Fig.14. BSVO sentence syntax accuracy

B. MSVO Sentences

Translated sentences 11 to 20 are MSVO sentences. The results for word syllable accuracy, word orthography, and syntax correctness are discussed.

1) MSVO Word Syllable Accuracy

Figure 15 shows the results of word syllable accuracy for sentences 11 to 20 (MSVO). It is observed that the machine translator average score is higher than the experimental subject respondents' (ESRs') average score and lower than the Human Expert's average score. There are three instances where the machine's scores are equal to those of the Expert, and two instances where the machine translator scored 0 percent while the ESRs scored 21 percent and 45 percent, as shown in Figure 15; there are no instances where the ESRs' average score is equal to that of the Human Expert. The respondents' average score is 37.0 percent, the machine translator average is 86.7 percent and the Expert average score is 100 percent. However, it was noticed that respondents 5, 7, 9, 10, 13, and 14 did not tone mark the words. This is responsible for the performance recorded by the ESRs.

Fig.15. MSVO word syllable accuracy

2) MSVO Word Orthography Accuracy

Figure 16 shows the results of word orthography correctness for sentences 11 to 20 (MSVO).
The resultdiscussion focused on modified SVO sentences 11 to 20.It is observed that the machine score is higher than that ofthe experimental subject respondents (ESRs). There arefour instances where the scores of the machine are equalsto that of the Human Expert as it is shown in Figure 16and no instances where the score of ERSs' average scoreis equal to that of the Human Expert. There are twoinstances where the ESRs scored 60 percent and 70percent and machine translator scored 0 percent. Therespondents‟ average score is 65.8 percent, the machinetranslator average is 87.5 percent and the Human Expertaverage score is 100 percent.However, it was noticed that respondents 5, 7, 9, 10,13, and 14 did not tone mark the sentences. This isresponsible for the performance recorded by these ESRs.This reduces the ESRs average scores. Also, the machinewas unable to translate sentences 17 and 18. The wordsused in the instrument are nouns only instead of adjectiveand noun. The machine could not use the re-write rules totranslate the sentences.I.J. Modern Education and Computer Science, 2016, 11, 8-19

Fig.16. MSVO word orthography accuracy

3) MSVO Sentence Syntax Accuracy

Figure 17 shows the results of syntax correctness for sentences 11 to 20 (MSVO). It is observed that the machine score is higher than that of the experimental subject respondents (ESRs). The machine translator scores are the same as those of the Human Expert, except for three instances where the machine translator scored below the Human Expert. The ESRs' average scores are lower than those of the machine translator and the Human Expert, except for the two instances where the machine scores are zero (0) percent. There are no instances where the ESRs' scores are equal to those of the Human Expert, as shown in Figure 17. The ESRs' average score is 82.4 percent, the machine translator average score is 78.2 percent and the Human Expert average score is 100 percent.

Fig.17. MSVO sentence syntax accuracy

VII. CONCLUSION

The English to Yorùbá machine translator was developed to make the Yorùbá language available to a larger audience. The study presented the issues that affect the translation of English to Yorùbá and its underlying principles. The system was developed to enhance the learning of the Yorùbá language. It is user-friendly and allows learners to learn the language at ease. This work is a foundation for further work, and other studies will be integrated with this study. The evaluation focused on translation accuracy, and the metrics considered are word syllable accuracy, word orthography accuracy, and sentence syntax accuracy. A translation fluency evaluation will be performed in future work.

REFERENCES

[1] National Population Commission, 2006 census. URL: www.population.gov.ng (accessed: 25/06/2012).
[2] Adewole, L. O. (1988) The Categorial Status and the Functions of the Yorùbá Auxiliary Verbs with Some Structural Analysis in GPSG, University of Edinburgh, Edinburgh.
[3] Bamgbose A. (1966) A
