TEM Journal. Volume 10, Issue 1, Pages 82-94, ISSN 2217-8309, DOI: 10.18421/TEM101-11, February 2021.

ABSA: Computational Measurement Analysis Approach for Prognosticated Aspect Extraction System

Maganti Syamala, N. J. Nalini
Department of Computer Science and Engineering, Annamalai University, Annamalai Nagar, Chidambaram, Tamil Nadu 608002, India

Abstract – Aspect based sentiment analysis (ABSA) is identified as one of the current research problems in Natural Language Processing (NLP). Traditional ABSA requires manual aspect assignment for aspect extraction and sentiment analysis. In this paper, to automate the process, a domain-independent dynamic ABSA model built by fusing an Efficient Named Entity Recognition (E-NER) guided dependency parsing technique with Neural Networks (NN) is proposed. The aspects and sentiment terms extracted by E-NER are used to train a Convolutional Neural Network (CNN) using the word embeddings technique. Aspect-category-based polarity prediction is evaluated using the NLTK VADER sentiment package. The proposed model was compared to a traditional rule-based approach, and the proposed dynamic model yielded better results by 17% when validated in terms of correctly classified instances, accuracy, precision, recall and F-score using machine learning algorithms.

Keywords – Aspect, Category, Extraction, Dependency Parsing, Domain-Independent, Dynamic, Named Entity Recognition, Polarity, Prediction, Sentiment.

Corresponding author: Maganti Syamala, Department of Computer Science and Engineering, Annamalai University.
Received: 13 September 2020; Revised: 26 December 2020; Accepted: 04 January 2021; Published: 27 February 2021.

© 2021 Maganti Syamala & N. J. Nalini; published by UIKTEN. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 License. The article is published with Open Access at www.temjournal.com.

1. Introduction

With the wealth of information available on the World Wide Web (WWW) and the evolution of text analytics, NLP has made it easy for individuals to make purchase decisions. Sentiment analysis, also known as opinion mining, is a field of study in NLP which helps to analyze an individual's feelings or opinions. These opinions are commonly expressed as positive, neutral or negative [16]. ABSA helps to mine raw text to obtain the sentiment for the key topic phrases. It tends to provide word context to explain the topic phrases in the given input text. Aspect based sentiment analysis (ABSA), also known as topic based sentiment analysis, extracts the sentiment of an attribute with respect to a specific topic in a document [6]. The reasons that state the need for ABSA are a) to find actionable information, which requires fine distinctions that filter what specifically people like vs. dislike, and b) to discover a semantic landscape that characterizes a text corpus. Beyond this purpose, it also faces a set of challenges: finding informative topics in the presence of noisy data, understanding the topic sentiment with respect to the exact word attributes [17] most relevant to the context, and classifying the sentiment of the topics. In ABSA, aspect extraction is an important task; an aspect is generally a topic, which can further be treated as an n-gram of size greater than 2. This topic represents a syntactically well-formed phrase containing an NP or VP and a semantically informative topic over the given corpus. Implementation of ABSA includes two different subtasks: aspect identification and aspect sentiment classification.
The process of aspect identification includes aspect extraction, which can be carried out using NLP tools, machine learning (ML) and deep learning (DL) techniques [12]. Aspect based sentiment classification aims to classify the sentiment of the target aspect by its polarity and uses machine learning algorithms for predicting the test data.

Earlier research on aspect-based sentiment analysis used NLP tools such as rule-based approaches, supervised machine learning algorithms such as Latent Dirichlet Allocation (LDA), and neural networks [4], [15], and ended up with certain limitations. Rule-based approaches like the part-of-speech (POS) tagger extract nouns and noun phrases as aspect terms. They fail to handle noisy data, extracting nouns that should not be treated as aspects and leading to the tagging of incorrect aspects. Similarly, LDA, a probabilistic machine learning algorithm, uses a probabilistic measure for aspect extraction to group similar words related to one context and label them as k topics [9]. Depending on the value of k given as input, LDA extracts that many topics. For example, if k = 3, then three topics are extracted and represented as topic 1, topic 2 and topic 3. However, LDA fails to label the topics automatically according to the context, and it has been observed that it needs manual assignment of labels to the topics. Even neural networks for aspect extraction fail to retrieve aspect terms having similar meaning by encoding the target aspect using LSTM [10]. Further limitations of the existing studies related to ABSA are discussed in Section 3. Figure 1 represents the extracted aspects and their corresponding sentiment when a test query is passed as input to an ABSA model.

Figure 1. Aspect based sentiment analysis mechanism.

Considering the limitations of the existing studies, in this paper an Efficient Named Entity Recognition (E-NER) guided dependency parsing technique combined with Neural Networks (NN) is implemented. Compared with the existing literature on aspect extraction and sentiment analysis, relatively little research has used NN. The success of NN in NLP motivated extracting the sentiment with respect to the aspect category using a dense sequential CNN model. Aspect terms are passed as word vectors using the word embeddings technique. ReLU is used as the activation function for faster learning, and a softmax layer is used as the output layer for displaying the results. The proposed dynamic NN model is then compared to a traditional semi-supervised rule-based approach for aspect category sentiment detection. The traditional rule-based model is limited to working with predefined aspects related to the dataset, which were set statically. The performance is measured to analyse the true positive rate of both techniques, and the experimental results show that the proposed dynamic model performs better. Different machine learning algorithms are used for validating the proposed sentiment prediction model.

The proposed dynamic ABSA model is helpful in recommendation systems for accurate aspect extraction and polarity detection. It discovers aspects to explain the sentiment of a topic related to the given context. It can also be used to summarize a variety of corpora irrespective of noise in the raw data. The rest of the paper is organized as follows: Section 2 presents the existing literature on aspect-based sentiment analysis, Section 3 describes the different existing aspect extraction techniques, Section 4 describes the proposed methodology, Section 5 presents the experimental analysis and Section 6 summarizes with the conclusion.

2. Related Work
For implementing aspect-based sentiment analysis, studies have been carried out using rule-based approaches and machine learning techniques [25], [26]. Inspired by the existing studies and approaches in sentiment analysis, in this paper a fine-grained procedure for ABSA is proposed. The way aspects are extracted in previous research works, together with the pros and cons, is briefly detailed below.

Nadeem Akhtar et al. proposed a text-summarization-based ABSA model on hotel reviews [1]. A set of predefined aspects, as categories related to the hotel domain, is identified, and LDA is used for grouping the terms into the predefined categories by probabilistic values. It aims to make the job of a visitor easy in identifying the pros and cons of a hotel rather than wasting time reading thousands of reviews.

Deepa Anand et al. developed a twofold classification scheme for ABSA without making use of the labelled data concept [3]. The overhead of manually constructing labelled data is avoided here. Rather than focusing on subjectivity, a filtering mechanism for extracting the relevant terms is applied. The ultimate goal is to group the filtered sentences that fall under the same category.

Guo et al. proposed a new ranking method for finding the aspects of alternative products based on the consumer's preference [5]. LDA is used for assigning weights to the aspects in order to calculate the sentiment of the objective value of the product. A directed graph and an improved page ranking algorithm are proposed to derive the final score of each product. This model serves as a personalized recommendation for consumers.

Liu et al. proposed an Attention-based Sentiment Reasoner for aspect-based sentiment analysis [8], named AS-Reasoner for short, a multi-layered neural network. Aiming to design a model that works much like a human reasoner, they assigned a degree of importance to the words treated as aspects in a sentence for capturing the sentiment expression. They designed an intra-attention network for capturing similar sentiments between words and assigning weights, and a global attention network to assign weights to the sentiment for classifying its polarity with respect to the aspect. The pros of this model are that AS-Reasoner performs aspect-based sentiment analysis at both the aspect target level and the aspect category level, and that the model is language independent. However, as the aspect target is encoded as a pre-set vector by assigning weights and the aspect category is generalized, it is only applicable to domain-specific data.

Sebastian Ruder et al. proposed a hierarchical bidirectional LSTM model for ABSA, tested on reviews from five domains [11]. They compared the proposed model with two non-hierarchical models, LSTM and bidirectional LSTM, and showed that the proposed model obtains better results. They identified the entities and attributes in a sentence by taking the average of the entity and attribute embeddings for the purpose of aspect representation.

Sophie de Lok et al. proposed an aspect-based sentiment analysis on restaurant review data from SemEval 2016 using ontology features [13]. They defined three main ontology classes, with a sentiment class labelled as the super class. In the base class, they connected all the related entities as aspect categories by annotation. In the sub class, using a generic positive/negative property, they divided the related terms based on polarity and grouped them with their entities. Finally, the sentiment super class predicts the polarity with respect to the aspect category.
For sentiment classification, an SVM model is used, and the model performance was compared to two variants of SVM, i.e., linear binary SVM and multi-class SVM with an RBF kernel.

Soujanya Poria et al. proposed a Sentic LDA framework for ABSA [14], in which related context terms are grouped as clusters and each cluster represents an aspect category. Using a majority-based criterion, labels are assigned as aspect categories to the grouped clusters. They assigned the labels manually by identifying the highest-probability terms in each cluster.

Muhammad Touseef Ikram et al. proposed a model to extract hidden patterns in the sentences of a wide variety of papers with thousands of citations [18]. Using the patterns of opinionated phrases in the citation sentences, it extracts the aspects with a linguistic rule-based approach. The SentiWordNet lexicon is used for detecting the sentiment polarity of the extracted aspects via a sentiment polarity score. The aspects considered for research findings are technically defined in relation to the domain, such as "methodology", "performance", "corpus", "study", "measure" and "results". Finally, prediction on test data is performed using various machine learning classification techniques.

Wanxiang Che et al. proposed a sentiment sentence compression model called Sent Comp [19]. In order to make the task of a user easy, a fine-grained ABSA is defined by identifying the polarities in user comments with respect to their aspects. The highlights of the paper show that it removes sentiment-unrelated information by using a discriminative conditional random field model, including special features such as perception and potential semantics. For the automatic sequence labelling task, a Conditional Random Field (CRF) model is used. Analysis is performed on four different product domains of a Chinese corpus.

Wenya Wang et al. proposed a novel unified framework integrating recursive neural networks with conditional random fields (CRF) to co-extract explicit aspects and opinion terms [20].

As opinion terms are not restricted to certain POS tags or to some selected opinion terms, the framework combines the advantages of DT-RNN and CRFs, making the proposed model more flexible compared to traditional rule-based approaches.

J. Yang et al. proposed a novel method for ABSA called ME-ABSA (multi-entity ABSA) [22], where two techniques, CEA and DT-CEA, are used to carry out this task. The main goal is the prediction of sentiment polarity in the sentence with respect to each entity, aspect and their combinations. To achieve this, a Context, Entity and Aspect memory method called CEA is designed. CEA uses an interaction layer and a position attention layer with an RNN to fuse entity, aspect and context information, taking the entity and aspect vectors as input and performing entity-wise multiplication and concatenation. As a further improvement, a Dependency-Tree CEA (DT-CEA) is developed, which uses a dependency tree for mapping entity, aspect and context information. For predicting the sentiment polarity, a softmax layer is used with respect to the given entity, aspect and their combination.

Ye Yiran et al. used Latent Dirichlet Allocation (LDA) for topic modeling on Amazon mobile phone reviews [24]. Within LDA, a probabilistic model clusters words into k topics based on the probabilistic values of the words in a sentence. In this model, they set the value of k to half the number of terms considered, so that the aspects can be clustered into k topics. For each term in the k topics, they assigned weights by probability value, and the term with the highest value is selected as the label of the respective topic. For aspect sentiment classification, they detected the emotion of the input sentence using a domain-specific lexicon and calculated the sentiment score. Later, by mapping the topic weights extracted by LDA to the sentiment scores, the sentiment of the aspect is predicted. It has been observed that grouping terms into k topic clusters can leave the same terms repeated in different topics, which may affect the false positive rate in sentiment classification. Also, this work was limited to only three predefined aspects: display screen, battery life and camera quality.

From the existing studies, it is noticed that most of the work carried out in aspect-based sentiment analysis requires either labelled aspects in the training dataset or static assignment of terms relevant to the domain. All these processes need training of the input data, which takes a huge amount of time for finding the opinion-oriented aspects associated with the domain, and they are limited to extracting aspects that provide only a minimal amount of the required information.

3. Aspect Extraction Techniques

Feature or aspect extraction is treated as a sub-task in the process of information extraction. It includes several sub-tasks such as entity extraction, event extraction, named entity extraction and relation extraction. In ABSA, aspect extraction is crucial for finding the aspect on which the opinion is expressed [23]. There exist different mechanisms to analyze and extract the aspects from a given text. Based on the way the aspects are extracted, aspect extraction techniques are classified as Traditional Aspect Extraction (TAE) and Open Aspect Extraction (OAE).
In depth, the TAE mechanism is classified into unsupervised aspect extraction, semi-supervised aspect extraction, and supervised aspect extraction. The classification of aspect extraction techniques is represented in Figure 2.

Figure 2. Aspect extraction techniques classification.

3.1. Traditional Aspect Extraction

Traditional Aspect Extraction mechanisms use pre-defined trained data to derive the relations and results.

3.1.1. Unsupervised Aspect Extraction

Unsupervised aspect extraction methods are those in which the aspects are extracted by analyzing the structure of the document, without any need for training data [2]. Rule-based methods such as Parts-of-Speech (POS) tagging, the Bag-of-Words (BOW) model and Term Frequency-Inverse Document Frequency (TF-IDF) are treated as some examples of unsupervised aspect extraction methods.

3.1.1.1. Parts-of-Speech (POS) Tagging

POS tagging finds the occurrence and co-occurrence of nouns and noun phrases. It uses n-grams, POS tags and WordNet for generating patterns and extracting aspect terms by filtering the nouns and noun phrases. The way the POS tagger works is analyzed in Figure 3.

Figure 3. Analysis of Parts-of-Speech tagging.

3.1.1.2. Bag-of-Words (BOW) Model

The Bag-of-Words model counts the occurrences of each word in the given text and represents each word as a feature column in a text vector. It generates too many aspects; to overcome this problem, a selection measure based on n-gram frequency (high, medium or low) is applied [7]. High-frequency selection includes stop words, low-frequency selection overfits the data, and medium-frequency selection tends to yield good n-grams. A word sequence is represented as in Eq. (1), and the bigram and n-gram approximations are given in Eq. (2) and Eq. (3):

Word sequence: $W_1^n = W_1 \ldots W_n$   (1)

Bigram approximation: $P(W_1^n) \approx \prod_{k=1}^{n} P(W_k \mid W_{k-1})$   (2)

N-gram approximation: $P(W_1^n) \approx \prod_{k=1}^{n} P(W_k \mid W_{k-N+1}^{k-1})$   (3)

3.1.1.3. Term Frequency-Inverse Document Frequency (TF-IDF) Approach

The TF-IDF approach finds the frequency of each word in a document d. TF-IDF measures the weight of each word and represents the word by its weight in a vector space. It selects the most frequently appearing words based on word weight. TF-IDF can be calculated using the formula given in Eq. (4):

$\mathrm{tfidf}(d_i, v_j) = \dfrac{\mathrm{count}(d_i, v_j)}{\sum_{v=1}^{|d_i|} \mathrm{count}(d_i, v)} \cdot \log \dfrac{n}{|\{ d_l \in D : v_j \in d_l \}|}$   (4)

3.1.2. Semi-Supervised Aspect Extraction

Semi-supervised methods are initially treated as unsupervised, with no training data. Depending on the context, they can be trained to retrieve nearly-labelled training data, after which they are called semi-supervised. Latent Dirichlet Allocation (LDA) and Opinion Target Extraction (OTE) are examples of semi-supervised approaches.

3.1.2.1. Latent Dirichlet Allocation (LDA) Topic Modeling

LDA is a topic-modeling technique which groups the words in a document as a mixture of a fixed number of topics. Each topic is a cluster obtained by means of a probability distribution and represents an aspect. Bayesian probability is used as the probabilistic measure to group the words directly into topics. The topic distributions obtained can be passed as feature vectors to supervised classification models for sentiment prediction, as given in Eq. (5):

$P(z_t \mid w) \propto (\alpha_t + n_{t|d}) \cdot \dfrac{\beta + n_{w|t}}{\beta V + n_{\cdot|t}}$   (5)

Probabilistic Latent Semantic Analysis (pLSA) is another semi-supervised topic modeling technique used for aspect extraction through a probability distribution function.
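As an illustration of the extraction methods above, the following minimal sketch builds Bag-of-Words and TF-IDF representations and fits an LDA topic model; it assumes scikit-learn, and the sample reviews and the choice of k = 3 topics are purely illustrative rather than taken from the paper's experiments.

from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Illustrative review sentences (placeholders).
reviews = [
    "The battery life of this phone is excellent",
    "The camera quality is poor in low light",
    "Battery drains fast but the display is sharp",
]

# Bag-of-Words: each feature column counts one word or n-gram (Eqs. 1-3 context).
bow = CountVectorizer(ngram_range=(1, 2))
bow_matrix = bow.fit_transform(reviews)

# TF-IDF: words are weighted by term frequency and inverse document frequency (Eq. 4).
tfidf_matrix = TfidfVectorizer().fit_transform(reviews)

# LDA: group words into k latent topics by probability distribution (Eq. 5).
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topic = lda.fit_transform(bow_matrix)

# Highest-probability words per topic; note that the topics come back unlabeled.
terms = bow.get_feature_names_out()
for t, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print("topic", t, ":", top)

The resulting document-topic matrix is the kind of topic distribution that, as noted above, can be passed as a feature vector to a supervised classifier, while the printed topics remain unlabeled, which is the limitation that motivates the proposed dynamic model.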

3.1.2.2. Opinion Target Extraction (OTE) Approach

Assigning an opinion target word related to the context and exploring its relation with sentiment words helps to detect the aspect. Depending on the context, target-dependent terms are assigned and the related terms are extracted from the input data. In the case of neural networks, the opinion target in the given sentence is assigned as a vector indicating the aspects and is passed to the input layer for aspect sentiment classification [21].

3.1.3. Supervised Aspect Extraction

Supervised aspect extraction, along with the task of extracting a relation, helps to predict the relation. A trained model consisting of labelled data is used as input. Machine learning classification algorithms such as SVM, Naïve Bayes and decision trees fall under this category of aspect extraction. The attribute measure for feature extraction can be computed using the Gini index given in Eq. (6):

$\mathrm{Gini}(t) = 1 - \sum_{i=1}^{n} p(C_i \mid t)^2$   (6)

3.2. Open Aspect Extraction

In Open Aspect Extraction, all relations are derived irrespective of the context, without using any pre-defined data.

4. Proposed Methodology

In this paper, to mitigate the drawbacks of the existing studies, an integrated domain-independent real-time aspect-based sentiment analysis model is proposed. The proposed domain-independent dynamic aspect category model is implemented by fusing an Efficient Named Entity Recognition (E-NER) guided dependency parsing technique with Neural Networks (NN). For effective aspect extraction and sentiment classification, the proposed model includes the following sub-tasks: 'A'spect term extraction, 'A'spect category detection, 'A'spect-related sentiment word filtering and 'A'spect sentiment classification. It is simplified as a 3 A's model, where 'A' stands for Aspect. The proposed methodology is illustrated in Figure 4 and is briefly discussed in the following subsections; a high-level sketch of the overall pipeline is given below.

Figure 4. Proposed domain-independent dynamic ABSA.
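To make the flow of Figure 4 concrete, the skeleton below sketches the 3 A's pipeline as three plain Python functions; the function names are invented for illustration, spaCy and NLTK VADER stand in for the E-NER and polarity components, and the aspect-category step is left as a placeholder for the CNN of Section 4.3.2.

# High-level sketch of the proposed 3 A's pipeline (illustrative names and stand-ins).
import spacy
from nltk.sentiment.vader import SentimentIntensityAnalyzer  # requires nltk.download("vader_lexicon")

nlp = spacy.load("en_core_web_sm")
sia = SentimentIntensityAnalyzer()

def extract_aspect_terms(review: str) -> list:
    """Aspect term extraction (Section 4.3.1): nouns from the parsed review."""
    return [tok.text for tok in nlp(review) if tok.pos_ in ("NOUN", "PROPN")]

def detect_aspect_categories(aspect_terms: list) -> list:
    """Aspect category detection (Section 4.3.2): placeholder for the trained CNN."""
    return aspect_terms

def classify_aspect_sentiment(review: str, aspect: str) -> float:
    """Aspect sentiment classification (Section 4.3.3): VADER compound score of the
    sentences that mention the aspect."""
    sents = [s.text for s in nlp(review).sents if aspect.lower() in s.text.lower()]
    return sia.polarity_scores(" ".join(sents) or review)["compound"]

review = "The battery life is great but the camera is disappointing"
for aspect in detect_aspect_categories(extract_aspect_terms(review)):
    print(aspect, classify_aspect_sentiment(review, aspect))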

4.1. Dataset

The cell phone reviews dataset used to test the results was collected from the Kaggle website. It contains 33,000 reviews, and the fields in the dataset are shown in Figure 5.

Figure 5. Fields in the cell phone reviews dataset.

4.2. Pre-processing

To process the text in a simple way, it is necessary to convert the given text into an analyzable form, which makes the prediction or classification task easy. This process of filtering unwanted and unnecessary information from the text is known as pre-processing. The task involves different techniques, and Natural Language Processing (NLP) plays a major role in it. In this paper, for pre-processing, an "nlp" object from the Python library spaCy is used to create documents that include linguistic annotations.

4.2.1. Tokenization

Tokens are the individual words extracted from the given input text for processing. As in most cases the input to an algorithm is a word token, it is necessary to extract the useful tokens from a long string or sentence. Tokenization is the process of breaking the given input sentence into meaningful words, called tokens. This process reduces the size of the input text by eliminating unnecessary punctuation marks, spaces, etc.

4.2.2. Stop-Words Removal

Most function words in the input text, such as articles and pronouns, are treated as stop-words; they do not provide any useful information and instead make the data high-dimensional, affecting the classification results. The is_stop attribute from the spaCy nlp library is used to filter the necessary words by keeping tokens whose is_stop value is False.

4.2.3. Lemmatization or Stemming

In order to remove ambiguity in the text, it is necessary to identify the words that share the same meaning and convert them into their simplest form by removing suffixes such as -ing, -ies and -ion.

Algorithm 1: Pre-Processing
Input: Sentence or a document
Output: Pre-processed data
// Tokenization
1. data = "Given input sentence / input document path"
2. nlp = English()  // load the English class using the nlp object
3. my_token = nlp(data)
4. Create an empty list called token_list to store the extracted tokens
5. for each token in data do:
6.     Append the extracted token as token_list.append(token)
7. end for
8. Print all the extracted tokens
// Stop-words removal
9. stop = nlp(data)
10. for each foundword in stop do:
11.     if foundword.is_stop == FALSE: append foundword to the list filter_stop[]
12.     end if
13. end for
14. Print the values in the list filter_stop[]
// Lemmatization or stemming
15. for each stemmedword in data do:
16.     Print the stemmed word using stemmedword.lemma_
17. end for
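A minimal spaCy version of Algorithm 1 is sketched below; the variable names mirror the pseudocode, the spaCy attributes (is_stop, is_punct, lemma_) are standard API, and the sample sentence is only an illustration.

# Sketch of Algorithm 1 (pre-processing) with spaCy.
import spacy

nlp = spacy.load("en_core_web_sm")        # English pipeline with linguistic annotations
data = "The battery life of this phone is amazing and charging is fast"
doc = nlp(data)

# Tokenization: split into tokens, dropping punctuation marks and spaces.
token_list = [tok.text for tok in doc if not (tok.is_punct or tok.is_space)]

# Stop-word removal: keep only tokens whose is_stop value is False.
filter_stop = [tok.text for tok in doc if not tok.is_stop and not tok.is_punct]

# Lemmatization: reduce words to their base form (strips -ing, -ies, -ion, ...).
lemmas = [tok.lemma_ for tok in doc if not tok.is_punct]

print(token_list)
print(filter_stop)
print(lemmas)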

4.3. Domain Independent Dynamic Aspect Based Sentiment Analysis

The proposed dynamic model for aspect term extraction and aspect category extraction is a fusion of the Efficient Named Entity Recognition (E-NER) guided dependency parsing technique and a Neural Network (NN) model. In this paper, a domain-independent dynamic ABSA algorithm is implemented with the capability to extract tokens as aspects using the E-NER guided dependency parsing technique. As a testing measure, a comparative analysis with the traditional rule-based approach is performed, in which patterns are generated at sentence level and multi-word level. The proposed algorithm includes three stages to model the training data for performing domain-independent dynamic aspect-based sentiment analysis: i) aspect term extraction, ii) aspect category detection, and iii) aspect-related sentiment word filtering and aspect sentiment classification.

4.3.1. Aspect Terms Extraction

A noun qualifying as an entity is treated as an aspect in a sentence, and the noun described by the corresponding adjective is extracted as an aspect for sentiment analysis. Aspects are often referred to by different names such as object, entity, feature or attribute. In general, aspect term extraction requires certain grammatical rules to train the input, and in NLP there are a number of ways to process this. In the proposed system, to make this task easy and efficient, an E-NER guided dependency parsing mechanism is used to extract the most relevant nouns as aspects from the given input. The E-NER guided dependency parsing uses a POS tagger instead of generating patterns like the rule-based approach. It identifies the aspects as a "tag aspect" by dependency parsing with respect to the connected adjective. Dependency parsing helps to know the role a word plays in the input text and identifies how the words are related to each other. A concrete sketch of this step follows Algorithm 2.

Algorithm 2: Aspect Terms Extraction
Input: Pre-processed review text RT
Output: Aspect terms AT
1. for each review R in RT do
2.     Read the review words RW
3.     Extract noun chunks using E-NER: Tagged_NC
4.     Tagged_NC = POS(E-NER)
5.     return Tagged_NC
6. end for
7. for each Tagged_NC in POS(E-NER) do
8.     E-NER defines a list of noun phrases NP
9.     Create a list for aspect terms AT
10.    for each term T in Tagged_NC do
11.        if POS(T) = 'NN/NNS'
12.            AT.append(T)
13.        else if POS(T) = 'NP: {NN NN | JJ NN}'
14.            AT.append(T)
15.        end if
16.    end for
17. end for
18. return AT
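The sketch below is one concrete reading of Algorithm 2, with spaCy's named entity recognizer and dependency parser standing in for the E-NER component; the specific tags and dependency labels used (NN/NNS, amod, nsubj, acomp) are illustrative assumptions rather than the authors' exact rule set.

# Sketch of Algorithm 2: noun-chunk aspect terms kept when they are named entities
# or are linked to a describing adjective through the dependency parse.
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_aspect_terms(review_text: str) -> list:
    doc = nlp(review_text)
    aspect_terms = []                                    # AT
    for chunk in doc.noun_chunks:                        # candidate noun phrases (Tagged_NC)
        head = chunk.root
        if head.tag_ not in ("NN", "NNS"):               # keep common nouns only
            continue
        is_entity = head.ent_type_ != ""                 # noun also recognized by the NER
        has_amod = any(c.dep_ == "amod" for c in head.children)
        has_acomp = head.dep_ == "nsubj" and any(
            c.dep_ == "acomp" for c in head.head.children)
        if is_entity or has_amod or has_acomp:
            aspect_terms.append(chunk.text)
    return aspect_terms

print(extract_aspect_terms("The battery life is great but the camera quality is poor"))
# e.g. ['The battery life', 'the camera quality']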
4.3.2. Aspect Category Detection

In most of the existing works, the stage after retrieving the nouns in the text as aspects uses a static assignment of domain-related terms as aspect categories, which limits the work to the defined dataset. In the proposed model, for domain-independent dynamic aspect category detection, the aspects from E-NER are trained with a dense-layer Convolutional Neural Network with 512 nodes to extract the domain-specific aspect categories. A softmax layer is used as the output layer, and a probability distribution function is applied for filtering the higher-weighted relevant aspects as aspect categories, as formulated in Eq. (8). For faster learning, the ReLU activation function is used in the dense layer. As the CNN layer cannot directly process the aspect terms fed from E-NER, the resultant aspect terms are encoded as vectors. This process of encoding is called word embedding and is represented in Eq. (7); the score used in Eq. (8) is computed as in Eq. (9). To carry out this process, the bag-of-words mechanism is used as a feature extraction technique, as discussed in Section 3.

$m_i = \sum_{j=1, j \neq i}^{n} a_{i,j} \, w_j$   (7)

$a_{i,j} = \dfrac{\exp(\mathrm{score}(w_i, w_j))}{\sum_{k=1}^{n} \exp(\mathrm{score}(w_i, w_k))}$   (8)

$\mathrm{score}(w_i, w_j) = v_a^{T} \tanh(W_a [w_i ; w_j])$   (9)
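The short Keras-style sketch below is one possible realization of the network described above (embedded aspect terms, a convolution layer, a 512-node ReLU dense layer and a softmax output over aspect categories); the vocabulary size, embedding dimension, number of categories and the toy training data are placeholders, not the configuration used in the paper.

# Sketch of the aspect-category network of Section 4.3.2 (assumed Keras API).
import numpy as np
from tensorflow.keras import layers, models

vocab_size = 5000          # placeholder vocabulary size from the feature extraction step
max_len = 10               # placeholder maximum number of tokens per aspect phrase
num_categories = 6         # placeholder number of aspect categories

model = models.Sequential([
    layers.Embedding(input_dim=vocab_size, output_dim=100),      # word embeddings (Eq. 7)
    layers.Conv1D(filters=128, kernel_size=3, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(512, activation="relu"),                        # dense layer with 512 ReLU nodes
    layers.Dense(num_categories, activation="softmax"),          # probability over categories (Eq. 8)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy integer-encoded aspect sequences and category labels, for illustration only.
X = np.random.randint(1, vocab_size, size=(32, max_len))
y = np.random.randint(0, num_categories, size=(32,))
model.fit(X, y, epochs=2, batch_size=8, verbose=0)
print(model.predict(X[:1]).round(3))       # softmax distribution over the categories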

4.3.3. Aspect Related Sentiment Words Filtering and Aspect Sentiment Classification

Here, the mechanism employed for aspect term extraction from a review text in the aspect term extraction phase is applied for identifying and extracting the sentiment-oriented aspects from the review text. Later, for sentiment term polarity prediction, the NLTK library is used by importing the SentimentIntensityAnalyzer() function.
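A short sketch of this polarity prediction step with NLTK's VADER analyzer is given below; the example sentences and the ±0.05 thresholds on the compound score are illustrative choices, not values taken from the paper.

# Sketch of aspect polarity prediction with the NLTK VADER analyzer (Section 4.3.3).
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)         # lexicon required by VADER
sia = SentimentIntensityAnalyzer()

aspect_sentences = {
    "battery": "The battery life is great and lasts all day",
    "camera": "The camera quality is disappointing in low light",
}

for aspect, sentence in aspect_sentences.items():
    scores = sia.polarity_scores(sentence)          # neg / neu / pos / compound scores
    if scores["compound"] >= 0.05:
        label = "positive"
    elif scores["compound"] <= -0.05:
        label = "negative"
    else:
        label = "neutral"
    print(aspect, label, scores["compound"])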

Now the trained model is ready for predicting and classifying the class label of the test data. By treating the proposed model as a trained appli