Presentation transcript: Unsupervised Part-of-Speech Tagging with Bilingual Graph-Based Projections (Carnegie Mellon University)
1. Unsupervised Part-of-Speech Tagging with Bilingual Graph-Based Projections
Dipanjan Das (Carnegie Mellon University) and Slav Petrov (Google Research). ACL 2011, June 21.
Speaker notes: Good morning. I am Dipanjan Das, a PhD student at Carnegie Mellon University. Welcome to this talk, in which I will describe learning part-of-speech taggers in a bilingual setting. More precisely, we learn taggers from unannotated data in a target language, assuming a resource-rich source language with which the target language has parallel data. This is joint work with Slav Petrov, completed during an internship at Google Research.
2. Part-of-Speech Tagging
Portland/NOUN has/VERB a/DET thriving/ADJ music/NOUN scene/NOUN ./.
Speaker notes: We are all familiar with the part-of-speech tagging task. Given a sentence such as "Portland has a thriving music scene", the goal is to tag each of its words with a disambiguated part-of-speech category. For example, "thriving" is an adjective here because it modifies the noun phrase "music scene"; it is not a verb, although verb is another syntactic category the word "thriving" can take.
3. Supervised POS Tagging
Supervised setting: average accuracy is 96.2% with TnT (Brants, 2000).
Speaker notes: To confirm that these universal tags work well across languages, we trained supervised models for these twenty-odd languages, and the average accuracy turns out to be around 96%. This says that the universal tags behave in the desired way for all of these languages. However, we want to build POS taggers for many more than 20 languages.
4. Resource-Poor Languages
Several major languages have little or no annotated data, e.g. (native speakers): Punjabi (109 million), Vietnamese (69 million), Polish (40 million), Indonesian-Malay (37 million), Oriya (32 million), Azerbaijani (20 million), Haitian (7.7 million). However, there is lots of parallel and unannotated data! Basic NLP tools like POS taggers are essential for the development of language technologies.
Speaker notes: The problem, however, is that several major languages have no annotated data. For example, these seven languages, with several million speakers each, have little or no tagged data with which supervised NLP tools could be built. However, these languages have tons of data translated into a resource-rich language like English, for which we have supervised taggers. Our goal is to build POS taggers for such languages, because POS taggers are basic tools necessary for the development of language technologies.
5. (Nearly) Universal Part-of-Speech Tags
VERB, DET, NOUN, CONJ, PRON, NUM, ADJ, PRT, ADV, ., ADP, X
Speaker notes: Talk about open class / closed class.
6. (Nearly) Universal Part-of-Speech Tags
Example Penn Treebank tag maps: NN → NOUN, NNP → NOUN, NNPS → NOUN, NNS → NOUN; PRP → PRON, PRP$ → PRON, WP → PRON, WP$ → PRON.
Example Spanish treebank tag maps: np → NOUN, nc → NOUN; p0 → PRON, pd → PRON, pe → PRON, pi → PRON, pn → PRON, pp → PRON, pr → PRON, pt → PRON, px → PRON.
See Petrov, Das and McDonald (2011).
Speaker notes: We now give a brief idea of how these tags were constructed. For example, the four fine-grained noun tags in the original Penn Treebank collapse to the coarse NOUN tag for English; similarly, the fine-grained pronoun categories map to the coarse PRON category. The Spanish treebank contains two fine-grained noun tags that collapse to the universal NOUN tag, and nine pronoun tags that map to PRON. Thus, for different languages, different numbers of fine tags map to each coarse universal POS tag. The detailed procedure by which these tags were designed, and a set of experiments with them, are described in a short report we have uploaded to arXiv.
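A universal tag map amounts to a simple lookup table. Here is a minimal Python sketch built from only the mappings shown on this slide; unmapped fine tags fall back to the catch-all X tag:

```python
# Penn Treebank and Spanish mappings, as listed on the slide.
PTB_TO_UNIVERSAL = {
    "NN": "NOUN", "NNP": "NOUN", "NNPS": "NOUN", "NNS": "NOUN",
    "PRP": "PRON", "PRP$": "PRON", "WP": "PRON", "WP$": "PRON",
}

SPANISH_TO_UNIVERSAL = {
    "np": "NOUN", "nc": "NOUN",
    "p0": "PRON", "pd": "PRON", "pe": "PRON", "pi": "PRON",
    "pn": "PRON", "pp": "PRON", "pr": "PRON", "pt": "PRON", "px": "PRON",
}

def to_universal(fine_tags, tag_map):
    """Collapse a sequence of fine-grained treebank tags to coarse tags.
    Tags missing from the map fall back to 'X' (the catch-all tag)."""
    return [tag_map.get(t, "X") for t in fine_tags]
```

For example, `to_universal(["NNP", "PRP$"], PTB_TO_UNIVERSAL)` yields `["NOUN", "PRON"]`.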
7. (Nearly) Universal Part-of-Speech Tags
English: Portland/NOUN has/VERB a/DET thriving/ADJ music/NOUN scene/NOUN ./.
German: Portland/NOUN hat/VERB eine/DET prächtig gedeihende/ADJ Musikszene/NOUN ./.
Bengali: পোর্টল্যান্ড শহর এর সঙ্গীত পরিবেশ বেশ উন্নত | (tags shown include NOUN, ADP, ADJ, .)
Speaker notes: Other people have also used universal tags.
8. State of the Art in Unsupervised POS Tagging
Speaker notes: Before we go into the details of our proposed method, I will talk briefly about some state-of-the-art models in unsupervised part-of-speech tagging, i.e., learning taggers in a scenario where there is no annotated data.
9. Unsupervised Part-of-Speech Tagging
Hidden Markov model (HMM), estimated with the Expectation-Maximization algorithm. x: observation sequence; y: state sequence. Example: Portland hat eine prächtig gedeihende Musikszene . Merialdo (1994).
Speaker notes: The basic model used for POS tagging in various learning scenarios is the hidden Markov model. In this framework, we denote the observation sequence (the sentence) as bold x, and the state sequence that emits each token as bold y. The HMM inference problem for POS tagging is to find the best state sequence for a given sentence.
10. Unsupervised Part-of-Speech Tagging
Each state is one of the 12 coarse tags. Merialdo (1994).
Speaker notes: The question mark at each state position denotes one of our coarse universal tags.
11. Unsupervised Part-of-Speech Tagging
Transition multinomials. Merialdo (1994).
Speaker notes: A bigram HMM consists of a set of transition multinomial distributions, which encode the probability of transitioning from one state to the next.
12. Unsupervised Part-of-Speech Tagging
Emission multinomials. Merialdo (1994).
Speaker notes: It also consists of a set of emission multinomial distributions, which give the probability of emitting an observation type x_i given the state y_i at position i. The simplest and one of the oldest unsupervised learning methods for training an HMM is the Expectation-Maximization algorithm. We trained unsupervised POS taggers for eight European languages with EM.
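Putting the two sets of multinomials together, the joint probability of a tagged sentence factorizes over bigram transitions and per-token emissions. A minimal sketch with toy, made-up parameter values (not the talk's learned model):

```python
# p(y_i | y_{i-1}); "START" marks the sentence beginning.
transition = {
    ("START", "NOUN"): 0.6, ("START", "VERB"): 0.4,
    ("NOUN", "VERB"): 0.7, ("NOUN", "NOUN"): 0.3,
    ("VERB", "NOUN"): 0.8, ("VERB", "VERB"): 0.2,
}
# p(x_i | y_i); toy values for two words of the German example.
emission = {
    ("NOUN", "Portland"): 0.5, ("NOUN", "Musikszene"): 0.5,
    ("VERB", "hat"): 1.0,
}

def joint_prob(words, tags):
    """p(x, y) = prod_i p(y_i | y_{i-1}) * p(x_i | y_i)."""
    prob, prev = 1.0, "START"
    for w, t in zip(words, tags):
        prob *= transition.get((prev, t), 0.0) * emission.get((t, w), 0.0)
        prev = t
    return prob
```

With these toy numbers, `joint_prob(["Portland", "hat"], ["NOUN", "VERB"])` is 0.6 × 0.5 × 0.7 × 1.0 = 0.21.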
13. Unsupervised Part-of-Speech Tagging
EM-HMM tagging accuracy (%): Danish 68.7, Dutch 57.0, German 75.9, Greek 65.8, Italian 63.7, Portuguese 62.9, Spanish 71.5, Swedish 68.4; average 66.7. Poor average result. Johnson (2007).
Speaker notes: However, results are quite poor across all these languages, a finding that has been noted previously. We would definitely want to improve over these models.
14. Unsupervised Part-of-Speech Tagging
HMM with locally normalized log-linear models. Berg-Kirkpatrick et al. (2010).
Speaker notes: Very recently, Berg-Kirkpatrick and colleagues proposed an HMM in which the emission multinomials are replaced by locally normalized log-linear models.
15. Unsupervised Part-of-Speech Tagging
Features: suffix, hyphen, capital letters, numbers, ... Berg-Kirkpatrick et al. (2010).
Speaker notes: These log-linear models look at features corresponding to various aspects of the observation x_i, along with the state y_i: whether the observation contains a hyphen, the nature of its capitalization, whether the token contains numbers, and so forth.
16. Unsupervised Part-of-Speech Tagging
Estimated using gradient-based methods. Berg-Kirkpatrick et al. (2010).
Speaker notes: When the parameters are estimated with a gradient-based method (L-BFGS in particular), this model yields much better scores than vanilla EM for English.
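The feature side of such a model can be sketched as an extractor that fires indicator features for each (word, tag) pair; the locally normalized emission probability is then proportional to exp(w·f) over the vocabulary. The feature names below are illustrative, in the spirit of the features listed on the slide, not the paper's exact feature set:

```python
def emission_features(word, tag):
    """Indicator features for a (word, tag) pair: word identity,
    suffixes up to length 3, hyphenation, capitalization, digits."""
    feats = [f"word={word.lower()}|{tag}"]
    for n in (1, 2, 3):
        if len(word) > n:
            feats.append(f"suffix{n}={word[-n:].lower()}|{tag}")
    if "-" in word:
        feats.append(f"has-hyphen|{tag}")
    if word[:1].isupper():
        feats.append(f"initial-capital|{tag}")
    if any(ch.isdigit() for ch in word):
        feats.append(f"has-digit|{tag}")
    return feats
```

For instance, `emission_features("Portland", "NOUN")` fires the capitalization feature and the suffix features, but not the hyphen or digit features.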
17. Unsupervised Part-of-Speech Tagging
Tagging accuracy (%):
                 Danish  Dutch  German  Greek  Italian  Portuguese  Spanish  Swedish  Avg
EM-HMM             68.7   57.0    75.9   65.8     63.7        62.9     71.5     68.4  66.7
Feature-HMM        69.1   65.1    81.3   71.8     68.1        78.4     80.2     70.1  73.0
Improvements across all languages.
Speaker notes: We trained this feature-rich HMM for our eight languages to confirm the improvement. The table shows consistent improvements over the EM-based HMM, amounting to more than 6% absolute on average. We use this model in our work, as we will see in the following slides.
18. Unsupervised POS Tagging with Dictionaries
HMM with locally normalized log-linear models; state space constrained by the possible gold tags.
Speaker notes: A POS tag dictionary lists the possible POS tags that a particular token can take. For example, the third word "eine" in the sentence can be a pronoun, determiner, adjective, or numeral. Incorporating a tag dictionary into an HMM is straightforward: the HMM's state sequence is hard-constrained so that only states associated with the current token can emit it. We ran an experiment in which we constructed a tag dictionary for each language from its treebank and trained a feature-rich HMM for each of our eight languages.
19. Unsupervised POS Tagging with Dictionaries
Tagging accuracy (%):
                   Danish  Dutch  German  Greek  Italian  Portuguese  Spanish  Swedish  Avg
EM-HMM               68.7   57.0    75.9   65.8     63.7        62.9     71.5     68.4  66.7
Feature-HMM          69.1   65.1    81.3   71.8     68.1        78.4     80.2     70.1  73.0
w/ gold dictionary   93.1   94.7    93.5   96.6     96.4        94.0     95.8     85.5  93.7
Speaker notes: Feature-HMMs using the treebank-based dictionaries perform really well: more than a 20% improvement over purely unsupervised feature-HMMs. The accuracies are even quite close to their supervised counterparts.
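The hard constraint a tag dictionary imposes can be sketched as a pruning function over the HMM's state space; the dictionary entries below are toy examples for illustration, not a real treebank-derived dictionary:

```python
# The 12 universal tags used in the talk.
ALL_TAGS = ["VERB", "DET", "NOUN", "CONJ", "PRON", "NUM",
            "ADJ", "PRT", "ADV", ".", "ADP", "X"]

# Toy tag dictionary: possible tags per word type.
tag_dict = {"eine": {"PRON", "DET", "ADJ", "NUM"}, "hat": {"VERB", "NOUN"}}

def allowed_tags(word):
    """States the constrained HMM may use for this token: the
    dictionary entry if present, otherwise all 12 universal tags."""
    return sorted(tag_dict.get(word, set(ALL_TAGS)))
```

During training and decoding, only the states returned by `allowed_tags` are allowed to emit the token; words absent from the dictionary remain unconstrained.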
20. For most languages, access to high-quality tag dictionaries is not realistic.
Morphologically rich languages have only base forms in dictionaries.
Ideas: use supervision in resource-rich languages; use parallel data; construct projected tag lexicons.
Speaker notes: However, for most languages without annotated data, access to high-quality tag dictionaries is not realistic. Although dictionaries may exist in print, for morphologically rich languages they contain only base forms, which causes problems: we tried to use such dictionaries to build POS taggers, and their deficiencies prevented us from building accurate ones. Instead, we resort to the following ideas for developing good unsupervised POS taggers. We use supervision in another, resource-rich language with annotated data, which we call the source. We use translations between the language of interest, which we call the target, and the source language. Finally, we project tag lexicons from the source language to the target language.
21. Bilingual Projection
Portland/NOUN has/VERB a/DET thriving/ADJ music/NOUN scene/NOUN ./. — automatic labels from a supervised tagger, 97% accuracy.
Speaker notes: Given a sentence in English, we can tag it automatically with existing POS taggers. For English, automatic POS taggers reach around 97% accuracy on the coarse tags, which is quite good for projection purposes.
22. Bilingual Projection
Portland has a thriving music scene . ↔ Portland hat eine prächtig gedeihende Musikszene .
Automatic unsupervised alignments from translation data (available for more than 50 languages).
Speaker notes: For many languages, translations from a resource-rich language like English exist. Given a large amount of such translations, automatic unsupervised alignments are easy to obtain, for more than 50 languages.
23. Bilingual Projection — Baseline 1: direct projection (Yarowsky and Ngai, 2001)
Portland/NOUN has/VERB a/DET thriving/ADJ music/NOUN scene/NOUN ./. → Portland hat eine prächtig gedeihende Musikszene . Unaligned words receive NOUN (the most frequent tag).
Speaker notes: Our first idea uses these alignments directly for projection. We use parallel data between a source language, which is always English in our experiments, and a target language; we run a word aligner and tag the English side of the bitext with a supervised POS tagger. Next, for the aligned words of each target sentence, we project the tags from the source side. We use only high-confidence alignments and choose at most one alignment per target word. For unaligned words, we use the NOUN tag.
24. Bilingual Projection — Baseline 1: direct projection (Yarowsky and Ngai, 2001)
Projected tagged sentences → supervised training → tagger (Brants, 2000).
Speaker notes: Once projection is done for all sentences, we use the projected data for supervised training, yielding a POS tagger.
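The projection step itself is simple. A minimal sketch, using a hypothetical alignment for the English-German example (the alignment and function are illustrative, not the talk's implementation):

```python
def project_tags(source_tags, alignment, target_len):
    """Copy the supervised tag of each aligned source word onto its
    target word; fall back to NOUN (the most frequent tag) for
    unaligned target words. `alignment` maps target position ->
    source position (at most one high-confidence link per word)."""
    return [source_tags[alignment[j]] if j in alignment else "NOUN"
            for j in range(target_len)]

# "Portland has a thriving music scene ." auto-tagged on the source side.
src_tags = ["NOUN", "VERB", "DET", "ADJ", "NOUN", "NOUN", "."]
# Hypothetical alignment for "Portland hat eine prächtig gedeihende
# Musikszene ."; both German adjective tokens align to "thriving",
# and position 5 (Musikszene) is left unaligned for illustration.
align = {0: 0, 1: 1, 2: 2, 3: 3, 4: 3, 6: 6}
```

Here `project_tags(src_tags, align, 7)` gives `["NOUN", "VERB", "DET", "ADJ", "ADJ", "NOUN", "."]`, with the unaligned word defaulting to NOUN.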
25. Bilingual Projection — Baseline 1: direct projection (Yarowsky and Ngai, 2001)
Tagging accuracy (%):
                   Danish  Dutch  German  Greek  Italian  Portuguese  Spanish  Swedish  Avg
EM-HMM               68.7   57.0    75.9   65.8     63.7        62.9     71.5     68.4  66.7
Feature-HMM          69.1   65.1    81.3   71.8     68.1        78.4     80.2     70.1  73.0
Direct projection    73.6   77.0    83.2   79.3     79.7        82.6     80.1     74.7  78.8
Speaker notes: The results for this model are shown in the third row of the table.

26. Bilingual Projection — Baseline 1: direct projection (Yarowsky and Ngai, 2001)
Direct projection yields consistent improvements over the unsupervised models.
27. Bilingual Projection — Baseline 2: lexicon projection
Speaker notes: The second idea we explore in this work is, instead of projecting across tokens in the parallel corpus, to project across types. We call this lexicon projection.
28. Bilingual Projection — Baseline 2: lexicon projection
Portland/NOUN has/VERB a/DET thriving/ADJ music/NOUN scene/NOUN ./. aligned with: Portland hat eine prächtig gedeihende Musikszene .
Speaker notes: Given such an alignment, we consider each source-target aligned token pair separately.
29. Bilingual Projection — Baseline 2: lexicon projection
Speaker notes: Target words that are unaligned are ignored.
30. Bilingual Projection — Baseline 2: lexicon projection
Speaker notes: Thus, we get a bag of alignments for each sentence pair.
31. Bilingual Projection — Baseline 2: lexicon projection
Speaker notes: In the next step, for each target word aligned to an auto-tagged source word, we project the tag onto the target word. In other words, we get a distribution on each target word, initially skewed toward a single tag.
32. Bilingual Projection — Baseline 2: lexicon projection
Speaker notes: As we scan the bitext, the target words become aligned to more tagged source words, and we update their projected distributions. For example, here "eine" is aligned to words tagged DET, NUM, and PRON, so its tag distribution has equal weight on each of these tags.
33. Bilingual Projection — Baseline 2: lexicon projection
Speaker notes: Similarly, the word "gedeihende", aligned to two occurrences of the word "thriving", has a distribution with equal weight on the adjective and verb tags.
34. Bilingual Projection — Baseline 2: lexicon projection
After scanning all the parallel data, each aligned word has a probability distribution over tags (= probability of a tag given a word).
Speaker notes: After scanning all the data, we get a POS tag distribution for each aligned word in the bitext. We convert this set of tag distributions into a dictionary by using a threshold τ: for a given word, all tags above the threshold are entered into the dictionary.
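The accumulation and thresholding steps can be sketched as follows; the alignment pairs below are toy data echoing the slide's examples, and the function is an illustration of the idea rather than the talk's implementation:

```python
from collections import Counter, defaultdict

# Toy (target word, projected tag) pairs gathered from the bitext.
pairs = [("eine", "DET"), ("eine", "NUM"), ("eine", "PRON"),
         ("gedeihende", "ADJ"), ("gedeihende", "VERB"),
         ("Musikszene", "NOUN")]

def projected_lexicon(aligned_pairs, tau=0.2):
    """Accumulate per-type tag counts, normalize to a distribution,
    and keep every tag whose probability exceeds the threshold tau."""
    counts = defaultdict(Counter)
    for word, tag in aligned_pairs:
        counts[word][tag] += 1
    lexicon = {}
    for word, c in counts.items():
        total = sum(c.values())
        lexicon[word] = {t for t, n in c.items() if n / total > tau}
    return lexicon
```

With τ = 0.2, "eine" keeps all three of its equally weighted tags, while a rarely projected tag (say, one seen once among many counts) would be filtered out.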
35. Bilingual Projection — Baseline 2: lexicon projection
Feature-HMM constrained with the projected dictionary. Tagging accuracy (%):
                      Danish  Dutch  German  Greek  Italian  Portuguese  Spanish  Swedish  Avg
EM-HMM                  68.7   57.0    75.9   65.8     63.7        62.9     71.5     68.4  66.7
Feature-HMM             69.1   65.1    81.3   71.8     68.1        78.4     80.2     70.1  73.0
Direct projection       73.6   77.0    83.2   79.3     79.7        82.6     80.1     74.7  78.8
Projected dictionary    79.0   82.4    76.3   84.8     87.0        82.8     79.4       …     …
Improvements over simple projection for the majority of the languages.
Speaker notes: When this dictionary is used to constrain the state space of a feature-based HMM and the model is trained, we get pretty good results: for 6 out of 8 languages, we see improvements over the three previous models. The improvement over the purely unsupervised feature-HMM is more than 8% absolute on average, which is quite impressive.
36. Projected lexicon expansion and refinement using label propagation
No information about unaligned words. Can coverage be improved?
Speaker notes: One drawback of the approach I just described is that it has no information about unaligned words, which we ignored during the projection procedure. The question we ask at this point is whether coverage can be improved. To address this problem, we explore whether the projected lexicon can be expanded using a lot of unlabeled data.
37. Our Model: Graph-Based Projections — how can label propagation help?
For a target language: build a graph with about 2 million trigram types as vertices; compute a similarity matrix using co-occurrence statistics; the label distribution at each vertex ≈ the tag distribution over the trigram's middle word. Subramanya, Petrov and Pereira (2010).
Speaker notes: Now, how can label propagation help us? We first construct a graph whose vertices are about 2 million trigram types of the target language. Distributional similarity between trigram types, computed from a lot of unlabeled data, gives the edge weight between a pair of trigrams. The graph is made sparse by keeping only the 5 nearest neighbors of each trigram type. For each vertex, the distribution over labels, q, is taken to be the tag distribution over the middle word of the trigram; the trigram representation is chosen to reduce the ambiguity of the middle word and to construct a better graph. The graph construction over the target language follows recent work by Subramanya and colleagues, whose goal was POS domain adaptation for English.
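The sparsification step is a k-nearest-neighbor graph over trigram types. The sketch below uses a stand-in cosine similarity over toy count vectors; the talk computes similarity from co-occurrence statistics, so both the similarity function and the vectors here are assumptions for illustration:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse count vectors (dicts)."""
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def knn_graph(vectors, k=5):
    """vectors: trigram type -> sparse feature vector. Returns, for
    each vertex, its k most similar other vertices with weights."""
    graph = {}
    for a, va in vectors.items():
        sims = [(cosine(va, vb), b) for b, vb in vectors.items() if b != a]
        sims.sort(reverse=True)
        graph[a] = [(b, w) for w, b in sims[:k]]
    return graph
```

A real implementation over 2 million vertices would need approximate nearest-neighbor search rather than this O(n²) loop, but the resulting sparse structure is the same: each vertex keeps only its 5 strongest edges.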
40. Our Model: Graph-Based Projections — how can label propagation help?
Plug in auto-tagged words from a source language; links between source- and target-language units are word alignments.
Speaker notes: To spread label distributions through this target graph over trigrams, we need to seed it. Since we assume that we do not have ANY labeled data for this language, we bring in parallel data: we plug in automatically tagged words from a resource-rich source language and connect them to the target-language trigrams, where the edges between source words and target trigrams are simply word alignments.
42. Our Model: Graph-Based Projections — how can label propagation help?
Run the first stage of label propagation: source language → target language.
Speaker notes: Next, we are ready to run label propagation. In the first stage, we propagate labels from the source words to the target trigrams.
45. Our Model: Graph-Based Projections — how can label propagation help?
Run the second stage of label propagation within the target-language vertices, using the graph objective function with squared penalties.
Speaker notes: After the first stage of label propagation from source-language words to target-language trigrams, we have seed distributions over the target-language graph, so we run a second stage of label propagation. This second stage is simply the optimization of the graph objective function with squared penalties that we showed previously.
50. Our Model: Graph-Based Projections — end result?
Example words: Portland, gedeihende, hat, eine, Musikszene, fein, lebhafter, realisieren.
Speaker notes: The end result of this bilingual projection procedure, once we collapse and renormalize the tag distributions over all the target trigrams, is a large set of distributions over the unigrams of the target language. Some examples are shown here.

51. Our Model: Graph-Based Projections
A larger set of tag distributions → a better and larger dictionary.
52. Our Model: Graph-Based Projections — lexicon expansion
(Chart: dictionary sizes per language, in thousands of words.)
Speaker notes: This slide compares, for each language, the sizes of the dictionaries constructed with the projected-dictionary model and with our full model that uses graph-based learning. Note that there is a significant expansion in lexicon size. Although the lexicons are imperfect, this expansion helps explain the consistent improvements that we get across languages.
53. Brief Overview: Graph-Based Learning with Labeled and Unlabeled Data
Speaker notes: We adopt a graph-based semi-supervised learning technique with which we perform lexicon expansion. We choose this particular way of learning from indirect supervision for the following reasons: the framework helps us learn from similar datapoints that carry some bilingually projected knowledge, expanding our beliefs from a small set of items to a larger set, and it helps us create a probabilistic resource that is stationary, created offline, and usable for both training and inference. I will now describe the particular graph-based technique we use in this work.
54. Label Propagation (Zhu, Ghahramani and Lafferty, 2003)
w = symmetric weight matrix; example edge weights: 0.9, 0.8, 0.1, 0.05, 0.01. Labeled datapoints carry supervised label distributions; the distributions on the unlabeled datapoints are to be found.
Speaker notes: Here we have five sample datapoints. The two shaded datapoints are labeled, while the three white ones are unlabeled. The geometry of the data is also given, with edges between pairs of vertices. Each edge has a symmetric weight w_ij, computed with some similarity function; we assume this weight matrix for the entire graph to be given. Note that the graph is not fully connected: the absent edges have weight 0. On the labeled vertices, the supervised label distributions are known; these are q1 and q5 for the given graph. Our goal is to learn q2, q3, and q4, the distributions on the unlabeled vertices.
55. Label Propagation (Zhu, Ghahramani and Lafferty, 2003)
Minimize  C(Q_U) = Σ_{i ∈ V_U} [ Σ_j w_ij ||q_i − q_j||² + ν ||q_i − U||² ]
Speaker notes: This transductive setup, in which we want to learn the label distributions on the unlabeled vertices, is called label propagation, and was proposed by Zhu et al. In the current work, to perform label propagation, we minimize the objective function shown.
56. Label Propagation
Q_U: the set of distributions over the unlabeled vertices, which we need to find.
57. Label Propagation
V_U: the set of unlabeled vertices.
58. Label Propagation
The first term in the objective function brings the distributions of similar vertices closer to each other.
59. Label Propagation
The second term can be thought of as a regularizer: it pulls the distributions on vertices in uncertain graph neighborhoods toward the uniform distribution U over the label set.
60. Label Propagation
Iterative updates for optimization.
Speaker notes: To optimize this function, we use iterative updates, whose details I won't go into in this talk; they can be found in the paper.
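One standard way to minimize this kind of graph objective is to iterate a closed-form update at each unlabeled vertex: replace its distribution with the weight-averaged distributions of its neighbors, smoothed toward the uniform distribution. The sketch below illustrates that generic scheme, not necessarily the paper's exact update rule:

```python
def propagate(weights, labeled, tags, n_iters=50, nu=1e-2):
    """weights: {(i, j): w_ij}, symmetric edge weights.
    labeled: {vertex: fixed label distribution}.
    Unlabeled vertices start uniform and are repeatedly set to a
    weighted average of their neighbors, smoothed toward uniform
    with strength nu. Labeled vertices never change."""
    vertices = {v for edge in weights for v in edge}
    uniform = {t: 1.0 / len(tags) for t in tags}
    q = {v: labeled.get(v, dict(uniform)) for v in vertices}
    neighbors = {v: [] for v in vertices}
    for (i, j), w in weights.items():
        neighbors[i].append((j, w))
        neighbors[j].append((i, w))
    for _ in range(n_iters):
        for v in vertices:
            if v in labeled:
                continue
            total = sum(w for _, w in neighbors[v]) + nu
            q[v] = {t: (sum(w * q[u][t] for u, w in neighbors[v])
                        + nu * uniform[t]) / total
                    for t in tags}
    return q
```

On a chain 1-2-3 where only vertex 1 is labeled, the label mass flows outward: vertex 2 ends up close to vertex 1's distribution, and vertex 3 slightly less so, with the ν term keeping distant vertices near uniform.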
62. Our Model: Graph-Based Projections
Feature-HMM constrained with the graph-based dictionary. Tagging accuracy (%):
                         Danish  Dutch  German  Greek  Italian  Portuguese  Spanish  Swedish  Avg
EM-HMM                     68.7   57.0    75.9   65.8     63.7        62.9     71.5     68.4  66.7
Feature-HMM                69.1   65.1    81.3   71.8     68.1        78.4     80.2     70.1  73.0
Direct projection          73.6   77.0    83.2   79.3     79.7        82.6     80.1     74.7  78.8
Projected dictionary       79.0   82.4    76.3   84.8     87.0        82.8     79.4       …     …
Graph-based projections    83.2   79.5    82.8   82.5     86.8        87.9     84.2     80.5  83.4

63. Our Model: Graph-Based Projections
Adding the oracle rows:
                         Danish  Dutch  German  Greek  Italian  Portuguese  Spanish  Swedish  Avg
w/ gold dictionary         93.1   94.7    93.5   96.6     96.4        94.0     95.8     85.5  93.7
supervised                 96.9   94.9    98.2   97.8     95.8        97.2     96.8     94.8  96.6
Speaker notes: We compare these results with two oracles. The first is a feature-HMM that incorporates a gold dictionary constructed from the treebanks; we fall short of it by quite a bit, but this is not surprising because our lexicon is noisy. Finally, the supervised model gives us an upper bound, and we are more than 13 points below it.
64. Concluding Notes
- Reasonably accurate POS taggers without direct supervision
- Evaluated on major European languages
- A step toward a standard of universal POS tags
- Traditional evaluation of unsupervised POS taggers uses greedy metrics that rely on labeled data; our presented models avoid these evaluation methods
65. Future Directions
- Scaling up the number of nodes in the graph from 2M to billions may help create larger lexicons
- Including penalties in the graph objective that induce sparse tag distributions at each graph vertex
- Inclusion of multiple languages in the graph may further improve results: label propagation in one huge multilingual graph
67. Questions?
Portland/NOUN has/VERB a/DET thriving/ADJ music/NOUN scene/NOUN ./.
Portland hat eine prächtig gedeihende Musikszene .
পোর্টল্যান্ড শহর এর সঙ্গীত পরিবেশ বেশ উন্নত |
Portland tiene una escena musical vibrante .
波特兰 有 一个 生机勃勃的 音乐 场景
Portland a une scène musicale florissante .
Speaker notes: No need to say anything.