In statistics, an expectation–maximization (EM) algorithm is an iterative method to find (local) maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variables.[1] The EM iteration alternates between performing an expectation (E) step, which creates a function for the expectation of the log-likelihood evaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the E step. These parameter estimates are then used to determine the distribution of the latent variables in the next E step. It can be used, for example, to estimate a mixture of Gaussians, or to solve the multiple linear regression problem.[2]
History
The EM algorithm was explained and given its name in a classic 1977 paper by Arthur Dempster, Nan Laird, and Donald Rubin.[3] They pointed out that the method had been "proposed many times in special circumstances" by earlier authors. One of the earliest is the gene-counting method for estimating allele frequencies by Cedric Smith.[4] Another was proposed by H.O. Hartley in 1958, and Hartley and Hocking in 1977, from which many of the ideas in the Dempster–Laird–Rubin paper originated.[5] Another was described by S.K. Ng, Thriyambakam Krishnan and G.J. McLachlan in 1977.[6] Hartley's ideas can be broadened to any grouped discrete distribution. A very detailed treatment of the EM method for exponential families was published by Rolf Sundberg in his thesis and several papers,[7][8][9] following his collaboration with Per Martin-Löf and Anders Martin-Löf.[10][11][12][13][14] The Dempster–Laird–Rubin paper in 1977 generalized the method and sketched a convergence analysis for a wider class of problems. The Dempster–Laird–Rubin paper established the EM method as an important tool of statistical analysis. See also Meng and van Dyk (1997).
The convergence analysis of the Dempster–Laird–Rubin algorithm was flawed and a correct convergence analysis was published by C. F. Jeff Wu in 1983.[15]
Wu's proof established the EM method's convergence also outside of the exponential family, as claimed by Dempster–Laird–Rubin.[15]
Introduction
The EM algorithm is used to find (local) maximum likelihood parameters of a statistical model in cases where the equations cannot be solved directly. Typically these models involve latent variables in addition to unknown parameters and known data observations. That is, either missing values exist among the data, or the model can be formulated more simply by assuming the existence of further unobserved data points. For example, a mixture model can be described more simply by assuming that each observed data point has a corresponding unobserved data point, or latent variable, specifying the mixture component to which each data point belongs.
Finding a maximum likelihood solution typically requires taking the derivatives of the likelihood function with respect to all the unknown values, the parameters and the latent variables, and simultaneously solving the resulting equations. In statistical models with latent variables, this is usually impossible. Instead, the result is typically a set of interlocking equations in which the solution to the parameters requires the values of the latent variables and vice versa, but substituting one set of equations into the other produces an unsolvable equation.
The EM algorithm proceeds from the observation that there is a way to solve these two sets of equations numerically. One can simply pick arbitrary values for one of the two sets of unknowns, use them to estimate the second set, then use these new values to find a better estimate of the first set, and then keep alternating between the two until the resulting values both converge to fixed points. It's not obvious that this will work, but it can be proven in this context. Additionally, it can be proven that the derivative of the likelihood is (arbitrarily close to) zero at that point, which in turn means that the point is either a local maximum or a saddle point.[15] In general, multiple maxima may occur, with no guarantee that the global maximum will be found. Some likelihoods also have singularities in them, i.e., nonsensical maxima. For example, one of the solutions that may be found by EM in a mixture model involves setting one of the components to have zero variance and the mean parameter for the same component to be equal to one of the data points. The convergence of expectation-maximization (EM)-based algorithms typically requires continuity of the likelihood function with respect to all the unknown parameters (referred to as optimization variables).[16]
Description
Given a statistical model with observed data $\mathbf{X}$, unobserved latent data or missing values $\mathbf{Z}$, a vector of unknown parameters $\boldsymbol\theta$, and a likelihood function $L(\boldsymbol\theta; \mathbf{X}, \mathbf{Z}) = p(\mathbf{X}, \mathbf{Z} \mid \boldsymbol\theta)$, the EM algorithm iteratively applies the following two steps.
Expectation step (E step): Define $Q(\boldsymbol\theta \mid \boldsymbol\theta^{(t)})$ as the expected value of the log-likelihood function of $\boldsymbol\theta$, with respect to the current conditional distribution of $\mathbf{Z}$ given $\mathbf{X}$ and the current parameter estimates $\boldsymbol\theta^{(t)}$:
$$Q(\boldsymbol\theta \mid \boldsymbol\theta^{(t)}) = \operatorname{E}_{\mathbf{Z} \sim p(\cdot \mid \mathbf{X}, \boldsymbol\theta^{(t)})}\left[ \log L(\boldsymbol\theta; \mathbf{X}, \mathbf{Z}) \right]$$
Maximization step (M step): Find the parameters that maximize this quantity:
$$\boldsymbol\theta^{(t+1)} = \underset{\boldsymbol\theta}{\operatorname{arg\,max}}\ Q(\boldsymbol\theta \mid \boldsymbol\theta^{(t)})$$
More succinctly, we can write it as one equation:
$$\boldsymbol\theta^{(t+1)} = \underset{\boldsymbol\theta}{\operatorname{arg\,max}}\ \operatorname{E}_{\mathbf{Z} \sim p(\cdot \mid \mathbf{X}, \boldsymbol\theta^{(t)})}\left[ \log L(\boldsymbol\theta; \mathbf{X}, \mathbf{Z}) \right]$$
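In code, the alternation of the two steps takes the following generic shape. This is a minimal sketch in Python, not a library routine: the callables `e_step`, `m_step` and `log_likelihood` are hypothetical placeholders that must be supplied for a concrete model (for instance, the Gaussian-mixture updates derived later in this article).

```python
import numpy as np

def em(x, theta0, e_step, m_step, log_likelihood, tol=1e-6, max_iter=1000):
    """Generic EM loop. The callables e_step, m_step and log_likelihood are
    model-specific and must be supplied by the user (placeholders here)."""
    theta, prev_ll = theta0, -np.inf
    for _ in range(max_iter):
        # E step: compute expectations (e.g. membership probabilities or
        # expected sufficient statistics) under the current parameters.
        expectations = e_step(x, theta)
        # M step: choose parameters maximizing the expected complete-data
        # log-likelihood implied by those expectations.
        theta = m_step(x, expectations)
        # EM increases the observed-data log-likelihood monotonically,
        # so stop once the improvement falls below a tolerance.
        ll = log_likelihood(x, theta)
        if ll - prev_ll < tol:
            break
        prev_ll = ll
    return theta
```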
Interpretation of the variables
The typical models to which EM is applied use $\mathbf{Z}$ as a latent variable indicating membership in one of a set of groups:
The observed data points $\mathbf{X}$ may be discrete (taking values in a finite or countably infinite set) or continuous (taking values in an uncountably infinite set). Associated with each data point may be a vector of observations.
The parameters $\boldsymbol\theta$ are continuous, and are of two kinds: parameters that are associated with all data points, and those associated with a specific value of a latent variable (i.e., associated with all data points whose corresponding latent variable has that value).
However, it is possible to apply EM to other sorts of models.
The motivation is as follows. If the value of the parameters $\boldsymbol\theta$ is known, usually the value of the latent variables $\mathbf{Z}$ can be found by maximizing the log-likelihood over all possible values of $\mathbf{Z}$, either simply by iterating over $\mathbf{Z}$ or through an algorithm such as the Viterbi algorithm for hidden Markov models. Conversely, if we know the value of the latent variables $\mathbf{Z}$, we can find an estimate of the parameters $\boldsymbol\theta$ fairly easily, typically by simply grouping the observed data points according to the value of the associated latent variable and averaging the values, or some function of the values, of the points in each group. This suggests an iterative algorithm, in the case where both $\boldsymbol\theta$ and $\mathbf{Z}$ are unknown:
First, initialize the parameters $\boldsymbol\theta$ to some random values.
Compute the probability of each possible value of $\mathbf{Z}$, given $\boldsymbol\theta$.
Then, use the just-computed values of $\mathbf{Z}$ to compute a better estimate for the parameters $\boldsymbol\theta$.
Iterate steps 2 and 3 until convergence.
The algorithm as just described monotonically approaches a local minimum of the cost function (here, the negative of the observed-data log-likelihood).
Properties
Although an EM iteration does increase the observed data (i.e., marginal) likelihood function, no guarantee exists that the sequence converges to a maximum likelihood estimator. For multimodal distributions, this means that an EM algorithm may converge to a local maximum of the observed data likelihood function, depending on starting values. A variety of heuristic or metaheuristic approaches exist to escape a local maximum, such as random-restart hill climbing (starting with several different random initial estimates $\boldsymbol\theta^{(t)}$), or applying simulated annealing methods.
EM is especially useful when the likelihood is an exponential family (see Sundberg (2019, Ch. 8) for a comprehensive treatment):[17] the E step becomes the sum of expectations of sufficient statistics, and the M step involves maximizing a linear function. In such a case, it is usually possible to derive closed-form expressions for the updates in each step, using the Sundberg formula[18] (proved and published by Rolf Sundberg, based on unpublished results of Per Martin-Löf and Anders Martin-Löf).[8][9][11][12][13][14]
Other methods exist to find maximum likelihood estimates, such as gradient descent, conjugate gradient, or variants of the Gauss–Newton algorithm. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function.
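For comparison, the following sketch maximizes the observed-data log-likelihood of a two-component univariate Gaussian mixture directly with a general-purpose quasi-Newton optimizer (assuming NumPy and SciPy, with made-up toy data and an arbitrary reparameterization chosen to keep the weight and scales valid); unlike EM, it relies on numerically approximated derivatives of the likelihood.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Made-up toy data: a 1-D sample from a two-component Gaussian mixture.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.5, 700)])

def neg_log_likelihood(params):
    # params = (logit of the mixing weight, mu1, mu2, log sigma1, log sigma2);
    # the reparameterization keeps the weight in (0, 1) and the scales positive.
    w = 1.0 / (1.0 + np.exp(-params[0]))
    mu1, mu2 = params[1], params[2]
    s1, s2 = np.exp(params[3]), np.exp(params[4])
    mix = w * norm.pdf(x, mu1, s1) + (1.0 - w) * norm.pdf(x, mu2, s2)
    return -np.sum(np.log(mix))

# BFGS approximates the required gradient by finite differences; EM, in contrast,
# needs no derivatives because its M step has a closed form for this model.
result = minimize(neg_log_likelihood, x0=np.array([0.0, -1.0, 1.0, 0.0, 0.0]),
                  method="BFGS")
print(result.x)
```

Whether such a direct approach or EM is preferable depends on the model; in practice the two are sometimes combined, for example by using a few EM iterations to obtain a starting point for a derivative-based optimizer.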
Proof of correctness
Expectation–maximization works to improve $Q(\boldsymbol\theta \mid \boldsymbol\theta^{(t)})$ rather than directly improving the observed-data log-likelihood $\log p(\mathbf{X} \mid \boldsymbol\theta)$. Here it is shown that improvements to the former imply improvements to the latter.[19][20]
For any $\mathbf{Z}$ with non-zero probability $p(\mathbf{Z} \mid \mathbf{X}, \boldsymbol\theta)$, we can write
$$\log p(\mathbf{X} \mid \boldsymbol\theta) = \log p(\mathbf{X}, \mathbf{Z} \mid \boldsymbol\theta) - \log p(\mathbf{Z} \mid \mathbf{X}, \boldsymbol\theta).$$
We take the expectation over possible values of the unknown data $\mathbf{Z}$ under the current parameter estimate $\boldsymbol\theta^{(t)}$ by multiplying both sides by $p(\mathbf{Z} \mid \mathbf{X}, \boldsymbol\theta^{(t)})$ and summing (or integrating) over $\mathbf{Z}$. The left-hand side is the expectation of a constant, so we get:
$$\log p(\mathbf{X} \mid \boldsymbol\theta) = \sum_{\mathbf{Z}} p(\mathbf{Z} \mid \mathbf{X}, \boldsymbol\theta^{(t)}) \log p(\mathbf{X}, \mathbf{Z} \mid \boldsymbol\theta) - \sum_{\mathbf{Z}} p(\mathbf{Z} \mid \mathbf{X}, \boldsymbol\theta^{(t)}) \log p(\mathbf{Z} \mid \mathbf{X}, \boldsymbol\theta) = Q(\boldsymbol\theta \mid \boldsymbol\theta^{(t)}) + H(\boldsymbol\theta \mid \boldsymbol\theta^{(t)}),$$
where $H(\boldsymbol\theta \mid \boldsymbol\theta^{(t)})$ is defined by the negated sum it is replacing.
This last equation holds for every value of $\boldsymbol\theta$ including $\boldsymbol\theta = \boldsymbol\theta^{(t)}$,
$$\log p(\mathbf{X} \mid \boldsymbol\theta^{(t)}) = Q(\boldsymbol\theta^{(t)} \mid \boldsymbol\theta^{(t)}) + H(\boldsymbol\theta^{(t)} \mid \boldsymbol\theta^{(t)}),$$
and subtracting this last equation from the previous equation gives
$$\log p(\mathbf{X} \mid \boldsymbol\theta) - \log p(\mathbf{X} \mid \boldsymbol\theta^{(t)}) = Q(\boldsymbol\theta \mid \boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)} \mid \boldsymbol\theta^{(t)}) + H(\boldsymbol\theta \mid \boldsymbol\theta^{(t)}) - H(\boldsymbol\theta^{(t)} \mid \boldsymbol\theta^{(t)}).$$
However, Gibbs' inequality tells us that $H(\boldsymbol\theta \mid \boldsymbol\theta^{(t)}) \ge H(\boldsymbol\theta^{(t)} \mid \boldsymbol\theta^{(t)})$, so we can conclude that
$$\log p(\mathbf{X} \mid \boldsymbol\theta) - \log p(\mathbf{X} \mid \boldsymbol\theta^{(t)}) \ge Q(\boldsymbol\theta \mid \boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)} \mid \boldsymbol\theta^{(t)}).$$
In words, choosing $\boldsymbol\theta$ to improve $Q(\boldsymbol\theta \mid \boldsymbol\theta^{(t)})$ causes $\log p(\mathbf{X} \mid \boldsymbol\theta)$ to improve at least as much.
As a maximization–maximization procedure
The EM algorithm can be viewed as two alternating maximization steps, that is, as an example of coordinate descent.[21][22] Consider the function:
$$F(q, \theta) := \operatorname{E}_q\left[ \log L(\theta; x, Z) \right] + H(q),$$
where $q$ is an arbitrary probability distribution over the unobserved data $z$ and $H(q)$ is the entropy of the distribution $q$. This function can be written as
$$F(q, \theta) = -D_{\mathrm{KL}}\big(q \,\big\|\, p_{Z \mid X}(\cdot \mid x; \theta)\big) + \log L(\theta; x),$$
where $p_{Z \mid X}(\cdot \mid x; \theta)$ is the conditional distribution of the unobserved data given the observed data $x$ and $D_{\mathrm{KL}}$ is the Kullback–Leibler divergence.
Then the steps in the EM algorithm may be viewed as:
Expectation step: Choose $q$ to maximize $F$:
$$q^{(t)} = \underset{q}{\operatorname{arg\,max}}\ F(q, \theta^{(t)})$$
Maximization step: Choose $\theta$ to maximize $F$:
$$\theta^{(t+1)} = \underset{\theta}{\operatorname{arg\,max}}\ F(q^{(t)}, \theta)$$
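A small numerical illustration of this view, using made-up numbers for a single observation and a binary latent variable: maximizing $F$ over $q$ recovers the posterior, at which $F$ equals the marginal log-likelihood, while any other $q$ gives a strictly smaller value.

```python
import numpy as np

# Toy numbers (made up): a single observation x and a binary latent z.
p_z = np.array([0.6, 0.4])          # prior p(z | theta)
p_x_given_z = np.array([0.2, 0.9])  # likelihood p(x | z, theta) at the observed x

def free_energy(q):
    """F(q, theta) = E_q[log p(x, Z | theta)] + H(q)."""
    joint = p_z * p_x_given_z                      # p(x, z | theta) for z = 1, 2
    return np.sum(q * np.log(joint)) - np.sum(q * np.log(q))

# "E step" as a maximization over q: the optimum is the posterior p(z | x, theta),
# at which F equals the marginal log-likelihood log p(x | theta).
posterior = p_z * p_x_given_z / np.sum(p_z * p_x_given_z)
print(free_energy(posterior))                  # F at the optimal q
print(np.log(np.sum(p_z * p_x_given_z)))       # log p(x | theta): the same value
print(free_energy(np.array([0.5, 0.5])))       # any other q gives a smaller F
```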
Applications
With its ability to handle missing data and unobserved variables, EM is becoming a useful tool to price and manage risk of a portfolio.[citation needed]
In structural engineering, the Structural Identification using Expectation Maximization (STRIDE)[26] algorithm is an output-only method for identifying natural vibration properties of a structural system using sensor data (see Operational Modal Analysis).
Filtering and smoothing EM algorithms
A Kalman filter is typically used for on-line state estimation and a minimum-variance smoother may be employed for off-line or batch state estimation. However, these minimum-variance solutions require estimates of the state-space model parameters. EM algorithms can be used for solving joint state and parameter estimation problems.
Filtering and smoothing EM algorithms arise by repeating this two-step procedure:
E-step
Operate a Kalman filter or a minimum-variance smoother designed with current parameter estimates to obtain updated state estimates.
M-step
Use the filtered or smoothed state estimates within maximum-likelihood calculations to obtain updated parameter estimates.
Suppose that a Kalman filter or minimum-variance smoother operates on measurements of a single-input single-output system that possess additive white noise. An updated measurement noise variance estimate can be obtained from the maximum likelihood calculation
$$\widehat{\sigma}^2_v = \frac{1}{N} \sum_{k=1}^N (z_k - \widehat{x}_k)^2,$$
where $\widehat{x}_k$ are scalar output estimates calculated by a filter or a smoother from $N$ scalar measurements $z_k$. The above update can also be applied to updating a Poisson measurement noise intensity. Similarly, for a first-order auto-regressive process, an updated process noise variance estimate can be calculated by
$$\widehat{\sigma}^2_w = \frac{1}{N} \sum_{k=1}^N (\widehat{x}_{k+1} - \widehat{F}\,\widehat{x}_k)^2,$$
where $\widehat{x}_k$ and $\widehat{x}_{k+1}$ are scalar state estimates calculated by a filter or a smoother. The updated model coefficient estimate is obtained via
$$\widehat{F} = \frac{\sum_{k=1}^N \widehat{x}_{k+1}\,\widehat{x}_k}{\sum_{k=1}^N \widehat{x}_k^2}.$$
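The following is a deliberately simplified, illustrative sketch of this filtering EM loop for a scalar first-order autoregressive state observed in additive white noise (assuming NumPy, with simulated data and made-up true parameters). It uses filtered rather than smoothed state estimates, approximates the cross-covariance terms in the M step, and adds the filtered error variance to the residual-based updates to keep them stable; it is not the algorithm of the cited references verbatim.

```python
import numpy as np

def em_kalman_ar1(z, F=0.5, q=1.0, r=1.0, n_iter=100):
    """Illustrative sketch only: joint state and parameter estimation for a
    scalar AR(1) state observed in white noise, alternating a Kalman filter
    (E step) with re-estimation of F and the noise variances (M step)."""
    N = len(z)
    for _ in range(n_iter):
        # E step: run a Kalman filter with the current parameter estimates.
        x_hat = np.zeros(N)   # filtered state means
        P_hat = np.zeros(N)   # filtered state variances
        x, P = 0.0, 1.0
        for k in range(N):
            x, P = F * x, F * F * P + q               # predict
            K = P / (P + r)                           # Kalman gain
            x, P = x + K * (z[k] - x), (1.0 - K) * P  # update with z[k]
            x_hat[k], P_hat[k] = x, P
        # M step: re-estimate parameters from the state estimates (approximate;
        # the filtered error variance is included to avoid degenerate updates).
        r = np.mean((z - x_hat) ** 2 + P_hat)
        F = np.sum(x_hat[1:] * x_hat[:-1]) / np.sum(x_hat[:-1] ** 2 + P_hat[:-1])
        q = np.mean((x_hat[1:] - F * x_hat[:-1]) ** 2 + P_hat[1:])
    return F, q, r

# Example usage on simulated data with made-up true parameters.
rng = np.random.default_rng(1)
true_F, true_q, true_r, N = 0.8, 0.2, 0.5, 2000
x = np.zeros(N)
for k in range(1, N):
    x[k] = true_F * x[k - 1] + rng.normal(0.0, np.sqrt(true_q))
z = x + rng.normal(0.0, np.sqrt(true_r), N)
print(em_kalman_ar1(z))
```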
The convergence of parameter estimates such as those above is well studied.[28][29][30][31]
Variants
A number of methods have been proposed to accelerate the sometimes slow convergence of the EM algorithm, such as those using conjugate gradient and modified Newton's methods (Newton–Raphson).[32] Also, EM can be used with constrained estimation methods.
The parameter-expanded expectation maximization (PX-EM) algorithm often provides speed up by "us[ing] a `covariance adjustment' to correct the analysis of the M step, capitalising on extra information captured in the imputed complete data".[33]
Expectation conditional maximization (ECM) replaces each M step with a sequence of conditional maximization (CM) steps in which each parameter θi is maximized individually, conditionally on the other parameters remaining fixed.[34] ECM can itself be extended into the expectation conditional maximization either (ECME) algorithm.[35]
This idea is further extended in the generalized expectation maximization (GEM) algorithm, in which only an increase in the objective function F is sought for both the E step and M step, as described in the section As a maximization–maximization procedure above.[21] GEM has also been developed for distributed environments and shows promising results.[36]
It is also possible to consider the EM algorithm as a subclass of the MM (Majorize/Minimize or Minorize/Maximize, depending on context) algorithm,[37] and therefore use any machinery developed in the more general case.
α-EM algorithm
The Q-function used in the EM algorithm is based on the log likelihood. Therefore, it is regarded as the log-EM algorithm. The use of the log likelihood can be generalized to that of the α-log likelihood ratio. Then, the α-log likelihood ratio of the observed data can be exactly expressed as equality by using the Q-function of the α-log likelihood ratio and the α-divergence. Obtaining this Q-function is a generalized E step. Its maximization is a generalized M step. This pair is called the α-EM algorithm[38]
which contains the log-EM algorithm as its subclass. Thus, the α-EM algorithm by Yasuo Matsuyama is an exact generalization of the log-EM algorithm. No computation of gradient or Hessian matrix is needed. The α-EM shows faster convergence than the log-EM algorithm by choosing an appropriate α. The α-EM algorithm leads to a faster version of the hidden Markov model estimation algorithm α-HMM.[39]
Relation to variational Bayes methods
EM is a partially non-Bayesian, maximum likelihood method. Its final result gives a probability distribution over the latent variables (in the Bayesian style) together with a point estimate for θ (either a maximum likelihood estimate or a posterior mode). A fully Bayesian version of this may be wanted, giving a probability distribution over θ and the latent variables. The Bayesian approach to inference is simply to treat θ as another latent variable. In this paradigm, the distinction between the E and M steps disappears. If using the factorized Q approximation as described above (variational Bayes), solving can iterate over each latent variable (now including θ) and optimize them one at a time. Now, k steps per iteration are needed, where k is the number of latent variables. For graphical models this is easy to do as each variable's new Q depends only on its Markov blanket, so local message passing can be used for efficient inference.
Gaussian mixture
Let $\mathbf{x} = (\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n)$ be a sample of $n$ independent observations from a mixture of two multivariate normal distributions of dimension $d$, and let $\mathbf{z} = (z_1, z_2, \ldots, z_n)$ be the latent variables that determine the component from which the observation originates.[22] That is,
$$X_i \mid (Z_i = 1) \sim \mathcal{N}_d(\boldsymbol\mu_1, \Sigma_1) \quad \text{and} \quad X_i \mid (Z_i = 2) \sim \mathcal{N}_d(\boldsymbol\mu_2, \Sigma_2),$$
where
$$P(Z_i = 1) = \tau_1 \quad \text{and} \quad P(Z_i = 2) = \tau_2 = 1 - \tau_1.$$
The aim is to estimate the unknown parameters representing the mixing value between the Gaussians and the means and covariances of each:
$$\theta = \big(\boldsymbol\tau, \boldsymbol\mu_1, \boldsymbol\mu_2, \Sigma_1, \Sigma_2\big),$$
where the incomplete-data likelihood function is
$$L(\theta; \mathbf{x}) = \prod_{i=1}^n \sum_{j=1}^2 \tau_j\, f(\mathbf{x}_i; \boldsymbol\mu_j, \Sigma_j),$$
and the complete-data likelihood function is
$$L(\theta; \mathbf{x}, \mathbf{z}) = p(\mathbf{x}, \mathbf{z} \mid \theta) = \prod_{i=1}^n \prod_{j=1}^2 \big[ f(\mathbf{x}_i; \boldsymbol\mu_j, \Sigma_j)\, \tau_j \big]^{\mathbb{I}(z_i = j)} = \exp\left\{ \sum_{i=1}^n \sum_{j=1}^2 \mathbb{I}(z_i = j) \Big[ \log \tau_j - \tfrac{1}{2} \log |\Sigma_j| - \tfrac{1}{2} (\mathbf{x}_i - \boldsymbol\mu_j)^\top \Sigma_j^{-1} (\mathbf{x}_i - \boldsymbol\mu_j) - \tfrac{d}{2} \log(2\pi) \Big] \right\},$$
where $\mathbb{I}$ is an indicator function and $f$ is the probability density function of a multivariate normal.
In the last equality, for each i, one indicator is equal to zero, and one indicator is equal to one. The inner sum thus reduces to one term.
E step
Given our current estimate of the parameters $\theta^{(t)}$, the conditional distribution of the $Z_i$ is determined by Bayes' theorem to be the proportional height of the normal density weighted by $\tau$:
$$T_{j,i}^{(t)} := P\big(Z_i = j \mid X_i = \mathbf{x}_i; \theta^{(t)}\big) = \frac{\tau_j^{(t)} f\big(\mathbf{x}_i; \boldsymbol\mu_j^{(t)}, \Sigma_j^{(t)}\big)}{\tau_1^{(t)} f\big(\mathbf{x}_i; \boldsymbol\mu_1^{(t)}, \Sigma_1^{(t)}\big) + \tau_2^{(t)} f\big(\mathbf{x}_i; \boldsymbol\mu_2^{(t)}, \Sigma_2^{(t)}\big)}.$$
These are called the "membership probabilities", which are normally considered the output of the E step (although this is not the Q function $Q(\theta \mid \theta^{(t)})$ defined below).
This E step corresponds with setting up this function for Q:
$$Q(\theta \mid \theta^{(t)}) = \operatorname{E}_{\mathbf{Z} \mid \mathbf{X} = \mathbf{x}; \theta^{(t)}}\big[ \log L(\theta; \mathbf{x}, \mathbf{Z}) \big] = \sum_{i=1}^n \sum_{j=1}^2 T_{j,i}^{(t)} \Big[ \log \tau_j - \tfrac{1}{2} \log |\Sigma_j| - \tfrac{1}{2} (\mathbf{x}_i - \boldsymbol\mu_j)^\top \Sigma_j^{-1} (\mathbf{x}_i - \boldsymbol\mu_j) - \tfrac{d}{2} \log(2\pi) \Big].$$
The expectation of $\log L(\theta; \mathbf{x}_i, Z_i)$ inside the sum is taken with respect to the probability density function $P(Z_i \mid X_i = \mathbf{x}_i; \theta^{(t)})$, which might be different for each $\mathbf{x}_i$ of the training set. Everything in the E step is known before the step is taken except $T_{j,i}$, which is computed according to the equation at the beginning of the E step section.
This full conditional expectation does not need to be calculated in one step, because τ and μ/Σ appear in separate linear terms and can thus be maximized independently.
M step
$Q(\theta \mid \theta^{(t)})$ being quadratic in form means that determining the maximizing values of $\theta$ is relatively straightforward. Also, $\boldsymbol\tau$, $(\boldsymbol\mu_1, \Sigma_1)$ and $(\boldsymbol\mu_2, \Sigma_2)$ may all be maximized independently since they all appear in separate linear terms.
To begin, consider $\boldsymbol\tau$, which has the constraint $\tau_1 + \tau_2 = 1$:
$$\boldsymbol\tau^{(t+1)} = \underset{\boldsymbol\tau}{\operatorname{arg\,max}}\ Q(\theta \mid \theta^{(t)}) = \underset{\boldsymbol\tau}{\operatorname{arg\,max}}\ \sum_{i=1}^n \Big[ T_{1,i}^{(t)} \log \tau_1 + T_{2,i}^{(t)} \log \tau_2 \Big].$$
This has the same form as the maximum likelihood estimate for the binomial distribution, so
$$\tau_j^{(t+1)} = \frac{\sum_{i=1}^n T_{j,i}^{(t)}}{\sum_{i=1}^n \big( T_{1,i}^{(t)} + T_{2,i}^{(t)} \big)} = \frac{1}{n} \sum_{i=1}^n T_{j,i}^{(t)}.$$
For the next estimates of $(\boldsymbol\mu_1, \Sigma_1)$:
$$\big(\boldsymbol\mu_1^{(t+1)}, \Sigma_1^{(t+1)}\big) = \underset{\boldsymbol\mu_1, \Sigma_1}{\operatorname{arg\,max}}\ Q(\theta \mid \theta^{(t)}) = \underset{\boldsymbol\mu_1, \Sigma_1}{\operatorname{arg\,max}}\ \sum_{i=1}^n T_{1,i}^{(t)} \Big[ -\tfrac{1}{2} \log |\Sigma_1| - \tfrac{1}{2} (\mathbf{x}_i - \boldsymbol\mu_1)^\top \Sigma_1^{-1} (\mathbf{x}_i - \boldsymbol\mu_1) \Big].$$
This has the same form as a weighted maximum likelihood estimate for a normal distribution, so
$$\boldsymbol\mu_1^{(t+1)} = \frac{\sum_{i=1}^n T_{1,i}^{(t)} \mathbf{x}_i}{\sum_{i=1}^n T_{1,i}^{(t)}}$$
and
$$\Sigma_1^{(t+1)} = \frac{\sum_{i=1}^n T_{1,i}^{(t)} \big(\mathbf{x}_i - \boldsymbol\mu_1^{(t+1)}\big)\big(\mathbf{x}_i - \boldsymbol\mu_1^{(t+1)}\big)^\top}{\sum_{i=1}^n T_{1,i}^{(t)}}$$
and, by symmetry,
$$\boldsymbol\mu_2^{(t+1)} = \frac{\sum_{i=1}^n T_{2,i}^{(t)} \mathbf{x}_i}{\sum_{i=1}^n T_{2,i}^{(t)}}$$
and
$$\Sigma_2^{(t+1)} = \frac{\sum_{i=1}^n T_{2,i}^{(t)} \big(\mathbf{x}_i - \boldsymbol\mu_2^{(t+1)}\big)\big(\mathbf{x}_i - \boldsymbol\mu_2^{(t+1)}\big)^\top}{\sum_{i=1}^n T_{2,i}^{(t)}}.$$
Termination
Conclude the iterative process if
$$\operatorname{E}_{\mathbf{Z} \mid \theta^{(t)}, \mathbf{x}}\big[ \log L(\theta^{(t)}; \mathbf{x}, \mathbf{Z}) \big] \leq \operatorname{E}_{\mathbf{Z} \mid \theta^{(t-1)}, \mathbf{x}}\big[ \log L(\theta^{(t-1)}; \mathbf{x}, \mathbf{Z}) \big] + \varepsilon$$
for $\varepsilon$ below some preset threshold.
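Putting the E step, M step and termination test together, the two-component example above can be written compactly as follows. This is a sketch assuming NumPy and SciPy, with simulated data and an arbitrary initialization; a production implementation would add safeguards against the degenerate, zero-variance components discussed in the Introduction.

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_gmm_two_components(x, n_iter=200, tol=1e-8, seed=0):
    """EM for a two-component multivariate Gaussian mixture, following the
    updates derived above. x has shape (n, d)."""
    rng = np.random.default_rng(seed)
    n, d = x.shape
    # Initialization: random data points as means, shared sample covariance.
    tau = np.array([0.5, 0.5])
    mu = x[rng.choice(n, size=2, replace=False)]
    sigma = np.array([np.cov(x.T) + 1e-6 * np.eye(d)] * 2)
    prev_ll = -np.inf
    for _ in range(n_iter):
        # E step: membership probabilities T[j, i] = P(Z_i = j | x_i; theta).
        pdf = np.array([tau[j] * multivariate_normal.pdf(x, mu[j], sigma[j])
                        for j in range(2)])           # shape (2, n)
        T = pdf / pdf.sum(axis=0, keepdims=True)
        # M step: closed-form updates for tau, mu and sigma.
        Nj = T.sum(axis=1)                             # effective counts
        tau = Nj / n
        mu = (T @ x) / Nj[:, None]
        for j in range(2):
            diff = x - mu[j]
            sigma[j] = (T[j][:, None] * diff).T @ diff / Nj[j]
        # Termination: stop when the observed-data log-likelihood stops improving.
        ll = np.log(pdf.sum(axis=0)).sum()
        if ll - prev_ll < tol:
            break
        prev_ll = ll
    return tau, mu, sigma

# Example usage on simulated data (made-up true parameters).
rng = np.random.default_rng(42)
x = np.vstack([rng.multivariate_normal([0, 0], np.eye(2), 300),
               rng.multivariate_normal([4, 4], [[1, 0.5], [0.5, 1]], 700)])
print(em_gmm_two_components(x))
```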
Truncated and censored regression
The EM algorithm has been implemented in the case where an underlying linear regression model exists explaining the variation of some quantity, but where the values actually observed are censored or truncated versions of those represented in the model.[40] Special cases of this model include censored or truncated observations from one normal distribution.[40]
Alternatives
EM typically converges to a local optimum, not necessarily the global optimum, with no bound on the convergence rate in general. It can be arbitrarily poor in high dimensions, and there can be an exponential number of local optima. Hence, a need exists for alternative methods for guaranteed learning, especially in the high-dimensional setting. Alternatives to EM exist with better guarantees for consistency, which are termed moment-based approaches[41] or the so-called spectral techniques.[42][43] Moment-based approaches to learning the parameters of a probabilistic model enjoy guarantees such as global convergence under certain conditions, unlike EM, which is often plagued by the issue of getting stuck in local optima. Algorithms with guarantees for learning can be derived for a number of important models such as mixture models, HMMs, etc. For these spectral methods, no spurious local optima occur, and the true parameters can be consistently estimated under some regularity conditions.[citation needed]
References
^ Jeongyeol Kwon, Constantine Caramanis. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:1727–1736, 2020.
^ Sundberg, Rolf (1974). "Maximum likelihood theory for incomplete data from an exponential family". Scandinavian Journal of Statistics. 1 (2): 49–58. JSTOR 4615553. MR 0381110.
^ a b Rolf Sundberg. 1971. Maximum likelihood theory and applications for distributions generated when observing a function of an exponential family variable. Dissertation, Institute for Mathematical Statistics, Stockholm University.
^ a b Sundberg, Rolf (1976). "An iterative method for solution of the likelihood equations for incomplete data from exponential families". Communications in Statistics – Simulation and Computation. 5 (1): 55–64. doi:10.1080/03610917608812007. MR 0443190.
^ See the acknowledgement by Dempster, Laird and Rubin on pages 3, 5 and 11.
^ a b Per Martin-Löf. 1966. Statistics from the point of view of statistical mechanics. Lecture notes, Mathematical Institute, Aarhus University. ("Sundberg formula", credited to Anders Martin-Löf).
^ a b Per Martin-Löf. 1970. Statistiska Modeller (Statistical Models): Anteckningar från seminarier läsåret 1969–1970 (Lecture notes 1969–1970), with the assistance of Rolf Sundberg. Stockholm University.
^ a b Martin-Löf, P. The notion of redundancy and its use as a quantitative measure of the deviation between a statistical hypothesis and a set of observational data. With a discussion by F. Abildgård, A. P. Dempster, D. Basu, D. R. Cox, A. W. F. Edwards, D. A. Sprott, G. A. Barnard, O. Barndorff-Nielsen, J. D. Kalbfleisch and G. Rasch and a reply by the author. Proceedings of Conference on Foundational Questions in Statistical Inference (Aarhus, 1973), pp. 1–42. Memoirs, No. 1, Dept. Theoret. Statist., Inst. Math., Univ. Aarhus, Aarhus, 1974.
^ a b Martin-Löf, Per (1974). "The notion of redundancy and its use as a quantitative measure of the discrepancy between a statistical hypothesis and a set of observational data". Scand. J. Statist. 1 (1): 3–18.
^ Lindstrom, Mary J; Bates, Douglas M (1988). "Newton–Raphson and EM Algorithms for Linear Mixed-Effects Models for Repeated-Measures Data". Journal of the American Statistical Association. 83 (404): 1014. doi:10.1080/01621459.1988.10478693.
^ Van Dyk, David A (2000). "Fitting Mixed-Effects Models Using Efficient EM-Type Algorithms". Journal of Computational and Graphical Statistics. 9 (1): 78–98. doi:10.2307/1390614. JSTOR 1390614.
^ Matarazzo, T. J., and Pakzad, S. N. (2016). "STRIDE for Structural Identification using Expectation Maximization: Iterative Output-Only Method for Modal Identification." Journal of Engineering Mechanics. http://ascelibrary.org/doi/abs/10.1061/(ASCE)EM.1943-7889.0000951
^ Einicke, G. A.; Malos, J. T.; Reid, D. C.; Hainsworth, D. W. (January 2009). "Riccati Equation and EM Algorithm Convergence for Inertial Navigation Alignment". IEEE Trans. Signal Process. 57 (1): 370–375. Bibcode:2009ITSP...57..370E. doi:10.1109/TSP.2008.2007090. S2CID 1930004.
^ Liu, Chuanhai; Rubin, Donald B (1994). "The ECME Algorithm: A Simple Extension of EM and ECM with Faster Monotone Convergence". Biometrika. 81 (4): 633. doi:10.1093/biomet/81.4.633. JSTOR 2337067.
^ Matsuyama, Yasuo (2003). "The α-EM algorithm: Surrogate likelihood maximization using α-logarithmic information measures". IEEE Transactions on Information Theory. 49 (3): 692–706. doi:10.1109/TIT.2002.808105.
^ Matsuyama, Yasuo (2011). "Hidden Markov model estimation based on alpha-EM algorithm: Discrete and continuous alpha-HMMs". International Joint Conference on Neural Networks: 808–816.
^ Balle, Borja; Quattoni, Ariadna; Carreras, Xavier (2012-06-27). Local Loss Optimization in Operator Models: A New Insight into Spectral Learning. OCLC 815865081.
Further reading
Hogg, Robert; McKean, Joseph; Craig, Allen (2005). Introduction to Mathematical Statistics. Upper Saddle River, NJ: Pearson Prentice Hall. pp. 359–364.
Dellaert, Frank (February 2002). The Expectation Maximization Algorithm (PDF) (Technical Report number GIT-GVU-02-20). Georgia Tech College of Computing. Gives an easier explanation of the EM algorithm in terms of lower-bound maximization.
Gupta, M. R.; Chen, Y. (2010). "Theory and Use of the EM Algorithm". Foundations and Trends in Signal Processing. 4 (3): 223–296. CiteSeerX 10.1.1.219.6830. doi:10.1561/2000000034. A well-written short book on EM, including detailed derivation of EM for GMMs, HMMs, and Dirichlet.
McLachlan, Geoffrey J.; Krishnan, Thriyambakam (2008). The EM Algorithm and Extensions (2nd ed.). Hoboken: Wiley. ISBN 978-0-471-20170-0.
External links
Various 1D, 2D and 3D demonstrations of EM together with Mixture Modeling are provided as part of the paired SOCR activities and applets. These applets and activities show empirically the properties of the EM algorithm for parameter estimation in diverse settings.