We have discussed the Minimum Variance Unbiased Estimator (MVUE) in one of the previous articles. In practice, knowledge of the PDF of the underlying process is usually unavailable, and even if the PDF is known, finding an MVUE is not guaranteed: except in the linear (Gaussian) model case, the optimal MVU estimator might

1. not even exist, or
2. be difficult or impossible to find.

We therefore resort to a sub-optimal estimate, and the Best Linear Unbiased Estimator (BLUE) is one such sub-optimal estimate. The price paid is that we may not be sure how much performance we have lost, since the MVUE is not available for benchmarking (the underlying PDF of the process being unknown). The idea behind the BLUE is:

1. Restrict the estimate to be linear in the data.
2. Restrict the estimate to be unbiased.
3. Among all such estimators, find the best one, i.e. the one with minimum variance; this is what we would like to find.

Restricting the estimate to be linear means that it weights the data samples with a vector of constants \( \textbf{a} \),

$$ \hat{\theta} = \sum_{n=0}^{N-1} a_n x[n] = \textbf{a}^T \textbf{x} , \;\;\;\;\;\;\;\;\;\; (4) $$

where \( \textbf{a} \) is a vector of constants whose value we seek to find in order to meet the design specifications; the entire estimation problem thus boils down to finding \( \textbf{a} \). Consider a data model, as shown below, where the observed samples are in linear form with respect to the parameter to be estimated:

$$ x[n] = s[n]\, \theta + w[n] . \;\;\;\;\;\;\;\;\;\; (5) $$

Here \( \theta \) is an unknown constant, \( s[n] \) is a known signal, and \( w[n] \) is zero-mean process noise whose PDF can take any form (uniform, Gaussian, colored, etc.).
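To make the setup concrete, here is a minimal simulation sketch of the data model (5) and of one arbitrary linear estimate \( \textbf{a}^T\textbf{x} \). Everything numeric in it (the value of \( \theta \), the signal \( s[n] \), the sample size and the uniform noise) is an illustrative assumption, not something fixed by the discussion above.

```python
# A minimal sketch of the data model x[n] = s[n]*theta + w[n] (eq. 5) and of a
# generic linear estimate theta_hat = a^T x. The numbers below (theta, s[n],
# N, uniform noise) are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

N = 10                           # number of samples
theta = 2.5                      # the unknown constant to be estimated
s = np.linspace(1.0, 2.0, N)     # known signal samples s[n]

w = rng.uniform(-1.0, 1.0, N)    # zero-mean noise; PDF need not be Gaussian
x = s * theta + w                # observed data x[n]

# Any vector of constants a gives a *linear* estimator theta_hat = a^T x.
# This particular choice happens to satisfy a^T s = 1; how to choose a
# optimally is the subject of the rest of the derivation.
a = np.ones(N) / s.sum()
theta_hat = a @ x
print(theta_hat)
```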
BLUE is an acronym for Best Linear Unbiased Estimator. In this context, the definition of "best" refers to the minimum variance or the narrowest sampling distribution, and "unbiased" means that the estimator's expectation equals the true parameter value. For example, if \( X_1, X_2, \dotsc, X_n \) is an i.i.d. sample with mean \( \mu \), any given sample mean may underestimate or overestimate \( \mu \), but there is no systematic tendency for sample means to either under- or overestimate \( \mu \). Unbiasedness is discussed in more detail in the lecture entitled Point estimation. (A classical appearance of a BLUE: in one ratio-type statistic used with normal data, the term \( \hat{\sigma}_1 \) in the numerator is the best linear unbiased estimator of \( \sigma \) under the assumption of normality, while the term \( \hat{\sigma}_2 \) in the denominator is the usual sample standard deviation \( S \); if the data are normal, both will estimate \( \sigma \), and hence the ratio will be close to 1.)

The linearity constraint was already given above in (4); unbiasedness imposes one further condition on \( \textbf{a} \). The mean of the data model (5) is

$$ E\left(x[n]\right) = E\left(s[n]\theta + w[n]\right) = s[n]\, \theta , \;\;\;\;\;\;\;\;\;\; (6) $$

so the expectation of the linear estimate is

$$ E[\hat{\theta}] = \sum_{n=0}^{N-1} a_n E\left(x[n]\right) = \theta \sum_{n=0}^{N-1} a_n s[n] = \theta\, \textbf{a}^T \textbf{s} . \;\;\;\;\;\;\;\;\;\; (7) $$

Unbiasedness requires

$$ \theta\, \textbf{a}^T \textbf{s} = \theta , \;\;\;\;\;\;\;\;\;\; (8) $$

and this equality can hold for every value of \( \theta \) only if

$$ \textbf{a}^T \textbf{s} = 1 . \;\;\;\;\;\;\;\;\;\; (9) $$

Note that only the first two moments (mean and variance) of the PDF are needed to impose these constraints and, as shown next, to find the BLUE; full knowledge of the PDF is not required.
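The constraint (9) can be checked by simulation: any weight vector with \( \textbf{a}^T\textbf{s} = 1 \) gives an unbiased estimate regardless of the (zero-mean) noise PDF. The particular weights and the Laplacian noise below are arbitrary illustrative choices; this \( \textbf{a} \) is generally not the BLUE.

```python
# Monte Carlo check of the unbiasedness condition a^T s = 1 (eq. 9): any
# weights satisfying it give E[theta_hat] = theta for zero-mean noise of any
# PDF. The weights and the Laplacian noise are illustrative; a is not the BLUE.
import numpy as np

rng = np.random.default_rng(1)
N, theta = 10, 2.5
s = np.linspace(1.0, 2.0, N)

a = s / (s @ s)                      # satisfies a^T s = 1
trials = 20000
est = np.empty(trials)
for k in range(trials):
    w = rng.laplace(0.0, 1.0, N)     # zero-mean, non-Gaussian noise
    est[k] = a @ (s * theta + w)     # linear estimate a^T x

print(est.mean())                    # close to theta = 2.5, i.e. unbiased
```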
Among all weight vectors satisfying the unbiasedness constraint (9), the BLUE is the one that minimizes the variance of the estimate. Writing \( \textbf{C} \) for the covariance matrix of the noise vector \( \textbf{w} \), the variance of the linear estimate is

$$ \operatorname{var}(\hat{\theta}) = \operatorname{var}(\textbf{a}^T\textbf{x}) = \textbf{a}^T \textbf{C}\, \textbf{a} . \;\;\;\;\;\;\;\;\;\; (10) $$

Minimizing this variance subject to \( \textbf{a}^T\textbf{s} = 1 \) is a typical Lagrangian multiplier problem, which can be considered as minimizing the following cost with respect to \( \textbf{a} \):

$$ J = \textbf{a}^T \textbf{C} \textbf{a} + \lambda(\textbf{a}^T \textbf{s} - 1) . \;\;\;\;\;\;\;\;\;\; (11) $$

Minimizing \( J \) with respect to \( \textbf{a} \) is equivalent to setting the first derivative of \( J \) w.r.t. \( \textbf{a} \) to zero:
$$ \frac{\partial J}{\partial \textbf{a}} = 2\textbf{C}\textbf{a} + \lambda \textbf{s} = 0 \;\Rightarrow\; \boxed{\textbf{a} = -\frac{\lambda}{2}\textbf{C}^{-1}\textbf{s}} \;\;\;\;\;\;\;\;\;\; (12) $$

Substituting (12) into the unbiasedness constraint (9) fixes the Lagrange multiplier:

$$ \textbf{a}^T \textbf{s} = -\frac{\lambda}{2}\textbf{s}^{T}\textbf{C}^{-1} \textbf{s} = 1 \;\Rightarrow\; \boxed{-\frac{\lambda}{2} = \frac{1}{\textbf{s}^{T}\textbf{C}^{-1} \textbf{s}}} \;\;\;\;\;\;\;\;\;\; (13) $$

Finally, from (12) and (13), the coefficients of the BLUE (the vector of constants that weights the data samples) are given by

$$ \boxed{\textbf{a} = \frac{\textbf{C}^{-1}\textbf{s}}{\textbf{s}^{T}\textbf{C}^{-1} \textbf{s}}} \;\;\;\;\;\;\;\;\;\; (14) $$

The BLUE estimate and its variance follow as

$$ \boxed{\hat{\theta}_{BLUE} = \textbf{a}^{T} \textbf{x} = \frac{\textbf{s}^{T}\textbf{C}^{-1}\textbf{x}}{\textbf{s}^{T}\textbf{C}^{-1} \textbf{s}}} \;\;\;\;\;\;\;\;\;\; (15) $$

$$ \boxed{\operatorname{var}(\hat{\theta}) = \frac{1}{\textbf{s}^{T}\textbf{C}^{-1} \textbf{s}}} \;\;\;\;\;\;\;\;\;\; (16) $$

To summarize: in order to find a BLUE estimator for a given set of data, two constraints, linearity and unbiasedness, must be satisfied, and among the estimators satisfying them the one with minimum variance is chosen. Only the mean and covariance of the data enter the solution (14)-(16), which is why full knowledge of the PDF is not needed.
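The closed-form solution (14)-(16) is easy to verify numerically. The sketch below assumes a known AR(1)-style noise covariance \( \textbf{C} \) (an illustrative choice, not part of the derivation) and checks that the empirical variance of \( \hat{\theta}_{BLUE} \) agrees with \( 1/(\textbf{s}^{T}\textbf{C}^{-1}\textbf{s}) \).

```python
# Numerical check of the closed-form BLUE, eqs. (14)-(16). The AR(1)-style
# covariance C below is an assumed example, chosen so that the BLUE weights
# differ from plain averaging.
import numpy as np

rng = np.random.default_rng(2)
N, theta = 8, 2.5
s = np.linspace(1.0, 2.0, N)

rho = 0.7
idx = np.arange(N)
C = rho ** np.abs(np.subtract.outer(idx, idx))   # assumed known noise covariance

Cinv_s = np.linalg.solve(C, s)                   # C^{-1} s
denom = s @ Cinv_s                               # s^T C^{-1} s
a = Cinv_s / denom                               # eq. (14): BLUE weights
var_theory = 1.0 / denom                         # eq. (16)

# Empirical variance of theta_hat = a^T x (eq. 15) with noise drawn from C.
L = np.linalg.cholesky(C)
est = np.array([a @ (s * theta + L @ rng.standard_normal(N))
                for _ in range(20000)])
print(var_theory, est.var())                     # the two values should agree
```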
The same optimality concept appears in the general linear model. In this setting we consider

$$ \mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon}, \quad\text{or in short}\quad \mathscr{M} = \{ \mathbf{y},\, \mathbf{X}\boldsymbol{\beta},\, \mathbf{V} \}, $$

where \( \mathbf{y} \) is an observable random vector, \( \mathbf{X} \) is a known \( n \times p \) model matrix, \( \boldsymbol{\beta} \in \mathbf{R}^p \) is a vector of unknown fixed parameters, and \( \boldsymbol{\varepsilon} \) is an unobservable vector of random errors with \( E(\boldsymbol{\varepsilon}) = \mathbf{0}_n \) and known nonnegative definite (possibly singular) covariance matrix \( \mathbf{V} \) (the scale factor is taken as \( \sigma^2 = 1 \)). Interest focuses on linear estimators of \( \boldsymbol{\beta} \), or of \( \mathbf{X}\boldsymbol{\beta} \), which would provide an unbiased and in some sense "best" estimator. Throughout, \( \mathscr{C}(\mathbf{A}) \) denotes the column space and \( \mathscr{N}(\mathbf{A}) \) the null space of \( \mathbf{A} \); \( \mathbf{A}' \), \( \mathbf{A}^{-} \) and \( \mathbf{A}^{+} \) denote the transpose, a generalized inverse and the Moore-Penrose inverse; \( \mathbf{A}^{\bot} \) is any matrix satisfying \( \mathscr{C}(\mathbf{A}^{\bot}) = \mathscr{N}(\mathbf{A}') = \mathscr{C}(\mathbf{A})^{\bot} \); \( \mathbf{P}_{\mathbf{A}} = \mathbf{A}(\mathbf{A}'\mathbf{A})^{-}\mathbf{A}' \) denotes the orthogonal projector (with respect to the standard inner product) onto \( \mathscr{C}(\mathbf{A}) \), with \( \mathbf{H} = \mathbf{P}_{\mathbf{X}} \) and \( \mathbf{M} = \mathbf{I}_n - \mathbf{H} \); and \( \operatorname{var} \), \( \operatorname{cov} \), \( \operatorname{tr} \) and \( \det \) refer to the variance, the covariance matrix, the trace and the determinant.

A parametric function \( \mathbf{K}'\boldsymbol{\beta} \) is estimable if it has a linear unbiased estimator, that is, if there exists a matrix \( \mathbf{A} \) with \( E(\mathbf{A}\mathbf{y}) = \mathbf{A}\mathbf{X}\boldsymbol{\beta} = \mathbf{K}'\boldsymbol{\beta} \) for all \( \boldsymbol{\beta} \in \mathbf{R}^{p} \); equivalently, there exists \( \mathbf{A} \) such that \( \mathbf{K}' = \mathbf{A}\mathbf{X} \), i.e. \( \mathscr{C}(\mathbf{K}) \subset \mathscr{C}(\mathbf{X}') \). Not all parametric functions have linear unbiased estimators; those which do are called estimable parametric functions. The expectation \( \mathbf{X}\boldsymbol{\beta} \) is trivially estimable, an estimator \( \mathbf{B}\mathbf{y} \) is unbiased for \( \boldsymbol{\beta} \) itself if and only if \( \mathbf{B}\mathbf{X} = \mathbf{I}_p \), and if \( \mathbf{X} \) has full column rank then \( \boldsymbol{\beta} \) is estimable. The ordinary least squares estimator solves the normal equations \( \mathbf{X}'\mathbf{X}\hat{\boldsymbol{\beta}} = \mathbf{X}'\mathbf{y} \) and can be expressed as \( \hat{\boldsymbol{\beta}} = (\mathbf{X}'\mathbf{X})^{-}\mathbf{X}'\mathbf{y} \), with \( \operatorname{OLSE}(\mathbf{K}'\boldsymbol{\beta}) = \mathbf{K}'\hat{\boldsymbol{\beta}} \).

A statistic \( \mathbf{G}\mathbf{y} \) is said to be the best linear unbiased estimator (BLUE) of \( \mathbf{X}\boldsymbol{\beta} \) if \( E(\mathbf{G}\mathbf{y}) = \mathbf{X}\boldsymbol{\beta} \) (equivalently \( \mathbf{G}\mathbf{X} = \mathbf{X} \)) and \( \operatorname{cov}(\mathbf{G}\mathbf{y}) \leq_{\rm L} \operatorname{cov}(\mathbf{L}\mathbf{y}) \) for every \( \mathbf{L}\mathbf{y} \) such that \( E(\mathbf{L}\mathbf{y}) = \mathbf{X}\boldsymbol{\beta} \). Here "\( \leq_{\rm L} \)" refers to the Löwner partial ordering: \( \mathbf{A} \leq_{\rm L} \mathbf{B} \) means that the difference \( \mathbf{B} - \mathbf{A} \) is a symmetric nonnegative definite matrix [cf. Marshall and Olkin (1979, p. 462)]; in words, one covariance matrix is said to be larger than another if their difference is positive semi-definite. Equivalently, the BLUE \( \tilde{\boldsymbol{\beta}} \) of the vector \( \boldsymbol{\beta} \) of parameters is the linear unbiased estimator with the smallest mean squared error for every vector \( \boldsymbol{\lambda} \) of linear combination parameters; in particular \( \operatorname{var}(\tilde{\beta}_i) \le \operatorname{var}(\beta^{*}_i) \) for \( i = 1,\dotsc,p \), \( \operatorname{tr}[\operatorname{cov}(\tilde{\boldsymbol{\beta}})] \le \operatorname{tr}[\operatorname{cov}(\boldsymbol{\beta}^{*})] \) and \( \det[\operatorname{cov}(\tilde{\boldsymbol{\beta}})] \le \det[\operatorname{cov}(\boldsymbol{\beta}^{*})] \) for any linear unbiased estimator \( \boldsymbol{\beta}^{*} \) of \( \boldsymbol{\beta} \).

The fundamental characterization is that \( \mathbf{G}\mathbf{y} \) is the BLUE for \( \mathbf{X}\boldsymbol{\beta} \) if and only if

$$ \mathbf{G}(\mathbf{X} : \mathbf{V}\mathbf{X}^{\bot}) = (\mathbf{X} : \mathbf{0}); $$

two matrix-based proofs that such a linear estimator is the BLUE are given by Puntanen, Styan and Werner (2000), and the conditions can be packaged via Rao's (1971) "Pandora's Box" approach built around the partitioned matrix \( \begin{pmatrix} \mathbf{V} & \mathbf{X} \\ \mathbf{X}' & \mathbf{0} \end{pmatrix} \). The equation has a unique solution for \( \mathbf{G} \) if and only if \( \mathscr{C}(\mathbf{X} : \mathbf{V}) = \mathbf{R}^n \); in any case the value of the BLUE is unique with probability 1, because \( \mathbf{y} \) belongs to the subspace \( \mathscr{C}(\mathbf{X} : \mathbf{V}) \) with probability 1 (the consistency condition of the model). Explicit representations include

$$ \mathbf{G}_1 = \mathbf{X}(\mathbf{X}'\mathbf{W}^{-}\mathbf{X})^{-}\mathbf{X}'\mathbf{W}^{-}, \qquad \mathbf{G}_2 = \mathbf{H} - \mathbf{H}\mathbf{V}\mathbf{M}(\mathbf{M}\mathbf{V}\mathbf{M})^{-}\mathbf{M}, $$

where \( \mathbf{W} = \mathbf{V} + \mathbf{X}\mathbf{U}\mathbf{X}' \) and \( \mathbf{U} \) is any arbitrary conformable matrix such that \( \mathscr{C}(\mathbf{W}) = \mathscr{C}(\mathbf{X} : \mathbf{V}) \); the general solutions differ from \( \mathbf{G}_1 \) and \( \mathbf{G}_2 \) only by terms involving arbitrary matrices \( \mathbf{F}_1 \) and \( \mathbf{F}_2 \) that annihilate every \( \mathbf{y} \in \mathscr{C}(\mathbf{X} : \mathbf{V}) \). If \( \mathbf{V} \) is positive definite, the BLUE takes the familiar generalized least squares form

$$ \operatorname{BLUE}(\mathbf{X}\boldsymbol{\beta}) = \mathbf{X}(\mathbf{X}'\mathbf{V}^{-1}\mathbf{X})^{-}\mathbf{X}'\mathbf{V}^{-1}\mathbf{y}. $$

Under \( \{ \mathbf{y},\, \mathbf{X}\boldsymbol{\beta},\, \mathbf{I}_n \} \) the OLSE of \( \mathbf{X}\boldsymbol{\beta} \) is trivially the BLUE; this result is often called the Gauss-Markov theorem. In econometrics, the ordinary least squares (OLS) method is widely used to estimate the parameters of a linear regression model, and linear regression models have several applications in real life. The Gauss-Markov theorem states that under the classical assumptions (among them that the conditional mean of the error term is zero and that the errors are homoscedastic and uncorrelated; these basic assumptions must not be violated if the estimates are to be BLUE) the OLS estimator \( \mathbf{b} \) is best linear unbiased: it is unbiased, it has smaller variance than any other linear unbiased estimator, and it is therefore efficient among all linear estimators (and, when the errors are Gaussian, among all estimators that use some function of the data). In the simple regression model it can further be shown that the ordinary least squares estimators \( b_0 \) and \( b_1 \) possess the minimum variance in the class of linear unbiased estimators. For the general model \( \mathscr{M} \), the equality \( \operatorname{OLSE}(\mathbf{X}\boldsymbol{\beta}) = \operatorname{BLUE}(\mathbf{X}\boldsymbol{\beta}) \) has received a great deal of attention in the literature since Anderson (1948) ("On the theory of testing serial correlation"); for some further references from those years we may mention Zyskind (1967) and Watson (1967), and a number of equivalent conditions (six are commonly listed) are available, for instance \( \mathbf{H}\mathbf{V} = \mathbf{V}\mathbf{H} \), i.e. \( \mathscr{C}(\mathbf{V}\mathbf{X}) \subset \mathscr{C}(\mathbf{X}) \). Related questions concern the equality of the BLUEs under two models \( \mathscr{M}_1 = \{ \mathbf{y},\, \mathbf{X}\boldsymbol{\beta},\, \mathbf{V}_1 \} \) and \( \mathscr{M}_2 = \{ \mathbf{y},\, \mathbf{X}\boldsymbol{\beta},\, \mathbf{V}_2 \} \), for which the key condition is \( \mathscr{C}(\mathbf{V}_1\mathbf{X}^{\bot}) = \mathscr{C}(\mathbf{V}_2\mathbf{X}^{\bot}) \), the effect of adding regressors on that equality, the corresponding results under two partitioned models, and estimation with an incorrect dispersion matrix or under the "natural restrictions" of the singular Gauss-Markov model; see Rao (1967) ("Least squares theory using an estimated dispersion matrix and its application to measurement of signals"), Mitra and Moore (1973) ("Gauss-Markov estimation with an incorrect dispersion matrix") and Haslett and Puntanen (2010a, 2010b, 2010c).
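For a concrete feel for the OLSE-versus-BLUE comparison, the sketch below simulates a small instance of \( \{ \mathbf{y},\, \mathbf{X}\boldsymbol{\beta},\, \mathbf{V} \} \) with a positive definite, heteroscedastic \( \mathbf{V} \) (the design matrix, coefficients and variances are made-up examples) and compares the sampling variances of the OLSE with those of the BLUE computed from the generalized least squares form above.

```python
# Simulation comparing the OLSE with the BLUE (generalized least squares) in
# the model {y, X beta, V} with V positive definite. X, beta and the
# heteroscedastic V are made-up examples.
import numpy as np

rng = np.random.default_rng(3)
n = 30
X = np.column_stack([np.ones(n), np.arange(n, dtype=float)])
beta = np.array([1.0, 0.5])

V = np.diag(np.linspace(0.5, 5.0, n))    # later observations are noisier
L = np.linalg.cholesky(V)

W = np.linalg.solve(V, X)                # V^{-1} X
A_blue = np.linalg.solve(X.T @ W, W.T)   # (X' V^{-1} X)^{-1} X' V^{-1}

ols, blue = [], []
for _ in range(5000):
    y = X @ beta + L @ rng.standard_normal(n)
    ols.append(np.linalg.lstsq(X, y, rcond=None)[0])   # OLSE of beta
    blue.append(A_blue @ y)                            # BLUE / GLS of beta
ols, blue = np.array(ols), np.array(blue)

print(ols.mean(axis=0), blue.mean(axis=0))   # both close to beta (unbiased)
print(ols.var(axis=0), blue.var(axis=0))     # BLUE variances are smaller
```

Both estimators come out unbiased, but the BLUE's variances are smaller, which is the Löwner ordering showing up on the diagonal of the covariance matrices.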
The same machinery yields best linear unbiased prediction. Let \( \mathbf{y}_f \) denote an \( m \times 1 \) unobservable random vector containing new observations, assumed to follow \( \mathbf{y}_f = \mathbf{X}_f\boldsymbol{\beta} + \boldsymbol{\varepsilon}_f \), where \( \mathbf{X}_f\boldsymbol{\beta} \) is a given estimable parametric function, \( \boldsymbol{\beta} \) is the same vector of unknown parameters as in \( \mathscr{M} \), and \( \boldsymbol{\varepsilon}_f \) is an \( m \times 1 \) random error vector associated with the new observations; the joint covariance matrix of \( (\mathbf{y},\, \mathbf{y}_f) \) is partitioned as \( \begin{pmatrix} \mathbf{V} & \mathbf{V}_{12} \\ \mathbf{V}_{21} & \mathbf{V}_{22} \end{pmatrix} \). A linear predictor \( \mathbf{A}\mathbf{y} \) is said to be unbiased for \( \mathbf{y}_f \) if \( E(\mathbf{A}\mathbf{y}) = E(\mathbf{y}_f) \) for all \( \boldsymbol{\beta} \); when the expectation \( \mathbf{X}_f\boldsymbol{\beta} \) is estimable, \( \mathbf{y}_f \) is said to be unbiasedly predictable. The linear predictor \( \mathbf{A}\mathbf{y} \) is the best linear unbiased predictor (BLUP) for \( \mathbf{y}_f \) if and only if

$$ \mathbf{A}(\mathbf{X} : \mathbf{V}\mathbf{X}^{\bot}) = (\mathbf{X}_f : \mathbf{V}_{21}\mathbf{X}^{\bot}); $$

see, e.g., Isotalo and Puntanen (2006, p. 1015).

In general, BLUP is a method of estimating random effects, and the distinction between the two terms arises because it is conventional to talk about estimating fixed effects but predicting random effects. A mixed linear model can be presented as

$$ \mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \mathbf{Z}\boldsymbol{\gamma} + \boldsymbol{\varepsilon}, \quad\text{or in short}\quad \mathscr{M}_{\rm mix} = \{ \mathbf{y},\, \mathbf{X}\boldsymbol{\beta} + \mathbf{Z}\boldsymbol{\gamma},\, \mathbf{D},\, \mathbf{R} \}, $$

where \( \mathbf{X} \in \mathbf{R}^{n \times p} \) and \( \mathbf{Z} \in \mathbf{R}^{n \times q} \) are known matrices, \( \boldsymbol{\beta} \in \mathbf{R}^{p} \) is a vector of unknown fixed effects, \( \boldsymbol{\gamma} \) is an unobservable vector of \( q \) random effects with \( E(\boldsymbol{\gamma}) = \mathbf{0}_q \), \( \operatorname{cov}(\boldsymbol{\gamma}) = \mathbf{D} \), \( \operatorname{cov}(\boldsymbol{\varepsilon}) = \mathbf{R} \) and \( \operatorname{cov}(\boldsymbol{\gamma},\, \boldsymbol{\varepsilon}) = \mathbf{0} \), so that \( \operatorname{cov}(\mathbf{y}) = \boldsymbol{\Sigma} = \mathbf{Z}\mathbf{D}\mathbf{Z}' + \mathbf{R} \). Then \( \mathbf{B}\mathbf{y} \) is the BLUE of the fixed part \( \mathbf{X}\boldsymbol{\beta} \) if and only if \( \mathbf{B}(\mathbf{X} : \boldsymbol{\Sigma}\mathbf{X}^{\bot}) = (\mathbf{X} : \mathbf{0}) \), and \( \mathbf{A}\mathbf{y} \) is the BLUP for the random effects \( \boldsymbol{\gamma} \) if and only if

$$ \mathbf{A}(\mathbf{X} : \boldsymbol{\Sigma}\mathbf{X}^{\bot}) = (\mathbf{0} : \mathbf{D}\mathbf{Z}'\mathbf{X}^{\bot}); $$

these conditions can again be phrased in terms of Rao's Pandora's Box, and related discussion can be found in Christensen (2002, p. 283) and Haslett and Puntanen (2010b, 2010c). BLUP was derived by Charles Roy Henderson in 1950, but the term "best linear unbiased predictor" (or "prediction") seems not to have been used until 1962. It is a widely used method for prediction of complex traits in animal and plant breeding, and the same ideas can be used to derive the Kalman filter, the method of Kriging used for ore reserve estimation, credibility theory used to work out insurance premiums, and Hoadley's quality measurement plan used to estimate a quality index; in hydrology, for instance, BLUE theory has been used to derive the hyetograph associated with a given flood discharge \( Q \).

Two closing remarks. First, even if \( \hat{\theta} \) is an unbiased estimator of \( \theta \), \( g(\hat{\theta}) \) will generally not be an unbiased estimator of \( g(\theta) \) unless \( g \) is linear or affine. Second, "best" is always relative to a class of competing estimators, and relative efficiency makes the comparison precise: if \( \hat{\theta}_1 \) and \( \hat{\theta}_2 \) are both unbiased estimators of a parameter, we say that \( \hat{\theta}_1 \) is relatively more efficient if \( \operatorname{var}(\hat{\theta}_1) < \operatorname{var}(\hat{\theta}_2) \).
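Finally, a small Monte Carlo illustration of relative efficiency under an assumed normal model (all numbers are made-up): the sample mean and the sample median are both unbiased for \( \mu \), but the mean has the smaller variance, with the variance ratio approaching \( 2/\pi \approx 0.64 \) for large samples.

```python
# Relative efficiency illustration under an assumed i.i.d. normal model: the
# sample mean and the sample median are both unbiased for mu, but the mean has
# the smaller variance (ratio roughly 2/pi for large n).
import numpy as np

rng = np.random.default_rng(4)
n, mu, trials = 25, 3.0, 20000
data = rng.normal(mu, 1.0, size=(trials, n))

means = data.mean(axis=1)
medians = np.median(data, axis=1)

print(means.mean(), medians.mean())      # both close to mu = 3.0
print(means.var(), medians.var())        # var(mean) < var(median)
print(means.var() / medians.var())       # relative efficiency, about 0.64
```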