


Min-max theorem

Variational characterization of eigenvalues of compact Hermitian operators on Hilbert spaces

In linear algebra and functional analysis, the min-max theorem, or variational theorem, or Courant–Fischer–Weyl min-max principle, is a result that gives a variational characterization of eigenvalues of compact Hermitian operators on Hilbert spaces. It can be viewed as the starting point of many results of a similar nature.

This article first discusses the finite-dimensional case and its applications before considering compact operators on infinite-dimensional Hilbert spaces. We will see that for compact operators, the proof of the main theorem uses essentially the same idea as the finite-dimensional argument.

In the case that the operator is non-Hermitian, the theorem provides an equivalent characterization of the associated singular values. The min-max theorem can be extended to self-adjoint operators that are bounded below.

Matrices

Let A be an n × n Hermitian matrix. As with many other variational results on eigenvalues, one considers the Rayleigh–Ritz quotient RA : Cn \ {0} → R defined by

{\displaystyle R_{A}(x)={\frac {(Ax,x)}{(x,x)}}}

where (⋅, ⋅) denotes the Euclidean inner product on Cn. Clearly, the Rayleigh quotient of an eigenvector is its associated eigenvalue. Equivalently, the Rayleigh–Ritz quotient can be replaced by

{\displaystyle f(x)=(Ax,x),\;\|x\|=1.}

For Hermitian matrices A, the range of the continuous function RA(x), or f(x), is a compact subset [a, b] of the real line. The maximum b and the minimum a are the largest and smallest eigenvalue of A, respectively. The min-max theorem is a refinement of this fact.
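This behavior is easy to check numerically. The sketch below (in plain Python; the matrix A = [[2, 1], [1, 2]] is a hypothetical example chosen for illustration, with eigenvalues 1 and 3, not a matrix from the article) evaluates the Rayleigh quotient at eigenvectors and at arbitrary vectors:

```python
import math

# Hypothetical 2x2 real symmetric (hence Hermitian) matrix,
# chosen so its eigenvalues are 1 and 3.
A = [[2.0, 1.0], [1.0, 2.0]]

def rayleigh(A, x):
    """Rayleigh quotient R_A(x) = (Ax, x) / (x, x) for a real symmetric A."""
    n = len(x)
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    num = sum(Ax[i] * x[i] for i in range(n))
    den = sum(xi * xi for xi in x)
    return num / den

# Eigenvectors (1, 1) and (1, -1) give the extreme eigenvalues 3 and 1.
assert math.isclose(rayleigh(A, [1.0, 1.0]), 3.0)
assert math.isclose(rayleigh(A, [1.0, -1.0]), 1.0)

# Every other nonzero vector lands between the extremes: range is [a, b] = [1, 3].
for x in [[1.0, 0.0], [0.3, -0.7], [2.0, 5.0]]:
    assert 1.0 <= rayleigh(A, x) <= 3.0
```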

Min-max theorem

Let A be an n × n Hermitian matrix with eigenvalues λ1 ≤ ... ≤ λk ≤ ... ≤ λn. Then

{\displaystyle \lambda _{k}=\min _{U}\{\max _{x}\{R_{A}(x)\mid x\in U{\text{ and }}x\neq 0\}\mid \dim(U)=k\}}

and

{\displaystyle \lambda _{k}=\max _{U}\{\min _{x}\{R_{A}(x)\mid x\in U{\text{ and }}x\neq 0\}\mid \dim(U)=n-k+1\}}

in particular,

{\displaystyle \lambda _{1}\leq R_{A}(x)\leq \lambda _{n}\quad \forall x\in \mathbf {C} ^{n}\backslash \{0\}}

and these bounds are attained when x is an eigenvector of the appropriate eigenvalues.

Also, the simpler formulation for the maximal eigenvalue λn is given by:

{\displaystyle \lambda _{n}=\max\{R_{A}(x):x\neq 0\}.}

Similarly, the minimal eigenvalue λ 1 is given by:

{\displaystyle \lambda _{1}=\min\{R_{A}(x):x\neq 0\}.}
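The first min-max formula can be probed numerically. The sketch below (pure Python; the diagonal matrix diag(1, 2, 3) and the sampling scheme are illustrative assumptions, not taken from the article) tests λ2 = 2: for each sampled 2-dimensional subspace it computes the inner maximum of the Rayleigh quotient, which equals the top eigenvalue of the compressed 2 × 2 matrix, and checks that this never drops below λ2, while the coordinate subspace span{e1, e2} attains it:

```python
import math, random

random.seed(0)  # deterministic sampling for the illustration

# Diagonal (hence Hermitian) example matrix with eigenvalues 1 <= 2 <= 3.
diag = [1.0, 2.0, 3.0]

def max_eig_2x2(a, b, d):
    """Largest eigenvalue of the symmetric 2x2 matrix [[a, b], [b, d]]."""
    return 0.5 * ((a + d) + math.sqrt((a - d) ** 2 + 4 * b * b))

def max_rayleigh_on_span(u, v):
    """Max of R_A over span{u, v}: top eigenvalue of the compressed 2x2 matrix."""
    # Orthonormalize u, v (Gram-Schmidt).
    nu = math.sqrt(sum(c * c for c in u))
    u = [c / nu for c in u]
    proj = sum(ui * vi for ui, vi in zip(u, v))
    w = [vi - proj * ui for ui, vi in zip(u, v)]
    nw = math.sqrt(sum(c * c for c in w))
    w = [c / nw for c in w]
    # Entries of the compression Q^T A Q for diagonal A, Q = [u w].
    a = sum(d * ui * ui for d, ui in zip(diag, u))
    b = sum(d * ui * wi for d, ui, wi in zip(diag, u, w))
    dd = sum(d * wi * wi for d, wi in zip(diag, w))
    return max_eig_2x2(a, b, dd)

# Over random 2-dimensional subspaces the inner max never drops below lambda_2 ...
samples = [max_rayleigh_on_span([random.gauss(0, 1) for _ in range(3)],
                                [random.gauss(0, 1) for _ in range(3)])
           for _ in range(200)]
assert all(s >= 2.0 - 1e-9 for s in samples)

# ... and span{e1, e2} attains it, so the min over subspaces equals lambda_2 = 2.
assert math.isclose(max_rayleigh_on_span([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]), 2.0)
```

Random sampling of course cannot search all subspaces; it only illustrates the one-sided inequality, with the attaining subspace supplied explicitly.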

Proof

Since the matrix A is Hermitian it is diagonalizable and we can choose an orthonormal basis of eigenvectors {u1, ..., un}, that is, ui is an eigenvector for the eigenvalue λi and such that (ui, ui) = 1 and (ui, uj) = 0 for all i ≠ j.

If U is a subspace of dimension k then its intersection with the subspace span{uk, ..., un} isn't zero, for if it were, then the dimension of the span of the two subspaces would be {\displaystyle k+(n-k+1)=n+1}, which is impossible. Hence there exists a vector v ≠ 0 in this intersection that we can write as

{\displaystyle v=\sum _{i=k}^{n}\alpha _{i}u_{i}}

and whose Rayleigh quotient is

{\displaystyle R_{A}(v)={\frac {\sum _{i=k}^{n}\lambda _{i}|\alpha _{i}|^{2}}{\sum _{i=k}^{n}|\alpha _{i}|^{2}}}\geq \lambda _{k}}

(as all {\displaystyle \lambda _{i}\geq \lambda _{k}} for i = k, ..., n) and hence

{\displaystyle \max\{R_{A}(x)\mid x\in U\}\geq \lambda _{k}}

Since this is true for all U, we can conclude that

{\displaystyle \min\{\max\{R_{A}(x)\mid x\in U{\text{ and }}x\neq 0\}\mid \dim(U)=k\}\geq \lambda _{k}}

This is one inequality. To establish the other inequality, choose the specific k-dimensional space V = span{u1, ..., uk}, for which

{\displaystyle \max\{R_{A}(x)\mid x\in V{\text{ and }}x\neq 0\}\leq \lambda _{k}}

because {\displaystyle \lambda _{k}} is the largest eigenvalue in V. Therefore, also

{\displaystyle \min\{\max\{R_{A}(x)\mid x\in U{\text{ and }}x\neq 0\}\mid \dim(U)=k\}\leq \lambda _{k}}

To get the other formula, consider the Hermitian matrix {\displaystyle A'=-A}, whose eigenvalues in increasing order are {\displaystyle \lambda '_{k}=-\lambda _{n-k+1}}. Applying the result just proved,

{\displaystyle {\begin{aligned}-\lambda _{n-k+1}=\lambda '_{k}&=\min\{\max\{R_{A'}(x)\mid x\in U\}\mid \dim(U)=k\}\\&=\min\{\max\{-R_{A}(x)\mid x\in U\}\mid \dim(U)=k\}\\&=-\max\{\min\{R_{A}(x)\mid x\in U\}\mid \dim(U)=k\}\end{aligned}}}

The result follows on replacing {\displaystyle k} with {\displaystyle n-k+1}.

Counterexample in the non-Hermitian case

Let N be the nilpotent matrix

[ 0 1 0 0 ] . {\displaystyle {\begin{bmatrix}0&1\\0&0\end{bmatrix}}.}

Define the Rayleigh quotient {\displaystyle R_{N}(x)} exactly as above in the Hermitian case. Then it is easy to see that the only eigenvalue of N is zero, while the maximum value of the Rayleigh quotient is 1/2. That is, the maximum value of the Rayleigh quotient is larger than the maximum eigenvalue.
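For real x = (a, b) we have Nx = (b, 0), so the quotient is ab/(a² + b²), which is at most 1/2 by the AM-GM inequality and equals 1/2 when a = b. A minimal numerical check (the grid scan is an illustrative assumption, not part of the article):

```python
import math

# The nilpotent matrix N = [[0, 1], [0, 0]] from the text.
def rayleigh_N(a, b):
    """R_N(x) for real x = (a, b): (Nx, x)/(x, x) with Nx = (b, 0)."""
    return (a * b) / (a * a + b * b)

# The only eigenvalue of N is 0, yet the Rayleigh quotient reaches 1/2 ...
assert math.isclose(rayleigh_N(1.0, 1.0), 0.5)

# ... and a coarse grid scan over real vectors stays at or below 1/2.
grid = [i / 100 for i in range(-100, 101)]
best = max(rayleigh_N(a, b) for a in grid for b in grid if a or b)
assert best <= 0.5 + 1e-12
```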

Applications

Min-max principle for singular values

The singular values {σk} of a square matrix M are the square roots of the eigenvalues of M*M (equivalently MM*). An immediate consequence[citation needed] of the first equality in the min-max theorem is:

{\displaystyle \sigma _{k}^{\uparrow }=\min _{S:\dim(S)=k}\max _{x\in S,\|x\|=1}(M^{*}Mx,x)^{\frac {1}{2}}=\min _{S:\dim(S)=k}\max _{x\in S,\|x\|=1}\|Mx\|.}

Similarly,

{\displaystyle \sigma _{k}^{\uparrow }=\max _{S:\dim(S)=n-k+1}\min _{x\in S,\|x\|=1}\|Mx\|.}

Here {\displaystyle \sigma _{k}=\sigma _{k}^{\uparrow }} denotes the kth entry in the increasing sequence of σ's, so that {\displaystyle \sigma _{1}\leq \sigma _{2}\leq \cdots }.
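The two characterizations — square roots of the eigenvalues of M*M, and the extremal values of ||Mx|| over unit vectors — can be compared directly on a small example. The sketch below uses the hypothetical matrix M = [[3, 0], [4, 5]] (chosen so M^T M has eigenvalues 5 and 45, hence singular values √5 and 3√5; not a matrix from the article):

```python
import math

# Hypothetical real example matrix; M* = M^T here.
M = [[3.0, 0.0], [4.0, 5.0]]

# Form M^T M.
MtM = [[sum(M[k][i] * M[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]

# Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, d]], increasing order.
a, b, d = MtM[0][0], MtM[0][1], MtM[1][1]
disc = math.sqrt((a - d) ** 2 + 4 * b * b)
eigs = [0.5 * ((a + d) - disc), 0.5 * ((a + d) + disc)]

# Singular values are the square roots of these eigenvalues.
sigmas = [math.sqrt(e) for e in eigs]
assert math.isclose(sigmas[0], math.sqrt(5.0))
assert math.isclose(sigmas[1], 3.0 * math.sqrt(5.0))

# Cross-check the largest one against max ||Mx|| over unit vectors (coarse scan).
best = 0.0
for i in range(3600):
    t = i * math.pi / 1800
    c, s = math.cos(t), math.sin(t)
    y0 = M[0][0] * c + M[0][1] * s   # first component of Mx
    y1 = M[1][0] * c + M[1][1] * s   # second component of Mx
    best = max(best, math.hypot(y0, y1))
assert abs(best - sigmas[1]) < 1e-3
```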

Cauchy interlacing theorem

Let A be a symmetric n × n matrix. The m × m matrix B, where m ≤ n, is called a compression of A if there exists an orthogonal projection P onto a subspace of dimension m such that PAP* = B. The Cauchy interlacing theorem states:

Theorem. If the eigenvalues of A are α1 ≤ ... ≤ αn, and those of B are β1 ≤ ... ≤ βj ≤ ... ≤ βm, then for all j ≤ m,
{\displaystyle \alpha _{j}\leq \beta _{j}\leq \alpha _{n-m+j}.}

This can be proven using the min-max principle. Let βj have corresponding eigenvector bj and Sj be the j-dimensional subspace Sj = span{b1, ..., bj}; then

{\displaystyle \beta _{j}=\max _{x\in S_{j},\|x\|=1}(Bx,x)=\max _{x\in S_{j},\|x\|=1}(PAP^{*}x,x)\geq \min _{S_{j}}\max _{x\in S_{j},\|x\|=1}(A(P^{*}x),P^{*}x)=\alpha _{j}.}

According to the first part of min-max, αj ≤ βj. On the other hand, if we define S m−j+1 = span{bj, ..., bm}, then

{\displaystyle \beta _{j}=\min _{x\in S_{m-j+1},\|x\|=1}(Bx,x)=\min _{x\in S_{m-j+1},\|x\|=1}(PAP^{*}x,x)=\min _{x\in S_{m-j+1},\|x\|=1}(A(P^{*}x),P^{*}x)\leq \alpha _{n-m+j},}

where the last inequality is given by the second part of min-max.

When n − m = 1, we have αj ≤ βj ≤ α j+1, hence the name interlacing theorem.
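The interlacing inequalities can be verified on a concrete compression. The sketch below (an illustrative example, not from the article) takes the symmetric tridiagonal matrix with 2 on the diagonal and 1 off it, whose eigenvalues are 2 − √2, 2, 2 + √2, and compresses it onto the first two coordinates, so B is the top-left 2 × 2 block with eigenvalues 1 and 3:

```python
import math

# Hypothetical 3x3 symmetric tridiagonal matrix with closed-form eigenvalues
# 2 - sqrt(2) <= 2 <= 2 + sqrt(2).
A = [[2.0, 1.0, 0.0],
     [1.0, 2.0, 1.0],
     [0.0, 1.0, 2.0]]
alphas = [2.0 - math.sqrt(2.0), 2.0, 2.0 + math.sqrt(2.0)]

# Compress A onto the first two coordinates: B = PAP* is the top-left 2x2 block.
B = [row[:2] for row in A[:2]]

# Eigenvalues of the symmetric 2x2 block, in increasing order: 1 and 3.
a, b, d = B[0][0], B[0][1], B[1][1]
disc = math.sqrt((a - d) ** 2 + 4 * b * b)
betas = [0.5 * ((a + d) - disc), 0.5 * ((a + d) + disc)]

# Interlacing with n = 3, m = 2: alpha_j <= beta_j <= alpha_{j+1}.
for j in range(2):
    assert alphas[j] <= betas[j] + 1e-12
    assert betas[j] <= alphas[j + 1] + 1e-12
```

Here n − m = 1, so each βj is pinned between consecutive αj: 2 − √2 ≤ 1 ≤ 2 ≤ 3 ≤ 2 + √2.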

Compact operators

Let A be a compact, Hermitian operator on a Hilbert space H. Recall that the spectrum of such an operator (the set of eigenvalues) is a set of real numbers whose only possible cluster point is zero. It is thus convenient to list the positive eigenvalues of A as

{\displaystyle \cdots \leq \lambda _{k}\leq \cdots \leq \lambda _{1},}

where entries are repeated with multiplicity, as in the matrix case. (To emphasize that the sequence is decreasing, we may write {\displaystyle \lambda _{k}=\lambda _{k}^{\downarrow }}.) When H is infinite-dimensional, the above sequence of eigenvalues is necessarily infinite. We now apply the same reasoning as in the matrix case. Letting Sk ⊂ H be a k-dimensional subspace, we can obtain the following theorem.

Theorem (Min-Max). Let A be a compact, self-adjoint operator on a Hilbert space H, whose positive eigenvalues are listed in decreasing order ... ≤ λk ≤ ... ≤ λ1. Then:
{\displaystyle {\begin{aligned}\max _{S_{k}}\min _{x\in S_{k},\|x\|=1}(Ax,x)&=\lambda _{k}^{\downarrow },\\\min _{S_{k-1}}\max _{x\in S_{k-1}^{\perp },\|x\|=1}(Ax,x)&=\lambda _{k}^{\downarrow }.\end{aligned}}}

A similar pair of equalities holds for negative eigenvalues.

Proof

Let S' be the closure of the linear span {\displaystyle S'=\operatorname {span} \{u_{k},u_{k+1},\ldots \}}. The subspace S' has codimension k − 1. By the same dimension count argument as in the matrix case, S' ∩ Sk contains a nonzero vector. So there exists x ∈ S' ∩ Sk with {\displaystyle \|x\|=1}. Since it is an element of S', such an x necessarily satisfies

{\displaystyle (Ax,x)\leq \lambda _{k}.}

Therefore, for all Sk

{\displaystyle \inf _{x\in S_{k},\|x\|=1}(Ax,x)\leq \lambda _{k}}

But A is compact, therefore the function f(x) = (Ax, x) is weakly continuous. Furthermore, any bounded set in H is weakly compact. This lets us replace the infimum by a minimum:

{\displaystyle \min _{x\in S_{k},\|x\|=1}(Ax,x)\leq \lambda _{k}.}

So

{\displaystyle \sup _{S_{k}}\min _{x\in S_{k},\|x\|=1}(Ax,x)\leq \lambda _{k}.}

Because equality is achieved when {\displaystyle S_{k}=\operatorname {span} \{u_{1},\ldots ,u_{k}\}},

{\displaystyle \max _{S_{k}}\min _{x\in S_{k},\|x\|=1}(Ax,x)=\lambda _{k}.}

This is the first part of the min-max theorem for compact self-adjoint operators.

Analogously, consider now a (k − 1)-dimensional subspace S k−1, whose orthogonal complement is denoted by S k−1⊥. If S' = span{u1, ..., uk},

{\displaystyle S'\cap S_{k-1}^{\perp }\neq \{0\}.}

So

{\displaystyle \exists \,x\in S_{k-1}^{\perp },\ \|x\|=1,\ (Ax,x)\geq \lambda _{k}.}

This implies

{\displaystyle \max _{x\in S_{k-1}^{\perp },\|x\|=1}(Ax,x)\geq \lambda _{k}}

where the compactness of A was applied. Indexing the above by the collection of (k − 1)-dimensional subspaces gives

{\displaystyle \inf _{S_{k-1}}\max _{x\in S_{k-1}^{\perp },\|x\|=1}(Ax,x)\geq \lambda _{k}.}

Pick S k−1 = span{u1, ..., u k−1} and we deduce

{\displaystyle \min _{S_{k-1}}\max _{x\in S_{k-1}^{\perp },\|x\|=1}(Ax,x)=\lambda _{k}.}

Self-adjoint operators

The min-max theorem also applies to (possibly unbounded) self-adjoint operators.[1][2] Recall the essential spectrum is the spectrum without isolated eigenvalues of finite multiplicity. Sometimes we have some eigenvalues below the essential spectrum, and we would like to approximate the eigenvalues and eigenfunctions.

Theorem (Min-Max). Let A be self-adjoint, and let {\displaystyle E_{1}\leq E_{2}\leq E_{3}\leq \cdots } be the eigenvalues of A below the essential spectrum. Then

{\displaystyle E_{n}=\min _{\psi _{1},\ldots ,\psi _{n}}\max\{\langle \psi ,A\psi \rangle :\psi \in \operatorname {span} (\psi _{1},\ldots ,\psi _{n}),\,\|\psi \|=1\}}.

If we only have N eigenvalues and hence run out of eigenvalues, then we let {\displaystyle E_{n}:=\inf \sigma _{ess}(A)} (the bottom of the essential spectrum) for n > N, and the above statement holds after replacing min-max with inf-sup.

Theorem (Max-Min). Let A be self-adjoint, and let {\displaystyle E_{1}\leq E_{2}\leq E_{3}\leq \cdots } be the eigenvalues of A below the essential spectrum. Then

{\displaystyle E_{n}=\max _{\psi _{1},\ldots ,\psi _{n-1}}\min\{\langle \psi ,A\psi \rangle :\psi \perp \psi _{1},\ldots ,\psi _{n-1},\,\|\psi \|=1\}}.

If we only have N eigenvalues and hence run out of eigenvalues, then we let {\displaystyle E_{n}:=\inf \sigma _{ess}(A)} (the bottom of the essential spectrum) for n > N, and the above statement holds after replacing max-min with sup-inf.

The proofs[1][2] use the following results about self-adjoint operators:

Theorem. Let A be self-adjoint. Then {\displaystyle (A-E)\geq 0} for {\displaystyle E\in \mathbb {R} } if and only if {\displaystyle \sigma (A)\subseteq [E,\infty )}.[1]: 77
Theorem. If A is self-adjoint, then

{\displaystyle \inf \sigma (A)=\inf _{\psi \in {\mathfrak {D}}(A),\|\psi \|=1}\langle \psi ,A\psi \rangle }

and

{\displaystyle \sup \sigma (A)=\sup _{\psi \in {\mathfrak {D}}(A),\|\psi \|=1}\langle \psi ,A\psi \rangle }.[1]: 77

See also

  • Courant minimax principle
  • Max–min inequality

References

  1. ^ a b c d G. Teschl, Mathematical Methods in Quantum Mechanics (GSM 99) https://www.mat.univie.ac.at/~gerald/ftp/book-schroe/schroe.pdf
  2. ^ a b Lieb; Loss (2001). Analysis. GSM. Vol. 14 (2nd ed.). Providence: American Mathematical Society. ISBN 0-8218-2783-9.
  • M. Reed and B. Simon, Methods of Modern Mathematical Physics IV: Analysis of Operators, Academic Press, 1978.


Source: https://en.wikipedia.org/wiki/Min-max_theorem
