Article

Spectral Distribution and Numerical Methods for Rational Eigenvalue Problems

1 Department of Mathematics, Taizhou University, Taizhou 225300, China
2 Department of Mathematics, Southeast University, Nanjing 210096, China
3 Department of Applied Mathematics, Nanjing Forestry University, Nanjing 210037, China
* Author to whom correspondence should be addressed.
Submission received: 12 April 2022 / Revised: 10 June 2022 / Accepted: 15 June 2022 / Published: 20 June 2022

Abstract: Rational eigenvalue problems (REPs) have important applications in engineering and have attracted increasing attention in recent years. Based on the theory of low-rank modification, we discuss the spectral properties and distribution of symmetric rational eigenvalue problems, and present two numerical iteration methods for these REPs. Numerical experiments demonstrate the effectiveness of the proposed methods.

1. Introduction

We consider the following rational eigenvalue problem (REP)

R(λ)x = 0,   (1)

where R(λ) ∈ ℝ^{n×n} is a rational matrix function of the scalar parameter λ ∈ ℝ; details on the degree of R(λ) can be found in [1]. As is well known, λ is an eigenvalue of problem (1) if and only if it satisfies the characteristic equation det(R(λ)) = 0, where det(·) denotes the determinant. The nonzero vector x ∈ ℝ^n is called the eigenvector corresponding to λ, and the pair (λ, x) an eigenpair of the REP (1). The REP arises in a wide variety of applications, including vibration of fluid–solid structures [2], optimization of acoustic emissions of high-speed trains [3], free vibration of plates with elastically attached masses [4], free vibrations of a structure with a viscoelastic constitutive relation describing the material behavior [5,6], electronic structure calculations of quantum dots [7,8], and so on.
More precisely, in this paper we consider R(λ) of the form

R(λ) = P(λ) + Σ_{i=1}^{N} (s_i(λ)/q_i(λ)) B_i,   (2)

where P(λ) = λ^d A_d + ⋯ + λ A_1 + A_0 is a matrix polynomial, s_i(λ) and q_i(λ) are coprime scalar polynomials of degrees m_i and n_i with m_i < n_i, respectively, and A_i, B_i ∈ ℝ^{n×n} are constant matrices.
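As a concrete illustration of the structure in (2), the sketch below (Python with NumPy; an illustrative re-implementation, not code from the paper, whose experiments use Matlab) assembles R(λ) from polynomial coefficient matrices and rational terms. The helper name `eval_R` and the data layout are our own assumptions.

```python
import numpy as np

def eval_R(lam, poly_coeffs, rat_terms):
    """Evaluate R(lam) = sum_i lam^i * A_i + sum_j (s_j(lam)/q_j(lam)) * B_j.

    poly_coeffs: [A_0, A_1, ..., A_d], each an (n, n) array.
    rat_terms:   list of (s, q, B) with scalar callables s, q and an (n, n) array B.
    """
    # polynomial part P(lam) = sum_i lam^i * A_i
    R = sum(lam**i * A for i, A in enumerate(poly_coeffs))
    # rational part: add each s_j(lam)/q_j(lam) * B_j
    for s, q, B in rat_terms:
        R = R + (s(lam) / q(lam)) * B
    return R
```

For instance, with `poly_coeffs = [I, I]` (so P(λ) = I + λI) and one rational term s(λ) = λ, q(λ) = λ + 1, evaluating at λ = 2 gives (1 + 2 + 2/3)·I.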
At present, there are mainly three types of numerical methods to compute the eigenvalues of the REP (1). The first type solves the REP via a brute-force approach: multiplying both sides of (2) by all the scalar polynomials q_i(λ) results in a polynomial eigenvalue problem (PEP) of degree d + Σ_{i=1}^{N} n_i. Nevertheless, these methods are not efficient enough for large-scale problems, especially when Σ_{i=1}^{N} n_i is not small, and the resulting PEP has additional spurious eigenvalues not present in the original REP (1). The second type linearizes the REP into a PEP with specific techniques. For example, Su and Bai [9] presented a linearization-based method that converts the REP into a well-studied PEP while preserving the structures and properties of the original REP, and Dopico and González-Pizarro [10] proposed a compact rational Krylov method for large-scale REPs. The third type treats the REP as a general nonlinear eigenvalue problem and solves it with a nonlinear eigensolver; see, e.g., [11,12,13,14,15,16,17,18,19,20,21,22,23,24]. Although the abovementioned methods can solve the REP well, they ignore some structures and properties of the original rational eigenvalue problem. To overcome this disadvantage, it is necessary to first study the spectral properties and distribution of the REP, and then put forward effective numerical methods based on these properties. Moreover, there are engineering applications that require computing only the eigenvalues lying within a given interval [5]. Therefore, in this paper we focus on numerical methods for computing eigenpairs of the REP in an interval.
The rest of the paper is organized as follows. Section 2 briefly introduces some preliminary results. Section 3 discusses the spectral properties and the distribution of the rational eigenvalues. In Section 4, we develop two numerical methods for solving the symmetric REP based on the spectral properties. Some numerical examples are given to show the effectiveness of the proposed methods in Section 5. Finally, we give some concluding remarks in Section 6.
For convenience, we use the following notation: I denotes the identity matrix of suitable size, and e_j denotes the j-th column of I. The superscript T denotes the transpose of a vector or matrix. ‖·‖ denotes the Euclidean norm of a vector or matrix, and ‖·‖_F denotes the Frobenius norm of a matrix. ⟨x, y⟩ denotes the inner product of the vectors x and y.

2. Preliminaries

In this section, to facilitate the theoretical analysis and to obtain the main results for the rational eigenvalue problems (1) and (2), we state the following assumptions.
Hypothesis 1 (H1).
The coefficient matrices A_i, i = 1, …, d, are symmetric positive definite, and A_0 is symmetric.
Hypothesis 2 (H2).
The matrices B_i, i = 1, 2, …, N, are low-rank symmetric positive semidefinite matrices.
Hypothesis 3 (H3).
The rational functions r_i(λ) = s_i(λ)/q_i(λ), i = 1, 2, …, N, are monotonically increasing functions of λ on the intervals separated by the zeros of the polynomials q_i(λ).
Remark 1.
The following example provides an intuitive illustration of assumption (H3).
Example 1.
Assume that r_i(λ) = a_1/(λ_1 − λ) + a_2/(λ_2 − λ) + ⋯ + a_k/(λ_k − λ), where λ_1 < λ_2 < ⋯ < λ_k and a_1, a_2, …, a_k > 0. We easily obtain r_i′(λ) = a_1/(λ_1 − λ)² + a_2/(λ_2 − λ)² + ⋯ + a_k/(λ_k − λ)² > 0. Therefore, r_i(λ) is a monotonically increasing function of λ on each of the intervals (−∞, λ_1), (λ_j, λ_{j+1}) for j = 1, 2, …, k − 1, and (λ_k, +∞).
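The monotonicity in Example 1 is easy to confirm numerically. The sketch below (illustrative data a_j and poles λ_j of our own choosing) samples r(λ) inside one pole-free interval and checks that the values increase.

```python
import numpy as np

# r(lam) = sum_j a_j / (lam_j - lam), with poles lam_j and weights a_j > 0
def r(lam, a, poles):
    return np.sum(a / (poles - lam))

a = np.array([1.0, 2.0, 0.5])      # hypothetical positive weights
poles = np.array([1.0, 3.0, 5.0])  # hypothetical poles lam_1 < lam_2 < lam_3

# sample inside the pole-free interval (1, 3) and check monotone increase
grid = np.linspace(1.2, 2.8, 50)
vals = [r(t, a, poles) for t in grid]
assert np.all(np.diff(vals) > 0)
```

The same check passes on any interval between consecutive poles, since r′(λ) = Σ_j a_j/(λ_j − λ)² > 0 there.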
For the REP (1) and (2), in this paper we only discuss the eigenvalue distribution on the positive real semi-axis, namely J ⊆ ℝ_+. Assume that all zeros of the q_i(λ), i = 1, 2, …, N, on the positive real semi-axis are arranged in the following order:

inf J ≜ σ_0 < σ_1 ≤ ⋯ ≤ σ_L < σ_{L+1} ≜ sup J.

Set J_l = (σ_{l−1}, σ_l) for l = 1, …, L + 1. Then J = ∪_{l=1}^{L+1} J_l = ℝ_+ \ {σ_i}_{i=1}^{L}, and J_l ∩ J_k = ∅ if l ≠ k.
As long as the matrices A_i and B_j are symmetric for all i = 0, 1, …, d and j = 1, 2, …, N, we can define a Rayleigh functional p(x) for the REP. That is, p(x) is the Rayleigh functional of R(λ) if it satisfies

⟨R(p(x))x, x⟩ = 0.   (3)

Notice that in the linear case R(λ) = λI − A, it reduces exactly to the Rayleigh quotient. Let f(λ, x) = ⟨R(λ)x, x⟩; then p(x) is a root of f(λ, x) = 0. Because the A_i, i = 1, …, d, are positive definite, the B_i are positive semidefinite, and the r_i(λ) are monotonically increasing, it is easy to verify that f(λ, x) is a monotonically increasing function of λ on the intervals separated by the zeros of the polynomials q_i(λ). Therefore,

(λ − p(x)) f(λ, x) > 0   (4)

for λ ≠ p(x).
For each fixed λ ∈ J, we consider the following standard eigenvalue problem (SEP):

R(λ)x = μx.   (5)
We can easily see that if λ is an eigenvalue of the REP (1) and (2), then μ = 0 is an eigenvalue of the above SEP (5); the converse also holds. Therefore, the eigenvalues of the REP (1) and (2) can be characterized as the zero eigenvalues of the SEP (5). For the standard eigenvalue problem, we have the minmax principle

μ_j = sup_{V ∈ H_j} min_{x ∈ V\{0}} ⟨R(λ)x, x⟩ / ⟨x, x⟩,   (6)

where H_j denotes the set of j-dimensional subspaces of ℝ^n. Similarly, we have the minmax principle for the REP

λ_j = min_{V ∈ H_j, V ∩ D ≠ ∅} sup_{x ∈ V ∩ D} p(x),

where D denotes the domain of the Rayleigh functional p(x) satisfying (3). For a more detailed discussion, see, e.g., [17,18].

3. Spectral Properties and Distribution of the REPs

According to assumption (H2), the matrices B_i are of low rank. Hence, the REP (2) can be regarded as a low-rank perturbation of the following PEP [25]:

P̃(λ)x = 0,   (7)

where

P̃(λ) = P(λ) + Σ_{i=1}^{N} (s_i(κ)/q_i(κ)) B_i

with κ ∈ J_l and l = 1, 2, …, L + 1.
For any l ∈ {1, 2, …, L + 1} and any value κ in the interval J_l, the REP (1) and (2) has the corresponding PEP (7). Assume that the PEP (7) has an eigenvalue μ in the interval J_l. We will prove that the corresponding eigenvalue λ of the REP (1) and (2) lies between κ and μ. Before stating this theorem, we first give some lemmas.
Lemma 1.
Assume that s(λ) and q(λ) denote an arbitrary pair of the scalar polynomials s_i(λ) and q_i(λ), i = 1, 2, …, N. For any λ_1, λ_2 ∈ J_l with l = 1, 2, …, L + 1, we have

s(λ_1) q(λ_2) − q(λ_1) s(λ_2) = (λ_1 − λ_2) g(λ_1, λ_2),

where g(λ_1, λ_2) > 0, with a_i and b_j denoting the coefficients of s(λ) and q(λ), respectively.
Proof. 
Without loss of generality, let s(λ) = Σ_{i=0}^{m} a_i λ^i and q(λ) = Σ_{j=0}^{n} b_j λ^j. Hence,

s(λ_1) q(λ_2) − q(λ_1) s(λ_2) = Σ_{i=0}^{m} a_i λ_1^i Σ_{j=0}^{n} b_j λ_2^j − Σ_{i=0}^{m} a_i λ_2^i Σ_{j=0}^{n} b_j λ_1^j = Σ_{i=0}^{m} Σ_{j=0}^{n} a_i b_j (λ_1^i λ_2^j − λ_1^j λ_2^i).

Let

g(λ_1, λ_2) = Σ_{i≠j} sgn(i − j) a_i b_j (λ_1 λ_2)^{min(i,j)} Σ_{k=0}^{|i−j|−1} λ_1^k λ_2^{|i−j|−k−1},

and we have s(λ_1) q(λ_2) − q(λ_1) s(λ_2) = (λ_1 − λ_2) g(λ_1, λ_2).

Let r(λ) = s(λ)/q(λ). Then

r(λ_1) − r(λ_2) = s(λ_1)/q(λ_1) − s(λ_2)/q(λ_2) = (s(λ_1) q(λ_2) − q(λ_1) s(λ_2)) / (q(λ_1) q(λ_2)) = ((λ_1 − λ_2) / (q(λ_1) q(λ_2))) g(λ_1, λ_2).

Because λ_1, λ_2 ∈ J_l, we have q(λ_1) q(λ_2) > 0. Based on assumption (H3), we then have g(λ_1, λ_2) > 0. Thus, the conclusion holds true.    □
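The identity of Lemma 1 can be checked numerically using the explicit formula for g(λ_1, λ_2) from the proof. In the sketch below, the coefficient lists for s(λ) and q(λ) are arbitrary illustrative choices; the function name `g` follows the lemma.

```python
import numpy as np

def g(l1, l2, a, b):
    """g(lam1, lam2) from the proof of Lemma 1, for s with coefficients a
    (ascending powers) and q with coefficients b."""
    total = 0.0
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i == j:
                continue  # sgn(0) = 0, so diagonal terms vanish
            m = abs(i - j)
            # inner factor sum_{k=0}^{|i-j|-1} l1^k l2^(|i-j|-k-1)
            inner = sum(l1**k * l2**(m - 1 - k) for k in range(m))
            total += np.sign(i - j) * ai * bj * (l1 * l2)**min(i, j) * inner
    return total

a = [1.0, 2.0, 3.0]       # s(lam) = 1 + 2 lam + 3 lam^2  (illustrative)
b = [4.0, 0.5, 1.0, 2.0]  # q(lam) = 4 + 0.5 lam + lam^2 + 2 lam^3  (illustrative)
s = np.polynomial.Polynomial(a)
q = np.polynomial.Polynomial(b)
l1, l2 = 0.7, 1.9
lhs = s(l1) * q(l2) - q(l1) * s(l2)
rhs = (l1 - l2) * g(l1, l2, a, b)
```

Since the identity is purely algebraic, `lhs` and `rhs` agree for any choice of coefficients and evaluation points.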
For the PEP (7), the Rayleigh functional R_κ(x) should satisfy ⟨P̃(R_κ(x))x, x⟩ = 0, namely,

⟨P(R_κ(x))x, x⟩ = −Σ_{i=1}^{N} (s_i(κ)/q_i(κ)) ⟨B_i x, x⟩.   (8)
Lemma 2.
Assume that κ ∈ J_l and that there exists a vector x ∈ H^1 such that R_κ(x) ∈ J_l, where l = 1, 2, …, L + 1 and H^1 = {x : ⟨x, x⟩ = 1}. Then x ∈ D_l and

min(κ, R_κ(x)) ≤ p_l(x) ≤ max(κ, R_κ(x)),

where D_l is the domain of the Rayleigh functional p_l(x) satisfying (3) in J_l.
Proof. 
With the fact that

f(R_κ(x), x) = ⟨P(R_κ(x))x, x⟩ + Σ_{i=1}^{N} (s_i(R_κ(x))/q_i(R_κ(x))) ⟨B_i x, x⟩

and the relation (8), we have

f(R_κ(x), x) = −Σ_{i=1}^{N} (s_i(κ)/q_i(κ)) ⟨B_i x, x⟩ + Σ_{i=1}^{N} (s_i(R_κ(x))/q_i(R_κ(x))) ⟨B_i x, x⟩ = Σ_{i=1}^{N} ((R_κ(x) − κ) g_i(κ, R_κ(x)) / (q_i(κ) q_i(R_κ(x)))) ⟨B_i x, x⟩.

It follows from Lemma 1 and assumption (H2) that f(R_κ(x), x) ≥ 0 if κ ≤ R_κ(x) and f(R_κ(x), x) ≤ 0 if κ ≥ R_κ(x). Similarly, it can be proved that f(κ, x) ≤ 0 if κ ≤ R_κ(x) and f(κ, x) ≥ 0 if κ ≥ R_κ(x). Finally, because f(λ, x) is continuous, we have x ∈ D_l and min(κ, R_κ(x)) ≤ p_l(x) ≤ max(κ, R_κ(x)), which completes the proof.    □
Theorem 1.
Assume that (H1)–(H3) hold, and let κ ∈ J_l with l = 1, 2, …, L + 1. Assume that μ_j ∈ J_l, where μ_j is the j-th eigenvalue of the PEP (7). Then the REP (1) and (2) has a corresponding eigenvalue λ_j ∈ J_l with min(κ, μ_j) ≤ λ_j ≤ max(κ, μ_j).
Proof. 
We first show that there exists a subspace V ∈ H_j such that

V ⊆ D_l and sup_{x ∈ V ∩ D_l} p_l(x) ≤ max(κ, μ_j).

In fact, suppose that there exist W ∈ H_j and w ∈ W \ {0} such that

μ_j = min_{V ∈ H_j, V ∩ D_l ≠ ∅} max_{x ∈ V ∩ D_l} R_κ(x) = max_{x ∈ W \ {0}} R_κ(x) = R_κ(w).

Because κ ∈ J_l and μ_j ∈ J_l, we have w ∈ D_l from Lemma 2, and then W ⊆ D_l. For any x ∈ W \ {0}, we have R_κ(x) ≤ μ_j, and therefore ⟨P(R_κ(x))x, x⟩ ≤ ⟨P(μ_j)x, x⟩. Together with (8), this gives

⟨P(μ_j)x, x⟩ + Σ_{i=1}^{N} (s_i(κ)/q_i(κ)) ⟨B_i x, x⟩ ≥ 0.

Set δ = max(κ, μ_j). Then it is easy to obtain ⟨P(δ)x, x⟩ + Σ_{i=1}^{N} (s_i(δ)/q_i(δ)) ⟨B_i x, x⟩ ≥ 0, that is, f(δ, x) ≥ 0. It follows from (4) that p_l(x) ≤ δ for any x ∈ D_l ∩ W. Hence, sup_{x ∈ W ∩ D_l} p_l(x) ≤ δ = max(κ, μ_j).

Next we show that, for any V ∈ H_j with V ∩ D_l ≠ ∅, we have

sup_{x ∈ V ∩ D_l} p_l(x) ≥ min(κ, μ_j).

We prove this by contradiction. Suppose that there exists V ∈ H_j with V ∩ D_l ≠ ∅ but sup_{x ∈ V ∩ D_l} p_l(x) < min(κ, μ_j). Let x_V ∈ V be such that R_κ(x_V) = max_{x ∈ V} R_κ(x). Then R_κ(x_V) ∉ J_l. Indeed, if R_κ(x_V) ∈ J_l, it is easy to get x_V ∈ D_l and p_l(x_V) ≥ min(κ, R_κ(x_V)). That is,

sup_{x ∈ V ∩ D_l} p_l(x) ≥ p_l(x_V) ≥ min(κ, R_κ(x_V)) ≥ min(κ, μ_j),

contradicting the fact that sup_{x ∈ V ∩ D_l} p_l(x) < min(κ, μ_j). Hence, R_κ(x_V) ∉ J_l.

Set δ = min(κ, μ_j) ≤ min(κ, R_κ(x_V)). Then δ ≤ μ_j ≤ R_κ(x_V). Moreover, because f(δ, x_V) = ⟨P(δ)x_V, x_V⟩ + Σ_{i=1}^{N} (s_i(δ)/q_i(δ)) ⟨B_i x_V, x_V⟩, it is easy to obtain

f(δ, x_V) ≤ ⟨P(R_κ(x_V))x_V, x_V⟩ + Σ_{i=1}^{N} (s_i(κ)/q_i(κ)) ⟨B_i x_V, x_V⟩ = 0.

For any x ∈ V ∩ D_l, let w(t) = t x + (1 − t) x_V and φ(t) = f(δ, w(t)). Then φ(0) = f(δ, x_V) ≤ 0. Because p_l(x) < δ, it follows from (4) that φ(1) = f(δ, x) > f(p_l(x), x) = 0. By continuity, there exists t̃ ∈ [0, 1) such that f(δ, w(t̃)) = 0. Thus, w(t̃) ∈ V ∩ D_l and p_l(w(t̃)) = δ = min(κ, μ_j), which contradicts the assumption.

To summarize, we have

λ_j = inf_{V ∈ H_j, V ∩ D_l ≠ ∅} sup_{x ∈ V ∩ D_l} p_l(x) ∈ J_l and min(κ, μ_j) ≤ λ_j ≤ max(κ, μ_j),

which completes the proof.    □
Actually, the eigenvalue μ_j of the PEP (7) is a function of κ, that is, μ_j = μ_j(κ). The following theorem establishes the continuity of the function μ_j(κ).
Theorem 2.
Assume that (H1)–(H3) hold. Then μ_j(κ) is a continuous decreasing function of κ.
Proof. 
Let P̃(λ, κ) ≜ P̃(λ). For any ε > 0, there exists δ = δ(ε) > 0 such that

‖P̃(λ, κ_1) − P̃(λ, κ_2)‖ = ‖Σ_{i=1}^{N} (s_i(κ_1)/q_i(κ_1) − s_i(κ_2)/q_i(κ_2)) B_i‖ = |κ_1 − κ_2| · ‖Σ_{i=1}^{N} (g_i(κ_1, κ_2)/(q_i(κ_1) q_i(κ_2))) B_i‖ < ε

whenever |κ_1 − κ_2| < δ. Because the eigenvalues of a matrix are continuous functions of its entries [26], μ_j(κ) is a continuous function of κ.

It follows from (8) that R_κ(x) is a root of the polynomial equation

λ^d + a_{d−1} λ^{d−1} + ⋯ + a_1 λ + a_0 = 0,

where a_i = ⟨A_i x, x⟩ > 0 for i = 1, 2, …, d − 1 and a_0 = ⟨A_0 x, x⟩ + Σ_{i=1}^{N} (s_i(κ)/q_i(κ)) ⟨B_i x, x⟩. For fixed x, the coefficients a_i with i = 1, 2, …, d − 1 remain unchanged, while, by assumption (H3), a_0 is an increasing function of κ. Hence R_κ(x) is a decreasing function of κ. Finally, by the minmax principle (6), we conclude that μ_j(κ) is a continuous decreasing function of κ, which completes the proof.    □
Theorems 1 and 2 show that if the REP (1) and (2) has an eigenvalue λ_j ∈ J_l, then there exist a value κ and an eigenvalue μ_j of the PEP (7) in the interval J_l. Conversely, if the REP (1) and (2) has no eigenvalue in the interval J_l, then the PEP (7) has no eigenvalue in this interval for any choice of κ ∈ J_l. Therefore, there exists a one-to-many relationship between λ_j and μ_j.
On the other hand, suppose that for some κ_0 ∈ J_l the PEP (7) has two unequal eigenvalues μ_j(κ_0) ≠ μ̃_j(κ_0) in J_l. If λ_j and λ̃_j are the fixed points of μ_j(κ) and μ̃_j(κ), respectively, then λ_j ≠ λ̃_j. In fact, because the eigenvalues are continuous functions of the matrix entries, the multiplicity of the original eigenvalue does not change as κ varies; that is, μ_j(κ) ≠ μ̃_j(κ) holds. In particular, taking κ = λ_j gives λ_j ≠ λ̃_j. Therefore, the correspondence between λ_j and μ_j is one-to-one.
To summarize, we obtain the following theorems.
Theorem 3.
There is a one-to-one correspondence between the eigenvalues λ_j of the REP (1) and (2) and the eigenvalues μ_j of the PEP (7) in J_l, l = 1, 2, …, L + 1.
Theorem 4.
Assume that σ_{l−1} < κ_1 < κ_2 < σ_l, and let N(κ) = max{n : μ_n(κ) ≤ κ}. Then the REP (1) and (2) has exactly N(κ_2) − N(κ_1) eigenvalues in the semi-open interval (κ_1, κ_2].
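Theorem 4 yields a simple counting procedure: evaluating N(κ) = max{n : μ_n(κ) ≤ κ} (with μ_n the increasingly ordered eigenvalues of the PEP (7) at κ) at the two endpoints counts the REP eigenvalues in (κ_1, κ_2]. The sketch below applies it to a hypothetical 2×2 REP R(λ) = λI + A_0 + (1/(σ − λ))B_1 of our own construction (not from the paper), whose two eigenvalues below the pole σ = 10 are 1 and 6 − √17 ≈ 1.8769.

```python
import numpy as np

# Hypothetical 2x2 REP satisfying (H1)-(H3): R(lam) = lam*I + A0 + r(lam)*B1
sigma = 10.0
A0 = np.diag([-1.0, -2.0])              # symmetric (H1 with d = 1, A_1 = I)
B1 = np.outer([0.0, 1.0], [0.0, 1.0])   # rank-one positive semidefinite (H2)
r = lambda lam: 1.0 / (sigma - lam)     # increasing for lam < sigma (H3)

def N(kappa):
    """N(kappa) = #{n : mu_n(kappa) <= kappa} for the PEP (7) at this kappa."""
    # with d = 1 and A_1 = I, the PEP reduces to the symmetric SEP
    #   (A0 + r(kappa)*B1) x = -mu x
    mus = np.linalg.eigvalsh(-(A0 + r(kappa) * B1))
    return int(np.sum(mus <= kappa))

# exact REP eigenvalues below sigma: 1 and 6 - sqrt(17) ~ 1.8769
count = N(3.0) - N(1.5)   # number of REP eigenvalues in (1.5, 3]
```

Here `count` equals 1, matching the single eigenvalue 6 − √17 in (1.5, 3] (the eigenvalue 1 lies outside).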

4. Numerical Methods for Solving the REPs

In this section, based on the above spectral distribution of the REP, we discuss numerical methods for solving the REP (1) and (2). Given κ ∈ J_l, we can find an eigenvalue μ of the PEP (7). The key to designing new algorithms is how to select the next value of κ. Because μ(κ) is a continuous decreasing function of κ and λ is a fixed point of μ(κ), we can choose the next κ by a fixed-point iteration. For simplicity, we first consider choosing κ by dichotomy:

κ := (κ + μ)/2.   (9)
Therefore, we derive the following numerical method (Algorithm 1) for solving the REP (1) and (2).   
Algorithm 1: Dichotomy iteration method for the REP (1) and (2)
Input: rational matrix function R(λ), the target point σ ∈ J_l, and the tolerance τ.
Output: the approximate eigenvalue λ ∈ J_l closest to σ.
  1. Set κ = σ ∈ J_l;
  2. Construct the polynomial eigenvalue problem (7);
  3. Solve the PEP (7) to get the eigenpair (μ, x) with μ closest to σ. If no such μ exists, stop; otherwise,
  4. Compute r_n = ‖R(μ)x‖ / (Σ_{i=0}^{d} |μ|^i ‖A_i‖_F + Σ_{i=1}^{N} |s_i(μ)/q_i(μ)| ‖B_i‖_F). If r_n < τ, stop and output λ = μ; otherwise,
  5. Update κ by (9) and go to step 2.
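A minimal sketch of the dichotomy iteration, for a hypothetical 2×2 REP with d = 1 and A_1 = I, so that the PEP (7) reduces to a symmetric standard eigenvalue problem. All data are illustrative assumptions, not from the paper, and the convergence test on |μ − κ| stands in for the residual r_n of step 4.

```python
import numpy as np

# Hypothetical 2x2 REP: R(lam) = lam*I + A0 + (1/(sigma - lam)) * B1
sigma = 10.0
A0 = np.diag([-1.0, -2.0])              # symmetric (H1 with d = 1, A_1 = I)
B1 = np.outer([0.0, 1.0], [0.0, 1.0])   # rank-one positive semidefinite (H2)
r = lambda lam: 1.0 / (sigma - lam)     # increasing for lam < sigma (H3)

target = 1.8     # target point in J = (-inf, sigma)
kappa = target
mu = kappa
for _ in range(200):
    # steps 2-3: the PEP (7) with d = 1, A_1 = I is the symmetric SEP
    #   (A0 + r(kappa)*B1) x = -lam x
    mus = np.linalg.eigvalsh(-(A0 + r(kappa) * B1))
    mu = mus[np.argmin(np.abs(mus - target))]   # eigenvalue closest to the target
    if abs(mu - kappa) < 1e-14:                 # proxy for the residual test of step 4
        break
    kappa = 0.5 * (kappa + mu)                  # dichotomy update (9)

# exact eigenvalue on this branch solves (lam - 2)(sigma - lam) + 1 = 0
exact = 6.0 - np.sqrt(17.0)
```

Since μ(κ) is decreasing and λ is its fixed point (Theorem 2), the averaged update contracts toward λ ≈ 1.8769.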
Remark 2.
In actual computation, for a small-scale PEP (7) the classical approach is to convert it into a generalized eigenvalue problem (GEP) via linearization, or to solve it directly with Matlab's built-in functions. For large-scale problems, the partially orthogonal projection method [27] can be adopted.
Remark 3.
Assume that x is an eigenvector of μ for the PEP (7). If λ = μ, then x is also the eigenvector corresponding to λ of the REP (1) and (2). Therefore, we can use the Rayleigh functional to accelerate the iteration by updating κ as follows:

κ := p_l(x).   (10)
Thus another numerical method (Algorithm 2) for solving the REP (1) and (2) can be summarized as follows.
Algorithm 2: Rayleigh functional iteration method for the REP (1) and (2)
Input: rational matrix function R(λ), the target point σ ∈ J_l, and the tolerance τ.
Output: the approximate eigenvalue λ ∈ J_l closest to σ.
  1. Set κ = σ ∈ J_l;
  2. Construct the polynomial eigenvalue problem (7);
  3. Solve the PEP (7) to get the eigenpair (μ, x) with μ closest to σ. If no such μ exists, stop; otherwise,
  4. Compute r_n = ‖R(μ)x‖ / (Σ_{i=0}^{d} |μ|^i ‖A_i‖_F + Σ_{i=1}^{N} |s_i(μ)/q_i(μ)| ‖B_i‖_F). If r_n < τ, stop and output λ = μ; otherwise,
  5. Update κ by (10) and go to step 2.
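The Rayleigh functional update (10) requires the root of f(λ, x) = ⟨R(λ)x, x⟩ in J_l; since f is monotonically increasing there under (H1)–(H3), simple bisection suffices. A sketch with an illustrative 2×2 REP of our own construction (the helper name `rayleigh_functional` is an assumption):

```python
import numpy as np

def rayleigh_functional(R, lam_lo, lam_hi, x, tol=1e-14):
    """Root of f(lam) = <R(lam)x, x> on (lam_lo, lam_hi) by bisection.

    Under (H1)-(H3), f is monotonically increasing on each interval J_l,
    so the root p_l(x), when it exists, is unique there."""
    f = lambda lam: x @ R(lam) @ x
    lo, hi = lam_lo, lam_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# illustrative 2x2 REP with a pole at sigma = 10 (hypothetical data)
sigma = 10.0
R = lambda lam: (lam * np.eye(2) + np.diag([-1.0, -2.0])
                 + (1.0 / (sigma - lam)) * np.outer([0.0, 1.0], [0.0, 1.0]))
p = rayleigh_functional(R, -5.0, 9.9, np.array([0.0, 1.0]))  # ~ 6 - sqrt(17)
```

For an eigenvector x of the PEP at the current κ, p_l(x) is typically a much better next κ than the average (9), which is consistent with the lower iteration counts of Algorithm 2 in Section 5.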

5. Numerical Results

In this section, we report some numerical examples to show the effectiveness of the proposed Algorithms 1 and 2. All computations are performed in Matlab (version R2019a). In our examples, λ* denotes an exact eigenvalue of the REP (1) and (2), and λ denotes an approximate eigenvalue computed by Algorithm 1 or Algorithm 2. CPU denotes the CPU time (in seconds) for computing an approximate solution, and Iter denotes the number of iteration steps. The stopping tolerance for the residual norm is chosen as τ = 10^{−12}.
Example 2
([9]). We consider the following REP:

R(λ)x = (λB − A − E − (σ/(λ − σ)) E) x = 0,   (11)

where A and B are the positive definite tridiagonal matrices

A = (1/h) · tridiag(−1, 2, −1) with its (n, n) entry replaced by 1,  B = (h/6) · tridiag(1, 4, 1) with its (n, n) entry replaced by 2,

with h = 1/n and E = e_n e_n^T.
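The matrices of Example 2 are easy to assemble. The sketch below (NumPy, an illustrative re-implementation with a small n; the paper's experiments use Matlab and n = 1000) builds A, B, and E, and the positive definiteness of A and B required by (H1) can be confirmed from their smallest eigenvalues.

```python
import numpy as np

n = 50                        # small illustrative size; the paper uses n = 1000
h = 1.0 / n

diagA = 2.0 * np.ones(n); diagA[-1] = 1.0          # last diagonal entry is 1
A = (np.diag(diagA) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h

diagB = 4.0 * np.ones(n); diagB[-1] = 2.0          # last diagonal entry is 2
Bm = (h / 6.0) * (np.diag(diagB) + np.diag(np.ones(n - 1), 1)
                  + np.diag(np.ones(n - 1), -1))

E = np.zeros((n, n)); E[-1, -1] = 1.0              # E = e_n e_n^T

sigma = 1.0
R = lambda lam: lam * Bm - A - E - (sigma / (lam - sigma)) * E  # REP (11)
```

R(λ) is symmetric for every λ away from the pole σ = 1, so the symmetric theory of Sections 2–3 applies.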
We can easily check that the above eigenvalue problem satisfies the assumptions (H1)–(H3) in Section 2. Let n = 1000 and σ = 1. We are interested in computing all eigenvalues of the REP (11) in the interval J = (0, 100), which we divide into the two subintervals J_1 = (0, 1) and J_2 = (1, 100).
It is easy to verify that λ* = 0.45731832, 4.48202582, 24.21875011, and 63.69036457 are eigenvalues of the REP (11) in the interval (0, 100), because the corresponding smallest singular values of R(λ*) are less than 10^{−9}.
By Theorem 4, the numbers of eigenvalues of the REP (11) in J_1 and J_2 are 1 and 3, respectively. This is completely consistent with the actual distribution of eigenvalues of the REP (11).
In the following, we choose different values of κ in J_1 and J_2, namely κ = 0.5, 13, 38, 75, and apply Algorithms 1 and 2 to compute all eigenvalues of the REP (11) in J_1 and J_2.
The numerical results for Algorithms 1 and 2 are reported in Table 1, which shows that the proposed methods are very effective for solving rational eigenvalue problems within an interval. Moreover, Algorithm 2 requires less CPU time and fewer iteration steps than Algorithm 1. The numerical results remain qualitatively the same for other values of n and σ.

6. Conclusions

In this paper, the spectral distribution of a class of rational eigenvalue problems has been studied in detail, and two simple iterative methods for solving this kind of rational eigenvalue problem have been proposed based on the spectral distribution. Numerical examples show the efficiency of the new approaches. The spectral properties and distribution of general rational eigenvalue problems remain for future work.

Author Contributions

Methodology, X.C. and W.W.; software, X.S. and A.L.; writing—original draft preparation, X.C. and W.W.; writing-review and editing, X.C., W.W. and X.S.; supervision, X.C. All authors have read and agreed to the published version of the manuscript.

Funding

The research was supported by the National Natural Science Foundation of China (Grant No. 12001396), the Natural Science Foundation of Jiangsu Province (Grant Nos. BK20170591 and BK20200268), the Natural Science Foundation of Jiangsu Higher Education Institutions of China (Grant Nos. 21KJB110017 and 20KJB110005), the China Postdoctoral Science Foundation (Grant No. 2018M642130) and Qing Lan Project of the Jiangsu Higher Education Institutions.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the editor and anonymous referees for useful comments and suggestions which helped to improve the presentation of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Duffin, R.J.; Hazony, D. The degree of a rational matrix function. J. Soc. Indust. Appl. Math. 1963, 11, 645–658. [Google Scholar] [CrossRef]
  2. Voss, H. A rational spectral problem in fluid-solid vibration. Electron. Trans. Numer. Anal. 2003, 16, 94–106. [Google Scholar]
  3. Mackey, D.S.; Mackey, N.; Mehl, C.; Mehrmann, V. Structured polynomial eigenvalue problems: Good vibrations from good linearizations. SIAM J. Matrix Anal. Appl. 2006, 28, 1029–1051. [Google Scholar] [CrossRef]
  4. Solov’ëv, S. Preconditioned iterative methods for a class of nonlinear eigenvalue problems. Linear Algebra Appl. 2006, 415, 210–229. [Google Scholar] [CrossRef] [Green Version]
  5. Mehrmann, V.; Voss, H. Nonlinear eigenvalue problems: A challenge for modern eigenvalue methods. GAMM-Reports 2004, 27, 121–152. [Google Scholar] [CrossRef] [Green Version]
  6. Mohammadi, S.A.; Voss, H. Variational Characterization of Real Eigenvalues in Linear Viscoelastic Oscillators. Math. Mech. Solids 2016. [Google Scholar] [CrossRef]
  7. Hwang, T.; Lin, W.; Liu, J.; Wang, W. Numerical simulation of a three dimensional quantum dot. J. Comput. Phys. 2004, 196, 208–232. [Google Scholar] [CrossRef]
  8. Voss, H. Iterative projection methods for computing relevant energy states of a quantum dot. J. Computat. Phys. 2006, 217, 824–833. [Google Scholar] [CrossRef] [Green Version]
  9. Su, Y.F.; Bai, Z.J. Solving rational eigenvalue problems via linearization. SIAM J. Matrix Anal. Appl. 2011, 32, 201–216. [Google Scholar] [CrossRef] [Green Version]
  10. Dopico, F.M.; González-Pizarro, J. A compact rational Krylov method for large-scale rational eigenvalue problems. Numer. Linear Algebr. 2019, 26, e2214. [Google Scholar] [CrossRef] [Green Version]
  11. Kublanovskaya, V.N. On an approach to the solution of the generalized latent value problem for λ-matrices. SIAM J. Numer. Anal. 1970, 7, 532–537. [Google Scholar] [CrossRef]
  12. Jain, N.K.; Singhal, K. On Kublanovskaya’s approach to the solution of the generalized latent value problem for functional λ-matrices. SIAM J. Numer. Anal. 1983, 20, 1062–1070. [Google Scholar] [CrossRef]
  13. Li, R.C. QR decomposition and nonlinear eigenvalue problems. Math. Numer. Sinica 1989, 11, 374–385. [Google Scholar]
  14. Li, R.C. Compute multiply nonlinear eigenvalues. J. Comput. Math. 1992, 10, 1–20. [Google Scholar]
  15. Dai, H.; Bai, Z.Z. On smooth LU decompositions with applications to solutions of nonlinear eigenvalue problems. J. Comput. Math. 2010, 28, 745–766. [Google Scholar]
  16. Guo, J.S.; Lin, W.W.; Wang, C.S. Nonequivalence deflation for the solution of matrix talent value problems. Linear Algebra Appl. 1995, 231, 15–45. [Google Scholar] [CrossRef]
  17. Voss, H.; Werner, B. A minmax principle for nonlinear eigenvalue problems with application to nonoverdamped systems. Math. Meth. Appl. Sci. 1982, 4, 415–424. [Google Scholar] [CrossRef]
  18. Schwetlick, H.; Schreiber, K. Nonlinear Rayleigh functionals. Linear Algebra Appl. 2012, 436, 3991–4016. [Google Scholar] [CrossRef] [Green Version]
  19. Voss, H. An Arnoldi method for nonlinear eigenvalue problems. BIT 2004, 44, 387–401. [Google Scholar] [CrossRef] [Green Version]
  20. Voss, H. A Jacobi Davidson method for nonlinear and nonsymmetric eigenproblems. Comput. Struct. 2007, 85, 1284–1292. [Google Scholar] [CrossRef]
  21. Binkowski, F.; Zschiedrich, L.; Burger, S. A Riesz-projection-based method for nonlinear eigenvalue problems. J. Comput. Phys. 2020, 419, 109678. [Google Scholar] [CrossRef]
  22. Chen, X.P.; Dai, H. A Modified Newton Method for Nonlinear Eigenvalue Problems. East Asian J. Appl. Math. 2018, 8, 139–150. [Google Scholar]
  23. Chen, X.P.; Wei, W.; Yang, X.; Liu, H.; Pan, X.M. Successive linear Newton interpolation methods for solving the large-scale nonlinear eigenvalue problems. Appl. Math. Comput. 2020, 387, 124663. [Google Scholar] [CrossRef]
  24. Chen, X.P.; Wei, W.; Pan, X.M. Modified successive approximation methods for the nonlinear eigenvalue problems. Appl. Numer. Math. 2021, 164, 190–198. [Google Scholar] [CrossRef]
  25. Voss, H.; Yildiztekin, K.; Huang, X. Nonlinear low rank modification of a symmetric eigenvalue problem. SIAM J. Matrix Anal. Appl. 2011, 32, 515–535. [Google Scholar] [CrossRef] [Green Version]
  26. Wilkinson, J.H. The Algebraic Eigenvalue Problem; Clarendon Press: Oxford, UK, 1965. [Google Scholar]
  27. Wei, W.; Dai, H. Partially orthogonal projection method and its variations for solving polynomial eigenvalue problem. Numer. Math. J. Chin. Univ. 2016, 38, 116–133. (In Chinese) [Google Scholar]
Table 1. Numerical results of Example 2.

              Iter   λ                     r_n             CPU
κ = 0.5
Algorithm 1     7    0.457318355879899     6.60 × 10^−13   0.57
Algorithm 2     3    0.457318325580995     1.10 × 10^−15   0.29
κ = 13
Algorithm 1    21    4.482026373417336     8.36 × 10^−13   1.69
Algorithm 2     3    4.482025808828371     1.19 × 10^−13   0.29
κ = 38
Algorithm 1    23    24.218751627286714    6.27 × 10^−13   1.89
Algorithm 2     2    24.218750015378188    8.46 × 10^−13   0.18
κ = 75
Algorithm 1    23    63.690365902995510    5.45 × 10^−13   1.85
Algorithm 2     2    63.690364569627210    1.34 × 10^−15   0.18
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Chen, X.; Wei, W.; Shi, X.; Luo, A. Spectral Distribution and Numerical Methods for Rational Eigenvalue Problems. Symmetry 2022, 14, 1270. https://0-doi-org.brum.beds.ac.uk/10.3390/sym14061270

