Article

Quasi-Double Diagonally Dominant H-Tensors and the Estimation Inequalities for the Spectral Radius of Nonnegative Tensors

Xincun Wang 1 and Hongbin Lv 2,*
1 Teachers College, Eastern Liaoning University, Dandong 118003, China
2 School of Mathematics and Statistics, Beihua University, Jilin 132013, China
* Author to whom correspondence should be addressed.
Submission received: 20 October 2022 / Revised: 2 December 2022 / Accepted: 7 December 2022 / Published: 7 February 2023

Abstract: In this paper, we study two classes of quasi-double diagonally dominant tensors and prove that they are H-tensors. Numerical examples show that the two classes are not mutually inclusive; thus, we extend the decision conditions for H-tensors. Based on these two classes of tensors, two estimation inequalities for the upper and lower bounds of the spectral radius of nonnegative tensors are obtained.

1. Introduction

Let $\mathbb{R}$ ($\mathbb{C}$) be the real (complex) field. Consider an m-th order n-dimensional tensor $\mathcal{A}$, which consists of $n^m$ entries in $\mathbb{R}$:
$$\mathcal{A} = (a_{i_1 i_2 \cdots i_m}), \quad a_{i_1 i_2 \cdots i_m} \in \mathbb{R}, \quad i_j = 1, 2, \ldots, n, \quad j = 1, 2, \ldots, m.$$
Let $\mathbb{R}^n$ be the set of all n-dimensional real vectors, and let $\mathbb{R}^{[m,n]}$ ($\mathbb{C}^{[m,n]}$) be the set of all m-th order n-dimensional real (complex) tensors. A tensor $\mathcal{A}$ is called nonnegative if $a_{i_1 i_2 \cdots i_m} \ge 0$, and we denote this by $\mathcal{A} \in \mathbb{R}_+^{[m,n]}$. $\mathbb{R}_+^n$ and $\mathbb{R}_{++}^n$ represent the sets of nonnegative and positive vectors in n-dimensional Euclidean space, respectively. We write $\langle n \rangle = \{1, 2, \ldots, n\}$ and $\mathrm{i} = \sqrt{-1}$.
In 2005, Lim [1] and Qi [2] independently defined the eigenvalues of a tensor.
Definition 1. 
Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{[m,n]}$. If there are a complex number $\lambda$ and a nonzero complex vector $x = (x_1, x_2, \ldots, x_n)^T$ such that
$$\mathcal{A}x^{m-1} = \lambda x^{[m-1]},$$
then $\lambda$ is called an eigenvalue of $\mathcal{A}$, $x$ is termed an eigenvector of $\mathcal{A}$ associated with $\lambda$, and $\mathcal{A}x^{m-1}$ and $x^{[m-1]}$ are the vectors whose i-th entries are
$$(\mathcal{A}x^{m-1})_i = \sum_{i_2, \ldots, i_m = 1}^{n} a_{i i_2 \cdots i_m} x_{i_2} \cdots x_{i_m}$$
and $(x^{[m-1]})_i = x_i^{m-1}$, respectively.
Specifically, $(\lambda, x)$ is called an H-eigenpair if $(\lambda, x) \in \mathbb{R} \times \mathbb{R}^n$. The spectral radius of tensor $\mathcal{A}$ is the largest modulus of its eigenvalues, and we denote it by $\rho(\mathcal{A})$. We denote the set of eigenvalues of tensor $\mathcal{A}$ as $\sigma(\mathcal{A})$.
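For concreteness, the two vectors appearing in Definition 1 can be evaluated numerically. The following is a minimal sketch for third-order tensors (m = 3), assuming the tensor is stored as an n × n × n numpy array; the function names are ours, not from any reference implementation.

```python
import numpy as np

def apply_tensor(A, x):
    """(A x^{m-1})_i = sum_{i2,i3} a_{i,i2,i3} x_{i2} x_{i3} for a 3rd-order tensor A."""
    return np.einsum('ijk,j,k->i', A, x, x)

def x_power(x, m):
    """x^{[m-1]}, the vector with entries x_i^{m-1}."""
    return x ** (m - 1)

# how far a candidate pair (lam, x) is from satisfying A x^{m-1} = lam x^{[m-1]}
A = np.random.rand(3, 3, 3)
x = np.random.rand(3)
lam = 2.0
residual = np.linalg.norm(apply_tensor(A, x) - lam * x_power(x, 3))
```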
As a higher-dimensional generalization of matrices, tensors are used in many scientific fields, such as signal and image processing, continuum physics, data mining and processing, nonlinear optimization, elasticity analysis in physics, and higher-order statistics [3,4,5,6]. The properties and criteria of H-tensors (M-tensors) were discussed in detail in [7,8,9], and the relevant results were given there. H-tensors (M-tensors) have many applications; for example, multilinear systems can be expressed as $\mathcal{A}x^{m-1} = b$, where $\mathcal{A}$ and $b \in \mathbb{R}^n$ are given and $x$ is to be solved for; examples of multilinear systems can be found in [10,11,12,13]. Another important application is deciding the positive definiteness of $g(x) = \sum_{i_1, i_2, \ldots, i_m = 1}^{n} a_{i_1 i_2 \cdots i_m} x_{i_1} x_{i_2} \cdots x_{i_m}$, that is, whether $g(x) > 0$ for all $0 \ne x \in \mathbb{R}^n$, where the M-tensor again plays a key role [9]. The estimation of the upper and lower bounds for the spectral radius of a nonnegative tensor is an important topic in the study of the spectral problem of nonnegative tensors [14,15], and the relation between M-tensors and nonnegative tensors yields estimates of the upper and lower bounds for the spectral radius. By analyzing the tensor structure, two classes of quasi-double diagonally dominant tensors are given in this paper and are proved to be H-tensors; at the same time, inequalities are given for estimating the upper and lower bounds for the spectral radius of nonnegative tensors.

2. Preliminaries

In this section, we first recall some preliminary knowledge important to our work on nonnegative tensors.
Ref. [16] generalized the concept of irreducible matrices to irreducible tensors.
Definition 2 ([16]). 
An m-th order n-dimensional tensor $\mathcal{A}$ is called reducible if there exists a nonempty proper index subset $J \subset \langle n \rangle$ such that
$$a_{i_1 i_2 \cdots i_m} = 0, \quad \forall\, i_1 \in J, \quad \forall\, i_2, \ldots, i_m \notin J.$$
If A is not reducible, then A is irreducible.
Definition 3 ([17]). 
Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}_+^{[m,n]}$.
(1) 
We call a nonnegative matrix $G(\mathcal{A})$ the representation associated with the nonnegative tensor $\mathcal{A}$ if the $(i,j)$-th element of $G(\mathcal{A})$ is defined to be the summation of $a_{i i_2 \cdots i_m}$ over all indices $i_2, \ldots, i_m$ with $j \in \{i_2, \ldots, i_m\}$.
(2) 
We call A weakly reducible if its representation G ( A ) is a reducible matrix, and we call it weakly primitive if G ( A ) is a primitive matrix. If A is not weakly reducible, then it is called weakly irreducible.
Definition 4 ([7,18]). 
Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{C}^{[m,n]}$ and let $D = \mathrm{diag}(d_1, d_2, \ldots, d_n)$ be a positive diagonal matrix of order n; the tensor $\mathcal{A}D^{m-1}$ is defined by $(\mathcal{A}D^{m-1})_{i_1 i_2 \cdots i_m} = a_{i_1 i_2 \cdots i_m} d_{i_2} \cdots d_{i_m}$.
We use $\mathcal{I}$ to denote the m-th order n-dimensional unit tensor with entries
$$\mathcal{I}_{i_1 i_2 \cdots i_m} = \begin{cases} 1, & \text{if } i_1 = i_2 = \cdots = i_m, \\ 0, & \text{otherwise}, \end{cases}$$
and we define the m-th order Kronecker delta $\delta_{i_1 i_2 \cdots i_m}$ by
$$\delta_{i_1 i_2 \cdots i_m} = \begin{cases} 1, & \text{if } i_1 = i_2 = \cdots = i_m, \\ 0, & \text{otherwise}. \end{cases}$$
Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{C}^{[m,n]}$, and write the diagonal entry $a_{i i \cdots i}$ as $a_{ii}$ for short. Denote
$$\bar r_i(\mathcal{A}) = \sum_{i_2, \ldots, i_m = 1}^{n} |a_{i i_2 \cdots i_m}|, \qquad r_i(\mathcal{A}) = \bar r_i(\mathcal{A}) - |a_{ii}|, \quad i \in \langle n \rangle,$$
$$r_i^{[j]}(\mathcal{A}) = \sum_{\substack{i_2, \ldots, i_m = 1,\ \delta_{i i_2 \cdots i_m} = 0 \\ j \in \{i_2, \ldots, i_m\}}}^{n} |a_{i i_2 \cdots i_m}|, \qquad \bar r_i^{[j]}(\mathcal{A}) = r_i(\mathcal{A}) - r_i^{[j]}(\mathcal{A}), \quad i, j \in \langle n \rangle.$$
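As a quick illustration of these row quantities, here is a minimal Python sketch for third-order tensors (m = 3); the function names are ours and the code simply transcribes the definitions above.

```python
def r_bar(A, i):
    """bar r_i(A): sum of |a_{i,i2,i3}| over all i2, i3."""
    return np.abs(A[i]).sum()

def r(A, i):
    """r_i(A) = bar r_i(A) - |a_{ii...i}|."""
    return r_bar(A, i) - abs(A[i, i, i])

def r_part(A, i, j):
    """r_i^{[j]}(A): sum of |a_{i,i2,i3}| over non-diagonal entries whose index set contains j."""
    n = A.shape[0]
    total = 0.0
    for i2 in range(n):
        for i3 in range(n):
            if (i2, i3) == (i, i):       # skip the diagonal entry a_{iii}
                continue
            if j in (i2, i3):
                total += abs(A[i, i2, i3])
    return total

def r_bar_part(A, i, j):
    """bar r_i^{[j]}(A) = r_i(A) - r_i^{[j]}(A)."""
    return r(A, i) - r_part(A, i, j)
```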
The study of conditions for identifying H-tensors is the basis for their application. The literature [7,8,9] provides some criteria for H-tensors. In this paper, a different method is used: one class of quasi-double diagonally dominant tensors is obtained by carefully analysing the structure of the tensor, and another class of quasi-double diagonally dominant tensors is obtained by analysing the digraph of the majorization matrix of the tensor.
In the following, we describe two classes of quasi-double diagonally dominant tensors, prove that they are nonsingular H -tensors, and give several inequalities to estimate the spectral radius of nonnegative tensors based on the correspondence between the diagonal dominance of a tensor and the inclusion domain of its eigenvalues.

3. Two Classes of Quasi-Double Diagonally Dominant H-Tensors

In this section, we describe two classes of quasi-double diagonally dominant H-tensors and show that the two classes are not mutually inclusive.
Definition 5 ([8]). 
Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{C}^{[m,n]}$. If
$$|a_{ii}| \ge r_i(\mathcal{A}), \quad i \in \langle n \rangle, \qquad (1)$$
then tensor $\mathcal{A}$ is called diagonally dominant. If the inequalities in (1) are all strict, then tensor $\mathcal{A}$ is called strictly diagonally dominant. If tensor $\mathcal{A}$ is irreducible and at least one inequality in (1) is strict, then tensor $\mathcal{A}$ is called irreducible diagonally dominant. If there is a positive diagonal matrix $D$ such that $\mathcal{A}D^{m-1}$ is strictly diagonally dominant, then tensor $\mathcal{A}$ is called generalized strictly diagonally dominant.
Definition 6 ([9]). 
For $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{C}^{[m,n]}$, its comparison tensor, denoted by $\mathcal{M}_\mathcal{A} = (m_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{[m,n]}$, is defined as
$$m_{i_1 i_2 \cdots i_m} = \begin{cases} |a_{ii}|, & \text{if } i_1 = i_2 = \cdots = i_m = i, \\ -|a_{i_1 i_2 \cdots i_m}|, & \text{otherwise}. \end{cases}$$
Definition 7 ([7,8,9]). 
Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{C}^{[m,n]}$. Tensor $\mathcal{A}$ is said to be a Z-tensor if it can be written as $\mathcal{A} = s\mathcal{I} - \mathcal{B}$, where $s > 0$ and $\mathcal{B} \in \mathbb{R}_+^{[m,n]}$. Furthermore, if $s \ge \rho(\mathcal{B})$, then tensor $\mathcal{A}$ is said to be an M-tensor, and if $s > \rho(\mathcal{B})$, then tensor $\mathcal{A}$ is said to be a nonsingular M-tensor.
Reference [9] also proved the following:
Theorem 1 ([9]). 
If $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{[m,n]}$ is a Z-tensor, then tensor $\mathcal{A}$ is a nonsingular M-tensor if and only if $\mathrm{Re}\,\lambda > 0$ for all $\lambda \in \sigma(\mathcal{A})$.
Definition 8 ([7,8]). 
Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{C}^{[m,n]}$. If the comparison tensor $\mathcal{M}_\mathcal{A}$ of tensor $\mathcal{A}$ is an M-tensor, then tensor $\mathcal{A}$ is called an H-tensor, and if $\mathcal{M}_\mathcal{A}$ is a nonsingular M-tensor, then tensor $\mathcal{A}$ is called a nonsingular H-tensor.
Theorem 2 ([7,8]). 
Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{C}^{[m,n]}$. If tensor $\mathcal{A}$ is strictly diagonally dominant, irreducible diagonally dominant, or generalized strictly diagonally dominant, then tensor $\mathcal{A}$ is a nonsingular H-tensor.
Theorem 3. 
Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{C}^{[m,n]}$. If
 (i) 
$$|a_{ii}| > r_i^{[i]}(\mathcal{A}), \quad i \in \langle n \rangle,$$
 (ii) 
$$\left(|a_{ii}| - r_i^{[i]}(\mathcal{A})\right)\left(|a_{jj}| - \bar r_j^{[i]}(\mathcal{A})\right) > \bar r_i^{[i]}(\mathcal{A})\, r_j^{[i]}(\mathcal{A}), \quad i, j \in \langle n \rangle,\ i \ne j,$$
then $\mathcal{A}$ is nonsingular; that is, $0 \notin \sigma(\mathcal{A})$.
Proof. 
If $0 \in \sigma(\mathcal{A})$, then there exists a nonzero vector $x = (x_1, x_2, \ldots, x_n)^T \in \mathbb{C}^n$ such that
$$\mathcal{A}x^{m-1} = 0. \qquad (2)$$
Assume $|x_{t_1}| \ge |x_{t_2}| \ge \cdots \ge |x_{t_{n-1}}| \ge |x_{t_n}| \ge 0$; then $|x_{t_1}| \ne 0$, and the $t_1$-th equation of (2) gives
$$\sum_{i_2, \ldots, i_m = 1}^{n} a_{t_1 i_2 \cdots i_m} x_{i_2} \cdots x_{i_m} = 0.$$
Hence,
$$a_{t_1 t_1} x_{t_1}^{m-1} = -\sum_{\substack{i_2, \ldots, i_m = 1,\ \delta_{t_1 i_2 \cdots i_m} = 0 \\ t_1 \in \{i_2, \ldots, i_m\}}}^{n} a_{t_1 i_2 \cdots i_m} x_{i_2} \cdots x_{i_m} - \sum_{\substack{i_2, \ldots, i_m = 1 \\ t_1 \notin \{i_2, \ldots, i_m\}}}^{n} a_{t_1 i_2 \cdots i_m} x_{i_2} \cdots x_{i_m};$$
thus, we have
$$|a_{t_1 t_1}|\, |x_{t_1}|^{m-1} \le r_{t_1}^{[t_1]}(\mathcal{A})\, |x_{t_1}|^{m-1} + \bar r_{t_1}^{[t_1]}(\mathcal{A})\, |x_{t_2}|^{m-1},$$
i.e.,
$$\left(|a_{t_1 t_1}| - r_{t_1}^{[t_1]}(\mathcal{A})\right) |x_{t_1}|^{m-1} \le \bar r_{t_1}^{[t_1]}(\mathcal{A})\, |x_{t_2}|^{m-1}. \qquad (3)$$
Similarly, from (2), we have
$$|a_{t_2 t_2}|\, |x_{t_2}|^{m-1} \le r_{t_2}^{[t_1]}(\mathcal{A})\, |x_{t_1}|^{m-1} + \bar r_{t_2}^{[t_1]}(\mathcal{A})\, |x_{t_2}|^{m-1},$$
i.e.,
$$\left(|a_{t_2 t_2}| - \bar r_{t_2}^{[t_1]}(\mathcal{A})\right) |x_{t_2}|^{m-1} \le r_{t_2}^{[t_1]}(\mathcal{A})\, |x_{t_1}|^{m-1}, \qquad (4)$$
where $x_{t_2} \ne 0$; otherwise, from $x_{t_1} \ne 0$ and (3), we would have $|a_{t_1 t_1}| - r_{t_1}^{[t_1]}(\mathcal{A}) \le 0$, in contradiction with (i). In this way, from (i), (3), and (4), we have
$$\left(|a_{t_1 t_1}| - r_{t_1}^{[t_1]}(\mathcal{A})\right)\left(|a_{t_2 t_2}| - \bar r_{t_2}^{[t_1]}(\mathcal{A})\right) |x_{t_1}|^{m-1} |x_{t_2}|^{m-1} \le \bar r_{t_1}^{[t_1]}(\mathcal{A})\, r_{t_2}^{[t_1]}(\mathcal{A})\, |x_{t_1}|^{m-1} |x_{t_2}|^{m-1},$$
i.e.,
$$\left(|a_{t_1 t_1}| - r_{t_1}^{[t_1]}(\mathcal{A})\right)\left(|a_{t_2 t_2}| - \bar r_{t_2}^{[t_1]}(\mathcal{A})\right) \le \bar r_{t_1}^{[t_1]}(\mathcal{A})\, r_{t_2}^{[t_1]}(\mathcal{A}),$$
in contradiction with (ii). Therefore, $0 \notin \sigma(\mathcal{A})$. □
Theorem 4. 
If $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{C}^{[m,n]}$, then $\sigma(\mathcal{A}) \subseteq D(\mathcal{A}) \cup \widetilde{D}(\mathcal{A})$, where
$$D(\mathcal{A}) = \bigcup_{i \in \langle n \rangle} D_i(\mathcal{A}), \qquad D_i(\mathcal{A}) = \left\{ z \in \mathbb{C} : |z - a_{ii}| \le r_i^{[i]}(\mathcal{A}) \right\}, \quad i \in \langle n \rangle,$$
$$\widetilde{D}(\mathcal{A}) = \bigcup_{i \ne j} D_{ij}(\mathcal{A}), \qquad D_{ij}(\mathcal{A}) = \left\{ z \in \mathbb{C} : \left(|z - a_{ii}| - r_i^{[i]}(\mathcal{A})\right)\left(|z - a_{jj}| - \bar r_j^{[i]}(\mathcal{A})\right) \le \bar r_i^{[i]}(\mathcal{A})\, r_j^{[i]}(\mathcal{A}) \right\}, \quad i, j \in \langle n \rangle.$$
Proof. 
If $\lambda$ is an eigenvalue of tensor $\mathcal{A}$, then $0 \in \sigma(\lambda\mathcal{I} - \mathcal{A})$. From Theorem 3, we know there is some $i_0 \in \langle n \rangle$ such that
$$|\lambda - a_{i_0 i_0}| \le r_{i_0}^{[i_0]}(\mathcal{A}),$$
or there are some $i_0, j_0 \in \langle n \rangle$, $i_0 \ne j_0$, such that
$$\left(|\lambda - a_{i_0 i_0}| - r_{i_0}^{[i_0]}(\mathcal{A})\right)\left(|\lambda - a_{j_0 j_0}| - \bar r_{j_0}^{[i_0]}(\mathcal{A})\right) \le \bar r_{i_0}^{[i_0]}(\mathcal{A})\, r_{j_0}^{[i_0]}(\mathcal{A}).$$
Therefore, we have $\lambda \in D_{i_0}(\mathcal{A})$ or $\lambda \in D_{i_0 j_0}(\mathcal{A})$. □
Theorem 5. 
Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{C}^{[m,n]}$. If
 (i) 
$$|a_{ii}| > r_i^{[i]}(\mathcal{A}), \quad i \in \langle n \rangle,$$
 (ii) 
$$\left(|a_{ii}| - r_i^{[i]}(\mathcal{A})\right)\left(|a_{jj}| - \bar r_j^{[i]}(\mathcal{A})\right) > \bar r_i^{[i]}(\mathcal{A})\, r_j^{[i]}(\mathcal{A}), \quad i, j \in \langle n \rangle,\ i \ne j,$$
then $\mathcal{M}_\mathcal{A}$ is a nonsingular M-tensor; that is, $\mathcal{A}$ is a nonsingular H-tensor.
Proof. 
Consider the comparison tensor $\mathcal{M}_\mathcal{A}$ of tensor $\mathcal{A}$. We claim that $\mathrm{Re}\,\lambda > 0$ for all $\lambda \in \sigma(\mathcal{M}_\mathcal{A})$. Otherwise, if there exists $\lambda_0 \in \sigma(\mathcal{M}_\mathcal{A})$ with $\mathrm{Re}\,\lambda_0 \le 0$, then from (i), for every $i \in \langle n \rangle$,
$$\bigl|\lambda_0 - |a_{ii}|\bigr| = \bigl|\mathrm{i}\,\mathrm{Im}\,\lambda_0 + \mathrm{Re}\,\lambda_0 - |a_{ii}|\bigr| \ge \bigl|\mathrm{Re}\,\lambda_0 - |a_{ii}|\bigr| \ge |a_{ii}| > r_i^{[i]}(\mathcal{A}).$$
From (i) and (ii), we also have
$$|a_{jj}| - \bar r_j^{[i]}(\mathcal{A}) > 0, \quad i, j \in \langle n \rangle,\ i \ne j.$$
Hence, since $\bigl|\lambda_0 - |a_{ii}|\bigr| \ge |a_{ii}|$ and $\bigl|\lambda_0 - |a_{jj}|\bigr| \ge |a_{jj}|$, from (i) and (ii) we have
$$\left(\bigl|\lambda_0 - |a_{ii}|\bigr| - r_i^{[i]}(\mathcal{A})\right)\left(\bigl|\lambda_0 - |a_{jj}|\bigr| - \bar r_j^{[i]}(\mathcal{A})\right) \ge \left(|a_{ii}| - r_i^{[i]}(\mathcal{A})\right)\left(|a_{jj}| - \bar r_j^{[i]}(\mathcal{A})\right) > \bar r_i^{[i]}(\mathcal{A})\, r_j^{[i]}(\mathcal{A}), \quad i, j \in \langle n \rangle,\ i \ne j.$$
Therefore, from Theorem 4, we know $\lambda_0 \notin \sigma(\mathcal{M}_\mathcal{A})$, a contradiction with $\lambda_0 \in \sigma(\mathcal{M}_\mathcal{A})$. Thus, $\mathrm{Re}\,\lambda > 0$ for all $\lambda \in \sigma(\mathcal{M}_\mathcal{A})$. Then, from Theorem 1, we know $\mathcal{M}_\mathcal{A}$ is a nonsingular M-tensor; so, from Definition 8, we know tensor $\mathcal{A}$ is a nonsingular H-tensor. □
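The conditions of Theorem 5 are straightforward to verify mechanically. Below is a hedged Python sketch (third-order case, reusing the helpers defined after the notation in Section 2); the function name is ours.

```python
def satisfies_theorem5(A):
    """Check conditions (i) and (ii) of Theorem 5 for a 3rd-order tensor A."""
    n = A.shape[0]
    # (i): |a_ii| > r_i^{[i]}(A) for every i
    if not all(abs(A[i, i, i]) > r_part(A, i, i) for i in range(n)):
        return False
    # (ii): (|a_ii| - r_i^{[i]})(|a_jj| - bar r_j^{[i]}) > bar r_i^{[i]} * r_j^{[i]}, i != j
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            lhs = (abs(A[i, i, i]) - r_part(A, i, i)) * (abs(A[j, j, j]) - r_bar_part(A, j, i))
            rhs = r_bar_part(A, i, i) * r_part(A, j, i)
            if not lhs > rhs:
                return False
    return True
```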
Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{C}^{[m,n]}$. Its majorization matrix [19], denoted by $\hat A = (\hat a_{ij}) \in \mathbb{C}^{n \times n}$, has entries $\hat a_{ij} = a_{i j j \cdots j}$, $i, j \in \langle n \rangle$, and we set $r_i(\hat A) = \sum_{j = 1, j \ne i}^{n} |\hat a_{ij}|$. The digraph [20] of the matrix $\hat A$ is denoted by $\Gamma(\hat A)$, the directed edge from $i$ to $j$ ($i \ne j$, $\hat a_{ij} \ne 0$) is denoted by $e_{ij}$, and $\Gamma_i^+(\hat A) = \{ j \in \langle n \rangle : j \ne i,\ a_{i j j \cdots j} \ne 0 \}$.
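The following short Python sketch (again third-order, with names of our choosing) builds the majorization matrix, its digraph edges, and the out-neighbour sets used in the next theorems.

```python
def majorization_matrix(A):
    """hat A with entries hat a_{ij} = a_{ijj} for a 3rd-order tensor."""
    n = A.shape[0]
    return np.array([[A[i, j, j] for j in range(n)] for i in range(n)])

def digraph_edges(A_hat):
    """Edges e_{ij} of Gamma(hat A): i != j with hat a_{ij} != 0."""
    n = A_hat.shape[0]
    return [(i, j) for i in range(n) for j in range(n) if i != j and A_hat[i, j] != 0]

def out_neighbours(A_hat, i):
    """Gamma_i^+(hat A) = { j != i : hat a_{ij} != 0 }."""
    return [j for j in range(A_hat.shape[0]) if j != i and A_hat[i, j] != 0]

def r_hat(A_hat, i):
    """r_i(hat A): off-diagonal absolute row sum of the majorization matrix."""
    return np.abs(A_hat[i]).sum() - abs(A_hat[i, i])
```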
Theorem 6. 
Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{C}^{[m,n]}$. If
 (i) 
$$|a_{jj}|\left(|a_{ii}| - r_i(\mathcal{A}) + r_i(\hat A)\right) > r_j(\mathcal{A})\, r_i(\hat A), \quad \forall\, e_{ij} \in \Gamma(\hat A),$$
 (ii) 
$$|a_{ii}| > r_i(\mathcal{A}), \quad \forall\, i \in \langle n \rangle \text{ with } \Gamma_i^+(\hat A) = \emptyset,$$
then $0 \notin \sigma(\mathcal{A})$.
Proof. 
If $0 \in \sigma(\mathcal{A})$, then there exists a nonzero vector $x = (x_1, x_2, \ldots, x_n)^T \in \mathbb{C}^n$ such that
$$\mathcal{A}x^{m-1} = 0. \qquad (5)$$
Assume $|x_{t_1}| \ge |x_{t_2}| \ge \cdots \ge |x_{t_{n-1}}| \ge |x_{t_n}| \ge 0$; then $|x_{t_1}| \ne 0$.
(1)
If $\Gamma_{t_1}^+(\hat A) = \emptyset$, then $r_{t_1}(\hat A) = 0$. From (5), we have
$$\sum_{i_2, \ldots, i_m = 1}^{n} a_{t_1 i_2 \cdots i_m} x_{i_2} \cdots x_{i_m} = 0.$$
Hence, we have
$$|a_{t_1 t_1}|\, |x_{t_1}|^{m-1} \le \left(r_{t_1}(\mathcal{A}) - r_{t_1}(\hat A)\right) |x_{t_1}|^{m-1} + r_{t_1}(\hat A)\, |x_{t_1}|^{m-1} = r_{t_1}(\mathcal{A})\, |x_{t_1}|^{m-1},$$
i.e.,
$$|a_{t_1 t_1}| \le r_{t_1}(\mathcal{A}).$$
This is in contradiction with (ii).
(2)
If $\Gamma_{t_1}^+(\hat A) \ne \emptyset$, we may assume
$$a_{t_1 t_2 t_2 \cdots t_2} = \cdots = a_{t_1 t_{s-1} t_{s-1} \cdots t_{s-1}} = 0, \qquad a_{t_1 t_s t_s \cdots t_s} \ne 0, \quad s \in \langle n \rangle;$$
then $e_{t_1 t_s} \in \Gamma(\hat A)$. We discuss this in two cases:
(2.1)
Let $x_{t_s} \ne 0$; from (5), we have
$$\sum_{i_2, \ldots, i_m = 1}^{n} a_{t_1 i_2 \cdots i_m} x_{i_2} \cdots x_{i_m} = 0.$$
Hence, we have
$$|a_{t_1 t_1}|\, |x_{t_1}|^{m-1} \le \left(r_{t_1}(\mathcal{A}) - r_{t_1}(\hat A)\right) |x_{t_1}|^{m-1} + r_{t_1}(\hat A)\, |x_{t_s}|^{m-1},$$
i.e.,
$$\left(|a_{t_1 t_1}| - r_{t_1}(\mathcal{A}) + r_{t_1}(\hat A)\right) |x_{t_1}|^{m-1} \le r_{t_1}(\hat A)\, |x_{t_s}|^{m-1}.$$
Similarly, from (5), we have
$$|a_{t_s t_s}|\, |x_{t_s}|^{m-1} \le r_{t_s}(\mathcal{A})\, |x_{t_1}|^{m-1}.$$
Thus,
$$|a_{t_s t_s}|\left(|a_{t_1 t_1}| - r_{t_1}(\mathcal{A}) + r_{t_1}(\hat A)\right) |x_{t_s}|^{m-1} |x_{t_1}|^{m-1} \le r_{t_1}(\hat A)\, r_{t_s}(\mathcal{A})\, |x_{t_s}|^{m-1} |x_{t_1}|^{m-1},$$
i.e.,
$$|a_{t_s t_s}|\left(|a_{t_1 t_1}| - r_{t_1}(\mathcal{A}) + r_{t_1}(\hat A)\right) \le r_{t_1}(\hat A)\, r_{t_s}(\mathcal{A}), \quad e_{t_1 t_s} \in \Gamma(\hat A).$$
(2.2)
If $a_{t_1 t_s t_s \cdots t_s} \ne 0$, $t_1 \ne t_s$, $2 \le s \le n$, and $|x_{t_s}| = 0$, then we have
$$\left(|a_{t_1 t_1}| - r_{t_1}(\mathcal{A}) + r_{t_1}(\hat A)\right) |x_{t_1}|^{m-1} \le r_{t_1}(\hat A)\, |x_{t_s}|^{m-1} = 0;$$
thus,
$$|a_{t_1 t_1}| - r_{t_1}(\mathcal{A}) + r_{t_1}(\hat A) \le 0.$$
Hence,
$$|a_{t_s t_s}|\left(|a_{t_1 t_1}| - r_{t_1}(\mathcal{A}) + r_{t_1}(\hat A)\right) \le 0 \le r_{t_1}(\hat A)\, r_{t_s}(\mathcal{A}).$$
Combining (2.1) and (2.2), we obtain a contradiction with (i). Recombining (1) and (2), we conclude that $0 \notin \sigma(\mathcal{A})$. □
Theorem 7. 
If $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{C}^{[m,n]}$, then
$$\sigma(\mathcal{A}) \subseteq \left( \bigcup_{e_{ij} \in \Gamma(\hat A)} \left\{ z \in \mathbb{C} : |z - a_{jj}|\left(|z - a_{ii}| - r_i(\mathcal{A}) + r_i(\hat A)\right) \le r_j(\mathcal{A})\, r_i(\hat A) \right\} \right) \cup \left( \bigcup_{\substack{i \in \langle n \rangle \\ \Gamma_i^+(\hat A) = \emptyset}} \left\{ z \in \mathbb{C} : |z - a_{ii}| \le r_i(\mathcal{A}) \right\} \right).$$
Proof. 
If $\lambda$ is an eigenvalue of tensor $\mathcal{A}$, then $0 \in \sigma(\lambda\mathcal{I} - \mathcal{A})$. From Theorem 6, we know there are some $i_0, j_0 \in \langle n \rangle$ with $e_{i_0 j_0} \in \Gamma(\hat A)$ such that
$$|\lambda - a_{j_0 j_0}|\left(|\lambda - a_{i_0 i_0}| - r_{i_0}(\mathcal{A}) + r_{i_0}(\hat A)\right) \le r_{j_0}(\mathcal{A})\, r_{i_0}(\hat A),$$
or there exists $i_0 \in \langle n \rangle$ with $\Gamma_{i_0}^+(\hat A) = \emptyset$ such that
$$|\lambda - a_{i_0 i_0}| \le r_{i_0}(\mathcal{A}).$$
In either case, $\lambda$ belongs to the union on the right-hand side. □
Theorem 8. 
Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{C}^{[m,n]}$. If
 (i) 
$$|a_{jj}|\left(|a_{ii}| - r_i(\mathcal{A}) + r_i(\hat A)\right) > r_j(\mathcal{A})\, r_i(\hat A), \quad \forall\, e_{ij} \in \Gamma(\hat A),$$
 (ii) 
$$|a_{ii}| > r_i(\mathcal{A}), \quad \forall\, i \in \langle n \rangle \text{ with } \Gamma_i^+(\hat A) = \emptyset,$$
then $\mathcal{M}_\mathcal{A}$ is a nonsingular M-tensor; that is, $\mathcal{A}$ is a nonsingular H-tensor.
Proof. 
Consider the comparison tensor $\mathcal{M}_\mathcal{A}$ of tensor $\mathcal{A}$. For any $\lambda \in \sigma(\mathcal{M}_\mathcal{A})$, an argument similar to the proof of Theorem 5 (using Theorem 7 in place of Theorem 4) shows that $\mathrm{Re}\,\lambda > 0$. Therefore, from Theorem 1, we know the comparison tensor $\mathcal{M}_\mathcal{A}$ of tensor $\mathcal{A}$ is a nonsingular M-tensor; so, $\mathcal{A}$ is a nonsingular H-tensor. □
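Analogously to the sketch after Theorem 5, the conditions of Theorem 8 can be checked programmatically; this is a hedged third-order sketch reusing the digraph helpers above, with a function name of our choosing.

```python
def satisfies_theorem8(A):
    """Check conditions (i) and (ii) of Theorem 8 for a 3rd-order tensor A."""
    n = A.shape[0]
    A_hat = majorization_matrix(A)
    # (i): |a_jj| (|a_ii| - r_i(A) + r_i(hat A)) > r_j(A) r_i(hat A) for every edge e_ij
    for (i, j) in digraph_edges(A_hat):
        lhs = abs(A[j, j, j]) * (abs(A[i, i, i]) - r(A, i) + r_hat(A_hat, i))
        rhs = r(A, j) * r_hat(A_hat, i)
        if not lhs > rhs:
            return False
    # (ii): |a_ii| > r_i(A) whenever i has no out-neighbours in Gamma(hat A)
    for i in range(n):
        if not out_neighbours(A_hat, i) and not abs(A[i, i, i]) > r(A, i):
            return False
    return True
```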
We give simple examples for Theorems 5 and 8, respectively.
Example 1. 
Let $\mathcal{A} \in \mathbb{R}_+^{[3,3]}$, where
$$\mathcal{A}(1,:,:) = \begin{pmatrix} 6 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 1.5 \end{pmatrix}, \qquad \mathcal{A}(2,:,:) = \begin{pmatrix} 0 & 0.5 & 0 \\ 0 & 5 & 1 \\ 1 & 1 & 1 \end{pmatrix}, \qquad \mathcal{A}(3,:,:) = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 1 & 1 \\ 1 & 1 & 7 \end{pmatrix}.$$
Clearly, the comparison tensor $\mathcal{M}_\mathcal{A}$ of $\mathcal{A}$ is a Z-tensor. Since $|a_{111}| = 6 < 6.5 = r_1(\mathcal{A})$, tensor $\mathcal{A}$ is not strictly diagonally dominant. By calculation, we have
$$|a_{111}| = 6 > 2 = r_1^{[1]}(\mathcal{A}), \qquad |a_{222}| = 5 > 2.5 = r_2^{[2]}(\mathcal{A}), \qquad |a_{333}| = 7 > 3 = r_3^{[3]}(\mathcal{A}),$$
$$\left(|a_{111}| - r_1^{[1]}(\mathcal{A})\right)\left(|a_{222}| - \bar r_2^{[1]}(\mathcal{A})\right) = (6-2)(5-3) > 4.5 \times 1.5 = \bar r_1^{[1]}(\mathcal{A})\, r_2^{[1]}(\mathcal{A}),$$
$$\left(|a_{111}| - r_1^{[1]}(\mathcal{A})\right)\left(|a_{333}| - \bar r_3^{[1]}(\mathcal{A})\right) = (6-2)(7-3) > 4.5 \times 3 = \bar r_1^{[1]}(\mathcal{A})\, r_3^{[1]}(\mathcal{A}),$$
$$\left(|a_{222}| - r_2^{[2]}(\mathcal{A})\right)\left(|a_{111}| - \bar r_1^{[2]}(\mathcal{A})\right) = (5-2.5)(6-3.5) > 2 \times 3 = \bar r_2^{[2]}(\mathcal{A})\, r_1^{[2]}(\mathcal{A}),$$
$$\left(|a_{222}| - r_2^{[2]}(\mathcal{A})\right)\left(|a_{333}| - \bar r_3^{[2]}(\mathcal{A})\right) = (5-2.5)(7-1) > 2 \times 5 = \bar r_2^{[2]}(\mathcal{A})\, r_3^{[2]}(\mathcal{A}),$$
$$\left(|a_{333}| - r_3^{[3]}(\mathcal{A})\right)\left(|a_{111}| - \bar r_1^{[3]}(\mathcal{A})\right) = (7-3)(6-1) > 3 \times 5.5 = \bar r_3^{[3]}(\mathcal{A})\, r_1^{[3]}(\mathcal{A}),$$
$$\left(|a_{333}| - r_3^{[3]}(\mathcal{A})\right)\left(|a_{222}| - \bar r_2^{[3]}(\mathcal{A})\right) = (7-3)(5-0.5) > 3 \times 4 = \bar r_3^{[3]}(\mathcal{A})\, r_2^{[3]}(\mathcal{A}).$$
Conditions (i) and (ii) of Theorem 5 are satisfied; therefore, from Theorem 5, we know that $\mathcal{M}_\mathcal{A}$ is a nonsingular M-tensor; so, $\mathcal{A}$ is a nonsingular H-tensor.
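The numbers quoted in Example 1 can be reproduced with the helper functions sketched earlier; the expected values in the comments are the ones computed above, and the checker names are ours.

```python
# Example 1 tensor, entered slice by slice as printed above
A1 = np.array([
    [[6, 0, 1], [0, 1, 1], [1, 1, 1.5]],
    [[0, 0.5, 0], [0, 5, 1], [1, 1, 1]],
    [[0, 1, 0], [1, 1, 1], [1, 1, 7]],
])
print([r_part(A1, i, i) for i in range(3)])   # [2.0, 2.5, 3.0]
print(satisfies_theorem5(A1))                  # True
print(satisfies_theorem8(A1))                  # False, cf. Remark 1 below
```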
Example 2. 
Let $\mathcal{A} \in \mathbb{R}_+^{[3,3]}$, where
$$\mathcal{A}(1,:,:) = \begin{pmatrix} 5 & 0.8 & 0.5 \\ 0 & 2 & 0.2 \\ 0.5 & 0 & 2.2 \end{pmatrix}, \qquad \mathcal{A}(2,:,:) = \begin{pmatrix} 2 & 0.4 & 0.5 \\ 0.7 & 8.65 & 0.6 \\ 0.5 & 0.3 & 1 \end{pmatrix}, \qquad \mathcal{A}(3,:,:) = \begin{pmatrix} 1.5 & 0.5 & 0.5 \\ 0.5 & 1.5 & 0.5 \\ 0.5 & 0.5 & 8.45 \end{pmatrix}.$$
Clearly, the comparison tensor $\mathcal{M}_\mathcal{A}$ of $\mathcal{A}$ is a Z-tensor. Since $|a_{111}| = 5 < 6.2 = r_1(\mathcal{A})$, tensor $\mathcal{A}$ is not strictly diagonally dominant. However, it is easy to verify that the conditions of Theorem 8 are satisfied; therefore, $\mathcal{M}_\mathcal{A}$ is a nonsingular M-tensor; that is, $\mathcal{A}$ is a nonsingular H-tensor.
Remark 1. 
The conditions of Theorems 5 and 8, which identify H-tensors, are not mutually inclusive. Example 1 satisfies the conditions of Theorem 5 and is therefore known to be an H-tensor by applying Theorem 5; however,
$$|a_{222}|\left(|a_{111}| - r_1(\mathcal{A}) + r_1(\hat A)\right) = 5 \times (6 - 4) < 4.5 \times 2.5 = r_2(\mathcal{A})\, r_1(\hat A), \quad e_{12} \in \Gamma(\hat A).$$
Therefore, the conditions of Theorem 8 are not satisfied, and thus Theorem 8 cannot determine it to be an H-tensor.
Conversely, Example 2 satisfies the conditions of Theorem 8 and is known to be an H-tensor by applying Theorem 8; however,
$$\left(|a_{222}| - r_2^{[2]}(\mathcal{A})\right)\left(|a_{111}| - \bar r_1^{[2]}(\mathcal{A})\right) = (8.65 - 2)(5 - 3.2) < 4 \times 3 = \bar r_2^{[2]}(\mathcal{A})\, r_1^{[2]}(\mathcal{A}).$$
Therefore, the conditions of Theorem 5 are not satisfied, and thus Theorem 5 cannot be applied to determine that it is an H-tensor.

4. Estimation Inequalities for the Spectral Radius of Nonnegative Tensors

Based on the two classes of H -tensors given in Section 3, two estimation inequalities for the spectral radius of nonnegative tensors are given in this section. First, some basic results of the spectral radius are introduced.
Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m})$, $\mathcal{B} = (b_{i_1 i_2 \cdots i_m}) \in \mathbb{R}_+^{[m,n]}$. If $a_{i_1 i_2 \cdots i_m} \le b_{i_1 i_2 \cdots i_m}$ for all $i_1, i_2, \ldots, i_m \in \langle n \rangle$, then we write $0 \le \mathcal{A} \le \mathcal{B}$.
Theorem 9 ([21]). 
Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m})$ and $\mathcal{B} = (b_{i_1 i_2 \cdots i_m}) \in \mathbb{R}_+^{[m,n]}$. If $0 \le \mathcal{A} \le \mathcal{B}$, then $\rho(\mathcal{A}) \le \rho(\mathcal{B})$. In particular, $\rho(\mathcal{A}) \ge a_{ii}$ for every $i \in \langle n \rangle$.
For the spectral properties of general nonnegative tensors, Ref. [21] provided the following results.
Theorem 10. 
If $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}_+^{[m,n]}$, then $\rho(\mathcal{A})$ is an eigenvalue of $\mathcal{A}$, and there is a corresponding nonnegative eigenvector $x \in \mathbb{R}_+^n$.
Theorem 11. 
Let A be an m-th order n-dimensional nonnegative weakly irreducible tensor; then, there exists a unique positive eigenvector corresponding to the spectral radius up to a multiplicative constant.
In [21], the upper and lower bounds for the spectral radius of a nonnegative tensor were given, which all depended only on the entries of A .
Theorem 12. 
If $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}_+^{[m,n]}$, then
$$\min_{i \in \langle n \rangle} \bar r_i(\mathcal{A}) \le \rho(\mathcal{A}) \le \max_{i \in \langle n \rangle} \bar r_i(\mathcal{A}).$$
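To make the comparisons in the examples below reproducible, here is a hedged Python sketch of the classical bounds of Theorem 12 together with a power-type (NQZ-style) iteration for approximating ρ(A); the iteration is a standard tool for nonnegative, weakly primitive tensors and is not part of this paper, and the function names are ours.

```python
def row_sum_bounds(A):
    """Theorem 12: min_i bar r_i(A) <= rho(A) <= max_i bar r_i(A)."""
    sums = [r_bar(A, i) for i in range(A.shape[0])]
    return min(sums), max(sums)

def approx_spectral_radius(A, iters=2000):
    """Rough power-type (NQZ-style) iteration for rho(A) of a nonnegative,
    weakly primitive 3rd-order tensor; assumes the iterates stay positive."""
    x = np.ones(A.shape[0])
    for _ in range(iters):
        y = apply_tensor(A, x)          # y_i = (A x^2)_i
        x = y ** 0.5                    # x <- (A x^2)^{[1/2]}
        x = x / np.linalg.norm(x)
    y = apply_tensor(A, x)
    return (y / x ** 2).max()           # Collatz-Wielandt-style estimate
```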
Based on Theorems 4 and 5 in Section 3, the following estimation inequalities for the upper and lower bounds for the spectral radius of nonnegative tensors are given.
Theorem 13. 
Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}_+^{[m,n]}$; then,
$$\min_{i \ne j,\ i, j \in \langle n \rangle} r_{ij}(\mathcal{A}) \le \rho(\mathcal{A}) \le \max\left\{ \max_{i \in \langle n \rangle} \left\{ a_{ii} + r_i^{[i]}(\mathcal{A}) \right\},\ \max_{i \ne j,\ i, j \in \langle n \rangle} r_{ij}(\mathcal{A}) \right\},$$
where
$$r_{ij}(\mathcal{A}) = \frac{1}{2}\left\{ a_{ii} + r_i^{[i]}(\mathcal{A}) + a_{jj} + \bar r_j^{[i]}(\mathcal{A}) + \left[ \left( a_{ii} + r_i^{[i]}(\mathcal{A}) - a_{jj} - \bar r_j^{[i]}(\mathcal{A}) \right)^2 + 4\, \bar r_i^{[i]}(\mathcal{A})\, r_j^{[i]}(\mathcal{A}) \right]^{\frac{1}{2}} \right\}.$$
Proof. 
From Theorem 10, we have $\rho(\mathcal{A}) \in \sigma(\mathcal{A})$. From Theorem 4, we know there exists $i_0 \in \langle n \rangle$ satisfying
$$\rho(\mathcal{A}) \le a_{i_0 i_0} + r_{i_0}^{[i_0]}(\mathcal{A}),$$
or there exist $i_0, j_0 \in \langle n \rangle$, $i_0 \ne j_0$, satisfying
$$\left(\rho(\mathcal{A}) - a_{i_0 i_0} - r_{i_0}^{[i_0]}(\mathcal{A})\right)\left(\rho(\mathcal{A}) - a_{j_0 j_0} - \bar r_{j_0}^{[i_0]}(\mathcal{A})\right) \le \bar r_{i_0}^{[i_0]}(\mathcal{A})\, r_{j_0}^{[i_0]}(\mathcal{A}),$$
and hence $\rho(\mathcal{A}) \le r_{i_0 j_0}(\mathcal{A})$. Therefore,
$$\rho(\mathcal{A}) \le \max\left\{ \max_{i \in \langle n \rangle} \left\{ a_{ii} + r_i^{[i]}(\mathcal{A}) \right\},\ \max_{i \ne j,\ i, j \in \langle n \rangle} r_{ij}(\mathcal{A}) \right\}.$$
On the other hand, if $\mathcal{A}$ is weakly irreducible, then it is known from Theorem 11 that there exists $x = (x_1, x_2, \ldots, x_n)^T \in \mathbb{R}_{++}^n$ such that
$$\mathcal{A}x^{m-1} = \rho(\mathcal{A})\, x^{[m-1]}. \qquad (6)$$
Without loss of generality, suppose that $x_{t_1} \ge x_{t_2} \ge \cdots \ge x_{t_{n-1}} \ge x_{t_n} > 0$. From (6), we have
$$\left(\rho(\mathcal{A}) - a_{t_n t_n}\right) x_{t_n}^{m-1} = \sum_{\substack{i_2, \ldots, i_m = 1 \\ \delta_{t_n i_2 \cdots i_m} = 0}}^{n} a_{t_n i_2 \cdots i_m} x_{i_2} \cdots x_{i_m} \ge r_{t_n}^{[t_n]}(\mathcal{A})\, x_{t_n}^{m-1} + \bar r_{t_n}^{[t_n]}(\mathcal{A})\, x_{t_{n-1}}^{m-1},$$
and
$$\left(\rho(\mathcal{A}) - a_{t_{n-1} t_{n-1}}\right) x_{t_{n-1}}^{m-1} = \sum_{\substack{i_2, \ldots, i_m = 1 \\ \delta_{t_{n-1} i_2 \cdots i_m} = 0}}^{n} a_{t_{n-1} i_2 \cdots i_m} x_{i_2} \cdots x_{i_m} \ge r_{t_{n-1}}^{[t_n]}(\mathcal{A})\, x_{t_n}^{m-1} + \bar r_{t_{n-1}}^{[t_n]}(\mathcal{A})\, x_{t_{n-1}}^{m-1}.$$
Thus, we have
$$\left(\rho(\mathcal{A}) - a_{t_n t_n} - r_{t_n}^{[t_n]}(\mathcal{A})\right) x_{t_n}^{m-1} \ge \bar r_{t_n}^{[t_n]}(\mathcal{A})\, x_{t_{n-1}}^{m-1}, \qquad (7)$$
and
$$\left(\rho(\mathcal{A}) - a_{t_{n-1} t_{n-1}} - \bar r_{t_{n-1}}^{[t_n]}(\mathcal{A})\right) x_{t_{n-1}}^{m-1} \ge r_{t_{n-1}}^{[t_n]}(\mathcal{A})\, x_{t_n}^{m-1}. \qquad (8)$$
Multiplying (7) with (8) gives
$$\left(\rho(\mathcal{A}) - a_{t_n t_n} - r_{t_n}^{[t_n]}(\mathcal{A})\right)\left(\rho(\mathcal{A}) - a_{t_{n-1} t_{n-1}} - \bar r_{t_{n-1}}^{[t_n]}(\mathcal{A})\right) x_{t_{n-1}}^{m-1} x_{t_n}^{m-1} \ge \bar r_{t_n}^{[t_n]}(\mathcal{A})\, r_{t_{n-1}}^{[t_n]}(\mathcal{A})\, x_{t_{n-1}}^{m-1} x_{t_n}^{m-1};$$
that is,
$$\left(\rho(\mathcal{A}) - a_{t_n t_n} - r_{t_n}^{[t_n]}(\mathcal{A})\right)\left(\rho(\mathcal{A}) - a_{t_{n-1} t_{n-1}} - \bar r_{t_{n-1}}^{[t_n]}(\mathcal{A})\right) \ge \bar r_{t_n}^{[t_n]}(\mathcal{A})\, r_{t_{n-1}}^{[t_n]}(\mathcal{A}).$$
Therefore, we have
$$\rho(\mathcal{A}) \ge \frac{1}{2}\left\{ a_{t_n t_n} + r_{t_n}^{[t_n]}(\mathcal{A}) + a_{t_{n-1} t_{n-1}} + \bar r_{t_{n-1}}^{[t_n]}(\mathcal{A}) + \left[ \left( a_{t_n t_n} + r_{t_n}^{[t_n]}(\mathcal{A}) - a_{t_{n-1} t_{n-1}} - \bar r_{t_{n-1}}^{[t_n]}(\mathcal{A}) \right)^2 + 4\, \bar r_{t_n}^{[t_n]}(\mathcal{A})\, r_{t_{n-1}}^{[t_n]}(\mathcal{A}) \right]^{\frac{1}{2}} \right\} = r_{t_n t_{n-1}}(\mathcal{A}) \ge \min_{i \ne j,\ i, j \in \langle n \rangle} r_{ij}(\mathcal{A}).$$
For a general nonnegative tensor $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}_+^{[m,n]}$, we define
$$\mathcal{A}(\varepsilon) = \left(a_{i_1 i_2 \cdots i_m}(\varepsilon)\right) \in \mathbb{R}_+^{[m,n]}, \quad \varepsilon > 0,$$
where $a_{i_1 i_2 \cdots i_m}(\varepsilon) = a_{i_1 i_2 \cdots i_m} + \varepsilon$; then, $\mathcal{A}(\varepsilon)$ is weakly irreducible. Therefore, from the above proof, we have
$$\rho(\mathcal{A}(\varepsilon)) \ge r_{t_n t_{n-1}}(\mathcal{A}(\varepsilon)) \ge \min_{i \ne j,\ i, j \in \langle n \rangle} r_{ij}(\mathcal{A}(\varepsilon)).$$
Notice that $\rho(\mathcal{A}(\varepsilon))$ and $r_{ij}(\mathcal{A}(\varepsilon))$ are continuous functions of $\varepsilon$. Letting $\varepsilon \to 0$, we obtain
$$\rho(\mathcal{A}) \ge \min_{i \ne j,\ i, j \in \langle n \rangle} r_{ij}(\mathcal{A}). \quad \square$$
Remark 2. 
The bounds for the spectral radius of nonnegative tensors given by Theorem 13 are not always an improvement of Theorem 12; combining the two theorems yields the following sharper result.
Theorem 14. 
If $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}_+^{[m,n]}$, then
$$\max\left\{ \min_{i \in \langle n \rangle} \bar r_i(\mathcal{A}),\ \min_{i \ne j,\ i, j \in \langle n \rangle} r_{ij}(\mathcal{A}) \right\} \le \rho(\mathcal{A}) \le \min\left\{ \max_{i \in \langle n \rangle} \bar r_i(\mathcal{A}),\ \max\left\{ \max_{i \in \langle n \rangle} \left\{ a_{ii} + r_i^{[i]}(\mathcal{A}) \right\},\ \max_{i \ne j,\ i, j \in \langle n \rangle} r_{ij}(\mathcal{A}) \right\} \right\},$$
where $r_{ij}(\mathcal{A})$ is defined as in Theorem 13.
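The quantities $r_{ij}(\mathcal{A})$ and the combined bounds of Theorem 14 translate directly into code; the following is a hedged third-order sketch built on the earlier helpers, with names of our choosing.

```python
import math

def r_ij(A, i, j):
    """r_ij(A) from Theorem 13 (3rd-order sketch)."""
    ai = A[i, i, i] + r_part(A, i, i)            # a_ii + r_i^{[i]}
    aj = A[j, j, j] + r_bar_part(A, j, i)        # a_jj + bar r_j^{[i]}
    disc = (ai - aj) ** 2 + 4 * r_bar_part(A, i, i) * r_part(A, j, i)
    return 0.5 * (ai + aj + math.sqrt(disc))

def theorem14_bounds(A):
    """Combined bounds of Theorems 12 and 13 (Theorem 14) for a nonnegative 3rd-order tensor."""
    n = A.shape[0]
    rij = [r_ij(A, i, j) for i in range(n) for j in range(n) if i != j]
    lo12, hi12 = row_sum_bounds(A)
    diag_plus = max(A[i, i, i] + r_part(A, i, i) for i in range(n))
    return max(lo12, min(rij)), min(hi12, max(diag_plus, max(rij)))
```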
Similarly, based on Theorems 7 and 8 in Section 3, we have the following estimation inequalities for the upper and lower bounds of the spectral radius of nonnegative tensors.
Theorem 15. 
If $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}_+^{[m,n]}$ is weakly irreducible, then
$$\min\left\{ \min_{e_{ij} \in \Gamma(\hat A)} s_{ij}(\mathcal{A}),\ \min_{\substack{i \in \langle n \rangle \\ \Gamma_i^+(\hat A) = \emptyset}} \bar r_i(\mathcal{A}) \right\} \le \rho(\mathcal{A}) \le \max\left\{ \max_{e_{ij} \in \Gamma(\hat A)} s_{ij}(\mathcal{A}),\ \max_{\substack{i \in \langle n \rangle \\ \Gamma_i^+(\hat A) = \emptyset}} \bar r_i(\mathcal{A}) \right\},$$
where
$$s_{ij}(\mathcal{A}) = \frac{1}{2}\left\{ a_{ii} + a_{jj} + r_i(\mathcal{A}) - r_i(\hat A) + \left[ \left( a_{ii} - a_{jj} + r_i(\mathcal{A}) - r_i(\hat A) \right)^2 + 4\, r_i(\hat A)\, r_j(\mathcal{A}) \right]^{\frac{1}{2}} \right\}.$$
Proof. 
From Theorem 10, we have $\rho(\mathcal{A}) \in \sigma(\mathcal{A})$. From Theorem 7, we know there exist $i_0, j_0 \in \langle n \rangle$ with $e_{i_0 j_0} \in \Gamma(\hat A)$ satisfying
$$\left(\rho(\mathcal{A}) - a_{j_0 j_0}\right)\left(\rho(\mathcal{A}) - a_{i_0 i_0} - r_{i_0}(\mathcal{A}) + r_{i_0}(\hat A)\right) \le r_{j_0}(\mathcal{A})\, r_{i_0}(\hat A),$$
or there exists $i_0 \in \langle n \rangle$ with $\Gamma_{i_0}^+(\hat A) = \emptyset$ satisfying
$$\rho(\mathcal{A}) - a_{i_0 i_0} \le r_{i_0}(\mathcal{A}).$$
Therefore, we have
$$\rho(\mathcal{A}) \le \max\left\{ \max_{e_{ij} \in \Gamma(\hat A)} s_{ij}(\mathcal{A}),\ \max_{\substack{i \in \langle n \rangle \\ \Gamma_i^+(\hat A) = \emptyset}} \bar r_i(\mathcal{A}) \right\}.$$
Next, we prove that the left-hand side of the inequality of the theorem holds.
Since $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}_+^{[m,n]}$ is weakly irreducible, from Theorem 11 there exists $x = (x_1, x_2, \ldots, x_n)^T \in \mathbb{R}_{++}^n$ such that
$$\mathcal{A}x^{m-1} = \rho(\mathcal{A})\, x^{[m-1]}. \qquad (9)$$
Without loss of generality, suppose that $x_{t_1} \ge x_{t_2} \ge \cdots \ge x_{t_{n-1}} \ge x_{t_n} > 0$.
(1.1)
If $\Gamma_{t_n}^+(\hat A) = \emptyset$, then $r_{t_n}(\hat A) = 0$. From (9), we have
$$\sum_{i_2, \ldots, i_m = 1}^{n} a_{t_n i_2 \cdots i_m} x_{i_2} \cdots x_{i_m} = \rho(\mathcal{A})\, x_{t_n}^{m-1}. \qquad (10)$$
Therefore,
$$\rho(\mathcal{A}) \ge \bar r_{t_n}(\mathcal{A}).$$
(1.2)
If $\Gamma_{t_n}^+(\hat A) \ne \emptyset$, assume
$$a_{t_n t_{n-1} t_{n-1} \cdots t_{n-1}} = \cdots = a_{t_n t_{n-r+1} t_{n-r+1} \cdots t_{n-r+1}} = 0, \qquad a_{t_n t_{n-r} t_{n-r} \cdots t_{n-r}} \ne 0, \quad r \le n - 1;$$
then, $e_{t_n t_{n-r}} \in \Gamma(\hat A)$. From (10), we have
$$\left(\rho(\mathcal{A}) - a_{t_n t_n} - r_{t_n}(\mathcal{A}) + r_{t_n}(\hat A)\right) x_{t_n}^{m-1} \ge r_{t_n}(\hat A)\, x_{t_{n-r}}^{m-1}.$$
Similarly, from
$$\sum_{i_2, \ldots, i_m = 1}^{n} a_{t_{n-r} i_2 \cdots i_m} x_{i_2} \cdots x_{i_m} = \rho(\mathcal{A})\, x_{t_{n-r}}^{m-1},$$
we obtain
$$\left(\rho(\mathcal{A}) - a_{t_{n-r} t_{n-r}}\right) x_{t_{n-r}}^{m-1} \ge r_{t_{n-r}}(\mathcal{A})\, x_{t_n}^{m-1}.$$
Therefore, we have
$$\left(\rho(\mathcal{A}) - a_{t_n t_n} - r_{t_n}(\mathcal{A}) + r_{t_n}(\hat A)\right)\left(\rho(\mathcal{A}) - a_{t_{n-r} t_{n-r}}\right) x_{t_n}^{m-1} x_{t_{n-r}}^{m-1} \ge r_{t_n}(\hat A)\, r_{t_{n-r}}(\mathcal{A})\, x_{t_n}^{m-1} x_{t_{n-r}}^{m-1};$$
that is,
$$\rho(\mathcal{A}) \ge s_{t_n t_{n-r}}(\mathcal{A}) \ge \min_{e_{ij} \in \Gamma(\hat A)} s_{ij}(\mathcal{A}).$$
Combining (1.1) and (1.2) proves the left-hand inequality. □
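For completeness, here is a hedged third-order sketch of the quantities $s_{ij}(\mathcal{A})$ and the bounds of Theorem 15, reusing the majorization-matrix helpers; the function names are ours.

```python
def s_ij(A, A_hat, i, j):
    """s_ij(A) from Theorem 15 (3rd-order sketch)."""
    shift = A[i, i, i] + A[j, j, j] + r(A, i) - r_hat(A_hat, i)
    gap = A[i, i, i] - A[j, j, j] + r(A, i) - r_hat(A_hat, i)
    return 0.5 * (shift + math.sqrt(gap ** 2 + 4 * r_hat(A_hat, i) * r(A, j)))

def theorem15_bounds(A):
    """Bounds of Theorem 15 for a nonnegative, weakly irreducible 3rd-order tensor."""
    n = A.shape[0]
    A_hat = majorization_matrix(A)
    cands = [s_ij(A, A_hat, i, j) for (i, j) in digraph_edges(A_hat)]
    cands += [r_bar(A, i) for i in range(n) if not out_neighbours(A_hat, i)]
    return min(cands), max(cands)
```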
The estimation of the spectral radius of a general nonnegative tensor has the following result.
Theorem 16. 
If $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}_+^{[m,n]}$, then
$$\min_{i \ne j,\ i, j \in \langle n \rangle} s_{ij}(\mathcal{A}) \le \rho(\mathcal{A}) \le \max\left\{ \max_{e_{ij} \in \Gamma(\hat A)} s_{ij}(\mathcal{A}),\ \max_{\substack{i \in \langle n \rangle \\ \Gamma_i^+(\hat A) = \emptyset}} \bar r_i(\mathcal{A}) \right\},$$
where $s_{ij}(\mathcal{A})$ is defined as in Theorem 15.
Proof. 
We only need to prove the inequality on the left. If $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}_+^{[m,n]}$ is not weakly irreducible, we construct the nonnegative tensors $\mathcal{A}(\varepsilon) = (a_{i_1 i_2 \cdots i_m}(\varepsilon)) \in \mathbb{R}_+^{[m,n]}$, $\varepsilon > 0$, where
$$a_{i_1 i_2 \cdots i_m}(\varepsilon) = \begin{cases} a_{i_1 i_2 \cdots i_m} + \varepsilon, & \text{if } \delta_{i_1 i_2 \cdots i_m} = 0, \\ a_{i_1 i_2 \cdots i_m}, & \text{otherwise}; \end{cases}$$
then, $\mathcal{A}(\varepsilon)$ is weakly irreducible. Arguing as in the proof of Theorem 15, and using that $\rho(\mathcal{A}(\varepsilon))$ is a continuous function of $\varepsilon$, letting $\varepsilon \to 0$ we obtain
$$\rho(\mathcal{A}) \ge \min_{i \ne j,\ i, j \in \langle n \rangle} s_{ij}(\mathcal{A}). \quad \square$$
The following results show that Theorem 15 is an improvement of Theorem 12.
Theorem 17. 
If $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}_+^{[m,n]}$, then
$$\min_{i \in \langle n \rangle} \bar r_i(\mathcal{A}) \le \min\left\{ \min_{e_{ij} \in \Gamma(\hat A)} s_{ij}(\mathcal{A}),\ \min_{\substack{i \in \langle n \rangle \\ \Gamma_i^+(\hat A) = \emptyset}} \bar r_i(\mathcal{A}) \right\} \le \rho(\mathcal{A}) \le \max\left\{ \max_{e_{ij} \in \Gamma(\hat A)} s_{ij}(\mathcal{A}),\ \max_{\substack{i \in \langle n \rangle \\ \Gamma_i^+(\hat A) = \emptyset}} \bar r_i(\mathcal{A}) \right\} \le \max_{i \in \langle n \rangle} \bar r_i(\mathcal{A}),$$
where $s_{ij}(\mathcal{A})$ is defined as in Theorem 15.
Proof. 
Without loss of generality, suppose that for any $i, j \in \langle n \rangle$, $i \ne j$, with $e_{ij} \in \Gamma(\hat A)$, we have $\bar r_i(\mathcal{A}) \ge \bar r_j(\mathcal{A})$. Then,
$$\left( a_{ii} - a_{jj} + r_i(\mathcal{A}) - r_i(\hat A) \right)^2 + 4\, r_i(\hat A)\, r_j(\mathcal{A}) \le \left( a_{ii} - a_{jj} + r_i(\mathcal{A}) - r_i(\hat A) \right)^2 + 4\, r_i(\hat A)\left( \bar r_i(\mathcal{A}) - a_{jj} \right) = \left( a_{ii} - a_{jj} + r_i(\mathcal{A}) - r_i(\hat A) \right)^2 + 4\, r_i(\hat A)\left( a_{ii} - a_{jj} + r_i(\mathcal{A}) \right) = \left( a_{ii} - a_{jj} + r_i(\mathcal{A}) + r_i(\hat A) \right)^2.$$
When $a_{ii} - a_{jj} + r_i(\mathcal{A}) + r_i(\hat A) \ge 0$, we have
$$s_{ij}(\mathcal{A}) = \frac{1}{2}\left\{ a_{ii} + a_{jj} + r_i(\mathcal{A}) - r_i(\hat A) + \left[ \left( a_{ii} - a_{jj} + r_i(\mathcal{A}) - r_i(\hat A) \right)^2 + 4\, r_i(\hat A)\, r_j(\mathcal{A}) \right]^{\frac{1}{2}} \right\} \le \frac{1}{2}\left\{ a_{ii} + a_{jj} + r_i(\mathcal{A}) - r_i(\hat A) + a_{ii} - a_{jj} + r_i(\mathcal{A}) + r_i(\hat A) \right\} = a_{ii} + r_i(\mathcal{A}) = \bar r_i(\mathcal{A}) \le \max_{i \in \langle n \rangle} \bar r_i(\mathcal{A}).$$
When $a_{ii} - a_{jj} + r_i(\mathcal{A}) + r_i(\hat A) < 0$, we have
$$s_{ij}(\mathcal{A}) \le \frac{1}{2}\left\{ a_{ii} + a_{jj} + r_i(\mathcal{A}) - r_i(\hat A) - \left( a_{ii} - a_{jj} + r_i(\mathcal{A}) + r_i(\hat A) \right) \right\} = a_{jj} - r_i(\hat A) \le \max_{i \in \langle n \rangle} \bar r_i(\mathcal{A}).$$
Therefore, from Theorem 15, we have
$$\rho(\mathcal{A}) \le \max\left\{ \max_{e_{ij} \in \Gamma(\hat A)} s_{ij}(\mathcal{A}),\ \max_{\substack{i \in \langle n \rangle \\ \Gamma_i^+(\hat A) = \emptyset}} \bar r_i(\mathcal{A}) \right\} \le \max_{i \in \langle n \rangle} \bar r_i(\mathcal{A}).$$
Similarly to the above, we have
$$\min_{i \in \langle n \rangle} \bar r_i(\mathcal{A}) \le \min\left\{ \min_{e_{ij} \in \Gamma(\hat A)} s_{ij}(\mathcal{A}),\ \min_{\substack{i \in \langle n \rangle \\ \Gamma_i^+(\hat A) = \emptyset}} \bar r_i(\mathcal{A}) \right\} \le \rho(\mathcal{A}). \quad \square$$
Example 3. 
Let $\mathcal{A} \in \mathbb{R}_+^{[3,3]}$, where
$$\mathcal{A}(1,:,:) = \begin{pmatrix} 5 & 1 & 3 \\ 2 & 2 & 4 \\ 3 & 6 & 2 \end{pmatrix}, \qquad \mathcal{A}(2,:,:) = \begin{pmatrix} 3 & 4 & 5 \\ 3 & 8 & 2 \\ 3 & 4 & 1 \end{pmatrix}, \qquad \mathcal{A}(3,:,:) = \begin{pmatrix} 2 & 5 & 6 \\ 4 & 2 & 6 \\ 0 & 3 & 8 \end{pmatrix}.$$
By computation, $\rho(\mathcal{A}) = 32.1135$. From Theorem 14, we obtain
$$\bar r_1(\mathcal{A}) = 28, \quad \bar r_2(\mathcal{A}) = 33, \quad \bar r_3(\mathcal{A}) = 36, \quad a_{111} + r_1^{[1]}(\mathcal{A}) = 14, \quad a_{222} + r_2^{[2]}(\mathcal{A}) = 21, \quad a_{333} + r_3^{[3]}(\mathcal{A}) = 23,$$
$$r_{12}(\mathcal{A}) \approx 29.9353, \quad r_{13}(\mathcal{A}) \approx 30.1285, \quad r_{21}(\mathcal{A}) \approx 30.4536, \quad r_{23}(\mathcal{A}) \approx 28.2082, \quad r_{31}(\mathcal{A}) \approx 33.1208, \quad r_{32}(\mathcal{A}) \approx 34.2829.$$
Therefore,
$$28.2082 \le \rho(\mathcal{A}) \le 34.2829.$$
From Theorem 12,
$$28 \le \rho(\mathcal{A}) \le 36.$$
Example 4. 
Let $\mathcal{A} \in \mathbb{R}_+^{[3,3]}$, where
$$\mathcal{A}(1,:,:) = \begin{pmatrix} 3 & 1 & 3 \\ 2 & 2 & 5 \\ 3 & 6 & 1 \end{pmatrix}, \qquad \mathcal{A}(2,:,:) = \begin{pmatrix} 0 & 2 & 5 \\ 2 & 5 & 4 \\ 6 & 5 & 0 \end{pmatrix}, \qquad \mathcal{A}(3,:,:) = \begin{pmatrix} 3 & 4 & 6 \\ 1 & 5 & 2 \\ 2 & 1 & 7 \end{pmatrix}.$$
We know that $\mathcal{A}$ is weakly irreducible, and its majorization matrix is
$$\hat A = \begin{pmatrix} 3 & 2 & 1 \\ 0 & 5 & 0 \\ 3 & 5 & 7 \end{pmatrix},$$
so that $\Gamma(\hat A) = \{ e_{12}, e_{13}, e_{31}, e_{32} \}$ and $\Gamma_2^+(\hat A) = \emptyset$.
By computation, $\rho(\mathcal{A}) = 28.8482$. From Theorem 15, we obtain
$$\bar r_1(\mathcal{A}) = 26, \quad \bar r_2(\mathcal{A}) = 29, \quad \bar r_3(\mathcal{A}) = 31,$$
$$s_{12}(\mathcal{A}) \approx 26.3693, \quad s_{13}(\mathcal{A}) \approx 26.9146, \quad s_{31}(\mathcal{A}) \approx 29.8523, \quad s_{32}(\mathcal{A}) \approx 30.5227.$$
Therefore,
$$26.3693 \le \rho(\mathcal{A}) \le 30.5227.$$
From Theorem 12,
$$26 \le \rho(\mathcal{A}) \le 31.$$
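Under the same assumptions as the earlier sketches, Example 4 can be checked end to end; the expected outputs in the comments are approximate and follow the values reported above.

```python
# Example 4 tensor, entered slice by slice as printed above
A4 = np.array([
    [[3, 1, 3], [2, 2, 5], [3, 6, 1]],
    [[0, 2, 5], [2, 5, 4], [6, 5, 0]],
    [[3, 4, 6], [1, 5, 2], [2, 1, 7]],
], dtype=float)
print(row_sum_bounds(A4))            # Theorem 12: (26.0, 31.0)
print(theorem15_bounds(A4))          # Theorem 15: roughly (26.37, 30.52)
print(approx_spectral_radius(A4))    # should be close to the reported 28.8482
```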

5. Conclusions

In this paper, by systematically analyzing the structure of tensors, one class of quasi-double diagonally dominant tensors was defined, and another class was defined by applying the digraph of the majorization matrix of a tensor; both classes were proved to be H-tensors, which further extends the determination conditions for H-tensors. Moreover, inequalities for estimating the upper and lower bounds of the spectral radius (the largest H-eigenvalue) of nonnegative tensors were given, based on the relationship between the diagonal dominance of a tensor (H-tensor) and the inclusion domains of its eigenvalues, and these inequalities improve the classical Perron–Frobenius-type bounds for the spectral radius of nonnegative tensors. This paper thus provides new ways to obtain more refined determination conditions for H-tensors and to improve the inequalities estimating the upper and lower bounds of the spectral radius of nonnegative tensors.

Author Contributions

In this paper, H.L. proposed the concept of the quasi-double diagonal dominance of tensors, and X.W. consulted the relevant literature and specifically gave two quasi-double diagonal dominance forms of tensors. X.W. and H.L. jointly completed the proof of the theorem, and H.L. reviewed it. All authors have read and agreed to the submitted version of the manuscript.

Funding

This work was supported by the Natural Sciences Program of Science and Technology of Jilin Province of China (20190201139JC).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lim, L.H. Singular values and eigenvalues of tensors: A variational approach. In Proceedings of the 1st IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, Puerto Vallarta, Mexico, 13–15 December 2005; pp. 129–132.
  2. Qi, L.Q. Eigenvalues of a real supersymmetric tensor. J. Symb. Comput. 2005, 40, 1302–1324.
  3. Cichocki, A.; Zdunek, R.; Phan, A.H.; Amari, S.I. Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-Way Data Analysis and Blind Source Separation; John Wiley and Sons, Ltd.: Natick, MA, USA, 2009.
  4. Hu, S.; Huang, Z.H.; Ling, C.; Qi, L. On determinants and eigenvalue theory of tensors. J. Symb. Comput. 2013, 50, 508–531.
  5. Moakher, M. On the averaging of symmetric positive-definite tensors. J. Elast. 2006, 82, 273–296.
  6. Nikias, C.L.; Mendel, J.M. Signal processing with higher-order spectra. IEEE Signal Process. Mag. 1993, 10, 10–37.
  7. Ding, W.Y.; Qi, L.Q.; Wei, Y.M. M-tensors and nonsingular M-tensors. Linear Algebra Appl. 2013, 439, 3264–3278.
  8. Kannan, M.R.; Shaked-Monderer, N.; Berman, A. Some properties of strong H-tensors and general H-tensors. Linear Algebra Appl. 2015, 476, 42–55.
  9. Zhang, L.P.; Qi, L.Q.; Zhou, G.L. M-tensors and some applications. SIAM J. Matrix Anal. Appl. 2014, 35, 437–452.
  10. Li, X.; Ng, M.K. Solving sparse non-negative tensor equations: Algorithms and applications. Front. Math. China 2015, 10, 649–680.
  11. Li, X.; Ng, M.K.; Ye, Y. HAR: Hub, Authority and Relevance scores in multi-relational data for query search. In Proceedings of the 2012 SIAM International Conference on Data Mining, Anaheim, CA, USA, 26–28 April 2012; pp. 141–152.
  12. Li, X.; Ng, M.K.; Ye, Y. MultiComm: Finding community structure in multi-dimensional networks. IEEE Trans. Knowl. Data Eng. 2014, 26, 929–941.
  13. Ng, M.K.; Li, X.; Ye, Y. MultiRank: Co-ranking for objects and relations in multi-relational data. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Diego, CA, USA, 21–24 August 2011; pp. 1217–1225.
  14. Jin, H.; Kannan, M.; Bai, M. Lower and upper bounds for H-eigenvalues of even order real symmetric tensors. Linear Multilinear Algebra 2017, 65, 1402–1416.
  15. Li, S.; Chen, Z.; Li, C.; Zhao, J. Eigenvalue bounds of third-order tensors via the minimax eigenvalue of symmetric matrices. Comput. Appl. Math. 2020, 39, 293–312.
  16. Chang, K.C.; Pearson, K.; Zhang, T. Perron–Frobenius theorem for nonnegative tensors. Commun. Math. Sci. 2008, 6, 507–520.
  17. Friedland, S.; Gaubert, S.; Han, L. Perron–Frobenius theorem for nonnegative multilinear forms and extensions. Linear Algebra Appl. 2013, 438, 738–749.
  18. Kolda, T.G.; Bader, B.W. Tensor decompositions and applications. SIAM Rev. 2009, 51, 455–500.
  19. Pearson, K.J. Essentially positive tensors. Int. J. Algebra 2010, 4, 421–427.
  20. Horn, R.A.; Johnson, C.R. Matrix Analysis; Cambridge University Press: New York, NY, USA, 1985.
  21. Yang, Y.; Yang, Q. Further results for Perron–Frobenius theorem for nonnegative tensors. SIAM J. Matrix Anal. Appl. 2010, 31, 2517–2530.
