Article

Polynomial and Pseudopolynomial Procedures for Solving Interval Two-Sided (Max, Plus)-Linear Systems

Department of Mathematics and Theoretical Informatics, Technical University of Košice, 04200 Košice, Slovakia
* Author to whom correspondence should be addressed.
Submission received: 14 October 2021 / Revised: 11 November 2021 / Accepted: 17 November 2021 / Published: 18 November 2021
(This article belongs to the Special Issue Combinatorics and Computation in Commutative Algebra)

Abstract

Max-plus algebra is an analogue of classical linear algebra with two binary operations, maximum and addition. The notation A ⊗ x = B ⊗ x, where A, B are given (interval) matrices, represents an (interval) two-sided (max, plus)-linear system. For the solvability of A ⊗ x = B ⊗ x there are pseudopolynomial algorithms, but no polynomial algorithm is known. This paper analyzes the solvability of two-sided (max, plus)-linear equations with inexact (interval) data. The purpose of the paper is to obtain efficient necessary and sufficient conditions for the solvability of the interval systems using properties of the solution set of the non-interval system A ⊗ x = B ⊗ x. The main contribution of the paper is a transformation of the weak versions of solvability either to subeigenvector problems or to non-interval two-sided (max, plus)-linear systems, and equivalent, polynomially checkable conditions for the strong versions of solvability.

1. Introduction and Preliminaries

An algebraic structure in which addition is substituted by maximum and multiplication by addition is called max-plus algebra. The solvability of systems of linear equations is one of the crucial questions that are considered in max-plus algebra. Systems of (max, plus)-linear equations are used in the modeling and analysis of discrete dynamic systems and various versions of real-life optimizations.
Consider a generalization of discrete event dynamic systems ([1,2,3]) with m entities E_1, …, E_m producing entity outputs O_1, …, O_n (data, products, etc.), working in stages, whereby each entity contributes to the completion of every entity output and works on all outputs simultaneously. The state of entity E_i after stage k is described by the entry x_i(k) of a vector x(k), and the element a_ij of a matrix A expresses the influence of the activity of E_j in the previous stage on the activity of E_i in the current stage, with the goal of completing the partial entity output O_i. Moreover, every entity must wait until its preceding entities finish their activity and the necessary influence constraints are met; formally, (A ⊗ x(k))_i = ⊕_{j} a_ij ⊗ x_j(k). Further, similarly to [2], suppose that m other entities F_1, …, F_m prepare partial entity outputs for entity outputs U_1, …, U_n, where b_ij and x_j, as above, encode the influence of the work and the state of the corresponding entity, respectively; this yields (B ⊗ x(k))_i = ⊕_{j} b_ij ⊗ x_j(k).
Consider a model with a synchronization condition: find the states of all 2m entities so that each pair (O_i, U_i) is completed at the same stage. Algebraically, we must solve the two-sided max-plus system A ⊗ x = B ⊗ x.
The study of the properties of systems of two-sided (max, plus)-linear equations is important for many applications. If the matrix and vector entries are estimated inexactly, then one of the methods of restoring solvability is to substitute the matrix and vectors by an interval matrix and interval vectors, respectively.
In this paper, we consider matrices and vectors with inexact (interval) entries and analyze several versions of the solvability of the interval systems with respect to the quantifiers used and their order. The goal of this paper is to present weak and strong versions of the solvability of A ⊗ X = B ⊗ X and to prove necessary and sufficient conditions for them, for a vector X and matrices A, B with inexact (interval) entries. Moreover, some of the equivalent conditions for the strong versions of solvability can be checked in a polynomial number of arithmetic operations. Systems of two-sided (max, plus)-linear equations A ⊗ x = B ⊗ x have been studied by many authors [3,4,5,6,7,8,9,10,11,12,13,14,15,16,17]. The motivation for this research comes from papers [8,10]. Paper [8] studies six types of solutions of systems of two-sided (max, plus)-linear equations A ⊗ x = B ⊗ x and suggests a method for computing the solution set of a two-sided (max, plus)-linear system. Notice that the present paper generalizes the results obtained in [8].
Let us now give more details on the organization of the paper and the results obtained. The remainder of this section presents the notation and basic definitions. Section 2 and Section 3 deal with definitions and properties of subeigenvectors and of two-sided max-plus systems A ⊗ x = B ⊗ x with A ≤ B. Section 4 is devoted to the classification of interval solutions of interval two-sided max-plus systems and to the characterization of necessary and sufficient conditions for the strong and weak versions of solvability. Based on these results, we also give a method for testing the equivalent conditions obtained in Theorems 13 and 16.
Denote the set of real numbers by R and the set of all natural numbers by N. The symbol R̄ will stand for R ∪ {−∞}. For two elements a, b ∈ R̄, we set a ⊕ b = max(a, b) and a ⊗ b = a + b. Throughout the paper, we denote the neutral element −∞ with respect to ⊕ by ε and the neutral element 0 with respect to ⊗ by e. For given natural numbers n, m ∈ N, we use the notations N = {1, 2, …, n} and M = {1, …, m}. The matrix operations over R ∪ {−∞} are defined formally in the same manner (with respect to ⊕, ⊗) as matrix operations over any field. The rth power of a matrix A is denoted by A^r.
Suppose that n ≥ 1, m ≥ 1 are given integers. The set of n × m matrices over R̄ is denoted by R̄(n, m); in particular, the set of n × 1 vectors over R̄ is denoted by R̄(n). The triple (R̄, ⊕, ⊗) is called max-plus algebra. If each entry of a matrix A ∈ R̄(n, n) (a vector x ∈ R̄(n)) is equal to ε, we shall denote this by A = ε (x = ε).
For A, C ∈ R̄(m, n), we write A ≤ C if a_ij ≤ c_ij holds for all i ∈ M, j ∈ N. Similarly, for x = (x_1, …, x_n)^T ∈ R̄(n) and y = (y_1, …, y_n)^T ∈ R̄(n), we write x ≤ y if x_i ≤ y_i for each i ∈ N.
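To make the arithmetic concrete, the (⊕, ⊗) matrix operations can be sketched in a few lines of Python. This is purely our illustration (not part of the paper); ε is represented by the floating-point −∞, which behaves correctly under max and +.

```python
# Max-plus arithmetic over R ∪ {−∞}: ⊕ = max with neutral element
# ε = −∞, and ⊗ = + with neutral element e = 0.
EPS = float("-inf")  # the element ε

def mp_mat_mul(A, B):
    """Max-plus matrix product: (A ⊗ B)_ij = max_k (a_ik + b_kj)."""
    return [[max(a + b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def mp_mat_vec(A, x):
    """Max-plus matrix-vector product: (A ⊗ x)_i = max_j (a_ij + x_j)."""
    return [max(a + xj for a, xj in zip(row, x)) for row in A]
```

For instance, mp_mat_vec([[3, 4, 4], [6, 3, 1]], [1, 2, 0]) evaluates to [6, 7], and multiplying by the max-plus identity matrix (0 on the diagonal, ε elsewhere) leaves a matrix unchanged.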
By a digraph, we understand a pair G = (V_G, E_G), where V_G is a non-empty finite set, called the node set, and E_G ⊆ V_G × V_G is called the arc set. A digraph G′ is a subdigraph of a digraph G if V_G′ ⊆ V_G and E_G′ ⊆ E_G. A walk in G is a sequence of nodes W = (i_0, i_1, …, i_l) such that (i_{k−1}, i_k) ∈ E_G for all k = 1, 2, …, l. The number l ≥ 0 is called the length of W and is denoted by l(W). If i_0 = i_l, then W is a cycle of length l. A cycle is elementary if all nodes except the terminal node are distinct. A digraph is called strongly connected if any two distinct nodes of G are contained in a common cycle.
By a strongly connected component of a digraph G = (V_G, E_G), we mean a subdigraph K = (V_K, E_K), where the node set V_K ⊆ V_G is such that any two distinct nodes i, j ∈ V_K are contained in a common cycle, E_K = E_G ∩ (V_K × V_K), and V_K is maximal with this property. A strongly connected component K of a digraph is called non-trivial if there is a cycle of positive length in K.
For a given matrix A ∈ R̄(n, n), the weighted digraph G(A) associated with A is the digraph with node set V_G(A) = N and arc set E_G(A) = {(i, j) ∈ N × N; a_ij ≠ ε}. If G = G(A), then the weight of W = (i_0, i_1, …, i_l) is defined by w(W) = a_{i_0 i_1} + a_{i_1 i_2} + … + a_{i_{l−1} i_l}. The cycle mean of a cycle c = (i_0, i_1, …, i_{l−1}, i_0) is defined by w̄(c) = (a_{i_0 i_1} + a_{i_1 i_2} + … + a_{i_{l−1} i_0}) / l(c), and the maximum cycle mean of A is defined as λ(A) = max_c w̄(c), the maximum taken over all cycles of G(A).
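The maximum cycle mean can be computed from max-plus matrix powers, since (A^⊗k)_ii is the greatest weight of a closed walk of length k through node i. The Python sketch below is our illustration (an O(n⁴) method for clarity; it is not the algorithm used in the paper, and Karp's O(n³) algorithm would be the standard faster choice):

```python
EPS = float("-inf")  # the element ε

def mp_mat_mul(A, B):
    # (A ⊗ B)_ij = max_k (a_ik + b_kj)
    return [[max(a + b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def max_cycle_mean(A):
    """λ(A) = max over cycles c of w(c)/l(c), obtained as
    max over 1 <= k <= n of max_i (A^{⊗k})_ii / k.
    Returns ε when G(A) is acyclic."""
    n = len(A)
    P, lam = A, EPS            # P runs through the powers A, A², …
    for k in range(1, n + 1):
        lam = max(lam, max(P[i][i] / k for i in range(n)))
        P = mp_mat_mul(P, A)
    return lam
```

For the matrix C^{J_2} of Example 1 below this yields 3/2, in line with the value quoted there.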
A matrix A ∈ R̄(n, n) is called irreducible if G(A) is strongly connected, and reducible otherwise.
For a given matrix A ∈ R̄(n, n), the number λ ∈ R̄ and the n-tuple x ∈ R̄(n), x ≠ ε, are a so-called eigenvalue (subeigenvalue) and eigenvector (subeigenvector) of A, respectively, if
A ⊗ x = λ ⊗ x   (A ⊗ x ≤ λ ⊗ x).

2. Subeigenvectors

The column span of a matrix A with columns A_1, …, A_n is defined as {⊕_{i∈N} α_i ⊗ A_i; α_1, …, α_n ∈ R} and will be denoted by span(A).
For a given A ∈ R̄(n, n) and λ ∈ R̄, denote A_λ = λ^{−1} ⊗ A (i.e., λ is subtracted from each entry of A) and A* = I ⊕ A ⊕ A² ⊕ A³ ⊕ …, called the Kleene star.
An eigenspace V(A, λ) is defined as the set of all eigenvectors of A with associated eigenvalue λ, i.e.,
V(A, λ) = {x ∈ R̄(n) \ {ε}; A ⊗ x = λ ⊗ x},
and a subeigenspace V*(A, λ) is defined as the set of all subeigenvectors of A with associated subeigenvalue λ, i.e.,
V*(A, λ) = {x ∈ R̄(n); A ⊗ x ≤ λ ⊗ x}.
The set Λ(A) = {λ ∈ R̄; V(A, λ) ≠ ∅} will be called the spectrum of A.
Any reducible matrix A can be transformed by simultaneous permutations of rows and columns into a Frobenius normal form

A = ( A_11  ε    …  ε    )
    ( A_21  A_22 …  ε    )
    (  ⋮     ⋮       ⋮   )
    ( A_r1  A_r2 …  A_rr ) ,

where A_11, …, A_rr are irreducible square submatrices of A ([1,18]). N_1, …, N_r denote the corresponding partition subsets of the node set of G(A). The notation N_i → N_j means that there is a directed path from N_i to N_j in the reduced digraph with nodes N_1, N_2, …, N_r and arc set {(N_i, N_j); (∃k ∈ N_i)(∃s ∈ N_j) a_ks > ε}.
The diagonal block A_jj is called spectral if λ(A_jj) = max_{N_i → N_j} λ(A_ii), and we denote λ_min = min{λ(A_ii); A_ii is spectral} (= min Λ(A)). For more details, see [1,18].
Theorem 1
([18]). Let A ∈ R̄(n, n). Then V*(A, λ) ≠ {ε} if and only if λ ≥ λ_min, and V*(A, λ) = span(G), where G is the matrix consisting of the columns g_j of the matrix (A_λ)* with indices j ∈ ∪_{i∈I(λ)} N_i, where I(λ) = {i ∈ {1, …, r}; λ(A_ii) ≤ λ, A_ii is spectral}.
Notice that a basis of V*(A, λ) ≠ {ε} can be found in O(n³) time [19].
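When λ(A) ≤ 0, the Kleene star A* = I ⊕ A ⊕ A² ⊕ … stabilises after the (n−1)-st power, and every column of A* is then a subeigenvector for λ = 0. The following Python sketch is our illustration of the truncated star only; it does not perform the spectral-block column selection of Theorem 1:

```python
EPS = float("-inf")  # the element ε

def mp_mat_mul(A, B):
    # (A ⊗ B)_ij = max_k (a_ik + b_kj)
    return [[max(a + b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def kleene_star(A):
    """Truncated star I ⊕ A ⊕ … ⊕ A^(n−1). It equals the full series
    A* exactly when λ(A) <= 0, and then every column g of the result
    satisfies A ⊗ g <= g, i.e. g ∈ V*(A, 0)."""
    n = len(A)
    S = [[0 if i == j else EPS for j in range(n)] for i in range(n)]  # I
    P = S                                          # current power A^k
    for _ in range(n - 1):
        P = mp_mat_mul(P, A)                       # next power
        S = [[max(s, p) for s, p in zip(srow, prow)]
             for srow, prow in zip(S, P)]          # S := S ⊕ A^k
    return S
```

Applied to the matrix C^{J_1} of Example 1 below, this reproduces the matrix G_1 given there.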

3. Systems of Two-Sided (Max, Plus)-Linear Equations

In this section, we consider the two-sided max-plus linear system A ⊗ x = B ⊗ x for A ≤ B.
The set of solutions of the system A ⊗ x = B ⊗ x will be denoted by S(A, B), that is,
S(A, B) = {x ∈ R̄(n) \ {ε}; A ⊗ x = B ⊗ x},
and put
M_i(A, B) = {j ∈ N; a_ij = b_ij}.
Lemma 1.
Suppose A ∈ R̄(m, n), B ∈ R̄(m, n) with A ≤ B. If S(A, B) ≠ ∅, then M_i(A, B) ≠ ∅ for any i ∈ M.
Proof. 
Suppose that there is x ∈ R̄(n) \ {ε} such that A ⊗ x = B ⊗ x. For an arbitrary but fixed i ∈ M, there are j, k ∈ N such that a_ij ⊗ x_j = (A ⊗ x)_i = (B ⊗ x)_i = b_ik ⊗ x_k. Then a_ij ⊗ x_j = b_ik ⊗ x_k ≥ b_ij ⊗ x_j, and hence a_ij ≥ b_ij. Together with the assumption a_ij ≤ b_ij, we obtain the equality a_ij = b_ij, and hence j ∈ M_i(A, B). □
For J = (j_1, …, j_m) ∈ ×_{i∈M} M_i(A, B), put
N_k^J = {ℓ ∈ M; j_ℓ = k} for any k ∈ N
and define the matrix C^J(A, B) = (c_{ℓj}^J) ∈ R̄(n, n), where
c_{ℓj}^J = max_{k∈N_ℓ^J} {b_kj − b_kℓ} for j ≠ ℓ, and c_{ℓj}^J = ε if N_ℓ^J = ∅ or ℓ = j.
The matrix C^J(A, B) will be written briefly as C^J.
Theorem 2.
Suppose A, B ∈ R̄(m, n) and A ≤ B. Then x ∈ S(A, B) if and only if there is J ∈ ×_{i∈M} M_i(A, B) such that C^J(A, B) ⊗ x ≤ x.
Proof. 
Suppose that A ≤ B and J = (j_1, …, j_m) ∈ ×_{i∈M} M_i(A, B). Then x is an element of S(A, B) if and only if

max{a_11 + x_1, …, a_1n + x_n} = max{b_11 + x_1, …, b_1n + x_n} = b_{1 j_1} + x_{j_1}
⋮
max{a_m1 + x_1, …, a_mn + x_n} = max{b_m1 + x_1, …, b_mn + x_n} = b_{m j_m} + x_{j_m}.    (2)

Since A ≤ B and J = (j_1, …, j_m) ∈ ×_{i∈M} M_i(A, B), the system (2) is equivalent to

max_{j ≠ j_1} {b_1j + x_j} ≤ b_{1 j_1} + x_{j_1}, …, max_{j ≠ j_m} {b_mj + x_j} ≤ b_{m j_m} + x_{j_m},

i.e., to

max_{j ≠ j_1} {b_1j − b_{1 j_1} + x_j} ≤ x_{j_1}, …, max_{j ≠ j_m} {b_mj − b_{m j_m} + x_j} ≤ x_{j_m}.    (3)

Observe that some elements of J = (j_1, …, j_m) can be equal. Denote the set {ℓ ∈ M; j_ℓ = k} by N_k^J for any k ∈ N. Now we can simplify (3), using N_k^J, into the equivalent system C^J(A, B) ⊗ x ≤ x, where C^J(A, B) = (c_{ℓj}^J) with
c_{ℓj}^J = max_{k∈N_ℓ^J} {b_kj − b_kℓ} for j ≠ ℓ, and c_{ℓj}^J = ε if N_ℓ^J = ∅ or ℓ = j,
and the assertion follows. □
Thus, Theorem 2 turns the problem of the solvability of the system A ⊗ x = B ⊗ x into the problem of finding J ∈ ×_{i∈M} M_i(A, B) and a corresponding subeigenvector of C^J(A, B).
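The construction of C^J(A, B) is mechanical; the following Python sketch (our illustration, using 0-based indices) builds it from B and a choice J:

```python
EPS = float("-inf")  # the element ε

def build_CJ(B, J):
    """C^J from Theorem 2 (0-based): J[i] is the column j_i chosen in
    row i; N_l^J = {k; J[k] = l}; c_lj = max_{k in N_l^J} (b_kj - b_kl)
    for j != l, and c_lj = ε if N_l^J is empty or l = j."""
    m, n = len(B), len(B[0])
    C = [[EPS] * n for _ in range(n)]
    for l in range(n):
        rows = [k for k in range(m) if J[k] == l]   # N_l^J
        for j in range(n):
            if rows and j != l:
                C[l][j] = max(B[k][j] - B[k][l] for k in rows)
    return C
```

With the matrix B and the choice J_1 = (2, 1, 2, 1) of Example 1 below (0-based: (1, 0, 1, 0)), this reproduces the matrix C^{J_1} shown there.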
Example 1.
Solve the system A ⊗ x = B ⊗ x, where the matrices A, B, A ≤ B, have the forms

A = ( 3 4 4 )      B = ( 5 4 6 )
    ( 6 3 1 )          ( 6 4 7 )
    ( 1 5 4 )          ( 4 5 4 )
    ( 3 1 3 ) ,        ( 3 2 4 ) .

Then we obtain M_1(A, B) = {2}, M_2(A, B) = {1}, M_3(A, B) = {2, 3}, M_4(A, B) = {1}, so there are two admissible choices, J_1 = (2, 1, 2, 1) and J_2 = (2, 1, 3, 1), with N_1^{J_1} = {2, 4}, N_2^{J_1} = {1, 3} and N_3^{J_1} = ∅. Similarly, for J_2 = (2, 1, 3, 1), we have N_1^{J_2} = {2, 4}, N_2^{J_2} = {1} and N_3^{J_2} = {3}. To solve the system A ⊗ x = B ⊗ x, we consider the solvability of the two systems corresponding to J_1 and J_2, in which the row maxima of B ⊗ x are attained at the columns prescribed by J_1 and J_2, respectively.
Case (i). The choice J_1 = (2, 1, 2, 1) requires the row maxima

max{3 + x_1, 4 + x_2, 4 + x_3} = max{5 + x_1, 4 + x_2, 6 + x_3} = 4 + x_2
max{6 + x_1, 3 + x_2, 1 + x_3} = max{6 + x_1, 4 + x_2, 7 + x_3} = 6 + x_1
max{1 + x_1, 5 + x_2, 4 + x_3} = max{4 + x_1, 5 + x_2, 4 + x_3} = 5 + x_2
max{3 + x_1, 1 + x_2, 3 + x_3} = max{3 + x_1, 2 + x_2, 4 + x_3} = 3 + x_1,

which is equivalent to

max{1 + x_1, 2 + x_3} ≤ x_2
max{−2 + x_2, 1 + x_3} ≤ x_1
max{−1 + x_1, −1 + x_3} ≤ x_2
max{−1 + x_2, 1 + x_3} ≤ x_1,

i.e., to

max{−1 + x_2, 1 + x_3} ≤ x_1,   max{1 + x_1, 2 + x_3} ≤ x_2,

and in vector-matrix form

C^{J_1} ⊗ x = ( ε −1  1 )     ( x_1 )    ( x_1 )
              ( 1  ε  2 )  ⊗  ( x_2 ) ≤  ( x_2 )
              ( ε  ε  ε )     ( x_3 )    ( x_3 ) .

To obtain the set of all solutions of the system C^{J_1} ⊗ x ≤ x, we use Theorem 1 and look for the set of subeigenvectors of the matrix C^{J_1} with λ = 0, i.e., V*(C^{J_1}, 0). Since λ(C^{J_1}) = λ_min = 0, the set V*(C^{J_1}, 0) ≠ {ε} and V*(C^{J_1}, 0) = span(G_1), where G_1 is the matrix consisting of the columns of (C^{J_1})_0*. Thus, any solution of the system C^{J_1} ⊗ x ≤ x can be expressed as a max-plus linear combination of the columns of G_1 (one of the solutions is x = (1, 2, 0)^T), where

G_1 = (C^{J_1})_0* = ( 0 −1  1 )
                     ( 1  0  2 )
                     ( ε  ε  0 ) .
Case (ii). The choice J_2 = (2, 1, 3, 1) requires the row maxima

max{3 + x_1, 4 + x_2, 4 + x_3} = max{5 + x_1, 4 + x_2, 6 + x_3} = 4 + x_2
max{6 + x_1, 3 + x_2, 1 + x_3} = max{6 + x_1, 4 + x_2, 7 + x_3} = 6 + x_1
max{1 + x_1, 5 + x_2, 4 + x_3} = max{4 + x_1, 5 + x_2, 4 + x_3} = 4 + x_3
max{3 + x_1, 1 + x_2, 3 + x_3} = max{3 + x_1, 2 + x_2, 4 + x_3} = 3 + x_1,

which is equivalent to

max{1 + x_1, 2 + x_3} ≤ x_2,   max{−2 + x_2, 1 + x_3} ≤ x_1,   max{x_1, 1 + x_2} ≤ x_3,   max{−1 + x_2, 1 + x_3} ≤ x_1,

i.e., to

max{−1 + x_2, 1 + x_3} ≤ x_1,   max{1 + x_1, 2 + x_3} ≤ x_2,   max{x_1, 1 + x_2} ≤ x_3,

and in vector-matrix form

C^{J_2} ⊗ x = ( ε −1  1 )     ( x_1 )    ( x_1 )
              ( 1  ε  2 )  ⊗  ( x_2 ) ≤  ( x_2 )
              ( 0  1  ε )     ( x_3 )    ( x_3 ) ,

whereby, according to Theorem 1, the set V*(C^{J_2}, 0) = {ε} since λ(C^{J_2}) = 3/2 > 0 = λ. We can conclude that x is a solution of A ⊗ x = B ⊗ x if and only if x ∈ V*(C^{J_1}, 0) ∪ V*(C^{J_2}, 0) = span(G_1).
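A quick numerical check of Example 1 (a Python sketch of ours, not part of the original paper):

```python
EPS = float("-inf")  # the element ε

def mp_mat_vec(A, x):
    # (A ⊗ x)_i = max_j (a_ij + x_j)
    return [max(a + xj for a, xj in zip(row, x)) for row in A]

A = [[3, 4, 4], [6, 3, 1], [1, 5, 4], [3, 1, 3]]
B = [[5, 4, 6], [6, 4, 7], [4, 5, 4], [3, 2, 4]]

# The solution quoted in Case (i): x = (1, 2, 0)^T.
x = [1, 2, 0]
assert mp_mat_vec(A, x) == mp_mat_vec(B, x) == [6, 7, 7, 4]

# x is a subeigenvector of C^{J1} for λ = 0, i.e. C^{J1} ⊗ x ≤ x.
CJ1 = [[EPS, -1, 1], [1, EPS, 2], [EPS, EPS, EPS]]
assert all(v <= xi for v, xi in zip(mp_mat_vec(CJ1, x), x))
```

Both sides of A ⊗ x = B ⊗ x equal (6, 7, 7, 4)^T; note also that max-plus scaling x ⊗ δ preserves solutions.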
Theorem 3.
Suppose given A ∈ R̄(m, n), B ∈ R̄(m, n) with A ≤ B. Then S(A, B) ≠ ∅ if and only if M_i(A, B) ≠ ∅ for any i ∈ M and ∪_{J ∈ ×_{i∈M} M_i(A,B)} V*(C^J, 0) ≠ {ε}.
Proof. 
The proof follows from Theorem 1, Lemma 1, and Theorem 2. □
Observe that
(i)
the solvability of the system A ⊗ x = B ⊗ x for A ≤ B can be recognized in k · O(n³) time, where k = |×_{i∈M} M_i(A, B)| (k can be exponentially large),
(ii)
if x ∈ V*(C^J, 0) for some J ∈ ×_{i∈M} M_i(A, B), then x can be expressed as a max-plus linear combination of a basis of V*(C^J, 0) ≠ {ε}, which can be found in O(n³) time [19].
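The two observations above combine into a straightforward pseudopolynomial checker. The Python sketch below is our illustration (0-based indices, A ≤ B entrywise assumed as in Theorem 2) and is restricted to finite solutions x ∈ R(n): a finite subeigenvector of C^J for λ = 0 exists exactly when λ(C^J) ≤ 0, while solutions with ε entries would need the spectral analysis of Theorem 1.

```python
import itertools

EPS = float("-inf")  # the element ε

def mp_mat_mul(A, B):
    # (A ⊗ B)_ij = max_k (a_ik + b_kj)
    return [[max(a + b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def max_cycle_mean(C):
    # λ(C) = max over 1 <= k <= n of max_i (C^{⊗k})_ii / k
    n, P, lam = len(C), C, EPS
    for k in range(1, n + 1):
        lam = max(lam, max(P[i][i] / k for i in range(n)))
        P = mp_mat_mul(P, C)
    return lam

def build_CJ(B, J):
    # C^J of Theorem 2: c_lj = max_{k: J[k]=l} (b_kj - b_kl), else ε
    m, n = len(B), len(B[0])
    C = [[EPS] * n for _ in range(n)]
    for l in range(n):
        rows = [k for k in range(m) if J[k] == l]
        for j in range(n):
            if rows and j != l:
                C[l][j] = max(B[k][j] - B[k][l] for k in rows)
    return C

def has_finite_solution(A, B):
    """Theorem 3 restricted to finite x: enumerate J ∈ ×_i M_i(A, B)
    and test λ(C^J) <= 0."""
    m, n = len(A), len(A[0])
    Ms = [[j for j in range(n) if A[i][j] == B[i][j]] for i in range(m)]
    return all(Ms) and any(max_cycle_mean(build_CJ(B, J)) <= 0
                           for J in itertools.product(*Ms))
```

On the data of Example 1 the checker succeeds (via J_1), and it correctly rejects instances where some M_i(A, B) is empty, in line with Lemma 1.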

4. Interval Solutions

Similarly to [7,9,10,11,12,13,14,15,16,17,20,21,22], by an interval of R̄(m, n) we mean a subset of R̄(m, n) of the form Y = (y_ij), i ∈ M, j ∈ N, where each y_ij is an arbitrary interval in R̄. For each i, j we denote y̲_ij := inf y_ij and ȳ_ij := sup y_ij. Then we also have Y̲ := (y̲_ij) = inf Y and Ȳ := (ȳ_ij) = sup Y. A set Y = (y_ij), i ∈ M, j ∈ N, a subset of R̄(m, n), is called an interval matrix (an interval vector, if n = 1, Y = (y_i)) if it is of the form Y = (y_ij) for y_ij nonempty subsets of R̄ taking any of the following four forms:
[y̲_ij, ȳ_ij], (y̲_ij, ȳ_ij), (y̲_ij, ȳ_ij], [y̲_ij, ȳ_ij)
(respectively [y̲_i, ȳ_i], (y̲_i, ȳ_i), (y̲_i, ȳ_i], [y̲_i, ȳ_i)).
Now we can write an interval vector with bounds x̲, x̄ ∈ R̄(n) and interval matrices with bounds A̲, Ā ∈ R̄(m, n), B̲, B̄ ∈ R̄(m, n) as follows:
X = [x̲, x̄] = {x ∈ R̄(n); x̲ ≤ x ≤ x̄},
A = [A̲, Ā] = {A ∈ R̄(m, n); A̲ ≤ A ≤ Ā},
B = [B̲, B̄] = {B ∈ R̄(m, n); B̲ ≤ B ≤ B̄}.
We will consider the following versions of interval solutions of the system
A ⊗ X = B ⊗ X,    (4)
depending on the quantifiers used and their order, where the aim is either to suggest a polynomial method for checking solvability or to transform the problem into known max-plus linear systems of equations and/or inequalities.
Definition 1.
If A, B and X are given, then X is called a weak
  • XEA-solution of (4) if (∃x ∈ X)(∃A ∈ A)(∀B ∈ B) A ⊗ x = B ⊗ x,
  • EAX-solution of (4) if (∃A ∈ A)(∀B ∈ B)(∃x ∈ X) A ⊗ x = B ⊗ x,
and X is called a strong
  • XEA-solution of (4) if (∀x ∈ X)(∃A ∈ A)(∀B ∈ B) A ⊗ x = B ⊗ x,
  • EXE-solution of (4) if (∃A ∈ A)(∀x ∈ X)(∃B ∈ B) A ⊗ x = B ⊗ x.
Notice that the designation weak XEA-solution encodes the quantifiers and their order as follows: weak corresponds to the existential quantifier over X, E corresponds to the existential quantifier over A, and A corresponds to the universal quantifier over B. Similarly, for a strong XEA-solution, strong corresponds to the universal quantifier over X, E to the existential quantifier over A, and A to the universal quantifier over B.
For given indices i ∈ M, j ∈ N, we define the matrix Ã^(ij) ∈ R̄(m, n) and the vector x̃^(i) ∈ R̄(n) by putting, for every k ∈ M, l ∈ N,
ã_kl^(ij) = ā_ij for k = i, l = j, and ã_kl^(ij) = a̲_kl otherwise;   x̃_k^(i) = x̄_i for k = i, and x̃_k^(i) = x̲_k otherwise.
Lemma 2
([23]). Suppose x ∈ R̄(n) and A ∈ R̄(m, n). Then
(i) 
x ∈ X if and only if x = ⊕_{i∈N} γ_i ⊗ x̃^(i) for some values γ_i ∈ R with x̲_i − x̄_i ≤ γ_i ≤ 0,
(ii) 
A ∈ A if and only if A = ⊕_{i∈M, j∈N} α_ij ⊗ Ã^(ij) for some values α_ij ∈ R with a̲_ij − ā_ij ≤ α_ij ≤ 0.
Lemma 3.
Suppose A, B and x ∈ X. Then the following equivalences hold true:
(i) 
If A ∈ A, B ∈ B, then (∀x ∈ X) A ⊗ x = B ⊗ x if and only if (∀k ∈ N) A ⊗ x̃^(k) = B ⊗ x̃^(k).
(ii) 
If B ∈ B, then (∀A ∈ A) A ⊗ x = B ⊗ x if and only if (∀i ∈ M, j ∈ N) Ã^(ij) ⊗ x = B ⊗ x.
(iii) 
If A ∈ A, then (∀B ∈ B) A ⊗ x = B ⊗ x if and only if (∀i ∈ M, j ∈ N) A ⊗ x = B̃^(ij) ⊗ x.
(iv) 
(∀A ∈ A)(∀B ∈ B) A ⊗ x = B ⊗ x if and only if (∀j, r ∈ M, k, s ∈ N) Ã^(jk) ⊗ x = B̃^(rs) ⊗ x.
Proof. 
(i) Suppose that A ∈ A, B ∈ B, x ∈ X, and A ⊗ x̃^(k) = B ⊗ x̃^(k) holds for any k ∈ N. Then, in view of Lemma 2 (i), we get x = ⊕_{k∈N} γ_k ⊗ x̃^(k). Therefore,
A ⊗ x = A ⊗ ⊕_{k∈N} γ_k ⊗ x̃^(k) = ⊕_{k∈N} γ_k ⊗ (A ⊗ x̃^(k)) = ⊕_{k∈N} γ_k ⊗ (B ⊗ x̃^(k)) = B ⊗ ⊕_{k∈N} γ_k ⊗ x̃^(k) = B ⊗ x.
The reverse implication trivially follows.
(ii) Suppose that there are A ∈ A and i ∈ M such that (A ⊗ x)_i ≠ (B ⊗ x)_i. Consider two cases:
  • If (A ⊗ x)_i > (B ⊗ x)_i, then there is s ∈ N such that
    a_is ⊗ x_s = ⊕_{j∈N} a_ij ⊗ x_j > ⊕_{j∈N} b_ij ⊗ x_j.
    Then for the generator Ã^(is), we obtain
    (Ã^(is) ⊗ x)_i ≥ ã_is^(is) ⊗ x_s = ā_is ⊗ x_s ≥ a_is ⊗ x_s > (B ⊗ x)_i.
  • If (A ⊗ x)_i < (B ⊗ x)_i, then for any generator Ã^(ks) with k ∈ M, k ≠ i, we obtain
    ⊕_{j∈N} b_ij ⊗ x_j > ⊕_{j∈N} a_ij ⊗ x_j ≥ ⊕_{j∈N} a̲_ij ⊗ x_j = (Ã^(ks) ⊗ x)_i.
    We have shown that there are k ∈ M, s ∈ N and i ∈ M such that (Ã^(ks) ⊗ x)_i ≠ (B ⊗ x)_i. The reverse implication trivially follows.
(iii) The proof is analogous to the proof of (ii) after exchanging A for B.
(iv) Suppose that there are A ∈ A, B ∈ B, and i ∈ M such that (A ⊗ x)_i ≠ (B ⊗ x)_i. Consider two cases:
  • (A ⊗ x)_i > (B ⊗ x)_i. Then there are s, r ∈ N such that
    a_is ⊗ x_s = ⊕_{j∈N} a_ij ⊗ x_j > ⊕_{j∈N} b_ij ⊗ x_j = b_ir ⊗ x_r.
    Then for the generators Ã^(is) and B̃^(ks) with k ≠ i, we obtain
    (Ã^(is) ⊗ x)_i ≥ ã_is^(is) ⊗ x_s = ā_is ⊗ x_s ≥ a_is ⊗ x_s > ⊕_{j∈N} b_ij ⊗ x_j ≥ ⊕_{j∈N} b̲_ij ⊗ x_j = (B̃^(ks) ⊗ x)_i.
  • (A ⊗ x)_i < (B ⊗ x)_i. The proof of this case is analogous to that of the previous case. We have shown that there are j, r ∈ M, k, s ∈ N, and i ∈ M such that (Ã^(jk) ⊗ x)_i ≠ (B̃^(rs) ⊗ x)_i. The reverse implication trivially follows. □
Theorem 4
([8]). Suppose A, B, and x ∈ X.
(i) 
(∃A ∈ A)(∃B ∈ B) A ⊗ x = B ⊗ x if and only if A̲ ⊗ x ≤ B̄ ⊗ x and Ā ⊗ x ≥ B̲ ⊗ x.
(ii) 
(∀A ∈ A)(∀B ∈ B) A ⊗ x = B ⊗ x if and only if A̲ ⊗ x = B̄ ⊗ x and Ā ⊗ x = B̲ ⊗ x.
(iii) 
(∀A ∈ A)(∃B ∈ B) A ⊗ x = B ⊗ x if and only if A̲ ⊗ x ≥ B̲ ⊗ x and Ā ⊗ x ≤ B̄ ⊗ x.
Theorem 5
([24]). Suppose A ∈ R̄(m, n) and b ∈ R̄(m). Then the system A ⊗ x = b is solvable if and only if x*(A, b) ∈ R̄(n) is its solution, where x_j*(A, b) = min_{i∈M} {b_i − a_ij} for j ∈ N.
Theorem 6
([25]). Suppose B, C ∈ R̄(m, n) and b, c ∈ R̄(m). Then the system of inequalities B ⊗ x ≤ b, C ⊗ x ≥ c has a solution if and only if C ⊗ x*(B, b) ≥ c.
Notice that the condition C ⊗ x*(B, b) ≥ c can be checked in O(mn) time.
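Theorems 5 and 6 are easy to implement. The Python sketch below is our illustration (function names ours, finite entries assumed); x*(A, b) is the principal solution, i.e., the greatest x with A ⊗ x ≤ b:

```python
def mp_mat_vec(A, x):
    # (A ⊗ x)_i = max_j (a_ij + x_j)
    return [max(a + xj for a, xj in zip(row, x)) for row in A]

def principal_solution(A, b):
    """x*(A, b)_j = min_i (b_i - a_ij): the greatest x with A ⊗ x <= b."""
    n = len(A[0])
    return [min(bi - row[j] for bi, row in zip(b, A)) for j in range(n)]

def eq_solvable(A, b):
    """Theorem 5: A ⊗ x = b is solvable iff x*(A, b) solves it."""
    return mp_mat_vec(A, principal_solution(A, b)) == b

def ineq_solvable(B, b, C, c):
    """Theorem 6: B ⊗ x <= b, C ⊗ x >= c is solvable iff
    C ⊗ x*(B, b) >= c - an O(mn) check."""
    y = mp_mat_vec(C, principal_solution(B, b))
    return all(yi >= ci for yi, ci in zip(y, c))
```

For instance, for A = [[0, 1], [2, 0]] and b = [3, 4] the principal solution is x* = [2, 2] and A ⊗ x* = b, so the equation system is solvable.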

4.1. Weak XEA-Solution

Theorem 7.
Suppose A, B, and X. Then X is a weak XEA-solution of A ⊗ X = B ⊗ X if and only if
(∃x ∈ X)(∃A ∈ A) A ⊗ x = B̲ ⊗ x = B̄ ⊗ x.    (5)
Proof. 
Suppose that there are x ∈ X and A ∈ A such that A ⊗ x = B̲ ⊗ x and A ⊗ x = B̄ ⊗ x. The implication follows from the formula
A ⊗ x = B̲ ⊗ x ≤ B ⊗ x ≤ B̄ ⊗ x = A ⊗ x for every B ∈ B.
The reverse implication trivially follows. □
The product A ⊗ x can be expressed as a max-plus linear combination of the generators of A and X according to Lemma 2, i.e., A = ⊕_{i∈M, j∈N} α_ij ⊗ Ã^(ij), x = ⊕_{k∈N} γ_k ⊗ x̃^(k). After a matrix-vector multiplication,
A ⊗ x = (⊕_{i∈M, j∈N} α_ij ⊗ Ã^(ij)) ⊗ (⊕_{k∈N} γ_k ⊗ x̃^(k)) = ⊕_{i∈M, j∈N} ⊕_{k∈N} α_ij ⊗ γ_k ⊗ (Ã^(ij) ⊗ x̃^(k)),
we obtain products of the coefficients γ_k with the coefficients α_ij; in other words, (5) contains a quadratic part, which we do not know how to solve directly.
Theorem 8.
Suppose given A, B, and X. Then X is a weak XEA-solution of A ⊗ X = B ⊗ X if and only if
(∃x ∈ ∪_{J} V*(C^J, 0))(∃A ∈ A) A ⊗ x = B̲ ⊗ x,    (6)
where the union is taken over all J ∈ ×_{j∈M} M_j(B̲, B̄) and C^J = C^J(B̲, B̄).
Proof. 
The proof follows from Theorem 7 and from Theorem 2 applied to the system B̲ ⊗ x = B̄ ⊗ x. □
Observe that, according to Theorem 4 (i), Formula (6) can be rewritten in the form
(∃x ∈ ∪_{J} V*(C^J, 0)) A̲ ⊗ x ≤ B̲ ⊗ x ≤ Ā ⊗ x.    (7)
Suppose that the set of vectors {c_1^J, …, c_k^J} is a basis of V*(C^J, 0) for an arbitrary but fixed J ∈ ×_{j∈M} M_j(B̲, B̄). Then any vector x ∈ V*(C^J, 0) can be expressed as a max-plus linear combination of the vectors c_1^J, …, c_k^J, i.e., x = ⊕_{i=1}^k δ_i ⊗ c_i^J, δ_i ∈ R. Now we can reformulate the last theorem.
Theorem 9.
Suppose given A, B, and X. Then X is a weak XEA-solution of A ⊗ X = B ⊗ X if and only if there are J ∈ ×_{j∈M} M_j(B̲, B̄) and δ = (δ_1, …, δ_k)^T ∈ R(k) such that
A̲ ⊗ ⊕_{i=1}^k δ_i ⊗ c_i^J ≤ B̲ ⊗ ⊕_{i=1}^k δ_i ⊗ c_i^J ≤ Ā ⊗ ⊕_{i=1}^k δ_i ⊗ c_i^J,    (8)
where the set of vectors {c_1^J, …, c_k^J} is a basis of V*(C^J, 0).
Proof. 
The proof follows from Theorem 8. □
Observe that (8) can be expressed in the form
⊕_{i=1}^k (A̲ ⊗ c_i^J) ⊗ δ_i ≤ ⊕_{i=1}^k (B̲ ⊗ c_i^J) ⊗ δ_i,    (9)
⊕_{i=1}^k (B̲ ⊗ c_i^J) ⊗ δ_i ≤ ⊕_{i=1}^k (Ā ⊗ c_i^J) ⊗ δ_i,    (10)
or as the matrix-vector product
(A̲ ⊗ c_1^J | … | A̲ ⊗ c_k^J) ⊗ δ ≤ (B̲ ⊗ c_1^J | … | B̲ ⊗ c_k^J) ⊗ δ,   (B̲ ⊗ c_1^J | … | B̲ ⊗ c_k^J) ⊗ δ ≤ (Ā ⊗ c_1^J | … | Ā ⊗ c_k^J) ⊗ δ,    (11)
and we get a hint for how to decide whether X is a weak XEA-solution of A ⊗ X = B ⊗ X. It suffices to find J ∈ ×_{j∈M} M_j(B̲, B̄), compute a basis of V*(C^J, 0) in polynomial time, and then find a solution δ of (11). The solvability of A ⊗ x = B ⊗ x is polynomially equivalent to solving a mean-payoff game [26]. Moreover, there are efficient pseudopolynomial algorithms for solving mean-payoff games, but a polynomial algorithm for the solvability of the system A ⊗ x = B ⊗ x (A ⊗ x ≤ B ⊗ x) is still unknown ([8,26]).

4.2. Weak EAX-Solution

Theorem 10.
Suppose A, B, and X. Then X is a weak EAX-solution of A ⊗ X = B ⊗ X if and only if
(∃A ∈ A)(∀k ∈ M, l ∈ N)(∃x ∈ X) A ⊗ x = B̃^(kl) ⊗ x.
Proof. 
The proof follows from Lemma 3 (iii). □
Notice that we are able neither to suggest a transformation of Theorem 10, based on the solvability of the equality A ⊗ x = B̃^(kl) ⊗ x, into a non-quadratic version similarly to Theorem 8, nor to prove the NP-hardness of this problem.

4.3. Strong XEA-Solution

Theorem 11.
Suppose A, B, and X. Then X is a strong XEA-solution of A ⊗ X = B ⊗ X if and only if
(∀k ∈ N)(∃A ∈ A)(∀B ∈ B) A ⊗ x̃^(k) = B ⊗ x̃^(k).
Proof. 
Suppose, by way of contradiction, that there is x ∈ X such that for an arbitrary but fixed A ∈ A there are B ∈ B and i ∈ M with (A ⊗ x)_i ≠ (B ⊗ x)_i. We shall show that there is k ∈ N such that for an arbitrary A ∈ A there are B ∈ B and i ∈ M with (A ⊗ x̃^(k))_i ≠ (B ⊗ x̃^(k))_i. By this assumption and the proof of Lemma 3 (i), the implication follows.
The reverse implication trivially follows. □
Theorem 12.
Suppose given A, B, and X. Then X is a strong XEA-solution of A ⊗ X = B ⊗ X if and only if
(∀k ∈ N)(∃A ∈ A) A ⊗ x̃^(k) = B̲ ⊗ x̃^(k) = B̄ ⊗ x̃^(k).
Proof. 
First, we show that X is a strong XEA-solution of A ⊗ X = B ⊗ X if and only if (∀x ∈ X)(∃A ∈ A) B̲ ⊗ x = A ⊗ x = B̄ ⊗ x. Let x ∈ X be such that there is A ∈ A with A ⊗ x = B̲ ⊗ x and A ⊗ x = B̄ ⊗ x. Then, by the monotonicity of the operations ⊕ and ⊗, we have A ⊗ x = B̲ ⊗ x ≤ B ⊗ x ≤ B̄ ⊗ x = A ⊗ x for every B ∈ B. The reverse implication trivially follows. Hence, the proof follows from Theorem 11. □
For each k ∈ N, define the vectors C̃^(k) ∈ R̄(m + mn), D̃^(k) ∈ R̄(m + mn), and the matrix Ẽ^(k) ∈ R̄(m + mn, mn) as follows:

C̃^(k) = (B̲ ⊗ x̃^(k); a̲_11 − ā_11, …, a̲_mn − ā_mn)^T,   D̃^(k) = (B̲ ⊗ x̃^(k); 0, …, 0)^T,

Ẽ^(k) = ( Ã^(11) ⊗ x̃^(k)   Ã^(12) ⊗ x̃^(k)   …   Ã^(mn) ⊗ x̃^(k) )
        (                       I_mn                              ) ,

where the columns of the first m rows are the vectors Ã^(ij) ⊗ x̃^(k) and I_mn is the mn × mn max-plus identity matrix (0 on the diagonal, ε elsewhere).
Consider the max-plus linear system of inequalities
Ẽ^(k) ⊗ y^(k) ≤ D̃^(k),   Ẽ^(k) ⊗ y^(k) ≥ C̃^(k),    (14)
where the vector y^(k) ∈ R(mn) consists of the variables y_ij ∈ R.
Theorem 13.
Suppose given A, B and X. Then X is a strong XEA-solution of A ⊗ X = B ⊗ X if and only if B̲ ⊗ x̃^(k) = B̄ ⊗ x̃^(k) and the max-plus linear system of inequalities (14) is solvable for every k ∈ N.
Proof. 
Suppose that k ∈ N is arbitrary but fixed, B̲ ⊗ x̃^(k) = B̄ ⊗ x̃^(k), and y^(k) is a solution of the system (14); the last mn rows of (14) guarantee that a̲_ij − ā_ij ≤ y_ij ≤ 0 for every i ∈ M, j ∈ N. Consider the matrix A = ⊕_{i∈M, j∈N} y_ij ⊗ Ã^(ij) ∈ [A̲, Ā].
From the inequalities Ẽ^(k) ⊗ y^(k) ≤ D̃^(k), Ẽ^(k) ⊗ y^(k) ≥ C̃^(k), we have the following block inequalities:
⊕_{j∈M, l∈N} (Ã^(jl) ⊗ x̃^(k)) ⊗ y_jl ≤ B̲ ⊗ x̃^(k) ⟹ (⊕_{j∈M, l∈N} y_jl ⊗ Ã^(jl)) ⊗ x̃^(k) ≤ B̲ ⊗ x̃^(k) ⟹ A ⊗ x̃^(k) ≤ B̲ ⊗ x̃^(k)
and
⊕_{j∈M, l∈N} (Ã^(jl) ⊗ x̃^(k)) ⊗ y_jl ≥ B̲ ⊗ x̃^(k) ⟹ (⊕_{j∈M, l∈N} y_jl ⊗ Ã^(jl)) ⊗ x̃^(k) ≥ B̲ ⊗ x̃^(k) ⟹ A ⊗ x̃^(k) ≥ B̲ ⊗ x̃^(k).
Thus, in view of Theorem 12, X is a strong XEA-solution of A ⊗ X = B ⊗ X.
For the converse implication, assume that X is a strong XEA-solution of A ⊗ X = B ⊗ X, i.e., that for each x ∈ [x̲, x̄] there exists A ∈ A such that A ⊗ x = B ⊗ x holds for every B ∈ [B̲, B̄]. By Theorem 12 and Lemma 2 (ii), for any k ∈ N there exist coefficients α_ij ∈ R, i ∈ M, j ∈ N, such that A = ⊕_{i∈M, j∈N} α_ij ⊗ Ã^(ij) and a̲_ij − ā_ij ≤ α_ij ≤ 0. Then y^(k) ∈ R(mn), where y_ij = α_ij for all i ∈ M, j ∈ N, satisfies the inequalities Ẽ^(k) ⊗ y^(k) ≤ D̃^(k), Ẽ^(k) ⊗ y^(k) ≥ C̃^(k). □

4.4. Strong EXE-Solution

Theorem 14.
Suppose A ∈ A, B and X. Then for each x ∈ X there is B ∈ B such that A ⊗ x = B ⊗ x if and only if for each k ∈ N the inequalities B̲ ⊗ x̃^(k) ≤ A ⊗ x̃^(k) ≤ B̄ ⊗ x̃^(k) hold true.
Proof. 
The proof follows from Lemma 3 (i) and Theorem 4 (i). □
Theorem 15.
Suppose A, B, and X. Then X is a strong EXE-solution of A ⊗ X = B ⊗ X if and only if there is A ∈ A such that B̲ ⊗ x̃^(k) ≤ A ⊗ x̃^(k) ≤ B̄ ⊗ x̃^(k) for each k ∈ N.
Proof. 
The proof follows from Theorem 14. □
To compute A ∈ A in Theorem 15, define the vectors C̃ ∈ R̄(2mn), D̃ ∈ R̄(2mn) and the matrix Ẽ(A) ∈ R̄(2mn, mn) as follows:

C̃ = (B̲ ⊗ x̃^(1); …; B̲ ⊗ x̃^(n); a̲_11 − ā_11, …, a̲_mn − ā_mn)^T,   D̃ = (B̄ ⊗ x̃^(1); …; B̄ ⊗ x̃^(n); 0, …, 0)^T,

Ẽ(A) = ( Ã^(11) ⊗ x̃^(1)   Ã^(12) ⊗ x̃^(1)   …   Ã^(mn) ⊗ x̃^(1) )
       (       ⋮                 ⋮                     ⋮          )
       ( Ã^(11) ⊗ x̃^(n)   Ã^(12) ⊗ x̃^(n)   …   Ã^(mn) ⊗ x̃^(n) )
       (                       I_mn                               ) ,

where I_mn is the mn × mn max-plus identity matrix. Consider the max-plus linear system
Ẽ(A) ⊗ y ≤ D̃,   Ẽ(A) ⊗ y ≥ C̃,    (17)
where the vector y ∈ R(mn) consists of the variables y_ij ∈ R.
Theorem 16.
Suppose given A, B, and X. Then X is a strong EXE-solution of A ⊗ X = B ⊗ X if and only if the max-plus linear system of inequalities (17) is solvable.
Proof. 
Let y be a solution of (17); the last mn rows of (17) guarantee that a̲_ij − ā_ij ≤ y_ij ≤ 0 for every i ∈ M, j ∈ N. Then A = ⊕_{i∈M, j∈N} y_ij ⊗ Ã^(ij) ∈ [A̲, Ā] in view of Lemma 2 (ii).
Moreover, from the inequalities Ẽ(A) ⊗ y ≤ D̃ and Ẽ(A) ⊗ y ≥ C̃, we have the following block inequalities for every fixed i ∈ N:
⊕_{k∈M, l∈N} (Ã^(kl) ⊗ x̃^(i)) ⊗ y_kl ≥ B̲ ⊗ x̃^(i) ⟹ (⊕_{k∈M, l∈N} y_kl ⊗ Ã^(kl)) ⊗ x̃^(i) ≥ B̲ ⊗ x̃^(i) ⟹ A ⊗ x̃^(i) ≥ B̲ ⊗ x̃^(i)
and
⊕_{k∈M, l∈N} (Ã^(kl) ⊗ x̃^(i)) ⊗ y_kl ≤ B̄ ⊗ x̃^(i) ⟹ (⊕_{k∈M, l∈N} y_kl ⊗ Ã^(kl)) ⊗ x̃^(i) ≤ B̄ ⊗ x̃^(i) ⟹ A ⊗ x̃^(i) ≤ B̄ ⊗ x̃^(i).
Thus, according to Theorem 15, X is a strong EXE-solution of A ⊗ X = B ⊗ X.
For the converse implication, suppose that X is a strong EXE-solution of A ⊗ X = B ⊗ X, i.e., there is A ∈ A such that for any x ∈ [x̲, x̄] there is B ∈ [B̲, B̄] with A ⊗ x = B ⊗ x. By Lemma 2 (ii), there are α_ij ∈ R, i ∈ M, j ∈ N, such that A = ⊕_{i∈M, j∈N} α_ij ⊗ Ã^(ij) and a̲_ij − ā_ij ≤ α_ij ≤ 0. Then y ∈ R(mn), where y_ij = α_ij for all i ∈ M, j ∈ N, satisfies Ẽ(A) ⊗ y ≤ D̃ and Ẽ(A) ⊗ y ≥ C̃. □

5. Conclusions

In this paper, we have dealt with the problem of the interval solvability of A ⊗ X = B ⊗ X. Necessary and sufficient conditions for four versions of interval solvability, namely the weak XEA-solution, weak EAX-solution, strong XEA-solution, and strong EXE-solution, have been presented, and the computational complexity of methods for checking each of the obtained equivalent conditions has been discussed. We have also indicated how to define the other versions of weak/strong solutions, some of which are equivalent to each other. The equivalent conditions for all versions of strong solvability reduce to systems of max-plus linear equations and inequalities; hence, the strong types of interval solutions can be checked in polynomial time. The weak versions have been transformed into non-interval two-sided max-plus systems, for which efficient pseudopolynomial algorithms exist.

Author Contributions

H.M. and J.P. contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the APVV grant # 180373 .

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

The authors would like to thank the referees for their valuable remarks and suggestions that helped to increase the clarity of arguments in the paper. The support of the APVV grant # 180373 is gratefully acknowledged.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Myšková, H.; Plavka, J. Polynomial and Pseudopolynomial Procedures for Solving Interval Two-Sided (Max, Plus)-Linear Systems. Mathematics 2021, 9, 2951. https://0-doi-org.brum.beds.ac.uk/10.3390/math9222951
