Article

Multiscale Convergence of the Inverse Problem for Chemotaxis in the Bayesian Setting

Kathrin Hellmuth 1, Christian Klingenberg 1, Qin Li 2 and Min Tang 3
1 Department of Mathematics, University of Würzburg, 97074 Würzburg, Germany
2 Department of Mathematics, University of Wisconsin-Madison, Madison, WI 53705, USA
3 School of Mathematics, Shanghai Jiao Tong University, Shanghai 200240, China
* Author to whom correspondence should be addressed.
Submission received: 22 October 2021 / Revised: 8 November 2021 / Accepted: 9 November 2021 / Published: 11 November 2021
(This article belongs to the Special Issue Inverse Problems with Partial Data)

Abstract

Chemotaxis describes the movement of an organism, such as a bacterium or another single- or multi-cellular organism, in response to a chemical stimulus. Two widely used models to describe the phenomenon are the celebrated Keller–Segel equation and a chemotaxis kinetic equation. These two equations describe the organism’s movement at the macro- and mesoscopic level, respectively, and are asymptotically equivalent in the parabolic regime. The way in which the organism responds to a chemical stimulus is embedded in the diffusion/advection coefficients of the Keller–Segel equation or the turning kernel of the chemotaxis kinetic equation. Experiments are conducted to measure the time dynamics of the organisms’ population-level movement in response to a given stimulus. From such data, one infers the chemotactic response, which constitutes an inverse problem. In this paper, we discuss the relation between the macroscopic and mesoscopic inverse problems, each associated with one of the two forward models. The discussion is presented in the Bayesian framework, where the posterior distribution of the turning kernel of the organism population is sought. We prove the asymptotic equivalence of the two posterior distributions.

1. Introduction

Chemotaxis is the phenomenon of organisms directing their movements in response to certain chemical stimuli. Every motile organism exhibits some type of chemotaxis. There are two mainstream mathematical models used to describe this phenomenon: one at the macroscopic population level and the other at the mesoscopic level.
The most famous model in the first category is the Keller–Segel equation, introduced in [1,2,3]. The equation traces the evolution of bacteria density when chemical stimulation is introduced to the system:
$$\partial_t \rho - \nabla \cdot ( D \cdot \nabla \rho ) + \nabla \cdot ( \rho\, \Gamma ) = 0 ,$$
where ρ(x,t) is the cell density at location x at time t > 0. In this equation, both the advection term and the diffusion process integrate the external chemical density information, meaning that both the diffusion matrix $D(x,t;c) \in \mathbb{R}^{3\times 3}$ and the drift vector $\Gamma(x,t;c) \in \mathbb{R}^3$, as functions of (x,t), are determined by the chemoattractant’s density function $c : (\bar{x},\bar{t}) \mapsto c(\bar{x},\bar{t})$. This function c serves as a meta-parameter determining the (x,t) dependence of D, Γ.
However, the model is inaccurate in certain regimes. It overlooks the bacteria’s complex reaction to the chemoattractants and is thus macroscopic in nature. This inspires the second category of modeling, where the motion of individual bacteria is accounted for. The associated modeling is thus mesoscopic. When bacterial movements are composed of two states, running in a straight line with a given velocity v and tumbling from one velocity v′ to another v, the corresponding mathematical model is termed the run-and-tumble model. It is described by the mesoscopic chemotaxis equation [4,5,6]:
$$\partial_t f(x,t,v) + v \cdot \nabla_x f(x,t,v) = \mathcal{K}(f;c) := \int_V K(x,t,v,v';c)\, f(x,t,v') - K(x,t,v',v;c)\, f(x,t,v) \,\mathrm{d}v' .$$
In the equation, f(x,t,v) is the population density of bacteria with velocity v in some velocity space $V \subset \mathbb{R}^3$ at space point $x \in \mathbb{R}^3$ at time t > 0. The tumbling kernel K(x,t,v,v′;c) encodes the probability of bacteria changing from velocity v′ to v. The shape of this probability depends on the chemoattractant’s density function c.
Abbreviating the notation as in [5], with f := f(x,t,v), f′ := f(x,t,v′), K := K(x,t,v,v′;c) and K′ := K(x,t,v′,v;c), the tumbling term on the right-hand side of Equation (2) reads
$$\mathcal{K}(f) = \int_V \big( K f' - K' f \big) \,\mathrm{d}v' .$$
Because bacteria are usually assumed to move with constant speed, conventionally, we have $V = \mathbb{S}^{n-1}$. Moreover, since the cell doubling time is much longer than the chemotaxis time scale, we remove the birth–death effect from the equation.
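To make the run-and-tumble description concrete, the following is a minimal Monte Carlo sketch of the underlying velocity-jump process; it is not part of the analysis in this paper, and the kernel K(θ′) = 1 + bias·cos θ′ and all numerical values are hypothetical placeholders standing in for a chemoattractant-induced bias. For simplicity, the sketch is two-dimensional with V = S¹.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_run_and_tumble(n=20_000, t_final=5.0, dt=0.01, speed=1.0, bias=0.3):
    """Monte Carlo velocity-jump process on V = S^1 (constant speed).
    Particles run in a straight line and tumble at a constant rate; the
    post-tumble direction theta' is drawn with density proportional to the
    (hypothetical) kernel K(theta') = 1 + bias * cos(theta'), a crude stand-in
    for a bias induced by a chemoattractant gradient in the +x direction."""
    theta = rng.uniform(0.0, 2.0 * np.pi, n)      # current running directions
    pos = np.zeros((n, 2))                        # particle positions
    lam = 2.0 * np.pi                             # tumbling rate = integral of K over S^1

    # tabulate the post-tumble direction distribution for inverse-transform sampling
    grid = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
    cdf = np.cumsum(1.0 + bias * np.cos(grid))
    cdf /= cdf[-1]

    for _ in range(int(t_final / dt)):
        # run phase: free transport with the current velocity
        pos += speed * dt * np.column_stack((np.cos(theta), np.sin(theta)))
        # tumble phase: each particle reorients with probability lam * dt
        tumbling = rng.random(n) < lam * dt
        k = int(tumbling.sum())
        if k:
            theta[tumbling] = np.interp(rng.random(k), cdf, grid)
    return pos

if __name__ == "__main__":
    positions = simulate_run_and_tumble()
    # the bias in K produces a net drift in +x, the macroscopic signature of Gamma
    print("mean displacement:", positions.mean(axis=0))
```

On the population level, the bias in the kernel manifests itself as a drift of the empirical density, which is exactly the information that the macroscopic coefficients D and Γ encode.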
Both models above are empirical in nature. The coefficients, such as D, Γ and K, which encode the way in which bacteria respond to the environment, are typically unknown ahead of time. Since the chemoattractant concentration c depends on space and time, so do D, Γ and K. However, except for very few well-studied bacteria, these quantities are not explicitly known and cannot be measured directly. One thus needs to design experiments and use measurable quantities to infer the information. This constitutes the inverse problem that we study. One such experiment was reported in [7], where the authors studied phototaxis and used video recording of seaweed motion ( ρ in time) to infer D and Γ in (1).
There are various ways to conduct inverse problems, and, in this paper, we take the viewpoint of Bayesian inference. This is to assume that the coefficients are not uniquely configured in reality but, rather, follow a certain probability distribution. The measurements are taken to infer this probability. In the process of such inference, one nevertheless needs to incorporate the forward model. The two different forward models described above then lead to two distinctive posterior distributions as the inference.
One natural question is to understand the relation between the two resulting posterior distributions. In this article, we answer this question by asymptotic analysis. To be specific, we will show that the two models are asymptotically equivalent in the long-time and large-space regime, and that (D, Γ) is uniquely determined by a given K. As such, the two associated inverse problems are asymptotically equivalent too. The equivalence is characterized by the distance (we use both the Kullback–Leibler divergence and the Hellinger distance) between the two corresponding posterior distributions. We show that this distance vanishes asymptotically as the Knudsen number, the ratio of the mean free path between two subsequent tumbles to the macroscopic observation length, becomes arbitrarily small.
One goal of this paper is to provide a theoretical foundation for developing an efficient kinetic inverse solver. All inverse solvers involve repeated forward solves, and thus the computational complexity can be reduced significantly if the forward solver is cheap. While it is costly to compute kinetic equations on the phase space, the Keller–Segel diffusion limit is a good surrogate. Our result suggests that the solution to the Keller–Segel inverse problem is close to the kinetic result and thus qualifies as a ‘good’ initial guess for the full reconstruction on the kinetic level.
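As an illustration of this two-stage strategy, a schematic sketch is given below. The forward maps are cheap stand-ins so that the script runs; in practice, the placeholders `forward_keller_segel` and `forward_kinetic` would be discretizations of the macroscopic and the kinetic model, respectively, mapping a finite-dimensional parametrization of (K_0, K_1) to the predicted measurements.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in forward maps: in practice these would be PDE solvers taking a
# finite-dimensional parametrization q of (K_0, K_1) to predicted data G(q).
# Here they are cheap dummies so that the two-stage strategy below runs.
def forward_keller_segel(q):
    return np.array([q[0] + 0.5 * q[1], q[0] * q[1]])

def forward_kinetic(q, eps=0.05):
    # the kinetic model differs from its diffusion limit by small corrections
    return forward_keller_segel(q) + eps * np.array([q[1] ** 2, np.sin(q[0])])

y = np.array([1.3, 0.4])      # measured data (placeholder values)
gamma = 0.05                  # noise level

def neg_log_post(q, forward):
    # Gaussian likelihood with a flat prior, for illustration only
    return 0.5 * np.sum((forward(q) - y) ** 2) / gamma ** 2

# Stage 1: cheap Keller-Segel inversion gives a starting point ...
q_ks = minimize(neg_log_post, x0=np.zeros(2), args=(forward_keller_segel,)).x

# Stage 2: ... which is refined with the expensive kinetic forward model.
q_kin = minimize(neg_log_post, x0=q_ks, args=(forward_kinetic,)).x

print("Keller-Segel estimate:", q_ks)
print("kinetic refinement:   ", q_kin)
```

The point of the sketch is only the structure: the macroscopic inversion supplies the initial guess, so fewer of the expensive kinetic forward solves are needed.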
The rest of the paper is organized as follows: in Section 2, we present the asymptotic relation between the two forward models. This can be seen as an adaptation of the results in [5] to our setting. The analysis serves as the foundation for linking the two inverse problems. In Section 3, we formulate the Bayesian inverse problems corresponding to the scaled chemotaxis equation and the Keller–Segel model as underlying models. The well-posedness and convergence of the two corresponding posterior distributions are shown in Section 4. The results are summarized and discussed in Section 5.
We should stress that both the mathematical modeling of chemotaxis and Bayesian inference are active research areas. In formulating our problems, we select the most widely accepted models and methods.
For modeling chemotaxis, the two models (1)–(2) are the classical ones, and were derived from the study of a biased random walk [1,6]. They assume that the organisms passively depend on the environment. When bacteria actively respond to and change the environment, a parabolic or elliptic equation for c can be added to describe such feedback to the environment [2,3,8]. The coupled system consisting of Equation (1) and a parabolic equation for c, where the chemoattractant is assumed to be produced by the bacteria population, can exhibit blow-up solutions. Therefore, some particular forms of D ( · , · ; c ) , Γ ( · , · ; c ) are proposed in order to eliminate the unwanted behavior. These models include volume filling [9], quorum sensing models [10] or the flux-limited Keller–Segel system [11]. On the kinetic level, additional variables were introduced to describe the intracellular responses of the bacteria to the chemoattractant in the signaling pathway [12,13,14,15]. The asymptotic limit of the newer models sometimes reveals interesting phenomena, such as fractional diffusion [16]. The asymptotic equivalence of the classical model to the Keller–Segel model was extensively studied, e.g., in [5,6,17,18]. In particular, the current paper heavily depends on the techniques shown in [5].
There is also vast literature on inverse problems. For the Bayesian inference perspective in scientific computing, interested readers are referred to monographs [19,20] and the references therein. In comparison, linking two or multiple inverse problems in different regimes is relatively rare. In [21], the authors studied the asymptotic equivalence between the inverse kinetic radiative transport equations and its macroscopic counterpart, the diffusion equation. In [22], the convergence of Bayesian posterior measures for a parametrized elliptic PDE forward model was shown in a similar fashion.

2. Asymptotic Analysis for Kinetic Chemotaxis Equations and the Keller–Segel Model

The two models we will be using are the chemotaxis kinetic equation and the Keller–Segel equation. We review these two models in this section and study their relation. This serves as a cornerstone for building the connection between the two associated inverse problems.
Throughout the paper, we assume that the chemoattractant density c is a given, fixed function of (x,t) and is not produced or consumed by the bacteria. While this is an approximation, it is valid in many experiments in which one has tight control over the matrix environment. Because c is fixed, we drop the dependence of K, D, Γ on c in the notation.
We claim, and will show below, that the two Equations (1) and (2) are asymptotically equivalent in the long-time large-space regime. We denote by ε the scaling parameter; then, in a parabolic scaling, the chemotaxis equation to be considered has the following form:
$$\varepsilon^2 \partial_t f^\varepsilon(x,t,v) + \varepsilon\, v \cdot \nabla_x f^\varepsilon(x,t,v) = \mathcal{K}_\varepsilon(f^\varepsilon) := \int_V K_\varepsilon(x,t,v,v')\, f^\varepsilon(x,t,v') - K_\varepsilon(x,t,v',v)\, f^\varepsilon(x,t,v) \,\mathrm{d}v' , \qquad f^\varepsilon(x,0,v) = f_0(x,v) .$$
Formally, when ε → 0, the tumbling term dominates the equation and we expect, at leading order:
$$f^\varepsilon \to f^* , \quad \text{with} \quad \mathcal{K}_*(f^*) = 0 ,$$
where $\mathcal{K}_*$ can be viewed as the limit of the operator $\mathcal{K}_\varepsilon$ as ε → 0. This means that the limiting solution is almost in the null space of the limiting tumbling operator. Furthermore, due to the specific form of the tumbling operator, one can show that, under certain conditions, this null space is one-dimensional; compare, e.g., [5], Lemma 2 and the following derivations. We thus formally write
$$\mathcal{N}(\mathcal{K}_*) = \Big\{ \alpha F : \alpha \in \mathbb{R} , \ \text{with} \ \int_V F \,\mathrm{d}v = 1 \Big\} ,$$
and denote f^* = ρF. Conventionally, we call F the local equilibrium; due to the form of the tumbling operator, it is a function of v only. Inserting this formula back into (3), performing an asymptotic expansion up to second order, and following [5], we find that ρ satisfies the Keller–Segel equation:
$$\partial_t \rho - \nabla \cdot ( D \cdot \nabla \rho ) + \nabla \cdot ( \rho\, \Gamma ) = 0 , \qquad \rho(x,0) = \rho_0(x) = \int_V f_0(x,v) \,\mathrm{d}v .$$
A rigorous proof of the convergence of a subsequence of f ε can be found in [5], Theorem 3, where the authors discussed a nonlinear extension of the present model.
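For the reader’s convenience, we sketch the formal expansion behind this limit; this is a heuristic only, not the rigorous argument of [5]. Inserting the ansatz $f^\varepsilon = f_0 + \varepsilon f_1 + \varepsilon^2 f_2 + \dots$ into (3) and matching powers of ε gives
$$\begin{aligned}
\mathcal{O}(1):\quad & \mathcal{K}_0(f_0) = 0 && \Longrightarrow \quad f_0 = \rho(x,t)\, F(v) ,\\
\mathcal{O}(\varepsilon):\quad & v \cdot \nabla_x f_0 = \mathcal{K}_0(f_1) + \mathcal{K}_1(f_0) && \Longrightarrow \quad f_1 \ \text{solves a cell problem driven by } \nabla_x \rho \ \text{and} \ \rho ,\\
\mathcal{O}(\varepsilon^2):\quad & \partial_t f_0 + v \cdot \nabla_x f_1 = \mathcal{K}_0(f_2) + \mathcal{K}_1(f_1) . &&
\end{aligned}$$
Integrating the $\mathcal{O}(\varepsilon^2)$ equation over V and using $\int_V \mathcal{K}_i(g) \,\mathrm{d}v = 0$ (swap v and v′) yields
$$\partial_t \rho + \nabla_x \cdot \int_V v\, f_1 \,\mathrm{d}v = 0 ,$$
and expressing f_1 through the cell problems stated in Theorem 1 below (up to the sign conventions fixed there) turns the flux $\int_V v f_1 \,\mathrm{d}v$ into $-D \nabla_x \rho + \rho\, \Gamma$, i.e., the Keller–Segel Equation (4).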
From now on, we confine ourselves to kernels having the form of
$$K_\varepsilon = K_0 + \varepsilon K_1 .$$
Remark 1.
Because our aim is to compare the posterior distributions of K ε for the kinetic model (3) and the macroscopic model (4), this choice is reasonable. As shown in [5], higher-order terms in ε would not affect the macroscopic equation. Therefore, they would not be reconstructable by the macroscopic inverse problem.
In order to rigorously justify the above intuition on the convergence f^ε → ρF and to ensure the existence of solutions to Equations (3) and (4), we suppose (K_0, K_1) to be an element of the admissible set
$$\mathcal{A} = \Big\{ (K_0, K_1) \in C^1\big( \mathbb{R}^3 \times [0,\infty) \times V \times V \big)^2 : \ \| K_0 \|_{C^1},\, \| K_1 \|_{C^1} \le C \ \text{and} \ 0 < \alpha \le K_0 , \ K_0 \ \text{symmetric and} \ K_1 \ \text{antisymmetric in} \ (v, v') \Big\}$$
for some preset constants C, α > 0. Defining $|V| = \int_V 1 \,\mathrm{d}v$ as the volume of V, it is straightforward to show that, for any (K_0, K_1) ∈ $\mathcal{A}$,
$$F \equiv 1/|V| \quad \text{is the local equilibrium} .$$
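Indeed, for any (K_0, K_1) ∈ $\mathcal{A}$, the constant function F ≡ 1/|V| satisfies $\int_V F \,\mathrm{d}v = 1$ and, by the symmetry of K_0 in (v, v′),
$$\mathcal{K}_0(F) = \int_V \Big( K_0(x,t,v,v')\, F(v') - K_0(x,t,v',v)\, F(v) \Big) \,\mathrm{d}v' = \frac{1}{|V|} \int_V \Big( K_0(x,t,v,v') - K_0(x,t,v',v) \Big) \,\mathrm{d}v' = 0 ,$$
so F indeed spans the one-dimensional null space described above.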
Remark 2.
With (K_0, K_1) assumed to be symmetric and antisymmetric, respectively, the local equilibrium F in (7) is explicit and simple. This is, e.g., the case for one typical choice of the tumbling kernel, $K[c,\nabla c] = a[c] + \varepsilon\, b[c]\, \phi\big( v \cdot \nabla c - v' \cdot \nabla c \big)$ with antisymmetric ϕ, which represents a special case of the models extensively studied in [5].
For better readability, we assume the symmetry properties of the tumbling kernel stated in (6) throughout the paper. We should mention, however, that it is possible to relax this assumption on the tumbling kernel while maintaining the same macroscopic limit. In particular, if there exists one velocity distribution F(v) that is positive, bounded and satisfies
$$\int_V F \,\mathrm{d}v = 1 , \qquad \int_V v\, F(v) \,\mathrm{d}v = 0 \qquad \text{and} \qquad K_0(x,t,v,v')\, F(v') = K_0(x,t,v',v)\, F(v)$$
for all considered K 0 in the admissible set, then all statements and arguments provided in this paper still hold true. Note that, by these requirements, assumption (A0) in Chalub et al. [5] is satisfied.
Suppose that the initial data are smooth in the sense that $f_0 \in C_c^{1,+}(\mathbb{R}^3 \times V)$. Then, we have the following theorem on convergence, which can be viewed as an adaptation of the results in [5].
Theorem 1. 
Suppose that K_ε has the form of (5) with $(K_0, K_1) \in \mathcal{A}$ and suppose that the initial condition satisfies $f_0 \in C_c^{1,+}(\mathbb{R}^3 \times V)$; then, the solution f^ε to the chemotaxis Equation (3) satisfies the following:
(a) 
For sufficiently small ε, the solution f^ε of Equation (3) exists and is bounded in $L^\infty\big( [0,T],\, L^1_+ \cap L^\infty(\mathbb{R}^3 \times V) \big)$ for T < ∞.
(b) 
The solution f^ε converges to ρF in $L^\infty\big( [0,T];\, L^1_+ \cap L^\infty(\mathbb{R}^3 \times V) \big)$, with ρ satisfying the Keller–Segel Equation (4) with coefficients
$$D = \int_V v \otimes \kappa(x,t,v) \,\mathrm{d}v ,$$
$$\Gamma = \int_V v\, \theta(x,t,v) \,\mathrm{d}v .$$
Here, θ and κ solve the cell problems:
$$\mathcal{K}_0(\kappa) = -v F \qquad \text{and} \qquad \mathcal{K}_0(\theta) = -\mathcal{K}_1(F) ,$$
where $\mathcal{K}_i(g) := \int_V \big( K_i\, g' - K_i'\, g \big) \,\mathrm{d}v'$ for i = 0, 1.
(c) 
The boundedness and the convergence are uniform in $\mathcal{A}$.
Sketch of proof
(a)
First of all, by the maximum principle and conservation of mass, we have
$$\| f^\varepsilon(\cdot,t,\cdot) \|_{L^1(\mathbb{R}^3 \times V)} = \| f_0 \|_{L^1(\mathbb{R}^3 \times V)} < \infty ,$$
and, following the same arguments as in [5], we integrate in time along the characteristics to obtain
$$\begin{aligned} f^\varepsilon(x,t,v) &= f_0(x,v) + \int_0^t \mathcal{K}_\varepsilon(f^\varepsilon)\Big( x - \frac{vs}{\varepsilon},\, t-s,\, v \Big) \,\mathrm{d}s \\ &\le f_0(x,v) + \int_0^t \int_V K_\varepsilon\Big( x - \frac{vs}{\varepsilon},\, t-s,\, v,\, v' \Big)\, f^\varepsilon\Big( x - \frac{vs}{\varepsilon},\, t-s,\, v' \Big) \,\mathrm{d}v' \,\mathrm{d}s \\ &\le f_0(x,v) + 2C \int_0^t \int_V f^\varepsilon\Big( x - \frac{vs}{\varepsilon},\, t-s,\, v' \Big) \,\mathrm{d}v' \,\mathrm{d}s . \end{aligned}$$
Noting that $f_0 \in L^1_+ \cap L^\infty$ and $0 < K_\varepsilon = K_0 + \varepsilon K_1 \le (1+\varepsilon) C \le 2C$ for sufficiently small ε, we have:
$$\| f^\varepsilon(\cdot,t,\cdot) \|_{L^\infty(\mathbb{R}^3 \times V)} \le \| f_0 \|_{L^\infty(\mathbb{R}^3 \times V)} + 2C|V| \int_0^t \| f^\varepsilon(\cdot,s,\cdot) \|_{L^\infty(\mathbb{R}^3 \times V)} \,\mathrm{d}s .$$
Invoking the Grönwall lemma, one obtains a bound on $\| f^\varepsilon(\cdot,t,\cdot) \|_{L^\infty(\mathbb{R}^3 \times V)}$. Since the only role that K_i plays is through its bound C, as in (11), the estimate that we obtain is uniform in $\mathcal{A}$ and independent of ε for sufficiently small ε.
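For completeness, the Grönwall step can be written out as follows (a standard estimate; the constant is not claimed to be sharp): with $a(t) := \| f^\varepsilon(\cdot,t,\cdot) \|_{L^\infty(\mathbb{R}^3 \times V)}$, the inequality above reads $a(t) \le \| f_0 \|_{L^\infty} + 2C|V| \int_0^t a(s) \,\mathrm{d}s$, so that
$$a(t) \le \| f_0 \|_{L^\infty(\mathbb{R}^3 \times V)}\, e^{2C|V|\, t} \le \| f_0 \|_{L^\infty(\mathbb{R}^3 \times V)}\, e^{2C|V|\, T} \qquad \text{for all } t \in [0,T] ,$$
a bound that depends only on C, |V|, T and the initial data.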
(b)
We show that f^ε is a Cauchy sequence in ε. For this purpose, we denote by f^ε and $f^{\tilde\varepsilon}$ the solutions of the chemotaxis Equation (3) with scaling parameters ε and $\tilde\varepsilon$, respectively. We also denote the difference $\hat f^{\varepsilon,\tilde\varepsilon} := f^\varepsilon - f^{\tilde\varepsilon}$. Subtracting the two equations, we have:
$$\begin{aligned} \varepsilon^2 \partial_t \hat f^{\varepsilon,\tilde\varepsilon} + \varepsilon\, v \cdot \nabla_x \hat f^{\varepsilon,\tilde\varepsilon} &= \mathcal{K}_0(\hat f^{\varepsilon,\tilde\varepsilon}) + \varepsilon\, \mathcal{K}_1(\hat f^{\varepsilon,\tilde\varepsilon}) - (\varepsilon^2 - \tilde\varepsilon^2)\, \partial_t f^{\tilde\varepsilon} - (\varepsilon - \tilde\varepsilon)\, v \cdot \nabla_x f^{\tilde\varepsilon} + (\varepsilon - \tilde\varepsilon)\, \mathcal{K}_1(f^{\tilde\varepsilon}) \\ &= \mathcal{K}_\varepsilon(\hat f^{\varepsilon,\tilde\varepsilon}) \underbrace{ \, - (\varepsilon^2 - \tilde\varepsilon^2)\, \partial_t f^{\tilde\varepsilon} - (\varepsilon - \tilde\varepsilon)\, v \cdot \nabla_x f^{\tilde\varepsilon} + (\varepsilon - \tilde\varepsilon)\, \mathcal{K}_1(f^{\tilde\varepsilon}) }_{=:\, S} \end{aligned}$$
with trivial initial data $\hat f^{\varepsilon,\tilde\varepsilon}(x,0,v) = 0$. This is an equation with a source term S. Using the same argument as in (a), the $L^\infty$ boundedness of the time and spatial derivatives $\partial_t f^{\tilde\varepsilon}$, $\nabla_x f^{\tilde\varepsilon}$ appearing in S can be shown, meaning that S is of order $\varepsilon - \tilde\varepsilon$. Running (11) again with this extra source term, we have
$$\| f^\varepsilon - f^{\tilde\varepsilon} \|_{L^\infty( [0,T];\, L^1 \cap L^\infty(\mathbb{R}^3 \times V) )} = \mathcal{O}(\varepsilon - \tilde\varepsilon) .$$
Hence, {f^ε} is a Cauchy sequence, and thus converges to some $f \in L^\infty\big( [0,T],\, L^1_+ \cap L^\infty(\mathbb{R}^3 \times V) \big)$.
It remains to prove that f = ρF almost everywhere in $[0,T] \times \mathbb{R}^3 \times V$, with ρ satisfying the Keller–Segel Equation (4) with D, Γ as given in Equations (8) and (9). This follows by arguments rather similar to those in [5] and is therefore omitted here. Since only the boundedness of (K_0, K_1) is used in the proof, the convergence is uniform in $\mathcal{A}$. □

3. Bayesian Inverse Problem Setup

Associated with the two forward models, there are two inverse problems. We describe the experimental setup and present both inverse problems in the Bayesian inference formulation.
In the lab setup, it is assumed that the bacteria plate is large enough that the boundary plays a negligible role. At the initial time, the bacteria cells are distributed on the plate. One then injects chemoattractants onto the plate in a controlled manner, so that c(t,x) is explicitly given, forcing K_i and (D, Γ) to be functions of (t,x,v) or (t,x) only. The bacteria density at location x at time t is then measured.
Measuring is usually done by taking high-resolution photos of the plate at time t and counting the bacteria in a small neighborhood of location x. Another possibility is taking a sample of the bacteria at location x and measuring the bacteria density of the sample by classical techniques such as optical density (OD600) or flow cytometry; see, e.g., [23,24]. The latter, however, is an invasive technique and thus allows measurements at only one time t.
The whole experiment amounts to taking data of the following operator:
$$\mathcal{A}^\varepsilon_{K_0,K_1} : \ f_0 \ \mapsto \ \int_V f^\varepsilon(t,x,v) \,\mathrm{d}v$$
if the dynamics of the bacteria are modeled by (3), and
$$\mathcal{A}^0_{K_0,K_1} = \mathcal{A}_{D,\Gamma} : \ \rho_0 := \int_V f_0 \,\mathrm{d}v \ \mapsto \ \rho(t,x)$$
if the dynamics of the bacteria are modeled by (4). Noting that (D, Γ) are uniquely determined by (K_0, K_1) through Equations (8) and (9), we can identify $\mathcal{A}_{D,\Gamma}$ with $\mathcal{A}^0_{K_0,K_1}$. Although the more natural macroscopic inverse problem would be to recover the diffusion and drift coefficients D, Γ in (4), we choose to formulate the inverse problem for the tumbling kernel (K_0, K_1). This allows us to compare the solutions of the kinetic and the macroscopic inverse problems.
Remark 3.
In order to reasonably compare the solutions to the inverse problems, the solutions have to be of the same kind. We choose to reconstruct (K_0, K_1) in both the kinetic and the macroscopic inverse problem; see Figure 1 (left). The macroscopic inverse problem is thus also formulated for (K_0, K_1), of which (D, Γ) is a function. Alternatively, one could also reconstruct (D, Γ) from both models. In the kinetic setting, this would mean reconstructing $(K_0^{\mathrm{chem}}, K_1^{\mathrm{chem}})$ and then transforming to values of $(D^{\mathrm{chem}}, \Gamma^{\mathrm{chem}})$ by Equations (8) and (9); see Figure 1 (right).
We do not choose this alternative, because the information on the tumbling kernel ( K 0 , K 1 ) is microscopic and thus more detailed. Furthermore, with a fixed ( K 0 , K 1 ) , ( D , Γ ) can be uniquely determined, and thus the convergence can be viewed as a mere consequence; see also Remark 5.
Multiple experiments can be conducted using different initial profiles, but the same controlled c(t,x) is used to ensure that the to-be-reconstructed K_i is unchanged from experiment to experiment. Denote by k ∈ {1, …, K} the indices of the different initial data setups, and by j = (j_1, j_2) ∈ {1, …, J_1} × {1, …, J_2} the indices of the measuring time and location, with $t_j = t_{j_1}$ being the measuring time and $\chi_j = \chi_{j_2} \in C_c(\mathbb{R}^3)$ being the spatial test function. Then, with (3) and (4) as the forward models, we take the measurements, respectively:
$$\mathcal{G}^{\varepsilon,\mathrm{chem}}_{jk}(K_0,K_1) = \mathcal{M}_j \circ \mathcal{A}^\varepsilon_{K_0,K_1}(f_0^{(k)}) = \int_{\mathbb{R}^3} \int_V f^{\varepsilon,(k)}(x, t_j, v) \,\mathrm{d}v \; \chi_j(x) \,\mathrm{d}x ,$$
$$\mathcal{G}^{\mathrm{KS}}_{jk}(K_0,K_1) = \mathcal{M}_j \circ \mathcal{A}^0_{K_0,K_1}(\rho_0^{(k)}) = \int_{\mathbb{R}^3} \rho^{(k)}(x, t_j)\, \chi_j(x) \,\mathrm{d}x ,$$
where $\mathcal{M}_j$ are the measuring operators with corresponding test functions $(\delta_{t_j}, \chi_j)$. One can think of χ_j as a compactly supported blob function concentrated at a certain location, meaning that all the bacterial cells in a small neighborhood are counted towards this particular measurement; see Figure 2. This is a reasonable model when counting bacteria in a small neighborhood or taking samples with a pipette.
Throughout the paper, we assume that the initial data and the measuring operators are controlled:
$$\| f_0^{(k)} \|_{L^1},\ \| f_0^{(k)} \|_{L^\infty} < C_\rho \quad \forall k , \qquad \max\big\{ \| \chi_j \|_{L^1},\, \| \chi_j \|_{L^2},\, \| \chi_j \|_{L^\infty},\, |\operatorname{supp} \chi_j| \big\} < C_x \quad \forall j .$$
Remark 4.
The measurements $\mathcal{G}^{\varepsilon,\mathrm{chem}}_{jk}(K_0,K_1)$, $\mathcal{G}^{\mathrm{KS}}_{jk}(K_0,K_1)$ are formulated in a rather general form in Equations (14) and (15) due to the freedom in the choice of the test function $\chi_j \in C_c(\mathbb{R}^3)$.
However, all subsequent derivations also hold true for the specific case of pointwise measurements with $t_j := t_{j_1}$ and $x_j := x_{j_2}$. The measurements would then be $\mathcal{G}^{\varepsilon,\mathrm{chem}}_{jk}(K_0,K_1) = \int_V f^{\varepsilon,(k)}(x_j, t_j, v) \,\mathrm{d}v$ and $\mathcal{G}^{\mathrm{KS}}_{jk}(K_0,K_1) = \rho^{(k)}(x_j, t_j)$, which would correspond to measuring operators $\mathcal{M}_j$ with test functions $(\delta_{t_{j_1}}, \delta_{x_{j_2}})$.
Since measuring error is unavoidable, we assume that it introduces an additive error and collect data of the form
$$y^{\varepsilon,\mathrm{chem}}_{jk} = \mathcal{G}^{\varepsilon,\mathrm{chem}}_{jk}(K_0,K_1) + \eta_{jk} , \qquad y^{\mathrm{KS}}_{jk} = \mathcal{G}^{\mathrm{KS}}_{jk}(K_0,K_1) + \eta_{jk} ,$$
where the noise $\eta_{jk}$ is assumed to be a random variable independently drawn from a Gaussian distribution $\mathcal{N}(0, \gamma^2)$ of known variance $\gamma^2 > 0$.
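For concreteness, the following sketch illustrates how noisy data of this form could be assembled from a forward solve. The density, the test functions and all numerical values are placeholders (a one-dimensional cross-section is used instead of ℝ³); it is not the experimental protocol described above.

```python
import numpy as np

rng = np.random.default_rng(1)

# spatial grid (1D cross-section for illustration) and measurement times
x = np.linspace(-5.0, 5.0, 401)
t_meas = [1.0, 2.0]

def rho_solution(x, t):
    # Placeholder for the forward solve of (3) or (4): a spreading, drifting
    # Gaussian stands in for the computed macroscopic density rho(x, t).
    return np.exp(-(x - 0.5 * t) ** 2 / (2.0 * (1.0 + t))) / np.sqrt(2 * np.pi * (1.0 + t))

def bump(x, center, width=0.5):
    # Compactly supported test function chi_j concentrated near `center`.
    z = (x - center) / width
    chi = np.zeros_like(x)
    inside = np.abs(z) < 1.0
    chi[inside] = np.exp(-1.0 / (1.0 - z[inside] ** 2))
    return chi

centers = [-1.0, 0.0, 1.0]
gamma = 1e-3                     # known noise standard deviation

# measurements G_{jk} = integral of rho(x, t_j) * chi_j(x) dx, plus Gaussian noise
G = np.array([[np.trapz(rho_solution(x, t) * bump(x, c), x) for c in centers]
              for t in t_meas])
y = G + gamma * rng.standard_normal(G.shape)
print(y)
```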
In the Bayesian formulation, the to-be-reconstructed parameter (K_0, K_1) is assumed to be a random variable, and the goal is to reconstruct its distribution. Suppose a priori that we know the parameter is drawn from the distribution μ_0; then, the Bayesian posterior distributions for (K_0, K_1) are
$$\mu^y_{\varepsilon,\mathrm{chem}}(K_0,K_1) = \frac{1}{Z_{\varepsilon,\mathrm{chem}}}\, \mu^{(K_0,K_1)}_{\varepsilon,\mathrm{chem}}(y)\, \mu_0(K_0,K_1) = \frac{1}{Z_{\varepsilon,\mathrm{chem}}}\, e^{-\frac{1}{2\gamma^2} \| \mathcal{G}^{\varepsilon,\mathrm{chem}}(K_0,K_1) - y \|^2}\, \mu_0(K_0,K_1) ,$$
using (3) as the forward model, and
$$\mu^y_{\mathrm{KS}}(K_0,K_1) = \frac{1}{Z_{\mathrm{KS}}}\, \mu^{(K_0,K_1)}_{\mathrm{KS}}(y)\, \mu_0(K_0,K_1) = \frac{1}{Z_{\mathrm{KS}}}\, e^{-\frac{1}{2\gamma^2} \| \mathcal{G}^{\mathrm{KS}}(K_0,K_1) - y \|^2}\, \mu_0(K_0,K_1) ,$$
using (4) as the forward model. In these formulas, Z is the normalization constant ensuring $\int 1 \,\mathrm{d}\mu^y(K_0,K_1) = 1$, and
$$\mu^{(K_0,K_1)}(y) = e^{-\frac{1}{2\gamma^2} \| \mathcal{G}(K_0,K_1) - y \|^2}$$
is the likelihood of observing the data y from a model with a tumbling kernel, or with diffusion and drift terms, derived from (K_0, K_1).
In Section 4, we specify the conditions on μ 0 to ensure the well-definedness of μ y .
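Although this paper is purely analytical, the posterior distributions above are exactly what a sampling algorithm would target in practice. The following is a minimal random-walk Metropolis sketch over a low-dimensional stand-in parametrization of (K_0, K_1); the forward map, the box prior and all numerical values are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)

y = np.array([1.3, 0.4])          # observed data (placeholder values)
gamma = 0.05                      # noise standard deviation

def forward_G(q):
    # Placeholder forward map q -> G(q); in practice this is a solve of the
    # kinetic model (3) or the Keller-Segel model (4) followed by the
    # measurement operators in (14)/(15).
    return np.array([q[0] + 0.5 * q[1], q[0] * q[1]])

def log_prior(q):
    # Simple prior supported on a box standing in for the admissible set A.
    return 0.0 if np.all((q > 0.0) & (q < 2.0)) else -np.inf

def log_post(q):
    # log-posterior up to the normalization constant Z
    return log_prior(q) - 0.5 * np.sum((forward_G(q) - y) ** 2) / gamma ** 2

def metropolis(q0, n_steps=20_000, step=0.05):
    q = np.asarray(q0, dtype=float)
    lp = log_post(q)
    chain = np.empty((n_steps, q.size))
    for i in range(n_steps):
        prop = q + step * rng.standard_normal(q.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:      # accept/reject step
            q, lp = prop, lp_prop
        chain[i] = q
    return chain

samples = metropolis([1.0, 0.5])
print("posterior mean:", samples[5000:].mean(axis=0))
```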
Remark 5.
Since the macroscopic model does not explicitly depend on ( K 0 , K 1 ) , the distribution of μ KS y ( D , Γ ) is of interest in the macroscopic description (4). There are two ways to derive it starting with a prior distribution on ( K 0 , K 1 ) : The natural way would be to transform the prior distribution to a prior on ( D , Γ ) by Equations (8) and (9) and then consider the inverse problem of reconstructing ( D , Γ ) . This approach is displayed by the lower path in Figure 3. If, however, the posterior distribution μ KS y ( K 0 , K 1 ) is calculated ahead of the transformation (as in our case), one could instead transform this posterior distribution directly to a distribution in the ( D , Γ ) space following the upper path in Figure 3. Naturally, the question arises of whether the two ways lead to the same posterior distribution. It turns out that they do. Considering the second possibility, we see that the likelihood and thus the normalization constant only depend on ( D , Γ ) , because we are in the macroscopic model. Hence, only the prior distribution is transformed, as is the case for the first possibility.

4. Convergence of Posterior Distributions

One natural question arises. The two different forward models provide two different posterior distribution functions of ( K 0 , K 1 ) . Which distribution is the correct one, or, rather, what is the relation between the two posterior distributions?
As discussed in Section 2, the two forward models are asymptotically equivalent in the long-time large-space regime, so it is expected that the two posterior distributions converge as well. This would suggest that the amount of information given by the measurements is equally represented by the two forward models. However, this convergence result is not as straightforward as it may seem. One issue is the control of the initial data and the measurement operators: for each initial datum, the solution converges in $L^\infty\big( [0,T];\, L^1_+ \cap L^\infty(\mathbb{R}^3 \times V) \big)$; we now have a list of initial data, and the solutions are tested against a set of measuring operators, so we need uniform convergence when tested on the dual space. Furthermore, to show the convergence of two distribution functions, a certain metric needs to be chosen on the space of probability measures. The issue of how the convergence for one fixed (K_0, K_1) translates to convergence on the entire admissible set also needs to be addressed.
By choosing the admissible set (6), we formulated an assumption on the tumbling kernels ( K 0 , K 1 ) ahead of time. With this a priori knowledge, we showed the uniform boundedness and convergence of the solutions f ε to the chemotaxis Equation (3) over the function set A in Theorem 1. This will play a crucial role in the convergence proof for the inverse problem. From here on, we assume that the prior distribution μ 0 is supported on A .
Before we show the convergence, as an a priori estimate, we first show the well-posedness of the Bayesian posterior distributions in Lemma 1, following [19,20].
Lemma 1.
If the initial conditions $f_0^{(k)} \in C_c^{1,+}(\mathbb{R}^3 \times V)$ and the test functions $\chi_j \in C_c(\mathbb{R}^3)$ satisfy (16), then the following properties of the posterior distributions hold true:
(a) 
The measurements G ε , chem and G KS are uniformly bounded on A (and uniformly in ε).
(b) 
For small enough ε, the measurements $\mathcal{G}^{\varepsilon,\mathrm{chem}}$ and $\mathcal{G}^{\mathrm{KS}}$ are Lipschitz continuous with respect to the tumbling kernels (K_0, K_1) under the norm $\| (K_0,K_1) \|_* := \max\{ \| K_0 \|_{L^\infty},\, \| K_1 \|_{L^\infty} \}$ on $\mathcal{A}$.
(c) 
The posterior distributions are well-posed and absolutely continuous with regard to each other.
Proof.  
(a)
For every ( j , k ) , we have:
$$| \mathcal{G}^{\mathrm{KS}}_{jk}(K_0,K_1) | = \Big| \int_{\mathbb{R}^3} \rho^{(k)}(x,t_j)\, \chi_j(x) \,\mathrm{d}x \Big| \le \| \chi_j \|_{L^\infty}\, \| \rho^{(k)}(\cdot,t_j) \|_{L^1(\mathbb{R}^3)} = \| \chi_j \|_{L^\infty}\, \| \rho^{(k)}_0 \|_{L^1(\mathbb{R}^3)} \le C_x\, C_\rho ,$$
where we use the density conservation $\| \rho(\cdot,t) \|_{L^1(\mathbb{R}^3)} = \| \rho_0 \|_{L^1(\mathbb{R}^3)}$ for all t. Analogously, we have $| \mathcal{G}^{\varepsilon,\mathrm{chem}}_{jk}(K_0,K_1) | \le C_x\, C_\rho$. Note that this bound is independent of ε.
(b)
For the chemotaxis model, we have, for $(K_0, K_1), (\tilde K_0, \tilde K_1) \in \mathcal{A}$,
$$\begin{aligned} | \mathcal{G}^{\varepsilon,\mathrm{chem}}_{jk}(K_0,K_1) - \mathcal{G}^{\varepsilon,\mathrm{chem}}_{jk}(\tilde K_0,\tilde K_1) | &= \Big| \int_{\mathbb{R}^3} \int_V \big( f^{\varepsilon,(k)} - \tilde f^{\varepsilon,(k)} \big)(x,t_j,v) \,\mathrm{d}v \; \chi_j(x) \,\mathrm{d}x \Big| \\ &\le \| \chi_j \|_{L^\infty} \int_{\operatorname{supp}\chi_j} \int_V \big| \bar f^{\varepsilon,(k)}(x,t_j,v) \big| \,\mathrm{d}v \,\mathrm{d}x \\ &\le C_x\, |V|\, |\operatorname{supp}\chi_j|\, \| \bar f^{\varepsilon,(k)}(\cdot,t_j,\cdot) \|_{L^\infty(\mathbb{R}^3 \times V)} \le C_x^2\, |V|\, \| \bar f^{\varepsilon,(k)}(\cdot,t_j,\cdot) \|_{L^\infty(\mathbb{R}^3 \times V)} , \end{aligned}$$
where $f^{\varepsilon,(k)}$ and $\tilde f^{\varepsilon,(k)}$ are solutions to the initial value problem (3) with initial condition $f_0^{(k)}$ and tumbling kernels $K_\varepsilon = K_0 + \varepsilon K_1$ and $\tilde K_\varepsilon = \tilde K_0 + \varepsilon \tilde K_1$, respectively. Their difference $\bar f^{\varepsilon,(k)} := f^{\varepsilon,(k)} - \tilde f^{\varepsilon,(k)}$ satisfies the scaled difference equation:
$$\varepsilon^2 \partial_t \bar f^{\varepsilon,(k)}(x,t,v) + \varepsilon\, v \cdot \nabla_x \bar f^{\varepsilon,(k)}(x,t,v) = \tilde{\mathcal{K}}_\varepsilon(\bar f^{\varepsilon,(k)}) + \bar{\mathcal{K}}_\varepsilon(f^{\varepsilon,(k)}) , \qquad \bar f^{\varepsilon,(k)}(x,0,v) = 0 .$$
Here, $\bar{\mathcal{K}}_\varepsilon$ denotes the tumbling operator with kernel $\bar K_\varepsilon := K_\varepsilon - \tilde K_\varepsilon$. Integration in s along the characteristics $\big( x - \tfrac{vs}{\varepsilon},\, t-s,\, v \big)$ shows
$$\begin{aligned} \bar f^{\varepsilon,(k)}(x,t,v) &= \int_0^t \Big[ \tilde{\mathcal{K}}_\varepsilon(\bar f^{\varepsilon,(k)}) + \bar{\mathcal{K}}_\varepsilon(f^{\varepsilon,(k)}) \Big]\Big( x - \frac{vs}{\varepsilon},\, t-s,\, v \Big) \,\mathrm{d}s \\ &= \int_0^t \int_V \Big[ \tilde K_\varepsilon(\cdot,v,v')\, \bar f^{\varepsilon,(k)}(\cdot,v') - \tilde K_\varepsilon(\cdot,v',v)\, \bar f^{\varepsilon,(k)}(\cdot,v) \Big]\Big( x - \frac{vs}{\varepsilon},\, t-s \Big) \,\mathrm{d}v' \\ &\qquad + \int_V \Big[ \bar K_\varepsilon(\cdot,v,v')\, f^{\varepsilon,(k)}(\cdot,v') - \bar K_\varepsilon(\cdot,v',v)\, f^{\varepsilon,(k)}(\cdot,v) \Big]\Big( x - \frac{vs}{\varepsilon},\, t-s \Big) \,\mathrm{d}v' \,\mathrm{d}s . \end{aligned}$$
This yields
$$\begin{aligned} \| \bar f^{\varepsilon,(k)}(\cdot,t,\cdot) \|_{L^\infty(\mathbb{R}^3 \times V)} &\le 2 \| \tilde K_\varepsilon \|_{L^\infty}\, |V| \int_0^t \| \bar f^{\varepsilon,(k)}(\cdot,t-s,\cdot) \|_{L^\infty(\mathbb{R}^3 \times V)} \,\mathrm{d}s + 2 \| K_\varepsilon - \tilde K_\varepsilon \|_{L^\infty}\, |V|\, \| f^{\varepsilon,(k)} \|_{L^\infty}\, t \\ &\le 4C |V| \int_0^t \| \bar f^{\varepsilon,(k)}(\cdot,s,\cdot) \|_{L^\infty(\mathbb{R}^3 \times V)} \,\mathrm{d}s + 4\, \| (K_0 - \tilde K_0,\, K_1 - \tilde K_1) \|_*\, |V|\, c_f\, T \end{aligned}$$
since one has $\| K_\varepsilon \|_{L^\infty} \le 2 \| (K_0,K_1) \|_* \le 2C$ for small enough ε < 1, and $\| f^{\varepsilon,(k)} \|_{L^\infty} \le c_f$ is bounded uniformly on $\mathcal{A}$ by Theorem 1 (a). Additionally, $c_f$ can be chosen independently of k by inserting the uniform boundedness of $\| f_0^{(k)} \|_{L^\infty}$ in (16) into Equation (12). The Grönwall lemma thus gives
$$\| \bar f^{\varepsilon,(k)}(\cdot,t,\cdot) \|_{L^\infty(\mathbb{R}^3 \times V)} \le L(T, C, C_\rho)\, \| (K_0 - \tilde K_0,\, K_1 - \tilde K_1) \|_*$$
with some coefficient L depending on T, C and C ρ . Inserting this into Equation (19) results in the desired Lipschitz continuity.
We similarly study the Lipschitz continuity of the Keller–Segel measurements G j k KS ( K 0 , K 1 ) . The proof strategy is almost the same. With some computational effort, one can see:
$$| \mathcal{G}^{\mathrm{KS}}_{jk}(K_0,K_1) - \mathcal{G}^{\mathrm{KS}}_{jk}(\tilde K_0,\tilde K_1) | \le \| \chi_j \|_{L^2}\, \| ( \rho^{(k)} - \tilde\rho^{(k)} )(\cdot,t_j) \|_{L^2} \le C_x\, c\, \Big( \| D - \tilde D \|_{L^\infty([0,T] \times \mathbb{R}^3;\, \mathbb{R}^{3\times 3})} + \| \Gamma - \tilde\Gamma \|_{L^\infty([0,T] \times \mathbb{R}^3;\, \mathbb{R}^3)} \Big)$$
where $(\Gamma, D)$, $(\tilde\Gamma, \tilde D)$ are the drift and diffusion terms derived from the collision operators defined by (K_0, K_1) and $(\tilde K_0, \tilde K_1)$, respectively, by Equations (8) and (9). The constant c depends monotonically on the $L^2$ norms of $\rho^{(k)}$ and $\nabla_x \rho^{(k)}$, which are bounded uniformly on $\mathcal{A}$. By the linear relation between D and κ and between Γ and θ, this directly translates to
$$| \mathcal{G}^{\mathrm{KS}}_{jk}(K_0,K_1) - \mathcal{G}^{\mathrm{KS}}_{jk}(\tilde K_0,\tilde K_1) | \le \tilde c\, c\, C_x \Big( \| \kappa - \tilde\kappa \|_{L^\infty([0,T] \times \mathbb{R}^3;\, L^2(V; \frac{\mathrm{d}v}{F}; \mathbb{R}^3))} + \| \theta - \tilde\theta \|_{L^\infty([0,T] \times \mathbb{R}^3;\, L^2(V; \frac{\mathrm{d}v}{F}))} \Big) ,$$
with constant c ˜ depending only on V. Finally, the Lax–Milgram theorem shows the continuous dependence of
$$\| \theta - \tilde\theta \|_{L^2(V; \frac{\mathrm{d}v}{F})} + \| \kappa - \tilde\kappa \|_{L^2(V; \frac{\mathrm{d}v}{F}; \mathbb{R}^3)} \le \hat c\, \| (K_0,K_1) - (\tilde K_0,\tilde K_1) \|_* ,$$
where c ^ only depends on V , α , C .
(c)
By (a), the likelihoods $e^{-\frac{1}{2\gamma^2} \| \mathcal{G}(K_0,K_1) - y \|^2}$ are bounded away from zero and bounded uniformly on $\mathcal{A}$ (and in ε). Thus, the normalization constants Z are too. Part (b) guarantees the measurability of the likelihoods. In total, this shows that the posterior distributions are well-defined and absolutely continuous with respect to each other. Since the likelihoods are continuous in y, the well-posedness of the posterior distributions follows. □
We are now ready to show the convergence of the two posterior measures. There are two quantities that we use to measure the difference between two distributions:
  • Kullback–Leibler divergence
    $$d_{\mathrm{KL}}(\mu_1, \mu_2) := \int_{\mathcal{A}} \log\frac{\mathrm{d}\mu_1}{\mathrm{d}\mu_2}(u) \,\mathrm{d}\mu_1(u) ;$$
  • Hellinger metric
    $$d_{\mathrm{Hell}}(\mu_1, \mu_2)^2 = \frac{1}{2} \int_{\mathcal{A}} \left( \sqrt{\frac{\mathrm{d}\mu_1}{\mathrm{d}\mu_0}(u)} - \sqrt{\frac{\mathrm{d}\mu_2}{\mathrm{d}\mu_0}(u)} \right)^2 \mathrm{d}\mu_0(u) .$$
The two quantities both evaluate the distance between two probability measures μ_1 and μ_2 that are either absolutely continuous with respect to each other or with respect to a third probability measure μ_0. Both are frequently used for comparing two distribution functions, e.g., in machine learning [25,26,27,28,29,30] or in inverse problem settings [21,22]. Even though the Kullback–Leibler divergence lacks the symmetry and triangle-inequality properties of a metric, it has gained popularity due to its close connection to several information-theoretic concepts, such as the Shannon entropy or the Fisher information metric [31]. The Hellinger metric, in contrast, is a true metric. Although it does not admit as direct an information-theoretic interpretation as the Kullback–Leibler divergence, its strength lies in the fact that convergence in the Hellinger metric implies convergence of the expectation of any polynomially bounded function with respect to either of the posterior distributions, as explained in [19]. In particular, the mean, covariance and further moments of the distributions converge.
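As a small numerical illustration (not used in the analysis), both discrepancies can be evaluated once the densities of the two posteriors with respect to a common reference measure μ_0 are available. Here, a one-dimensional grid with a uniform reference measure and two Gaussian-shaped stand-in posteriors are used; all values are placeholders.

```python
import numpy as np

# Grid over a one-dimensional parameter u standing in for (K_0, K_1), with a
# uniform reference measure mu_0 represented by quadrature weights w.
u = np.linspace(0.0, 1.0, 2001)
w = np.full_like(u, (u[-1] - u[0]) / (u.size - 1))

def posterior_density(u, center, width):
    # Unnormalized Gaussian-shaped density with respect to mu_0, normalized below.
    p = np.exp(-0.5 * ((u - center) / width) ** 2)
    return p / np.sum(p * w)

p1 = posterior_density(u, 0.50, 0.05)    # stand-in for the kinetic posterior
p2 = posterior_density(u, 0.52, 0.05)    # stand-in for the Keller-Segel posterior

# Kullback-Leibler divergence d_KL(mu_1, mu_2) = integral of log(p1/p2) p1 dmu_0
d_kl = np.sum(np.where(p1 > 0, p1 * np.log(p1 / p2), 0.0) * w)

# Hellinger distance: d_Hell^2 = 1/2 * integral of (sqrt(p1) - sqrt(p2))^2 dmu_0
d_hell = np.sqrt(0.5 * np.sum((np.sqrt(p1) - np.sqrt(p2)) ** 2 * w))

print(f"KL = {d_kl:.4e},  Hellinger = {d_hell:.4e},  Hellinger^2 <= KL: {d_hell**2 <= d_kl}")
```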
Before comparing the posterior measures, we need to have a look at the convergence of the measurements G ( K 0 , K 1 ) .
Lemma 2.
Assuming that the initial and testing functions satisfy (16), the chemotaxis measurements $\mathcal{G}^{\varepsilon,\mathrm{chem}}$ converge to the Keller–Segel measurements $\mathcal{G}^{\mathrm{KS}}$ uniformly on $\mathcal{A}$ as ε → 0.
Proof. 
Theorem 1 shows the convergence of f^ε to ρF in $L^\infty\big( [0,T],\, L^1_+ \cap L^\infty(\mathbb{R}^3 \times V) \big)$ uniformly on $\mathcal{A}$. As a consequence, we have the convergence of the measurements:
$$\begin{aligned} \big| \mathcal{G}^{\varepsilon,\mathrm{chem}}_{jk}(K_0,K_1) - \mathcal{G}^{\mathrm{KS}}_{jk}(K_0,K_1) \big| &= \Big| \int_{\mathbb{R}^3} \int_V f^{\varepsilon,(k)}(x,t_j,v) \,\mathrm{d}v \; \chi_j(x) \,\mathrm{d}x - \int_{\mathbb{R}^3} \rho^{(k)}(x,t_j)\, \chi_j(x) \,\mathrm{d}x \Big| \\ &\le \int_{\mathbb{R}^3} \int_V \big| f^{\varepsilon,(k)}(x,t_j,v) - \rho^{(k)}(x,t_j)\, F(v) \big| \,\mathrm{d}v\, | \chi_j(x) | \,\mathrm{d}x \\ &\le \big\| f^{\varepsilon,(k)}(\cdot,t_j,\cdot) - \rho^{(k)}(\cdot,t_j)\, F \big\|_{L^\infty(\mathbb{R}^3 \times V)}\, |V|\, \| \chi_j \|_{L^1(\mathbb{R}^3)} \ \longrightarrow\ 0 , \end{aligned}$$
where we use the form F ≡ 1/|V|. By the uniform convergence of f^ε to ρF, this holds uniformly on $\mathcal{A}$. Since the initial data and measuring test functions satisfy (16), we have uniform convergence over (j,k) as well. □
We can now prove the following theorem on the asymptotic equivalence of the two posterior measures describing the distribution of the tumbling kernels ( K 0 , K 1 ) A if the dynamics of the bacteria are modeled by the kinetic (3) or macroscopic Equation (4).
Theorem 2.
Let the measurements of the macroscopic bacteria density be of the form (14) and (15) for an underlying kinetic chemotaxis model or a Keller–Segel model, respectively. The measuring test functions $\chi_j \in C_c(\mathbb{R}^3)$ and initial data $f_0^{(k)} \in C_c^{1,+}(\mathbb{R}^3 \times V)$ are assumed to satisfy (16). Given a prior distribution μ_0 on $\mathcal{A}$ and additive centered Gaussian noise in the data, the posterior distributions for the tumbling kernel derived from the kinetic chemotaxis equation and from the macroscopic Keller–Segel equation as underlying models are asymptotically equivalent in the Kullback–Leibler divergence:
$$d_{\mathrm{KL}}\big( \mu^y_{\varepsilon,\mathrm{chem}},\, \mu^y_{\mathrm{KS}} \big) \ \xrightarrow{\ \varepsilon \to 0\ }\ 0 .$$
Proof of Theorem 2
With the above lemmas, one can proceed as in the proof in [21]. By the definition of the normalization constants, the integrand of the Kullback–Leibler divergence is of order
$$\begin{aligned} \log \frac{\mathrm{d}\mu^y_{\varepsilon,\mathrm{chem}}}{\mathrm{d}\mu^y_{\mathrm{KS}}}(K_0,K_1) &= \log \left( \frac{ \mu_0(K_0,K_1)\, \mu^{(K_0,K_1)}_{\varepsilon,\mathrm{chem}}(y)\, Z_{\mathrm{KS}} }{ Z_{\varepsilon,\mathrm{chem}}\, \mu_0(K_0,K_1)\, \mu^{(K_0,K_1)}_{\mathrm{KS}}(y) } \right) = \log \frac{Z_{\mathrm{KS}}}{Z_{\varepsilon,\mathrm{chem}}} + \log \frac{ \mu^{(K_0,K_1)}_{\varepsilon,\mathrm{chem}}(y) }{ \mu^{(K_0,K_1)}_{\mathrm{KS}}(y) } \\ &= \mathcal{O}\big( | Z_{\varepsilon,\mathrm{chem}} - Z_{\mathrm{KS}} | \big) + \mathcal{O}\big( | \mu^{(K_0,K_1)}_{\varepsilon,\mathrm{chem}}(y) - \mu^{(K_0,K_1)}_{\mathrm{KS}}(y) | \big) = \mathcal{O}\big( | \mu^{(K_0,K_1)}_{\varepsilon,\mathrm{chem}}(y) - \mu^{(K_0,K_1)}_{\mathrm{KS}}(y) | \big) . \end{aligned}$$
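To spell out the step hidden in the last two equalities: since $Z = \int_{\mathcal{A}} \mu^{(K_0,K_1)}(y) \,\mathrm{d}\mu_0(K_0,K_1)$ and μ_0 is a probability measure,
$$\big| Z_{\varepsilon,\mathrm{chem}} - Z_{\mathrm{KS}} \big| \le \int_{\mathcal{A}} \big| \mu^{(K_0,K_1)}_{\varepsilon,\mathrm{chem}}(y) - \mu^{(K_0,K_1)}_{\mathrm{KS}}(y) \big| \,\mathrm{d}\mu_0(K_0,K_1) \le \sup_{\mathcal{A}} \big| \mu^{(K_0,K_1)}_{\varepsilon,\mathrm{chem}}(y) - \mu^{(K_0,K_1)}_{\mathrm{KS}}(y) \big| ,$$
and, because the normalization constants are bounded away from zero uniformly in ε (see the proof of Lemma 1 (c)), $\log( Z_{\mathrm{KS}} / Z_{\varepsilon,\mathrm{chem}} ) = \mathcal{O}( | Z_{\varepsilon,\mathrm{chem}} - Z_{\mathrm{KS}} | )$. The estimates below are uniform on $\mathcal{A}$, so the distinction between the pointwise and the supremum bound is immaterial.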
Thus, we estimate
$$\big| \mu^{(K_0,K_1)}_{\varepsilon,\mathrm{chem}}(y) - \mu^{(K_0,K_1)}_{\mathrm{KS}}(y) \big| = \left| \exp\left( - \frac{ \| y - \mathcal{G}^{\varepsilon,\mathrm{chem}}(K_0,K_1) \|^2 }{2\gamma^2} \right) - \exp\left( - \frac{ \| y - \mathcal{G}^{\mathrm{KS}}(K_0,K_1) \|^2 }{2\gamma^2} \right) \right| \le c\, \Big| \| y - \mathcal{G}^{\varepsilon,\mathrm{chem}}(K_0,K_1) \|^2 - \| y - \mathcal{G}^{\mathrm{KS}}(K_0,K_1) \|^2 \Big|$$
for the Lipschitz constant c < ∞ of the map $x \mapsto \exp\big( -\frac{|x|}{2\gamma^2} \big)$, and
$$\Big| \| y - \mathcal{G}^{\varepsilon,\mathrm{chem}}(K_0,K_1) \|^2 - \| y - \mathcal{G}^{\mathrm{KS}}(K_0,K_1) \|^2 \Big| = \Big| \operatorname{tr}\Big[ \big( 2y - \mathcal{G}^{\varepsilon,\mathrm{chem}}(K_0,K_1) - \mathcal{G}^{\mathrm{KS}}(K_0,K_1) \big)^T \big( \mathcal{G}^{\mathrm{KS}}(K_0,K_1) - \mathcal{G}^{\varepsilon,\mathrm{chem}}(K_0,K_1) \big) \Big] \Big| \le \big\| 2y - \mathcal{G}^{\varepsilon,\mathrm{chem}}(K_0,K_1) - \mathcal{G}^{\mathrm{KS}}(K_0,K_1) \big\| \cdot \big\| \mathcal{G}^{\varepsilon,\mathrm{chem}}(K_0,K_1) - \mathcal{G}^{\mathrm{KS}}(K_0,K_1) \big\| .$$
The first factor is bounded uniformly on $\mathcal{A}$ and in ε by Lemma 1 (a), and Lemma 2 shows that the second factor converges to 0 uniformly on $\mathcal{A}$. It follows that
$$d_{\mathrm{KL}}\big( \mu^y_{\varepsilon,\mathrm{chem}},\, \mu^y_{\mathrm{KS}} \big) \to 0 . \qquad \square$$
The boundedness of the Hellinger metric by the Kullback–Leibler divergence,
$$d^2_{\mathrm{Hell}}(\mu_1, \mu_2) \le d_{\mathrm{KL}}(\mu_1, \mu_2) ,$$
as shown in Lemma 2.4 in [32], together with Theorem 2, yields the asymptotic equivalence of the posterior distributions also in the Hellinger metric.
Corollary 1.
In the framework of Theorem 2, one has
$$d_{\mathrm{Hell}}\big( \mu^y_{\varepsilon,\mathrm{chem}},\, \mu^y_{\mathrm{KS}} \big) \ \xrightarrow{\ \varepsilon \to 0\ }\ 0 .$$

5. Summary and Discussion

In this article, we considered bacterial movement in an environment with an attracting chemical substance that was not produced or consumed by the bacteria. The bacteria density was modeled to follow a chemotaxis Equation (3) on the kinetic level and a Keller–Segel Equation (4) on the macroscopic level. We studied the reconstruction of the tumbling coefficient using the measurement of the bacterial density at different times and locations using different initial data. After adapting the results from [5] in the parabolic scaling, we studied the equivalence between the reconstructions using the two different underlying models in the Bayesian framework. Assumptions on the prior information were made to guarantee the uniform convergence of the two forward models. This enabled us to show that the posterior distributions are properly defined and that convergence of the two posterior distributions holds true. The distance between two posterior distributions was measured in both the Kullback–Leibler divergence and the Hellinger metric.
The work presented here serves as a cornerstone for future research. On the one hand, the study can help in the design of an efficient inversion solver. Most inversion solvers are composed of many iterations of forward solvers. Since the kinetic chemotaxis equation lives on the phase space and is numerically much more expensive, the limiting Keller–Segel equation can serve as a good substitute for generating a good initial guess and speeding up the computation by reducing the number of iterations. On the other hand, the approach taken in this study is rather general and, with minor modifications, it also provides the foundation for explaining experiments such as the one reported in [7].

Author Contributions

Conceptualization, Q.L. and M.T.; methodology, K.H., Q.L. and M.T.; formal analysis, K.H.; investigation, K.H., C.K., Q.L. and M.T.; writing—original draft preparation, K.H.; writing—review and editing, C.K., Q.L. and M.T.; supervision, C.K. All authors have read and agreed to the published version of the manuscript.

Funding

K.H. acknowledges support from the Würzburg Mathematics Center for Communication and Interaction (WMCCI) as well as the Studienstiftung des deutschen Volkes and the Marianne-Plehn-Programm. Q.L. acknowledges support from the Vilas Early Career Award. The research is supported in part by NSF via grant DMS-1750488 and the Office of the Vice Chancellor for Research and Graduate Education at the University of Wisconsin Madison with funding from the Wisconsin Alumni Research Foundation. M.T. acknowledges the support of NSFC11871340 and the Changjiang Scholar Program Youth Project.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The study did not report any data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Patlak, C. Random Walk with Persistence and External Bias: A Mathematical Contribution to the Study of Orientation of Organisms. Bull. Math. Biophys. 1953, 15, 311–338.
  2. Keller, E.F.; Segel, L.A. Model for chemotaxis. J. Theor. Biol. 1971, 30, 225–234.
  3. Keller, E.F.; Segel, L.A. Traveling bands of chemotactic bacteria: A theoretical analysis. J. Theor. Biol. 1971, 30, 235–248.
  4. Perthame, B. Transport Equations in Biology; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2006.
  5. Chalub, F.; Markowich, P.; Perthame, B.; Schmeiser, C. Kinetic Models for Chemotaxis and their Drift-Diffusion Limits. Monatsh. Math. 2004, 142, 123–141.
  6. Alt, W. Biased random walk models for chemotaxis and related diffusion approximations. J. Math. Biol. 1980, 9, 147–177.
  7. Giometto, A.; Altermatt, F.; Maritan, A.; Stocker, R.; Rinaldo, A. Generalized receptor law governs phototaxis in the phytoplankton Euglena gracilis. Proc. Natl. Acad. Sci. USA 2015, 112, 7045–7050.
  8. Keller, E.F.; Segel, L.A. Initiation of slime mold aggregation viewed as an instability. J. Theor. Biol. 1970, 26, 399–415.
  9. Kowalczyk, R. Preventing blow-up in a chemotaxis model. J. Math. Anal. Appl. 2005, 305, 566–588.
  10. Horstmann, D.; Winkler, M. Boundedness vs. blow-up in a chemotaxis system. J. Differ. Equ. 2005, 215, 52–107.
  11. Perthame, B.; Vauchelet, N.; Wang, Z. The Flux Limited Keller-Segel System; Properties and Derivation from Kinetic Equations. Rev. Matemática Iberoam. 2019, 36, 357–386.
  12. Erban, R.; Othmer, H. From Individual to Collective Behavior in Bacterial Chemotaxis. SIAM J. Appl. Math. 2004, 65, 361–391.
  13. Si, G.; Wu, T.; Ouyang, Q.; Tu, Y. Pathway-Based Mean-Field Model for Escherichia coli Chemotaxis. Phys. Rev. Lett. 2012, 109, 048101.
  14. Si, G.; Tang, M.; Yang, X. A Pathway-Based Mean-Field Model for E. coli Chemotaxis: Mathematical Derivation and Its Hyperbolic and Parabolic Limits. Multiscale Model. Simul. 2014, 12, 907–926.
  15. Sun, W.; Tang, M. Macroscopic Limits of Pathway-Based Kinetic Models for E. coli Chemotaxis in Large Gradient Environments. Multiscale Model. Simul. 2016, 15, 797–826.
  16. Perthame, B.; Sun, W.; Tang, M. The fractional diffusion limit of a kinetic model with biochemical pathway. Z. Angew. Math. Phys. 2017, 69, 1–15.
  17. Othmer, H.; Hillen, T. The Diffusion Limit of Transport Equations II: Chemotaxis Equations. SIAM J. Appl. Math. 2002, 62, 1222–1250.
  18. Othmer, H.; Dunbar, S.; Alt, W. Models of Dispersal in Biological Systems. J. Math. Biol. 1988, 26, 263–298.
  19. Stuart, A.M. Inverse problems: A Bayesian perspective. Acta Numer. 2010, 19, 451–559.
  20. Dashti, M.; Stuart, A. The Bayesian Approach to Inverse Problems. In Handbook of Uncertainty Quantification; Springer: Berlin/Heidelberg, Germany, 2015; pp. 1–118.
  21. Newton, K.; Li, Q.; Stuart, A. Diffusive Optical Tomography in the Bayesian Framework. Multiscale Model. Simul. 2020, 18, 589–611.
  22. Abdulle, A.; Blasio, A. A Bayesian Numerical Homogenization Method for Elliptic Multiscale Inverse Problems. SIAM/ASA J. Uncertain. Quantif. 2020, 8, 414–450.
  23. Beal, J.; Farny, N.; Haddock-Angelli, T.; Selvarajah, V.; Baldwin, G.S.; Buckley-Taylor, R.; Gershater, M.; Kiga, D.; Marken, J.; Sanchania, V.; et al. Robust estimation of bacterial cell count from optical density. Commun. Biol. 2020, 3, 1–29.
  24. Hammes, F.; Egli, T. Cytometric methods for measuring bacteria in water: Advantages, pitfalls and applications. Anal. Bioanal. Chem. 2010, 397, 1083–1095.
  25. Ran, Z.Y.; Hu, B.G. An identifying function approach for determining parameter structure of statistical learning machines. Neurocomputing 2015, 162, 209–217.
  26. Clim, A.; Zota, R.D.; Tinică, G. The Kullback-Leibler Divergence Used in Machine Learning Algorithms for Health Care Applications and Hypertension Prediction: A Literature Review. Procedia Comput. Sci. 2018, 141, 448–453.
  27. Hamadouche, A.; Kouadri, A.; Bakdi, A. A modified Kullback divergence for direct fault detection in large scale systems. J. Process. Control 2017, 59, 28–36.
  28. Cieslak, D.; Hoens, T.; Chawla, N.; Kegelmeyer, W. Hellinger distance decision trees are robust and skew-insensitive. Data Min. Knowl. Discov. 2012, 24, 136–158.
  29. Ni, X.; Härdle, W.K.; Xie, T. A Machine Learning Based Regulatory Risk Index for Cryptocurrencies. arXiv 2021, arXiv:2009.12121.
  30. Goldenberg, I.; Webb, G. Survey of distance measures for quantifying concept drift and shift in numeric data. Knowl. Inf. Syst. 2019, 60, 591–615.
  31. Kullback, S.; Leibler, R.A. On Information and Sufficiency. Ann. Math. Stat. 1951, 22, 79–86.
  32. Tsybakov, A.B. Introduction to Nonparametric Estimation, 1st ed.; Springer Series in Statistics: Berlin/Heidelberg, Germany, 2009.
Figure 1. Two ways to compare the inverse problems: determining and comparing the tumbling kernels for both underlying chemotaxis and Keller–Segel models (left) or determining the drift or diffusion coefficient for the Keller–Segel model and the tumbling kernel for the chemotaxis model and calculating the corresponding drift and diffusion coefficients.
Figure 2. Measurement of the bacteria density (blue) at two different measuring times t_j, t_j̃. The location of the test functions is indicated by the support in space of the test functions χ_j, χ_j̃.
Figure 3. Two ways to determine the posterior distribution μ^y_KS(D, Γ) from a prior μ_0(K_0, K_1) on the tumbling kernels.