**Issue:** 90
**Year:** 2021
**Pages:** 36-48
**DOI:** https://doi.org/10.25728/ubs.2021.90.2

**Keywords:** nonlinear Markov chains, ergodicity, rate of convergence

**Abstract:** The paper studies an improved estimate of the rate of convergence for nonlinear homogeneous discrete-time Markov chains. These processes are nonlinear in the distribution law: the transition kernels depend not only on the current state of the process but also on its current probability distribution. Such processes often arise as limits of large-scale systems of dependent, interacting Markov chains. The paper generalizes known convergence results by taking the estimate over two steps instead of one. This approach preserves the existence and uniqueness results under assumptions analogous to those of the one-step result, and it is shown that it may yield a better rate of convergence. Several examples are provided illustrating that the suggested estimate may give a better rate of convergence than the original one; moreover, the new estimate may be applicable even in some cases where the one-step conditions cannot guarantee any convergence. Finally, these examples show that failure of the original conditions is not necessarily an obstacle to the convergence of nonlinear Markov chains.
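The central object of the abstract can be illustrated with a small numerical sketch. Below is a hypothetical two-state nonlinear Markov chain (not taken from the paper): the transition matrix is a function of the current distribution, and the distribution evolves by repeatedly applying the distribution-dependent kernel. The specific kernel coefficients are illustrative assumptions chosen so that the map is a contraction; the total-variation gap between successive distributions then shrinks geometrically toward a unique invariant distribution.

```python
import numpy as np

def kernel(mu):
    # Hypothetical distribution-dependent transition matrix (illustrative only):
    # the switching probabilities depend on the current mass in state 1.
    a = 0.3 + 0.4 * mu[1]   # P(0 -> 1), stays in [0.3, 0.7]
    b = 0.6 - 0.2 * mu[1]   # P(1 -> 0), stays in [0.4, 0.6]
    return np.array([[1 - a, a],
                     [b, 1 - b]])

def evolve(mu0, n_steps):
    """Iterate the nonlinear evolution mu_{n+1} = mu_n @ P(mu_n)."""
    mu = np.asarray(mu0, dtype=float)
    history = [mu]
    for _ in range(n_steps):
        mu = mu @ kernel(mu)
        history.append(mu)
    return history

def tv_distance(p, q):
    # Total variation distance between two probability vectors.
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

hist = evolve([1.0, 0.0], 50)
# Successive TV gaps decay geometrically for this contractive example,
# indicating convergence to the unique invariant distribution.
gaps = [tv_distance(hist[k], hist[k + 1]) for k in range(len(hist) - 1)]
```

For this particular kernel the one-dimensional map x ↦ 0.3 + 0.5x − 0.2x² (with x the mass in state 1) has derivative bounded by 0.5 in absolute value on [0, 1], so the iteration contracts at rate at most 1/2 per step; estimating the contraction over two steps instead of one, as the paper proposes, can only tighten such a bound.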
