Abstract
In the analysis and simulation of safety-critical systems, it is necessary to cope with the perturbations of parameters, noise, and inaccuracies that exist in almost all mathematical models; when an analysis or simulation copes with these, we say that it is robust. Robustness is a central issue in both cyber-physical and machine-learning systems. The foundation that we lay for robustness analysis is based on domain theory; hence, it is abstract and may be used in both types of systems. We model the continuous phase of a cyber-physical system using differential equations, for which we use domain-theoretic methods to obtain validated solutions. For robustness analysis of machine-learning systems, we use the domain-theoretic L-derivative.
During my PhD, I have made the following contributions:
Chapter 4 constructs a continuous domain for temporal discretization of differential equations. Using this domain, together with the domain of Lipschitz maps, we formulate a generalization of the Euler operator which only requires the field of the differential equation to be Lipschitz continuous and which exhibits second-order convergence. We prove the computability of the operator within the framework of effectively given domains. The generalized (second-order) Euler operator not only imposes weaker assumptions on the field but also exhibits a superior convergence rate compared with the Runge-Kutta variant. We implement the first-order, second-order, and Runge-Kutta Euler operators using arbitrary-precision interval arithmetic, and report on experiments that verify our theoretical results.
This work has been published in Theoretical Computer Science (TCS):
– Edalat, A., Farjudian, A., and Li, Y. “Recursive Solution of Initial Value Problems with Temporal Discretization”. In: Theoretical Computer Science (2023), p. 114221. doi: 10.1016/j.tcs.2023.114221.
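To give a flavor of validated Euler integration, here is a minimal sketch in naive floating-point interval arithmetic. It is not the arbitrary-precision operator of Chapter 4: the names are illustrative, the Lipschitz constant `lip` and field bound `bnd` are assumptions supplied by the caller, and outward rounding is omitted for brevity.

```python
# Sketch of a validated Euler enclosure for y' = f(y), y(t0) = y0.
# Intervals are (lo, hi) pairs; rounding error is ignored for brevity.

def i_add(a, b):
    """Interval addition."""
    return (a[0] + b[0], a[1] + b[1])

def i_scale(c, a):
    """Scale interval a by a non-negative scalar c."""
    return (c * a[0], c * a[1])

def validated_euler(f_iv, y0, t0, t1, n, lip, bnd):
    """Enclose y(t1) using n interval Euler steps.

    f_iv : interval extension of the field (interval -> interval)
    lip  : Lipschitz constant of f      (assumption supplied by caller)
    bnd  : bound on |f| along the orbit (assumption supplied by caller)
    """
    h = (t1 - t0) / n
    err = 0.5 * bnd * lip * h * h   # local truncation error per step
    Y = (y0, y0)
    for _ in range(n):
        # one Euler step, widened by the truncation-error interval
        Y = i_add(i_add(Y, i_scale(h, f_iv(Y))), (-err, err))
    return Y

# y' = y, y(0) = 1 on [0, 1]: the enclosure should contain e = 2.71828...
lo, hi = validated_euler(lambda Y: Y, 1.0, 0.0, 1.0, 100, lip=1.0, bnd=3.0)
```

The thesis operators replace the naive floating-point intervals above with arbitrary-precision interval arithmetic and directed rounding, so that the enclosure is sound down to the last bit.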
Chapter 6 presents a domain-theoretic framework for validated robustness analysis of neural networks. We first analyze the global robustness of a general class of networks. Then, based on the fact that Edalat’s domain-theoretic L-derivative coincides with Clarke’s generalized gradient, we extend our framework to attack-agnostic local robustness analysis. We develop an algorithm for computing the Lipschitz constant of feedforward networks and prove its soundness and completeness. We also prove the computability of our method within the framework of effectively given domains. Our method is applicable to both differentiable and non-differentiable networks. We implement our algorithm using arbitrary-precision interval arithmetic and present the results of some experiments.
This work has been published in Mathematical Structures in Computer Science (MSCS):
– Zhou, C., Shaikh, R. A., Li, Y., and Farjudian, A. “A domaintheoretic framework for robustness analysis of neural networks”. In: Mathematical Structures in Computer Science 33.2 (2023), pp. 68–105. doi: 10.1017/S0960129523000142.
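To illustrate why the Clarke gradient combines well with interval arithmetic, the following toy sketch bounds the generalized gradient of a one-input, one-output ReLU network over an input box. It is illustrative only, not the MSCS algorithm: the network shape and all weights are hypothetical, and the ReLU kink is handled by the interval [0, 1], exactly the set-valued Clarke derivative of ReLU at 0.

```python
# Sound interval bound on the Clarke gradient of the scalar network
#   f(x) = sum_i w2[i] * relu(w1[i] * x + b1[i])
# over an input box [xlo, xhi].  Illustrative weights, not a real network.

def i_mul_scalar(c, a):
    """Multiply interval a by a real scalar c (handles negative c)."""
    p, q = c * a[0], c * a[1]
    return (min(p, q), max(p, q))

def i_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def lipschitz_bound(w1, b1, w2, xlo, xhi):
    grad = (0.0, 0.0)
    for w1i, b1i, w2i in zip(w1, b1, w2):
        # interval pre-activation w1i * x + b1i over [xlo, xhi]
        pre = i_mul_scalar(w1i, (xlo, xhi))
        pre = (pre[0] + b1i, pre[1] + b1i)
        # Clarke derivative of relu over that interval: {0}, {1}, or [0, 1]
        if pre[1] <= 0:
            d = (0.0, 0.0)
        elif pre[0] >= 0:
            d = (1.0, 1.0)
        else:
            d = (0.0, 1.0)          # the ReLU kink lies inside the box
        grad = i_add(grad, i_mul_scalar(w2i * w1i, d))
    return max(abs(grad[0]), abs(grad[1]))

# f(x) = relu(x) + relu(-x) = |x| has Lipschitz constant 1 on [-1, 1]
L = lipschitz_bound([1.0, -1.0], [0.0, 0.0], [1.0, 1.0], -1.0, 1.0)
```

Because |x| is non-differentiable at 0, a method restricted to classical gradients cannot certify this bound; the interval [0, 1] for the kink is what makes the enclosure sound for non-differentiable networks.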
Chapter 7 extends the framework of Chapter 6 to recurrent neural networks (RNNs). We present a validated method, based on Clarke’s generalized gradient, for computing the Lipschitz constant of RNNs, which applies to both differentiable and non-differentiable networks. We devise a bisection-based maximization algorithm that obtains a certified estimate of the Lipschitz constant and locates the region of least robustness in the input domain. With a firm foundation in domain theory, the algorithms are correct by construction. We implement the method using interval arithmetic and report on some experiments.
This work has been published in the proceedings of ICMLSC 2023 (the 7th International Conference on Machine Learning and Soft Computing):
– Guo, Y., Li, Y., and Farjudian, A. “Validated Computation of Lipschitz Constant of Recurrent Neural Networks”. In: Proceedings of the 2023 7th International Conference on Machine Learning and Soft Computing. ICMLSC ’23. Chongqing, China: Association for Computing Machinery, 2023, pp. 46–52. doi: 10.1145/3583788.3583795.
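The bisection idea behind the maximization can be sketched generically as interval branch-and-bound: given a sound enclosure `G` of a function `g` over any box, repeatedly bisect the box with the largest upper bound until the certified gap closes. In this sketch (illustrative only, not the ICMLSC algorithm), `g` stands in for the gradient-norm objective; the toy enclosure below is for g(x) = 1 - (x - 0.3)^2, whose maximum 1 is attained at x = 0.3.

```python
# Branch-and-bound via bisection: certify max g over [a, b], where G(lo, hi)
# returns a sound interval enclosure of g on [lo, hi].
import heapq

def certified_max(G, a, b, tol=1e-6):
    """Return (ub, box): a certified upper bound ub >= max g, and the box
    of largest upper bound, which locates the maximizer."""
    lo0, hi0 = G(a, b)
    heap = [(-hi0, a, b)]          # max-heap on the upper bound
    best_lower = lo0               # certified lower bound on the max
    while True:
        neg_hi, x0, x1 = heapq.heappop(heap)
        hi = -neg_hi
        if hi - best_lower <= tol or x1 - x0 <= tol:
            return hi, (x0, x1)    # certified: best_lower <= max g <= hi
        m = 0.5 * (x0 + x1)
        for p, q in ((x0, m), (m, x1)):   # bisect the worst box
            glo, ghi = G(p, q)
            best_lower = max(best_lower, glo)
            heapq.heappush(heap, (-ghi, p, q))

def G(lo, hi):
    """Toy interval enclosure of g(x) = 1 - (x - 0.3)^2 on [lo, hi]."""
    u, v = lo - 0.3, hi - 0.3
    s_hi = max(u * u, v * v)
    s_lo = 0.0 if u <= 0.0 <= v else min(u * u, v * v)
    return (1.0 - s_hi, 1.0 - s_lo)

ub, (x0, x1) = certified_max(G, 0.0, 1.0)
```

Because the enclosure is sound, the returned bound is an over-approximation by construction, and the surviving box pinpoints the least-robust region, which is the role this step plays for RNN inputs in Chapter 7.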
Date of Award  Jul 2024 

Original language  English 
Awarding Institution 

Supervisor  Amin Farjudian (Supervisor), Adam Rushworth (Supervisor) & Andrew J. Parkes (Supervisor) 
Keywords
 domain theory
 interval arithmetic
 validated methods
 robustness
 cyber-physical systems 
 neural networks