PhD opportunities in Mathematical Sciences

All our PhD programmes in the Mathematical Sciences, including Actuarial Mathematics and Statistics (AM&S), are shared with the University of Edinburgh. Indeed, the Mathematics and AM&S departments at Heriot-Watt, together with the Mathematics department at the University of Edinburgh, form the Maxwell Institute for Mathematical Sciences.

Information on all our programmes and funding opportunities, and on how to apply, can be found on the Maxwell Institute Graduate School (MIGS) website.

A description of the research areas we cover within Mathematics and AM&S can be found on our Research in Mathematical Sciences page. 

Below you will find descriptions of many of the PhD projects we offer, organised by research theme:

Projects in Probability, Statistics and Data Science, Financial and Actuarial Mathematics

Climate change, mortality and pensions

Description: The aim of this PhD project is to provide insights into the crucial importance and impact of climate change on the solvency of pension plans, through the modelling of mortality and morbidity rates. The pension sector is extremely large, being valued at GBP13.9 billion in 2021. Pensions are critical in enabling people to pay for food, rent and other daily needs when they stop working.

Current mortality and morbidity models used in industry rely on, for example, the age of the individuals and the evolution of ageing over time. These models need to be adjusted to take account of the effects of global warming and extreme natural events. Very few academic papers have been written on this topic, so more sophisticated models that incorporate climate change risks must be developed for the pension sector.

This is important because pension plans play a vital role in allowing individuals and society to manage climate change risk and make net zero a reality. Due to the long-term nature of its liabilities and the vast sums of money invested in pensions, the pension sector can invest its assets in long-term, transformative infrastructure projects to support the transition to a net zero world.

The objectives of this industrial collaboration and PhD project will be the following:

  • To determine what climate change risks should be included in models of mortality and morbidity;
  • To forecast mortality and morbidity rates by including climate change risks in these models;
  • To estimate the financial impact of climate-related risks on pension plans under stressed scenarios;
  • To identify new opportunities for pension plans under climate change risk: what should they do to address the challenges of climate change? Specifically, potential risk management solutions will be analysed.

Supervisors: Carmen Boado-Penas and Catherine Donnelly

Industrial supervisors: Scott Eason (Partner at BW and Head of Insurance and Longevity Consulting) and Kim Durniat (Partner and Head of Life Consulting) have agreed to join the thesis supervisory team.

Barnett Waddingham (BW) is a leading independent UK professional services consultancy across risk, pensions, investment and insurance.

Random graphs and networks: limits, approximations, and applications 

Supervisors: Fraser Daly and Seva Shneer 

Description: We will consider models for random graphs in which some nodes may form more connections than others. Such models include non-homogeneous graphs and configuration models. We aim to study various measures of connectedness of such graphs, for instance the probabilities of randomly chosen nodes forming cliques or other subgraphs. We aim to find asymptotics, as well as approximations, for these measures of connectedness as the size of the graph goes to infinity. Techniques which could be applied here include Stein's method for probability approximations. Extensions of this work include analogous results for dynamic random graphs evolving in time and for stochastic processes on random graphs.
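For a flavour of the objects involved, the following Python sketch (degree sequence and sample size are illustrative assumptions) estimates by simulation the mean number of triangles in a configuration-model graph; for sparse graphs, subgraph counts of exactly this kind are natural targets for Stein's method, e.g. Poisson approximation.

```python
import random

import networkx as nx

def mean_triangle_count(deg_seq, n_samples=100, seed=0):
    """Monte Carlo estimate of the expected number of triangles in a
    configuration-model random graph with a given degree sequence
    (illustrative sketch; self-loops and multi-edges are discarded)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_samples):
        g = nx.configuration_model(deg_seq, seed=rng.randrange(10**9))
        g = nx.Graph(g)                           # collapse multi-edges
        g.remove_edges_from(nx.selfloop_edges(g))
        total += sum(nx.triangles(g).values()) // 3
    return total / n_samples

# a small degree sequence with a few higher-degree nodes (sum must be even)
deg = [1, 2, 2, 3] * 50
print(mean_triangle_count(deg))
```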

Approximations for Random Sums with Dependence

Supervisor: Fraser Daly 

Description: Sums of a random number of random variables have applications in many areas, including insurance, where they can be used to represent the total claim amount received within a given year: a random number of claims is received, each of which is for a random amount. Classically, these individual claim amounts are assumed to be independent and identically distributed, and independent of the number of claims received. This allows us to derive approximations for the distribution of the total claim amount, for example a Gaussian approximation using the central limit theorem. However, these assumptions of independence are unrealistic, and we would like to relax them. The aim of this project is to derive and investigate explicit distributional approximations for sums of a random number of random variables with dependence, using Stein's method for probabilistic approximation and other relevant tools.    
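For orientation, the classical independent case that this project sets out to relax is summarised by two standard identities (Wald's identity and the law of total variance): if the claim amounts $X_1, X_2, \dots$ are i.i.d. and independent of the claim number $N$, then the total claim amount satisfies

\[
S = \sum_{i=1}^{N} X_i, \qquad
\mathbb{E}[S] = \mathbb{E}[N]\,\mathbb{E}[X_1], \qquad
\operatorname{Var}(S) = \mathbb{E}[N]\operatorname{Var}(X_1) + \operatorname{Var}(N)\,\bigl(\mathbb{E}[X_1]\bigr)^2,
\]

and the Gaussian approximation matches these two moments. It is precisely the independence assumptions behind these formulas that the project aims to relax.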

References

[1] L. H. Y. Chen, L. Goldstein and Q.-M. Shao (2011). Normal Approximation by Stein's Method. Springer, Berlin.  

[2] F. Daly (2021). Gamma, Gaussian and Poisson approximations for random sums using size-biased and generalized zero-biased couplings. Scandinavian Actuarial Journal, to appear. 

Deep Learning Methods for Credit Risk Models

Supervisor: Wei Wei 

Description: In this project you will explore the development of deep learning methods for credit risk models. This requires developing pricing and calibration methods for nonlinear models in credit risk. Techniques that will be applied include semi-linear parabolic partial differential equations, backward stochastic differential equations, and deep learning algorithms for high-dimensional optimization problems. By the end of the project, you are expected to have a broad view of general analytical and computational tools for credit risk models.

Stochastic control methods for quantitative behavioural finance

Supervisor: Wei Wei 

Description: Behavioural finance is the study of the influence of human emotions and psychology on financial decision making. When psychological factors are involved, decision-making problems become time-inconsistent, in that an optimal rule obtained today may no longer be optimal from the perspective of a future date. In this project, you will explore methodological developments in stochastic control to tame the time-inconsistency arising from quantitative behavioural finance models. This will require developing time-inconsistent stochastic control theory and designing efficient numerical methods to analyse problems in behavioural finance.

Convergence of Markov processes with applications to computational statistics and machine learning 

Supervisor: Mateusz Majka 

Description: The project will involve investigating convergence to equilibrium of several different types of Markov processes, including solutions of stochastic differential equations (driven by either Brownian motion or pure jump Lévy processes) and Markov chains (see papers [1] and [3]). Results of this type, besides their theoretical significance, have found numerous applications in computational statistics and machine learning. For instance, by employing the probabilistic coupling technique, one can obtain precise convergence rates for numerous Monte Carlo algorithms that are constructed by utilizing discretisations of stochastic differential equations, and are used in computational statistics for sampling from high-dimensional probability distributions [2, 4]. Depending on the candidate's interests, the project can focus either on the numerical/computational aspect or on the underlying theory.
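As a minimal illustration of the sampling algorithms referred to above, here is a Python sketch of the unadjusted Langevin algorithm: the Euler discretisation of the Langevin SDE $dX_t = \nabla \log \pi(X_t)\,dt + \sqrt{2}\,dW_t$, whose invariant law is, up to discretisation bias, the target $\pi$. Step size and run length are illustrative assumptions; convergence rates of exactly such chains are what the coupling techniques above quantify.

```python
import numpy as np

def ula_sample(grad_log_pi, x0, step=0.01, n_steps=50_000, seed=0):
    """Unadjusted Langevin algorithm (illustrative sketch): Euler steps of
    dX = grad log pi(X) dt + sqrt(2) dW target the distribution pi up to
    an O(step) discretisation bias."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_steps,) + x.shape)
    for k in range(n_steps):
        x = x + step * grad_log_pi(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
        samples[k] = x
    return samples

# target: standard Gaussian in 2D, for which grad log pi(x) = -x
s = ula_sample(lambda x: -x, np.zeros(2))
print(s.mean(axis=0), s.var(axis=0))  # approximately 0 and 1, up to bias
```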

References: 

[1] A. Eberle and M. B. Majka, Quantitative contraction rates for Markov chains on general state spaces, Electron. J. Probab. 24 (2019), paper no. 26, 36 pp.

[2] M. B. Majka, A. Mijatović and L. Szpruch, Non-asymptotic bounds for sampling algorithms without log-concavity, Ann. Appl. Probab. 30 (2020), no. 4, 1534-1581.

[3] M. Liang, M. B. Majka and J. Wang, Exponential ergodicity for SDEs and McKean-Vlasov processes with Lévy noise, Ann. Inst. Henri Poincaré Probab. Stat. 57 (2021), no. 3, 1665-1701.

[4] L.-J. Huang, M. B. Majka and J. Wang, Strict Kantorovich contractions for Markov chains and Euler schemes with general noise, to appear in Stochastic Process. Appl. (2022).

Factors influencing the time to disease fade-out

Supervisor: Damian Clancy 

Description: The spread of infectious disease through a population is an inherently random process, and can be studied using stochastic models. For diseases which become endemic in a population, one object of interest is the time until fade-out of infection (a random variable). The expected time to fade-out may be computed straightforwardly through Monte Carlo simulation, or more exactly using general Markov process theory. For more complicated models, implementing these approaches becomes less straightforward, and approximation methods may also be needed. In this project, you will investigate a variety of approaches, with the aim of understanding the effects of particular disease features upon the time to fade-out. There are many different features of different diseases that you could study; for instance, you might examine the impact of environmental transmission upon disease persistence, or the effects of changes in the birth and death rates of the susceptible population.
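To make the Monte Carlo approach concrete, here is a minimal Gillespie-type simulation of fade-out in a stochastic SIS model. The model and all parameter values are illustrative assumptions, chosen small enough that extinction occurs quickly.

```python
import random

def sis_fadeout_time(beta=1.5, gamma=1.0, n=30, i0=3, seed=0):
    """Gillespie simulation of a stochastic SIS model (illustrative
    sketch): infection occurs at rate beta*S*I/n and recovery at rate
    gamma*I; returns the random time until infection fades out."""
    rng = random.Random(seed)
    t, i = 0.0, i0
    while i > 0:
        rate_inf = beta * (n - i) * i / n
        rate_rec = gamma * i
        total = rate_inf + rate_rec
        t += rng.expovariate(total)       # time to the next event
        if rng.random() < rate_inf / total:
            i += 1                        # infection event
        else:
            i -= 1                        # recovery event
    return t

times = [sis_fadeout_time(seed=s) for s in range(1000)]
print(sum(times) / len(times))  # Monte Carlo estimate of mean fade-out time
```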

References:  

"Approximating time to extinction for endemic infection models" by Damian Clancy and Elliott Tjia (2018), Methodology and Computing in Applied Probability volume 20, pages 1043–1067 10.1007/s11009-018-9621-8 

"The Influence of Latent and Chronic Infection on Pathogen Persistence" by A. O'Neill, A. White, D. Clancy, F. Ruiz-Fons & C. Gortázar (2021), Mathematics volume 9, article number 1007 https://doi.org/10.3390/math9091007 

Replication and redundancy 

Supervisors: Sergey Foss and Seva Shneer 

Description: A popular strategy for job allocation in a large service system is as follows. Upon arrival of a new job, several copies of it are sent to a subset of servers. As soon as a required number of copies have completed (or started) service, the others are deleted. We study the performance of such systems, including stability, delays in the stationary regime, and limit theorems. We also plan to design policies that are optimal in an appropriate sense.
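The sketch below shows one such model in miniature: a toy event-driven simulation of a redundancy-d system under a cancel-on-start rule (the first copy of a job to start service is the only one served, and the remaining copies are discarded lazily when they reach the head of a queue). All rates and sizes are illustrative assumptions.

```python
import heapq
import random
from collections import deque

def simulate_redundancy(lam=3.0, mu=1.0, n_servers=5, d=2,
                        n_jobs=100_000, seed=1):
    """Toy simulation of redundancy-d with cancel-on-start: each arriving
    job places copies in the queues of d distinct servers; once one copy
    starts service the others become stale and are discarded when they
    reach the head of a queue. Returns the mean response time."""
    rng = random.Random(seed)
    queues = [deque() for _ in range(n_servers)]
    busy = [False] * n_servers
    started = set()                  # jobs whose first copy has started
    arrival_time = {}
    response_times = []
    events = [(rng.expovariate(lam), "arrival", 0)]

    def try_start(s, now):
        while queues[s] and queues[s][0] in started:
            queues[s].popleft()      # lazily delete cancelled copies
        if queues[s] and not busy[s]:
            j = queues[s].popleft()
            started.add(j)
            busy[s] = True
            heapq.heappush(events, (now + rng.expovariate(mu), "departure", (s, j)))

    while events:
        t, kind, info = heapq.heappop(events)
        if kind == "arrival":
            arrival_time[info] = t
            for s in rng.sample(range(n_servers), d):
                queues[s].append(info)
                try_start(s, t)
            if info + 1 < n_jobs:
                heapq.heappush(events, (t + rng.expovariate(lam), "arrival", info + 1))
        else:
            s, j = info
            response_times.append(t - arrival_time[j])
            busy[s] = False
            try_start(s, t)
    return sum(response_times) / len(response_times)

print("mean response time:", simulate_redundancy())
```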

Epidemics with migration 

Supervisors:  Sergey Foss and Seva Shneer 

Description: We study models where agents may arrive into a system, may change their state (for instance, infected, susceptible, exposed, recovered) and may change their location in the system. We plan to study the stationary regime of such a system. Questions of interest include conditions for extinction/survival of the epidemic and analysis of the effects of mobility on the spread of the epidemic. 

Bayesian inference and classification on graphs 

Supervisors: Jonas Latz, Seva Shneer 

Description: Graphs appear frequently in natural sciences and technical disciplines whenever connections between agents need to be represented. Friendships in social networks can be marked by an edge between two user profiles, as can the conductivity in a porous medium between two spatial positions. In practice, these graphs are often at least partially unknown: people could be friends in real life but the social network doesn’t know; the ground water might flow between two points in the reservoir, but we do not observe this directly. 

The problem of reconstructing such missing parts of a graph is a problem of statistical inference. Measurements are given externally (an incomplete friendship graph, measurements of the hydrostatic pressure at different positions in a porous medium) and shall be used to reconstruct the unknown parts.

In this project, we want to focus on Bayesian techniques. The Bayesian approach does not simply compute a point estimator (say, a best guess for the unknown); instead, it computes a probability distribution over the possible parameters, giving more weight to those that are more likely given the observed data. This distribution is the so-called posterior.

The project will consider both theoretical and computational questions. Theoretically, we aim at understanding how the estimation changes as the number of available data sets, and also the number of unknown parameters, increases; that is, we consider infinite graphs and graph-continuum limits. From a computational perspective, we aim at finding new Markov chain Monte Carlo algorithms that allow us to efficiently approximate the Bayesian posterior in high-dimensional graph inference or classification problems.
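As a toy illustration of the computational side, the following sketch runs a Metropolis-Hastings chain over edge indicators, under an entirely illustrative independent-edge prior and noisy-observation likelihood. In this toy the posterior factorises over edges, so MCMC is not actually needed; the point is only to exhibit the mechanics that the project would scale up to genuinely high-dimensional graph posteriors.

```python
import math
import random

def mh_graph_posterior(obs, p_edge=0.3, flip=0.1, n_steps=100_000, seed=0):
    """Metropolis-Hastings over edge indicators (illustrative sketch).
    Prior: each edge present independently with probability p_edge.
    Likelihood: each observed indicator obs[e] equals the true one with
    probability 1 - flip. Proposes flipping one edge per step and
    returns the posterior inclusion frequency of each edge."""
    rng = random.Random(seed)
    edges = list(obs)
    state = {e: rng.random() < p_edge for e in edges}
    counts = dict.fromkeys(edges, 0)

    def log_term(e, val):
        prior = math.log(p_edge if val else 1 - p_edge)
        lik = math.log(1 - flip if obs[e] == val else flip)
        return prior + lik

    for _ in range(n_steps):
        e = rng.choice(edges)
        new = not state[e]
        if math.log(rng.random()) < log_term(e, new) - log_term(e, state[e]):
            state[e] = new
        for e2 in edges:
            counts[e2] += state[e2]
    return {e: c / n_steps for e, c in counts.items()}

# noisy observations of a three-node graph: two edges seen, one not
obs = {(0, 1): True, (0, 2): True, (1, 2): False}
print(mh_graph_posterior(obs))
```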

Travelling-wave solutions for models of growth, interaction, depletion and diffusion 

Supervisor: Seva Shneer 

Description:  We will study interacting-particle systems relevant for the study of scheduling mechanisms in stochastic networks such as redundancy and load balancing. The models are also of interest in other application areas ranging from biology to communication networks. The evolution of states of particles consists of mechanisms of growth, interaction, depletion and diffusion. We will aim at characterising their scaling limits, in particular in terms of travelling waves. 

Longevity risk management

Supervisor: Andrew Cairns

Description: Pension plans and life insurers are exposed to longevity risk: the risk that pensioners, in aggregate, live longer than anticipated. This has caused these institutions to look at ways to manage this risk. This project will look at (a) models to measure the underlying risk; (b) innovative ways to manage the risk; and (c) the use of stochastic models to assess the effectiveness of different risk management solutions.

Cause of death measurement and modelling

Supervisor: Andrew Cairns

Description: Recent years have seen a huge increase in the availability of mortality data by cause of death rather than just all cause mortality (see, for example, www.causesofdeath.org). The use of cause-of-death data gives us greater insight into the past (e.g. drivers of past mortality improvements) as well as the future (e.g. which causes are likely to drive future all-cause mortality improvements?). This presents us with new challenges: what is the most effective way to model this data to gain the best insights?

Peer-to-Peer Risk Sharing

Description: Risk sharing is the core of insurance or, more generally, of risk management. Traditional insurance products are built upon centralized models, in which an insurer is the central node that establishes a bilateral risk sharing agreement with each of its policyholders. In the last decade, a modern decentralized insurance model, so-called peer-to-peer (P2P) insurance, has been developing rapidly in the industry, through companies such as Friendsurance, Inspeer, and Lemonade. This has recently sparked fundamental research on risk sharing mechanisms for P2P insurance. This project shall first review recent literature on P2P risk sharing, then advance the theoretical foundations of P2P risk sharing, and finally compare it with classical centralized models.

Supervisor: Alfred Chong

Applications of Reinforcement Learning in Insurance

Description: Model-based solutions have been well developed for various topics in insurance. However, these solutions naturally suffer from any model miscalibration and/or misspecification. If a model is miscalibrated and/or misspecified, and is not retuned in a timely manner, a model-based solution can lead to losses, if not catastrophic consequences, for an insurance company. Reinforcement learning (RL), a flourishing sub-field of machine learning, has already proved its power in a wide range of non-actuarial tasks resembling human intelligence. Inspired by Chong et al. (2021, 2022), in which RL is applied to derive self-revising hedging strategies for variable annuity contracts, this project shall explore further applications of reinforcement learning in insurance.

Supervisor: Alfred Chong

Forward Preferences in Insurance

Description: Pioneered by Musiela and Zariphopoulou (2007), forward preferences were developed to rectify classical utility maximization problems, which fix a priori the horizon of interest, the model of the dynamics, and the future utility function of the agent. These assumptions deviate from insurance practice, since the horizon of a product is particularly long, a random time, such as a future lifetime, is usually involved, and a mortality model may be revised based on an updated health examination. In the spirit of Chong (2019), and Ng and Chong (2022), this project shall fundamentally revisit actuarial topics that are based on classical utility maximization problems in the forward framework, and shed light on the pros and cons of the classical backward and novel forward models.

Supervisor: Alfred Chong

Efficient computation of rare-event risk measures

Description: Certain rare events have high cost, both humanitarian and financial, which makes them significant events that industries and governments must plan for. Taking measures to reduce or mitigate the risks of such events is the goal of risk management, which requires accurate assessment of those risks. The goal of this project is to speed up the computation of accurate risk measures of rare events to ensure effective risk management. This will be achieved by developing novel computational methods that exploit approximation properties of the underlying stochastic models and that are based on Monte Carlo and random sampling methods, which are easily parallelizable and can fully exploit the increased availability of computational resources.

Supervisor: Abdul-Lateef Haji-Ali
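A minimal example of why specialised sampling methods pay off for rare events: plain Monte Carlo needs millions of samples for the Gaussian tail probability $P(Z > 4) \approx 3.17 \times 10^{-5}$, while importance sampling with an exponential tilt gets there with far fewer. Target and sample sizes are illustrative; the methods developed in the project are considerably more sophisticated.

```python
import math
import random

def naive_mc(t=4.0, n=10**6, seed=0):
    """Plain Monte Carlo for P(Z > t), Z ~ N(0,1): hits are rare, so the
    relative error is large unless n is enormous."""
    rng = random.Random(seed)
    return sum(rng.gauss(0.0, 1.0) > t for _ in range(n)) / n

def tilted_mc(t=4.0, n=10**4, seed=0):
    """Importance sampling (illustrative sketch): sample Z ~ N(t, 1), so
    the event is common, and reweight by the density ratio
    phi(z) / phi(z - t) = exp(-t z + t^2 / 2)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        z = rng.gauss(t, 1.0)
        if z > t:
            acc += math.exp(-t * z + t * t / 2)
    return acc / n

print(naive_mc(), tilted_mc())  # both near 3.17e-5; the tilt uses 100x fewer samples
```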

Hierarchical Methods for Chaotic Systems

Description: Chaotic systems appear in weather, ocean circulation and climate models, and incur enormous computational cost with currently available methods due to accuracy requirements imposing slow time-stepping and fine space-discretization. In this project, we will develop hierarchical methods to speed up uncertainty quantification (UQ) of such systems which will allow practitioners to conduct more thorough statistical studies that will ultimately result in better decision making.

Supervisor: Abdul-Lateef Haji-Ali

Bilevel optimisation for inverse problems: analysis, fast computations, and Bayes

Description: Inverse problems concern the estimation of parameters of mathematical models given real-world data: we estimate the permeability of a groundwater reservoir using measurements of the hydrostatic pressure in the reservoir, we reconstruct the position and shape of a tumour using attenuated X-rays in medical imaging, and we train the weight and bias matrices in a deep neural network that aims at distinguishing cats and dogs. Inverse problems are usually not uniquely solvable, or their solution is brittle with respect to small perturbations in the data: they are ill-posed.

Two ways to overcome ill-posedness are regularisation on the one hand and the Bayesian approach on the other. The regularisation approach consists in minimising a functional that is the sum of the negative log-likelihood of the observed data given the unknown parameter and an additional term with favourable properties. The Bayesian approach is probabilistic: we model the unknown parameter as a random variable distributed according to the so-called prior distribution. Using the aforementioned likelihood and Bayes' formula, we obtain the posterior distribution, that is, the conditional distribution of the parameter given the observed data. The posterior can be used for point estimation and uncertainty quantification.
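In standard notation (not specific to this project): writing $\Phi(u; y)$ for the negative log-likelihood of the data $y$ given the parameter $u$, $R$ for the regulariser with weight $\alpha > 0$, and $\pi_0$ for the prior, the two approaches read

\[
\hat u_\alpha \in \operatorname*{arg\,min}_{u} \; \Phi(u; y) + \alpha R(u),
\qquad
\pi^{y}(\mathrm{d}u) \propto \exp\bigl(-\Phi(u; y)\bigr) \, \pi_0(\mathrm{d}u),
\]

where $\pi^{y}$ denotes the posterior distribution.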

Regulariser and prior can have a large influence on the solution of the inverse problem, and an appropriate choice is hard. In bilevel optimisation, we aim to `learn' the regulariser based on available data. Such a parameter can be a simple prefactor of a usual regulariser [Reyes et al.; Journal of Mathematical Imaging and Vision 57: 1–25 (2017)] that needs to be determined, or the regulariser can be completely determined by a neural network [Mukherjee et al.; ArXiv 2008.02839 (2020)], in which case weights and biases need to be learned.

We commence this project by looking at [Antil et al.; Inverse Problems, 36: 064001 (2020)], which presents an interesting method to estimate the fraction of a fractional Laplacian that is used for regularisation. Here, the primary goal is to find a more scalable version of this algorithm through modern techniques in numerical linear algebra, allowing us to reconstruct large-scale medical images. Future work may include: bilevel optimisation of prior distributions in Bayesian inversion, learning of sparse dictionaries through optimisation on manifolds, other fractional operators (such as total variation), and a continuous-time analysis of stochastic gradient descent in bilevel optimisation [Jin et al.; ArXiv 2112.03754 (2021)].

Supervisors: Dr Jonas Latz, Dr Abdul-Lateef Haji-Ali 

Statistical learning for quantifying meteorological event-related risks

Supervisor(s): Dr G Tzougas, Prof G Streftaris

Description: Quantifying meteorological event-related risks has become increasingly important in general insurance as extreme climate events may trigger excess claims that can potentially have detrimental impact on the insurer’s portfolio. On the other hand, it is challenging to model the relation between climate events and claim frequencies, since detailed information on climate events is often not fully recorded. Motivated by the above issues, in this PhD project we will use compound frequency and severity statistical models, together with copula-based models for the number and the cost of claims to characterize meteorological event-related risks. The proposed models have the capacity to uncover the joint distribution of the event and claim processes, also in cases where the observed data are incomplete. Bayesian methodology will be used to quantify the associated uncertainty, and we will also consider extensions based on deep learning techniques for capturing non-linearities in the data. Geospatial information will be included, to assess potential impact on the meteorological event and claim frequencies. Finally, the project will investigate possible negative intrinsic dependencies between meteorological events and per-event claim frequencies, which can imply that an insurance company may enjoy diversification benefits from climate change that causes more meteorological events.

Bayesian predictive modelling for morbidity risk

Supervisor(s): Prof G Streftaris

Description: The principal aim of the proposed research is to develop, evaluate and assess models for morbidity risk and related insurance rates, under statistical and machine learning frameworks that allow for uncertainty quantification. The proposed work will address the timely need to develop robust predictive models for rapidly changing morbidity risks and their impact on health-related insurance. This research area requires forward-thinking attention, as morbidity trends are changing dynamically due to various complex factors, including changes in life expectancy, improvements in health and care, and developments in medical science. Earlier work [1,2] has shown that morbidity and health-insurance-related rates are idiosyncratic to a number of factors, including demographic, socio-economic and policy-linked characteristics. The proposed project will build on this work to identify suitable morbidity risk factors for a wide range of illnesses, also relating to insured populations. We will also assess the robustness of the developed predictive models and select an ensemble of models that perform well under a set of criteria designed to optimise both the interpretability and the predictive quality for risks associated with certain medical morbidity causes.

References:

[1] Streftaris, G., Xie, X., Arik, A. and Dodd, E. (2018) Critical illness insurance rates and related morbidity trends. ARC webinar, https://www.actuaries.org.uk/learn-and-develop/research-and-knowledge/actuarial-research-centre-arc/arc-webinar-series-2018

[2] Arik, A., Dodd, E., Cairns, A., Streftaris, G. (2021) Socioeconomic disparities in cancer incidence and mortality in England and the impact of age-at-diagnosis on cancer mortality. PLoS ONE. 16, 7. DOI: 10.1371/journal.pone.0253854

Projects in Structure and Symmetry - Mathematical Physics

Renormalisation interfaces in two-dimensional quantum field theory 

Supervisor: Anatoly Konechny  

Description: The renormalisation group (RG) is a fundamental concept in Quantum Field Theory (QFT) that describes how the physics changes under a change of energy scale. A typical renormalisation group trajectory starts from one fixed point and drives the theory to a different fixed point. In two dimensions the fixed points are described by conformal field theories, which possess an infinite-dimensional symmetry algebra. While a lot is known about the end points of renormalisation group flows, very little is known about the global structure of the space of flows linking the end points. Recently a new object, called a renormalisation domain wall or renormalisation interface, has been introduced. In two dimensions this object is a line of contact between two different conformal field theories, one on each side.

The project involves studying such objects for concrete RG flows, both analytically and numerically. The aims are to learn how to construct such objects, what information they encode about the RG flows, and how they could be used to gain control over the space of flows.

References: arXiv:1201.0767, arXiv:1211.3665, arXiv:1407.6444, arXiv:1610.07489, arXiv:2012.12361

Gradient formulas for renormalisation group flows 

Supervisor: Anatoly Konechny  

Description: Renormalisation group flows describe how quantum field theories (QFTs) change when we change the scale of interaction. 

The coupling constants then change according to their beta functions, which can be considered as a vector field on the space of theories. More geometry on the space of theories arises when one takes into account stress-energy tensor conservation and anomaly consistency conditions. The crucial equation describing the local geometry is a gradient formula for the beta functions. Roughly speaking, it expresses the beta function as the gradient of some potential function, plus additional terms related to various local tensor fields. In string theory, such equations for two-dimensional QFTs can be considered as space-time equations of motion for the string fields. The project includes studying the gradient formulae, the local geometry and its string theory interpretation. Possible concrete problems include N=2 supersymmetric and PT-symmetric gradient formulas for boundary RG flows in two dimensions.

References: arXiv:1310.4185, arXiv:0910.3109, arXiv:hep-th/0312197. 

Are regular polygons optimal in relativistic quantum mechanics? 

Supervisor: Lyonell Boulton  

Description: Among all polygons with a given perimeter, regular polygons are known to minimise the ground energy of the Dirichlet Laplacian. This is also the case if we fix the area rather than the perimeter. In the language of quantum mechanics, this can be re-interpreted as saying that the non-relativistic Schrödinger operator on boxes of the same perimeter or area attains its minimal energy when the box has a regular base. These results can be traced back to the work of Pólya in the 1950s and, arguably, they have given rise to a whole new area of research in Geometrical Spectral Theory, which is still an active subject of enquiry with strong links to Mathematical Physics.

The aim of this PhD project is to develop research in the following direction. Suppose that we replace the Schrödinger operator with the free Dirac operator and pose the same question in the relativistic setting. Will the box with a regular base be the shape that minimises the energy (in absolute value) among all others?

Recent progress on this problem includes the paper [1], which considers general regions, and [2], where this question is posed but not solved. It appears that, even when the region is rectangular, whether the square is indeed the optimal shape is not so easy to determine.

A concrete initial stage of the project will be the analytic investigation of the problem on quadrilateral regions, trying to identify a shape for which the problem can be reduced and treated with exact formulas. Depending on progress, we might consider numerical investigations along the lines of [3], in order to gain insight into further lines of enquiry. The initial phase of the project might lead in different directions, including the addition of magnetic or electric fields.

References:  

[1] Ann. Henri Poincaré 19, 1465–1487 (2018) 

[2] J. Math. Phys. 63, 013502 (2022) 

[3] Appl. Numer. Math. 99, 1–23 (2016)

Projects in Structure and Symmetry - Algebra, Geometry, Topology

Solving equations in groups 

Supervisor: Laura Ciobanu 

Description: Imagine an equation of the form XaYYbZZc=1 in a group G, where X, Y, Z are variables and a, b, c are elements of G. Does this equation have solutions, and if it does, what are they? The answer depends very much on the group: whether it is free, hyperbolic, nilpotent or of some other type. In some cases these questions, for arbitrary equations, are algorithmically unsolvable; in other cases they are well understood but quite difficult. This project would revolve around understanding equations in nilpotent groups, and the base case would be the 3x3 Heisenberg group, where very little is known in terms of describing the solutions to an equation. Alternatively, depending on the background of the applicant, it could involve equations in some groups acting on rooted trees, such as the Grigorchuk group.

This project brings together group theory, combinatorics, computational complexity, and possibly some algebraic geometry and formal languages, and it can be treated theoretically or rather computationally. 

Counting geodesics in groups 

Supervisor: Laura Ciobanu 

Description: To each finitely generated group one can attach a Cayley graph: a graph whose vertices are the group elements, and where an edge connects two vertices if they are related via multiplication by a generator. If one counts all the geodesic (that is, shortest) paths between the identity element/vertex and the vertices at distance n from the identity, then one obtains the geodesic growth function of the group. Much is known about this function, but a lot is also left to explore. For example, does there exist a group where this function is algebraic, but not rational? Or does there exist a group where this function grows faster than any polynomial, but slower than any exponential?

This project brings together group theory, combinatorics, formal languages, and computational experiments. 
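As a small example of the computational experiments mentioned above, the following Python sketch counts geodesics level by level in the Cayley graph of any group in which one can multiply elements, and runs it on $\mathbb{Z}^2$ with the standard generators; notably, already here the geodesic growth is exponential even though the ordinary growth is polynomial. The interface is an illustrative choice.

```python
def geodesic_growth(generators, identity, multiply, radius):
    """Counts, for each n <= radius, the number of geodesics of length n
    in a Cayley graph (illustrative sketch): a breadth-first search in
    which the number of geodesics to a new vertex is the sum of the
    geodesic counts of its predecessors one level down."""
    dist = {identity: 0}
    count = {identity: 1}
    layer = [identity]
    totals = []
    for n in range(1, radius + 1):
        new_counts = {}
        for g in layer:
            for s in generators:
                h = multiply(g, s)
                if h not in dist:            # h lies at distance exactly n
                    new_counts[h] = new_counts.get(h, 0) + count[g]
        dist.update((h, n) for h in new_counts)
        count.update(new_counts)
        layer = list(new_counts)
        totals.append(sum(new_counts.values()))
    return totals

# Z^2 with standard generators: elements are pairs, multiplication is addition
gens = [(1, 0), (-1, 0), (0, 1), (0, -1)]
add = lambda g, s: (g[0] + s[0], g[1] + s[1])
print(geodesic_growth(gens, (0, 0), add, 8))  # 4, 12, 28, 60, ... = 4(2^n - 1)
```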

The homology groups of a class of étale groupoids

Supervisor: Mark Lawson 

Description: Étale groupoids are interesting to a wide circle of people, including group theorists, C*-algebra theorists and those of us working in inverse semigroup theory. The étale groupoids of particular interest are those whose space of identities is a compact Hausdorff 0-dimensional space.

These include, of course, the étale groupoids whose space of identities is the Cantor space. There has been some work on the integer homology groups of such groupoids, with much of it focussed on the zeroth and first homology groups. The ultimate aim is to classify those étale groupoids which are effective and minimal. It seems very likely that the integer homology groups will play a role in any such classification. What more can be said about these homology groups? Can we use non-commutative Stone duality to help us understand them better?

A knowledge of topology is essential for this project together with a strong background in algebra. 

Applications of MV-algebras to a class of Boolean inverse monoids

Supervisor: Mark Lawson 

Description: MV-algebras generalize Boolean algebras and come originally from multiple valued logic (whence the MV). 

By work of Lawson, Scott and Wehrung, it is known that all MV-algebras can be coordinatized by means of suitable Boolean inverse monoids: specifically, those which are factorizable and satisfy what is termed the lattice condition. This suggests that the theory of MV-algebras should be applicable to this class of Boolean inverse monoids. In particular, it suggests that there might be a sheaf representation of such Boolean inverse monoids. To date, very little MV-algebra theory has been applied to the study of this class of Boolean inverse monoids, but such an application could lead to some very interesting geometry. A strong background in algebra is essential for this project.

The geometry of Artin groups 

Supervisor: Alexandre Martin 

Description: Artin groups form a class of groups generalising braid groups and with strong connections with Coxeter groups. Unlike Coxeter groups however, the structure and geometry of Artin groups are still mysterious in full generality. Certain classes of Artin groups are better understood, and this often comes from the existence of well-behaved actions on non-positively curved spaces (hyperbolic, CAT(0), etc.)  

The goal of this project would be to study such actions, and to construct new ones, in order to reveal more of the geometry (in particular, non-positively curved features) and the structure (subgroups, automorphisms, etc.) of Artin groups.

Combination problems in non-positive curvature 

Supervisor: Alexandre Martin 

Description: When studying a group G acting on a simplicial complex, one can think of this action as a way to decompose G into smaller "pieces" (the stabilisers of simplices), glued together via the combinatorics of the action. A natural question to ask is then the following: if all stabilisers satisfy a given property (P), under what conditions (on the geometry of the complex acted upon, the dynamics of the action, etc.) can we conclude that the group G itself satisfies property (P)?

Such "combination problems" have been extensively studied for groups acting on trees, but fewer results are known in higher dimension. The goal of this project would be to study such problems for groups acting on higher dimensional complexes such as CAT(0) cube complexes and polygonal complexes, for various classes of properties (hyperbolicity, Tits alternative, etc.), and with applications to certain important classes of groups: Artin groups, graphs products, etc. 

Large-scale geometry of groups 

Supervisor: Alessandro Sisto 

Description: Geometric group theory is the study of groups using geometry. More concretely, in order to do so one associates to a (finitely generated) group a certain metric space called the Cayley graph. However, this is a slight lie, as the Cayley graph in fact depends on a choice of generating set; but Cayley graphs associated to different generating sets share the same "large-scale geometry". That is, there is a notion of maps preserving the large-scale geometry of spaces, called quasi-isometries, and all Cayley graphs of a given group are quasi-isometric to each other. In view of this, in geometric group theory it is very natural to study groups up to quasi-isometry.

The project would focus on studying the large-scale geometry of various groups of interest in algebra, geometry, and topology. More specifically, this involves studying properties that are invariant under quasi-isometries, as well as rigidity phenomena.

Randomness in groups 

Supervisor: Alessandro Sisto 

Description: Given a group, it is natural to ask what a "generic" element of the group looks like. In order to make this question precise, one can introduce random walks, as those provide a model for a random, or generic, element of a group. There are also various constructions, for example in low-dimensional topology, where one parameter is an element of a certain group, so random walks also provide models for "generic" objects of other kinds, for example 3-manifolds. Part of the motivation to study random walks, besides the intrinsic interest, is that sometimes in order to prove the existence of objects of a certain kind, the best way to proceed is to show that a generic object satisfies the required property. 

This project focuses on properties and applications of random walks and other stochastic processes within a broad class of groups, called acylindrically hyperbolic groups, that provides a common framework to simultaneously study various groups of interest in algebra, geometry, and topology. 

Projects in Applied and Computational Mathematics – including Industrial Mathematics and Mathematical Biology & Ecology

Non-local operators: Applications and efficient computation  

Supervisor: Lehel Banjai 

Description: Non-local interactions are ubiquitous in nature and lead to models that are difficult to handle accurately and efficiently. An example is the area of fractional differential operators, interest in which has exploded in recent years among numerical analysts, probabilists, engineers, and mathematical analysts. Applications are wide-ranging, including pattern formation in biology, therapeutic ultrasound in medicine, and anomalous diffusion in finance and engineering. This is a huge and very active field. The project would address the efficient computation of these difficult-to-compute fractional operators, and applications to new areas. One interesting possibility is to look into the fractional steady-state wave equation, which has applications in, e.g., geophysics. Here much is still open, including the qualitative behaviour of solutions, appropriate models, analysis of the solutions of the PDE model, and both the efficient computation and the analysis of the numerical schemes.
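As a small taste of the computational side, the sketch below evaluates a fractional Laplacian spectrally on a periodic grid, where it acts simply as the Fourier multiplier $|k|^{2s}$. This shortcut is special to periodic boundary conditions; the project concerns the much harder general settings where no such diagonalisation is available.

```python
import numpy as np

def fractional_laplacian_periodic(u, s, length=2 * np.pi):
    """Spectral evaluation of (-Laplacian)^s u on a uniform periodic grid
    (illustrative sketch): multiply Fourier coefficients by |k|^(2s)."""
    n = u.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=length / n)  # angular wavenumbers
    return np.real(np.fft.ifft(np.abs(k) ** (2 * s) * np.fft.fft(u)))

# sanity check: for u = sin(x), (-Laplacian)^s u = |1|^(2s) sin(x) = sin(x)
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
print(np.allclose(fractional_laplacian_periodic(np.sin(x), s=0.5), np.sin(x)))
```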

Space-time numerical methods for nonlinear acoustics with applications to medical ultrasound  

Supervisor: Lehel Banjai 

Description: In this project we would consider the numerical solution of a class of nonlinear wave equations modelling medical ultrasound. One such model is described by the attenuated Westervelt equation. The standard procedure for solving the equation numerically is to first discretize in space by, e.g., the finite element or finite difference method. Explicit or implicit time-stepping applied to this semi-discretization gives rise to a heavily structured space-time discretization. Instead, in this project we will look at fully unstructured space-time meshes that can be adapted in both space and time to the wave travelling through the tissue. Such space-time finite element methods have been investigated since the end of the 1980s. However, only lately has there been a surge of interest in them, due to the ready availability of high-performance parallel computing infrastructure. Much is still open: optimal formulations, a posteriori error analysis and adaptivity, efficient construction of space-time elements in 3+1 dimensions, and the solution of the resulting linear systems (preconditioning, parallel direct methods). The aim of this project is to look at some of these aspects.

Defect interaction in a crystalline lattice

Supervisor: Julian Braun 

Description: Crystalline materials are solids in which the atoms follow the pattern of a periodic lattice. Defects are the imperfections in this lattice structure. As such, they are crucial to fully understanding the overall behaviour of the material. The aim of this project is an in-depth analysis of the interaction of two or more defects at the atomic level. This should lead to the derivation of interaction laws on a larger scale, while also giving the opportunity to develop new numerical methods for the computation of defect interaction.

Modelling the transport of microplastics in the ocean  

Supervisor: Cathal Cummins 

Description: There are an estimated 5.25 trillion plastic pieces floating in the global oceans, with approximately 1.5 million tonnes of microplastics polluting the ocean each year. There is an observed size-based preferential loss of this plastic from the ocean surface into the water column; however, we still lack a full understanding of the mechanisms behind this process. This hinders our ability to map the distribution of microplastics in the ocean, monitor their ecological impact, or plan for partial removal. However, we have recently made progress in developing a mathematical description of one important process, biofouling (the accumulation of algae on the surface of microplastics), and its role in the vertical movement of floating debris. One of the key findings of this work is that particle properties are the biggest factor in determining the particular excursions that microplastics make beneath the free surface.

However, this study neglected inertial effects, such as added mass and history effects, which result from the lagging boundary-layer development of an accelerating particle. It also did not consider the effects of non-spherical geometries. In a recent review, we found that history effects can only be neglected for microplastics of diameter 55 microns or less in regular ocean conditions. Given that microplastics are defined as any plastic debris with diameter between 1 micron and 5 mm, there remains a considerable range of microplastics whose dynamics should include an analysis of the history force. This project aims to investigate the influence of inertial and geometric effects on the migration and ultimate fate of biofouled particles in the ocean.

Optimal algorithms for nonlinear partial differential equations

Supervisor: Sebastien Loisel 

Description: The efficient solution of partial differential equations plays an important role in all fields of application. For example, the stationary heat equation (the Laplacian or Poisson problem) asks for a solution $u$ to the partial differential equation $u_{xx} + u_{yy} = f$. One can discretize this problem, e.g. replacing the derivatives by finite differences, which yields a finite-dimensional linear problem. I am interested in nonlinear problems, such as the $p$-Laplacian. These problems are much harder, and published solvers often fail to converge, or converge very slowly.  

If the PDE is discretized on a grid with $n$ points, it is clearly impossible to solve it in less than $O(n)$ time. In this project, we will investigate algorithms for solving nonlinear PDEs in almost $O(n)$ time.
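For contrast with the nonlinear setting, here is the linear model problem in one dimension: a minimal finite-difference discretisation of $u'' = f$ on $(0,1)$ with homogeneous Dirichlet boundary conditions. It is an illustrative sketch; the dense solve below costs far more than $O(n)$, which is precisely why fast solvers (e.g. multigrid) matter.

```python
import numpy as np

def poisson_1d(f, n=100):
    """Solve u'' = f on (0,1) with u(0) = u(1) = 0 by second-order central
    differences (illustrative sketch): the discretisation yields a
    tridiagonal linear system with n unknowns."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    return x, np.linalg.solve(A, f(x))   # dense solve, for brevity only

# check against the exact solution of u'' = -pi^2 sin(pi x), i.e. u = sin(pi x)
x, u = poisson_1d(lambda x: -np.pi**2 * np.sin(np.pi * x))
print(np.max(np.abs(u - np.sin(np.pi * x))))  # O(h^2) discretisation error
```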

Modelling Epithelial Wound Healing

Supervisor: Jonathan Sherratt  

Description: The term epithelium refers to the surface layer of an organ, and it is the first line of defence against injury. The healing of epithelial wounds has been studied in great detail in the skin and the cornea of the eye, including a significant body of mathematical modelling. I am keen to develop these models to apply specifically to epithelia in other tissues, which can show significant points of difference, such as a close interplay with the immune system. Work in this area is well suited to a student keen to apply partial differential equation models to a specific biological system with potential medical implications.

Modelling Vegetation Patterns in Semi-Arid Regions  

Supervisor: Jonathan Sherratt  

Description: In regions where water is the limiting resource, plants often cluster together, forming large-scale spatial patterns. Mathematical models of this process have been studied for 20 years, and have contributed hugely to our understanding of the patterns. I am keen to develop these models, making them more realistic by including factors such as long-range dispersal. This will involve the development of new mathematical methodologies, for example to construct numerical bifurcation diagrams for integro-partial differential equations. The aims include the identification of new signatures of imminent ecosystem collapse, and the development of optimal strategies for replanting degraded landscapes.

Ecological and epidemiological models of wildlife and livestock systems  

Supervisor: Andrew White 

Description: Mathematical models are key tools to understand the population and infectious disease dynamics of natural systems. Results from model studies have been used to guide policy decisions and shape conservation strategies to protect endangered species. Models typically focus on pairwise interactions, such as predator-prey or host-disease dynamics, but there is now evidence that the ecological community composition emerges through complex interactions where, for example, the interplay between competition, predation, disease transmission, seasonality and spatial structure can all play a key role. Examples include how the shared pathogen, squirrelpox, and the shared predator, the pine marten, can alter the outcome of species competition between red and grey squirrels and how the re-introduction of a native predator species, wolves, can reduce the prevalence of tuberculosis in wildlife prey species such as wild boar and deer and thereby reduce the chance of disease spillover to livestock populations.

This project would aim to develop new models and theory that capture the complexity of the real world by examining complex species interactions which integrate the effects of competition, predation and disease across trophic levels. The models will be developed in collaboration with biologists with expertise in the red and grey squirrel, squirrelpox and pine marten case study system in the UK and Ireland, with biologists who examine pathogen diversity at the interface between wildlife and livestock populations in Spain and in collaboration with the theoretical ecology group at UC Berkeley, USA.

Statistical Control of the Ecological Risks of Fisheries

Description: Fishing competes with predators, such as birds and seals, for resources and might impact their populations. This PhD project will use state-of-the-art statistical methods to analyse the dynamics involved in fisheries, and will develop new models and tools to manage the ecological and financial impacts of fishing. The analysis of ecosystem dynamics will make use of extensive data, including remotely sensed environmental variables and time series of bird, mammal and fish population and performance estimates. The results of this analysis will then be used in a simulation framework to develop and test feedback methods for managing fisheries, controlling the risks to marine ecosystems while maintaining the economic benefits of fishing.

Supervisor: Abdul-Lateef Haji-Ali

Bilevel optimisation for inverse problems: analysis, fast computations, and Bayes

Description: Inverse problems concern the estimation of parameters of mathematical models given real-world data: we estimate the permeability of a groundwater reservoir using measurements of the hydrostatic pressure in the reservoir, we reconstruct the position and shape of a tumour using attenuated X-rays in medical imaging, and we train the weight and bias matrices in a deep neural network that aims at distinguishing cats and dogs. Inverse problems are usually not uniquely solvable, or their solution is brittle with respect to small perturbations in the data: they are ill-posed.

Two ways to overcome ill-posedness are regularisation on the one hand and the Bayesian approach on the other. The regularisation approach consists in minimising a functional that is the sum of the negative log-likelihood of the observed data given the unknown parameter and an additional term with favourable properties. The Bayesian approach is probabilistic: we model the unknown parameter as a random variable distributed according to the so-called prior distribution. Using the aforementioned likelihood and Bayes' formula, we obtain the posterior distribution, that is, the conditional distribution of the parameter given the observed data. The posterior can be used for point estimation and uncertainty quantification.

Regulariser and prior can have a large influence on the solution of the inverse problem, and an appropriate choice is hard. In bilevel optimisation, we aim to `learn' the regulariser based on available data. Such a parameter can be a simple prefactor of a usual regulariser [Reyes et al.; Journal of Mathematical Imaging and Vision 57: 1–25 (2017)] that needs to be determined, or the regulariser can be completely determined by a neural network [Mukherjee et al.; ArXiv 2008.02839 (2020)], in which case weights and biases need to be learned.

We commence this project by looking at [Antil et al.; Inverse Problems, 36: 064001 (2020)], which presents an interesting method to estimate the fraction of a fractional Laplacian that is used for regularisation. Here, the primary goal is to find a more scalable version of this algorithm through modern techniques in numerical linear algebra, allowing us to reconstruct large-scale medical images. Future work may include: bilevel optimisation of prior distributions in Bayesian inversion, learning of sparse dictionaries through optimisation on manifolds, other fractional operators (such as total variation), and a continuous-time analysis of stochastic gradient descent in bilevel optimisation [Jin et al.; ArXiv 2112.03754 (2021)].

Supervisors: Dr Jonas Latz, Dr Abdul-Lateef Haji-Ali

Numerical analysis for multiscale bulk-surface PDEs

Description: Coupled systems of nonlinear partial differential equations defined in bulk domains and on surfaces arise naturally in the modelling of many biological and physical systems. Popular examples are models of intercellular signalling processes, which are crucial to all biological processes in living tissues. In such models the dynamics of signalling molecules in intercellular and/or intracellular spaces (the bulk domain) are coupled to the dynamics of receptors on cell membranes (the surface). In this project we will consider the design and analysis (a priori and a posteriori error analysis) of numerical schemes for multiscale bulk-surface problems, considering processes on two different spatial scales (e.g. on the level of a single cell and on the tissue level).

Supervisor: Mariya Ptashnyk (joint with C. Venkataraman, University of Sussex)

Numerical analysis for nonlocal cross-diffusion systems

Description: Cross-diffusion systems arise in the modelling of many different biological and physical processes, e.g. the movement of cells, bacteria or animals, transport through ion channels in cells, tumour growth, gas dynamics, and carrier transport in semiconductors, with the chemotaxis system being one of the most important examples. The motivation for considering nonlocal cross-diffusion systems, in which the Laplacian modelling the random walk is replaced by the fractional Laplacian, comes from the experimental observation that, in the context of both cell motility and population dynamics, organisms in certain situations move according to Lévy processes. In this project we will consider the design, analysis and implementation of efficient numerical schemes for the simulation of nonlocal cross-diffusion systems. There are two main challenges in the numerical simulation of fractional cross-diffusion systems: the cross-diffusion terms and the nonlocality of the fractional Laplacian.

Supervisor: Mariya Ptashnyk (joint with Lehel Banjai, Heriot-Watt University)

Analysis of Collective phenomena

Supervisor: Michela Ottobre

Description: The study of collective phenomena is concerned with understanding how coordinated behaviour can emerge from interactions between particles; typical examples are bird flocking, fish schooling and crowd behaviour. This research field has attracted the attention of various communities for decades, and yet the number of interesting open questions seems to keep increasing as new applications of the underlying abstract framework emerge (voting systems, social media and neural networks can all be seen through the lens of collective phenomena). The analysis of such phenomena crucially requires a strong interplay between (stochastic) analysis, numerical analysis, computational methods and modelling insight. This project will focus on collective phenomena in mathematical biology, with particular interest in animal migration.

Can we make long term predictions?

Supervisor: Michela Ottobre

Description: Quoting Niels Bohr, "it is hard to make predictions, especially about the future"; and one may add, "even harder if it is about the distant future". We experience the difficulty of making long-term predictions in everyday life (think of weather forecasts, life expectancy for cancer patients, market behaviour, etc.). Beyond underlying modelling issues (to which we will partly come back later), our ability to make predictions relies either on the use of probabilistic/statistical approaches, which aim at quantifying the likelihood of possible scenarios (chance of rain/sun), or on numerical simulation methods. In the latter case one tries to "look into the future" by numerically approximating equations which are intended to model a given system (e.g. the spread of cancer cells or of a disease in a population). Very often a combination of both approaches is adopted, each having its strengths and weaknesses. When using numerical simulations, which are the main focus of this project, one of the core issues is that the numerical error typically increases in time, i.e. it increases if the simulation is run for longer. This makes long-term simulations (and predictions based on them) less reliable. The overarching goal of this project is to understand when a given Stochastic Differential Equation (SDE), whether finite or infinite dimensional, can be numerically approximated with an error which does not increase in time.
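A minimal numerical illustration of the question, using the Ornstein-Uhlenbeck SDE $dX_t = -\theta X_t\,dt + \sigma\,dW_t$ (all parameter values are illustrative assumptions): this equation is contractive, and the Euler-Maruyama error does not accumulate over long horizons, which is exactly the kind of uniform-in-time behaviour the project seeks to characterise in general.

```python
import numpy as np

def euler_maruyama_ou(theta=1.0, sigma=1.0, dt=0.01, horizon=1000.0,
                      x0=5.0, seed=0):
    """Euler-Maruyama discretisation of the OU SDE dX = -theta X dt + sigma dW
    (illustrative sketch); for this contractive SDE the discretisation
    error stays bounded uniformly in time."""
    rng = np.random.default_rng(seed)
    n = int(horizon / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = x[i] - theta * x[i] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

x = euler_maruyama_ou()
# the invariant law is N(0, sigma^2 / (2 theta)) = N(0, 0.5): the long-run
# sample variance stays near 0.5 up to an O(dt) bias, however long we run
print(np.var(x[len(x) // 2:]))
```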

Factors influencing the time to disease fade-out

Supervisor: Damian Clancy 

Description: The spread of infectious disease through a population is an inherently random process, and can be studied using stochastic models. For diseases which become endemic in a population, one object of interest is the time until fade-out of infection (a random variable). The expected time to fade-out may be computed straightforwardly through Monte Carlo simulation, or more exactly using general Markov process theory. For more complicated models, implementing these approaches becomes less straightforward, and approximation methods may also be needed. In this project, you will investigate a variety of approaches, with the aim of understanding the effects of particular disease features upon the time to fade-out. There are many different features of different diseases that you could study; for instance, you might examine the impact of environmental transmission upon disease persistence, or the effects of changes in the birth and death rates of the susceptible population.

References:  

"Approximating time to extinction for endemic infection models" by Damian Clancy and Elliott Tjia (2018), Methodology and Computing in Applied Probability volume 20, pages 1043–1067 10.1007/s11009-018-9621-8 

"The Influence of Latent and Chronic Infection on Pathogen Persistence" by A. O'Neill, A. White, D. Clancy, F. Ruiz-Fons & C. Gortázar (2021), Mathematics volume 9, article number 1007 https://doi.org/10.3390/math9091007 

Projects in Analysis of PDEs and Stochastic Analysis

Semi-discrete optimal transport theory: Numerical methods and applications 

Supervisor: David Bourne 

Description: Optimal transport theory goes back to 1781 and the French engineer Gaspard Monge, who wanted to find the optimal way of transporting soil to build earthworks for Napoleon's troops. While Leonid Kantorovich made some progress on the problem in the 1940s with the invention of linear programming, the problem remained unsolved for over 200 years. In fact it was not even known whether a solution existed until major mathematical breakthroughs in the 1980s and 1990s. These theoretical advances opened the floodgates to applications. Optimal transport theory is now applied to PDEs, geometry, economics, image processing, crowd dynamics, statistics, machine learning, and the list goes on. In July 2018 the Italian mathematician Alessio Figalli won a Fields Medal for his work in optimal transport and PDEs.

This PhD project focusses on an important class of optimal transport problems known as semi-discrete transport problems, which have recently found applications in weather modelling [1], pattern formation [2], microstructure modelling [3], optics, and fluid mechanics. In this project we will explore further applications and develop novel numerical methods. 
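To fix ideas, here is Kantorovich's linear-programming formulation on a tiny discrete example (a sketch with illustrative data). In the semi-discrete problems this project studies, one marginal is continuous and one instead computes a Laguerre tessellation, as in [3] below, but the linear programme already shows the structure.

```python
import numpy as np
from scipy.optimize import linprog

def discrete_ot(mu, nu, cost):
    """Discrete optimal transport as a linear programme (illustrative
    sketch): minimise the total cost of a transport plan whose row sums
    are mu and whose column sums are nu."""
    m, n = cost.shape
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):
        A_eq[i, i * n:(i + 1) * n] = 1.0     # row sums equal mu
    for j in range(n):
        A_eq[m + j, j::n] = 1.0              # column sums equal nu
    b_eq = np.concatenate([mu, nu])
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.x.reshape(m, n), res.fun

mu = np.array([0.5, 0.5])
nu = np.array([0.25, 0.75])
cost = np.array([[0.0, 1.0], [1.0, 0.0]])
plan, value = discrete_ot(mu, nu, cost)
print(plan, value)  # the optimal plan moves 0.25 of mass at unit cost
```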

References: 

[1] Bourne, D.P., Egan, C.P., Pelloni, B. & Wilkinson, M. (2022) Semi-discrete optimal transport methods for the semi-geostrophic equations, Calculus of Variations and Partial Differential Equations, 61:39. 

[2] Bourne, D.P. & Cristoferi, R. (2021) Asymptotic optimality of the triangular lattice for a class of optimal location problems, Communications in Mathematical Physics, 387, 1549-1602. 

[3] Bourne, D.P., Kok, P.J.J., Roper, S.M. & Spanjer, W.D.T. (2020) Laguerre tessellations and polycrystalline microstructures: A fast algorithm for generating grains of given volumes, Philosophical Magazine, 100, 2677-2707. 

Elasticity methods in computer vision 

Supervisor: John Ball 

Description: The project concerns the comparison of images by minimizing a functional depending on a map taking one image to the other and on features of the images (see [1]). Part of the functional is the same as that for nonlinear elasticity. In work with a current student, some basic properties of this model, such as the existence of minimizers, have been established, and conditions have been found under which, for linearly related images, the minimization delivers the corresponding linear map. The project will extend this work in several directions, in particular by testing the minimization algorithm numerically, and by considering the effect of adding to the functional a term depending on second derivatives of the deformation.
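
Schematically (the precise functional studied in [1] may differ in its details), one compares images $f_1, f_2 : \Omega \to \mathbb{R}$ by minimizing, over deformations $y : \Omega \to \Omega'$, a functional of the form

$$ I(y) = \int_\Omega W(\nabla y(x))\, dx + \int_\Omega \psi\big(f_1(x), f_2(y(x))\big)\, dx, $$

where the first term is a nonlinear elastic stored energy penalising irregular deformations, and the second measures how well the map $y$ matches the features of the two images.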

The project offers training in nonlinear analysis, especially the calculus of variations, in related numerical methods, and in nonlinear elasticity itself.

References:

[1] J. M. Ball, Nonlinear elasticity and image processing, Lecture at Newton Institute, https://talks.cam.ac.uk/talk/index/96634 

Equilibrium of liquid crystals in exterior domains 

Supervisor: John Ball 

Description: The aim of the project is to investigate the equilibrium configurations of nematic liquid crystals in the 3D region outside a finite number of bounded open sets $W_i$, according to the Oseen-Frank theory, whose state variable is a unit vector field giving the mean orientation of the rod-like molecules forming the liquid crystal. This problem was studied in 2D in [1], but the 2D theory has a different flavour, in that 2D equilibria are smooth, while in 3D they can have singularities (see [2]) such as point defects. Currently there is much interest in liquid crystal colloids, in which the $W_i$ are particles that can move, and the proposed project has possible developments for the study of such dynamical situations.
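
For orientation, in the simplest (one-constant) approximation the Oseen-Frank energy of a director field $n$, with $|n(x)| = 1$, is

$$ E(n) = K \int |\nabla n(x)|^2\, dx, $$

with the full theory assigning separate elastic constants to the splay, twist and bend modes of distortion; equilibria are critical points of this energy subject to the unit-vector constraint and the boundary conditions.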

The project offers training in modern techniques of nonlinear partial differential equations and the calculus of variations adaptable to other situations. 

References: 

[1] Lu Liu, The Oseen-Frank theory of liquid crystals, Thesis, Oxford, 2019. 

[2] Haïm Brezis, Jean-Michel Coron, and Elliott H Lieb. Harmonic maps with defects. Communications in Mathematical Physics, 107(4):649–705, 1986.   

The Laplacian eigenvalues on regions with symmetries

Supervisors: Lyonell Boulton and Beatrice Pelloni 

Description: The eigenvalues of the Laplacian on a rectangle can easily be found in terms of trigonometric functions. On a disc they can be found in terms of Bessel functions. From these two examples, we might be tempted to assume that it is possible to find the eigenvalues on other simple regions (for example by arguments involving symmetry). Indeed, they can be computed exactly on a right isosceles triangle in terms of those on the square. In general, however, this is not true. Even for a generic triangle (the simplest possible 2D region), it is not the case that we can always find a closed expression for the smallest eigenvalue or the corresponding eigenfunction.
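
For instance, for the Dirichlet Laplacian on the rectangle $[0,a] \times [0,b]$, the eigenfunctions are $\sin(m \pi x / a) \sin(n \pi y / b)$ and the eigenvalues are

$$ \lambda_{m,n} = \pi^2 \left( \frac{m^2}{a^2} + \frac{n^2}{b^2} \right), \qquad m, n = 1, 2, \ldots $$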

Regions for which we know the exact eigenvalues include the equilateral triangle. A list of these eigenvalues was first computed by Gabriel Lamé in the 19th century, and the arguments for the calculation are very sophisticated, involving a tessellation of the plane by parallelograms and the study of various symmetry groups. It is remarkable that the first full proof that Lamé's list was complete was found only in 1985 [1].

Recently, a new approach by Fokas and Kalimeris [3] seems to provide an effective mechanism for computing the full list of eigenvalues of the equilateral triangle with Dirichlet, Neumann and other natural boundary conditions. This technique appears to be very promising, and it is yet to be tested on other regions.

This PhD project will begin by analysing and comparing the proofs given in [1] and [3], alongside a simpler proof found in [2], that Lamé's list of eigenvalues on the equilateral triangle is complete. We will then move on to investigating the following problem: can we compute explicitly the eigenvalues on a regular hexagon, and what about other regular polygons in general? For this, the case of mixed boundary conditions (Dirichlet and Neumann) on triangles might be a natural line of enquiry.

See [4] for a full list of references on the subject. 

References: 

[1] M. Pinsky, SIAM J. Math. Anal. 16 (1985) 848-851. 

[2] B.J. McCartin, SIAM Rev. 45 (2003), 267-287. 

[3] A.S. Fokas and K. Kalimeris, Comput. Methods Funct. Theory 14 (2014) 10-33.

[4] D.S. Grebenkov and B.-T. Nguyen, SIAM Rev. 55 (2013) 601-667. 

One-parameter Semigroups on Metric Graphs 

Supervisor: Lyonell Boulton 

Description: The purpose of this project is to study the time-evolution equations associated to linear differential operators on a graph. We assume that the edges of the graph are segments of given lengths, and that suitable regularity conditions and boundary conditions are fixed at the nodes.

According to the seminal work of Kramar Fijavž, in the case of the Laplacian we know necessary and sufficient conditions at the nodes for the time-evolution problem to have a solution for every initial condition which is square integrable, i.e. in the Hilbert space $L^2$. This is a universal existence result. The main purpose of this project is to extend the work of Kramar Fijavž following three main leads.

1- Classify the family of initial conditions for which the evolution problem still has a solution, even when universal existence fails.

2- Consider more general operators, such as those of Sturm-Liouville type.

3- Consider the more general case of Banach spaces $L^p$. 
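
To fix ideas, the prototypical example of the setting above is the heat equation on a metric graph: on each edge $e$, identified with an interval $[0, \ell_e]$, one solves $\partial_t u_e = \partial_x^2 u_e$, and a standard choice of node conditions is continuity of $u$ at each node $v$ together with the Kirchhoff condition

$$ \sum_{e \sim v} \partial_\nu u_e(v, t) = 0, $$

where the sum runs over the edges incident to $v$ and $\partial_\nu$ denotes the derivative directed into the edge. Different node conditions produce different semigroups, and only some choices yield a well-posed evolution for every $L^2$ initial condition.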

The project will provide training in the state of the art of non-self-adjoint spectral theory and the theory of one-parameter semigroups on graphs.

The s-numbers of higher order Sobolev embeddings 

Supervisor: Lyonell Boulton 

Description: The theory of embeddings between function spaces began with the need to prove existence and uniqueness results for solutions of partial differential equations. The theory flourished as the demand for solving more complicated equations created the need to supply sharper inequalities. This symbiosis is illustrated by the classical Sobolev inequality, discovered in the 1930s. In many applications it is enough to show the existence of the constant in such an inequality, but in some other applications the precise value of the minimal constant is required. One example of this arises in the theory surrounding the so-called Euclidean isoperimetric inequality.

The optimal constant for the first-order Sobolev inequality on a segment with zero boundary conditions was computed by Talenti in 1976. Since then, rather little has been established about the case of Sobolev spaces of higher order. An exception is the case of second-order embeddings, examined recently in [1]. The purpose of this project will be a thorough investigation of the properties of the optimal constant, and of the so-called singular numbers (s-numbers), for higher-order embeddings.
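
In its simplest one-dimensional form, the inequality in question states that, for $u$ in the Sobolev space $W^{1,p}_0(0,1)$,

$$ \| u \|_{L^q(0,1)} \le C \, \| u' \|_{L^p(0,1)}, $$

and the optimal (smallest) constant is the value of the variational problem $C = \sup\{ \|u\|_{L^q} / \|u'\|_{L^p} : u \neq 0 \}$. Talenti's 1976 result identifies this constant and its extremal functions in the first-order case; the higher-order problem replaces $u'$ by higher derivatives.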

References: 

[1] arXiv:2204.04703

Hierarchical Methods for Stochastic Partial Differential Equations

Description: Partial Differential Equations (PDEs) are important and versatile tools for modelling various phenomena, such as fluid dynamics, thermodynamics and nuclear waste disposal. Stochastic Partial Differential Equations (SPDEs) generalize PDEs by introducing random parameters or random forcing. One is then interested in quantifying the uncertainty in the outputs of such models through the computation of various statistics. Accurate computation of these statistics can be costly, since it requires fine time- and space-discretizations to satisfy accuracy requirements. Several hierarchical methods have been developed to address this issue and applied successfully to Stochastic Differential Equations (SDEs); in this project we will extend these works to deal with the more complicated setting of SPDEs.
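
One prominent family of hierarchical methods is Multilevel Monte Carlo (MLMC). The sketch below illustrates the telescoping-sum idea on a scalar SDE (geometric Brownian motion) rather than an SPDE; the model, discretization and sample allocation are illustrative assumptions made here, not the project's method.

    import numpy as np

    rng = np.random.default_rng(0)

    def mlmc_level(level, n_paths, T=1.0, mu=0.05, sigma=0.2, x0=1.0):
        """Level-l MLMC sample mean for E[X_T] under geometric Brownian
        motion, discretized by Euler-Maruyama with step T/2**level.
        Returns the mean of the fine estimator on level 0, and the mean
        correction E[X^l - X^{l-1}] (fine and coarse paths driven by the
        same Brownian increments) on levels l >= 1."""
        n_fine = 2 ** level
        h_f = T / n_fine
        dW = rng.normal(0.0, np.sqrt(h_f), size=(n_paths, n_fine))
        x_f = np.full(n_paths, x0)
        for k in range(n_fine):
            x_f = x_f + mu * x_f * h_f + sigma * x_f * dW[:, k]
        if level == 0:
            return x_f.mean()
        h_c = 2.0 * h_f
        dW_c = dW[:, 0::2] + dW[:, 1::2]   # coarse increments from fine ones
        x_c = np.full(n_paths, x0)
        for k in range(n_fine // 2):
            x_c = x_c + mu * x_c * h_c + sigma * x_c * dW_c[:, k]
        return (x_f - x_c).mean()

    # Telescoping sum: E[X^L] = E[X^0] + sum_l E[X^l - X^{l-1}], with fewer
    # samples on the expensive fine levels.
    L, N0 = 5, 100_000
    estimate = sum(mlmc_level(l, max(N0 // 2 ** l, 100)) for l in range(L + 1))
    print("MLMC estimate of E[X_T] (exact value exp(mu*T) ~ 1.051):", estimate)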

Supervisor: Abdul-Lateef Haji-Ali

Numerical Methods for Financial Market Models

Description: Many models for the evolution of financial and economic variables, for example interest rates, inflation rates and advanced models of stock prices, have no known closed-form analytical solution. To be able to work with these models, for example to value financial derivative products and to manage their risk, it is of fundamental importance to design numerical methods that are accurate, efficient, and readily adaptable to changing market conditions. Financial derivatives are ubiquitous: they are embedded in many standard financial and insurance products, and they play an important role in the risk management of companies. In this project, we will apply methods from stochastic analysis and probability theory to models of financial markets, to enhance the understanding of their stochastic properties and to design fast, high-quality methods for their numerical treatment.
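
As a minimal illustration of the kind of numerical method involved (the choice of model, scheme and parameters are assumptions made here for concreteness, not the project's method): Euler-Maruyama with full truncation for the Cox-Ingersoll-Ross short-rate model, used to estimate a zero-coupon bond price by Monte Carlo.

    import numpy as np

    # CIR short-rate model: dr_t = kappa*(theta - r_t) dt + sigma*sqrt(r_t) dW_t.
    # We estimate the zero-coupon bond price P(0,T) = E[exp(-integral of r)].
    rng = np.random.default_rng(1)
    kappa, theta, sigma, r0 = 1.0, 0.04, 0.2, 0.03  # assumed parameters
    T, n_steps, n_paths = 1.0, 200, 100_000
    h = T / n_steps

    r = np.full(n_paths, r0)
    integral = np.zeros(n_paths)
    for _ in range(n_steps):
        integral += r * h                      # left-endpoint rule; a trapezoid
                                               # rule would be more accurate
        dW = rng.normal(0.0, np.sqrt(h), n_paths)
        r_pos = np.maximum(r, 0.0)             # full truncation keeps sqrt real
        r = r + kappa * (theta - r_pos) * h + sigma * np.sqrt(r_pos) * dW

    print("Monte Carlo zero-coupon bond price:", np.exp(-integral).mean())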

Supervisor: Anke Wiese

An Optimisation View of Deep Learning Methods for Data Science

Description: Data science transforms data into interpretable information to enable accurate decision-making. Such methods rely on advanced mathematical tools. Optimisation is one of them, and it is broadly used to design robust, fast and scalable algorithms to minimise given objective functions. Since the early 2000s, proximal methods have become the state of the art for solving minimisation problems, in particular in the context of inverse problems. During the last decade, proximal algorithms involving neural networks (NNs) have emerged. Two main classes of such hybrid methods can be distinguished. The first approach consists in unrolling an optimisation algorithm over a fixed number of iterations to build the layers of an NN, leading to unfolded NNs. Unfolded NNs are particular instances of end-to-end NNs that are directly used to solve inverse problems, processing corrupted data to produce a corrected output. The second approach relies on replacing the denoising steps of an optimisation algorithm by NNs, leading to plug-and-play (PnP) algorithms.
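
As a minimal illustration of the PnP idea (everything here is an assumption made for illustration: in practice the denoiser would be a trained NN, and convergence requires conditions on it, as discussed below), the following sketch runs a PnP forward-backward iteration on a toy linear inverse problem, with soft-thresholding standing in for the learned denoiser.

    import numpy as np

    rng = np.random.default_rng(2)
    n, m = 64, 32
    A = rng.normal(size=(m, n)) / np.sqrt(m)     # assumed forward operator
    x_true = np.zeros(n); x_true[::8] = 1.0      # sparse ground truth
    y = A @ x_true + 0.01 * rng.normal(size=m)   # noisy measurements

    def denoiser(x, tau):
        """Placeholder denoiser: soft-thresholding. A PnP method would
        replace this with a trained NN satisfying, e.g., a nonexpansiveness
        condition to guarantee convergence."""
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    step = 1.0 / np.linalg.norm(A, 2) ** 2       # stable gradient step size
    x = np.zeros(n)
    for _ in range(200):
        grad = A.T @ (A @ x - y)                 # gradient of 0.5*||Ax - y||^2
        x = denoiser(x - step * grad, tau=0.01)  # "plug in" the denoiser

    print("reconstruction error:", np.linalg.norm(x - x_true))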

Research related to optimisation-based NNs for inverse imaging problems is relatively recent, and the associated methods are evolving fast. Some of the challenges of interest in this field relate to (i) theoretical understanding, (ii) the design of new methods (including optimisation algorithms, sampling methods, NNs, etc.), and (iii) applications (e.g., medical, astronomical and photon imaging).

Possible research directions for this project include (but are not restricted to):

(1) Theoretical guarantees of hybrid optimisation-NN methods: Although the convergence of PnP algorithms has recently begun to be better understood, many questions remain unanswered (or only partially answered). For instance, to ensure convergence of PnP algorithms, NNs must satisfy some technical conditions; how can such NNs be built? Similarly, when unrolling a fixed number of iterations of an optimisation algorithm, all the theoretical guarantees are lost. What type of guarantees do unfolded NNs offer?

(2) Building flexible NNs for inverse problems: In PnP methods, the NNs involved depend on the underlying statistical models (e.g., a higher noise level on the measurements requires stronger denoisers). Hence different NNs must be trained depending on the inverse problem's statistical model, which is computationally prohibitive. How can we build more flexible NNs that can be adapted to multiple statistical models?

(3) Improving NN efficiency using optimisation: Optimisation methods benefit from numerous acceleration strategies. Can (unfolded) NNs benefit from such acceleration techniques to yield more powerful networks?

Supervisors: Audrey Repetti and Prof. Jean-Christophe Pesquet (University of Paris-Saclay, CentraleSupelec)

Interacting particle systems and SPDEs of McKean-Vlasov type

Supervisor: Michela Ottobre 

Description: The study of Interacting Particle Systems (IPSs) and related kinetic equations has attracted the interest of the mathematics and physics communities for decades. Such interest is kept alive by the continued success of this framework in modelling a vast range of phenomena, in fields as diverse as biology, social sciences, control engineering, economics, game theory, statistical sampling and simulation, and neural networks.

When the number of agents/particles in the system is very large, the dynamics of the full Particle System (PS) can be rather complex and expensive to simulate; moreover, one is quite often more interested in the collective behaviour of the system than in its detailed description. In this context, the established methodology in statistical mechanics and stochastic analysis is to look for simplified models that retain the relevant characteristics of the original PS by letting the number N of particles tend to infinity; the resulting limiting equation for the density of particles is typically a low-dimensional (in contrast with the initial high-dimensional PS) non-linear partial differential equation (PDE), where the non-linearity has a specific structure, commonly referred to as a McKean-Vlasov non-linearity. We will consider PSs where the dynamics of each particle is described by a Stochastic Differential Equation (SDE). When N tends to infinity, depending on the specific nature of the noise, one can obtain either a PDE for the particle density (i.e. a deterministic equation, which is the classical setting) or a Stochastic PDE. It is well known that, even in the classical case in which the limit is a PDE, the particle system and the PDE can have very different properties, raising the question of whether the PDE is a good approximation of the initial PS. The case in which the limit is stochastic is far less investigated, and this is the regime on which this project will focus.
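
As a minimal illustration (the model and parameters are assumptions made here for concreteness), the sketch below simulates a PS with mean-field interaction through the empirical mean, $dX^i_t = -(X^i_t - \bar{X}_t)\,dt + \sigma\,dW^i_t$ with $\bar{X}_t = \frac{1}{N}\sum_j X^j_t$; its $N \to \infty$ limit is an equation of McKean-Vlasov type for the particle density.

    import numpy as np

    rng = np.random.default_rng(3)
    N, sigma, h, n_steps = 10_000, 0.5, 0.01, 500

    X = rng.normal(2.0, 1.0, size=N)   # initial particle positions
    for _ in range(n_steps):
        drift = -(X - X.mean())        # interaction via the empirical mean
        X = X + drift * h + sigma * np.sqrt(h) * rng.normal(size=N)

    # As N grows, the empirical distribution of the particles approximates
    # the law solving the limiting McKean-Vlasov equation.
    print("empirical mean and variance:", X.mean(), X.var())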

Analysis of Multiscale problems: stochastic averaging and homogenization

Supervisor: Michela Ottobre

Description: Many systems of interest in the applied sciences share the common feature of possessing multiple scales, in time or in space, or both. Some approaches to modelling focus on one scale and incorporate the effects of the other scales (e.g. smaller scales) through empirical constitutive relations. Multiscale modelling approaches are built on the ambition of treating several scales at the same time, with the aim of deriving (rather than empirically obtaining) efficient coarse-grained (CG) models which incorporate the effects of the smaller/faster scales. Obtaining such CG descriptions in a principled way helps one strike a compromise between microscopic models, which are accurate but computationally expensive, and macroscopic models, which are less accurate but simpler. Multiscale methods play a fundamental role across science, providing both underpinning for numerics/simulation algorithms and modelling paradigms in an impressive range of fields, such as engineering, materials science, mathematical biology, social sciences and climate modelling (notably playing a central role in Hasselmann's programme, where climate/weather are seen as slow/fast dynamics, respectively), to mention just a few.

This project is primarily concerned with the study of stochastic systems which possess multiple time-scales and are modelled by stochastic differential equations (SDEs). In their simplest form, the systems we consider are made of two components, commonly referred to as the fast and slow scales. In this case, assuming the fast process (FP) evolves towards a (unique) equilibrium, a so-called equilibrium measure (EM), established methodologies allow one to obtain an effective dynamics by substantially 'replacing' the FP with its behaviour in equilibrium (intuitively, due to the large time-scale separation, the FP immediately reaches its equilibrium state). In this context, the method of multiscale expansions provides a way to formally derive the CG dynamics, while stochastic averaging (and homogenization) techniques provide analytical tools for rigorous proofs.
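
Schematically, and with additive noise for simplicity, the simplest slow/fast system has the form

$$ dX^\varepsilon_t = f(X^\varepsilon_t, Y^\varepsilon_t)\, dt + dW_t, \qquad dY^\varepsilon_t = \frac{1}{\varepsilon}\, g(X^\varepsilon_t, Y^\varepsilon_t)\, dt + \frac{1}{\sqrt{\varepsilon}}\, dB_t, $$

where $0 < \varepsilon \ll 1$ measures the time-scale separation. If, for each frozen value of $x$, the fast process with drift $g(x, \cdot)$ is ergodic with unique equilibrium measure $\mu^x$, then as $\varepsilon \to 0$ the slow variable converges to the averaged dynamics

$$ d\bar{X}_t = \bar{f}(\bar{X}_t)\, dt + dW_t, \qquad \bar{f}(x) = \int f(x, y)\, \mu^x(dy). $$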

When using any of these techniques, a key assumption is that the dynamics of the FP (more precisely, of the so-called 'frozen process') has a unique EM, that is, that it is ergodic. Without this assumption even formal multiscale expansions seem to be no longer useful. Nonetheless, many systems in the applied sciences do exhibit multiple equilibria. In particular, when the fast process has multiple EMs, the procedure informally described above can no longer be used as is, and even producing an ansatz for the reduced description of the dynamics becomes nontrivial. This is the starting point for this project.

How to apply

If you would like to know more about any of the PhD projects listed above, please contact the relevant supervisor directly; we are always happy to have an informal chat.

We actively promote equality, diversity and inclusion and welcome applications from all qualified applicants.

For informal enquiries about the PhD programme get in touch with:

  • Daniel Coutand (Mathematical Sciences Admissions Officer) for projects in Mathematical Sciences.
  • Seva Shneer (Actuarial Mathematics and Statistics Admissions Officer) for projects in Actuarial Mathematics and Statistics.
  • Michela Ottobre (PhD Programme Director).

You can find full details of how to apply by visiting the Maxwell Institute Graduate School (MIGS) website.

For further useful support on how to get a PhD, we strongly encourage you to look at the Piscopia Initiative website. Piscopia is a student-led initiative aimed at helping female and non-binary applicants through the PhD application process; it has been hugely successful, so please get in touch with the organisers, who are incredibly friendly and helpful.