Lab for AI Verification (LAIV)

What we do

Artificial Intelligence is a research and engineering area that develops methods for adaptive and autonomous applications. When your mobile phone learns to recognise your voice, that is adaptive behaviour; when your car navigator suggests a better route, that is autonomous planning. Adaptive and autonomous applications have become pervasive in both the global economy and our everyday lives. However, can we really trust them? The question of trust in computer systems is traditionally the subject of the Formal Verification domain. The two domains, AI and Formal Verification, therefore have to meet.

LAIV is a team of researchers working on a range of interdisciplinary problems that combine AI and Formal Verification.

For example, we seek answers to the following questions:

  • How do we establish the safety and security of AI applications?
  • What are the mathematical properties of AI algorithms?
  • How can types and functional programming help to verify AI?
  • How can we verify neural networks and other related machine-learning algorithms?
  • How can machine learning improve software verification?

The Lab was established in 2019, with the initial aim of providing a local hub where researchers and research students from the Edinburgh Centre for Robotics and the National Robotarium could meet with computer scientists, logicians, and programming-language experts interested in the verification of AI. Since then, the range of our projects and collaborations has widened.

For more information, visit the LAIV website.

Group Members

Ekaterina Komendantskaya, Rob Stewart, Andrew Ireland, Michael Lones, Hans-Wolfgang Loidl, Wei Pang, Muhammad Najib, Lilia Georgieva, Kathrin Stark, Marko Doko, Chengjia Wang