Scott Roberts (BT)

AI Inventors: innovation or serendipity?

The prospect of artificial intelligence independently creating new and useful innovations hit the headlines in 2020 with the filing of patent applications for inventions purported to have been generated entirely by machine intelligence. The policy questions arising from such applications tested the patent law frameworks in Europe and the United States. The issue of AI inventorship runs deep, and in this talk we will briefly explore the inventive process, the question of discovery, and the extent of autonomy of machine learning in the innovation process.

Scott Roberts is a patent attorney and computer scientist working at BT. He currently serves as the President of the IP Federation, representing IP-intensive industry in the UK. Scott is also an ad personam member of the Standing Advisory Committee before the European Patent Office (SACEPO), is appointed as an industry representative to the UK Government Department for International Trade’s IP Thematic Working Group, and represents the Confederation of British Industry at the BUSINESSEUROPE Patents Working Group.

Professor Chris Reed (Queen Mary University)

Overcoming the Asimov Effect – how lawmakers misunderstand AI and how to change that

Machine learning (ML) is no longer a purely research activity. AIs based on ML are being deployed into society, and this inevitably attracts the attention of lawmakers and regulators. They ask new questions:

  • Is it safe?
  • Is it doing the right things?
  • Is it not doing the wrong things?
The answers they get are unsatisfactory because lawmakers think they understand the technology, but their understanding is largely based on the fictional writings of Asimov and others from the 1950s about ‘thinking’ machines which replicate human decision-making. This is not how AIs ‘think’. This talk will examine some of the legal and regulatory problems which arise from this misunderstanding:
  • Restrictions on using AI
  • Demands to invent AIs which ‘understand’ human concepts like fairness
  • Regulatory systems which focus on transparency and explanations which an AI cannot provide
It will conclude with some thoughts about the current legal and regulatory direction of travel, which is encouraging, and suggestions for how technologists might educate lawmakers and regulators to clear up the misunderstandings and produce better solutions which enable the societal benefits of AI to be achieved.

Chris Reed is Professor of Electronic Commerce Law at the Centre for Commercial Law Studies, Queen Mary University of London, where he was formerly Director of the Centre and subsequently Academic Dean of the Faculty of Law & Social Science. He consults to companies and law firms, having previously been of counsel to the City of London law firms Lawrence Graham, Tite & Lewis and Stephenson Harwood.

Chris has worked exclusively in the computing and technology law field since 1987, and teaches University of London LLM students from all over the world. He has published widely on many aspects of computer law; his latest books are Rethinking the Jurisprudence of Cyberspace (with Andrew Murray, Edward Elgar 2018) and Making Laws for Cyberspace (OUP 2012). Research with which he was involved led to the EU directives on electronic signatures and on electronic commerce. The Leverhulme Trust awarded him a Major Research Fellowship for 2009-2011 (see Making Laws for Cyberspace for findings).

From 1997 to 2000 Chris was Joint Chairman of the Society for Computers and Law, of which he is an inaugural Honorary Fellow, and in 1997-8 he acted as Specialist Adviser to the House of Lords Select Committee on Science and Technology. Chris has acted as an Expert for the European Commission, represented the UK Government at the Hague Conference on Private International Law and has been an invited speaker at OECD and G8 international conferences.

John McCall (National Subsea Centre, Robert Gordon University, Aberdeen)

Energy Transition: Transforming the North Sea Supply Chain

The North Sea has, for decades, been a major basin for hydrocarbon exploration and production and still produces around 70 Mt of hydrocarbons per annum. However, with the abundance of marine renewable energy resources and the emerging Blue Economy, the North Sea is now at the heart of the global transition to clean energy generation and a net zero carbon economy. In this talk I will explore how AI and data science can be applied to enable and accelerate this transition. The talk will focus on research at Robert Gordon University over recent years to model the complex and fragmented North Sea supply chain, using real operations data. The work has resulted in optimisation algorithms and data science increasingly being embedded in daily operations along the supply chain, to the point where efficiency savings of hundreds of millions of pounds are in sight, alongside reductions in carbon emissions from operations of up to 40%. I will also set this in the broader context of current drivers for the transition to Net Zero in the North Sea and some related research and technology challenges.

John McCall is the Professorial Lead in Predictive Data Analytics at the National Subsea Centre at Robert Gordon University. He has researched machine learning, search and optimisation for 25 years, making novel contributions to a range of optimisation algorithms and predictive machine learning methods, including estimation of distribution algorithms (EDA), particle swarm optimisation (PSO), ant colony optimisation (ACO) and genetic algorithms (GA). He has 140+ peer-reviewed publications in books, international journals and conferences. These have received over 2200 citations, with an h-index of 22.

John and his research team at RGU specialise in industrially applied optimisation and decision support, working with major international companies including BT, BP, EDF, CNOOC and Equinor, as well as a diverse range of SMEs. Major application areas for this research are: vehicle logistics, fleet planning and transport systems modelling; predictive modelling and maintenance in energy systems; and decision support in industrial operations management. John and his team attract direct industrial funding as well as grants from UK and European research funding councils and technology centres. John is a founding director of two companies: Celerum, which provides consultancy and general optimisation software services; and PlanSea Solutions, which focuses on marine planning and logistics.

John has served as a member of the IEEE Evolutionary Computation Technical Committee and as an Associate Editor of IEEE Computational Intelligence Magazine and the IEEE Systems, Man and Cybernetics Journal, and he is currently an Editorial Board member for the journal Complex & Intelligent Systems. He frequently organises workshops and special sessions at leading international conferences. Most recently, he chaired the Workshop on Evolutionary Computation for Permutation Problems (ECPERM) at GECCO 2020. John has served on a number of industry advisory bodies, including the OGTC Academic Panel and the ScotlandIS - SDS Digital Skills Partnership Advisory Board. He chaired the Education Board of The Data Lab Technology Centre from 2014 to 2017.

Dr. Jun Chen (Queen Mary University)

Automating Airport Surface Operations through Multi-Objective Decision Support

With increasing demand for air travel and overloaded airport facilities, inefficient airport taxiing operations have been identified as a significant contributor to unnecessary fuel burn and a substantial source of pollution. The critical problem is the allocation of taxi routes to aircraft that balances the conflicting objectives of taxi times, costs and emissions, independent of airport topology or day-to-day operations. The trade-offs between these objectives reflect the interests of the different stakeholders. The problem is also highly dimensional, as the efficiency of airport operations depends on aircraft dynamics, airport layout, departing and arriving air traffic, and constraints including weather, air regulations and pilot-in-the-loop interaction. By modelling aircraft and their movements accurately, more efficient taxi routes can be generated than by the manual or automated methods currently in use, while still maintaining safety standards. This talk focuses on the development of such a system, employing realistic, robust, cost-effective and reconfigurable multi-objective decision support methodologies. The proposed methodologies have been validated on complex ground handling problems at major airports, reducing taxi times, operating costs and environmental impact.
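The abstract does not specify the algorithm used, but the core idea of multi-objective decision support — keeping only candidate routes that are not beaten on every objective at once — can be sketched as a simple Pareto filter. Route names and objective values below are purely illustrative, not from the talk:

```python
# Minimal sketch: Pareto-filtering candidate taxi routes scored on three
# objectives to be minimised: (taxi time, cost, emissions).
# All names and numbers are hypothetical.

def dominates(a, b):
    """True if route a is at least as good as b on every objective
    and strictly better on at least one (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(routes):
    """Keep only routes not dominated by any other candidate."""
    return {name: obj for name, obj in routes.items()
            if not any(dominates(other, obj)
                       for other_name, other in routes.items()
                       if other_name != name)}

# (taxi time in minutes, cost in pounds, emissions in kg CO2)
candidates = {
    "A": (12.0, 300.0, 95.0),
    "B": (15.0, 250.0, 110.0),   # cheapest, so it survives
    "C": (14.0, 320.0, 120.0),   # worse than A on all three objectives
}
print(sorted(pareto_front(candidates)))  # ['A', 'B']
```

The surviving set is the Pareto front: no single "best" route exists, and the final choice among front members is where stakeholder priorities (airline cost versus airport throughput versus environmental impact) come into play.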

Jun Chen is Senior Lecturer (Associate Professor) in Engineering Science at QMUL. He received his PhD from the University of Sheffield and has published more than 60 scientific papers in the areas of multi-objective optimisation, interpretable fuzzy systems, data-driven modelling, and intelligent transportation systems. Dr. Chen was among the first researchers to investigate the trade-off between taxi time and fuel consumption in airport ground movements, and proposed the Active Routing (AR) concept. AR forms the cornerstone of a major ongoing EPSRC-funded project for which Dr. Chen is the lead PI. He has also been the PI on four industrial projects with Anglian Water and four EPSRC IAA projects, and the CoI on three Innovate UK projects. Since 2020, he has served as a full member of the EPSRC Peer Review College. In 2018, he was awarded a Turing Fellowship by the national artificial intelligence research institute, the Alan Turing Institute.

Maria Axente (PwC)

The Journey Towards Responsible AI

Over the past couple of years, risks and ethical considerations have come to the forefront of AI development and use. More and more companies prioritise proactive governance of AI, updating current organisational practices, roles and responsibilities to reflect the new challenges brought by autonomous systems. In this talk we will explore best practices around the development and use of responsible AI, and how to embark on a journey that aims to maximise the benefits while mitigating both the foreseen and the unintended consequences of AI.

In her role as Responsible AI and AI for Good Lead at PwC, Maria Axente leads the implementation of ethics in AI for the firm, while partnering with industry, academia, governments, NGOs and civil society to harness the power of AI in an ethical and responsible manner, acknowledging its benefits and risks in many walks of life. She has played a crucial part in the development and set-up of PwC’s UK AI Centre of Excellence, the firm’s AI strategy and, most recently, PwC’s Responsible AI toolkit, the firm’s methodology for embedding ethics in AI. Maria is a globally recognised AI ethics expert, an Advisory Board member of the UK All-Party Parliamentary Group on AI, a member of BSI/ISO and IEEE AI standards groups, a Fellow of the RSA and an advocate for gender diversity and children’s and youth rights in the age of AI.

Janet Adams (SingularityNET)

Ethics of AI in Financial Services

Janet’s talk will focus on the ethics of AI in financial services, proposing that Accountability and Explainability are the master keys which will unlock the benefits of AI in the industry, delivering fair customer outcomes, regulatory compliance and global market stability. Through her research, Janet investigated and assessed different algorithmic approaches for nine AI banking use cases across the retail and wholesale sectors, and compared the results with the requirements for safe and ethical AI published by governments, public bodies and regulators across the globe. She will present these findings along with a comprehensive framework for tangible action in this field.

Janet Adams is the Chief Operating Officer at SingularityNET, the world’s leading decentralised AI network. An avid artificial intelligence enthusiast and disruptive tech advocate, Janet has over two decades of experience leading large-scale technology and risk change. Before joining SingularityNET in 2021, she worked across commercial and investment banking; her resume includes the likes of HSBC, RBS and Barclays. Combining her published MSc dissertation on the Explainability and Accountability of AI with her special brand of technology, risk and conduct expertise, Janet has established herself as a thought leader in AI ethics and risk management. Janet is committed to the principle of inclusivity and strongly believes that the AI revolution will play a significant role in furthering the work of strengthening diversity.