Tutorials
Tutorial I
Quantum Computing for Artificial Intelligence | Francesco Petruccione | Ian Joel David
Francesco Petruccione
University of KwaZulu-Natal
Ian Joel David
University of KwaZulu-Natal
Bio: Francesco Petruccione
Francesco studied Physics at the University of Freiburg i. Br. and received his PhD in 1988. He was conferred the “Habilitation” degree (Dr. rer. nat. habil.) by the same University in 1994. In 2004 he was appointed Professor of Theoretical Physics at the University of KwaZulu-Natal (UKZN), in Durban (South Africa). In 2005 he was awarded an Innovation Fund grant to set up a Centre for Quantum Technology. In 2007 he was granted the South African Research Chair for Quantum Information Processing and Communication. At present, he is also Pro Vice-Chancellor for Big Data and Informatics of UKZN, one of the Deputy Directors of the National Institute for Theoretical Physics (NITheP), and Adjunct Professor in the School of Electrical Engineering of the Korea Advanced Institute of Science and Technology (KAIST). Prof Petruccione is an elected member of the Academy of Science of South Africa, a Fellow of the Royal Society of South Africa and a Fellow of the University of KwaZulu-Natal. He has published about 190 papers in refereed scientific journals. He is the co-author of a monograph on “The Theory of Open Quantum Systems” (more than 7000 citations according to Google Scholar), which was published in 2002, reprinted as a paperback in 2007, and translated into Russian. Recently, he published a monograph (with Maria Schuld) on “Supervised Learning with Quantum Computers”. He is the editor of several proceedings volumes and of special issues of scientific journals. Prof Petruccione is a member of the Editorial Boards of the journals “Open Systems and Information Dynamics”, “Scientific Reports”, and “Quantum Machine Intelligence”.
Bio: Ian Joel David
Ian Joel David studied Physics and Mathematics at the University of KwaZulu-Natal (UKZN). He is currently completing a Master’s degree in Physics at UKZN, focusing on the digital simulation of open quantum systems. He has spent several years as a research assistant in Prof. Petruccione’s Quantum Research Group at UKZN, and has given several introductory and advanced quantum computing mini-courses at various institutions and conferences.
Tutorial Content:
Quantum computing has recently become an area of great interest to people in many fields, from computer science and engineering to the finance industry. The aim of this tutorial is to allow researchers working in artificial intelligence to gain some exposure to, and a basic understanding of, the field of quantum computing. We shall cover the theory behind a basic quantum classifier, the Hadamard classifier, and implement it using Qiskit, a Python package for quantum computing research. The tutorial is divided into two parts: the first is a theoretical part covering all of the theory necessary to understand the Hadamard classifier; the second demonstrates how to implement the classifier using Python and Qiskit.
The theoretical portion of this tutorial shall cover the following concepts:
- What is quantum computing?
- Qubits.
- Quantum gates.
- Quantum circuits.
- Measurement and the squared state overlap.
- State preparation for data encoding in the Hadamard classifier.
- The classification step of the Hadamard classifier.
For the practical portion, we shall work through a Jupyter notebook that performs classification on the Iris dataset using the Hadamard classifier; a minimal sketch of the circuit is given below.
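To give a flavour of the practical part, here is a minimal, hedged sketch of a distance-based Hadamard classifier in Qiskit. It is not the tutorial’s notebook: the 2-D example vectors, the qubit layout, and the idealised state preparation via initialize() are our own illustrative assumptions, and it assumes Qiskit with the Aer simulator installed.

```python
# A minimal sketch of a distance-based Hadamard classifier, assuming
# Qiskit with the Aer simulator installed. NOT the tutorial's notebook:
# the example data, qubit layout, and use of initialize() for idealised
# state preparation are illustrative assumptions.
import numpy as np
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

# Normalised 2-D feature vectors (made-up data, one training point per
# class), amplitude-encoded into a single data qubit.
x_test = np.array([0.8, 0.6])            # point to classify
x_train = [np.array([1.0, 0.0]),         # training point, label 0
           np.array([0.0, 1.0])]         # training point, label 1
labels = [0, 1]

# Build the 4-qubit state (q0 = ancilla, q1 = index, q2 = data,
# q3 = class): the ancilla=0 branch carries the test point, the
# ancilla=1 branch the m-th training point, entangled with its label.
state = np.zeros(16)
for m, (x_m, y_m) in enumerate(zip(x_train, labels)):
    for d in range(2):
        state[(y_m << 3) | (d << 2) | (m << 1) | 0] = x_test[d] / 2
        state[(y_m << 3) | (d << 2) | (m << 1) | 1] = x_m[d] / 2

qc = QuantumCircuit(4, 2)
qc.initialize(state, [0, 1, 2, 3])  # idealised state preparation
qc.h(0)                             # interfere test and training copies
qc.measure(0, 0)                    # ancilla -> classical bit 0
qc.measure(3, 1)                    # class qubit -> classical bit 1

sim = AerSimulator()
counts = sim.run(transpile(qc, sim), shots=8192).result().get_counts()

# Post-select on ancilla = 0 (count keys read "<class bit><ancilla bit>")
# and compare the class-qubit statistics.
print("predicted class:", 0 if counts.get("00", 0) > counts.get("10", 0) else 1)
```

After the Hadamard gate interferes the test and training copies, post-selecting on ancilla = 0 leaves class-qubit probabilities that depend on the squared state overlap between the test point and each class’s training points, which is exactly the quantity discussed in the theoretical part.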
Tutorial II
Logical Neural Networks – a Graphical Tutorial | Naweed Khan | IBM Research | Africa
Naweed Khan
IBM Research | Africa
Bio: Naweed Khan
Naweed Khan is a Research Scientist in the AI Science team at IBM Research | Africa. He heads the patent portfolio in the South Africa lab and is a lead open-source developer in the global Neuro-Symbolic AI team. He is a recipient of IBM’s Research Accomplishment award, with work focusing on enhancing core machine learning algorithms, computational framework implementation, and infrastructure development in projects that look beyond today’s deep learning approaches. He develops tools that push the boundaries of Natural Language Understanding: making machines that reason with knowledge, learn from noisy data, and compute in an efficient and symbolically interpretable way. Naweed obtained his MEng in Robotics and Image Processing at the University of Johannesburg, where he also holds a BSc in Computer Science.
Tutorial Content | Logical Neural Networks – a Graphical Tutorial
Logical Neural Networks (LNNs) present a new paradigm for using interpretable neural networks to reason about the world in a logically sound and complete manner.
This tutorial will focus on implementation examples, highlighting the differences between LNNs and standard NNs and demonstrating how knowledge can enhance today’s sophisticated neural architectures.
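To make this concrete, the sketch below reconstructs the core LNN building block from the paper linked under Prerequisites: a weighted, real-valued Łukasiewicz AND neuron that operates on truth bounds rather than point activations. This is an illustrative plain-NumPy reconstruction under our own reading of the paper, not code from the tutorial or from IBM’s open-source LNN library; the parameter names w and beta follow the paper, and the example inputs are made up.

```python
# A minimal sketch of an LNN conjunction: a weighted Lukasiewicz AND
# evaluated on truth *bounds* [lower, upper] instead of single values.
# Illustrative reconstruction from the LNN paper (arXiv:2006.13155);
# beta and w are the paper's bias and input weights.
import numpy as np

def lnn_and(bounds, w, beta):
    """Weighted Lukasiewicz AND over truth bounds.

    bounds: (n, 2) array of [lower, upper] truth values in [0, 1]
    w:      (n,) non-negative input weights
    beta:   scalar bias
    Returns [lower, upper] bounds on the truth of the conjunction.
    """
    # f(x) = clip(beta - sum_i w_i * (1 - x_i), 0, 1) is monotone in
    # each x_i, so lower input bounds give the lower output bound.
    lower = np.clip(beta - np.sum(w * (1.0 - bounds[:, 0])), 0.0, 1.0)
    upper = np.clip(beta - np.sum(w * (1.0 - bounds[:, 1])), 0.0, 1.0)
    return np.array([lower, upper])

# A AND B with A known TRUE and B UNKNOWN (bounds [0, 1]):
bounds = np.array([[1.0, 1.0],    # proposition A: TRUE
                   [0.0, 1.0]])   # proposition B: UNKNOWN
print(lnn_and(bounds, w=np.array([1.0, 1.0]), beta=1.0))  # -> [0. 1.]
```

Because the neuron keeps bounds rather than a single activation, the conjunction correctly remains UNKNOWN ([0, 1]) until B is resolved; this bound-keeping is what lets LNNs reason soundly under incomplete knowledge, one of the differences from standard NNs that the tutorial demonstrates.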
Prerequisites:
LNN paper – https://arxiv.org/abs/2006.13155
Tutorial III
Accelerating and Scaling Inference with NVIDIA GPUs | Adam Grzywaczewski | NVIDIA
Adam Grzywaczewski
NVIDIA
Bio: Adam Grzywaczewski, Senior Deep Learning Solution Architect, NVIDIA
Adam Grzywaczewski is a deep learning solution architect at NVIDIA, where his primary responsibility is to support a wide range of customers in the delivery of their deep learning solutions. He is an applied research scientist specialising in machine learning, with a background in deep learning and system architecture. Previously, he was responsible for building up the UK government’s machine-learning capabilities while at Capgemini, and worked in the Jaguar Land Rover Research Centre, where he was responsible for a variety of internal and external projects and contributed to the self-learning car portfolio.
Tutorial Content | Accelerating and Scaling Inference with NVIDIA GPUs
The content covers how to deploy PyTorch, ONNX, TensorFlow, and TensorRT models to Triton. This includes how to use functionality like perf_analyzer, model-analyzer, Prometheus and Grafana, Python and C++ clients, ensemble models, and custom Python and C++ backends. Additionally, the content is motivated by real-world challenges that data scientists face as they explore Triton (NLP and RecSys use cases). Upon completion of this lab, students will be able to:
- Create an end-to-end inference pipeline
- Create models across all frameworks (PyTorch, ONNX, TensorFlow, TensorRT) for deployment in Triton
- Define model configurations for these models, with different settings
- Send requests to Triton for each of these different models using Python synchronous and asynchronous calls as well as dynamic batching
- Analyze and understand opportunities for further optimization
- Use model-analyzer and perf_analyzer tools to assess model performance
- Use Prometheus and Grafana to build monitoring dashboards to visualize performance metrics (latency, throughput, dynamic batch size, etc.)
- Leverage gRPC, asynchronous inferencing, dynamic batching, shared memory, etc. to optimize latency and throughput
- Create Python and C++ backends for custom operations
- Create ensembling modules (Python and C++)
- Create Python and C++ clients to interact with the Triton server (a minimal Python client sketch follows this list)
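For a flavour of the client-side exercises, below is a minimal synchronous Triton HTTP client sketch using the tritonclient package. It is not lab code: the model name my_model, the tensor names INPUT0/OUTPUT0, the input shape, and the server address are placeholders that must match your deployment’s config.pbtxt.

```python
# A minimal synchronous Triton HTTP client sketch, assuming
# `pip install tritonclient[http]` and a Triton server on localhost:8000.
# Model name "my_model" and tensor names INPUT0/OUTPUT0 are placeholders
# that must match the deployed model's config.pbtxt.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build the request: one FP32 input tensor of shape (1, 16).
data = np.random.rand(1, 16).astype(np.float32)
inputs = [httpclient.InferInput("INPUT0", list(data.shape), "FP32")]
inputs[0].set_data_from_numpy(data)
outputs = [httpclient.InferRequestedOutput("OUTPUT0")]

# Synchronous call; client.async_infer(...) is the asynchronous variant.
result = client.infer("my_model", inputs, outputs=outputs)
print(result.as_numpy("OUTPUT0"))
```

The gRPC client (tritonclient.grpc) exposes an analogous interface; the lab combines asynchronous calls with Triton’s dynamic batching to trade latency against throughput.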
Prerequisite(s):
- Good understanding of Python and basic experience using one of the deep learning frameworks (e.g. PyTorch, TensorFlow, MXNet)
- Basic understanding of container technology and Docker
- Basic understanding of the HTTP protocol