Practical Issues in Asset Management
Dr. Reha Tutuncu from Point72 shared his expertise and thoughts on the challenges and issues in asset management from a practitioner's perspective. Reha discussed issues associated with factor investing and multi-period models, and how investors should strategize in the age of COVID-19.
Model Validation and Machine Learning
Dr. Agus Sudjianto focused the discussion on machine learning explainability and robustness. Explainability is critical for evaluating the conceptual soundness of models, particularly for applications in highly regulated institutions such as banks. Many explainability tools are available; this talk focuses on how to develop fundamentally interpretable models.
Machine Learning for Factor Investing
Tony Guido first introduced the concept of supervised learning. He covered the practitioner's angle on constructing non-linear multi-factor signals from stock characteristics, and showed the added value of ML-based signals over traditional linear factor blends in equities.
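To see why a non-linear signal can beat a linear factor blend, consider a toy cross-section (purely illustrative, not Tony's actual methodology) where returns are driven by an interaction of two hypothetical characteristics. A linear blend of the two factors misses the effect entirely; a signal that captures the interaction does not:

```python
import numpy as np

# Toy cross-section of 1000 stocks with two hypothetical characteristics.
rng = np.random.default_rng(42)
n = 1000
value = rng.standard_normal(n)       # hypothetical "value" characteristic
momentum = rng.standard_normal(n)    # hypothetical "momentum" characteristic

# Returns driven by the *interaction* of the two characteristics plus noise.
returns = value * momentum + 0.5 * rng.standard_normal(n)

linear_signal = value + momentum     # traditional linear factor blend
nonlinear_signal = value * momentum  # interaction a non-linear model can learn

corr_linear = np.corrcoef(linear_signal, returns)[0, 1]
corr_nonlinear = np.corrcoef(nonlinear_signal, returns)[0, 1]
```

In this construction the linear blend is nearly uncorrelated with returns, while the interaction signal captures most of the predictable variation; real ML-based signals (boosted trees, neural networks) learn such interactions from data rather than being hand-specified.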
Explainable AI and Bias in Machine Learning: A Financial Industry Perspective
Jennifer Jordan, Kareem Saleh, Anthony Habayeb, and Slater Victoroff discussed AI explainability and bias from entrepreneur and investor perspectives: what the opportunities and challenges are, and what the future looks like for explainable AI.
Validation and Machine Learning - Some Thoughts on Deep Neural Networks
Dr. Jorg Kientz outlined machine learning algorithms and their potential applications, with a specific focus on deep neural networks. Ben Steiner then discussed the challenges of deep learning, focusing on the key aspects of model risk management for deep learning and alpha strategies.
Master Class: GANS with Applications in Synthetic Data Generation
Gautier Marti will introduce Generative Adversarial Networks (GANs) and discuss their applications in synthetic data generation and other areas of quantitative finance. He will also present his work on CorrGAN: sampling realistic financial correlation matrices using generative adversarial networks.
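For context on what CorrGAN learns to do, here is a classical (non-GAN) baseline sketch: sampling a random but valid correlation matrix from random factor loadings. By construction the result is symmetric, positive semi-definite, and has a unit diagonal, which are the constraints any generated correlation matrix must satisfy:

```python
import numpy as np

def sample_correlation_matrix(n_assets=10, n_factors=3, seed=0):
    """Sample a valid correlation matrix via a random factor model."""
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((n_assets, n_factors))            # factor loadings
    # Covariance = common factors + idiosyncratic variances (keeps it PSD).
    cov = B @ B.T + np.diag(rng.uniform(0.1, 1.0, n_assets))
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)                               # normalize to unit diagonal

C = sample_correlation_matrix()
```

A GAN-based sampler such as CorrGAN goes further: instead of a fixed parametric recipe, it learns to reproduce the stylized facts of empirical financial correlation matrices (hierarchical block structure, eigenvalue distribution) from data.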
Ethical use of AI in Financial Markets
As AI and ML penetrate the financial industry, concerns about the ethical use of AI in finance are growing. In this talk, Dan Liebau will focus on how AI can be operationalized to help industry professionals and executive teams alike think about opportunities, risks, and required actions, factoring in ethics in our data-driven world.
Responsible AI in Action
As the discussion of AI ethics and the adoption of Responsible AI grows, there is confusion about what Responsible AI actually means for an enterprise. Is it regulation? Is it having a moral stance? Is it policy? Is it preventing bad actors? As we delegate more and more decision making to machines, we need not only policy but also pragmatic ways to adopt these practices within the enterprise.
Machine Learning Interpretability: Self-Explanatory Models - Interpretability, Diagnostics and Simplification
This talk aims to unwrap the black box of deep ReLU networks through an exact local linear representation, which uses the activation pattern to disentangle the complex network into an equivalent set of local linear models (LLMs).
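The key fact behind this representation can be shown in a few lines: at any input, the on/off pattern of the ReLU units fixes an exact local linear model. A minimal sketch for a one-hidden-layer network with arbitrary random weights:

```python
import numpy as np

# One-hidden-layer ReLU network with random weights (toy example).
rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((8, 4)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((1, 8)), rng.standard_normal(1)

def forward(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

x = rng.standard_normal(4)
pattern = (W1 @ x + b1 > 0).astype(float)   # ReLU activation pattern at x
A = W2 @ (pattern[:, None] * W1)            # local linear coefficients
c = W2 @ (pattern * b1) + b2                # local intercept
# f(x) = A x + c exactly, throughout x's activation region.
```

The network is thus a piecewise linear function, and each activation region contributes one interpretable local linear model, which is what the diagnostics and simplification in the talk build on.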
A Unified Framework for Model Explanation
Ian will discuss a new paper that unifies a large portion of the literature using a simple idea: simulating feature removal. The new class of "removal-based explanations" describes 20+ existing methods (e.g., LIME, SHAP) and reveals underlying links with psychology, game theory and information theory.
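The core idea of removal-based explanations can be sketched in a few lines (a deliberately simple version, not the paper's full framework): explain a prediction by simulating the removal of each feature, here by replacing it with its dataset mean, and measuring how the prediction changes:

```python
import numpy as np

# Toy model to explain: a linear function of three features.
rng = np.random.default_rng(7)
X = rng.standard_normal((500, 3))
w = np.array([2.0, -1.0, 0.0])
f = lambda X: X @ w

x = X[0]                        # instance to explain
baseline = X.mean(axis=0)       # "removed" features take their mean value
attributions = np.empty(3)
for i in range(3):
    x_removed = x.copy()
    x_removed[i] = baseline[i]  # simulate removing feature i
    attributions[i] = f(x[None])[0] - f(x_removed[None])[0]
```

Methods like LIME and SHAP differ mainly in how they define "removal" (mean replacement, marginal or conditional sampling) and how they aggregate the resulting prediction changes; for this linear toy model the attribution of feature i reduces to w_i times its deviation from the baseline.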
Synthetic Data Generation in Finance
Stefan shows how to create synthetic time-series data using generative adversarial networks (GAN). GANs train a generator and a discriminator network in a competitive setting so that the generator learns to produce samples that the discriminator cannot distinguish from a given class of training data.
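The competitive setup can be illustrated with a deliberately tiny scalar GAN (a toy sketch, not Stefan's time-series implementation): the generator shifts Gaussian noise by a learnable offset b, the discriminator is a logistic classifier in x, and both are updated adversarially with hand-derived gradients:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
w, c, b = 0.1, 0.0, 0.0           # discriminator params (w, c), generator param (b)
lr = 0.05
for _ in range(4000):
    real = rng.normal(3.0, 1.0, 64)        # real data ~ N(3, 1)
    fake = rng.normal(0.0, 1.0, 64) + b    # generator output: shifted noise
    d_r, d_f = sigmoid(w * real + c), sigmoid(w * fake + c)
    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    w -= lr * np.mean(-(1 - d_r) * real + d_f * fake)
    c -= lr * np.mean(-(1 - d_r) + d_f)
    # Generator step: descend -log D(fake) (non-saturating loss).
    fake = rng.normal(0.0, 1.0, 64) + b
    d_f = sigmoid(w * fake + c)
    b -= lr * np.mean(-(1 - d_f) * w)
# b converges toward the real mean (3.0): the generator has learned to
# produce samples the discriminator can no longer distinguish.
```

Real time-series GANs replace the scalar shift with a recurrent or convolutional generator and the logistic unit with a deep discriminator, but the adversarial training loop has exactly this shape.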
Introduction to Generative Modeling Using Quantum Machine Learning
In this training, you will develop a basic understanding of quantum computing and how it can be used in machine learning models, with special emphasis on generative models. We will focus on a particular architecture, the quantum circuit Born machine (QCBM), and use it to generate a simple dataset of bars and stripes.
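The bars-and-stripes target dataset itself is classical and easy to construct: every n x n binary image whose rows are all constant (stripes) or whose columns are all constant (bars). A sketch of the data preparation (the QCBM circuit itself is beyond a few lines):

```python
import numpy as np
from itertools import product

def bars_and_stripes(n=2):
    """All n x n binary patterns with constant rows or constant columns."""
    patterns = set()
    for bits in product([0, 1], repeat=n):
        row_pattern = np.tile(np.array(bits)[:, None], (1, n))  # constant rows
        col_pattern = row_pattern.T                             # constant columns
        patterns.add(tuple(row_pattern.ravel()))
        patterns.add(tuple(col_pattern.ravel()))
    return np.array(sorted(patterns)).reshape(-1, n, n)

data = bars_and_stripes(2)   # 6 distinct 2x2 patterns
```

For n = 2 there are 2 * 2^2 - 2 = 6 distinct patterns (all-zeros and all-ones count once); the QCBM is trained so that measuring its circuit yields these bitstrings with uniform probability.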
What To Do When AI Fails: AI Incident Response
This talk will outline a new approach to “incident response” specifically tailored to AI and it will present a free and open sample AI incident response plan. Participants will leave understanding when and why AI creates liability for the organizations that employ it, and how organizations should react when their AI causes major incidents.
Explainability of Supervised Machine Learning
For humans, the insights behind a model's decision making are mostly opaque. Understanding decision making in highly sensitive areas such as healthcare or finance is of paramount importance. Join Nadia Burkart and Dr. Marco Huber for a discussion on the explainability of supervised learning.
Hilbert Space Kernel Methods for Machine Learning: Background and Foundations
In the first part of this talk, Daniel will give an overview of RKHS (Reproducing Kernel Hilbert Space) methods and some of their applications. Jean-Marc will then present and discuss codpy (curse of dimensionality - for Python), a Python library implementing RKHS methods.
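A self-contained RKHS example in plain NumPy (not the codpy API, which the talk covers): kernel ridge regression with a Gaussian (RBF) kernel. The fitted function is a kernel expansion over the training points and lives in the RKHS induced by the kernel:

```python
import numpy as np

def rbf_kernel(A, B, gamma=10.0):
    """Gaussian kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (60, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.05 * rng.standard_normal(60)

# Ridge-regularized fit: solve (K + lambda I) alpha = y.
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + 1e-3 * np.eye(60), y)

X_test = np.linspace(0.05, 0.95, 20)[:, None]
y_pred = rbf_kernel(X_test, X) @ alpha
```

The representer theorem guarantees that the RKHS-norm-regularized solution has exactly this form, a weighted sum of kernels centered at the training points, which is the foundation the talk builds on.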
Fairly and Transparently Expanding Access to Credit
Join QuantUniversity for a complimentary fall speaker series where you will hear from quants, innovators, startups, and fintech experts on various topics in quant investing, machine learning, optimization, fintech, and AI.
Pragmatic Algorithmic Auditing
In this talk, Sri will introduce algorithmic auditing and discuss why it will become a formal process that industries using AI will need. Sri will also discuss the emerging risks in the adoption of AI, and how QuSandbox, the platform his company is building, will address the emerging need for formal algorithmic auditing practices in enterprises.
Innovations in Model Risk Management
What if a financial firm decided to delete its entire set of models and redevelop them from scratch? Jon will discuss what such a firm might do differently in rebuilding its entire model ecosystem in order to avoid, and learn from, some of its previous mistakes.