Deep Learning in Quantitative Finance
Author(s): Green, Andrew
ISBN No.: 9781119685289
Pages: 400
Year: 2025
Format: E-Book
Price: $ 132.58
Dispatch delay: Dispatched within 7 to 15 days
Status: Available (Forthcoming)

Contents

Acknowledgments xix
1 Introduction 3
1.1 What this book is about 3
1.2 The Rise of AI 5
1.2.1 LLMs 5
1.3 The Promise of AI in Quantitative Finance 7
1.4 Practicalities 7
1.4.1 The Examples 7
1.4.2 Python and PyTorch 8
1.4.3 Docker 9
1.5 Reading this book 10
2 Feed Forward Neural Networks 13
2.1 Introducing Neural Networks 13
2.1.1 Why activation must be non-linear 15
2.1.2 Learning Representations 17
2.2 Regression and Classification 18
2.3 Activation Functions 27
2.3.1 Linear 28
2.3.2 Sigmoid (Logistic) 28
2.3.3 Heaviside (Binary) 29
2.3.4 Hyperbolic Tangent (tanh) 29
2.3.5 Rectified Linear Unit (ReLU) 31
2.3.6 Leaky ReLU 32
2.3.7 Parametric Rectified Linear Unit (PReLU) 32
2.3.8 Gaussian Error Linear Unit (GELU) 33
2.3.9 Exponential Linear Unit (ELU) 33
2.3.10 Scaled Exponential Linear Unit (SELU) 33
2.3.11 Swish 33
2.3.12 Scaled Exponentially-Regularised Linear Units (SERLU) 35
2.3.13 Softmax 35
2.4 The Universal Function Approximation Theorem 45
2.5 Conclusions 48
3 Training Neural Networks 49
3.1 Backpropagation and Adjoint Algorithmic Differentiation 50
3.1.1 Adjoint Algorithmic Differentiation 51
3.2 Data Preparation and Scaling 53
3.2.1 Vectorization 53
3.2.2 Input Normalization 54
3.2.3 Handling Test and Validation Data 57
3.2.4 Feature Engineering? 57
3.3 Weight Initialization 57
3.3.1 Initializing Weights 58
3.3.2 Initializing Biases 60
3.4 The Choice of Loss Function 68
3.4.1 Regression 68
3.4.2 Binary Classification 74
3.4.3 Multi-class Classification 79
3.4.4 Multi-label Classification 81
3.5 Optimization Algorithms 82
3.5.1 Basic Techniques 82
3.5.2 Optimizers with Adaptive Learning Rates 91
3.6 Common Training Problems 97
3.6.1 Overfitting/Underfitting 97
3.6.2 Defining Bias and Variance Mathematically 100
3.6.3 Local Minima 101
3.6.4 Saddle Points and Second Order Methods 101
3.6.5 Vanishing and Exploding Gradients 102
3.7 Batch Normalization 104
3.8 Evaluation and Validation 110
3.8.1 The Train / Test / Validation Split 110
3.8.2 Evaluation Metrics 113
3.9 Sobolev Training Using Function Derivatives 124
3.9.1 Incorporating Derivatives 125
3.9.2 Key Theorems 126
3.9.3 Empirical Results 127
3.10 Conclusions 131
4 Regularisation 133
4.1 Introduction: Regularisation and Generalisation 133
4.2 Weight Decay 134
4.2.1 L2 Regularisation 135
4.2.2 L1 Regularisation 136
4.3 Early Stopping 137
4.4 Ensemble Methods and Dropout 138
4.4.1 Bootstrap Aggregating (Bagging) 139
4.4.2 Dropout 140
4.5 Data Augmentation 146
4.6 Other Regularisation Methods 147
4.6.1 Batch Normalisation as Regularisation 147
4.6.2 Multitask Learning 147
4.7 Conclusions: Regularisation Strategy 149
5 Hyperparameter Optimization 151
5.1 Introduction 151
5.1.1 Types of Hyperparameter 153
5.1.2 Types of HPO 154
5.2 Manual 155
5.3 Grid Search 155
5.4 Random Search 158
5.5 Bayesian Optimization 159
5.5.1 The Gaussian Process Surrogate Model 160
5.5.2 The Acquisition Function 161
5.5.3 Enhancements for Bayesian Hyperparameter Optimization 162
5.6 Bandit-based 165
5.6.1 Successive Halving (SHA) 166
5.6.2 Hyperband 169
5.6.3 BOHB 173
5.6.4 Asynchronous Successive Halving (ASHA) 176
5.7 Population Based Training (PBT) 181
5.8 Conclusions 184
6 Convolutional Neural Networks 187
6.1 Introduction 187
6.2 Convolutions 188
6.2.1 Mathematics of Convolutions 188
6.2.2 Convolutional Layers 191
6.2.3 Edge Effects and Padding 194
6.2.4 Multi-channel Convolutions 195
6.2.5 Selecting Filter Sizes 198
6.2.6 Choosing the Number of Filters 203
6.3 Downsampling 203
6.3.1 Strided Convolutions 203
6.3.2 Pooling 203
6.4 Data Augmentation 206
6.5 Transfer Learning Using Pre-trained Networks 211
6.6 Visualising Features 213
6.6.1 Visualizing Filters and Feature Activations 213
6.6.2 Gradient-based Visualization 214
6.7 Famous CNNs 223
6.7.1 LeNet 223
6.7.2 AlexNet 225
6.7.3 VGG 230
6.7.4 Inception 234
6.7.5 ResNet 245
6.8 Conclusions on CNNs 252
7 Sequence Models 255
7.1 Introducing Sequence Models 255
7.2 Recurrent Neural Networks 257
7.2.1 Shallow RNNs 258
7.2.2 Bidirectional RNNs 263
7.2.3 Deep RNNs 267
7.2.4 Vanishing and Exploding Gradients 269
7.2.5 Long Short Term Memory (LSTM) 270
7.2.6 Gated Recurrent Unit (GRU) 272
7.3 Neural Natural Language Processing 276
7.3.1 Introducing NLP 276
7.3.2 NLP Preprocessing 276
7.3.3 N-grams 281
7.3.4 Evaluation Metrics for NLP 283
7.3.5 A Neural Probabilistic Language Model 286
7.3.6 Word Embeddings 293
7.3.7 RNNs and NLP 297
7.3.8 Sequence to Sequence Models 301
7.3.9 Attention Mechanisms 309
7.3.10 Transformers and Large Language Models 314
7.4 Conclusions on Sequence Models 322
8 Autoencoders 323
8.1 Introduction 323
8.1.1 Encoders and Decoders 325
8.2 Autoencoders and Singular Value Decomposition 325
8.2.1 PCA and SVD 325
8.2.2 Linear Autoencoders Replicate SVD 328
8.2.3 Autoencoders as Non-linear PCA 332
8.3 Shallow and Deep Autoencoders 332
8.4 Regularized and Sparse Autoencoders 336
8.5 Denoising Autoencoders 339
8.6 Autoencoders and Generative Models 341
8.7 Conclusion 342
9 Generative Models 343
9.1 Introduction 343
9.2 Evaluating Generative Model Performance 345
9.2.1 Inception Score 346
9.2.2 Fréchet Inception Distance 348
9.3 Energy-based Models (EBMs) 348
9.3.1 Boltzmann Machines


The full table of contents and description for this publication are available to subscribers.