Deep Learning
=================

Tutorial
--------------
- `CS230: Deep Learning, Stanford | Fall 2018 `_
- `Cheatsheets `_
- `DS-GA 1008: Deep Learning, NYU | Spring 2020 `_
- `MIT 6.S191: Introduction to Deep Learning, MIT | 2020 `_
- `CS 182: Deep Learning, UC Berkeley | Spring 2021 `_
- `CS294-158-SP20: Deep Unsupervised Learning, UC Berkeley | Spring 2020 `_
- `CS330: Deep Multi-Task and Meta Learning, Stanford | Fall 2019 `_
- `MIT 6.S192: Deep Learning for Art, Aesthetics, and Creativity, MIT | 2021 `_

GitHub
--------------
- `Zhang, A., Lipton, Z., Li, M., & Smola, A. (2021). Dive into Deep Learning. arXiv preprint arXiv:2106.11342. `_
- `Chollet, F. (2017). Deep Learning with Python. Manning. `_
- `Foster, D. (2019). Generative Deep Learning: Teaching Machines to Paint, Write, Compose, and Play. O'Reilly Media. `_
- `Patel, A. (2019). Hands-On Unsupervised Learning Using Python: How to Build Applied Machine Learning Solutions from Unlabeled Data. O'Reilly Media. `_
- https://github.com/bharathgs/Awesome-pytorch-list
- https://github.com/rasbt/deeplearning-models

GAN
^^^^^^^^
- https://github.com/eriklindernoren/Keras-GAN
- https://github.com/robbiebarrat/art-DCGAN

Book
--------------
- `Ian Goodfellow, Yoshua Bengio, & Aaron Courville (2016). Deep Learning. MIT Press. `_

Web Page
--------------

Troubleshooting and Debugging
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- `A Recipe for Training Neural Networks - Andrej Karpathy `_
- `Troubleshooting Deep Neural Networks - Josh Tobin `_
- `37 Reasons why your Neural Network is not working - Slav Ivanov `_
- `CS231n: Convolutional Neural Networks for Visual Recognition `_

Hyperparameters
^^^^^^^^^^^^^^^
- `hyperparameters tuning neural network according to loss vs according to scoring function - Stack Exchange `_

Optimizer
^^^^^^^^^
- `AdamW and Super-convergence is now the fastest way to train neural nets - fast.ai `_
- `Which optimizer is best? (An introduction to "On Empirical Comparisons of Optimizers for Deep Learning") - Akihiro FUJII `_
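The first article above concerns AdamW, which decouples weight decay from the gradient update instead of folding L2 regularization into the loss. A minimal sketch of switching to it in PyTorch (model and hyperparameters are arbitrary placeholders):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
# AdamW applies weight decay directly to the parameters, decoupled
# from the adaptive gradient step (unlike Adam + L2 in the loss).
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)

x, y = torch.randn(8, 10), torch.randn(8, 1)
loss = nn.functional.mse_loss(model(x), y)
opt.zero_grad()
loss.backward()
opt.step()
print(loss.item() >= 0.0)  # True
```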

Batch Normalization
^^^^^^^^^^^^^^^^^^^
- `Ordering of batch normalization and dropout? - stackoverflow `_
| Use the order BN → Dropout (with Dropout → BN, the variance of the values fed into BN differs between training and inference, which hurts performance)
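The recommended ordering can be sketched in PyTorch (a minimal example; the layer sizes are arbitrary):

```python
import torch
import torch.nn as nn

# Linear -> BatchNorm -> activation -> Dropout: BatchNorm sees the raw
# pre-activation statistics, and Dropout never perturbs the variance
# that BatchNorm estimated during training.
block = nn.Sequential(
    nn.Linear(128, 64),
    nn.BatchNorm1d(64),
    nn.ReLU(),
    nn.Dropout(p=0.5),
)

x = torch.randn(32, 128)  # batch of 32 samples
y = block(x)
print(y.shape)  # torch.Size([32, 64])
```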

MLP (Multilayer Perceptron)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
- `How to Configure the Number of Layers and Nodes in a Neural Network - Jason Brownlee `_
- `How to choose the number of hidden layers and nodes in a feedforward neural network? - Cross Validated `_
| "One hidden layer is sufficient for the large majority of problems."
| "the optimal size of the hidden layer is usually between the size of the input and size of the output layers."
| "The number of hidden neurons should be 2/3 the size of the input layer, plus the size of the output layer."
| "The number of hidden neurons should be less than twice the size of the input layer."
| "the number of neurons in that layer is the mean of the neurons in the input and output layers."
- `multi-layer perceptron (MLP) architecture: criteria for choosing number of hidden layers and size of the hidden layer? - Stack Overflow `_
- `Number of nodes in hidden layers of neural network - Cross Validated `_
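The rules of thumb quoted above can be compared side by side; a small sketch (these heuristics are starting points for tuning, not guarantees):

```python
def hidden_size_heuristics(n_in: int, n_out: int) -> dict:
    """Rule-of-thumb hidden-layer sizes for a single-hidden-layer MLP."""
    return {
        # "2/3 the size of the input layer, plus the size of the output layer"
        "two_thirds_rule": round(2 * n_in / 3) + n_out,
        # "the mean of the neurons in the input and output layers"
        "mean_rule": round((n_in + n_out) / 2),
        # "less than twice the size of the input layer" (upper bound)
        "upper_bound": 2 * n_in - 1,
    }

print(hidden_size_heuristics(n_in=30, n_out=3))
# {'two_thirds_rule': 23, 'mean_rule': 16, 'upper_bound': 59}
```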

PyTorch
^^^^^^^^
- `Performance Tuning Guide - PyTorch `_
- `Tips for speeding up training and inference in PyTorch - 小川雄太郎 `_
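Two of the common tips from guides like the ones above, sketched (illustrative model and settings only):

```python
import torch
import torch.nn as nn

model = nn.Linear(128, 10)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Tip: zero gradients with set_to_none=True to skip a memset and let
# the optimizer treat absent gradients as zero.
x, target = torch.randn(32, 128), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(model(x), target)
opt.zero_grad(set_to_none=True)
loss.backward()
opt.step()

# Tip: wrap inference in torch.inference_mode() to disable autograd
# bookkeeping entirely (cheaper than torch.no_grad()).
model.eval()
with torch.inference_mode():
    preds = model(x).argmax(dim=1)
print(preds.shape)  # torch.Size([32])
```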