Deep Learning Crash Course

by Giovanni Volpe, Benjamin Midtvedt, Jesús Pineda, Henrik Klein Moberg, Harshith Bachimanchi, Joana B. Pereira, Carlo Manzo
No Starch Press, San Francisco (CA), 2026
ISBN-13: 9781718503922
https://nostarch.com/deep-learning-crash-course


  1. Dense Neural Networks for Classification

  2. Dense Neural Networks for Regression

  3. Convolutional Neural Networks for Image Analysis

  4. Encoders–Decoders for Latent Space Manipulation

  5. U-Nets for Image Transformation
    Discusses U-Net architectures for image segmentation, cell counting, and various biomedical imaging applications.

  • Code 5-1: Segmenting Biological Tissue Images with a U-Net
    Demonstrates how to build and train a U-Net to segment internal cell structures (for example, mitochondria) in electron micrographs. It covers creating pipelines for raw images and labeled masks, using skip connections for detail retention, applying early stopping to avoid overfitting, and evaluating performance via the Jaccard Index (IoU). The notebook also demonstrates data augmentation to improve segmentation robustness.
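A minimal sketch of the Jaccard Index (IoU) metric used to evaluate the segmentation, assuming the prediction and ground truth are binary NumPy masks (function name and example arrays are illustrative, not from the notebook):

```python
import numpy as np

def jaccard_index(pred, target):
    """Intersection over union of two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # Two empty masks agree perfectly; avoid dividing by zero.
    return intersection / union if union > 0 else 1.0

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
target = np.array([[1, 0, 0],
                   [0, 1, 1]])
print(jaccard_index(pred, target))  # 2 overlapping pixels / 4 in union = 0.5
```

In practice the same computation is applied to the thresholded U-Net output and the labeled mask for each validation image.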

  • Code 5-A: Detecting Quantum Dots in Fluorescence Images with a U-Net
    Uses a U-Net to localize fluorescent quantum dots in noisy microscopy images. It simulates realistic training data with random positions, intensities, and added noise, pairing each image with a mask marking the quantum dot locations. After training on these simulations, the U-Net is tested on real experimental images, and the notebook shows how accurately it marks the quantum dots by generating centroid-based masks.
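The simulation step can be sketched as follows: Gaussian spots with random positions and intensities on a noisy background, paired with a centroid mask. All parameter values (image size, spot width, noise level) are illustrative assumptions, not the notebook's actual settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_dots(size=64, n_dots=5, sigma=1.5, noise=0.05):
    """Simulate a fluorescence image of quantum dots and its centroid mask."""
    image = np.zeros((size, size))
    mask = np.zeros((size, size))
    ys, xs = np.mgrid[0:size, 0:size]
    for _ in range(n_dots):
        y, x = rng.uniform(5, size - 5, 2)       # random dot position
        intensity = rng.uniform(0.5, 1.0)        # random brightness
        image += intensity * np.exp(
            -((ys - y) ** 2 + (xs - x) ** 2) / (2 * sigma ** 2)
        )
        mask[int(round(y)), int(round(x))] = 1   # mark the centroid pixel
    image += rng.normal(0, noise, image.shape)   # additive Gaussian noise
    return image, mask

image, mask = simulate_dots()
print(image.shape, int(mask.sum()))  # (64, 64) and the number of marked centroids
```

Generating many such image-mask pairs gives the U-Net unlimited labeled training data before it ever sees an experimental image.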

  • Code 5-B: Counting Cells with a U-Net
    Applies a U-Net to produce binary masks of cell nuclei, then uses connected-component labeling to count the nuclei in each mask. After simulating or loading real images of stained nuclei, the notebook trains a U-Net with a single-channel output using a binary cross-entropy loss. Accuracy is measured by comparing predicted cell counts with ground truth, reporting mean absolute and percentage errors. This pipeline automates cell counting and quantifies how close the predictions are to the actual counts.
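The counting and evaluation steps can be sketched with SciPy's connected-component labeling; the example mask and count arrays below are illustrative, not data from the notebook:

```python
import numpy as np
from scipy import ndimage

def count_cells(binary_mask):
    """Count nuclei as connected components of a binary mask."""
    _, n_components = ndimage.label(binary_mask)
    return n_components

# Toy binary mask with three separate blobs (nuclei).
mask = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 1, 1, 0],
])
print(count_cells(mask))  # 3

# Comparing predicted counts against ground truth.
predicted = np.array([12, 9, 15])
truth = np.array([11, 10, 15])
mae = np.abs(predicted - truth).mean()  # mean absolute error of 2/3
```

In the notebook, the binary mask comes from thresholding the U-Net output, and the same component count is compared against the known number of nuclei per image.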

  6. Self-Supervised Learning to Exploit Symmetries

  7. Recurrent Neural Networks for Timeseries Analysis

  8. Attention and Transformers for Sequence Processing

  9. Generative Adversarial Networks for Image Synthesis

  10. Diffusion Models for Data Representation and Exploration

  11. Graph Neural Networks for Relational Data Analysis

  12. Active Learning for Continuous Learning

  13. Reinforcement Learning for Strategy Optimization

  14. Reservoir Computing for Predicting Chaos