Module 5 workflow tool

Deep learning workflow

This page turns the hardhat detection notebook into a guided learning workflow. It helps you understand the dataset, labels, training process, and testing flow here in the browser, while the real YOLO training and detection still happen in Colab.

Open the original Colab

Workflow stages

See the full pipeline first

The underlying model and dataset are the same as in the notebook. The difference is that each stage is explained clearly before you ever need to look at the code.

Model introduction

This module uses YOLOv5, a deep learning object detection model. It does more than classify an image: it finds object locations and predicts labels such as helmet and head.

  • Input: a construction image
  • Output: bounding boxes with class labels
  • Goal: detect whether workers are wearing hard hats

Why YOLOv5 is used here

YOLOv5 is practical for teaching because it is widely used, supports pretrained weights, and makes it easier to show the full pipeline from annotation to training, validation, and testing.

  • Fast object detection workflow
  • Good fit for custom safety datasets
  • Produces metrics that are straightforward to explain

What makes deep learning different?

In traditional machine learning, people usually choose the features first. In deep learning, the model learns the useful visual features directly from many labeled images.

  • Traditional ML: hand-designed features, then prediction
  • Deep learning: raw image in, learned features inside the model
  • YOLO learns shapes, edges, textures, and object patterns during training

Follow one example through the workflow

  • Start with one construction image and its XML annotation
  • Convert the XML box into YOLO label text
  • Train YOLO on many labeled images like this
  • Save the learned model as best.pt or last.pt
  • Use the saved model to detect helmets in a new image
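The XML-to-YOLO conversion step above comes down to normalizing box coordinates. A minimal sketch of the arithmetic for one box (an illustrative helper, not the notebook's exact function):

```python
def voc_box_to_yolo(xmin, ymin, xmax, ymax, img_w, img_h, class_id):
    """Convert one Pascal VOC pixel box to a YOLO label line.

    YOLO format: class_id x_center y_center width height,
    all normalized to [0, 1] by the image size.
    """
    x_center = (xmin + xmax) / 2.0 / img_w
    y_center = (ymin + ymax) / 2.0 / img_h
    width = (xmax - xmin) / img_w
    height = (ymax - ymin) / img_h
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# A 100 x 200 pixel box centered in a 640 x 640 image:
print(voc_box_to_yolo(270, 220, 370, 420, 640, 640, class_id=0))
# → 0 0.500000 0.500000 0.156250 0.312500
```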

Stage 1. Download data

Use the Google Drive file ID to bring the dataset into Colab.
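A minimal sketch of this cell, assuming the gdown package available in Colab; YOUR_FILE_ID is a placeholder for the file ID you configure in the settings, not a real ID.

```python
def drive_download_url(file_id: str) -> str:
    """Build the direct-download URL form that gdown accepts for a Drive file ID."""
    return f"https://drive.google.com/uc?id={file_id}"

# In a Colab cell (YOUR_FILE_ID is a placeholder):
#   import gdown, zipfile
#   gdown.download(drive_download_url("YOUR_FILE_ID"), "dataset.zip", quiet=False)
#   with zipfile.ZipFile("dataset.zip") as zf:
#       zf.extractall("dataset")
```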

Stage 2. Organize files

Split images into training, validation, and testing folders.
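The split can be sketched as a shuffle plus slicing. This is a hedged example rather than the notebook's exact code; in the real Colab cell, the three lists would then be copied into train/, val/, and test/ folders with shutil.

```python
import random

def split_dataset(filenames, train_pct=80, val_pct=10, test_pct=10, seed=42):
    """Shuffle file names and split them by percentage (must sum to 100)."""
    assert train_pct + val_pct + test_pct == 100
    files = sorted(filenames)
    random.Random(seed).shuffle(files)  # fixed seed keeps the split reproducible
    n_train = len(files) * train_pct // 100
    n_val = len(files) * val_pct // 100
    return (files[:n_train],
            files[n_train:n_train + n_val],
            files[n_train + n_val:])

train, val, test = split_dataset([f"img_{i:03d}.png" for i in range(100)])
print(len(train), len(val), len(test))  # 80 10 10
```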

Stage 3. Convert labels

Change XML annotations into YOLO text labels.
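A sketch of the conversion logic using Python's standard xml.etree module. The class list is an assumption matching this module's labels; the notebook's own function may differ in details.

```python
import xml.etree.ElementTree as ET

CLASS_NAMES = ["helmet", "head"]  # order must match the YAML config

def xml_to_yolo_lines(xml_text):
    """Parse a Pascal VOC annotation string and return YOLO label lines."""
    root = ET.fromstring(xml_text)
    w = float(root.find("size/width").text)
    h = float(root.find("size/height").text)
    lines = []
    for obj in root.findall("object"):
        name = obj.find("name").text
        if name not in CLASS_NAMES:
            continue  # skip classes we are not training on
        b = obj.find("bndbox")
        xmin, ymin = float(b.find("xmin").text), float(b.find("ymin").text)
        xmax, ymax = float(b.find("xmax").text), float(b.find("ymax").text)
        cx, cy = (xmin + xmax) / 2 / w, (ymin + ymax) / 2 / h
        bw, bh = (xmax - xmin) / w, (ymax - ymin) / h
        lines.append(f"{CLASS_NAMES.index(name)} {cx:.6f} {cy:.6f} {bw:.6f} {bh:.6f}")
    return lines

sample = """<annotation>
  <size><width>640</width><height>480</height></size>
  <object><name>helmet</name>
    <bndbox><xmin>100</xmin><ymin>100</ymin><xmax>200</xmax><ymax>200</ymax></bndbox>
  </object>
</annotation>"""
print(xml_to_yolo_lines(sample))  # ['0 0.234375 0.312500 0.156250 0.208333']
```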

Stage 4. Train the model

Use a pretrained YOLO model to learn hardhat detection.
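A sketch of the command the generated training cell assembles. The flags shown (--img, --batch, --epochs, --data, --weights, --name) are standard YOLOv5 train.py options; the concrete values here are example settings, not fixed ones.

```python
# Example settings; in the real workflow these come from the settings panel.
epochs, batch_size, img_size = 50, 16, 640
experiment = "yolo_hardhat_exp"

train_cmd = (
    f"python train.py --img {img_size} --batch {batch_size} "
    f"--epochs {epochs} --data hardhat.yaml "
    f"--weights yolov5s.pt --name {experiment}"
)
print(train_cmd)

# In a Colab cell, after cloning the repo:
#   !git clone https://github.com/ultralytics/yolov5
#   %cd yolov5
#   !pip install -r requirements.txt
#   !{train_cmd}
```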

Stage 5. Validate performance

Monitor loss, precision, recall, and mAP during training.
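Precision and recall can be grounded with a tiny helper. This is illustrative arithmetic, not YOLO's internal metric code; real mAP additionally matches boxes by IoU and averages precision over confidence thresholds.

```python
def precision_recall(tp, fp, fn):
    """Precision: of the boxes predicted, how many were right.
    Recall: of the real objects, how many were found."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# 8 correct detections, 2 false alarms, 2 missed helmets:
p, r = precision_recall(tp=8, fp=2, fn=2)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.80 recall=0.80
```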

Stage 6. Test on new images

Run inference on unseen images and inspect predictions.

Important distinction

  • This page is for understanding the workflow visually.
  • The browser previews help explain labels, metrics, and predictions.
  • The real dataset download, YOLO training, validation, and testing still happen in Colab.

Data setup

Set up the dataset and workflow

This step prepares the dataset, chooses the split, and generates the Colab cells. The explanation stays visible, and the code is grouped into smaller blocks so it is easier to follow.

Workflow settings

These settings update the generated Colab cells below. If you change the file ID, split ratios, epochs, batch size, image size, or experiment name, the code blocks are rewritten automatically.

How to use this step
  • Google Drive file ID changes the dataset download cell.
  • Train / validation / test changes the dataset split code.
  • Class names change the YAML config and label conversion logic.
  • Epochs, batch size, image size change the YOLO training command.
  • Experiment name changes the output folder and saved run name.
  • Use GPU in Google Colab for faster training.
  • The three split values should add up to 100.
  • The generated cells can be pasted into a fresh notebook in order.
  • This step downloads the dataset, splits the images, converts XML to YOLO labels, and creates the YAML config.

Generated workflow

These code blocks update live when you change the workflow settings on the left.

Step 1. Download dataset

This cell downloads and unzips the dataset in Colab.

Step 2. Split data and convert annotations

This cell organizes files and converts XML labels into YOLO text labels.

Step 3. Create YOLO config

This cell writes the YAML file that tells YOLO where the data lives.
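A sketch of this config cell. The paths and class names are assumptions that mirror the folder layout and labels used in this module; adjust them if your split folders live elsewhere.

```python
# Paths and class names here mirror this module's assumed layout.
yaml_text = """\
train: dataset/train/images
val: dataset/val/images
test: dataset/test/images

nc: 2
names: ['helmet', 'head']
"""

with open("hardhat.yaml", "w") as f:
    f.write(yaml_text)
print(open("hardhat.yaml").read())
```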

Step 4. Install YOLOv5 and train

This cell installs YOLOv5 and starts model training.

Step 5. Validate and test

This cell checks model quality and runs inference on unseen images.

Interactive preview

Explore the workflow visually

The main workflow uses Google Drive and Colab. This section is an optional visual aid for understanding dataset structure, annotations, labels, and the train-validation-test split.

Preview inputs

Primary workflow

  • The dataset is downloaded into Colab from Google Drive as part of the workflow configuration step.
  • Use the folder tree below only if you want a local visual preview of the dataset structure.
  • Use the sample image and XML preview to understand how annotations become YOLO labels.

Files: 0 · Images: 0 · Labels/XML: 0

Folder tree

Optionally upload a folder to generate a dataset tree.

No folder loaded yet.

Train 80% · Validation 10% · Test 10%

Split preview

The dataset will be divided into train, validation, and test subsets.

Preview output

Upload a sample image to preview the workflow visually.

Detected classes

  • Upload an annotation file to inspect object labels.

Annotation summary

  • No annotation loaded yet.

XML objects

Upload an annotation file to preview object names and coordinates.

YOLO labels

Upload both an image and XML file to generate YOLO labels.

Model training

Understand training, validation, and testing

This final step explains what the model is doing while it learns, what the main metrics mean, and how the training curves usually behave over time.

Data structure

dataset/
  images/
  annotations/
  train/images
  train/labels
  val/images
  val/labels
  test/images
  test/labels

What validation means

Validation checks model quality during training. It helps us compare learning progress without using the final test images.

What testing means

Testing happens after training. It uses unseen images so you can judge how well the model generalizes.

Training process

What happens during model training

During training, the model repeatedly compares its predictions to the labeled training data, calculates error, updates its weights, and then checks progress on the validation set.

A. Load a batch: a small group of training images and labels is loaded into the model.

B. Predict: the model predicts object boxes, labels, and confidence values.

C. Calculate loss: the prediction is compared with the true annotation to measure error.

D. Update weights: the optimizer changes model parameters to reduce error next time.

E. Validate: the model is checked on validation data to monitor generalization.

F. Repeat by epoch: this cycle repeats over many epochs until performance improves enough.
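This cycle can be compressed into a toy loop. This is not YOLO; it fits a single weight with gradient descent, purely to show the load, predict, loss, update rhythm.

```python
# Toy version of the training cycle: fit y = w * x with gradient descent.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # stand-ins for images and labels
w = 0.0  # one model weight, starting untrained

for epoch in range(100):          # F. repeat by epoch
    for x, y in data:             # A. load a batch (batch size 1 here)
        pred = w * x              # B. predict
        loss = (pred - y) ** 2    # C. calculate loss
        grad = 2 * (pred - y) * x
        w -= 0.05 * grad          # D. update weights
    # E. in real training, validation metrics would be checked here

print(round(w, 3))  # converges to 2.0, the true weight
```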

What is an epoch?

Batch 1 → Batch 2 → Batch 3 → Batch 4 = one full pass through the training set

One epoch means the model has seen the whole training dataset once. If training uses 10 epochs, the model goes through the training data 10 times.

What is batch size?

Small group of images loaded together

Batch size tells YOLO how many images to process at one time before updating the model weights. Larger batches use more memory; smaller batches are lighter but may train more slowly.
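Epochs and batch size together determine how many weight updates happen. A quick worked example, using hypothetical dataset numbers:

```python
import math

def steps_per_epoch(num_images, batch_size):
    """How many weight updates one epoch takes (last batch may be partial)."""
    return math.ceil(num_images / batch_size)

# 5000 training images with batch size 16:
print(steps_per_epoch(5000, 16))  # 313 updates per epoch
# Over 10 epochs the model performs 10 * 313 = 3130 updates.
```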

What is image size?

640 x 640 vs. 320 x 320: the input image resolution used during training

Image size is the resolution YOLO uses during training and testing. Larger sizes may preserve more detail, but they also require more time and memory.

Training set

Used directly to teach the model from labeled examples.

Validation set

Used during training to check whether the model is improving without overfitting.

Test set

Used only after training to estimate final performance on unseen data.

After training

What files you get after training finishes

When training ends, YOLOv5 saves the learned model and the training history. These output files are what you use later for validation, testing, and inference on new images.

Saved files after training

best.pt

This is the most important output file. It stores the version of the model that performed best on the validation set during training.

  • Use this file for testing
  • Use this file for detection on new images
  • Usually the main file to keep

last.pt

This stores the model from the final training epoch. It is useful if you want to resume training or compare the final epoch to the best validation checkpoint.

  • Represents the final training state
  • Useful for continuing training
  • Not always the best-performing file

Training plots and logs

YOLOv5 also saves charts and metrics such as loss, precision, recall, and mAP so you can inspect how learning changed across epochs.

  • Helps explain model improvement
  • Shows overfitting or stability
  • Supports class discussion and reporting

Typical output folder after training

runs/train/yolo_hardhat_exp/
  weights/best.pt
  weights/last.pt
  results.png
  results.csv
  confusion_matrix.png

The exact folder name depends on the experiment name you choose during training.

How weights learn from labeled images

Real annotation example from Module 5

Construction crew image annotated with helmet and head bounding boxes
This uses the real example image and XML boxes from Module 5. The model learns from many labeled examples like this during training.

How this connects to weights

Every time the model sees labeled examples like this, it adjusts its internal weights so it becomes better at recognizing the visual patterns linked to helmet and head.

  • The image provides the visual pattern
  • The box provides the object location
  • The class label provides the meaning
  • The learned weights store what the model extracts from many such examples

What are model weights?

Model weights are the learned numbers inside the neural network. During training, YOLO keeps adjusting them so important visual patterns like helmet shape and head boundaries matter more, while unhelpful background details matter less.

Example weights (simplified diagram): a model stores many learned numbers such as 0.92, 0.71, 0.84, 0.12, 0.66. Strong weights capture useful cues like the helmet curve, head boundary, and object shape, while weak weights correspond to background noise and unhelpful pixels. The saved outputs best.pt and last.pt store these weights for testing and detection.
Simplified view: real models contain millions of learned values, but the idea is the same. Larger useful weights strengthen important visual cues, while weaker weights reduce less helpful ones.

How one labeled image helps training

Using the real Module 5 image, we can imagine the model assigning stronger learned weight values to useful clues and weaker values to less useful ones.

  • 0.92 · Helmet edge and curve: the model learns that rounded helmet boundaries are highly useful for the helmet class.
  • 0.84 · Head shape near shoulders: the model learns that head position and shape help separate head from background clutter.
  • 0.66 · Worker posture and nearby context: these patterns can help, but usually matter less than the direct object shape.
  • 0.12 · Background pixels: unhelpful background texture should get much weaker importance after training.

These values are illustrative, not extracted from the actual YOLOv5 checkpoint. They are shown here to help connect real image features to the idea of learned weights.

Before training

The model starts with generic pretrained knowledge, but it is not yet specialized for this hardhat dataset.

During training

The weights are adjusted after each batch so predictions move closer to the labeled answers.

After training

The saved weight files contain the learned version of the model that can now be reused for validation, testing, and detection.

Mock training dashboard

Run a browser-side demo that simulates the training curves from your current settings.

Demo only

  • The charts below are simulated so you can learn what training curves usually look like.
  • The real loss, precision, recall, and mAP values come from the Colab training run.
  • Use this section to understand the pattern, not as the actual model result.

Epoch: 0 · mAP@0.5: 0.00 · Precision: 0.00 · Recall: 0.00 · Train loss: 0.00 · Val loss: 0.00

Click Run dashboard to simulate a training session with the current settings.

Loss curves

Validation metrics

Testing

Preview the testing workflow on a new image

Demo only: this browser preview illustrates what testing on unseen images looks like, but it does not run the trained YOLO model. Upload a new image to see example boxes and confidence scores here, then use the Colab testing cell to generate the real detection results.

Testing input

What this step shows

  • Demo only: the webpage draws example detections for teaching the workflow, not real model predictions.
  • Testing uses images the model did not see during training.
  • The browser demo simulates a prediction view so the testing stage is easier to understand.
  • The real trained model still runs in Colab when you execute the testing cell.
  • Upload a new image, then run the detection demo.

Testing output

Upload a new image to begin testing.