**Windows:**

```shell
# Create virtual environment
python -m venv .venv
# Activate virtual environment
.venv\Scripts\activate
```

**Linux/macOS:**

```shell
# Create virtual environment
python3 -m venv .venv
# Activate virtual environment
source .venv/bin/activate
```
```shell
# Stage 1: data preparation only
python src/run_pipeline.py stage1 --prepare_data
# Stage 1: model training only
python src/run_pipeline.py stage1 --train_model
# Stage 1: full pipeline (data preparation + training)
python src/run_pipeline.py stage1

# Stage 2: data preparation only
python src/run_pipeline.py stage2 --prepare_data
# Stage 2: model training only
python src/run_pipeline.py stage2 --train_model
# Stage 2: full pipeline (data preparation + training)
python src/run_pipeline.py stage2

# Run inference for each stage
python src/run_pipeline.py stage1_inference
python src/run_pipeline.py stage2_inference

# Evaluate stage 1 only
python src/run_inference.py evaluation --evaluate_stage1
# Evaluate stage 2 only
python src/run_inference.py evaluation --evaluate_stage2
# Evaluate both stages
python src/run_inference.py evaluation
```
```
data/
├── raw/
│   ├── images/               # Place your P&ID images here
│   └── labels_class_aware/   # Place corresponding ground truth labels here
```
The repository includes a sample dataset from the Dataset-P&ID collection: 5 P&ID images and their corresponding labels, included for demonstration purposes. The dataset originates from the associated research paper.
To use your own P&ID images:
- Place your images in `data/raw/images/`
- Place corresponding ground truth labels in `data/raw/labels_class_aware/`
- Ensure labels follow the same format as the sample dataset
- Update the `class_aware_class_names` variable in `config.yaml` accordingly
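For reference, the `class_aware_class_names` entry in `config.yaml` would take a form like the following; the class names shown here are purely illustrative, not the repository's actual list:

```yaml
# config.yaml (excerpt) -- illustrative class names only
class_aware_class_names:
  - ball_valve
  - check_valve
  - centrifugal_pump
  - flow_instrument
```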
Class-agnostic object detection with one-shot label transfer is found to be:
- More generalizable across different underlying P&ID drawing styles
- More robust to class imbalance than equivalent class-aware counterparts
This step breaks each large P&ID sheet into overlapping patches.
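Overlapping patch coordinates can be computed as in the following minimal sketch; the `patch_size` and `overlap` defaults are illustrative, not the repository's actual settings:

```python
def patch_coords(width, height, patch_size=640, overlap=0.2):
    """Compute top-left corners of overlapping square patches covering an image.

    Patches are laid out on a grid with stride patch_size * (1 - overlap);
    an extra row/column is added so the right and bottom edges stay covered.
    """
    stride = int(patch_size * (1 - overlap))
    xs = list(range(0, max(width - patch_size, 0) + 1, stride))
    ys = list(range(0, max(height - patch_size, 0) + 1, stride))
    if xs[-1] + patch_size < width:
        xs.append(width - patch_size)
    if ys[-1] + patch_size < height:
        ys.append(height - patch_size)
    return [(x, y) for y in ys for x in xs]
```

Each `(x, y)` pair is the top-left corner of one patch to crop from the sheet.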
In addition, class-aware labels are converted to class-agnostic labels in preparation for training a YOLO object detection model.
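In YOLO-format labels (`class x_center y_center width height`, one box per line), this conversion amounts to collapsing every class id to a single id. A minimal sketch of that step, assuming plain YOLO text labels:

```python
def to_class_agnostic(label_lines):
    """Map every class id in YOLO-format label lines to the single class 0."""
    out = []
    for line in label_lines:
        parts = line.split()
        if not parts:
            continue  # skip blank lines
        parts[0] = "0"  # collapse all symbol classes into one 'generic' class
        out.append(" ".join(parts))
    return out
```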
Trains a 'generic' (class-agnostic) symbol detector.
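For a single-class detector, an Ultralytics-style dataset config would look roughly like the fragment below; the paths and the class name are illustrative assumptions, not the repository's actual files:

```yaml
# dataset.yaml (illustrative) -- one generic symbol class
path: data/processed
train: images/train
val: images/val
nc: 1
names: ["symbol"]
```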
For large P&IDs, inference runs on smaller patches and the results are combined (implemented via SAHI).
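SAHI performs the slicing and result merging internally; to illustrate what the merging step does, here is a minimal greedy non-maximum-suppression sketch over boxes gathered from all patches (not the library's actual implementation):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def merge_detections(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep highest-scoring boxes, drop overlapping duplicates.

    Duplicates arise where neighbouring patches overlap and both detect
    the same symbol. Returns the kept indices.
    """
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep
```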
Trains a model using one labeled image per symbol class (e.g. from the P&ID legend). The model can be a Siamese network, a prototypical (zero-shot) network, or a traditional classifier trained on augmented images.
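The decision rule in prototypical-style one-shot classification can be sketched as follows: each class gets a prototype embedding from its single labeled example, and a detected symbol is assigned to the class whose prototype is most similar. The embedding network itself is omitted here; function names are illustrative:

```python
import numpy as np

def classify_by_prototype(embedding, prototypes):
    """Assign the class whose prototype has the highest cosine similarity.

    `prototypes` maps class name -> embedding vector computed from the
    single labeled example (e.g. the legend symbol) of that class.
    """
    def cos(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    return max(prototypes, key=lambda name: cos(embedding, prototypes[name]))
```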
If you use this package in your work, please cite it as:
```bibtex
@article{GUPTA2024105260,
  title   = {Semi-supervised symbol detection for piping and instrumentation drawings},
  journal = {Automation in Construction},
  volume  = {159},
  pages   = {105260},
  year    = {2024},
  issn    = {0926-5805},
  doi     = {10.1016/j.autcon.2023.105260},
  url     = {https://www.sciencedirect.com/science/article/pii/S0926580523005204},
  author  = {Mohit Gupta and Chialing Wei and Thomas Czerniawski},
}
```