This was a project we designed for the Georgia Tech Hacklytics 2025 Hackathon.
Our current stack includes the following models:
Model | Origin | Link | Purpose and Details |
---|---|---|---|
GPT-4o | OpenAI | OpenAI | |
GPT-4o mini | OpenAI | OpenAI | |
StableDesign | GitHub & Replicate API | | Comprised of multiple other layered models |
YOLOv8m | Ultralytics | GitHub & Docs | |
Segment Anything Model (SAM) ViT-H | Meta Research | GitHub | |
During earlier stages of development, or in designs we intended but were unable to implement in time, we also used or planned to use the following models in addition to the current stack:
Model | Origin | Link | Purpose and Details |
---|---|---|---|
Contrastive Language-Image Pre-Training (CLIP) Commit dcba3cb | OpenAI | GitHub | |
OmniGen | VectorSpaceLab | GitHub & Replicate & Hugging Face | |
StableDiffusion | | | |
We recommend setting up a conda or `venv` environment to run this.
Our pip requirements include various packages for API calls and model downloads (such as `openai`, `CLIP`, and `replicate`).
```shell
pip install -r requirements.txt
```
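The environment setup above can be sketched end to end as follows. This is a `venv` example; the `.venv` directory name is our own choice here, and conda users would run `conda create`/`conda activate` instead:

```shell
# Create and activate an isolated environment for the project.
python3 -m venv .venv
. .venv/bin/activate

# Install the pinned dependencies (openai, CLIP, replicate, etc.).
if [ -f requirements.txt ]; then
    pip install -r requirements.txt
fi
```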
Chrome for Testing is used in our content matching phase for product matching purposes.
Download Page: https://googlechromelabs.github.io/chrome-for-testing/#stable
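At its simplest, product matching comes down to ranking scraped product names against a detected item's label. The toy token-overlap ranker below is purely illustrative (the function name, listings, and 0.5 cutoff are our own assumptions, not the actual pipeline, which drives the Chrome for Testing browser):

```python
def rank_products(query, product_names, min_overlap=0.5):
    """Rank product names by the fraction of query tokens they contain.

    Illustrative sketch: `min_overlap` is an arbitrary cutoff, not a tuned value.
    """
    query_tokens = set(query.lower().split())
    scored = []
    for name in product_names:
        name_tokens = set(name.lower().split())
        overlap = len(query_tokens & name_tokens) / len(query_tokens)
        if overlap >= min_overlap:
            scored.append((overlap, name))
    # Best-matching listings first.
    return [name for _, name in sorted(scored, reverse=True)]

# Hypothetical scraped listings:
listings = ["Mid-Century Walnut Coffee Table", "Velvet Accent Chair", "Ceramic Table Lamp"]
print(rank_products("coffee table", listings))
```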
Ultralytics' You Only Look Once 8m (YOLOv8m) and Meta (Facebook) Research's Segment Anything Model (SAM) are required for our content extraction phase. They are used to identify furniture and decor items within our generated design plans.
Model Name | Direct Download Link | Docs and Details |
---|---|---|
YOLOv8m | Ultralytics Direct Download | Ultralytics YOLOv8m Docs |
ViT-H SAM | Meta's Direct Download | Facebook Research's GitHub SAM Docs |
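As a rough sketch of the extraction step, the detections coming out of YOLOv8m can be filtered down to furniture and decor classes before their boxes are handed to SAM for masking. Everything below (the class subset, the confidence cutoff, the tuple layout) is a hypothetical illustration, not our exact pipeline:

```python
# COCO class names that correspond to furniture/decor items (illustrative subset).
FURNITURE_CLASSES = {"chair", "couch", "bed", "dining table", "potted plant", "vase", "clock"}

def filter_furniture(detections, keep=FURNITURE_CLASSES, min_conf=0.25):
    """Keep only detections whose class is a furniture/decor item.

    `detections` is a list of (class_name, confidence, xyxy_box) tuples,
    e.g. as unpacked from a detector's results.
    """
    return [
        (name, conf, box)
        for name, conf, box in detections
        if name in keep and conf >= min_conf
    ]

detections = [
    ("couch", 0.91, (10, 40, 300, 220)),
    ("person", 0.88, (120, 10, 180, 200)),   # not furniture, dropped
    ("vase", 0.12, (200, 50, 220, 90)),      # below cutoff, dropped
]
print(filter_furniture(detections))
```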
```shell
streamlit run app.py
```