added train/test set functionality, random search #18

Status: Open. Wants to merge 7 commits into base branch `param_search2`.
README.md: 11 additions & 0 deletions
@@ -4,10 +4,21 @@ Project website: https://socraticmodels.github.io/

**Abstract** Large pretrained (e.g., “foundation”) models exhibit distinct capabilities depending on the domain of data they are trained on. While these domains are generic, they may only barely overlap. For example, visual-language models (VLMs) are trained on Internet-scale image captions, but large language models (LMs) are further trained on Internet-scale text with no images (e.g., spreadsheets, SAT questions, code). As a result, these models store different forms of commonsense knowledge across different domains. In this work, we show that this diversity is symbiotic, and can be leveraged through Socratic Models (SMs): a modular framework in which multiple pretrained models may be composed zero-shot i.e., via multimodal-informed prompting, to exchange information with each other and capture new multimodal capabilities, without requiring finetuning. With minimal engineering, SMs are not only competitive with state-of-the-art zero-shot image captioning and video-to-text retrieval, but also enable new applications such as (i) answering free-form questions about egocentric video, (ii) engaging in multimodal assistive dialogue with people (e.g., for cooking recipes) by interfacing with external APIs and databases (e.g., web search), and (iii) robot perception and planning.


## Install
To install the environment, run:

`conda env create -f environment.yml`

`conda activate socratic`

`python -m spacy download en`

## Instructions

This repository provides scripts that combine CLIP with FLAN-T5 prompting, as well as self-contained Colab notebooks with prototype implementations of Socratic Models for various applications.
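
As a rough illustration of how the CLIP-to-FLAN-T5 step fits together, the sketch below uses plain Hugging Face `transformers` calls rather than the repository's own manager classes; the model names, candidate vocabulary, and prompt wording are illustrative assumptions only.

```python
# Minimal sketch of the CLIP -> FLAN-T5 captioning idea (illustrative only;
# the repository's scripts wrap this logic in their own manager classes).
import torch
from PIL import Image
from transformers import (AutoTokenizer, CLIPModel, CLIPProcessor,
                          T5ForConditionalGeneration)

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
t5_tok = AutoTokenizer.from_pretrained("google/flan-t5-large")
t5 = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large")

image = Image.open("example.jpg")  # hypothetical input image
candidate_places = ["kitchen", "beach", "office", "street"]

# Rank the candidate place words by CLIP image-text similarity.
inputs = clip_proc(text=candidate_places, images=image,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    scores = clip(**inputs).logits_per_image[0]
best_place = candidate_places[int(scores.argmax())]

# Feed the CLIP-derived visual context to FLAN-T5 as a text prompt.
prompt = f"This photo was taken at a {best_place}. Describe the photo in one sentence:"
out = t5.generate(**t5_tok(prompt, return_tensors="pt"), max_new_tokens=40)
print(t5_tok.decode(out[0], skip_special_tokens=True))
```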

## Colab notebooks

Two Colab notebooks are provided: the [T5 prompting pipeline](https://colab.research.google.com/drive/1o-q4QQYfdYIXq10e3BctO2h_980aLt3t#scrollTo=29352228) and [CLIP embedding space visualisations](https://colab.research.google.com/drive/1PG6BXF-I89mvqAjl17Ms2WSAvQUI_w8d#scrollTo=bkwvovEQQz5H).

scripts/coco_captioning_baseline.py: 1 addition & 1 deletion
@@ -163,7 +163,7 @@ def main(num_images=50, num_captions=10, lm_temperature=0.9, lm_max_length=40, l
# img_type_dic[img_name], num_people_dic[img_name], location_dic[img_name], obj_list_dic[img_name]
# )

- prompt_dic[img_name] = pg.create_baseline_lm_prompt2(
+ prompt_dic[img_name] = pg.create_socratic_original_prompt(
img_type_dic[img_name], num_people_dic[img_name], location_dic[img_name][:num_places],
obj_list_dic[img_name][:num_objects]
)
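
For context on what the renamed helper produces: the snippet below is a hypothetical sketch of a Socratic-style caption prompt builder in the spirit of the original paper; the actual `pg.create_socratic_original_prompt` may word the prompt or take its arguments differently.

```python
# Hypothetical sketch of a Socratic-style caption prompt builder; the real
# pg.create_socratic_original_prompt may use different wording or arguments.
def create_socratic_original_prompt(img_type, num_people, places, objects):
    # num_people is assumed to be a phrase such as "are two people".
    return (
        f"I am an intelligent image captioning bot. "
        f"This image is a {img_type}. There {num_people}. "
        f"I think this photo was taken at a {' or a '.join(places)}. "
        f"I think there might be a {', '.join(objects)} in this photo. "
        f"A creative short caption I can generate to describe this image is:"
    )
```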
scripts/coco_captioning_gpt.py: 7 additions & 4 deletions
@@ -26,7 +26,10 @@


@print_time_dec
- def main(num_images=100, num_captions=10, lm_temperature=0.9, random_seed=42):
+ def main(
+     num_images=100, num_captions=10, lm_temperature=0.9, random_seed=42,
+     set_type='train'
+ ):

"""
1. Set up
@@ -74,7 +77,7 @@ def main(num_images=100, num_captions=10, lm_temperature=0.9, random_seed=42):
"""

# Randomly select images from the COCO dataset
- img_files = coco_manager.get_random_image_paths(num_images=num_images)
+ img_files = coco_manager.get_random_image_paths(num_images=num_images, set_type=set_type)

# Create dictionaries to store the images features
img_dic = {}
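
The new `set_type` argument indicates that `CocoManager.get_random_image_paths` now samples from a train or a test partition of the COCO images. The helper below is a hypothetical sketch of how such a deterministic split-and-sample step could work; the repository's actual implementation may differ.

```python
import random

def get_random_image_paths(all_img_paths, num_images, set_type='train',
                           train_ratio=0.8, random_seed=42):
    """Deterministically split the image paths, then sample from one split.

    Hypothetical helper; the repository's CocoManager method may differ.
    """
    rng = random.Random(random_seed)
    paths = sorted(all_img_paths)
    rng.shuffle(paths)                       # same order for a given seed
    cut = int(len(paths) * train_ratio)
    split = paths[:cut] if set_type == 'train' else paths[cut:]
    return rng.sample(split, min(num_images, len(split)))
```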
@@ -127,7 +130,7 @@ def main(num_images=100, num_captions=10, lm_temperature=0.9, random_seed=42):
location_dic = {}
for img_name, img_feat in img_feat_dic.items():
sorted_places, places_scores = clip_manager.get_nn_text(vocab_manager.place_list, place_emb, img_feat)
- location_dic[img_name] = sorted_places[0]
+ location_dic[img_name] = sorted_places

# Classify image objects
obj_topk = 10
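
`get_nn_text` ranks a text vocabulary by similarity to an image feature, which is why the change above can keep the full ranked place list instead of only the top match. A cosine-similarity sketch of that ranking, assuming L2-normalised CLIP embeddings (the actual `ClipManager.get_nn_text` may differ), is:

```python
import numpy as np

def get_nn_text(texts, text_embs, img_feat):
    """Rank texts by similarity between their embeddings and an image feature.

    Hypothetical sketch: assumes text_embs has shape (len(texts), dim) and that
    all embeddings are L2-normalised, so the dot product equals cosine similarity.
    """
    scores = text_embs @ img_feat.reshape(-1)   # one similarity score per text
    order = np.argsort(scores)[::-1]            # highest similarity first
    return [texts[i] for i in order], scores[order]
```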
@@ -156,7 +159,7 @@ def main(num_images=100, num_captions=10, lm_temperature=0.9, random_seed=42):

for img_name in img_dic:
# Create the prompt for the language model
- prompt_dic[img_name] = pg.create_baseline_lm_prompt(
+ prompt_dic[img_name] = pg.create_socratic_original_prompt(
img_type_dic[img_name], num_people_dic[img_name], location_dic[img_name], obj_list_dic[img_name]
)
