* corrected broken links and broken references
* Update setup.cfg
- bump to stable v1 and v0.1.1
* Update pyproject.toml
* Update version.py
* Corrected typo. Added config yamls in setup
* Removed config files that are no longer needed
* changed workflow to pull the repo from git
* Added comments to remind people to pay attention to the data folder in the demo notebooks
* fixed pypi typo
* Fixed a bug in create_project. Changed default use_vlm to False. Updated demo notebooks
* removed WIP 3d keypoints
* Fixed one more
* WIP
* enforcing the use of create_project in demo notebooks and modified the test
* 3D supported. Better tests. More flexible identifier
* black and isort
* added dlc to test requirement
* Made test use stronger gpt. Added dummy video
* easier superanimal test
* Better 3D prompt and fixed self-debug
* preventing infinite loop
* better prompt for 3D
* better prompt for 3D
* better prompt
* updates
* fixed serialization
* Extension to support animation. Made self-debugging work with larger outputs. Allowed skipping code execution when parsing results
* better interpolation and corrected x,y,z convention
* incorporated suggestions
* add a test plot keypoint label
* Fixed a bug. Changed hardcoded path to relative path in notebooks
* updated vlm prompt to be more robust
* deleted y axis inversion prompt
* Added animation support and added animation in horse demo
* edited readme
---------
Co-authored-by: Mackenzie Mathis <[email protected]>
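The changelog above mentions "Fixed a bug in create_project. Changed default use_vlm to False" and "enforcing the use of create_project in demo notebooks". As a purely hypothetical sketch of what that default flip means for callers (the real `create_project` signature and module path are not shown in this page, so everything below is an assumption, not the actual API):

```python
# Hypothetical stand-in for the create_project entry point named in the
# changelog. Signature and return value are assumptions for illustration;
# the only facts taken from the source are the function name and that
# use_vlm now defaults to False.
def create_project(data_folder, result_folder, use_vlm=False):
    """Sketch: bundle project settings, with the VLM disabled by default."""
    return {
        "data_folder": data_folder,
        "result_folder": result_folder,
        "use_vlm": use_vlm,  # changelog: default changed to False
    }

config = create_project("examples/EPM", "results/EPM")
assert config["use_vlm"] is False  # new default per the changelog
```

Callers who relied on the old behavior would now pass `use_vlm=True` explicitly.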
README.md (+3 −1)
```diff
@@ -73,7 +73,7 @@ You can git clone (or download) this repo to grab a copy and go. We provide exam
 ### Here are a few demos that could fuel your own work, so please check them out!

 1)[Draw a region of interest (ROI) and ask, "when is the animal in the ROI?"](https://github.com/AdaptiveMotorControlLab/AmadeusGPT/tree/main/notebooks/EPM_demo.ipynb)
-2)[Use a DeepLabCut SuperAnimal pose model to do video inference](https://github.com/AdaptiveMotorControlLab/AmadeusGPT/tree/main/notebooks/custom_mouse_demo.ipynb) - (make sure you use a GPU if you don't have corresponding DeepLabCut keypoint files already!
+2)[Use your own data](https://github.com/AdaptiveMotorControlLab/AmadeusGPT/tree/main/notebooks/YourData.ipynb) - (make sure you use a GPU to run SuperAnimal if you don't have corresponding DeepLabCut keypoint files already!
 3)[Write you own integration modules and use them](https://github.com/AdaptiveMotorControlLab/AmadeusGPT/tree/main/notebooks/Horse_demo.ipynb). Bonus: [source code](amadeusgpt/integration_modules). Make sure you delete the cached modules_embedding.pickle if you add new modules!
 4)[Multi-Animal social interactions](https://github.com/AdaptiveMotorControlLab/AmadeusGPT/tree/main/notebooks/MABe_demo.ipynb)
 5)[Reuse the task program generated by LLM and run it on different videos](https://github.com/AdaptiveMotorControlLab/AmadeusGPT/tree/main/notebooks/MABe_demo.ipynb)
```
```diff
@@ -126,6 +126,8 @@ the key dependencies that need installed are:
 pip install notebook
 conda install hdf5
 conda install pytables==3.8
+
+# pip install deeplabcut==3.0.0rc4 if you want to use SuperAnimal on your own videos
```
```diff
 4) Make sure you do not import any libraries in your code. All needed libraries are imported already.
 5) Make sure you disintuigh positional and keyword arguments when you call functions in api docs
 6) If you are writing code that uses matplotlib to plot, make sure you comment shape of the data to be plotted to double-check
-7) if your plotting code plots coordinates of keypoints, make sure you invert y axis (only during plotting) so that the plot is consistent with the image
-8) make sure the xlim and ylim covers the whole image. The image (h,w) is ({image_h},{image_w})
-9) Do not define your own objects (including grid objects). Only use objects that are given to you.
-10) You MUST use the index from get_keypoint_names to access the keypoint data of specific keyponit names. Do not assume the order of the bodypart.
-11) You MUST call functions in api docs on the analysis object.
-12) For api functions that require min_window and max_window, make sure you leave them as default values unless you are asked to change them.
+7) make sure the xlim and ylim covers the whole image. The image (h,w) is ({image_h},{image_w})
+8) Do not define your own objects (including grid objects). Only use objects that are given to you.
+9) You MUST use the index from get_keypoint_names to access the keypoint data of specific keyponit names. Do not assume the order of the bodypart.
+10) You MUST call functions in api docs on the analysis object.
+11) For api functions that require min_window and max_window, make sure you leave them as default values unless you are asked to change them.
+12) When making plots of keypoints of making animation about keypoints, try to overlap the plots with the scene frame if feasible.
+

 HOW TO AVOID BUGS:
 You should always comment the shape of the any numpy array you are working with to avoid bugs. YOU MUST DO IT.
```
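The rules above constrain the code the LLM generates: index keypoints via `get_keypoint_names`, cover the full image with axis limits, and comment array shapes. A minimal illustrative sketch of those rules follows, using dummy data; only `get_keypoint_names` is named in the prompt, so the array layout, the image size, and the keypoint names here are all assumptions for illustration:

```python
# Illustrative sketch of the prompt's plotting rules, on dummy data.
# Assumed: keypoints arrive as (n_frames, n_keypoints, 2) in pixel
# coordinates; image size and names are placeholders.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

image_h, image_w = 480, 640  # placeholder for ({image_h},{image_w})

keypoint_names = ["nose", "tail_base"]  # stand-in for get_keypoint_names()
# keypoints shape: (100, 2, 2) -- rule: always comment array shapes
keypoints = np.random.rand(100, len(keypoint_names), 2)
keypoints[..., 0] *= image_w  # x in pixels
keypoints[..., 1] *= image_h  # y in pixels

# Rule: look up the index by name, never assume bodypart order.
idx = keypoint_names.index("nose")

fig, ax = plt.subplots()
ax.scatter(keypoints[:, idx, 0], keypoints[:, idx, 1], s=4)

# Rule 7 (new numbering): xlim/ylim must cover the whole image.
ax.set_xlim(0, image_w)
ax.set_ylim(0, image_h)
# The removed rule 7 required inverting y during plotting so the plot
# matches image coordinates (origin at top-left); shown for reference.
ax.invert_yaxis()
```

Under the updated prompt, overlaying such plots on the scene frame (rule 12) would replace the explicit y-axis inversion, since drawing on top of the image fixes the orientation for free.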
amadeusgpt/system_prompts/visual_llm.py (+1 −1)
````diff
@@ -11,7 +11,7 @@ def _get_system_prompt():
 ```
 The "description" has high level description of the image.
 The "individuals" indicates the number of animals in the image
-The "species" indicates the species of the animals in the image. You can only choose from one of "topview_mouse", "sideview_quadruped" or "others".
+The "species" indicates the species of the animals in the image. You can only choose from one of "topview_mouse", "sideview_quadruped" or "others". Note all quadruped animals should be considered as sideview_quadruped.
 The "background_objects" is a list of background objects in the image.
 Explain your answers before you fill the answers. Make sure you only return one json string.
````
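The prompt above asks the visual LLM for a single JSON string with four keys. As a hedged sketch of what such a reply might look like and how a caller could validate it (the key names and the three allowed species labels come from the prompt; the example values and the validation code are invented for illustration):

```python
# Sketch of the JSON reply the visual_llm prompt requests.
# Keys and allowed species labels are from the prompt; values are made up.
import json

vlm_reply = json.dumps({
    "description": "A mouse viewed from above in an open-field arena.",
    "individuals": 1,
    "species": "topview_mouse",  # one of the three allowed labels
    "background_objects": ["arena wall", "bedding"],
})

parsed = json.loads(vlm_reply)
# Per the edited prompt, any quadruped maps to "sideview_quadruped".
assert parsed["species"] in ("topview_mouse", "sideview_quadruped", "others")
```

The "only return one json string" instruction matters because `json.loads` fails on replies that wrap the JSON in extra prose.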