
Commit

Merge pull request #6 from nateraw/package-try-2
Fix packaging issues
nateraw authored Sep 7, 2022
2 parents 7d349cf + 4af0cc2 commit e2ddeab
Showing 9 changed files with 357 additions and 54 deletions.
31 changes: 31 additions & 0 deletions .github/workflows/python-publish.yml
@@ -0,0 +1,31 @@
# This workflow will upload a Python Package using Twine when a release is created
# For more information see: https://help.github.com/en/actions/language-and-framework-guides/using-python-with-github-actions#publishing-to-package-registries

name: Upload Python Package

on:
  release:
    types: [created]

jobs:
  deploy:

    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.x'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install setuptools wheel twine
      - name: Build and publish
        env:
          TWINE_USERNAME: ${{ secrets.PYPI_USERNAME }}
          TWINE_PASSWORD: ${{ secrets.PYPI_PASSWORD }}
        run: |
          python setup.py sdist bdist_wheel
          twine upload dist/*
52 changes: 27 additions & 25 deletions README.md
@@ -26,40 +26,42 @@ The app is built with [Gradio](https://gradio.app/), which allows you to interac
- Set the `num_walk_steps` - for testing you can use a small number like 3 or 5, but to get great results you'll want to use something larger (60-200 steps).
- You can (and should) use the `name` input to separate out where the images/videos are saved. (Note that currently ffmpeg will not overwrite if you already made a video with the same name. You'll have to use ffmpeg to create the video yourself if the app fails to do so.)

-### The Script
+### Python Package

#### Setup

Install the package

```
-git clone https://github.com/nateraw/stable-diffusion-videos
-cd stable-diffusion-videos
-pip install -r requirements.txt
+pip install stable_diffusion_videos
```

-#### Usage
+Authenticate with Hugging Face

-If you would prefer to use the `stable_diffusion_walk.py` script directly, you can do so by running:
-
-Run with `num_steps` set to 3 or 5 for testing, then up it to something like 60-200 for better results.
-
-```bash
-python stable_diffusion_walk.py \
-    --prompts "['a cat', 'a dog', 'a horse']" \
-    --seeds 903,123,42 \
-    --output_dir dreams \
-    --name animals_test \
-    --guidance_scale 8.5 \
-    --num_steps 5 \
-    --height 512 \
-    --width 512 \
-    --num_inference_steps 50 \
-    --scheduler klms \
-    --disable_tqdm \
-    --make_video \
-    --use_lerp_for_text \
-    --do_loop
+```
+huggingface-cli login
```
+
+#### Usage
+
+```python
+from stable_diffusion_videos import walk
+
+walk(
+    prompts=['a cat', 'a dog'],
+    seeds=[42, 1337],
+    output_dir='dreams',      # Where images/videos will be saved
+    name='animals_test',      # Subdirectory of output_dir where images/videos will be saved
+    guidance_scale=8.5,       # Higher adheres to prompt more, lower lets model take the wheel
+    num_steps=5,              # Change to 60-200 for better results...3-5 for testing
+    num_inference_steps=50,
+    scheduler='klms',         # One of: "klms", "default", "ddim"
+    disable_tqdm=False,       # Set to True to disable tqdm progress bar
+    make_video=True,          # If false, just save images
+    use_lerp_for_text=True,   # Use lerp for text embeddings instead of slerp
+    do_loop=False,            # Change to True if you want last prompt to loop back to first prompt
+)
+```
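The `use_lerp_for_text` option above chooses between linear interpolation (lerp) and spherical linear interpolation (slerp) when walking between embeddings. As a hedged aside (not code from this repository), here is a minimal NumPy sketch of the two interpolation schemes; `lerp` and `slerp` are illustrative helper names:

```python
import numpy as np

def lerp(a, b, t):
    # Linear interpolation: straight-line path between a and b.
    return (1 - t) * a + t * b

def slerp(a, b, t):
    # Spherical interpolation: constant-speed path along the arc between
    # a and b, which better preserves vector magnitude mid-walk.
    a_n = a / np.linalg.norm(a)
    b_n = b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if np.isclose(np.sin(omega), 0.0):
        return lerp(a, b, t)  # vectors (anti)parallel; fall back to lerp
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
print(lerp(a, b, 0.5))   # midpoint of the chord: [0.5 0.5]
print(slerp(a, b, 0.5))  # midpoint of the arc, norm stays 1
```

The difference matters for latent walks: lerp passes through the interior of the hypersphere (lower-norm latents), while slerp stays on the arc.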

## Credits

2 changes: 1 addition & 1 deletion requirements.txt
@@ -1,5 +1,5 @@
transformers
-git+https://github.com/huggingface/diffusers@f085d2f5c6569a1c0d90327c51328622036ef76e
+diffusers==0.2.4
scipy
fire
gradio
28 changes: 28 additions & 0 deletions setup.py
@@ -0,0 +1,28 @@
from setuptools import find_packages, setup


def get_version() -> str:
    rel_path = "stable_diffusion_videos/__init__.py"
    with open(rel_path, "r") as fp:
        for line in fp.read().splitlines():
            if line.startswith("__version__"):
                delim = '"' if '"' in line else "'"
                return line.split(delim)[1]
    raise RuntimeError("Unable to find version string.")


with open("requirements.txt", "r") as f:
    requirements = f.read().splitlines()

setup(
    name="stable_diffusion_videos",
    version=get_version(),
    author="Nathan Raw",
    author_email="[email protected]",
    description=(
        "Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts."
    ),
    license="Apache",
    install_requires=requirements,
    packages=find_packages(),
)
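The `get_version` helper above scans the package `__init__.py` for a `__version__` assignment rather than importing the package at build time. A self-contained sketch of that same parsing logic, applied to an in-memory string (the sample contents and the `parse_version` name are illustrative, not part of the repository):

```python
# Illustrative sample of what a package __init__.py might contain.
SAMPLE_INIT = "\n".join([
    "from .app import interface",
    '__version__ = "0.1.0"',
])

def parse_version(text: str) -> str:
    # Same logic as get_version() in setup.py, but applied to a string
    # instead of reading stable_diffusion_videos/__init__.py from disk.
    for line in text.splitlines():
        if line.startswith("__version__"):
            delim = '"' if '"' in line else "'"  # handle either quote style
            return line.split(delim)[1]
    raise RuntimeError("Unable to find version string.")

print(parse_version(SAMPLE_INIT))  # → 0.1.0
```

Reading the version from the file avoids importing the package (and its heavy dependencies) inside `setup.py`.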
149 changes: 128 additions & 21 deletions stable_diffusion_videos.ipynb
@@ -5,7 +5,7 @@
"colab": {
"provenance": [],
"collapsed_sections": [],
-"authorship_tag": "ABX9TyN/ZOFCUNqBdfOYeo31y+2Q",
+"authorship_tag": "ABX9TyMDsciHN/HhWLYEdURcy00d",
"include_colab_link": true
},
"kernelspec": {
@@ -26,7 +26,7 @@
"colab_type": "text"
},
"source": [
-"<a href=\"https://colab.research.google.com/github/nateraw/stable-diffusion-videos/blob/main/stable_diffusion_videos.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+"<a href=\"https://colab.research.google.com/github/nateraw/stable-diffusion-videos/blob/package-try-2/stable_diffusion_videos.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
@@ -47,7 +47,7 @@
"Enjoy 🤗"
],
"metadata": {
-"id": "B4V57sVzLHu3"
+"id": "z4GhhH25OdYq"
}
},
{
@@ -56,21 +56,19 @@
"## Setup"
],
"metadata": {
-"id": "0L9zhmONL81f"
+"id": "dvdCBpWWOhW-"
}
},
{
"cell_type": "code",
-"execution_count": 1,
+"execution_count": null,
"metadata": {
-"id": "klUR9ie1DVm-"
+"id": "Xwfc0ej1L9A0"
},
"outputs": [],
"source": [
"%%capture\n",
-"! git clone https://github.com/nateraw/stable-diffusion-videos\n",
-"%cd /content/stable-diffusion-videos/\n",
-"! pip install -r requirements.txt\n",
+"# ! pip install stable_diffusion_videos\n",
+"! pip install git+https://github.com/nateraw/stable-diffusion-videos@package-try-2\n",
"! git config --global credential.helper store"
]
},
@@ -82,7 +80,7 @@
"You have to be a registered user in 🤗 Hugging Face Hub, and you'll also need to use an access token for the code to work. For more information on access tokens, please refer to [this section of the documentation](https://huggingface.co/docs/hub/security-tokens)."
],
"metadata": {
-"id": "BoTBdktZDs8B"
+"id": "dR5iVGYbOky5"
}
},
{
Expand All @@ -93,7 +91,7 @@
"notebook_login()"
],
"metadata": {
-"id": "8jVV1OLBDZ8o"
+"id": "GmejIGhFMTXG"
},
"execution_count": null,
"outputs": []
@@ -104,7 +102,17 @@
"## Run the App 🚀"
],
"metadata": {
-"id": "TbW39aWzIdsn"
+"id": "H7UOKJhVOonb"
}
},
{
"cell_type": "markdown",
"source": [
"### Optional - Connect Google Drive\n",
"\n"
],
"metadata": {
"id": "GjfrKeeR2NQZ"
}
},
{
Expand All @@ -115,19 +123,43 @@
"This step will take a couple minutes the first time you run it."
],
"metadata": {
-"id": "jCWuRT78Jt0L"
+"id": "g71hslP8OntM"
}
},
{
"cell_type": "code",
"source": [
-"# in case you restarted runtime, we need to be in this directory\n",
-"%cd /content/stable-diffusion-videos/\n",
+"from stable_diffusion_videos import interface"
],
"metadata": {
"id": "bgSNS368L-DV"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title Connect to Google Drive to Save Outputs\n",
"\n",
"#@markdown If you want to connect Google Drive, click the checkbox below and run this cell. You'll be prompted to authenticate.\n",
"\n",
"#@markdown If you just want to save your outputs in this Colab session, don't worry about this cell\n",
"\n",
-"from app import interface"
"connect_google_drive = True #@param {type:\"boolean\"}\n",
"\n",
"#@markdown Then, in the interface, use this path as the `output` in the Video tab to save your videos to Google Drive:\n",
"\n",
"#@markdown > /content/gdrive/MyDrive/stable_diffusion_videos\n",
"\n",
"\n",
"if connect_google_drive:\n",
" from google.colab import drive\n",
"\n",
" drive.mount('/content/gdrive')"
],
"metadata": {
-"id": "a6Eey_-YDvFc"
+"id": "kidtsR3c2P9Z"
},
"execution_count": null,
"outputs": []
@@ -147,10 +179,12 @@
"2. Generate videos using the \"Videos\" tab\n",
" - Using the images you found from the step above, provide the prompts/seeds you recorded\n",
" - Set the `num_walk_steps` - for testing you can use a small number like 3 or 5, but to get great results you'll want to use something larger (60-200 steps). \n",
-" - You can (and should) use the `name` input to separate out where the images/videos are saved. "
+" - You can (and should) use the `name` input to separate out where the images/videos are saved. \n",
+"\n",
+"💡 **Pro tip** - Click the link that looks like `https://<id-number>.gradio.app` below, and you'll be able to view it in full screen."
],
"metadata": {
-"id": "Po9vuzMnJzka"
+"id": "VxjRVNnMOtgU"
}
},
{
Expand All @@ -159,7 +193,80 @@
"interface.launch()"
],
"metadata": {
-"id": "fflAEZaLIYGP"
+"id": "8es3_onUOL3J"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"---"
],
"metadata": {
"id": "mFCoTvlnPi4u"
}
},
{
"cell_type": "markdown",
"source": [
"## Use `walk` programmatically\n",
"\n",
"The other option is to not use the interface, and instead use `walk` programmatically. Here's how you would do that..."
],
"metadata": {
"id": "SjTQLCiLOWeo"
}
},
{
"cell_type": "markdown",
"source": [
"First we define a helper function for visualizing videos in Colab"
],
"metadata": {
"id": "fGQPClGwOR9R"
}
},
{
"cell_type": "code",
"source": [
"from IPython.display import HTML\n",
"from base64 import b64encode\n",
"\n",
"def visualize_video_colab(video_path):\n",
" mp4 = open(video_path,'rb').read()\n",
" data_url = \"data:video/mp4;base64,\" + b64encode(mp4).decode()\n",
" return HTML(\"\"\"\n",
" <video width=400 controls>\n",
" <source src=\"%s\" type=\"video/mp4\">\n",
" </video>\n",
" \"\"\" % data_url)"
],
"metadata": {
"id": "GqTWc8ZhNeLU"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Walk! 🚶‍♀️"
],
"metadata": {
"id": "Vd_RzwkoPM7X"
}
},
{
"cell_type": "code",
"source": [
"from stable_diffusion_videos import walk\n",
"\n",
"video_path = walk(['a cat', 'a dog'], [42, 1337], num_steps=3, make_video=True)\n",
"visualize_video_colab(video_path)"
],
"metadata": {
"id": "Hv2wBZXXMQ-I"
},
"execution_count": null,
"outputs": []
