The interface has forty-one sub-tabs (some with their own sub-tabs) across seven main tabs (Text, Image, Video, 3D, Audio, Extras and Interface): LLM, TTS-STT, MMS, SeamlessM4Tv2, LibreTranslate, StableDiffusion, Kandinsky, Flux, HunyuanDiT, Lumina-T2X, Kolors, AuraFlow, Würstchen, DeepFloydIF, PixArt, PlaygroundV2.5, Wav2Lip, LivePortrait, ModelScope, ZeroScope 2, CogVideoX, Latte, StableFast3D, Shap-E, SV34D, Zero123Plus, StableAudio, AudioCraft, AudioLDM 2, SunoBark, RVC, UVR, Demucs, Upscale (Real-ESRGAN), FaceSwap, MetaData-Info, Wiki, Gallery, ModelDownloader, Settings and System. Select the one you need and follow the instructions below.
- First upload your models to the folder: inputs/text/llm_models
- Select your model from the drop-down list
- Select model type (`transformers` or `llama`)
- Set up the model according to the parameters you need
- Type (or speak) your request
- Click the `Submit` button to receive the generated text and audio response
Optional: you can enable `TTS` mode and select the voice and language needed to receive an audio response. You can enable `multimodal` and upload an image to get its description. You can enable `websearch` for Internet access. You can enable `libretranslate` to get a translation. You can also choose a `LORA` model to improve generation
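For the first step above, here is a minimal sketch of the expected inputs/text/llm_models layout (the MyModel names are hypothetical; a `transformers` model is usually a folder containing config, tokenizer and weight files, while a `llama` model is typically a single GGUF file):

```
inputs/text/llm_models/
├── MyModel/                  # hypothetical transformers model folder
│   ├── config.json
│   ├── tokenizer.json
│   └── model.safetensors
└── MyModel-q4.gguf           # hypothetical llama (GGUF) model file
```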
- Type text for text to speech
- Input audio for speech to text
- Click the `Submit` button to receive the generated text and audio response
- Type text for text to speech
- Input audio for speech to text
- Click the `Submit` button to receive the generated text or audio response
- Type (or speak) your request
- Select source, target and dataset languages
- Set up the model according to the parameters you need
- Click the `Submit` button to get the translation
- First you need to install and run LibreTranslate
- Select source and target languages
- Click the `Submit` button to get the translation
- First upload your models to the folder: inputs/image/sd_models
- Select your model from the drop-down list
- Select model type (`SD`, `SD2` or `SDXL`)
- Set up the model according to the parameters you need
- Enter your request (+ and - for prompt weighting)
- Click the `Submit` button to get the generated image
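As an illustration of the + and - prompt weighting mentioned above, a hypothetical request might look like this (appending + strengthens a term and - weakens it; the exact weighting grammar may differ in your version):

```
a cozy cabin in a snowy forest++, warm light+, film grain-
```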
- First upload your models to the folder: inputs/image/sd_models
- Select your model from the drop-down list
- Select model type (`SD`, `SD2` or `SDXL`)
- Set up the model according to the parameters you need
- Upload the initial image with which the generation will take place
- Enter your request (+ and - for prompt weighting)
- Click the `Submit` button to get the generated image
- Upload the initial image
- Set up the model according to the parameters you need
- Enter your request (+ and - for prompt weighting)
- Click the `Submit` button to get the generated image
- Upload the initial image
- Set up the model according to the parameters you need
- Enter your request (+ and - for prompt weighting)
- Click the `Submit` button to get the generated image
- First upload your stable diffusion models to the folder: inputs/image/sd_models
- Upload the initial image
- Select your stable diffusion and controlnet models from the drop-down lists
- Set up the models according to the parameters you need
- Enter your request (+ and - for prompt weighting)
- Click the `Submit` button to get the generated image
- Upload the initial image
- Select your model
- Set up the model according to the parameters you need
- Click the `Submit` button to get the upscaled image
- Upload the initial image
- Click the `Submit` button to get the refined image
- First upload your models to the folder: inputs/image/sd_models/inpaint
- Select your model from the drop-down list
- Select model type (`SD`, `SD2` or `SDXL`)
- Set up the model according to the parameters you need
- Upload the image with which the generation will take place to `initial image` and `mask image`
- In `mask image`, select the brush, then the palette, and change the color to `#FFFFFF`
- Draw a place for generation and enter your request (+ and - for prompt weighting)
- Click the `Submit` button to get the inpainted image
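A sketch of where inpainting checkpoints live relative to the regular ones (the file names are hypothetical):

```
inputs/image/sd_models/
├── MyModel.safetensors              # regular SD / SD2 / SDXL checkpoints
└── inpaint/
    └── MyInpaintModel.safetensors   # inpainting checkpoints go here
```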
- First upload your models to the folder: inputs/image/sd_models/inpaint
- Select your model from the drop-down list
- Select model type (`SD`, `SD2` or `SDXL`)
- Set up the model according to the parameters you need
- Upload the image with which the generation will take place to `initial image`
- Enter your request (+ and - for prompt weighting)
- Click the `Submit` button to get the outpainted image
- First upload your models to the folder: inputs/image/sd_models
- Select your model from the drop-down list
- Select model type (`SD`, `SD2` or `SDXL`)
- Set up the model according to the parameters you need
- Enter your request as the prompt (+ and - for prompt weighting) and the GLIGEN phrases (in "" for each box)
- Enter the GLIGEN boxes (e.g. [0.1387, 0.2051, 0.4277, 0.7090] for one box)
- Click the `Submit` button to get the generated image
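Putting the two inputs together, a hypothetical GLIGEN request could look like this (the phrases and the second box are made-up examples; the coordinates appear to be normalized [xmin, ymin, xmax, ymax] values):

```
Prompt:  a city street at night
Phrases: "a red car" "a street lamp"
Boxes:   [0.1387, 0.2051, 0.4277, 0.7090] [0.6, 0.1, 0.9, 0.8]
```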
- First upload your models to the folder: inputs/image/sd_models
- Select your model from the drop-down list
- Set up the model according to the parameters you need
- Enter your request (+ and - for prompt weighting)
- Click the `Submit` button to get the generated image animation
- Enter your request
- Set up the model according to the parameters you need
- Click the `Submit` button to get the generated GIF
- Upload the initial image
- Select your model
- Enter your request (for I2VGen-XL)
- Set up the model according to the parameters you need
- Click the `Submit` button to get the video from the image
- Enter your request
- Set up the model according to the parameters you need
- Click the `Submit` button to get the generated images
- Enter your request
- Set up the model according to the parameters you need
- Click the `Submit` button to get the generated image
- Enter your request
- Set up the model according to the parameters you need
- Click the `Submit` button to get the generated image
- Upload the initial image
- Select the options you need
- Click the `Submit` button to get the modified image
- Upload the initial image
- Select the options you need
- Click the `Submit` button to get the modified image
- text-to-image:
  - Enter your request
  - Set up the model according to the parameters you need
  - Click the `Submit` button to get the generated image
- image-to-audio:
  - Upload the initial image
  - Select the options you need
  - Click the `Submit` button to get the audio from image
- audio-to-image:
  - Upload the initial audio
  - Select the options you need
  - Click the `Submit` button to get the image from audio
- Enter your prompt
- Select a model from the drop-down list
- Set up the model according to the parameters you need
- Click `Submit` to get the generated image
- Enter your prompt
- Select your model
- Set up the model according to the parameters you need
- Click `Submit` to get the generated image
Optional: you can select your `lora` models to improve generation. You can also use quantized models by clicking the `Enable quantize` button if you have low VRAM, but you need to download the models yourself: FLUX.1-dev or FLUX.1-schnell, as well as the VAE, CLIP and T5XXL
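For reference, a quantized FLUX setup usually involves files like the following (these names follow the common Hugging Face releases and are an assumption, not something this wiki specifies):

```
flux1-dev.safetensors     # or flux1-schnell.safetensors - the FLUX checkpoint
ae.safetensors            # VAE
clip_l.safetensors        # CLIP text encoder
t5xxl_fp16.safetensors    # T5XXL text encoder
```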
- Enter your prompt
- Set up the model according to the parameters you need
- Click `Submit` to get the generated image
- Enter your prompt
- Set up the model according to the parameters you need
- Click `Submit` to get the generated image
- Enter your prompt
- Set up the model according to the parameters you need
- Click `Submit` to get the generated image
- Enter your prompt
- Set up the model according to the parameters you need
- Click `Submit` to get the generated image
- Enter your prompt
- Set up the model according to the parameters you need
- Click `Submit` to get the generated image
- Enter your prompt
- Set up the model according to the parameters you need
- Click `Submit` to get the generated image
- Enter your prompt
- Select your model
- Set up the model according to the parameters you need
- Click `Submit` to get the generated image
- Enter your prompt
- Set up the model according to the parameters you need
- Click `Submit` to get the generated image
- Upload the initial image of a face
- Upload the initial audio of a voice
- Set up the model according to the parameters you need
- Click the `Submit` button to receive the lip-sync result
- Upload the initial image of a face
- Upload the initial video of the face moving
- Click the `Submit` button to receive the animated image of the face
- Enter your prompt
- Set up the model according to the parameters you need
- Click `Submit` to get the generated video
- Enter your prompt
- Set up the model according to the parameters you need
- Click `Submit` to get the generated video
- Enter your prompt
- Set up the model according to the parameters you need
- Click `Submit` to get the generated video
- Enter your prompt
- Set up the model according to the parameters you need
- Click `Submit` to get the generated video
- Upload the initial image
- Set up the model according to the parameters you need
- Click the `Submit` button to get the generated 3D object
- Enter your request or upload the initial image
- Set up the model according to the parameters you need
- Click the `Submit` button to get the generated 3D object
- Upload the initial image (for 3D) or video (for 4D)
- Set up the model according to the parameters you need
- Click the `Submit` button to get the generated 3D video
- Upload the initial image
- Set up the model according to the parameters you need
- Click the `Submit` button to get the generated 3D rotation of the image
- Set up the model according to the parameters you need
- Enter your request
- Click the `Submit` button to get the generated audio
- Select a model from the drop-down list
- Select model type (`musicgen`, `audiogen` or `magnet`)
- Set up the model according to the parameters you need
- Enter your request
- (Optional) upload the initial audio if you are using the `melody` model
- Click the `Submit` button to get the generated audio
- Select a model from the drop-down list
- Set up the model according to the parameters you need
- Enter your request
- Click the `Submit` button to get the generated audio
- Type your request
- Set up the model according to the parameters you need
- Click the `Submit` button to receive the generated audio response
- First upload your models to the folder: inputs/audio/rvc_models
- Upload the initial audio
- Select your model from the drop-down list
- Set up the model according to the parameters you need
- Click the `Submit` button to receive the cloned voice
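A sketch of how the inputs/audio/rvc_models folder commonly looks (the MyVoice names are hypothetical; RVC voices usually ship as a .pth weight file plus an optional .index retrieval file):

```
inputs/audio/rvc_models/
└── MyVoice/                  # hypothetical voice model folder
    ├── MyVoice.pth           # voice model weights
    └── added_MyVoice.index   # optional retrieval index
```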
- Upload the initial audio to separate
- Click the `Submit` button to get the separated audio
- Upload the initial audio to separate
- Click the `Submit` button to get the separated audio
- Upload the initial file
- Select the options you need
- Click the `Submit` button to get the modified file
- Upload the initial image
- Select your model
- Set up the model according to the parameters you need
- Click the `Submit` button to get the upscaled image
- Upload the source image of a face
- Upload the target image or video of a face
- Select the options you need
- Click the `Submit` button to get the face-swapped image
- Upload a generated file
- Click the `Submit` button to get metadata info from the file
- Here you can view the online or offline wiki of the project
- Here you can view files from the outputs directory
- Here you can download `LLM` and `StableDiffusion` models. Just choose the model from the drop-down list and click the `Submit` button
- Here you can change the application settings
- Here you can see the indicators of your computer's sensors
- All generations are saved in the outputs folder. You can open the outputs folder using the `Outputs` button
- You can turn off the application using the `Close terminal` button
- LLM models can be taken from HuggingFace or from the ModelDownloader inside the interface
- StableDiffusion, vae, inpaint, embedding and lora models can be taken from CivitAI or from the ModelDownloader inside the interface
- RVC models can be taken from VoiceModels
- StableAudio, AudioCraft, AudioLDM 2, TTS, Whisper, MMS, SeamlessM4Tv2, Wav2Lip, LivePortrait, SunoBark, MoonDream2, Upscalers (Latent and Real-ESRGAN), Refiner, GLIGEN, Depth, Pix2Pix, Controlnet, AnimateDiff, HotShot-XL, Videos, LDM3D, SD3, Cascade, T2I-IP-ADAPTER, IP-Adapter-FaceID, Riffusion, Rembg, Roop, CodeFormer, DDColor, PixelOE, Real-ESRGAN, StableFast3D, Shap-E, SV34D, Zero123Plus, UVR, Demucs, Kandinsky, Flux, HunyuanDiT, Lumina-T2X, Kolors, AuraFlow, AuraSR, Würstchen, DeepFloydIF, PixArt, PlaygroundV2.5, ModelScope, ZeroScope 2, CogVideoX, Latte and Multiband diffusion models are downloaded automatically into the inputs folder when they are used
- You can take voices from anywhere: record your own, take a recording from the Internet, or just use the ones already included in the project. The main thing is that the audio is pre-processed!