Eden


You were in Eden, the garden of God. Every kind of precious stone adorned you: ruby, topaz, and diamond, beryl, onyx, and jasper, sapphire, turquoise, and emerald. Your mountings and settings were crafted in gold, prepared on the day of your creation.

Ezekiel 28:13

Eden helps you deploy your AI art pipelines (or sometimes other stuff) as hosted endpoints, with support for multiple GPUs and scaling across multiple machines. If you're new here, check out the examples.

pip install eden-python

Setting up a block

Hosting with eden requires minimal changes to your existing code. Each unit within eden is called a Block: it takes certain inputs and generates art accordingly.

The first step is to create a Block and configure its run() function.

from eden.block import Block
from eden.datatypes import Image

eden_block = Block()

run() is the function that runs every time someone wants to use this pipeline to generate art. For now, it supports text, images, and numbers as inputs.

my_args = {
    'prompt': 'let there be light', ## text
    'number': 12345,                ## numbers
    'input_image': Image()          ## images require eden.datatypes.Image()
}

@eden_block.run(args = my_args)
def do_something(config):

    pil_image = config['input_image']
    some_number = config['number']

    return {
        'text': 'hello world',       ## returning text
        'number': some_number,       ## returning numbers
        'image': Image(pil_image)    ## Image() works on PIL.Image, numpy.array, and .jpg or .png files (str)
    }

Hosting a block

from eden.hosting import host_block

host_block(
    block = eden_block,
    port = 5656,
    logfile = 'logs.log',
    log_level = 'info',
    max_num_workers = 5
)
  • block (eden.block.Block): The eden block you'd like to host.
  • port (int, optional): Localhost port where the block will be hosted. Defaults to 8080.
  • host (str): Address where the endpoint will be hosted. Defaults to '0.0.0.0'.
  • max_num_workers (int, optional): Maximum number of tasks to run in parallel. Defaults to 4.
  • redis_port (int, optional): Port number for celery's redis server. Defaults to 6379.
  • redis_host (str, optional): Host of the redis server used by eden.queue.QueueData. Defaults to "localhost".
  • requires_gpu (bool, optional): Set this to False if your tasks don't necessarily need GPUs.
  • log_level (str, optional): Can be 'debug', 'info', or 'warning'. Defaults to 'warning'.
  • exclude_gpu_ids (list, optional): List of GPU IDs not to use for hosting, e.g. [2, 3]. Defaults to [].
  • logfile (str, optional): Name of the file where the logs will be stored. If set to None, all logs are shown on stdout. Defaults to 'logs.log'.
  • queue_name (str, optional): Name of the celery queue used for the block. Useful when hosting multiple blocks with the same redis. Defaults to celery's default queue name.
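For example, a CPU-only block sharing a Redis instance with other blocks could be hosted roughly like this (a sketch; the port, queue name, and worker count are illustrative values, not defaults):

from eden.hosting import host_block

host_block(
    block = eden_block,
    port = 5657,
    max_num_workers = 2,
    requires_gpu = False,        ## these tasks run fine on CPU
    redis_host = 'localhost',
    redis_port = 6379,
    queue_name = 'cpu_block',    ## separate celery queue on the shared redis
    log_level = 'debug',
    logfile = None               ## None sends all logs to stdout
)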

Client

A Client is the unit that sends requests to a hosted block.

from eden.client import Client
from eden.datatypes import Image

c = Client(url = 'http://127.0.0.1:5656', username = 'abraham')

After you start a task with run() as shown below, it returns a token as run_response['token']. Use this token later to check the task status or to fetch your results.

Note: Image() is compatible with the following types: PIL.Image, numpy.array, and filenames (str) ending with .jpg or .png
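For illustration, all three accepted types can be wrapped the same way (a minimal sketch; the filename and array contents are placeholders):

import numpy as np
from PIL import Image as PILImage
from eden.datatypes import Image

Image('your_image.png')                         ## .jpg or .png filename
Image(PILImage.open('your_image.png'))          ## PIL.Image
Image(np.zeros((64, 64, 3), dtype = np.uint8))  ## numpy array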

config = {
    'prompt': 'let there be light',
    'number': 2233,
    'input_image': Image('your_image.png')  ## Image() supports jpg, png filenames, np.array or PIL.Image
}

run_response = c.run(config)

Fetching results or checking the task status with the token is done using fetch().

results = c.fetch(token = run_response['token'])
print(results)
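If the task is still running, you can poll fetch() until it finishes. The loop below is a sketch only: it assumes the response dictionary carries a 'status' key with values like 'queued' and 'running', which may not match your version of eden.

import time

while True:
    results = c.fetch(token = run_response['token'])
    status = results.get('status')      ## assumed key, see note above
    if status not in ('queued', 'running'):
        break
    time.sleep(1)                       ## avoid hammering the server

print(results)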

You can also get the commit ID and the repo name of your hosted eden_block with the following snippet:

generator_id = c.get_generator_identity()
print(generator_id) ## {"name": repo_name, "commit": commit_sha}

Examples

  • Hosting a Resnet18 inference endpoint with eden: server + client
  • A very (very) minimal example which is good for starting out on eden: server + client
  • Working with intermediate results: server + client

Prometheus metrics out of the box

Eden exposes the following internal metrics via Prometheus at /metrics:

  • num_queued_jobs: Specifies the number of queued jobs
  • num_running_jobs: Specifies the number of running jobs
  • num_failed_jobs: Specifies the number of failed jobs
  • num_succeeded_jobs: Specifies the number of succeeded jobs
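As a quick check, you can scrape the metrics endpoint directly. The sketch below assumes the block from the hosting example is running and that /metrics is served on the same port (5656):

import requests

response = requests.get('http://127.0.0.1:5656/metrics')  ## prometheus plain-text exposition format
for line in response.text.splitlines():
    if line.startswith('num_'):  ## the eden job counters listed above
        print(line)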

Development

Setup

git clone [email protected]:abraham-ai/eden.git
cd eden
python3 setup.py develop

Compile dependencies with pip-compile (this generates a requirements.txt file). You will need pip-tools installed for this to work (pip install pip-tools).

pip-compile requirements.in

You also have to install Redis on your machine:

sudo apt-get install redis-server
sudo service redis-server start

Optionally, if you want to stop Redis after you're done, you can run:

sudo service redis-server stop

Running tests on your local machine can be done with:

sh test_local.sh
