Images (Experimental)#

Learn how to generate images with Xinference.

Introduction#

The Images API provides two methods for interacting with images:

  • The Text-to-image endpoint creates images from scratch based on a text prompt.

  • The Image-to-image endpoint allows you to generate a variation of a given image.

API                   OpenAI-compatible ENDPOINT

Text-to-Image API     /v1/images/generations

Image-to-image API    /v1/images/variations

Supported models#

The Text-to-image API is supported with the following models in Xinference:

  • sd-turbo

  • sdxl-turbo

  • stable-diffusion-v1.5

  • stable-diffusion-xl-base-1.0

Quickstart#

Text-to-image#

The Text-to-image API mimics OpenAI’s create images API. You can try the Text-to-image API out via cURL, the OpenAI client, or Xinference’s Python client:

curl -X 'POST' \
  'http://<XINFERENCE_HOST>:<XINFERENCE_PORT>/v1/images/generations' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "<MODEL_UID>",
    "prompt": "an apple"
  }'
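
The same request can be issued from Python. The sketch below builds the POST request for /v1/images/generations with only the standard library; the host `127.0.0.1`, port `9997`, and model UID are example placeholders that you should replace with your own deployment's values.

```python
# Sketch: call Xinference's OpenAI-compatible Text-to-image endpoint.
# Host, port, and model UID below are placeholder values.
import json
import urllib.request

def build_request(host: str, port: int, model_uid: str, prompt: str) -> urllib.request.Request:
    """Build the POST request for /v1/images/generations."""
    body = json.dumps({"model": model_uid, "prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        f"http://{host}:{port}/v1/images/generations",
        data=body,
        headers={"accept": "application/json", "Content-Type": "application/json"},
        method="POST",
    )

req = build_request("127.0.0.1", 9997, "<MODEL_UID>", "an apple")
print(req.full_url)

# With a running Xinference server, send the request like this:
# with urllib.request.urlopen(req) as resp:
#     result = json.load(resp)
```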

Image-to-image#
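
As a sketch of calling the Image-to-image endpoint, the snippet below assembles the pieces of a /v1/images/variations request. It assumes the endpoint accepts multipart/form-data with an `image` file and a text `prompt`; the host, port, model UID, and input file name are placeholders.

```python
# Sketch: request pieces for the Image-to-image (variations) endpoint.
# Assumes a multipart/form-data request with an `image` file and a `prompt`;
# all concrete values below are placeholders.

def variation_request(host: str, port: int, model_uid: str, prompt: str):
    """Return the endpoint URL and form fields for /v1/images/variations."""
    url = f"http://{host}:{port}/v1/images/variations"
    fields = {"model": model_uid, "prompt": prompt}
    return url, fields

url, fields = variation_request("127.0.0.1", 9997, "<MODEL_UID>", "an apple")
print(url)

# With a running Xinference server and the `requests` library installed:
# import requests
# with open("input.png", "rb") as f:
#     resp = requests.post(url, data=fields, files={"image": f})
# print(resp.json())
```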

You can find more examples of the Images API in the tutorial notebook:

Stable Diffusion ControlNet

Learn from a Stable Diffusion ControlNet example