https://huggingface.co/docs/diffusers/using-diffusers/unconditional_image_generation
1.Unconditional image generation
Unconditional image generation is a relatively simple task. The model generates images without any additional context such as text or an image, and the generated images resemble the data it was trained on.
from diffusers import DiffusionPipeline

generator = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128", use_safetensors=True)
generator.to("cuda")
image = generator().images[0]
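As a minimal sketch (the batch size, step count, and output filename below are illustrative assumptions, not part of the original notes), the same call accepts a few optional arguments and returns PIL images that can be saved directly:

# Sketch: batch generation with an explicit step count; the values are assumptions.
images = generator(batch_size=4, num_inference_steps=1000).images  # DDPM's default schedule is 1000 steps
images[0].save("butterfly.png")  # each element is a PIL.Image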
2.Conditional image generation
Conditional image generation lets you generate an image from a text prompt. The text is converted into embeddings, which are used to condition the model as it generates an image from noise.
from diffusers import DiffusionPipeline

generator = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True)
generator.to("cuda")
image = generator("An image of a squirrel in Picasso style").images[0]
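For reproducible or more tightly controlled results, the call also takes optional arguments; the seed, negative prompt, and values below are illustrative assumptions rather than part of the original notes:

import torch

# Sketch: fix the seed and tune guidance; all values here are assumed examples.
seed_generator = torch.Generator("cuda").manual_seed(0)
image = generator(
    "An image of a squirrel in Picasso style",
    negative_prompt="blurry, low quality",  # steer the sampler away from unwanted traits
    guidance_scale=7.5,                     # classifier-free guidance strength (pipeline default)
    num_inference_steps=50,
    generator=seed_generator,
).images[0]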
3.Text-guided image-to-image generation
StableDiffusionImg2ImgPipeline takes a text prompt and an initial image and uses both to condition the generation of a new image.
import torch
import requests
from PIL import Image
from io import BytesIO
from diffusers import StableDiffusionImg2ImgPipeline

device = "cuda"
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "nitrosocke/Ghibli-Diffusion", torch_dtype=torch.float16, use_safetensors=True
).to(device)

url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image.thumbnail((768, 768))

prompt = "ghibli style, a fantasy landscape with castles"
generator = torch.Generator(device=device).manual_seed(1024)
image = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5, generator=generator).images[0]

from diffusers import LMSDiscreteScheduler

lms = LMSDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.scheduler = lms
generator = torch.Generator(device=device).manual_seed(1024)
image = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5, generator=generator).images[0]
strength is a value between 0 and 1 that controls how much noise is added to the input image; values close to 1 produce outputs that are semantically inconsistent with the input.
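To see the effect, you can sweep strength with a fixed seed. This is only a sketch that reuses the pipe, prompt, and init_image defined above; the strength values and filenames are chosen for illustration:

# Sketch: compare low vs. high strength with the same seed (values are illustrative).
for strength in (0.3, 0.6, 0.9):
    generator = torch.Generator(device=device).manual_seed(1024)
    out = pipe(prompt=prompt, image=init_image, strength=strength, guidance_scale=7.5, generator=generator).images[0]
    out.save(f"fantasy_landscape_strength_{strength}.png")  # low strength stays close to the input, high strength diverges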
4.Text-guided image-inpainting
StableDiffusionInpaintPipeline lets you edit specific parts of an image by providing a mask and a text prompt.
import PIL
import requests
import torch
from io import BytesIO

from diffusers import StableDiffusionInpaintPipeline

pipeline = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
)
pipeline = pipeline.to("cuda")

def download_image(url):
    response = requests.get(url)
    return PIL.Image.open(BytesIO(response.content)).convert("RGB")

img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))

prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
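In this pipeline, white pixels of the mask mark the region to repaint and black pixels are preserved. As a sketch, a mask can also be built programmatically; the rectangle coordinates below are arbitrary assumptions:

# Sketch: build a rectangular mask with PIL; white = region to repaint (coordinates are arbitrary).
from PIL import Image, ImageDraw

mask = Image.new("L", init_image.size, 0)                        # start fully black: keep everything
ImageDraw.Draw(mask).rectangle((128, 128, 384, 384), fill=255)   # white box marks the area to inpaint
image = pipeline(prompt=prompt, image=init_image, mask_image=mask).images[0]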
5.Text-guided depth-to-image generation