File:Demonstration of inpainting and outpainting using Stable Diffusion (step 2 of 4).png
Original file (2,048 × 3,584 pixels, file size: 4.21 MB, MIME type: image/png)
This is a file from the Wikimedia Commons. Information from its description page there is shown below. Commons is a freely licensed media file repository.
Summary
Description | Demonstration of inpainting and outpainting using Stable Diffusion (step 2 of 4).png
Demonstration of the usage of inpainting and outpainting techniques on algorithmically generated artworks created using the Stable Diffusion V1-4 AI diffusion model. Not only is Stable Diffusion capable of generating new images from scratch via a text prompt, it can also perform guided image synthesis to enhance existing images through its diffusion-denoising mechanism. This image illustrates the process by which Stable Diffusion can be used to perform both inpainting and outpainting, as one of four images showing each step of the procedure.
All artworks were created using a single NVIDIA RTX 3090. The front-end used for the entire generation process was the Stable Diffusion web UI created by AUTOMATIC1111.
An initial 512x768 image was algorithmically generated with Stable Diffusion via txt2img, using the following prompts:
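For illustration, this step can be approximated outside the web UI. The sketch below uses the Hugging Face diffusers library rather than the web UI actually used; the model ID, scheduler choice, step count, and placeholder prompt are assumptions standing in for the original settings and prompts, which are not reproduced here.

```python
# Minimal txt2img sketch with diffusers; settings are assumed, not the originals.
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
# "Euler a" in the web UI corresponds to the Euler ancestral scheduler.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe(
    prompt="placeholder prompt",  # the actual prompts are not reproduced here
    width=512,
    height=768,
    num_inference_steps=50,       # assumed; the txt2img step count is not stated
    guidance_scale=7,             # assumed CFG scale
).images[0]
image.save("initial_512x768.png")
```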
Then, two passes of the SD upscale script using "Real-ESRGAN 4x plus anime 6B" were run within img2img. The first pass used a tile overlap of 64, a denoising strength of 0.3, 50 sampling steps with Euler a, and a CFG scale of 7. The second pass used a tile overlap of 128, a denoising strength of 0.1, 10 sampling steps with Euler a, and a CFG scale of 7. This produced the initial 2048x3072 image to begin working with. Unfortunately for her (and fortunately for the purpose of this demonstration), it appears that the AI neglected to give this woman one of her arms.
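As a rough approximation of the SD upscale step, the sketch below enlarges the image and then runs two low-denoising-strength img2img passes with diffusers. It omits the tiling (with the 64- and 128-pixel overlaps) and the Real-ESRGAN 4x plus anime 6B upscaler that the web UI script actually uses, so it is a sketch of the idea, not the original workflow; file names and the plain resize are assumptions.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline, EulerAncestralDiscreteScheduler
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

# A plain Lanczos resize stands in for Real-ESRGAN; the real script also tiles
# the image, since a full 2048x3072 img2img pass exceeds typical VRAM.
img = Image.open("initial_512x768.png").resize((2048, 3072), Image.LANCZOS)

# First pass: denoising strength 0.3, 50 steps with Euler a, CFG scale 7.
img = pipe(prompt="placeholder prompt", image=img, strength=0.3,
           num_inference_steps=50, guidance_scale=7).images[0]
# Second pass: denoising strength 0.1, 10 steps with Euler a, CFG scale 7.
img = pipe(prompt="placeholder prompt", image=img, strength=0.1,
           num_inference_steps=10, guidance_scale=7).images[0]
img.save("upscaled_2048x3072.png")
```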
Using the "Outpainting mk2" script within img2img, the bottom of the image was extended by 512 pixels (via two passes, each pass extending 256 pixels), using 100 sampling steps with Euler a, denoising strength of 0.8, CFG scale of 7.5, mask blur of 4, fall-off exponent value of 1.8, colour variation set to 0.03. The prompts used were identical to those utilised during the first step. This subsequently increases the image's dimensions to 2048x3584, while also revealing the woman's midriff, belly button and skirt, which were previously absent from the original AI-generated image.
In GIMP, I drew a very shoddy attempt at a human arm using the standard paintbrush. This will provide a guide for the AI model to generate a new arm.
Using the inpaint feature of img2img, I drew a mask over the arm drawn in the previous step, along with a portion of the shoulder. The following settings were used for all passes:
An initial pass was run using the following prompts:
This created the arm; a subsequent pass was then run to fine-tune deformations and blemishes around the newly generated arm and along the sleeve. After drawing a new mask over the shoulder, I used the following prompt:
The outcome of this pass was the final image.
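To make the inpainting step concrete, here is a minimal diffusers sketch of a single masked pass: a hand-drawn mask (white where the model should repaint) covers the crude arm, and the model regenerates that region. The file names, the inpainting checkpoint, and the Gaussian blur standing in for the web UI's mask blur of 4 are assumptions.

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image, ImageFilter

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("with_drawn_arm.png")         # hypothetical file name
mask = Image.open("arm_mask.png").convert("L")   # white = area to repaint
mask = mask.filter(ImageFilter.GaussianBlur(4))  # soften the mask edge

result = pipe(prompt="placeholder prompt", image=image, mask_image=mask,
              num_inference_steps=50, guidance_scale=7.5).images[0]
result.save("inpainted_arm.png")
# The fine-tuning pass described above would repeat this with a new mask
# over the shoulder and a different prompt.
```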
Date | 27 September 2022
Source | Own work |
Author | Benlisquare |
Permission (Reusing this file) |
As the creator of the output images, I release this image under the licence displayed within the template below.
The Stable Diffusion AI model is released under the CreativeML OpenRAIL-M License, which "does not impose any restrictions on reuse, distribution, commercialization, adaptation" as long as the model is not intentionally used to cause harm to individuals, for instance to deliberately mislead or deceive. As stipulated by the license, the authors of the AI model claim no rights over any image outputs generated.
Licensing
- You are free:
- to share – to copy, distribute and transmit the work
- to remix – to adapt the work
- Under the following conditions:
- attribution – You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
- share alike – If you remix, transform, or build upon the material, you must distribute your contributions under the same or compatible license as the original.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled GNU Free Documentation License. http://www.gnu.org/copyleft/fdl.html
Items portrayed in this file
depicts | some value
27 September 2022
image/png
File history
Click on a date/time to view the file as it appeared at that time.
 | Date/Time | Thumbnail | Dimensions | User | Comment
---|---|---|---|---|---
current | 14:22, 27 September 2022 | | 2,048 × 3,584 (4.21 MB) | Benlisquare | {{Information |Description=Demonstration of the usage of inpainting and outpainting techniques on algorithmically-generated artworks created using the [https://github.com/CompVis/stable-diffusion Stable Diffusion V1-4] AI diffusion model. Not only is Stable Diffusion capable of generating new images from scratch via text prompt, it is also capable of providing guided image synthesis for enhancing existing images, through the use of the model's diffusion-denoising mechanism. This image aims t...
File usage
The following page uses this file:
Global file usage
The following other wikis use this file:
- Usage on zh.wikipedia.org