
Running Stable Cascade on 6GB is rough; buy a video card with at least 12GB of VRAM



Sora AI has come out, and apparently Stable Diffusion 3.0 has been announced too.

 

There is also this thing called the Cascade model, and I wanted to try running it, but it is rough going.


Fortunately, a somewhat simpler way to run it was published recently, so I gave it a shot.


You need Python, you have to install git, you have to install a GPU-enabled build of PyTorch, and there are a few other things to set up.


In particular, you have to install the diffusers module. There is a URL for it on the Hugging Face page for the Cascade model, but even after installing from that, running the script throws an error saying some xxxUnit class is missing, so you end up googling for a download URL and reinstalling.
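Before running anything, a quick sanity check like the one below (just my own habit, not something from the model page) helps confirm that the GPU build of PyTorch is really the one installed and that diffusers is recent enough; an outdated diffusers is a likely cause of missing-class errors like the one above.

import torch
import diffusers

# True means the CUDA build of PyTorch is installed and a GPU is visible.
print(torch.__version__, torch.cuda.is_available())

# Stable Cascade needs a fairly recent diffusers release.
print(diffusers.__version__)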


And the hardware specs you need:


A lot of disk space. The model files themselves are over 10GB, and what gets pulled in at run time is another 10GB or so, so you should keep at least 30GB free.
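One detail worth knowing (a general Hugging Face thing, not from the original guide): the downloaded model files land in the Hugging Face cache, by default under ~/.cache/huggingface, and you can point that at a drive with more room by setting an environment variable before anything is loaded.

import os

# Hypothetical path; point it at whichever drive has the free space.
os.environ["HF_HOME"] = "D:/hf-cache"

# Import diffusers only after setting the variable so the new cache path is picked up.
from diffusers import StableCascadePriorPipeline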


And the most important part, video card memory: you need an RTX-series card with 12GB or more.

If you run it on the CPU, it throws an error saying half precision is not supported.


The Stable Diffusion web UI has a --no-half option that gets around this, but I don't know how to apply the equivalent here, so I can't use it.
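My guess at the rough diffusers-side equivalent is simply to load the pipelines in float32 instead of half precision, something like the sketch below, and to skip the .half() call later on. As described further down, my CPU-only attempt along these lines still only produced noise, so treat it as an untested idea.

import torch
from diffusers import StableCascadePriorPipeline

# Untested guess at a "--no-half" equivalent: keep everything in float32 on the CPU.
prior = StableCascadePriorPipeline.from_pretrained(
    "stabilityai/stable-cascade-prior",
    torch_dtype=torch.float32,   # instead of torch.bfloat16 / torch.float16
).to("cpu")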


I tried running it on the CPU of a system without CUDA and failed,


and on a system with an RTX 4060 and 16GB of memory I did somehow manage to get it running. I swapped in the graphics card, installed a newer Python and the GPU build of PyTorch, and got it going, but that left the WEB-UI in a state where it would not run anymore, and I only barely worked around that by patching its version-check logic so everything would at least run.


In this post, I am running it on a 3050 with 6GB.


When I ran the example from the Hugging Face source, I got a GPU out of memory error and had no idea what to do about it.


Installing Python, installing PyTorch, installing diffusers, updating diffusers and so on, I went through the whole circus and it still failed.


I figured there had to be some way to make it work.


--------------------------------Python source

import os

# Ask PyTorch's CUDA allocator to collect aggressively and avoid large fragmented
# blocks; this has to be set before torch is imported.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "garbage_collection_threshold:0.3,max_split_size_mb:128,expandable_segments:True"

import gc
import torch
from diffusers import StableCascadeDecoderPipeline, StableCascadePriorPipeline

# Print the GPUs PyTorch can see, just to confirm CUDA is picked up.
for i in range(torch.cuda.device_count()):
    print(torch.cuda.get_device_properties(i).name)

device = "cuda"
num_images_per_prompt = 2

print("step-1")

# Stage 1: the prior pipeline turns the prompt into image embeddings.
prior = StableCascadePriorPipeline.from_pretrained(
    "stabilityai/stable-cascade-prior",
    low_cpu_mem_usage=True,
    torch_dtype=torch.bfloat16,
).to(device)

print("step-2")

prompt = "Anthropomorphic cat dressed as a pilot"
negative_prompt = ""

prior_output = prior(
    prompt=prompt,
    height=1024,
    width=1024,
    negative_prompt=negative_prompt,
    guidance_scale=4.0,
    num_images_per_prompt=num_images_per_prompt,
    num_inference_steps=20,
)

# Free the prior before loading the decoder so both never sit in VRAM at once.
del prior
gc.collect()
with torch.no_grad():
    torch.cuda.empty_cache()

# Stage 2: the decoder pipeline turns the embeddings into the actual images.
decoder = StableCascadeDecoderPipeline.from_pretrained(
    "stabilityai/stable-cascade",
    low_cpu_mem_usage=True,
    torch_dtype=torch.float16,
).to(device)

print("step-3")

decoder_output = decoder(
    image_embeddings=prior_output.image_embeddings.half(),
    prompt=prompt,
    negative_prompt=negative_prompt,
    guidance_scale=0.0,
    output_type="pil",
    num_inference_steps=10,
).images

for idx, img in enumerate(decoder_output):
    img.save(f"{idx}.jpg")


-------------------------

This is how I modified it.


In the original example the from_pretrained calls, StableCascadePriorPipeline.from_pretrained and then the decoder's, run back to back and fill up the memory, so I split the processing into separate stages

and tried to free the memory of whatever had finished its job.


del prior

gc.collect()

with torch.no_grad():

    torch.cuda.empty_cache() 
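To get a feel for whether that block actually releases anything, a check like the one below (my own addition, not part of the script above) can be printed before and after it.

import torch

# VRAM held by live tensors vs. VRAM the allocator still keeps reserved.
print(f"allocated: {torch.cuda.memory_allocated() / 1024**3:.2f} GiB")
print(f"reserved:  {torch.cuda.memory_reserved() / 1024**3:.2f} GiB")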


I am not sure which of these actually does the trick, but


after trying it this way and waiting quite a while, the astronaut kitty does get saved.


Looking at the GPU tab in the Windows Task Manager, the 6GB of memory is completely full, so it seems to be running slowly because of that.
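Since the card is clearly at its limit, another thing that might help (I have not verified it with Stable Cascade myself) is letting diffusers keep the sub-models in system RAM and move them onto the GPU only while they run, instead of calling .to("cuda"). It trades speed for VRAM and needs the accelerate package.

import torch
from diffusers import StableCascadePriorPipeline

prior = StableCascadePriorPipeline.from_pretrained(
    "stabilityai/stable-cascade-prior",
    torch_dtype=torch.bfloat16,
)

# Moves each sub-model to the GPU only for its forward pass; requires accelerate.
prior.enable_model_cpu_offload()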


When I ran it CPU-only, I changed Float16 to Float32 and removed the half() call, but I only got an image full of noise, so that attempt failed.


With 6GB it is slow, but it does produce the images.



I should have bought a card with 20GB of memory or more in the first place.

It already stutters in Palworld, and the AI test stutters even more.


If anyone out there is testing Stable Cascade, I am leaving this here in case it is useful as a reference.


 
