Key-point Guided Deformable Image Manipulation Using Diffusion Model

AUTHORS
Seok-Hwan Oh1, Guil Jung1, Myeong-Gee Kim1, Sang-Yun Kim1, Young-Min Kim1, Hyeonjik Lee1, Hyuksool Kwon2, Hyeon-Min Bae1
PUBLISHED
BMVC
  • 1. Department of Electrical Engineering, KAIST, Daejeon, South Korea
  • 2. Department of Emergency Medicine, SNUBH, Seongnam, South Korea

Abstract:

In this paper, we introduce a Key-point-guided Diffusion probabilistic Model (KDM) that gains precise control over images by manipulating an object's key-points. We propose a two-stage generative model that incorporates an optical flow map as an intermediate output. This establishes a dense, pixel-wise understanding of the semantic relation between the image and the sparse key-points, leading to more realistic image generation. In addition, the optical flow helps regulate the inter-frame variance of sequential images, enabling authentic sequential image generation. The KDM is evaluated on diverse key-point-conditioned image synthesis tasks, including facial image generation, human pose synthesis, and echocardiography video prediction, demonstrating that it produces more consistent and photo-realistic images than state-of-the-art models.
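To make the two-stage design concrete, the sketch below shows one possible arrangement: a first network predicts a dense optical-flow map from the source image and rasterized target key-points, and a second network refines the flow-warped source into the output image. This is a minimal illustration only; the `TinyUNet` stand-ins, tensor shapes, and the omission of the diffusion denoising loop are assumptions for exposition, not the authors' KDM implementation.

```python
# Sketch of a two-stage key-point -> optical-flow -> image pipeline.
# All module names, shapes, and the TinyUNet stand-ins are illustrative
# assumptions; the diffusion sampling loop of the paper is omitted here.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyUNet(nn.Module):
    """Placeholder denoiser: maps a conditioned input to a per-pixel output."""
    def __init__(self, in_ch, out_ch, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.SiLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.SiLU(),
            nn.Conv2d(width, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def keypoints_to_heatmaps(kpts, h, w, sigma=2.0):
    """Rasterize sparse (x, y) key-points into Gaussian heatmaps (B, K, H, W)."""
    ys = torch.arange(h).view(1, 1, h, 1).float()
    xs = torch.arange(w).view(1, 1, 1, w).float()
    kx = kpts[..., 0].view(*kpts.shape[:2], 1, 1)
    ky = kpts[..., 1].view(*kpts.shape[:2], 1, 1)
    return torch.exp(-((xs - kx) ** 2 + (ys - ky) ** 2) / (2 * sigma ** 2))


def warp(img, flow):
    """Backward-warp an image with a dense flow field (B, 2, H, W) in pixels."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0) + flow
    grid_x = 2 * grid[:, 0] / (w - 1) - 1  # normalize to [-1, 1] for grid_sample
    grid_y = 2 * grid[:, 1] / (h - 1) - 1
    return F.grid_sample(img, torch.stack((grid_x, grid_y), dim=-1),
                         align_corners=True)


# Stage 1: predict a dense flow map from the source image plus target key-points.
# Stage 2: warp the source with that flow, then refine it into the final image.
B, K, H, W = 1, 5, 64, 64
flow_net = TinyUNet(in_ch=3 + K, out_ch=2)    # stage-1 stand-in
refine_net = TinyUNet(in_ch=3 + 2, out_ch=3)  # stage-2 stand-in

src = torch.rand(B, 3, H, W)
tgt_kpts = torch.rand(B, K, 2) * torch.tensor([W - 1, H - 1]).float()

heat = keypoints_to_heatmaps(tgt_kpts, H, W)
flow = flow_net(torch.cat([src, heat], dim=1))      # intermediate flow map
coarse = warp(src, flow)                            # flow-guided warping
out = refine_net(torch.cat([coarse, flow], dim=1))  # final synthesized image
print(out.shape)  # torch.Size([1, 3, 64, 64])
```

The flow map serves both roles described in the abstract: it gives the synthesis stage a dense, pixel-wise signal derived from the sparse key-points, and, when applied frame by frame, it constrains how much consecutive outputs may differ.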
