Diffusion-Based Generative Coverless Steganography for Robust Face Recognition
Keywords:
Coverless image steganography (CIS), Contrastive Image Synthesis, Joint Source-Channel Coding (JSCC), peak signal-to-noise ratio (PSNR), Diffusion Model

Abstract
Conventional image steganography embeds one image within another to evade detection by unauthorized parties. Coverless image steganography (CIS) improves imperceptibility by omitting the cover image altogether. Recent studies have employed text prompts as keys for contrastive image synthesis via diffusion models. The rapid advancement of generative models has also given rise to a new paradigm known as generative steganography (GS), which enables message-to-image generation without requiring a carrier image. Face recognition, an internationally recognized biometric technique, is widely deployed in identity verification systems. This work presents a novel coverless steganography framework for face recognition images based on a diffusion model, aimed at strengthening personal privacy protection and enabling the secure transmission and sharing of sensitive information without compromising user experience. We propose a coverless semantic steganography communication system that uses a generative diffusion model to conceal hidden images within generated stego images. Semantically associated private and public keys allow the legitimate receiver to accurately decode the hidden images, while an eavesdropper lacking the complete and correct key pair cannot access them. Simulation results demonstrate the efficacy of the plug-and-play architecture across several Joint Source-Channel Coding (JSCC) frameworks. Comparative results under various eavesdropping threats show that, at a channel Signal-to-Noise Ratio (SNR) of 2.03 dB, the peak signal-to-noise ratio (PSNR) at the legitimate receiver exceeds that of the eavesdropper by 4.14 dB.