Fair Diffusion Explorer
Choose from the occupations below to compare how Stable Diffusion (left) and Fair Diffusion (right) represent different professions.
[Interactive viewer: Fair Diffusion generations, selectable by profession]
We present a novel strategy, called Fair Diffusion, to attenuate biases after the deployment of generative text-to-image models. Specifically, we demonstrate shifting a bias, based on human instructions, in any direction, yielding arbitrary new proportions for, e.g., identity groups. As our empirical evaluation demonstrates, this introduced control enables instructing generative image models on fairness, with no data filtering or additional training required. For the full paper by Friedrich et al., see here.
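Fair Diffusion builds on semantic guidance, which steers the diffusion process at inference time with textual editing instructions rather than retraining the model. The sketch below shows how this kind of instruction-based editing can be reproduced with the SemanticStableDiffusionPipeline from Hugging Face diffusers; the checkpoint, occupation prompt, editing concepts, and guidance parameters are illustrative assumptions, not the exact settings behind the images in the explorer above.

```python
# Minimal sketch of instruction-based fair editing with semantic guidance.
# Checkpoint, prompts, and parameters below are assumptions for illustration.
import torch
from diffusers import SemanticStableDiffusionPipeline

pipe = SemanticStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of the face of a firefighter"  # illustrative occupation prompt

# Fair instruction: suppress the concept "male person" and amplify "female person".
# Flipping `reverse_editing_direction` per sample lets a batch be steered toward
# any target proportion (e.g. 50/50) without data filtering or additional training.
out = pipe(
    prompt=prompt,
    num_images_per_prompt=1,
    guidance_scale=7,
    editing_prompt=["male person", "female person"],  # concepts to edit
    reverse_editing_direction=[True, False],          # suppress the first, amplify the second
    edit_warmup_steps=[10, 10],      # let the image layout form before editing
    edit_guidance_scale=[6, 6],      # strength of each editing direction (illustrative)
    edit_threshold=[0.95, 0.95],     # restrict edits to the relevant image regions
    edit_momentum_scale=0.3,
    edit_mom_beta=0.6,
)
out.images[0].save("fair_firefighter.png")
```

Sampling the editing direction per image, for instance with a fair coin flip between the two `reverse_editing_direction` settings, is one way to reach a desired proportion of identity groups across a batch of generations.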