Abstract: With the increasing adoption of machine learning systems, concerns about bias and privacy have attracted significant research interest. This work investigates the intersection of algorithmic fairness and
differential privacy by evaluating differentially private fair representations. The LAFTR framework aims
to learn fair data representations while maintaining utility. Differential privacy is injected into model
training using DP-SGD to provide formal privacy guarantees. Experiments are conducted on the Adult,
German Credit, and CelebA datasets, with gender and age as sensitive attributes. The models are evaluated
across various configurations, including the privacy budget epsilon, adversary strength, and dataset
characteristics. Results demonstrate that with proper tuning, differentially private models can achieve fair
representations comparable or better than non-private models. However, introducing privacy reduces
training stability. Overall, the analysis provides insight into the tradeoffs among accuracy, fairness, and privacy for different model configurations across datasets. The results establish a benchmark for further
research into differentially private fair representation learning.
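To make the DP-SGD component of the abstract concrete, the sketch below shows one differentially private update in plain PyTorch: each example's gradient is clipped to a global L2 bound, the clipped gradients are summed, Gaussian noise calibrated to that bound is added, and the averaged result is applied with SGD. This is a minimal illustration, not the paper's implementation: the encoder/classifier sizes, clipping norm, and noise multiplier are illustrative assumptions, and the LAFTR adversary is omitted for brevity.

```python
# Minimal DP-SGD sketch (illustrative only): per-example clipping + Gaussian noise.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

encoder = nn.Sequential(nn.Linear(16, 8), nn.ReLU())  # features -> representation (sizes are assumptions)
classifier = nn.Linear(8, 2)                          # representation -> task label
params = list(encoder.parameters()) + list(classifier.parameters())
optimizer = torch.optim.SGD(params, lr=0.1)

clip_norm = 1.0         # per-example gradient clipping bound C (assumed value)
noise_multiplier = 1.0  # sigma; noise std is sigma * C (assumed value)

def dp_sgd_step(x, y):
    """One DP-SGD step: clip each example's gradient, sum, add noise, average, apply."""
    batch_size = x.shape[0]
    grad_sums = [torch.zeros_like(p) for p in params]  # accumulators for clipped per-example grads

    for i in range(batch_size):
        optimizer.zero_grad()
        logits = classifier(encoder(x[i : i + 1]))
        loss = F.cross_entropy(logits, y[i : i + 1])
        loss.backward()

        # Global L2 norm of this example's gradient across all parameters, then clip.
        total_norm = torch.sqrt(sum((p.grad.detach() ** 2).sum() for p in params))
        scale = torch.clamp(clip_norm / (total_norm + 1e-12), max=1.0)
        for g_sum, p in zip(grad_sums, params):
            g_sum += p.grad.detach() * scale

    # Add Gaussian noise calibrated to the clipping bound, average, and take a step.
    optimizer.zero_grad()
    for g_sum, p in zip(grad_sums, params):
        noise = torch.normal(0.0, noise_multiplier * clip_norm, size=p.shape)
        p.grad = (g_sum + noise) / batch_size
    optimizer.step()

# Toy batch standing in for a tabular dataset such as Adult.
x = torch.randn(32, 16)
y = torch.randint(0, 2, (32,))
dp_sgd_step(x, y)
```

In practice, a library such as Opacus is typically used to perform the per-example clipping efficiently and to track the spent privacy budget epsilon; the explicit per-example loop above is shown only to make the mechanism transparent.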