Surfacing Semantic Orthogonality Across Model Safety Benchmarks: A Multidimensional Analysis

Authors

Jonathan Bennion1, Shaona Ghosh2, Mantek Singh3 and Nouha Dziri4, 1The Objective AI, USA, 2Nvidia, USA, 3Google, USA, 4Allen Institute for AI (AI2), USA

Abstract

Various AI safety datasets have been developed to measure LLMs against evolving interpretations of harm. Our evaluation of five recently published open-source safety benchmarks reveals distinct semantic clusters using UMAP dimensionality reduction and k-means clustering (silhouette score: 0.470). We identify six primary harm categories with varying benchmark representation. GretelAI, for example, focuses heavily on privacy concerns, while WildGuardMix emphasizes self-harm scenarios. Significant differences in prompt length distributions suggest confounds in data collection and differing interpretations of harm, while also offering possible context. Our analysis quantifies orthogonality among AI safety benchmarks, making coverage gaps transparent despite topical similarities. This quantitative framework for analyzing semantic orthogonality across safety benchmarks enables more targeted development of datasets that comprehensively address the evolving landscape of harms in AI use, however harm comes to be defined in the future.
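The pipeline described above (embed prompts, reduce dimensionality, cluster, score cluster separation) can be sketched as follows. This is a hypothetical illustration, not the authors' code: the embeddings are synthetic stand-ins for prompt embeddings, and PCA is used as a stand-in for UMAP so the sketch depends only on scikit-learn; with umap-learn installed, `umap.UMAP(n_components=2)` would take PCA's place.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)

# Stand-in for prompt embeddings: 300 vectors in a 384-dim space,
# drawn around six synthetic "harm category" centroids.
centers = rng.normal(size=(6, 384)) * 5
embeddings = np.vstack([c + rng.normal(size=(50, 384)) for c in centers])

# Reduce to 2-D (PCA here; the paper uses UMAP), then cluster into
# six groups, mirroring the six harm categories the paper identifies.
reduced = PCA(n_components=2, random_state=0).fit_transform(embeddings)
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(reduced)

# Silhouette score ranges from -1 to 1; values nearer 1 indicate
# well-separated clusters. The paper reports 0.470 on real benchmark data.
score = silhouette_score(reduced, labels)
print(round(float(score), 3))
```

The silhouette score is what lets the comparison be quantitative: it summarizes how cleanly the reduced embeddings separate into clusters, so benchmarks whose prompts land in distinct clusters can be called semantically orthogonal.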

Keywords

AI benchmark meta-analysis, LLM Embeddings, Dimensionality reduction, K-means clustering, AI safety

Volume 15, Number 9