As public discourse around trust, safety, and bias in AI systems intensifies and AI systems increasingly affect consumers’ daily lives, there is a growing need for empirical research that measures the psychological constructs underlying the human-AI relationship. Reviewing the literature, we identified a gap in the availability of validated instruments: researchers instead appear to adapt, reuse, or develop measures ad hoc, without systematic validation. Through piloting different instruments, we identified limitations of this approach as well as of existing validated instruments. To enable more robust and impactful research on user perceptions of AI systems, we advocate a community-driven initiative to discuss, exchange, and develop validated, meaningful scales and metrics for human-centered AI research.