
On a Scale of 1 to 5, How Reliable Are AI User Studies? A Call for Developing Validated, Meaningful Scales and Metrics about User Perceptions of AI Systems

As public discourse around trust, safety, and bias in AI systems intensifies, and as AI systems increasingly impact consumers' daily lives, there is a growing need for empirical research to measure psychological constructs underlying the human-AI …

Harmful Bias: A General Label-Leakage Attack on Federated Learning from Bias Gradients

Federated learning (FL) enables multiple users to train machine-learning models jointly without explicitly sharing data with one another. This regime is particularly helpful in cases where keeping the data private and secure is essential (e.g., …
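A minimal sketch of the well-known observation this line of work builds on (not the paper's general attack itself): for softmax cross-entropy, the gradient of the loss with respect to the output layer's bias is `softmax(z) - one_hot(y)`, so it is negative only at the true class, and an observer of shared gradients can read off the label. Function names here are illustrative.

```python
import numpy as np

def bias_gradient(logits, label):
    """Gradient of softmax cross-entropy loss w.r.t. the output-layer bias.

    For a single example this equals softmax(logits) - one_hot(label).
    """
    probs = np.exp(logits - logits.max())  # stable softmax
    probs /= probs.sum()
    one_hot = np.zeros_like(probs)
    one_hot[label] = 1.0
    return probs - one_hot

def infer_label(bias_grad):
    """The only negative entry of the bias gradient marks the true class."""
    return int(np.argmin(bias_grad))

logits = np.array([1.2, -0.3, 0.8, 2.1])
g = bias_gradient(logits, label=2)
assert infer_label(g) == 2  # the label leaks from the bias gradient
```

This illustrates why sharing raw gradients in FL is not automatically private, which is the threat model the paper generalizes.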

The Ultimate Combo: Boosting Adversarial Example Transferability by Composing Data Augmentations

To help adversarial examples generalize from surrogate machine-learning (ML) models to targets, certain transferability-based black-box evasion attacks incorporate data augmentations (e.g., random resizing). Yet, prior work has explored limited …
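A hedged sketch of the general idea of composing input augmentations, with toy transformations on a NumPy "image" (the specific augmentations and names below are illustrative, not the paper's composition): transferability attacks apply such a composed transform to the input at each iteration to diversify the surrogate's gradients.

```python
import random
import numpy as np

def random_pad(img, max_pad=2):
    """Pad the image by a random amount on all sides (toy stand-in for resizing)."""
    p = random.randint(0, max_pad)
    return np.pad(img, ((p, p), (p, p)), mode="constant")

def random_flip(img):
    """Horizontally flip the image with probability 0.5."""
    return img[:, ::-1] if random.random() < 0.5 else img

def compose(*augs):
    """Chain augmentations into a single transform applied left to right."""
    def apply(img):
        for aug in augs:
            img = aug(img)
        return img
    return apply

augment = compose(random_pad, random_flip)
x = np.ones((4, 4))
x_aug = augment(x)
assert x_aug.shape[0] >= 4  # padding never shrinks the image
```

In an actual attack loop, the attacker would compute the loss gradient on `augment(x_adv)` rather than `x_adv` itself at each step.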

Privacy-Preserving Collaborative Genomic Research: A Real-Life Deployment and Vision

The data revolution holds a significant promise for the health sector. Vast amounts of data collected and measured from individuals will be transformed into knowledge, AI models, predictive systems, and digital best practices. One area of health that …

Property-Driven Evaluation of RL-Controllers in Self-Driving Datacenters

Reinforcement learning-based controllers (RL-controllers) in self-driving datacenters have evolved into complex dynamic systems that require continuous tuning to achieve higher performance than hand-crafted expert heuristics. The operating …

Training Older Adults to Resist Scams with Fraud Bingo and Scam-Detection Challenges

Older adults are disproportionately affected by scams, many of which target them specifically. In this interactive demo, we present *Fraud Bingo*, an intervention designed by WISE & Healthy Aging Center in Southern California prior to 2012, that has …

Comparing Hypothetical and Realistic Privacy Valuations

To protect users' privacy, it is important to understand how they value personal information. Prior work identified how framing effects alter users' valuations and highlighted the difficulty in eliciting real valuations through user studies under …

On the Suitability of $L_p$-norms for Creating and Preventing Adversarial Examples

Much research has been devoted to better understanding adversarial examples, which are specially crafted inputs to machine-learning models that are perceptually similar to benign inputs, but are classified differently (i.e., misclassified). Both …