Safety Perceptions of Generative AI Conversational Agents: Uncovering Perceptual Differences in Trust, Risk, and Fairness

Public and academic discourse on the safety of generative-AI conversational agents, particularly chatbots, often centers on fairness, trust, and risk. However, there is limited insight into how users differentiate these perceptions and what …

On a Scale of 1 to 5, How Reliable Are AI User Studies? A Call for Developing Validated, Meaningful Scales and Metrics about User Perceptions of AI Systems

As public discourse around trust, safety, and bias in AI systems intensifies, and as AI systems increasingly impact consumers' daily lives, there is a growing need for empirical research to measure psychological constructs underlying the human-AI …
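
Scale development of this kind typically starts from internal-consistency checks. Below is a minimal sketch, assuming NumPy and made-up Likert-style data, of Cronbach's alpha, a standard reliability estimate for multi-item scales; it illustrates the kind of metric the paper calls for, not any instrument proposed in it.

```python
import numpy as np

def cronbach_alpha(scores):
    # scores: (n_respondents, k_items) matrix of item responses.
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Toy data: 5 items driven by one shared construct plus noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(50, 1))
items = latent + 0.5 * rng.normal(size=(50, 5))
print(f"alpha = {cronbach_alpha(items):.2f}")     # high alpha => internally consistent
```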

Training Robust ML-based Raw-Binary Malware Detectors in Hours, not Months

Machine-learning (ML) classifiers are increasingly used to distinguish malware from benign binaries. Recent work has shown that ML-based detectors can be evaded by adversarial examples, but also that one may defend against such attacks via …
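
For context, one common defense in this literature is adversarial training, i.e., training on adversarially perturbed inputs. The sketch below shows the generic recipe on continuous features with FGSM, purely as an illustration under those assumptions; raw-binary detectors require domain-specific perturbations, and this is not the paper's training procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, eps=0.1):
    # Craft an FGSM adversarial example against the current model...
    x_adv = x.clone().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
    x_adv = (x + eps * grad.sign()).detach()
    # ...then take an ordinary training step on the perturbed input.
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    optimizer.step()

# Toy usage with a stand-in classifier (raw-binary models would differ).
model = nn.Sequential(nn.Linear(16, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
adversarial_training_step(model, opt, torch.randn(8, 16), torch.randint(0, 2, (8,)))
```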

Harmful Bias: A General Label-Leakage Attack on Federated Learning from Bias Gradients

Federated learning (FL) enables several users to train machine-learning models jointly without explicitly sharing data with one another. This regime is particularly helpful in cases where keeping the data private and secure is essential (e.g., …
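
The attack's name points to a well-known observation: with softmax cross-entropy, the gradient of the final layer's bias equals softmax(z) − one_hot(y), so for a single example its only negative entry reveals the true label. Below is a minimal PyTorch sketch of that building block, not the paper's full attack, which generalizes beyond this simple setting.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
x, y = torch.randn(1, 32), torch.tensor([7])  # toy single-example "client" update

loss = nn.functional.cross_entropy(model(x), y)
bias_grad, = torch.autograd.grad(loss, model[2].bias)  # last layer's bias gradient

# bias_grad = softmax(z) - one_hot(y): every entry is positive
# except the true class, so argmin recovers the label.
recovered_label = int(torch.argmin(bias_grad))
assert recovered_label == int(y)
```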

The Ultimate Combo: Boosting Adversarial Example Transferability by Composing Data Augmentations

To help adversarial examples generalize from surrogate machine-learning (ML) models to targets, certain transferability-based black-box evasion attacks incorporate data augmentations (e.g., random resizing). Yet, prior work has explored limited …
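
As a rough illustration of the underlying mechanism, augmentation-based transfer attacks average gradients over randomly transformed copies of the input so the perturbation overfits the surrogate less. The PyTorch sketch below composes two illustrative augmentations on 32×32 images; the specific compositions the paper explores are not reproduced here.

```python
import random
import torch
import torch.nn.functional as F

def augment(x):
    # Randomly compose simple augmentations: resize-and-pad, horizontal flip.
    if random.random() < 0.5:
        s = random.randint(24, 32)
        x = F.interpolate(x, size=(s, s), mode="bilinear", align_corners=False)
        x = F.pad(x, [0, 32 - s, 0, 32 - s])  # pad back to 32x32
    if random.random() < 0.5:
        x = torch.flip(x, dims=[3])
    return x

def transfer_fgsm(surrogate, x, y, eps=8 / 255, n_aug=10):
    # One FGSM step using gradients averaged over augmented copies.
    x_adv = x.clone().requires_grad_(True)
    grad = torch.zeros_like(x)
    for _ in range(n_aug):
        loss = F.cross_entropy(surrogate(augment(x_adv)), y)
        grad += torch.autograd.grad(loss, x_adv)[0]
    return (x + eps * grad.sign()).clamp(0, 1)
```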

Privacy-Preserving Collaborative Genomic Research: A Real-Life Deployment and Vision

The data revolution holds a significant promise for the health sector. Vast amounts of data collected and measured from individuals will be transformed into knowledge, AI models, predictive systems, and digital best practices. One area of health that …

A High Coverage Cybersecurity Scale Predictive of User Behavior

Psychometric security scales can enable various crucial tasks (e.g., measuring changes in user behavior over time), but, unfortunately, they often fail to accurately predict actual user behavior. We hypothesize that one can enhance prediction …
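
One way to quantify the gap the abstract describes is predictive validity: how well aggregate scale scores correlate with observed behavior. A toy NumPy sketch, with entirely made-up data, follows.

```python
import numpy as np

rng = np.random.default_rng(1)
scale_scores = rng.normal(size=100)                   # per-respondent scale score
behavior = 0.4 * scale_scores + rng.normal(size=100)  # noisy observed behavior

# Pearson correlation between scale score and behavior measure.
r = np.corrcoef(scale_scores, behavior)[0, 1]
print(f"predictive validity: r = {r:.2f}")
```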

CaFA: Cost-aware, Feasible Attacks With Database Constraints Against Neural Tabular Classifiers

This work presents CaFA, a system of Cost-aware, Feasible Attacks for assessing the robustness of neural tabular classifiers against adversarial examples realizable in the problem space while minimizing adversaries’ effort. To this end, CaFA …
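
As a rough sketch of what problem-space feasibility can mean for tabular data, the hypothetical projection below pushes perturbed features back onto assumed database-style constraints (value ranges, integrality, immutable columns). The constraint types and names are illustrative; this is not CaFA's constraint-mining or attack algorithm.

```python
import numpy as np

def project_feasible(x_adv, x_orig, bounds, int_cols, immutable_cols):
    x = np.clip(x_adv, bounds[:, 0], bounds[:, 1])  # per-feature value ranges
    x[int_cols] = np.round(x[int_cols])             # integer-valued features
    x[immutable_cols] = x_orig[immutable_cols]      # features the attacker can't change
    return x

# Toy usage: 4 features with ranges, one integer column, one immutable column.
bounds = np.array([[0, 1], [18, 99], [0, 1e6], [0, 1]], dtype=float)
x = np.array([0.2, 35.0, 5e4, 1.0])
x_adv = x + np.array([0.4, -2.7, 1e4, -1.0])        # raw gradient-step output
print(project_feasible(x_adv, x, bounds, int_cols=[1], immutable_cols=[3]))
```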

DrSec: Flexible Distributed Representations for Efficient Endpoint Security

The increasing complexity of attacks has given rise to varied security applications tackling profound tasks, ranging from alert triage to attack reconstruction. Yet, security products, such as Endpoint Detection and Response, bring together …

Group-based Robustness: A General Framework for Customized Robustness in the Real World

Machine-learning models are known to be vulnerable to evasion attacks that perturb model inputs to induce misclassifications. In this work, we identify real-world scenarios where the true threat cannot be assessed accurately by existing attacks. …
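
For concreteness, the sketch below adapts a standard PGD evasion attack so that success means landing in any class of an assumed target group, one plausible reading of a group-based threat model; it is an illustration, not the attacks proposed in the paper.

```python
import torch
import torch.nn.functional as F

def group_pgd(model, x, target_group, eps=8 / 255, alpha=2 / 255, steps=10):
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        # Steer toward whichever group member is currently easiest to reach.
        best = logits[:, target_group].argmax(dim=1)
        target = torch.tensor(target_group, device=x.device)[best]
        grad = torch.autograd.grad(F.cross_entropy(logits, target), x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()       # descend toward the group
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # stay in the L-inf ball
            x_adv = x_adv.clamp(0, 1)                 # stay in valid input range
    return x_adv.detach()
```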