Public and academic discourse on the safety of conversational agents using generative AI, particularly chatbots, often centers on fairness, trust, and risk. However, there is limited insight into how users differentiate these perceptions and what …
As public discourse around trust, safety, and bias in AI systems intensifies, and as AI systems increasingly impact consumers' daily lives, there is a growing need for empirical research to measure psychological constructs underlying the human-AI …
Machine-learning (ML) classifiers are increasingly used to distinguish malware from benign binaries. Recent work has shown that ML-based detectors can be evaded by adversarial examples, but also that one may defend against such attacks via …
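The abstract is truncated before naming the defense; purely to illustrate one well-known class of defenses against evasion (not necessarily the one studied in this work), below is a minimal adversarial-training sketch in PyTorch. The model, data, loss, and the epsilon value are placeholders.

```python
import torch

# Illustrative only: generic adversarial training (training on FGSM-perturbed
# inputs), one common defense against evasion; not necessarily this paper's.
def fgsm_perturb(model, loss_fn, x, y, epsilon=0.1):
    """Craft an FGSM adversarial example: x + epsilon * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, loss_fn, x, y, epsilon=0.1):
    """One training step taken on adversarially perturbed inputs."""
    x_adv = fgsm_perturb(model, loss_fn, x, y, epsilon)
    optimizer.zero_grad()
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```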
Federated learning (FL) enables several users to train machine-learning models jointly without explicitly sharing data with one another. This regime is particularly helpful in cases where keeping the data private and secure is essential (e.g., …
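To make the setting concrete, here is a minimal sketch of federated averaging (FedAvg; McMahan et al., 2017), the canonical FL algorithm: each client trains locally on its private data, and only model weights, never raw data, reach the server. The model, client data loaders, and hyperparameters are placeholders.

```python
import copy
import torch

# Minimal FedAvg sketch: clients train locally; the server averages weights.
def local_update(global_model, loader, epochs=1, lr=0.01):
    """Train a copy of the global model on one client's private data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:  # private data never leaves the client
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def fedavg_round(global_model, client_loaders):
    """One communication round: average the clients' locally trained weights."""
    states = [local_update(global_model, ld) for ld in client_loaders]
    avg = {k: torch.stack([s[k].float() for s in states]).mean(0)
           for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model
```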
To help adversarial examples generalize from surrogate machine-learning (ML) models to targets, certain transferability-based black-box evasion attacks incorporate data augmentations (e.g., random resizing). Yet, prior work has explored limited …
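As one concrete, published instantiation of this augmentation idea, and not necessarily the augmentations explored in this work, the sketch below follows the spirit of the Diverse Inputs method (Xie et al., 2019): each gradient step is computed on a randomly resized-and-padded copy of the input, which empirically improves transfer from the surrogate to the target. The step sizes and resize range are illustrative.

```python
import random
import torch
import torch.nn.functional as F

def diverse_input(x, low=0.9):
    """Randomly downscale a batch of images, then zero-pad back to full size."""
    _, _, h, w = x.shape
    nh = int(h * random.uniform(low, 1.0))
    nw = int(w * random.uniform(low, 1.0))
    x_small = F.interpolate(x, size=(nh, nw), mode="bilinear",
                            align_corners=False)
    pt, pl = random.randint(0, h - nh), random.randint(0, w - nw)
    return F.pad(x_small, (pl, w - nw - pl, pt, h - nh - pt))

def attack_step(surrogate, loss_fn, x_adv, y, x0, alpha=2/255, epsilon=8/255):
    """One I-FGSM step on the augmented input, projected to an L-inf ball."""
    x_adv = x_adv.clone().detach().requires_grad_(True)
    loss_fn(surrogate(diverse_input(x_adv)), y).backward()
    x_new = x_adv + alpha * x_adv.grad.sign()
    return (x0 + (x_new - x0).clamp(-epsilon, epsilon)).clamp(0, 1).detach()
```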
The data revolution holds significant promise for the health sector. Vast amounts of data collected and measured from individuals will be transformed into knowledge, AI models, predictive systems, and digital best practices. One area of health that …
Psychometric security scales can enable various crucial tasks (e.g., measuring changes in user behavior over time), but, unfortunately, they often fail to accurately predict actual user behavior. We hypothesize that one can enhance prediction …
This work presents CaFA, a system for Cost-aware Feasible Attacks that assesses the robustness of neural tabular classifiers against adversarial examples realizable in the problem space while minimizing adversaries' effort. To this end, CaFA …
The increasing complexity of attacks has given rise to varied security applications tackling challenging tasks, ranging from alert triage to attack reconstruction. Yet, security products, such as Endpoint Detection and Response, bring together …
Machine-learning models are known to be vulnerable to evasion attacks that perturb model inputs to induce misclassifications. In this work, we identify real-world scenarios where the true threat cannot be assessed accurately by existing attacks. …
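For context on how such threats are usually quantified, below is a minimal sketch of the standard empirical robustness evaluation: a model's robust accuracy is the fraction of inputs it still classifies correctly after an attack perturbs them. The `attack` callable and the data loader are placeholders for any evasion attack and test set.

```python
import torch

def robust_accuracy(model, loader, attack):
    """Fraction of inputs still classified correctly after the attack."""
    correct = total = 0
    for x, y in loader:
        x_adv = attack(model, x, y)  # craft adversarial examples (needs grads)
        with torch.no_grad():
            correct += (model(x_adv).argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total
```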