Safety Perceptions of Generative AI Conversational Agents: Uncovering Perceptual Differences in Trust, Risk, and Fairness

Public and academic discourse on the safety of conversational agents using generative AI, particularly chatbots, often centers on fairness, trust, and risk. However, there is limited insight into how users differentiate these perceptions and what …

Training Robust ML-based Raw-Binary Malware Detectors in Hours, not Months

Machine-learning (ML) classifiers are increasingly used to distinguish malware from benign binaries. Recent work has shown that ML-based detectors can be evaded by adversarial examples, but also that one may defend against such attacks via …

A High Coverage Cybersecurity Scale Predictive of User Behavior

Psychometric security scales can enable various crucial tasks (e.g., measuring changes in user behavior over time), but, unfortunately, they often fail to accurately predict actual user behavior. We hypothesize that one can enhance prediction …

CaFA: Cost-aware, Feasible Attacks With Database Constraints Against Neural Tabular Classifiers

This work presents CaFA, a system for Cost-aware Feasible Attacks for assessing the robustness of neural tabular classifiers against adversarial examples realizable in the problem space, while minimizing adversaries’ effort. To this end, CaFA …

DrSec: Flexible Distributed Representations for Efficient Endpoint Security

The increasing complexity of attacks has given rise to varied security applications tackling profound tasks, ranging from alert triage to attack reconstruction. Yet, security products, such as Endpoint Detection and Response, bring together …

Group-based Robustness: A General Framework for Customized Robustness in the Real World

Machine-learning models are known to be vulnerable to evasion attacks that perturb model inputs to induce misclassifications. In this work, we identify real-world scenarios where the true threat cannot be assessed accurately by existing attacks. …

Accessorize in the Dark: A Security Analysis of Near-Infrared Face Recognition

Prior work showed that face-recognition systems ingesting RGB images captured via visible-light (VIS) cameras are susceptible to real-world evasion attacks. Face-recognition systems in near-infrared (NIR) are widely deployed for critical tasks (e.g., …

Adversarial Training for Raw-Binary Malware Classifiers

Machine learning (ML) models have shown promise in classifying raw executable files (binaries) as malicious or benign with high accuracy. This has led to the increasing influence of ML-based classification methods in academic and real-world malware …

Scalable Verification of GNN-based Job Schedulers

Recently, Graph Neural Networks (GNNs) have been applied for scheduling jobs over clusters, achieving better performance than hand-crafted heuristics. Despite their impressive performance, concerns remain over whether these GNN-based job schedulers …

"I Have No Idea What a Social Bot Is": On Users' Perceptions of Social Bots and Ability to Detect Them

Social bots—software agents controlling accounts on online social networks (OSNs)—have been employed for various malicious purposes, including spreading disinformation and scams. Understanding users' perceptions of bots and their ability to distinguish them …