Sep 10, 2024 · In this paper, we show that for a subset of ML models used in MLaaS, namely Support Vector Machines (SVMs) and Support Vector Regression Machines (SVRs) which ...
Feb 15, 2024 · A second, and even less detectable, theft is the extraction of your ML model from your application for use in the hacker's application. If this is a direct ...
Sep 24, 2024 · Abstract. Model extraction aims to steal a functionally similar copy from a machine learning as a service (MLaaS) API with minimal overhead, typically for ...
Sep 27, 2024 · Attackers can also be interested in stealing the model itself or its training data. They might repeatedly probe the model to see which inputs lead to which ...
Dec 21, 2023 · Patched in the latest version of MLflow, the flaw allows attackers to steal or poison sensitive training data when a developer visits a random website on ...
Jun 20, 2024 · Model stealing attacks have become a serious concern for deep learning models, where an attacker can steal a trained model by querying its black-box API.
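The query-based stealing described in this result can be sketched in a few lines. The example below is a minimal, hypothetical illustration (not taken from any of the papers above): the "victim" is a secret linear classifier hidden behind a label-only API, and the attacker fits a surrogate by least squares on the labels returned for random queries. The API name, the secret weights, and the query budget are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "victim" model behind an MLaaS API: a secret linear
# classifier the attacker can query but not inspect.
SECRET_W = np.array([2.0, -1.0, 0.5])

def victim_api(x):
    """Black-box label oracle: returns +1 or -1 for each query row."""
    return np.where(x @ SECRET_W > 0, 1.0, -1.0)

# Extraction step: query random inputs, then fit a linear surrogate
# to the returned +/-1 labels by least squares. The sign of the
# surrogate's score approximates the victim's decision boundary.
X = rng.normal(size=(500, 3))   # attacker-chosen query points
y = victim_api(X)               # labels obtained from the API
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Measure functional agreement on fresh, held-out queries.
X_test = rng.normal(size=(1000, 3))
agreement = np.mean(np.sign(X_test @ w_hat) == victim_api(X_test))
print(f"surrogate/victim agreement: {agreement:.2%}")
```

With a few hundred queries the surrogate already agrees with the victim on the vast majority of inputs, which is why label-only APIs leak far more about a model than intuition suggests. Real attacks on nonlinear models replace the least-squares fit with training a student network on the queried labels.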
Jun 11, 2024 · We developed a PoC attack that compromises a model to steal private user data the model processes during normal operation. We injected a payload into the ...
Mar 15, 2024 · This allows models to provide services without exposing the underlying parameters and details, and has been harnessed in Machine Learning as a Service ...
Jan 24, 2024 · DeMistify is implemented as an automated tool to steal on-device ML models and reuse associated services in Android apps. As such, apps on other platforms (e.g ...
Sep 24, 2024 · This article investigates machine learning (ML) security, focusing on threats and attacks against ML. We have created a threat model for ML to illustrate ...