This post is *not* intended to teach people how to use popular predictive-modelling APIs for free, although, perhaps surprisingly, that isn't a far-fetched possibility. A trained machine learning model is essentially a function that maps feature vectors to an output variable. When queried with a test instance, the model predicts an outcome, assigning probability scores to all the possible classes. Google, Amazon, and others provide public-facing APIs to train predictive models on the subscriber's data; the trained model can then be used for prediction. This service comes at a cost: pay-per-query pricing, a monthly subscription, and so on.
Let's consider a scenario: a user subscribes to such a service on a trial basis, for a fraction of the cost, and queries the system for as long as he can. With these queries and the model's corresponding outputs, can the user reverse-engineer the system to emulate an exact or equivalent model, and even replicate the underlying algorithm? Can the stolen model leak sensitive training data as well? Can the feature-extraction methods employed behind the scenes also be decoded?
Yes!
How many queries would he need to make? It depends!
“Amazon uses logistic regression for classification and provides black-box access to trained models. It uses one-hot-encoding for categorical variables and quantile binning for numeric ones.”
Say, for example, that the algorithm used to train the data was logistic regression. The confidence value in logistic regression is nothing but a log-linear function 1/(1 + e^(−(w·x + β))) of the d-dimensional input vector x. All one needs to do is solve for the d + 1 unknown parameters w and β. Any user who wishes to make more than d + 1 queries to a model could then minimise his prediction cost by first running a cross-user model extraction attack, and then using the extracted model for personal use, free of charge.
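To make this concrete, here is a minimal equation-solving sketch in Python. The `query_fn` interface, the random probe distribution, and all helper names are assumptions made for illustration, not any particular service's API: each confidence score, once the sigmoid is inverted, gives one linear equation in (w, β), so d + 1 independent queries pin the model down exactly.

```python
import numpy as np

def extract_logistic_model(query_fn, d, n_queries=None, rng=None):
    """Equation-solving extraction sketch for a black-box logistic model.

    Assumes `query_fn(x)` returns the confidence p = 1/(1 + exp(-(w.x + b)))
    for a d-dimensional input x (interface invented for this example).
    """
    rng = np.random.default_rng(rng)
    n = n_queries or d + 1                 # d + 1 unknowns -> d + 1 queries suffice
    X = rng.normal(size=(n, d))            # random probe inputs
    p = np.array([query_fn(x) for x in X])
    z = np.log(p / (1 - p))                # invert the sigmoid: z = w.x + b
    A = np.hstack([X, np.ones((n, 1))])    # augment with a bias column
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coef[:d], coef[d]               # recovered w and beta

# Toy victim with secret parameters (purely hypothetical):
w_true, b_true = np.array([2.0, -1.0, 0.5]), 0.3
victim = lambda x: 1.0 / (1.0 + np.exp(-(w_true @ x + b_true)))
w_hat, b_hat = extract_logistic_model(victim, d=3)
# w_hat and b_hat match w_true and b_true up to numerical precision.
```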
The goal of such a model-extraction algorithm is to estimate a function f′ that closely approximates the actual function f, optimising for minimum test error (how often f′ disagrees with f on inputs from the data distribution) and minimum uniform error (how often they disagree on uniformly sampled inputs).
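In code, those two error notions might be estimated along these lines (a sketch: the function interface, the uniform sampling range, and the label-valued f and f′ are all assumptions):

```python
import numpy as np

def extraction_errors(f, f_prime, X_test, d, n_unif=10_000, rng=None):
    """Estimate the two metrics an extraction attack optimises for:
    test error    = disagreement rate of f and f' on held-out inputs,
    uniform error = disagreement rate on uniformly random inputs.
    Both f and f' are assumed here to return class labels."""
    rng = np.random.default_rng(rng)
    test_err = np.mean([f(x) != f_prime(x) for x in X_test])
    X_unif = rng.uniform(-1.0, 1.0, size=(n_unif, d))   # illustrative range
    unif_err = np.mean([f(x) != f_prime(x) for x in X_unif])
    return test_err, unif_err
```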
If a tree-based model is used, a decision tree can be similarly extracted using path-finding techniques, which assign each node a quasi-identifier and recover the tree's structure split by split.
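A crude sketch of the path-finding intuition, assuming the API returns something that uniquely identifies the leaf an input lands in (for example, a distinct confidence value); the `query_leaf_id` oracle and the binary search over a single feature are illustrative only:

```python
def find_split_threshold(query_leaf_id, x, feature, lo, hi, tol=1e-4):
    """Binary-search for the split threshold on one feature, assuming the
    leaf identifier changes somewhere between x[feature] = lo and = hi.
    `query_leaf_id` is a hypothetical black-box returning a per-leaf
    quasi-identifier for an input."""
    probe = list(x)                 # work on a copy of the input
    probe[feature] = lo
    base = query_leaf_id(probe)     # leaf reached on the 'lo' side
    while hi - lo > tol:
        mid = (lo + hi) / 2
        probe[feature] = mid
        if query_leaf_id(probe) == base:
            lo = mid                # still in the same leaf: move right
        else:
            hi = mid                # crossed a split: move left
    return (lo + hi) / 2            # approximate split threshold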
The Lowd–Meek adversarial classifier reverse-engineering (ACRE) attack is one such approach that gained a lot of popularity. Line search, adaptive retraining, and extract-and-test are other approaches that can be applied to the same end.
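A bare-bones retraining sketch using scikit-learn (the `query_label` oracle and the probe distribution are assumptions; an adaptive variant would iteratively concentrate new queries near the current substitute's decision boundary):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def retraining_attack(query_label, d, n_queries=1000, rng=None):
    """Label random probes with the black-box API, then fit a local
    substitute model on those labels (extract-and-test style)."""
    rng = np.random.default_rng(rng)
    X = rng.normal(size=(n_queries, d))
    y = np.array([query_label(x) for x in X])   # one paid query per probe
    return LogisticRegression().fit(X, y)       # local substitute model
```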
Among the different methods that can be applied to keep models and sensitive training data safe:
- Choose not to provide class probabilities upon prediction; return only class labels.
- If class probabilities are provided as output, round them off to a coarse precision (see the sketch after this list).
- Use ensembles. They are tough to reverse-engineer and could save you a lot of money.
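As a rough sketch of the first two mitigations (the rounding precision here is an arbitrary illustration, not a recommendation):

```python
import numpy as np

def harden_prediction(probs, decimals=1, labels_only=False):
    """Return either only the class label, or coarsely rounded class
    probabilities, instead of full-precision confidence scores."""
    probs = np.asarray(probs, dtype=float)
    if labels_only:
        return int(np.argmax(probs))     # class label only
    return np.round(probs, decimals)     # coarsened probabilities

# harden_prediction([0.8231, 0.1769])                   -> [0.8, 0.2]
# harden_prediction([0.8231, 0.1769], labels_only=True) -> 0
```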