Abstract
There has been an increase in the use of machine learning methods for cyber-security applications. These methods can be prone to generalization errors, especially in a binary attack classification setting, where the objective is to differentiate between benign and malicious behavior. Such generalization gaps create risky security blind spots that leave the system vulnerable. Attackers are well aware of these blind spots and, as a counter-strategy, exploit such vulnerabilities to bypass security measures and achieve their nefarious objectives. In this work, we propose RIsky Blind-Spot (RIBS), a methodology that mitigates this problem by making the classification more robust. Our approach builds a generator model that learns the real characteristics of the data and can consequently sample realistic examples targeting the blind spots of a classifier. We validate the methodology in the context of power grids, where we show how this framework improves the detection of unknown malicious behavior. Our approach yields a 10% improvement in accuracy and in the number of detected attacks compared to the baseline method.
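The abstract does not spell out the generator architecture or the augmentation procedure, so the sketch below is only an illustration of the general idea it describes: fit a generative model to each class, sample candidate examples, keep the ones the current classifier is least confident about (its blind spots), and retrain on the augmented data. The `GaussianMixture` generator, the `MLPClassifier`, the synthetic toy data, and the `sample_blind_spots` helper are all assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (assumed, not the paper's code): learn the data distribution,
# sample examples the classifier is least confident about, and retrain on them.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy stand-in for grid measurements: benign (label 0) vs. malicious (label 1).
X_benign = rng.normal(0.0, 1.0, size=(500, 8))
X_malicious = rng.normal(1.5, 1.0, size=(500, 8))
X = np.vstack([X_benign, X_malicious])
y = np.concatenate([np.zeros(500), np.ones(500)])

# Baseline binary classifier.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X, y)

def sample_blind_spots(X_class, label, n_candidates=5000, n_keep=200):
    """Learn the characteristics of one class, sample candidates, and return
    the examples the current classifier is least confident about."""
    gen = GaussianMixture(n_components=4, random_state=0).fit(X_class)
    candidates, _ = gen.sample(n_candidates)
    confidence = clf.predict_proba(candidates).max(axis=1)
    keep = candidates[np.argsort(confidence)[:n_keep]]  # lowest confidence first
    return keep, np.full(n_keep, label, dtype=float)

Xb, yb = sample_blind_spots(X_benign, 0.0)
Xm, ym = sample_blind_spots(X_malicious, 1.0)

# Retrain on the original data plus the blind-spot samples.
X_aug = np.vstack([X, Xb, Xm])
y_aug = np.concatenate([y, yb, ym])
clf_robust = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf_robust.fit(X_aug, y_aug)
```

In the paper's power-grid setting the inputs would be grid measurements and the generator a trained neural network rather than a Gaussian mixture, but the select-by-low-confidence augmentation step shown here is the part the abstract describes.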
Original language | American English
---|---
State | Published - 1 Jan 2019
Event | 2019 IEEE International Conference on Big Data (Big Data) - Duration: 1 Jan 2019 → …
Conference
Conference | 2019 IEEE International Conference on Big Data (Big Data)
---|---
Period | 1/01/19 → …
Keywords
- encoding
- generators
- machine learning
- neural networks
- security
- training
EGS Disciplines
- Computer Sciences