Transaction: 46717c060fcd20c252956cbfb525a195f697cab5

Included in block 19,135,955 at 2018/01/20 06:29:00 (UTC).

Transaction overview

Transaction info
transaction_id 46717c060fcd20c252956cbfb525a195f697cab5
ref_block_num 64,961
block_num 19,135,955
ref_block_prefix 432,954,624
expiration 2018-01-20T06:38:54
transaction_num 13
extensions []
signatures 1f6cf301e65101ed11ba475740672772ddb5c43412dd7f8b086674f5741f88e5c00aaca4a96277c3535ecc88efa626268c431be49fd9da0175e54e39974aff2e6a
operations
comment
"parent_author":"",<br>"parent_permlink":"machine",<br>"author":"maadi",<br>"permlink":"how-machine-learning-deep-learning-and-ai-expand-the-threat-landscape",<br>"title":"How Machine Learning,<br> Deep Learning,<br> and AI Expand the Threat Landscape",<br>"body":"![26d554da3681d7d4a14d1c60d10cd9bd.jpg (https:\/\/steemitimages.com\/DQmXHWxvdowJGw9sHcc1a96ykRQA8XgcWRbZrbmkYDdkpLr\/26d554da3681d7d4a14d1c60d10cd9bd.jpg)\nSmart companies are using artificial intelligence (AI) and machine learning (ML) techniques to improve the scale and speed at which they do business. Smart criminals are doing the same.\n\nAs AI and ML become more mainstream,<br> security teams will likely see more adversaries attempting to poison and evade data sets.\n\n\u201cAs the use of these techniques increases,<br> so will the threats,<br>\u201d says Dr. Celeste Fralick,<br> Chief Data scientist and Senior Principal Engineer,<br> McAfee. In its 2018 Threat Predictions report,<br> McAfee Labs predicts an increased use of ML attacks from adversaries over the next year.\n\nThe concept of adversarial machine learning\u2014the study of bad actors attacking analytics\u2014has already been demonstrated by both black and white hat hackers. Bad actors have already used this technology in documented attacks.\n\nFor security teams,<br> the next big challenge is understanding how the adversaries can attack machine learning \u2013 part of the ongoing game of cat and mouse between defenders and attackers.\n\n\u201cMachines will work for anyone,<br> fueling an arms race in machine-supported actions from defenders and attackers,<br>\u201d the McAfee Labs report states.\n\nDefenders Are Smart,<br> But So Are Adversaries\nBecause of the sheer amount of data,<br> machine learning is typically used to detect cyber attacks and the adversary is enticed to specifically attack the analytic model whether he can see it or not. There are a number of ways attackers can manipulate algorithms,<br> including:\n\nInfluence: Attacking the model\u2019s training set \u2013 the sample data that the algorithms use for learning \u2013 to affect the model\u2019s decision-making capability.\nSpecificity: Attacks can be targeted at specific features in the model or be \u201cindiscriminate\u201d across the entire model.\nSecurity integrity and availability: Integrity impacts all data or a sample of that data,<br> while availability overwhelms the system with so many false positives that security analysts end up ignoring the signal or increase its thresholds so as to not alarm,<br> unknowingly allowing malware to enter.\nEvasion and Poisoning: Evasion increases false negatives using perturbations or false data,<br> and poisoning impacts the data used to train the model.\nOrganizations need to stay one step ahead of this evolving threat. One way to do that,<br> Fralick says,<br> is to include analytic vulnerability checks of machine learning or deep learning models during development. \u201cYou need to plug the holes before products are shipped,<br>\u201d she explains. \u201cPut analytic risk mitigation into development to predict,<br> evade,<br> learn and adapt from these types of attacks.\u201d\n\nMcAfee recommends a systemic and holistic approach against these threats,<br> coupling machine learning models with process improvements in internal analytic development to increase protection against evasion,<br> poisoning,<br> or other types of attacks. 
Using AI to augment the skills and expertise of analysts \u2013 a \u201chuman-machine teaming\u201d approach \u2013 will be more effective than just machines alone.\n\n\u201cHuman-machine teaming has tremendous potential to swing the advantage back to the defenders,<br> and our job during the next few years is to make that happen,<br>\u201d the McAfee Labs report states. \u201cTo do that,<br> we will have to protect machine detection and correction models from disruption,<br> while continuing to advance our defensive capabilities faster than our adversaries can ramp up their attacks.\u201d\n\nTo learn more about how humans and machines can team up to defend against attacks,<br> visit https:\/\/www.mcafee.com\/us\/solutions\/machine-learning.aspx.",<br>"json_metadata":" \"tags\":[\"machine\",<br>\"learning\",<br>\"programming\",<br>\"supervised\" ,<br>\"image\":[\"https:\/\/steemitimages.com\/DQmXHWxvdowJGw9sHcc1a96ykRQA8XgcWRbZrbmkYDdkpLr\/26d554da3681d7d4a14d1c60d10cd9bd.jpg\" ,<br>\"links\":[\"https:\/\/www.mcafee.com\/us\/solutions\/machine-learning.aspx\" ,<br>\"app\":\"steemit\/0.1\",<br>\"format\":\"markdown\" "
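The post describes the poisoning and evasion attacks only in words, so here is a minimal sketch of what they look like against a toy detector, together with the kind of pre-ship vulnerability check Fralick recommends. It is not part of the original transaction: it assumes Python with numpy and scikit-learn, and the synthetic dataset, flip rate, perturbation size, and pass/fail margin are illustrative choices rather than anything from the McAfee report.

```python
# Illustrative sketch only (assumed stack: Python 3, numpy, scikit-learn).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "benign (0) vs. malicious (1)" telemetry standing in for real data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline detector trained on clean data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean test accuracy:", clean_model.score(X_test, y_test))

# --- Poisoning: influence the training set ---------------------------------
# Flip the labels of 10% of the training samples, as an attacker who can
# inject mislabeled data might. Random flips on a toy set degrade the model
# only modestly; a targeted attacker would flip the points that matter most.
y_poisoned = y_train.copy()
flip_idx = rng.choice(len(y_poisoned), size=len(y_poisoned) // 10, replace=False)
y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("accuracy after 10% label-flip poisoning:",
      poisoned_model.score(X_test, y_test))

# --- Evasion: perturb inputs at inference time ------------------------------
# For a linear model the gradient of the score w.r.t. the input is just the
# weight vector, so stepping against sign(w) (an FGSM-style step) pushes
# malicious samples toward the "benign" side and creates false negatives.
w = clean_model.coef_.ravel()
malicious = X_test[y_test == 1]
step = 0.5 * np.abs(malicious).mean()
evasive = malicious - step * np.sign(w)
rate_before = clean_model.predict(malicious).mean()
rate_after = clean_model.predict(evasive).mean()
print("detection rate before/after perturbation:", rate_before, rate_after)

# --- A crude pre-ship "analytic vulnerability check" ------------------------
# The kind of development-time gate the post argues for: flag the model if a
# small perturbation budget cuts the detection rate by more than a set margin.
margin = 0.25
verdict = "PASS" if (rate_before - rate_after) < margin else "FAIL: model is easy to evade"
print("vulnerability check:", verdict)
```

In a real pipeline the same idea would run as an adversarial-robustness test suite against the model actually being shipped, not a toy logistic regression, but the structure (poison, perturb, measure the drop, gate the release) is the same.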
* The API used to generate this page is provided by @steemchiller.