Complex Networks


Complex Networks
Ljiljana Trajković
ljilja@cs.sfu.ca
Communication Networks Laboratory
http://www.sfu.ca/~ljilja/cnl
School of Engineering Science
Simon Fraser University, Vancouver, British Columbia, Canada

Simon Fraser University, Burnaby Campus
ICSSE 2019, Dong Hoi City, Quang Binh, Vietnam, July 20, 2019

Roadmap
§ Introduction
§ Data processing
§ Machine learning models
§ Experimental procedure
§ Performance evaluation
§ Conclusion and references

Roadmap
§ Introduction:
  § Complex networks
  § Machine learning
§ Data processing
§ Machine learning models
§ Experimental procedure
§ Performance evaluation
§ Conclusion and references

Complexity Sciences

Complex Networks

The Internet
§ https://en.wikipedia.org/wiki/Complex_network#/media/File:Internet_map_1024.jpg
§ By The Opte Project, originally from the English Wikipedia (curid=1538544)

The Internet
§ Partial map of the Internet based on the January 15, 2005 data found on opte.org. Each line is drawn between two nodes, representing two IP addresses. The length of the lines is indicative of the delay between those two nodes. This graph represents less than 30% of the Class C networks reachable by the data collection program in early 2005.
§ Lines are color-coded according to their corresponding RFC 1918 allocation as follows: dark blue: net, ca, us; green: com, org; red: mil, gov, edu; yellow: jp, cn, tw, au, de; magenta: uk, it, pl, fr; gold: br, kr, nl; white: unknown

Scale-Free Networks

Scale-Free Network
§ An example of a complex scale-free network. The graph represents the metadata of thousands of archive documents, documenting the social network of hundreds of League of Nations personnel.
§ M. Grandjean, "La connaissance est un réseau" ("Knowledge is a network"), Les Cahiers du Numérique, vol. 10, no. 3, pp. 37–54, 2014.

Dynamical Systems

Dynamical Systems
§ The Rössler attractor is a chaotic attractor that arises as a solution of the system:
  $\dot{x} = -y - z$
  $\dot{y} = x + a y$
  $\dot{z} = b + z(x - c)$
§ Proposed by Rössler in 1976
§ Often called the Rössler system
§ Here, $(x, y, z) \in \mathbb{R}^3$ are dynamical variables defining the phase space and $(a, b, c) \in \mathbb{R}^3$ are parameters
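For illustration, the system can be integrated numerically with SciPy. A minimal sketch, assuming the classic parameter values $a = b = 0.2$, $c = 5.7$ (not given on the slide), which are known to produce the chaotic attractor:

```python
# Minimal sketch: numerically integrating the Rössler system with SciPy.
# Parameter values a = 0.2, b = 0.2, c = 5.7 are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

def rossler(t, state, a=0.2, b=0.2, c=5.7):
    x, y, z = state
    return [-y - z, x + a * y, b + z * (x - c)]

sol = solve_ivp(rossler, (0.0, 200.0), [1.0, 1.0, 1.0], dense_output=True)
t = np.linspace(0.0, 200.0, 10_000)
x, y, z = sol.sol(t)  # trajectory tracing the attractor in phase space
```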

Roadmap
§ Introduction:
  § Complex networks
  § Machine learning
§ Data processing
§ Machine learning models
§ Experimental procedure
§ Performance evaluation
§ Conclusion and references

Machine Learning
§ Using machine learning techniques to detect network intrusions is an important topic in cybersecurity.
§ Machine learning algorithms have been used to successfully classify network anomalies and intrusions.
§ Supervised machine learning algorithms:
  § Support vector machine: SVM
  § Long short-term memory: LSTM
  § Gated recurrent unit: GRU
  § Broad learning system: BLS

Roadmap
§ Introduction
§ Data processing:
  § BGP datasets
  § NSL-KDD dataset
§ Machine learning models
§ Experimental procedure
§ Performance evaluation
§ Conclusion and references

BGP and NSL-KDD Datasets
§ Used to evaluate anomaly and intrusion detection techniques
§ BGP:
  § routing records from Réseaux IP Européens (RIPE)
  § BCNET regular traffic
§ NSL-KDD:
  § an improvement of the KDD'99 dataset
  § used in various intrusion detection systems (IDSs)

BGP Datasets
§ Anomalous data: days of the attack
§ Regular data: two days prior and two days after the attack
§ 37 numerical features extracted from BGP update messages
§ Best performance: 60% of the data for training and 40% for testing

| Dataset    | Regular (min) | Anomaly (min) |
|------------|---------------|---------------|
| Code Red I | 6,599         | 600           |
| Nimda      | 3,678         | 3,521         |
| Slammer    | 6,330         | 869           |

NSL-KDD Dataset
§ KDDTrain+ and KDDTest+: training and test datasets
§ KDDTest-21: a subset of the KDDTest+ dataset that does not include records correctly classified by all 21 models

|            | Regular | DoS    | U2R | R2L   | Probe  | Total   |
|------------|---------|--------|-----|-------|--------|---------|
| KDDTrain+  | 67,343  | 45,927 | 52  | 995   | 11,656 | 125,973 |
| KDDTest+   | 9,711   | 7,458  | 200 | 2,754 | 2,421  | 22,544  |
| KDDTest-21 | 2,152   | 4,342  | 200 | 2,754 | 2,402  | 11,850  |

Roadmap
§ Introduction
§ Data processing
§ Machine learning models:
  § Deep learning: multi-layer recurrent neural networks
  § Broad learning system
§ Experimental procedure
§ Performance evaluation
§ Conclusion and references

Deep Learning Neural Network
§ 37 (BGP) or 109 (NSL-KDD) recurrent (RNN) input nodes, with 80 (FC1), 32 (FC2), and 16 (FC3) fully connected (FC) hidden nodes; a sketch follows below.
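As a rough PyTorch sketch of this topology (not the authors' code): the FC layer sizes follow the slide, while the LSTM hidden size, ReLU activations, and sigmoid binary output are assumptions.

```python
# Sketch of the described topology: an RNN layer over the 37 BGP
# features, followed by FC layers of 80, 32, and 16 nodes.
import torch
import torch.nn as nn

class AnomalyRNN(nn.Module):
    def __init__(self, n_features=37):       # 37 BGP / 109 NSL-KDD features
        super().__init__()
        self.rnn = nn.LSTM(n_features, 64, batch_first=True)  # size assumed
        self.fc = nn.Sequential(
            nn.Linear(64, 80), nn.ReLU(),     # FC1: 80 nodes
            nn.Linear(80, 32), nn.ReLU(),     # FC2: 32 nodes
            nn.Linear(32, 16), nn.ReLU(),     # FC3: 16 nodes
            nn.Linear(16, 1),                 # regular vs. anomaly
        )

    def forward(self, x):                     # x: (batch, seq_len, n_features)
        out, _ = self.rnn(x)
        return torch.sigmoid(self.fc(out[:, -1]))  # score at last time step

model = AnomalyRNN()
scores = model(torch.randn(8, 10, 37))        # 8 windows of 10 time steps
```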

Long Short-Term Memory: LSTM
§ Repeating module for the Long Short-Term Memory (LSTM) neural network (figure)

Long Short-Term Memory: LSTM
§ The outputs of the forget gate $f_t$, the input gate $i_t$, and the output gate $o_t$ at time $t$ are:
  $f_t = \sigma(W_{xf} x_t + b_{xf} + U_{hf} h_{t-1} + b_{hf})$
  $i_t = \sigma(W_{xi} x_t + b_{xi} + U_{hi} h_{t-1} + b_{hi})$
  $o_t = \sigma(W_{xo} x_t + b_{xo} + U_{ho} h_{t-1} + b_{ho})$,
where:
§ $\sigma(\cdot)$: logistic sigmoid function
§ $x_t$: current input vector
§ $h_{t-1}$: previous output vector
§ $W_{xf}$, $U_{hf}$, $W_{xi}$, $U_{hi}$, $W_{xo}$, and $U_{ho}$: weight matrices
§ $b_{xf}$, $b_{hf}$, $b_{xi}$, $b_{hi}$, $b_{xo}$, and $b_{ho}$: bias vectors

Long Short-Term Memory: LSTM
§ Output $i_t$ of the input gate decides whether the information will be stored in the cell state. The sigmoid function is used to update the information.
§ Cell state $c_t$:
  $c_t = f_t \circ c_{t-1} + i_t \circ \tanh(W_{xc} x_t + b_{xc} + U_{hc} h_{t-1} + b_{hc})$,
where:
§ $\circ$ denotes element-wise multiplication
§ the $\tanh$ function is used to create a vector for the next cell state
§ Output of the LSTM cell:
  $h_t = o_t \circ \tanh(c_t)$
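Written out directly, the LSTM cell equations above translate to a few lines of NumPy. This sketch folds each gate's two bias vectors into one, which is algebraically equivalent:

```python
# Direct NumPy transcription of the LSTM equations above (a sketch;
# weight dictionaries W, U, b must be supplied by the caller).
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_cell(x_t, h_prev, c_prev, W, U, b):
    # W, U, b hold one entry per gate: "f" (forget), "i" (input),
    # "o" (output), and "c" (candidate cell state).
    f = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])   # forget gate
    i = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])   # input gate
    o = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])   # output gate
    c = f * c_prev + i * np.tanh(W["c"] @ x_t + U["c"] @ h_prev + b["c"])
    h = o * np.tanh(c)                                     # cell output h_t
    return h, c
```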

Gated Recurrent Unit: GRU
§ Repeating module for the Gated Recurrent Unit (GRU) neural network (figure)

Gated Recurrent Unit: GRU
§ The outputs of the reset gate $r_t$ and the update gate $z_t$ at time $t$ are:
  $r_t = \sigma(W_{xr} x_t + b_{xr} + U_{hr} h_{t-1} + b_{hr})$
  $z_t = \sigma(W_{xz} x_t + b_{xz} + U_{hz} h_{t-1} + b_{hz})$,
where:
§ $\sigma$: sigmoid function
§ $x_t$: input; $h_{t-1}$: previous output of the GRU cell
§ $W_{xr}$, $U_{hr}$, $W_{xz}$, and $U_{hz}$: weight matrices
§ $b_{xr}$, $b_{hr}$, $b_{xz}$, and $b_{hz}$: bias vectors

Gated Recurrent Unit: GRU
§ Output of the GRU cell:
  $h_t = (1 - z_t) \circ n_t + z_t \circ h_{t-1}$,
where:
§ $n_t = \tanh(W_{xn} x_t + b_{xn} + r_t \circ (U_{hn} h_{t-1} + b_{hn}))$
§ $W_{xn}$ and $U_{hn}$: weight matrices
§ $b_{xn}$ and $b_{hn}$: bias vectors
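A matching NumPy sketch of the GRU cell. The paired biases are folded together where possible; in the candidate term they stay separate because the reset gate multiplies only the hidden-side part:

```python
# Direct NumPy transcription of the GRU equations above (a sketch).
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_cell(x_t, h_prev, W, U, b):
    r = sigmoid(W["r"] @ x_t + U["r"] @ h_prev + b["r"])   # reset gate
    z = sigmoid(W["z"] @ x_t + U["z"] @ h_prev + b["z"])   # update gate
    # candidate n_t: "n" is the input-side bias, "hn" the hidden-side bias
    n = np.tanh(W["n"] @ x_t + b["n"] + r * (U["n"] @ h_prev + b["hn"]))
    return (1.0 - z) * n + z * h_prev                      # new state h_t
```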

Roadmap
§ Introduction
§ Data processing
§ Machine learning models:
  § Deep learning: multi-layer recurrent neural networks
  § Broad learning system
§ Experimental procedure
§ Performance evaluation
§ Conclusion and references

Broad Learning System: BLS
§ Module of the Broad Learning System (BLS) algorithm with increments of mapped features, enhancement nodes, and new input data (figure)

Original BLS
§ Matrix $A_n^m$ is constructed from $n$ groups of mapped features $Z$ and $m$ groups of enhancement nodes $H$ as:
  $A_n^m = [\,Z^n \mid H^m\,]$,
  $Z_i = \phi(X W_{e_i} + \beta_{e_i})$, $i = 1, 2, \ldots, n$
  $H_j = \xi(Z^n W_{h_j} + \beta_{h_j})$, $j = 1, 2, \ldots, m$,
where:
§ $\phi$ and $\xi$: projection mappings
§ $W_{e_i}$, $W_{h_j}$: weights
§ $\beta_{e_i}$, $\beta_{h_j}$: bias parameters
§ Modified to include additional mapped features $Z_{n+1}$, enhancement nodes $H_{m+1}$, and/or input data $X_a$
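A minimal NumPy sketch of constructing $A_n^m$; the group counts and sizes are illustrative, and $\tanh$ is assumed for both mappings $\phi$ and $\xi$:

```python
# Sketch: build A = [Z^n | H^m] from random mapped-feature and
# enhancement-node weights (all shapes are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 37))            # e.g., 37 BGP features

n, m, k = 10, 5, 20                        # group counts, nodes per group
Z = [np.tanh(X @ rng.normal(size=(37, k)) + rng.normal(size=k))
     for _ in range(n)]                    # mapped-feature groups Z_i
Zn = np.hstack(Z)                          # Z^n
H = [np.tanh(Zn @ rng.normal(size=(n * k, k)) + rng.normal(size=k))
     for _ in range(m)]                    # enhancement-node groups H_j
A = np.hstack([Zn] + H)                    # A = [Z^n | H^m]
```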

RBF-BLS Extension
§ The RBF function is implemented using a Gaussian kernel:
  $\xi(x) = \exp(-\gamma \lVert x - c \rVert^2)$
§ Weight vectors $W$ of the output $Y = HW$ are deduced from:
  $W = (H^T H)^{-1} H^T Y = H^{+} Y$,
where:
§ $W = [\omega_1, \omega_2, \ldots, \omega_L]$: output weights
§ $H = [\xi_1, \xi_2, \ldots, \xi_L]$: hidden nodes
§ $H^{+}$: pseudoinverse of $H$
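The Gaussian kernel and the pseudoinverse solution are one-liners in NumPy (the shapes below are placeholders). Only these output weights are trained; the randomly generated feature and enhancement weights stay fixed, which is what keeps BLS training fast:

```python
# Sketch: Gaussian RBF kernel and pseudoinverse output weights.
import numpy as np

def rbf(x, c, gamma):
    # xi(x) = exp(-gamma * ||x - c||^2)
    return np.exp(-gamma * np.sum((x - c) ** 2))

rng = np.random.default_rng(1)
H = rng.normal(size=(1000, 120))           # stand-in hidden-node outputs
Y = rng.integers(0, 2, size=(1000, 1)).astype(float)   # labels
W = np.linalg.pinv(H) @ Y                  # W = H^+ Y; no back-propagation
```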

Cascades of Mapped Features
§ Cascade of mapped features (CFBLS): each new group of mapped features is created by using the previous group ($k-1$).
§ Groups of mapped features are formulated as:
  $Z_k = \phi(Z_{k-1} W_{e_k} + \beta_{e_k}) \equiv \phi^k(X;\{W_{e_i}, \beta_{e_i}\}_{i=1}^{k})$, for $k = 1, \ldots, n$
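A sketch of the cascade, with each $Z_k$ computed from $Z_{k-1}$ rather than directly from $X$ ($\tanh$ and the sizes are assumptions):

```python
# Sketch of the CFBLS mapped-feature cascade.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 37))
k, n = 20, 5                               # nodes per group, group count
Z = [X]
for _ in range(n):                         # Z_k from Z_{k-1}, not from X
    W_e = rng.normal(size=(Z[-1].shape[1], k))
    Z.append(np.tanh(Z[-1] @ W_e + rng.normal(size=k)))
Zn = np.hstack(Z[1:])                      # cascaded mapped features Z^n
```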

Cascades of Enhancement Nodes
§ The first group of enhancement nodes in the cascade of enhancement nodes (CEBLS) is generated from the mapped features.
§ The subsequent enhancement nodes are generated from the previous enhancement nodes, creating a cascade:
  $H_u = \xi^u(Z;\{W_{h_i}, \beta_{h_i}\}_{i=1}^{u})$, for $u = 1, \ldots, m$,
where:
§ $W_{h_i}$ and $\beta_{h_i}$: randomly generated
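The analogous sketch for CEBLS, chaining enhancement-node groups (again with assumed $\tanh$ mappings and illustrative sizes):

```python
# Sketch of the CEBLS enhancement-node cascade.
import numpy as np

rng = np.random.default_rng(0)
Zn = rng.normal(size=(1000, 100))          # stand-in mapped features Z^n
k, m = 20, 5                               # nodes per group, group count
H = [np.tanh(Zn @ rng.normal(size=(100, k)) + rng.normal(size=k))]
for _ in range(m - 1):                     # H_u from H_{u-1}: the cascade
    W_h = rng.normal(size=(k, k))
    H.append(np.tanh(H[-1] @ W_h + rng.normal(size=k)))
Hm = np.hstack(H)                          # cascaded enhancement nodes H^m
```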

Roadmap
§ Introduction
§ Data processing
§ Machine learning models
§ Experimental procedure
§ Performance evaluation
§ Conclusion and references

Experimental Procedure
§ Step 1: Normalize training and test datasets.
§ Step 2: Train the RNN and BLS models using 10-fold validation. Tune parameters of the RNN and BLS models.
§ Step 3: Test the RNN and BLS models.
§ Step 4: Evaluate models (as sketched below) based on:
  § accuracy
  § F-Score
RNN: recurrent neural network; BLS: broad learning system
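As a rough illustration of Steps 1 through 4, here is a minimal scikit-learn sketch; the SVM classifier, synthetic data, and min-max normalization are assumptions, since the deck does not specify its preprocessing or tooling beyond Python:

```python
# Sketch of the four-step procedure with placeholder data and classifier.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(2)
X_train, y_train = rng.normal(size=(500, 37)), rng.integers(0, 2, 500)
X_test, y_test = rng.normal(size=(200, 37)), rng.integers(0, 2, 200)

scaler = MinMaxScaler().fit(X_train)                   # Step 1: normalize
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

clf = SVC()
print(cross_val_score(clf, X_train, y_train, cv=10))   # Step 2: 10-fold CV
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)                           # Step 3: test
print(accuracy_score(y_test, y_pred),                  # Step 4: evaluate
      f1_score(y_test, y_pred))
```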

Number of BLS Training Parameters

| Parameter                  | Code Red I | Nimda | Slammer | NSL-KDD |
|----------------------------|------------|-------|---------|---------|
| Mapped features            | 100        | 500   | 100     | 100     |
| Groups of mapped features  | 11         | 2     | 5       | 5       |
| Enhancement nodes          | 500        | 700   | 300     | 100     |
| Incremental learning steps | 10         | 9     | 2       | 3       |

Roadmap
§ Introduction
§ Data processing:
  § BGP datasets
  § NSL-KDD dataset
§ Machine learning models:
  § Deep learning: multi-layer recurrent neural networks
  § Broad learning system
§ Experimental procedure
§ Performance evaluation
§ Conclusion and references

Training Time: RNN Models

Training time (s), BGP datasets:

| Implementation | LSTM2 | LSTM3  | LSTM4  | GRU2   | GRU3   | GRU4   |
|----------------|-------|--------|--------|--------|--------|--------|
| Python (CPU)   |       |        |        |        |        |        |
| Python (GPU)   | 93    | 355.86 | 394.55 | 317.53 | 345.04 | 369.86 |

Training Time: BLS Models

Training time (s), BGP datasets:

| Implementation | BLS | RBF-BLS | CFBLS  | CEBLS  | CFEBLS |
|----------------|-----|---------|--------|--------|--------|
| Python (CPU)   |     |         | 798.13 | 108.23 | 108.14 |
| MATLAB (CPU)   |     |         |        |        | 888.95 |

LSTM Models: BGP Datasets (Python)

| Model | Acc: Code Red I | Acc: Nimda | Acc: Slammer | F-Score: Code Red I | F-Score: Nimda | F-Score: Slammer |
|-------|-----------------|------------|--------------|---------------------|----------------|------------------|
| LSTM2 |                 |            | 92.98        | 92.99               | 85.97          | 72.42            |
| LSTM3 |                 |            | 90.90        | 92.01               | 84.38          | 67.29            |
| LSTM4 |                 |            | 92.49        | 92.22               | 86.18          | 70.72            |

Accuracy (Acc) and F-Score in %.

GRU Models: BGP Datasets (Python)

| Model | Acc: Code Red I | Acc: Nimda | Acc: Slammer | F-Score: Code Red I | F-Score: Nimda | F-Score: Slammer |
|-------|-----------------|------------|--------------|---------------------|----------------|------------------|
| GRU2  |                 |            | 91.88        | 93.33               | 90.90          | 69.42            |
| GRU3  |                 |            | 91.76        | 95.21               | 90.83          | 68.72            |
| GRU4  |                 |            | 92.14        | 92.15               | 90.35          | 70.11            |

Accuracy (Acc) and F-Score in %.

BLS Models: BGP Datasets (Python)

| Model   | Acc: Code Red I | Acc: Nimda | Acc: Slammer | F-Score: Code Red I | F-Score: Nimda | F-Score: Slammer |
|---------|-----------------|------------|--------------|---------------------|----------------|------------------|
| BLS     |                 |            | 87.65        | 75.62               | 68.40          | 57.68            |
| RBF-BLS |                 |            | 91.21        | 90.55               | 70.76          | 64.57            |

Accuracy (Acc) and F-Score in %.

BLS Models: BGP Datasets (Python)

| Model  | Acc: Code Red I | Acc: Nimda | Acc: Slammer | F-Score: Code Red I | F-Score: Nimda | F-Score: Slammer |
|--------|-----------------|------------|--------------|---------------------|----------------|------------------|
| CFBLS  |                 |            | 89.28        | 71.25               | 61.81          | 60.99            |
| CEBLS  |                 |            | 91.01        | 87.71               | 82.43          | 66.38            |
| CFEBLS |                 |            | 86.36        | 71.11               | 57.71          | 55.30            |

Accuracy (Acc) and F-Score in %.

RNN and BLS Models: NSL-KDD Dataset (Python)

| Model | Acc: KDDTest+ | Acc: KDDTest-21 | F-Score: KDDTest+ | F-Score: KDDTest-21 |
|-------|---------------|-----------------|-------------------|---------------------|
| CFBLS | 82.20         | 67.47           | 82.23             | 76.29               |

Accuracy (Acc) and F-Score in %.

Incremental BLS Model: BGP and NSL-KDD Datasets (MATLAB)

| Test dataset | Accuracy (%) | F-Score (%) | Time (s) |
|--------------|--------------|-------------|----------|
| Code Red I   |              | 72.80       |          |
| KDDTest+     | 81.34        | 81.99       | 32.99    |
| KDDTest-21   | 78.70        | 88.06       | 29.71    |

Roadmap
§ Introduction
§ Data processing
§ Machine learning models
§ Experimental procedure
§ Performance evaluation
§ Conclusion and references

Conclusion
§ We evaluated the performance of:
  § LSTM and GRU deep recurrent neural networks with a variable number of hidden layers
  § BLS models that employ the radial basis function (RBF), cascades of mapped features and enhancement nodes, and incremental learning
§ BLS and cascade combinations of mapped features and enhancement nodes achieved comparable performance and shorter training time because of their wide rather than deep structure.

Conclusion
§ BLS models:
  § consist of a small number of hidden layers and adjust weights using the pseudoinverse instead of back-propagation
  § dynamically update weights in the case of incremental learning
  § achieve better-optimized weights for large datasets (NSL-KDD) due to the additional data points
§ While increasing the number of mapped features and enhancement nodes as well as groups of mapped features led to better performance, it required additional memory and training time.

Roadmap
§ Introduction
§ Data processing
§ Machine learning models
§ Experimental procedure
§ Performance evaluation
§ Conclusion and references

References: Datasets
§ BCNET: http://www.bc.net/
§ RIPE RIS raw data: https://www.ripe.net/analyse/internet-measurements/routing-information-service-ris
§ NSL-KDD dataset: https://www.unb.ca/cic/datasets/nsl.html
§ M. Tavallaee, E. Bagheri, W. Lu, and A. A. Ghorbani, "A detailed analysis of the KDD CUP 99 data set," in Proc. IEEE Symp. Comput. Intell. in Security and Defense Appl. (CISDA), Ottawa, ON, Canada, July 2009, pp. 1–6.

References: Intrusion Detection
§ Pandas: https://pandas.pydata.org/
§ PyTorch: https://pytorch.org/
§ Broad learning: http://www.broadlearning.ai/
§ C. M. Bishop, Pattern Recognition and Machine Learning. Secaucus, NJ, USA: Springer-Verlag, 2006.
§ V. Chandola, A. Banerjee, and V. Kumar, "Anomaly detection: a survey," ACM Comput. Surv., vol. 41, no. 3, pp. 15:1–15:58, July 2009.
§ T. A. Tang, L. Mhamdi, D. McLernon, S. A. R. Zaidi, and M. Ghogho, "Deep learning approach for network intrusion detection in software defined networking," in Proc. Wireless Netw. Mobile Commun. (WINCOM), Fez, Morocco, Oct. 2016, pp. 258–263.
§ N. Shone, T. N. Ngoc, V. D. Phai, and Q. Shi, "A deep learning approach to network intrusion detection," IEEE Trans. Emerg. Topics Comput. Intell., vol. 2, no. 1, pp. 41–50, Feb. 2018.

References: Deep Learning
§ R. J. Williams, "Simple statistical gradient-following algorithms for connectionist reinforcement learning," Mach. Learn., vol. 8, no. 3, pp. 229–256, May 1992.
§ S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Comput., vol. 9, no. 8, pp. 1735–1780, Oct. 1997.
§ G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov, "Improving neural networks by preventing co-adaptation of feature detectors," Computing Research Repository (CoRR), abs/1207.0580, pp. 1–18, Jul. 2012.
§ K. Cho, B. van Merriënboer, C. Gülçehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio, "Learning phrase representations using RNN encoder-decoder for statistical machine translation," in Proc. 2014 Conf. Empirical Methods Natural Lang. Process., Doha, Qatar, Oct. 2014, pp. 1724–1734.
§ D. P. Kingma and J. Ba, "Adam: a method for stochastic optimization," in Proc. 3rd Int. Conf. Learn. Representations, San Diego, CA, USA, May 2015, pp. 1–15.
§ K. Greff, R. K. Srivastava, J. Koutník, B. R. Steunebrink, and J. Schmidhuber, "LSTM: a search space odyssey," IEEE Trans. Neural Netw. Learn. Syst., vol. 28, no. 10, pp. 2222–2232, Oct. 2017.
§ C. Yin, Y. Zhu, J. Fei, and X. He, "A deep learning approach for intrusion detection using recurrent neural networks," IEEE Access, vol. 5, pp. 21954–21961, Nov. 2017.

References: Broad Learning System
§ Z. Liu and C. L. P. Chen, "Broad learning system: structural extensions on single-layer and multi-layer neural networks," in Proc. 2017 Int. Conf. Secur., Pattern Anal., Cybern., Shenzhen, China, Dec. 2017, pp. 136–141.
§ C. L. P. Chen and Z. Liu, "Broad learning system: an effective and efficient incremental learning system without the need for deep architecture," IEEE Trans. Neural Netw. Learn. Syst., vol. 29, no. 1, pp. 10–24, Jan. 2018.
§ C. L. P. Chen, Z. Liu, and S. Feng, "Universal approximation capability of broad learning system and its structural variations," IEEE Trans. Neural Netw. Learn. Syst., vol. 30, no. 4, pp. 1191–1204, Apr. 2019.

IWCSN: 2004–2019
§ International Workshop on Complex Systems and Networks (IWCSN): http://iwcsn.eie.polyu.edu.hk/

Publications
§ IEEE Circuits and Systems Magazine, Special Issue on Applications of Complex Networks, vol. 10, no. 3, 2010.

Publications: http://www.sfu.ca/~ljilja
Book chapters:
§ Q. Ding, Z. Li, S. Haeri, and Lj. Trajković, "Application of machine learning techniques to detecting anomalies in communication networks: datasets and feature selection algorithms," in Cyber Threat Intelligence, M. Conti, A. Dehghantanha, and T. Dargahi, Eds., Berlin: Springer, pp. 47–70, 2018.
§ Z. Li, Q. Ding, S. Haeri, and Lj. Trajković, "Application of machine learning techniques to detecting anomalies in communication networks: classification algorithms," in Cyber Threat Intelligence, M. Conti, A. Dehghantanha, and T. Dargahi, Eds., Berlin: Springer, pp. 71–92, 2018.

Publications: http://www.sfu.ca/~ljilja
Recent conference publications:
§ Z. Li, A. L. Gonzalez Rios, G. Xu, and Lj. Trajković, "Machine learning techniques for classifying network anomalies and intrusions," in Proc. IEEE Int. Symp. Circuits and Systems, Sapporo, Japan, May 2019.
§ A. L. Gonzalez Rios, Z. Li, G. Xu, A. Dias Alonso, and Lj. Trajković, "Detecting network anomalies and intrusions in communication networks," in Proc. 23rd IEEE Int. Conf. Intelligent Engineering Systems, Gödöllő, Hungary, Apr. 2019, pp. 29–34.
§ Z. Li, P. Batta, and Lj. Trajković, "Comparison of machine learning algorithms for detection of network intrusions," in Proc. IEEE Int. Conf. Syst., Man, Cybern., Miyazaki, Japan, Oct. 2018, pp. 4248–4253.
§ P. Batta, M. Singh, Z. Li, Q. Ding, and Lj. Trajković, "Evaluation o
