%!TEX root = main.tex

\chapter{Experiment} \label{sec:experiment}

The previous chapter presented two algorithms to calculate Betweenness Centrality (BC). The first was introduced by \citeauthor{Brandes01afaster} in \cite{Brandes01afaster}; for ease of notation, from now on it is called \emph{BBC}. The other, a heuristic version devised by \citeauthor{PuzisZEDB12} \cite{PuzisZEDB12}, is referred to as \emph{HBC} from this point on.


This chapter presents the setup used to compare the running time of the two algorithms. Before performing the analysis, a few things need to be addressed. To analyze the running time, we need to know all the factors that can influence it: the implementation, the type and size of the graph, and the system that runs the algorithms. The following sections address those factors in depth. \cref{sec:experiment_implementation} presents the implementations of BBC and HBC that were run on the physical router and on the server. \cref{sec:experiment_hardware} provides the hardware specifications of the systems we use. \cref{sec:experiment_openwrt} concerns the technical side of porting the program to an embedded device, in our case the Ubiquiti router; it contains many technical notes, which may serve as a guideline on how to build a custom package for an embedded system. \cref{sec:experiment_input} presents the two types of input used in the experiment: randomly generated graphs, and the real topologies of WCNs. Lastly, since the measured running time of the algorithms fluctuates, \cref{sec:experiment_design} presents the method used to obtain the average running time and how the experiment was carried out, e.g.\ how many graphs were generated and how many times each algorithm was run for one specific input.

\section{Implementation} \label{sec:experiment_implementation}

BBC is implemented in the Boost Graph Library\footnote{\url{http://www.boost.org/doc/libs/1_60_0/libs/graph/doc/index.html}} (BGL), a generic library providing implementations of many classical graph algorithms.
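To make the comparison concrete, the core of BBC (a single-source BFS followed by the dependency accumulation shown later in \cref{algo:accummulation_brandes}) can be sketched in a few dozen lines of stand-alone C++. This is a simplified illustration for unweighted, undirected graphs, not the BGL implementation (\texttt{brandes\_betweenness\_centrality}) actually benchmarked in this thesis:

```cpp
#include <queue>
#include <stack>
#include <vector>

// Simplified Brandes betweenness centrality for an unweighted,
// undirected graph given as an adjacency list. Each ordered pair
// (s, t) is counted, so undirected scores come out doubled.
std::vector<double> brandes_bc(const std::vector<std::vector<int>>& adj) {
    const int n = static_cast<int>(adj.size());
    std::vector<double> cb(n, 0.0);
    for (int s = 0; s < n; ++s) {
        std::stack<int> S;                      // vertices by non-increasing distance
        std::vector<std::vector<int>> pred(n);  // shortest-path predecessors
        std::vector<long long> sigma(n, 0);     // number of shortest paths from s
        std::vector<int> dist(n, -1);
        sigma[s] = 1;
        dist[s] = 0;
        std::queue<int> Q;
        Q.push(s);
        while (!Q.empty()) {                    // BFS: single-source shortest paths
            int v = Q.front(); Q.pop();
            S.push(v);
            for (int w : adj[v]) {
                if (dist[w] < 0) { dist[w] = dist[v] + 1; Q.push(w); }
                if (dist[w] == dist[v] + 1) {
                    sigma[w] += sigma[v];
                    pred[w].push_back(v);
                }
            }
        }
        std::vector<double> delta(n, 0.0);      // dependency accumulation step
        while (!S.empty()) {
            int w = S.top(); S.pop();
            for (int v : pred[w])
                delta[v] += static_cast<double>(sigma[v]) / sigma[w] * (1.0 + delta[w]);
            if (w != s) cb[w] += delta[w];
        }
    }
    return cb;
}
```

For example, on the path graph $0 - 1 - 2$ the middle vertex receives a score of 2 (the pair of endpoints is counted in both directions), while the endpoints receive 0.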


From \cref{sec:computation_heuristic}, the implementation of HBC consists of three main steps. The first step divides the main graph into bi-connected components $B_i$; then, for each articulation point in every $B_i$, the link weight is computed, using information from the original graph. Second, for each $B_i$, the traffic matrix $h^i$ is created to represent the \emph{communication intensity} between all pairs of vertices in that $B_i$.


The third and final step is the procedure to calculate betweenness for each $B_i$. Since the original graph is divided into bi-connected components $B_i$, performing the original BBC on each $B_i$ would not generate the correct result. Instead, the accumulation step, as shown in \cref{eq:BC_for_v_heuristic}, multiplies the pair-dependency $\delta_{st}(v)$ by its corresponding communication intensity value $h_{st}^{i}$. For ease of comparison, the pseudo code of the accumulation step for BBC and HBC is presented in \cref{algo:accummulation_brandes} and \cref{algo:accumulation_heuristic}, respectively.


\begin{algorithm}[h]
\caption{Accumulation procedure for BBC} \label{algo:accummulation_brandes}
\begin{algorithmic}[1]
\Procedure{accumulation}{}
\State $c_B[s] \leftarrow c_B[s] + (|S| - 1)$
\For{$v \in V$}
    \State $\delta[v] \leftarrow 0$
\EndFor
\While{$S$ not empty}
    \State pop $w \leftarrow S$
    \For{$v \in Pred[w]$}
        \State $\delta[v] \leftarrow \delta[v] + \frac{\sigma[v]}{\sigma[w]} (1 + \delta[w])$
    \EndFor
    \If{$w \neq s$}
        \State $c_B[w] \leftarrow c_B[w] + \delta[w] + 1$ \Comment{$w$ is a target of $s$ once}
    \EndIf
\EndWhile
\EndProcedure
\end{algorithmic}
\end{algorithm}


\begin{algorithm}
\caption{Accumulation procedure for HBC}\label{algo:accumulation_heuristic}
\begin{algorithmic}[1]
\Procedure{accumulation}{}
\For{$v \in V$}
    \State $\delta[v] \leftarrow 0$
\EndFor
\While{$S$ not empty}
    \State pop $w \leftarrow S$
    \State $h \leftarrow h_{ws}^{i}$ \Comment{communication intensity between $w$ and $s$}
    \State $c_B[s] \leftarrow c_B[s] + h$
    \For{$v \in Pred[w]$}
        \State $\delta[v] \leftarrow \delta[v] + \frac{\sigma[v]}{\sigma[w]} (h + \delta[w])$
    \EndFor
    \If{$w \neq s$}
        \State $c_B[w] \leftarrow c_B[w] + \delta[w] + h$
    \EndIf
\EndWhile
\EndProcedure
\end{algorithmic}
\end{algorithm}
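The first step of HBC relies on finding the articulation points that delimit the bi-connected components. As an illustration (not the implementation used in the experiment), the standard DFS-based Hopcroft--Tarjan detection could be sketched as follows, assuming a connected, undirected, simple graph:

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// DFS-based (Hopcroft-Tarjan) detection of articulation points,
// i.e. the cut vertices that separate bi-connected components.
// Assumes a connected, undirected, simple graph as an adjacency list.
std::vector<int> articulation_points(const std::vector<std::vector<int>>& adj) {
    const int n = static_cast<int>(adj.size());
    int timer = 0;
    std::vector<int> disc(n, -1), low(n, 0), cut;
    std::function<void(int, int)> dfs = [&](int v, int parent) {
        disc[v] = low[v] = timer++;
        int children = 0;
        bool is_cut = false;
        for (int w : adj[v]) {
            if (w == parent) continue;
            if (disc[w] != -1) {
                low[v] = std::min(low[v], disc[w]);      // back edge
            } else {
                dfs(w, v);
                low[v] = std::min(low[v], low[w]);
                // no descendant of w reaches above v: v is a cut vertex
                if (low[w] >= disc[v] && parent != -1) is_cut = true;
                ++children;
            }
        }
        if (parent == -1 && children > 1) is_cut = true; // special rule for the root
        if (is_cut) cut.push_back(v);
    };
    dfs(0, -1);
    return cut;
}
```

For two triangles sharing a single vertex, that shared vertex is the only articulation point found.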


\section{Hardware specifications} \label{sec:experiment_hardware}

\subsection*{Server}

The experiment is run on a machine provided by the Advanced Networking Systems (ANS) lab\footnote{https://ans.disi.unitn.it/}, with the hardware components listed in \cref{tab:specification_computer}.


\begin{table}
\caption{Specification of the server}
\label{tab:specification_computer}
\centering
\begin{tabular}{l l}
\toprule
CPUs: & 2 $\times$ Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40 GHz \\
Memory: & 4 $\times$ 16 GB @ 1866 MHz \\
\bottomrule
\end{tabular}
\end{table}


The machine was running GNU/Linux with kernel 3.16.


\subsection*{WiFi Router}

We use the Ubiquiti NanoStation M2 router to test the performance; its specification is given in \cref{tab:specification_router}. The router is flashed with a firmware image obtained from OpenWrt, as described in the next section.


\begin{table}
\caption{Specification of the Ubiquiti NanoStation M2 router}
\label{tab:specification_router}
\centering
\begin{tabular}{l l}
\toprule
Platform: & Atheros AR7240 \\
CPU: & 390 MHz \\
Flash: & 8 MB \\
RAM: & 32 MB \\
\bottomrule
\end{tabular}
\end{table}


\section{Porting a program to OpenWrt} \label{sec:experiment_openwrt}

OpenWrt\footnote{https://openwrt.org/ -- the most widely used open-source operating system for wireless routers.} is described as ``a Linux distribution for embedded devices''. It allows people to build custom firmware images for many supported devices, and it aids with cross-compiling packages that can be installed on devices running such custom firmware.


Although we could build the firmware image ourselves with the OpenWrt toolchain, a ready-made image for the Ubiquiti NanoStation M2 router is available on the download page\footnote{\url{https://wiki.openwrt.org/toh/hwdata/ubiquiti/ubiquiti_nanostationm2}}, so we use it and flash the device with that firmware.


The source code of the program running on the GNU/Linux machine and of the program running on the Ubiquiti router is the same. To make it possible to run the program on the Ubiquiti router, OpenWrt helps with cross-compiling the source code into a package that can be installed on the router.
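As a rough guideline, cross-compiling a program for the router boils down to describing it as an OpenWrt package and letting the buildroot produce an installable package for the target architecture. The sketch below follows the usual OpenWrt package conventions; the package name \texttt{hbc}, its metadata and paths are illustrative assumptions, not the exact files used in this work:

```make
include $(TOPDIR)/rules.mk

# Hypothetical package name and version for illustration
PKG_NAME:=hbc
PKG_VERSION:=1.0
PKG_RELEASE:=1

include $(INCLUDE_DIR)/package.mk

define Package/hbc
  SECTION:=utils
  CATEGORY:=Utilities
  TITLE:=Betweenness centrality benchmark
  DEPENDS:=+libstdcpp
endef

define Package/hbc/install
	$(INSTALL_DIR) $(1)/usr/bin
	$(INSTALL_BIN) $(PKG_BUILD_DIR)/hbc $(1)/usr/bin/
endef

$(eval $(call BuildPackage,hbc))
```

Placing such a file under \texttt{package/hbc/Makefile} in the buildroot and running \texttt{make package/hbc/compile} would then produce the package for the selected target.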


\section{Input data} \label{sec:experiment_input}

The experiment involves testing with two main sources of input: artificially generated graphs, and real topologies from Wireless Community Networks (WCNs).


\subsection{Synthetic graphs}

For randomly generated graphs, we are interested in three types: Erdos, Power Law, and Community Network graphs. All graphs were generated with the open-source framework \emph{NePA TesT}, made available by \citeauthor{Baldesi2016} \cite{Baldesi2016}, which provides a way to generate synthetic graphs of these different types.


We test on synthetic graphs because we want to know how BBC and HBC perform with different graph structures and sizes. As will be seen in \cref{sec:experiment_result}, the performance of BBC and HBC depends on the structure and properties of each graph type.


\begin{table}
\caption{All synthetic graphs generated for the experiment}
\label{tab:graph_generator}
\centering
\begin{tabular}{l|l|l}
\toprule
Graph type & Number of nodes & Edges / Nodes ratio \\
\midrule
CN & 100 to 1000, in steps of 100 & 1.2 \\
ER & 100 to 1000, in steps of 100 & 3 \\
PL & 100 to 1000, in steps of 100 & 1.2 \\
\bottomrule
\end{tabular}
\end{table}


For every line in \cref{tab:graph_generator}, 30 different graphs were generated per size. For CN graphs, for instance, 10 different sizes multiplied by 30 graphs gives 300 CN graphs in total; over all three graph types we have 900 graphs. We use the notation \emph{CN100}, \emph{PL100}, \emph{ER100} to denote the Community Network, Power Law and Erdos graphs with 100 nodes, respectively. The same notation is used for graphs with a larger number of nodes.


\subsection{Real topologies of WCNs}

In conjunction with the synthetic networks, we also want to evaluate BBC and HBC on real networks, since in the end what matters is which algorithm works better in reality. In this thesis, we focus on three WCNs: Funk Feuer Wien (FFW)\footnote{https://www.funkfeuer.at/} and Funk Feuer Graz (FFG)\footnote{graz.funkfeuer.at/en/index.html} in Austria, and Ninux (NNX)\footnote{wiki.ninux.org/} in Italy.


WCNs are ever evolving, with some current nodes leaving the network while new nodes might join it. \citeauthor{Maccari2015175} provides an in-depth and thorough analysis of these networks over one week \cite{Maccari2015175}. He found that the fluctuations in the number of nodes and in the number of links are small, and that those small changes come from a small subset of leaf nodes sporadically leaving and entering the network. Since the fluctuations happen at peripheral nodes, the shortest paths between pairs of nodes did not vary much from one time period to another: peripheral leaf nodes do not lie on the shortest paths used when routing for other nodes. Hence the shortest-path based betweenness measurement varies only slightly. Other measurements were taken as well, but the conclusion of the paper is that, over a week, the network features of these real WCNs are stable. Based on this analysis, we limit the performance testing of BBC and HBC to one snapshot of each WCN in consideration.


\section{Experiment Design} \label{sec:experiment_design}

The previous sections presented all the components of the experiment: the input data, composed of synthetic and real graphs; the two algorithms to evaluate, BBC and HBC; and the target systems on which to evaluate their running time. We start this section with the definition of running time. The execution steps of the experiment are presented afterwards, i.e.\ how many times a single algorithm (BBC or HBC) is evaluated for each input graph.


Whenever we mention the running time, or execution time, of the BBC or HBC algorithm, we mean the time the processor spent computing the result of that algorithm. In C++, the standard library provides a function to measure the number of CPU clock ticks consumed by the program.


\begin{figure}
\begin{lstlisting}
#include <time.h>

clock_t begin = clock();

// Run BBC algorithm / HBC algorithm for one graph

clock_t end = clock();

double running_time = (double) (end - begin) / CLOCKS_PER_SEC;
\end{lstlisting}
\caption{Code to calculate the CPU time}
\end{figure}


The experiment consists of running each algorithm 10 times for each input graph. The measurement we want is the running time of each algorithm, and we want it to be as close to the actual value as possible. A single run can be affected by various factors, so we repeat the measurement multiple times to make sure the value we obtain is not a coincidence.
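The per-run times measured with \texttt{clock()} are then reduced to their mean. A minimal sketch (the helper name is ours, not from the thesis code):

```cpp
#include <numeric>
#include <vector>

// Average of the repeated running-time measurements, e.g. the
// 10 runs of one algorithm on one input graph.
double mean_running_time(const std::vector<double>& runs) {
    double total = std::accumulate(runs.begin(), runs.end(), 0.0);
    return total / static_cast<double>(runs.size());
}
```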


\textbf{Note:} Although we run the program on a machine whose CPU has 8 cores and supports hyper-threading, the program does not exploit parallelism: each run uses one single thread from beginning to end.


For example, for the Ninux graph we run BBC 10 times, and then we run HBC 10 times. All running times were recorded for the analysis. \cref{tab:experiment_design} summarizes which input graphs were evaluated on each platform.


\begin{table}
\centering
\begin{tabular}{l|l|l}
\toprule
Graph type & On the server & On the Ubiquiti router \\
\midrule
CN with nodes in the range [100, 1000] & Yes & Yes \\
ER with nodes in the range [100, 1000] & Yes & No \\
PL with nodes in the range [100, 1000] & Yes & No \\
Ninux & Yes & Yes \\
FFG & Yes & Yes \\
FFW & Yes & Yes \\
\bottomrule
\end{tabular}
\caption{Input graphs evaluated on each platform}
\label{tab:experiment_design}
\end{table}