# Internet on FIRE Scripts Repo
This README assumes that:

* you are working on a Unix-like system, so the variable `$HOME` is available;
* all the software will be installed in the `$HOME/src` directory;
* this repository has been cloned to `$HOME/src/iof-tools`.

Please execute the following beforehand:

```shell
mkdir -p $HOME/src
```
## Key pair setup
First of all, we assume that the user has a valid iMinds Authority account. We also assume that the user's public and private keys associated with the iMinds Authority account are located in `~/.ssh/twist.pub` and `~/.ssh/twist.key` respectively (the private key MUST NOT be encrypted). If you don't have the keys already set up, you can follow these instructions:
1. Go to the iMinds Authority account management page and download your certificate by clicking on the "Download Login Certificate" button. Save it with the name `twist.cert`.
2. Extract the public key with the following command:

   ```shell
   openssl x509 -pubkey -noout -in twist.cert > ~/.ssh/twist.pub
   ```

3. Open the `twist.cert` file and copy the private key part into a new file named `twist.protected.key`.
4. Remove the password from the private key:

   ```shell
   openssl rsa -in twist.protected.key -out ~/.ssh/twist.key
   ```
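As a sanity check, you can verify that an extracted public key really matches a private key by comparing the two PEM outputs. This is a sketch only: the `demo.*` file names below are throwaway placeholders, not files from this repository; substitute your own `twist.cert`/`twist.key`.

```shell
# Generate a throwaway certificate/key pair just for the demonstration
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key \
    -out demo.cert -subj "/CN=demo" -days 1 2>/dev/null

# Extract the public key from the certificate (same command as above)
openssl x509 -pubkey -noout -in demo.cert > demo.pub

# Derive the public key from the private key and compare:
# identical output means the pair matches
openssl rsa -in demo.key -pubout 2>/dev/null | diff - demo.pub && echo "keys match"
```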
## Omni

The Omni command line tool is required to perform operations on the remote testbeds. Supported operations include querying for testbed status/available resources, allocating/releasing resources (slices) and creating/deleting experiments.
### Omni software dependencies
`omni` only works with Python version 2, so you should either switch your system-wide installation of Python to version 2, or install Python 2 and then change the first line of the `omni` tool source code (see Omni installation below).

On Ubuntu, install `omni`'s software dependencies by running:

```shell
sudo apt install python-m2crypto python-dateutil python-openssl libxmlsec1 \
    xmlsec1 libxmlsec1-openssl libxmlsec1-dev autoconf
```
For other operating systems, take a look at the official wiki page.
### Omni installation

In order to install `omni`, execute the following commands:

```shell
cd $HOME/src &&
git clone https://github.com/GENI-NSF/geni-tools omni &&
cd omni
```

If you are using Python version 3 and you don't want to switch system-wide to Python 2, edit the first line of the `omni` source file and change it to point to a Python 2 interpreter (e.g., `#!/usr/bin/env python2`).

You can check that `omni` has been installed correctly by running it. The command should print something that resembles the following:

```
omni: GENI Omni Command Line Aggregate Manager Tool Version 2.11
Copyright (c) 2011-2016 Raytheon BBN Technologies
```
## Omni configuration file
The `omni_config` file provided in this repository is a template of the configuration file. Before running any other `omni` command, this template file must be modified in order to adapt it to the local host environment.

The users whose public keys will be installed on the testbed's nodes are listed (as a comma-separated list) in the value of the `users` key in the `omni_config` file. For each user listed in the `users` key, there is a corresponding section (named after the user name) containing the specific configuration for that particular user. For example, in the current template configuration file one of the users is `segata`, and the corresponding configuration section looks like this:

```
urn = urn:publicid:IDN+wall2.ilabt.iminds.be+user+segata
keys = ~/.ssh/twist.pub
```

The value of the `keys` field must be modified to point to the public key of the corresponding user.
In case you need to add a new user, these are the required steps:

1. append the new user name to the comma-separated list of the `users` key in the `omni_config` file;
2. add to the `omni_config` file a new section for the new user;
3. commit and push the new `omni_config` file.
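For instance, adding a hypothetical user `alice` would result in an `omni_config` that looks roughly like this (a sketch only: the `[omni]` section name is assumed from omni's standard configuration format, and the URN authority and key paths in your file will differ):

```ini
[omni]
users = segata, alice

[alice]
urn = urn:publicid:IDN+wall2.ilabt.iminds.be+user+alice
keys = ~/.ssh/twist.pub
```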
## Testbed resource reservation
You can use jFed directly to reserve nodes; if you plan on using a lot of nodes, the RSPEC generation scripts described below can ease this step.
RSPEC files (extension `.rspec`) are XML files that describe which nodes to allocate in a given testbed. For the TWIST and w.iLab1 testbeds the `.rspec` files can be generated automatically using the `gen-rspec.py` script. The script supports the following command line parameters:

* `-t` (`--testbed`): specifies which testbed the RSPEC will be generated for. Use `twist` for the TWIST testbed, `wall1` for VirtualWall1, and `wall2` for VirtualWall2. It is possible to specify a comma-separated list of testbeds, e.g. `wall1,wall2`;
* `-f` (`--filter`): comma-separated list of node name prefixes. Only the available nodes whose name starts with one of the specified prefixes are inserted in the generated RSPEC. By default all the available nodes are used for generating the RSPEC file;
* `-n` (`--nodes`): comma-separated list of node names. Only the available nodes whose name is listed with the `-n` option are inserted in the RSPEC file. By default all the available nodes are used. The `-n` option takes precedence over `-f`;
* `-w` (`--hardware`): comma-separated list of hardware types (e.g., `pcgen05`). To know the type of hardware, look inside the Virtual Walls webpage or inside jFed.
For example, an RSPEC containing all the available nodes in the TWIST testbed can be generated with the following command:

```shell
./gen-rspec.py -t twist > twist_all.rspec
```

Instead, an RSPEC containing all the nuc nodes in the TWIST testbed can be generated with the following command:

```shell
./gen-rspec.py -t twist -f nuc > twist_nuc.rspec
```

An RSPEC containing only nuc4 and nuc6 from the TWIST testbed can be generated with the following command:

```shell
./gen-rspec.py -t twist -n nuc4,nuc6 > twist_nuc_4_6.rspec
```

An RSPEC containing nodes of hardware type `pcgen05` from both the VirtualWall1 and the VirtualWall2 testbeds can be generated with the following command:

```shell
./gen-rspec.py -t wall1,wall2 -w pcgen05 > iof.rspec
```
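The node-selection rules can be sketched as follows, assuming (as the option list suggests) that the explicit `-n` node list takes precedence over the `-f` prefix filter. This is an illustration of the documented behaviour, not `gen-rspec.py`'s actual code:

```python
# Sketch of the node-selection precedence described above (not the real script).
def select(available, names=None, prefixes=None):
    if names:  # -n: explicit node list takes precedence
        return [n for n in available if n in names]
    if prefixes:  # -f: keep nodes whose name starts with a given prefix
        return [n for n in available if n.startswith(tuple(prefixes))]
    return list(available)  # default: all available nodes

print(select(["nuc4", "nuc6", "tplink1"], prefixes=["nuc"]))       # ['nuc4', 'nuc6']
print(select(["nuc4", "nuc6"], names=["nuc6"], prefixes=["nuc"]))  # ['nuc6']
```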
Note that, in any case, a node is inserted in the RSPEC only if it is available at the moment the `gen-rspec.py` command is executed. For this reason the suggested best practice is to execute `gen-rspec.py` just before allocating the resources using the `reserve.py` script.
One simple way of reserving the resources is to open the generated `.rspec` file inside jFed and click on `Run`. This is also the safest option, as the `reserve.py` script is still under development.
The `reserve.py` command can be used to allocate the nodes specified in an `.rspec` file and to release resources previously allocated. The command supports the following parameters:

* `-t` (`--testbed`): specifies in which testbed to allocate the nodes. The testbed specified here must match the testbed used in the `.rspec` file specified with the `-f` parameter. Use `twist` for the TWIST testbed and `wilab` for w.iLab1;
* `-d` (`--duration`): an integer value that specifies how many hours the nodes will be reserved for. The minimum value currently supported is 3;
* `-n` (`--name`): specifies the name that identifies the experiment. Every experiment whose allocation time overlaps must have a unique name;
* `-f` (`--rspec`): specifies the path to the `.rspec` file generated with the `gen-rspec.py` script;
* `--project`: specifies the project the experiment belongs to (by default …).

By default `reserve.py` allocates the resources specified in the `.rspec` file; the same command can also be used to release previously allocated resources using the `-r` flag.
For example, an experiment called `iofexp1` that allocates, in the Wall1 testbed, the nodes specified in the file `iof.rspec` for 4 hours can be created with the following command:

```shell
./reserve.py -t wall1 -d 4 -n iofexp1 -f iof.rspec
```

Instead, the resources allocated in `iofexp1` can be released with the following command:

```shell
./reserve.py -t wall1 -d 4 -n iofexp1 -f iof.rspec -r
```
The command queries for the status of the testbed every 10 seconds, and reports when everything is up and running.
Note that the `reserve.py` script currently works only when a single testbed is involved. In case of an `.rspec` file with nodes from multiple testbeds, the operation needs to be performed twice. Support for this is under development.
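The wait-and-poll behaviour (query the testbed status every 10 seconds until everything is up and running) can be sketched generically as below. This is not `reserve.py`'s actual code; `check()` is a stand-in for the real testbed status query:

```python
import time

def wait_until_ready(check, interval=10, max_tries=360, sleep=time.sleep):
    """Poll check() every `interval` seconds until it succeeds or we give up."""
    for _ in range(max_tries):
        if check():
            return True
        sleep(interval)
    return False

# Toy check that reports "ready" on the third poll (sleep stubbed out)
state = iter([False, False, True])
print(wait_until_ready(lambda: next(state), sleep=lambda s: None))  # True
```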
## Generating SSH and Ansible config
After generating the `.rspec` file, the `gen-config.py` script can generate the SSH and the Ansible configuration files needed to access the nodes of the testbeds. To do so, simply run:

```shell
./gen-config.py -r <rspec files> -u <username> -k <identity file>
```

The identity file is the private key or the certificate obtained after getting an account from the iMinds authority; this file will be copied under the current directory. The username is your username on the testbed.
The script will generate:

* `ssh-config`: the configuration file to be given to the SSH command (e.g., `ssh -F ssh-config ...`). This defines the names of the hosts as `node<i>`, with `i` going from 0 to N-1. To connect to one host, you can thus run `ssh -F ssh-config node0`. To connect to the nodes, the configuration uses a proxy node with a public IP address, which is called `proxy0`;
* `ssh-config-no-proxy`: the same configuration file as `ssh-config`, but without the `proxy0` proxy. This can be used when run on a testbed node;
* `ansible.cfg`: the Ansible configuration file;
* `ansible-hosts`: the Ansible inventory (list of nodes). In this file the group of nodes reserved for the experiments is named `nodes`. To test that this is properly working, try with `ansible nodes -m shell -a "uptime"`.

The filenames of the configuration files can be changed via command line parameters.
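To give an idea of the result, a generated `ssh-config` typically contains entries along these lines. This is an illustrative sketch only: the host names follow the `node0`/`proxy0` naming used above, while the IP addresses, user and key path are placeholders, and the generated file may use a different proxying directive:

```
Host proxy0
    HostName <public-ip-of-proxy>
    User <username>
    IdentityFile <identity file>

Host node0
    HostName <private-ip-of-node0>
    User <username>
    IdentityFile <identity file>
    ProxyJump proxy0
```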
## Setting up the testing environment on the nodes
The process of setting up the testing environment on the nodes is composed of two steps. The first one takes care of installing all the needed software and tweaks some system parameters; it is run from your local machine. The second step configures `node0` as the master node for the experiments and correctly sets up the syslog collection system on that node; it is also run from your local machine. If you want, you can automate the whole procedure by executing the `setup-nodes-environment.sh` script.
To test the installation, run from your local machine (do so only if you have reserved a few nodes):

```shell
ansible nodes -m shell -a "~/iof-bird-daemon/bird --version"
```

The result should be the version of the bird daemon for each node in the testbed.
## Retrieving CPU and network info
To retrieve CPU and interface information for all the nodes in the testbed, run the `get-node-info.sh` script. This will create a directory named `cpu_info` containing one file for each node in the testbed. The information can be used within Python programs using the `nodes_info::NodesInfo` class. See the unit test `test_nodes_info.py` for an example usage.

If you used `setup-nodes-environment.sh` in the previous step, the information has already been retrieved by the script. If you want to do it by hand, be sure to delete the `cpu_info` directory first.
## Topologies and BGP configurations
This section describes the tools that are used to generate the network topologies to test and the corresponding `bird` configuration files.
### Chain gadget topology
This tool generates chain gadget topologies as described in the Fabrikant and Rexford paper *There's something about MRAI: Timing diversity can exponentially worsen BGP convergence*. The tool is composed of two files:

* `chain_gadget.py`: the main library, which exposes the `gen_chain_gadget` method;
* `gen_chain_gadget.py`: a script that invokes the `gen_chain_gadget` method of the library and writes the graph to a `.graphml` output file.

The parameters that the method accepts as input are the following (the parameters of the script have different names, but the same meaning):
* `n_rings`: the number of rings to generate in the topology. For example, the number of rings in Figs. 1 and 3 in the paper is 3. The rings connected together form the chain;
* `n_inner`: the number of inner nodes. Each ring has inner nodes (marked with `Y_i` in the paper). The topology in Fig. 1 in the paper has only 1 inner node per ring, while Fig. 3 has 3;
* `add_outer`: if set to `true`, the tool will generate outer nodes as well (nodes marked with `Z_i` in the paper). The topology in Fig. 1 in the paper has no outer nodes, while Fig. 3 has 4. The number of outer nodes is derived automatically, and it is simply the number of inner nodes plus 1;
* `node_type`: the node type to assign to nodes;
* `edge_type`: the edge type to assign to edges; possible values include `peer`, the default;
* `set_timer`: if set to `true`, the tool will compute the MRAI timer for the nodes, so that the automatic BGP configuration tool can use them during the generation phase. The timer is assigned an exponentially decreasing value, starting with the default of 30 s. The left-most ring (according to the graphical description of the topology in the paper) has the highest timer; each ring's timer is halved with respect to the one of the ring on its left.
As an example, if you want to generate an eight-ring Fabrikant topology:

```shell
cd graphGenerator/fabrikant &&
python3 gen_chain_gadget.py -r 8 -i 1 -t M -w OUTPUTFILE.graphml
```
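The `set_timer` halving scheme can be sketched numerically; this is an illustration of the rule described above, not the generator's code:

```python
# MRAI per ring: the left-most ring keeps BGP's 30 s default, and each ring
# to its right halves the timer of the ring on its left.
DEFAULT_MRAI = 30.0

def ring_timers(n_rings, default=DEFAULT_MRAI):
    return [default / (2 ** i) for i in range(n_rings)]

print(ring_timers(4))  # [30.0, 15.0, 7.5, 3.75]
```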
### AS graph generator
This tool generates graphs resembling the Internet BGP speaker topology. Generation is as easy as typing:

```shell
python3 generate.py <number_of_nodes> <number_of_graphs>
```
### MRAI setter

This tool sets the MRAI values on a graphml topology using a specific strategy. You can look at the README file in the `mrai_setter` folder for a complete explanation of the arguments.
### Bird Policy file generator
If you want to simulate a chain gadget topology, you must also generate a Bird policy file. This generator implements the routing policies needed for the correct functioning of the Fabrikant topologies. The policy generator will also add three nodes needed to manage the routing change in the topology.

It is mandatory to have a single destination route configured in the graph to be announced. If you have more than one (because you added them to correctly calculate the DPC values), you need to remove them by hand, editing the graphml file and deleting the "destination" entries on every node (except the last one). If you plan to use the Elmokashfi generator, you can skip this step.
To generate the policy file, use the tool as follows:

```shell
cd birdPolicyGenerator &&
python3 gen_bird_preferences.py -g <graph_name>
```
### Bird Config file generator
This tool, available in the `confFileGenerator` folder, can be used to generate the Bird configuration files to deploy on the testbed. You can refer to the tool's README for a complete explanation of the different options.
## Experiment deployment and execution
To deploy an experiment on the testbed, a mix of Ansible playbooks and various scripts is needed. You'll need:

* a set of nodes reserved on the testbed;
* the output directory of the Bird policy generator tool, containing the configuration files of the selected topology to be tested.
If you are testing a Fabrikant gadget topology, only two nodes are needed. If you are testing an Elmokashfi topology, the total number of cores needed depends on the number of Autonomous Systems in the topology. We tried topologies of up to 4000 Autonomous Systems, using a 6:1 ratio (6 ASes on a single core).
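At that 6:1 ratio, a quick way to size a reservation is the back-of-the-envelope calculation below (a sketch only; how many cores each testbed node provides depends on its hardware type):

```python
import math

def cores_needed(n_as, as_per_core=6):
    # 6 ASes share one core, so round the quotient up
    return math.ceil(n_as / as_per_core)

print(cores_needed(4000))  # 667
```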
### Deployment of the topology
1. Copy the Bird config file directory in …
2. Use the `./deploy-experiment.sh` script to automate all the deployment steps.
### Running the experiment
After you have successfully deployed the experiment files, you can connect to the control node to run the experiment:

```shell
ssh -F ssh-config node0
```
From the control node, execute the `./run-experiment.sh` script. You'll need to specify some arguments:

* `-a ASNumber`: this flag specifies which AS is going to trigger the change in the topology;
* `-n ASNumber`: this flag specifies the adjacency that will be changed. If you want to trigger the change on AS 10 over the adjacency with AS 15, the command line will be `-a 10 -n 15`. If you don't specify a neighbor, the first one will be selected;
* `-r runs`: this flag specifies the number of runs to execute. On each run the script will:
    * start, via Ansible, the bird process on all the nodes;
    * check that all the bird processes and adjacencies are OK;
    * wait for the topology to converge;
    * trigger the change on the network;
    * wait for the topology to converge again;
    * collect all the relevant logs;
    * kill all Bird processes.