"""

    Mininet: A simple networking testbed for OpenFlow/SDN!

author: Bob Lantz (rlantz@cs.stanford.edu)
author: Brandon Heller (brandonh@stanford.edu)

Mininet creates scalable OpenFlow test networks by using
process-based virtualization and network namespaces.

Simulated hosts are created as processes in separate network
namespaces. This allows a complete OpenFlow network to be simulated on
top of a single Linux kernel.

Each host has:

A virtual console (pipes to a shell)
A virtual interface (half of a veth pair)
A parent shell (and possibly some child processes) in a namespace

Hosts have a network interface which is configured via ifconfig/ip
link/etc.

This version supports both the kernel and user space datapaths
from the OpenFlow reference implementation (openflowswitch.org)
as well as OpenVSwitch (openvswitch.org.)

In kernel datapath mode, the controller and switches are simply
processes in the root namespace.

Kernel OpenFlow datapaths are instantiated using dpctl(8), and are
attached to one side of a veth pair; the other side resides in the
host namespace. In this mode, switch processes can simply connect to the
controller via the loopback interface.

In user datapath mode, the controller and switches can be full-service
nodes that live in their own network namespaces and have management
interfaces and IP addresses on a control network (e.g. 192.168.123.1,
currently routed although it could be bridged.)

In addition to a management interface, user mode switches also have
several switch interfaces, halves of veth pairs whose other halves
reside in the host nodes that the switches are connected to.

Consistent, straightforward naming is important in order to easily
identify hosts, switches and controllers, both from the CLI and
from program code. Interfaces are named to make it easy to identify
which interfaces belong to which node.

The basic naming scheme is as follows:

    Host nodes are named h1-hN
    Switch nodes are named s1-sN
    Controller nodes are named c0-cN
    Interfaces are named {nodename}-eth0 .. {nodename}-ethN

Note: If the network topology is created using mininet.topo, then
node numbers are unique among hosts and switches (e.g. we have
h1..hN and sN..sN+M) and also correspond to their default IP addresses
of 10.x.y.z/8 where x.y.z is the base-256 representation of N for
hN. This mapping allows easy determination of a node's IP
address from its name, e.g. h1 -> 10.0.0.1, h257 -> 10.0.1.1.

Note also that 10.0.0.1 can often be written as 10.1 for short, e.g.
"ping 10.1" is equivalent to "ping 10.0.0.1".

Currently we wrap the entire network in a 'mininet' object, which
constructs a simulated network based on a network topology created
using a topology object (e.g. LinearTopo) from mininet.topo or
mininet.topolib, and a Controller which the switches will connect
to. Several configuration options are provided for functions such as
automatically setting MAC addresses, populating the ARP table, or
even running a set of terminals to allow direct interaction with nodes.

After the network is created, it can be started using start(), and a
variety of useful tasks may be performed, including basic connectivity
and bandwidth tests and running the mininet CLI.

Once the network is up and running, test code can easily get access
to host and switch objects which can then be used for arbitrary
experiments, typically involving running a series of commands on the
hosts.

After all desired tests or activities have been completed, the stop()
method may be called to shut down the network.

"""

import os
import re
import select
import signal
from time import sleep

from mininet.cli import CLI
from mininet.log import info, error, debug, output
from mininet.node import Host, OVSKernelSwitch, Controller
from mininet.link import Link, Intf
from mininet.util import quietRun, fixLimits, numCores
from mininet.util import macColonHex, ipStr, ipParse, netParse, ipAdd
from mininet.term import cleanUpScreens, makeTerms

# Mininet version: should be consistent with README and LICENSE
VERSION = "2.0.0d2"

class Mininet( object ):
    "Network emulation with hosts spawned in network namespaces."

    def __init__( self, topo=None, switch=OVSKernelSwitch, host=Host,
                  controller=Controller, link=Link, intf=Intf,
                  build=True, xterms=False, cleanup=False, ipBase='10.0.0.0/8',
                  inNamespace=False,
                  autoSetMacs=False, autoStaticArp=False, autoPinCpus=False,
                  listenPort=None ):
        """Create Mininet object.
           topo: Topo (topology) object or None
           switch: default Switch class
           host: default Host class/constructor
           controller: default Controller class/constructor
           link: default Link class/constructor
           intf: default Intf class/constructor
           ipBase: base IP address for hosts
           build: build now from topo?
           xterms: if build now, spawn xterms?
           cleanup: if build now, cleanup before creating?
           inNamespace: spawn switches and controller in net namespaces?
           autoSetMacs: set MAC addrs automatically like IP addresses?
           autoStaticArp: set all-pairs static MAC addrs?
           autoPinCpus: pin hosts to (real) cores (requires CPULimitedHost)?
           listenPort: base listening port to open; will be incremented for
               each additional switch in the net if inNamespace=False"""
        self.topo = topo
        self.switch = switch
        self.host = host
        self.controller = controller
        self.link = link
        self.intf = intf
        self.ipBase = ipBase
        self.ipBaseNum, self.prefixLen = netParse( self.ipBase )
        self.nextIP = 1  # start for address allocation
        self.inNamespace = inNamespace
        self.xterms = xterms
        self.cleanup = cleanup
        self.autoSetMacs = autoSetMacs
        self.autoStaticArp = autoStaticArp
        self.autoPinCpus = autoPinCpus
        self.numCores = numCores()
        self.nextCore = 0  # next core for pinning hosts to CPUs
        self.listenPort = listenPort

        self.hosts = []
        self.switches = []
        self.controllers = []

        self.nameToNode = {}  # name to Node (Host/Switch) objects

        self.terms = []  # list of spawned xterm processes

        Mininet.init()  # Initialize Mininet if necessary

        self.built = False
        if topo and build:
            self.build()

    def addHost( self, name, cls=None, **params ):
        """Add host.
           name: name of host to add
           cls: custom host class/constructor (optional)
           params: parameters for host
           returns: added host"""
        # Default IP and MAC addresses
        defaults = { 'ip': ipAdd( self.nextIP,
                                  ipBaseNum=self.ipBaseNum,
                                  prefixLen=self.prefixLen ) +
                                  '/%s' % self.prefixLen }
        if self.autoSetMacs:
            defaults[ 'mac' ] = macColonHex( self.nextIP )
        if self.autoPinCpus:
            defaults[ 'cores' ] = self.nextCore
            self.nextCore = ( self.nextCore + 1 ) % self.numCores
        self.nextIP += 1
        defaults.update( params )
        if not cls:
            cls = self.host
        h = cls( name, **defaults )
        self.hosts.append( h )
        self.nameToNode[ name ] = h
        return h
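
    # Example (illustrative, not called anywhere in this module): with the
    # default ipBase of 10.0.0.0/8, the nth host added receives address n
    # in the base network, matching the hN -> 10.x.y.z mapping described in
    # the module docstring:
    #
    #   from mininet.util import ipAdd, netParse
    #   base, prefix = netParse( '10.0.0.0/8' )
    #   ipAdd( 1, ipBaseNum=base, prefixLen=prefix )    # '10.0.0.1'
    #   ipAdd( 257, ipBaseNum=base, prefixLen=prefix )  # '10.0.1.1'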

    def addSwitch( self, name, cls=None, **params ):
        """Add switch.
           name: name of switch to add
           cls: custom switch class/constructor (optional)
           returns: added switch
           side effect: increments listenPort ivar."""
        defaults = { 'listenPort': self.listenPort,
                     'inNamespace': self.inNamespace }
        defaults.update( params )
        if not cls:
            cls = self.switch
        sw = cls( name, **defaults )
        if not self.inNamespace and self.listenPort:
            self.listenPort += 1
        self.switches.append( sw )
        self.nameToNode[ name ] = sw
        return sw

    def addController( self, name='c0', controller=None, **params ):
        """Add controller.
           controller: Controller class"""
        if not controller:
            controller = self.controller
        controller_new = controller( name, **params )
        if controller_new:  # allow controller-less setups
            self.controllers.append( controller_new )
            self.nameToNode[ name ] = controller_new
        return controller_new
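
    # Example (illustrative; RemoteController is provided by mininet.node,
    # and its address/port parameters may differ by version):
    #
    #   from mininet.node import RemoteController
    #   net.addController( 'c0', controller=RemoteController )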

    # BL: is this better than just using nameToNode[] ?
    # Should it have a better name?
    def getNodeByName( self, *args ):
        "Return node(s) with given name(s)"
        if len( args ) == 1:
            return self.nameToNode[ args[ 0 ] ]
        return [ self.nameToNode[ n ] for n in args ]

    def get( self, *args ):
        "Convenience alias for getNodeByName"
        return self.getNodeByName( *args )

    def addLink( self, node1, node2, port1=None, port2=None,
                 cls=None, **params ):
        """Add a link from node1 to node2
           node1: source node
           node2: dest node
           port1: source port
           port2: dest port
           returns: link object"""
        defaults = { 'port1': port1,
                     'port2': port2,
                     'intf': self.intf }
        defaults.update( params )
        if not cls:
            cls = self.link
        return cls( node1, node2, **defaults )
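
    # Example (illustrative sketch of building a network by hand, without a
    # Topo object; the names and addresses here are arbitrary):
    #
    #   net = Mininet()
    #   c0 = net.addController( 'c0' )
    #   h1 = net.addHost( 'h1', ip='10.0.0.1' )
    #   h2 = net.addHost( 'h2', ip='10.0.0.2' )
    #   s1 = net.addSwitch( 's1' )
    #   net.addLink( h1, s1 )
    #   net.addLink( h2, s1 )
    #   net.start()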

    def configHosts( self ):
        "Configure a set of hosts."
        for host in self.hosts:
            info( host.name + ' ' )
            intf = host.defaultIntf()
            if intf:
                host.configDefault( defaultRoute=intf )
            else:
                # Don't configure nonexistent intf
                host.configDefault( ip=None, mac=None )
            # You're low priority, dude!
            # BL: do we want to do this here or not?
            # May not make sense if we have CPU limiting...
            # quietRun( 'renice +18 -p ' + repr( host.pid ) )
            # This may not be the right place to do this, but
            # it needs to be done somewhere.
            host.cmd( 'ifconfig lo up' )
        info( '\n' )

    def buildFromTopo( self, topo=None ):
        """Build mininet from a topology object
           At the end of this function, everything should be connected
           and up."""

        # Possibly we should clean up here and/or validate
        # the topo
        if self.cleanup:
            pass

        info( '*** Creating network\n' )

        if not self.controllers:
            # Add a default controller
            info( '*** Adding controller\n' )
            self.addController( 'c0' )

        info( '*** Adding hosts:\n' )
        for hostName in topo.hosts():
            self.addHost( hostName, **topo.nodeInfo( hostName ) )
            info( hostName + ' ' )

        info( '\n*** Adding switches:\n' )
        for switchName in topo.switches():
            self.addSwitch( switchName, **topo.nodeInfo( switchName ) )
            info( switchName + ' ' )

        info( '\n*** Adding links:\n' )
        for srcName, dstName in topo.links( sort=True ):
            src, dst = self.nameToNode[ srcName ], self.nameToNode[ dstName ]
            params = topo.linkInfo( srcName, dstName )
            srcPort, dstPort = topo.port( srcName, dstName )
            self.addLink( src, dst, srcPort, dstPort, **params )
            info( '(%s, %s) ' % ( src.name, dst.name ) )

        info( '\n' )

    def configureControlNetwork( self ):
        "Control net config hook: override in subclass"
        raise Exception( 'configureControlNetwork: '
               'should be overridden in subclass', self )

    def build( self ):
        "Build mininet."
        if self.topo:
            self.buildFromTopo( self.topo )
        if self.inNamespace:
            self.configureControlNetwork()
        info( '*** Configuring hosts\n' )
        self.configHosts()
        if self.xterms:
            self.startTerms()
        if self.autoStaticArp:
            self.staticArp()
        self.built = True

    def startTerms( self ):
        "Start a terminal for each node."
        info( "*** Running terms on %s\n" % os.environ[ 'DISPLAY' ] )
        cleanUpScreens()
        self.terms += makeTerms( self.controllers, 'controller' )
        self.terms += makeTerms( self.switches, 'switch' )
        self.terms += makeTerms( self.hosts, 'host' )

    def stopXterms( self ):
        "Kill each xterm."
        for term in self.terms:
            os.kill( term.pid, signal.SIGKILL )
        cleanUpScreens()

    def staticArp( self ):
        "Add all-pairs ARP entries to remove the need to handle broadcast."
        for src in self.hosts:
            for dst in self.hosts:
                if src != dst:
                    src.setARP( ip=dst.IP(), mac=dst.MAC() )

    def start( self ):
        "Start controller and switches."
        if not self.built:
            self.build()
        info( '*** Starting controller\n' )
        for controller in self.controllers:
            controller.start()
        info( '*** Starting %s switches\n' % len( self.switches ) )
        for switch in self.switches:
            info( switch.name + ' ' )
            switch.start( self.controllers )
        info( '\n' )

    def stop( self ):
        "Stop the controller(s), switches and hosts"
        if self.terms:
            info( '*** Stopping %i terms\n' % len( self.terms ) )
            self.stopXterms()
        info( '*** Stopping %i hosts\n' % len( self.hosts ) )
        for host in self.hosts:
            info( host.name + ' ' )
            host.terminate()
        info( '\n' )
        info( '*** Stopping %i switches\n' % len( self.switches ) )
        for switch in self.switches:
            info( switch.name + ' ' )
            switch.stop()
        info( '\n' )
        info( '*** Stopping %i controllers\n' % len( self.controllers ) )
        for controller in self.controllers:
            info( controller.name + ' ' )
            controller.stop()
        info( '\n*** Done\n' )

    def run( self, test, *args, **kwargs ):
        "Perform a complete start/test/stop cycle."
        self.start()
        info( '*** Running test\n' )
        result = test( *args, **kwargs )
        self.stop()
        return result
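
    # Example (illustrative): run() wraps an arbitrary test callable in a
    # start/test/stop cycle, e.g.:
    #
    #   net = Mininet( topo=LinearTopo( k=2 ) )
    #   loss = net.run( net.pingAll )   # start, ping all pairs, stop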

    def monitor( self, hosts=None, timeoutms=-1 ):
        """Monitor a set of hosts (or all hosts by default),
           and return their output, a line at a time.
           hosts: (optional) set of hosts to monitor
           timeoutms: (optional) timeout value in ms
           returns: iterator which returns host, line"""
        if hosts is None:
            hosts = self.hosts
        poller = select.poll()
        Node = hosts[ 0 ]  # so we can call class method fdToNode
        for host in hosts:
            poller.register( host.stdout )
        while True:
            ready = poller.poll( timeoutms )
            for fd, event in ready:
                host = Node.fdToNode( fd )
                if event & select.POLLIN:
                    line = host.readline()
                    if line is not None:
                        yield host, line
            # Return if non-blocking
            if not ready and timeoutms >= 0:
                yield None, None
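
    # Example (illustrative sketch): start a command on each host with
    # sendCmd(), then interleave their output lines as they arrive;
    # (None, None) signals that nothing was ready within the timeout:
    #
    #   for h in net.hosts:
    #       h.sendCmd( 'ping -c3 %s' % net.hosts[ 0 ].IP() )
    #   for host, line in net.monitor( timeoutms=500 ):
    #       if host is None:
    #           break                     # no output within the timeout
    #       info( '%s: %s\n' % ( host.name, line.strip() ) )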

    # XXX These test methods should be moved out of this class.
    # Probably we should create a tests.py for them

    @staticmethod
    def _parsePing( pingOutput ):
        "Parse ping output and return packets sent, received."
        # Check for downed link
        if 'connect: Network is unreachable' in pingOutput:
            return (1, 0)
        r = r'(\d+) packets transmitted, (\d+) received'
        m = re.search( r, pingOutput )
        if m is None:
            error( '*** Error: could not parse ping output: %s\n' %
                   pingOutput )
            return (1, 0)
        sent, received = int( m.group( 1 ) ), int( m.group( 2 ) )
        return sent, received

    def ping( self, hosts=None ):
        """Ping between all specified hosts.
           hosts: list of hosts
           returns: ploss packet loss percentage"""
        # should we check if running?
        packets = 0
        lost = 0
        ploss = None
        if not hosts:
            hosts = self.hosts
            output( '*** Ping: testing ping reachability\n' )
        for node in hosts:
            output( '%s -> ' % node.name )
            for dest in hosts:
                if node != dest:
                    result = node.cmd( 'ping -c1 ' + dest.IP() )
                    sent, received = self._parsePing( result )
                    packets += sent
                    if received > sent:
                        error( '*** Error: received too many packets' )
                        error( '%s' % result )
                        node.cmdPrint( 'route' )
                        exit( 1 )
                    lost += sent - received
                    output( ( '%s ' % dest.name ) if received else 'X ' )
            output( '\n' )
            ploss = 100 * lost / packets
        output( "*** Results: %i%% dropped (%d/%d lost)\n" %
                ( ploss, lost, packets ) )
        return ploss

    def pingAll( self ):
        """Ping between all hosts.
           returns: ploss packet loss percentage"""
        return self.ping()

    def pingPair( self ):
        """Ping between first two hosts, useful for testing.
           returns: ploss packet loss percentage"""
        hosts = [ self.hosts[ 0 ], self.hosts[ 1 ] ]
        return self.ping( hosts=hosts )
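
    # Example (illustrative): ping only a chosen pair of hosts rather than
    # all pairs:
    #
    #   h1, h3 = net.get( 'h1', 'h3' )
    #   loss = net.ping( hosts=[ h1, h3 ] )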

    @staticmethod
    def _parseIperf( iperfOutput ):
        """Parse iperf output and return bandwidth.
           iperfOutput: string
           returns: result string"""
        r = r'([\d\.]+ \w+/sec)'
        m = re.findall( r, iperfOutput )
        if m:
            return m[ -1 ]
        else:
            # was: raise Exception(...)
            error( 'could not parse iperf output: ' + iperfOutput )
            return ''

    # XXX This should be cleaned up

    def iperf( self, hosts=None, l4Type='TCP', udpBw='10M' ):
        """Run iperf between two hosts.
           hosts: list of hosts; if None, uses opposite hosts
           l4Type: string, one of [ TCP, UDP ]
           returns: results two-element array of server and client speeds"""
        if not quietRun( 'which telnet' ):
            error( 'Cannot find telnet in $PATH - required for iperf test' )
            return
        if not hosts:
            hosts = [ self.hosts[ 0 ], self.hosts[ -1 ] ]
        else:
            assert len( hosts ) == 2
        client, server = hosts
        output( '*** Iperf: testing ' + l4Type + ' bandwidth between ' )
        output( "%s and %s\n" % ( client.name, server.name ) )
        server.cmd( 'killall -9 iperf' )
        iperfArgs = 'iperf '
        bwArgs = ''
        if l4Type == 'UDP':
            iperfArgs += '-u '
            bwArgs = '-b ' + udpBw + ' '
        elif l4Type != 'TCP':
            raise Exception( 'Unexpected l4 type: %s' % l4Type )
        server.sendCmd( iperfArgs + '-s', printPid=True )
        servout = ''
        while server.lastPid is None:
            servout += server.monitor()
        if l4Type == 'TCP':
            while 'Connected' not in client.cmd(
                    'sh -c "echo A | telnet -e A %s 5001"' % server.IP() ):
                output( 'waiting for iperf to start up...' )
                sleep( .5 )
        cliout = client.cmd( iperfArgs + '-t 5 -c ' + server.IP() + ' ' +
                             bwArgs )
        debug( 'Client output: %s\n' % cliout )
        server.sendInt()
        servout += server.waitOutput()
        debug( 'Server output: %s\n' % servout )
        result = [ self._parseIperf( servout ), self._parseIperf( cliout ) ]
        if l4Type == 'UDP':
            result.insert( 0, udpBw )
        output( '*** Results: %s\n' % result )
        return result
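
    # Example (illustrative): measure TCP bandwidth between the first and
    # last hosts, then UDP bandwidth at a target rate:
    #
    #   serverBw, clientBw = net.iperf()
    #   udpResults = net.iperf( l4Type='UDP', udpBw='10M' )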

    # BL: I think this can be rewritten now that we have
    # a real link class.
    def configLinkStatus( self, src, dst, status ):
        """Change status of src <-> dst links.
           src: node name
           dst: node name
           status: string {up, down}"""
        if src not in self.nameToNode:
            error( 'src not in network: %s\n' % src )
        elif dst not in self.nameToNode:
            error( 'dst not in network: %s\n' % dst )
        else:
            if type( src ) is str:
                src = self.nameToNode[ src ]
            if type( dst ) is str:
                dst = self.nameToNode[ dst ]
            connections = src.connectionsTo( dst )
            if len( connections ) == 0:
                error( 'src and dst not connected: %s %s\n' % ( src, dst ) )
            for srcIntf, dstIntf in connections:
                result = srcIntf.ifconfig( status )
                if result:
                    error( 'link src status change failed: %s\n' % result )
                result = dstIntf.ifconfig( status )
                if result:
                    error( 'link dst status change failed: %s\n' % result )
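
    # Example (illustrative): take the s1 <-> h1 link down, observe the
    # effect, then bring it back up (assumes s1 and h1 are connected):
    #
    #   net.configLinkStatus( 's1', 'h1', 'down' )
    #   net.pingAll()                    # pings involving h1 now fail
    #   net.configLinkStatus( 's1', 'h1', 'up' )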

    def interact( self ):
        "Start network and run our simple CLI."
        self.start()
        result = CLI( self )
        self.stop()
        return result

    inited = False

    @classmethod
    def init( cls ):
        "Initialize Mininet"
        if cls.inited:
            return
        if os.getuid() != 0:
            # Note: this script must be run as root
            # Probably we should only sudo when we need
            # to as per Big Switch's patch
            print "*** Mininet must run as root."
            exit( 1 )
        fixLimits()
        cls.inited = True


class MininetWithControlNet( Mininet ):

    """Control network support:

       Create an explicit control network. Currently this is only
       used/usable with the user datapath.

       Notes:

       1. If the controller and switches are in the same (e.g. root)
          namespace, they can just use the loopback connection.

       2. If we can get unix domain sockets to work, we can use them
          instead of an explicit control network.

       3. Instead of routing, we could bridge or use 'in-band' control.

       4. Even if we dispense with this in general, it could still be
          useful for people who wish to simulate a separate control
          network (since real networks may need one!)

       5. Basically nobody ever used this code, so it has been moved
          into its own class.

       6. Ultimately we may wish to extend this to allow us to create a
          control network to which every node's control interface is
          attached."""

    def configureControlNetwork( self ):
        "Configure control network."
        self.configureRoutedControlNetwork()

    # We still need to figure out the right way to pass
    # in the control network location.

    def configureRoutedControlNetwork( self, ip='192.168.123.1',
                                       prefixLen=16 ):
        """Configure a routed control network on controller and switches.
           For use with the user datapath only right now."""
        controller = self.controllers[ 0 ]
        info( controller.name + ' <->' )
        cip = ip
        snum = ipParse( ip )
        for switch in self.switches:
            info( ' ' + switch.name )
            link = self.link( switch, controller, port1=0 )
            sintf, cintf = link.intf1, link.intf2
            switch.controlIntf = sintf
            snum += 1
            while snum & 0xff in [ 0, 255 ]:
                snum += 1
            sip = ipStr( snum )
            cintf.setIP( cip, prefixLen )
            sintf.setIP( sip, prefixLen )
            controller.setHostRoute( sip, cintf )
            switch.setHostRoute( cip, sintf )
        info( '\n' )
        info( '*** Testing control network\n' )
        while not cintf.isUp():
            info( '*** Waiting for', cintf, 'to come up\n' )
            sleep( 1 )
        for switch in self.switches:
            while not switch.controlIntf.isUp():
                info( '*** Waiting for', switch.controlIntf, 'to come up\n' )
                sleep( 1 )
            if self.ping( hosts=[ switch, controller ] ) != 0:
                error( '*** Error: control network test failed\n' )
                exit( 1 )
        info( '\n' )
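
# Example (illustrative; UserSwitch comes from mininet.node, and the routed
# control network configured above only makes sense with the user-space
# datapath):
#
#   from mininet.node import UserSwitch
#   from mininet.topo import LinearTopo
#
#   net = MininetWithControlNet( topo=LinearTopo( k=2 ), switch=UserSwitch,
#                                inNamespace=True )
#   net.start()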