"""
Mininet: A simple networking testbed for OpenFlow/SDN!
author: Bob Lantz (rlantz@cs.stanford.edu)
author: Brandon Heller (brandonh@stanford.edu)
Mininet creates scalable OpenFlow test networks by using
process-based virtualization and network namespaces.
Simulated hosts are created as processes in separate network
namespaces. This allows a complete OpenFlow network to be simulated on
top of a single Linux kernel.
Each host has:
A virtual console (pipes to a shell)
A virtual interface (half of a veth pair)
A parent shell (and possibly some child processes) in a namespace
Hosts have a network interface which is configured via ifconfig/ip
link/etc.
This version supports both the kernel and user space datapaths
from the OpenFlow reference implementation (openflowswitch.org)
as well as Open vSwitch (openvswitch.org).
In kernel datapath mode, the controller and switches are simply
processes in the root namespace.
Kernel OpenFlow datapaths are instantiated using dpctl(8), and are
attached to one side of a veth pair; the other side resides in the
host namespace. In this mode, switch processes can simply connect to the
controller via the loopback interface.
In user datapath mode, the controller and switches can be full-service
nodes that live in their own network namespaces and have management
interfaces and IP addresses on a control network (e.g. 192.168.123.1,
currently routed although it could be bridged.)
In addition to a management interface, user mode switches also have
several switch interfaces, halves of veth pairs whose other halves
reside in the host nodes that the switches are connected to.
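For example, the datapath mode is chosen by the switch class passed to
Mininet; a minimal sketch, assuming the OVSKernelSwitch and UserSwitch
classes from mininet.node:

    from mininet.net import Mininet
    from mininet.node import OVSKernelSwitch, UserSwitch

    kernelNet = Mininet( switch=OVSKernelSwitch )             # kernel datapath
    userNet = Mininet( switch=UserSwitch, inNamespace=True )  # user datapath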
Consistent, straightforward naming is important in order to easily
identify hosts, switches and controllers, both from the CLI and
from program code. Interfaces are named to make it easy to identify
which interfaces belong to which node.
The basic naming scheme is as follows:
Host nodes are named h1-hN
Switch nodes are named s1-sN
Controller nodes are named c0-cN
Interfaces are named {nodename}-eth0 .. {nodename}-ethN
Note: If the network topology is created using mininet.topo, then
node numbers are unique among hosts and switches (e.g. we have
h1..hN and s(N+1)..s(N+M)) and also correspond to their default IP addresses
of 10.x.y.z/8 where x.y.z is the base-256 representation of N for
hN. This mapping allows easy determination of a node's IP
address from its name, e.g. h1 -> 10.0.0.1, h257 -> 10.0.1.1.
Note also that 10.0.0.1 can often be written as 10.1 for short, e.g.
"ping 10.1" is equivalent to "ping 10.0.0.1".
Currently we wrap the entire network in a 'mininet' object, which
constructs a simulated network based on a network topology created
using a topology object (e.g. LinearTopo) from mininet.topo or
mininet.topolib, and a Controller which the switches will connect
to. Several configuration options are provided for functions such as
automatically setting MAC addresses, populating the ARP table, or
even running a set of terminals to allow direct interaction with nodes.
After the network is created, it can be started using start(), and a
variety of useful tasks may be performed, including basic connectivity
and bandwidth tests and running the mininet CLI.
Once the network is up and running, test code can easily get access
to host and switch objects which can then be used for arbitrary
experiments, typically involving running a series of commands on the
hosts.
After all desired tests or activities have been completed, the stop()
method may be called to shut down the network.
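A minimal usage sketch (LinearTopo is from mininet.topo; pingAll(), get()
and cmd() are Mininet/Node methods):

    from mininet.net import Mininet
    from mininet.topo import LinearTopo

    net = Mininet( topo=LinearTopo( k=2 ), waitConnected=True )
    net.start()
    net.pingAll()                    # basic all-pairs connectivity test
    h1, h2 = net.get( 'h1', 'h2' )   # host objects for custom experiments
    h1.cmd( 'ping -c1', h2.IP() )
    # CLI( net )                     # optionally drop into the mininet CLI
    net.stop()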
"""
import os
import re
import select
import signal
import random
import time
from sys import exit # pylint: disable=redefined-builtin
from time import sleep
from itertools import chain, groupby
from math import ceil
from mininet.cli import CLI
from mininet.log import info, error, debug, output, warn
from mininet.node import ( Node, Host, OVSKernelSwitch, DefaultController,
                           Controller )
from mininet.nodelib import NAT
from mininet.link import Link, Intf
from mininet.util import ( quietRun, fixLimits, numCores, ensureRoot,
                           macColonHex, ipStr, ipParse, netParse, ipAdd,
                           waitListening, BaseString )
from mininet.term import cleanUpScreens, makeTerms
# Mininet version: should be consistent with README and LICENSE
VERSION = "2.3.0"


class Mininet( object ):
    "Network emulation with hosts spawned in network namespaces."

    # pylint: disable=too-many-arguments
    def __init__( self, topo=None, switch=OVSKernelSwitch, host=Host,
                  controller=DefaultController, link=Link, intf=Intf,
                  build=True, xterms=False, cleanup=False, ipBase='10.0.0.0/8',
                  inNamespace=False,
                  autoSetMacs=False, autoStaticArp=False, autoPinCpus=False,
                  listenPort=None, waitConnected=False ):
        """Create Mininet object.
           topo: Topo (topology) object or None
           switch: default Switch class
           host: default Host class/constructor
           controller: default Controller class/constructor
           link: default Link class/constructor
           intf: default Intf class/constructor
           ipBase: base IP address for hosts,
           build: build now from topo?
           xterms: if build now, spawn xterms?
           cleanup: if build now, cleanup before creating?
           inNamespace: spawn switches and controller in net namespaces?
           autoSetMacs: set MAC addrs automatically like IP addresses?
           autoStaticArp: set all-pairs static MAC addrs?
           autoPinCpus: pin hosts to (real) cores (requires CPULimitedHost)?
           listenPort: base listening port to open; will be incremented for
               each additional switch in the net if inNamespace=False
           waitConnected: wait for switches to Connect?
               (False; True/None=wait indefinitely; time(s)=timed wait)"""
        self.topo = topo
        self.switch = switch
        self.host = host
        self.controller = controller
        self.link = link
        self.intf = intf
        self.ipBase = ipBase
        self.ipBaseNum, self.prefixLen = netParse( self.ipBase )
        hostIP = ( 0xffffffff >> self.prefixLen ) & self.ipBaseNum
        # Start for address allocation
        self.nextIP = hostIP if hostIP > 0 else 1
        self.inNamespace = inNamespace
        self.xterms = xterms
        self.cleanup = cleanup
        self.autoSetMacs = autoSetMacs
        self.autoStaticArp = autoStaticArp
        self.autoPinCpus = autoPinCpus
        self.numCores = numCores()
        self.nextCore = 0  # next core for pinning hosts to CPUs
        self.listenPort = listenPort
        self.waitConn = waitConnected
        self.hosts = []
        self.switches = []
        self.controllers = []
        self.links = []
        self.nameToNode = {}  # name to Node (Host/Switch) objects
        self.terms = []  # list of spawned xterm processes
        Mininet.init()  # Initialize Mininet if necessary
        self.built = False
        if topo and build:
            self.build()
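
    # A typical construction, as a sketch (myTopo stands for any
    # mininet.topo.Topo instance; the keyword options are documented above):
    #
    #     net = Mininet( topo=myTopo, autoSetMacs=True, autoStaticArp=True,
    #                    waitConnected=True )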

    def waitConnected( self, timeout=None, delay=.5 ):
        """wait for each switch to connect to a controller
           timeout: time to wait, or None or True to wait indefinitely
           delay: seconds to sleep per iteration
           returns: True if all switches are connected"""
        info( '*** Waiting for switches to connect\n' )
        time = 0.0
        remaining = list( self.switches )
        # False: 0s timeout; None: wait forever (preserve 2.2 behavior)
        if isinstance( timeout, bool ):
            timeout = None if timeout else 0
        while True:
            for switch in tuple( remaining ):
                if switch.connected():
                    info( '%s ' % switch )
                    remaining.remove( switch )
            if not remaining:
                info( '\n' )
                return True
            if timeout is not None and time > timeout:
                break
            sleep( delay )
            time += delay
        warn( 'Timed out after %d seconds\n' % time )
        for switch in remaining:
            if not switch.connected():
                warn( 'Warning: %s is not connected to a controller\n'
                      % switch )
            else:
                remaining.remove( switch )
        return not remaining