
a b/examples/SimpleRelayCongestion/description.txt
# This topology (SimpleRelayWithCongestion) is designed for testing simple 
# network congestion on an InteriorRouter. You can change its behaviour by 
# defining multiple variables for EFCP, RMT and FA. Several implications need 
# to be taken into account when defining these variables and their mutual relations.
# For the Flow Allocator it is mainly the createRequestTimeout. It specifies the 
# upper limit within which it expects a response to a createFlow request. So if you want 
# to avoid re-sending the createFlow request, set it high enough to accommodate 
# recursive Flow creation. It depends heavily on the chosen topology.
# The default value is 10s. You can change it in the .ini file.
#
# Example:
# **.fa.createRequestTimeout = 12s
#
# For EFCP the relevant variables are:
# initialSenderCredit (default 10)
# rcvCredit (default 10)
# closedWindowQLen (default 4)
# mpl (default 50s)
# rtt (default 2s)
# initialSenderCredit  specifies the initial credit in the sender direction. 
#                      Setting this variable high might cause the 
#                      InteriorRouter to get congested. Setting it too low will, on 
#                      the other hand, prevent the initial overload. After the first 
#                      exchange of FlowControlPDUs, the initialSenderCredit loses 
#                      its importance.
# rcvCredit            is the value that is sent (in the form of RcvRightWindowEdge) 
#                      in the mentioned FlowControlPDU.
# closedWindowQLen     comes into play when the SndRightWindowEdge no longer permits 
#                      sending another PDU, so DTP starts to put PDUs on the closedWindowQ. 
#                      When the length of this queue reaches closedWindowQLen, 
#                      DTP signals Push-Back to the upper flow.
# mpl                  is the maximum PDU lifetime and is used for the computation of a few 
#                      internal timers. This variable is fixed throughout the simulation run.
# rtt                  is the initial value for the round-trip time. RTT is then gradually 
#                      updated through the RTTEstimator policy. Set it high enough 
#                      so that the first DataTransferPDU - Ack exchange has a chance 
#                      to update it before the retransmission timer expires.
#
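# Example (a minimal sketch; the values are illustrative, not the scenario defaults, 
# and the module paths follow the patterns used in the CongestionPing configuration 
# below - note that closedWindowQLen is set there via the maxClosedWinQueLen parameter):
# **.host1.ipcProcess1.efcp.efcp.initialSenderCredit = 20
# **.host2.ipcProcess0.efcp.efcp.rcvCredit = 10
# **.interiorRouter.ipcProcess1.efcp.efcp.maxClosedWinQueLen = 4
# **.efcp.rtt = 5s
#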
# In RMT you can set defaultThreshQLength, defaultMaxQLength and maxQPolicyName.
# defaultThreshQLength  sets the initial threshold length of dynamically created RMT 
#                       queues (10 by default).
# defaultMaxQLength     sets the initial maximum length of dynamically created RMT 
#                       queues (20 by default; defaultMaxQLength >= defaultThreshQLength). 
# maxQPolicyName        specifies the RMT MaxQueue policy to be used in the scenario. 
#                       This policy is invoked by a queue each time the number of 
#                       queued PDUs exceeds the defaultThreshQLength. The sample 
#                       MaxQueue policy used for issuing congestion notifications 
#                       is called "UpstreamNotifier".
#
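# Example (this mirrors the queue settings used by the CongestionPing configuration 
# below to shorten the output queue towards Host2):
# **.interiorRouter.relayIpc.relayAndMux.defaultThreshQLength = 3
# **.interiorRouter.relayIpc.relayAndMux.defaultMaxQLength = 5
# **.relayAndMux.maxQPolicyName = "UpstreamNotifier"
#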
# CongestionPing:
# We are trying to congest the InteriorRouter, so the connection between the InteriorRouter 
# and Host2 has a higher latency, the queues from the InteriorRouter towards Host2 are shorter, etc.
#
# Used AE: AEPing - behaves in the same sense as ICMP Echo Request/Reply
# **.host1.applicationProcess1.applicationEntity.iae.dstApName = "App2"
# **.host1.applicationProcess1.applicationEntity.iae.dstAeName = "Ping"
# **.host1.applicationProcess1.applicationEntity.iae.startAt = 10s -> start of the Application Entity (not the start of sending PINGs)
# **.host1.applicationProcess1.applicationEntity.iae.pingAt = 60s -> AE starts sending PINGs
# **.host1.applicationProcess1.applicationEntity.iae.rate = 50 -> number of PINGs that will be sent
# **.host1.applicationProcess1.applicationEntity.iae.stopAt = 250s -> time of deallocation
# **.interiorRouter.relayIpc.relayAndMux.defaultMaxQLength = 5 -> shortened output queue towards Host2
# **.interiorRouter.relayIpc.relayAndMux.defaultThreshQLength = 3 -> lowered threshold
# **.efcp.rtt = 25s -> high enough
# **.host1.ipcProcess1.efcp.efcp.initialSenderCredit = 50 -> Host1 can send up to 50 PDUs before getting an Ack or a FlowControl update
# **.interiorRouter.ipcProcess1.efcp.efcp.initialSenderCredit = 3 -> but the IPC towards Host2 on the InteriorRouter in the lower DIF can send only up to 3 PDUs before getting an Ack or a FlowControl update
# **.interiorRouter.ipcProcess1.efcp.efcp.maxClosedWinQueLen = 4 -> after reaching 4 PDUs on the closedWindowQ this IPC emits Push-Back
# **.host2.ipcProcess0.efcp.efcp.rcvCredit = 3 -> thanks to this, the sender credit on the InteriorRouter in ipcProcess1 stays the same as initialSenderCredit even after a FlowControl update
# **.relayAndMux.maxQPolicyName = "UpstreamNotifier" -> name of the policy
#
# Important Events:
# t=10s - created connection between Host1.ipcp1 - Host1.ipcp0
# t=10.000Xs - created connections between Host1.ipcp0 - interiorRouter.ipcp0, interiorRouter.ipcp0 - interiorRouter.relayIpc, interiorRouter.relayIpc - interiorRouter.ipcp1.
# t=15s - created connection between interiorRouter.ipcp1 - Host2.ipcp0.
# t=25s - created connections Host2.ipcp0 - Host2.ipcp1, Host2.ipcp1 - Host2.irm
# t=30s - created connection Host1.ipcp1 - Host1.irm
# t=60s - start of sending PINGs
# t=62s - interiorRouter.ipcp1 - senderCredit gets depleted (see SndRightWindowEdge -5 and NextSeqNumToSend -6)
# t=63s - interiorRouter.ipcp1 - first PDU is put on the closedWindowQ
# t=66s - interiorRouter.ipcp1 - closedWindowQ is full - initiates Push-Back (blocks the upper flow),
#       -               .relayIpc - RMT shuts down the port towards ipcp1 and it starts to fill
# t=70s - interiorRouter.ipcp1 - Ack is received -> there is space in the closedWindowQ -> the RMT port is unblocked and 1 PDU is released.
# t=70s - interiorRouter.ipcp1 - closedWindowQ is full - initiates Push-Back (blocks the upper flow) (this happens several times)
# t=95s - interiorRouter.relayIpc - RMT port is full - the SlowDown mechanism is invoked
#       -                         - RIBd sends a CDAP message to Host1.ipcp1 to "SlowDown"
# t=95s - Host1.ipcp1 - RIBd receives the CDAP message to "SlowDown" and the ECNSlowDownPolicy in DTCP is initiated.
#
#
# CongestionStream:
# Used AE: AEStream - sends messages to the other side. Unlike AEPing, it does not send a response back.
# Important Events:
# t=10s - created connection between Host1.ipcp1 - Host1.ipcp0
# t=10.000Xs - created connections between Host1.ipcp0 - interiorRouter.ipcp0, interiorRouter.ipcp0 - interiorRouter.relayIpc, interiorRouter.relayIpc - interiorRouter.ipcp1.
# t=15s - created connection between interiorRouter.ipcp1 - Host2.ipcp0.
# t=25s - created connections Host2.ipcp0 - Host2.ipcp1, Host2.ipcp1 - Host2.irm
# t=30s - created connection Host1.ipcp1 - Host1.irm
#
# t=60s - start of sending messages
# t=62s - interiorRouter.ipcp1 - senderCredit gets depleted (see SndRightWindowEdge -5 and NextSeqNumToSend -6)
# t=63s - interiorRouter.ipcp1 - first PDU is put on the closedWindowQ
# t=66s - interiorRouter.ipcp1 - closedWindowQ is full - initiates Push-Back (blocks the upper flow),
#       -               .relayIpc - RMT shuts down the port towards ipcp1 and it starts to fill
# t=70s - interiorRouter.ipcp1 - Ack is received -> there is space in the closedWindowQ -> the RMT port is unblocked and 1 PDU is released.
# t=70s - interiorRouter.ipcp1 - closedWindowQ is full - initiates Push-Back (blocks the upper flow),
# t=84s - interiorRouter.relayIpc - RMT port is full - the SlowDown mechanism is invoked
#       -                         - RIBd sends a CDAP message to Host1.ipcp1 to "SlowDown"
# t=84s - Host1.ipcp1 - RIBd receives the CDAP message to "SlowDown" and the ECNSlowDownPolicy in DTCP is initiated.