Milestone: 1.0
Status: closed
Owner: gaixas1
Labels: None
Updated: 2015-02-03
Created: 2014-11-13
Creator: gaixas1
Private: No

We want to do:
Have a distinct Scheduling Module instance for each In/Out N-1 flow used by an IPC.

Problems that we currently find in RinaSim:
Based on the simulator code online, it seems that each IPC process has only one instance of the scheduling module. This seems inappropriate: the operation of each input/output port should be independent of the others, and not all input and output ports have the same bandwidth limits (e.g. Internet connections at home are commonly asymmetrical).

Proposed modifications/extensions:
We propose to associate a Scheduling instance with each N-1 flow, with distinct calls for read and serve.
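
A minimal sketch of the intended shape (class and method names here are illustrative only, not existing RinaSim code):

    // One scheduler instance per (N-1)-flow and direction, so each
    // flow is scheduled independently and can respect its own
    // bandwidth limits (e.g. asymmetric home links).
    class FlowScheduler {
      public:
        virtual ~FlowScheduler() {}
        virtual void read() = 0;   // a PDU arrived in one of this flow's queues
        virtual void serve() = 0;  // this flow's port can emit the next PDU
    };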

Discussion

  • gaixas1 (2014-11-13)

    assigned_to: gaixas1
     
  • Tomáš Hykel (2014-11-29)

    Okay, I can see why this is needed. Until now we didn't simulate any queue processing delays in higher-level IPC processes and used only one queue per (N-1)-flow, so this wasn't so obvious.

    For each (N-1)-flow, we shall create two instances of RMTSchedulingPolicy -- one for input queues and one for output queues.

     
  • Tomáš Hykel (2014-12-03)

    Now that I think about it, is it really necessary for the scheduling policy to be instantiated at all?
    The way I understand it, the scheduling module doesn't need to keep its own state, since all state variables are managed by the monitoring policy (I think). Thus, we should only need a single instance of each available scheduling policy (and read/serve calls would accept an additional argument specifying which port we want to process).
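
    A rough sketch of that alternative (all names are illustrative, not actual RinaSim code):

        // A single shared instance per available scheduling policy; the
        // port to process is passed in, so the policy keeps no per-port
        // state of its own (state lives with the monitoring policy).
        typedef int PortId;

        class SchedulingPolicy {
          public:
            virtual ~SchedulingPolicy() {}
            virtual void read(PortId port) = 0;   // process input queues of 'port'
            virtual void serve(PortId port) = 0;  // process output queues of 'port'
        };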

    What is your opinion on this?

     
  • gaixas1 (2014-12-03)

    I'm not sure that could work; it all depends on how the state variables are stored (RIB, scheduling instances, etc.) and on whether there is a fast port+QoS <-> queue mapping (e.g. a map from port+QoS to queue pointer/name, plus port and QoS attributes on queues).

    My initial idea was that each scheduling instance stores its own variables (modified by the observer policy), as well as the mapping from QoS to queues for its port. But using the RIB to store everything also seems possible.

    Ok, we need two instances (read/serve), but if the calls to them include the port as a parameter we can make it work, and it may later be simpler for the RA to perform long-term management using just the RIB.
    But the port+QoS <-> queue mapping will be necessary (in the RIB?).
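
    For illustration, such a mapping could be kept as simply as this (type and variable names are made up for the sketch):

        #include <map>
        #include <string>
        #include <utility>

        class Queue;  // queue type elided

        // fast lookup from (port id, QoS-cube id) to the matching queue
        typedef std::pair<int, std::string> PortQoS;
        std::map<PortQoS, Queue*> queueOf;

        // usage: Queue* q = queueOf[std::make_pair(portId, qosId)];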

     
  • gaixas1 (2014-12-09)

    Ok, after talking with Edu from i2Cat, it seems that only one instance of the scheduler should be used for each IPC, and invoked each time some queue has data to send.
    That policy should check both input and output queues and decide how to process them.

    In this case, what is really important is for output ports to have some "isReady" function, so that the scheduler can know whether the N-1 flow admits more data (the N-1 flow should determine that depending on its output rate and/or maxBandwidth, buffer, etc.).
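
    As an illustration only (field names and the exact condition are invented for the sketch), such a check might combine buffer occupancy and a bandwidth budget:

        // A port admits more data only while its buffer and its
        // configured bandwidth budget allow it.
        struct OutputPort {
            unsigned long bufferUsed;
            unsigned long bufferCapacity;
            double bytesSentThisPeriod;
            double maxBytesPerPeriod;  // derived from maxBandwidth

            bool isReady() const {
                return bufferUsed < bufferCapacity
                    && bytesSentThisPeriod < maxBytesPerPeriod;
            }
        };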

     
  • Tomáš Hykel (2014-12-09)

    Okay! Good to hear that since the original solution would, IMHO, introduce a lot of unnecessary redundancy into the implementation.

    So, just to recap:

    For each IPC process, there shall be a single scheduling policy active at a given time for all (N-1)-flows of all connected (N-1)-IPC processes. This policy may maintain additional values for each queue of each (N-1)-port (such as those described at https://wiki.ict-pristine.eu/wp3/d31/D31-CO-CL#Scheduling-for-resource-allocation-with-multiple-levels-of-QoS ), and those may be updated by the monitoring policy. Calls to the policy shall include an (N-1)-port ID as a parameter, and the decision will always be made in the context of the received (N-1)-port. This should make the operation of (N-1)-ports independent of each other.
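
    In code, that recap might look roughly like this (all names are illustrative, not the actual RinaSim interface):

        #include <map>

        class Queue;        // queue type elided
        typedef int PortId;

        // additional per-queue values, updated by the monitoring policy
        struct QueueState {
            unsigned long counter;  // e.g. served-PDU count, threshold, ...
        };

        class IPCScheduler {
          protected:
            std::map<Queue*, QueueState> perQueue;  // one entry per queue of each (N-1)-port
          public:
            virtual ~IPCScheduler() {}
            // a single active policy per IPC process; the decision is
            // always made in the context of the given (N-1)-port
            virtual Queue* decide(PortId port) = 0;
        };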

    Is this OK?

     
  • gaixas1 (2014-12-09)

    Correct. With that, all that is needed is for N-1 ports to let the scheduler know whether they can accept new data.

     
  • Tomáš Hykel (2014-12-22)

    There's still one thing I'm a bit puzzled about.

    Quote: "...only one instance of the scheduler should be used for each IPC, and invoked each time some queue has data to send. That policy should check both input and output queues and decide how to process them."

    Does it really make sense for both input and output queues to be processed at each policy invocation? I believe both directions are going to be handled at different rates (e.g. the operation of popping an ingress queue doesn't need the (N-1)-port to be ready to serve).

     
  • gaixas1 (2014-12-22)

    Yes. When talking with Edu, the problem seemed to be that making distinct calls (functions) for each trigger (input queue with data / output port ready) would weaken the idea of "the policy decides how to do it". If all triggers invoke the same function, then it is this function/policy that decides what to do.
    Maybe the best solution is simply for the invocation to include its trigger as a parameter, in this case whether an IN/OUT port has data or is ready, and which port. With that, each policy can decide whether to use that information or to ignore it.
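
    In other words, a single entry point could carry the trigger along these lines (a sketch only, not the final signature):

        typedef int PortId;
        enum Direction { IN, OUT };
        enum Trigger   { QUEUE_HAS_DATA, PORT_READY };

        // every trigger invokes the same function; the policy decides
        // whether to use the trigger information or to ignore it
        void invokeScheduler(PortId port, Direction dir, Trigger why) {
            // dispatch to the active scheduling policy here
        }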

     
  • Tomáš Hykel (2015-02-03)

    The current stable version contains a reworked scheduling subsystem that differentiates between (N-1)-ports.

    The scheduling function accepts two arguments specifying the (N-1)-port and the data flow direction. An example policy, "LongestQFirst" (which always releases a PDU from the longest queue in the given direction), is present and used by default.

    At any given moment, each port's output is either ready to accept PDUs or busy (e.g. blocked by the transmission of some other PDU). The state of a port is indicated visually in the Tkenv GUI in the top right corner of the port's module.

    The scheduling policy is invoked
    1) each time a PDU enters a queue, and
    2) each time a port becomes ready to serve while there are still some PDUs waiting in queues.
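
    The core of the "LongestQFirst" decision can be sketched in a few lines (simplified, not the exact RinaSim implementation):

        #include <cstddef>
        #include <vector>

        struct Queue { std::size_t length; /* ... */ };

        // pick the longest non-empty queue of the given direction;
        // the caller then releases one PDU from that queue
        Queue* longestQFirst(const std::vector<Queue*>& queues) {
            Queue* best = 0;
            for (std::size_t i = 0; i < queues.size(); ++i) {
                if (queues[i]->length > 0 &&
                        (best == 0 || queues[i]->length > best->length))
                    best = queues[i];
            }
            return best;
        }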

     