osmux-reference: Add traffic saving plot
Change-Id: I8fa60c1f95436c39fd1ff9424a907876d367484e
== Evaluation: Expected traffic savings

The following figure shows the growth in traffic saving (in %) depending on
the number of concurrent calls, for a given set of batching factor values:

["python2"]
----
from pychart import *

theme.get_options()
theme.scale_factor = 5
theme.use_color = 1
theme.reinitialize()

IP_HEADER = 20
UDP_HEADER = 8
RTP_HEADER = 12
OSMUX_HEADER = 4
AMR59_PAYLOAD = 17

def osmux_get_size(calls, payloads):
    return IP_HEADER + UDP_HEADER + (OSMUX_HEADER + AMR59_PAYLOAD * payloads) * calls

def rtp_get_size(calls, payloads):
    return calls * payloads * (IP_HEADER + UDP_HEADER + RTP_HEADER + AMR59_PAYLOAD)

def calc_traffic_saving(calls, payloads):
    return 100 - 100.0 * osmux_get_size(calls, payloads) / rtp_get_size(calls, payloads)

# The first value in each tuple is the X value (number of concurrent calls);
# subsequent values are the Y values for the different batching factor lines.
def gen_table():
    data = []
    for calls in range(1, 9):
        col = (calls,)
        for factor in range(1, 9):
            col += (calc_traffic_saving(calls, factor),)
        data.append(col)
    return data

def do_plot(data):
    xaxis = axis.X(format="/hL%d", tic_interval=1, label="Concurrent calls")
    yaxis = axis.Y(format="%d%%", tic_interval=10, label="Traffic Saving")
    ar = area.T(x_axis=xaxis, y_axis=yaxis, y_range=(None, None),
                x_grid_interval=1, x_grid_style=line_style.gray70_dash3)
    for y in range(1, len(data[0])):
        plot = line_plot.T(label="bfactor " + str(y), data=data, ycol=y,
                           tick_mark=tick_mark.circle1)
        ar.add_plot(plot)
    ar.draw()

data = gen_table()
do_plot(data)
----

The results show a saving of 15.79% with only one concurrent call and with
batching disabled (bfactor 1), which quickly improves with more concurrent
calls (due to trunking).

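The trunking effect can be reproduced numerically. A minimal standalone
sketch (plain Python, no plotting; constants and formulas taken from the
pychart listing above, so the exact percentages depend on the header sizes
assumed there):

```python
# Per-packet sizes in bytes, as in the pychart listing above.
IP_HEADER, UDP_HEADER, RTP_HEADER = 20, 8, 12
OSMUX_HEADER, AMR59_PAYLOAD = 4, 17

def osmux_size(calls, payloads):
    # One shared IP/UDP header per trunk packet, one Osmux header per call,
    # 'payloads' batched AMR payloads per call.
    return IP_HEADER + UDP_HEADER + (OSMUX_HEADER + AMR59_PAYLOAD * payloads) * calls

def rtp_size(calls, payloads):
    # Plain RTP: full IP/UDP/RTP headers for every payload of every call.
    return calls * payloads * (IP_HEADER + UDP_HEADER + RTP_HEADER + AMR59_PAYLOAD)

def saving(calls, payloads):
    return 100 - 100.0 * osmux_size(calls, payloads) / rtp_size(calls, payloads)

# With batching disabled (bfactor 1), the saving grows with the number of
# concurrent calls, because the 28 bytes of IP/UDP overhead are shared.
for calls in (1, 2, 4, 8):
    print("%d call(s): %.2f%% saved" % (calls, saving(calls, 1)))
```

Each additional call shares the same 28 bytes of IP/UDP overhead, which is
why the saving grows quickly at first and then flattens.
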
We also provide the expected results when batching 4 messages for a single
call:

----
Traffic savings (%), batch factor 4, over 0 to 8 concurrent calls
----

The results show a saving of 56.68% with only one concurrent call. Trunking
slightly improves the situation with more concurrent calls.

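How much trunking still helps once batching is enabled can be quantified
with the same size model; a standalone sketch under the per-packet sizes
assumed in the pychart listing:

```python
# Per-packet sizes in bytes, as in the pychart listing above.
IP_HEADER, UDP_HEADER, RTP_HEADER = 20, 8, 12
OSMUX_HEADER, AMR59_PAYLOAD = 4, 17

def saving(calls, payloads):
    # Osmux trunk packet vs. the equivalent plain RTP packets.
    osmux = IP_HEADER + UDP_HEADER + (OSMUX_HEADER + AMR59_PAYLOAD * payloads) * calls
    rtp = calls * payloads * (IP_HEADER + UDP_HEADER + RTP_HEADER + AMR59_PAYLOAD)
    return 100 - 100.0 * osmux / rtp

# Going from 1 to 8 concurrent calls adds a lot without batching, but much
# less once a batching factor of 4 already amortizes the per-payload headers.
gain_bf1 = saving(8, 1) - saving(1, 1)
gain_bf4 = saving(8, 4) - saving(1, 4)
print("trunking gain, bfactor 1: %.2f points" % gain_bf1)
print("trunking gain, bfactor 4: %.2f points" % gain_bf4)
```
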
We also provide the figure for a batching factor of 8:

----
Traffic savings (%), batch factor 8, over 0 to 8 concurrent calls
----

A batching factor of 8 provides very little improvement compared to batching
4 messages, while it further risks degrading the user experience. Thus, we
consider a batching factor of 3 or 4 adequate.

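The diminishing return of larger batching factors is easy to quantify with
the same size model; a standalone sketch (per-packet sizes as assumed in the
pychart listing):

```python
# Per-packet sizes in bytes, as in the pychart listing above.
IP_HEADER, UDP_HEADER, RTP_HEADER = 20, 8, 12
OSMUX_HEADER, AMR59_PAYLOAD = 4, 17

def saving(calls, payloads):
    osmux = IP_HEADER + UDP_HEADER + (OSMUX_HEADER + AMR59_PAYLOAD * payloads) * calls
    rtp = calls * payloads * (IP_HEADER + UDP_HEADER + RTP_HEADER + AMR59_PAYLOAD)
    return 100 - 100.0 * osmux / rtp

# For a single call, doubling the batching factor from 4 to 8 gains only a
# few percentage points, while it doubles the buffering delay added per call.
for bfactor in (1, 2, 4, 8):
    print("bfactor %d: %.2f%% saved" % (bfactor, saving(1, bfactor)))
```
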
== Other proposed follow-up works