osmux-reference: Add traffic saving plot
Change-Id: I8fa60c1f95436c39fd1ff9424a907876d367484e
This commit is contained in:
parent
2d133928e8
commit
36cc3e748c

@ -6,7 +6,8 @@ apt-get install \
    xsltproc \
    dblatex \
    mscgen \
    graphviz
    graphviz \
    python-pychart

Build PDFs, run:

    make

@ -499,91 +499,67 @@ msc {

== Evaluation: Expected traffic savings

The following figure shows the traffic saving (in %) depending on the number
of concurrent calls (assuming trunking but no batching at all):
The following figure shows the growth in traffic saving (in %) depending on
the number of concurrent calls, for a given set of batching factor values:

----
Traffic savings (%) vs. concurrent calls (0 to 8), batch factor 1
[ASCII plot: the saving starts low for a single call and grows quickly,
levelling off above 60% as the number of concurrent calls approaches 8]
----

["python2"]
----
from pychart import *
theme.get_options()
theme.scale_factor = 5
theme.use_color = 1
theme.reinitialize()

# Sizes in bytes of the protocol headers and of one AMR 5.90 payload.
IP_HEADER=20
UDP_HEADER=8
RTP_HEADER=12
OSMUX_HEADER=4
AMR59_PAYLOAD=17

# One Osmux packet trunks all concurrent calls; each call contributes one
# Osmux header plus 'payloads' batched AMR payloads.
def osmux_get_size(calls, payloads):
    return IP_HEADER + UDP_HEADER + (OSMUX_HEADER + AMR59_PAYLOAD * payloads) * calls

# Plain RTP sends one full IP/UDP/RTP packet per call and per AMR payload.
def rtp_get_size(calls, payloads):
    return calls * payloads * (IP_HEADER + UDP_HEADER + RTP_HEADER + AMR59_PAYLOAD)

def calc_traffic_saving(calls, payloads):
    return 100 - 100.0 * osmux_get_size(calls, payloads) / rtp_get_size(calls, payloads)

# The first value in each tuple is the X value (number of concurrent calls);
# subsequent values are Y values, one per batching factor line.
def gen_table():
    data = []
    for calls in range(1, 9):
        col = (calls,)
        for factor in range(1, 9):
            col += (calc_traffic_saving(calls, factor),)
        data.append(col)
    return data

def do_plot(data):
    xaxis = axis.X(format="/hL%d", tic_interval = 1, label="Concurrent calls")
    yaxis = axis.Y(format="%d%%", tic_interval = 10, label="Traffic Saving")
    ar = area.T(x_axis=xaxis, y_axis=yaxis, y_range=(None,None), x_grid_interval=1, x_grid_style=line_style.gray70_dash3)
    for y in range(1, len(data[0])):
        plot = line_plot.T(label="bfactor "+str(y), data=data, ycol=y, tick_mark=tick_mark.circle1)
        ar.add_plot(plot)
    ar.draw()

data = gen_table()
do_plot(data)
----
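For readers who want to check the numbers without installing python-pychart, the saving formula can be re-run standalone. This is a sketch in plain Python 3; the constants are copied from the block above, the table printing is illustrative:

```python
# Header and payload sizes in bytes, copied from the pychart block above.
IP_HEADER = 20
UDP_HEADER = 8
RTP_HEADER = 12
OSMUX_HEADER = 4
AMR59_PAYLOAD = 17

def osmux_size(calls, batch):
    # One IP/UDP packet trunks all calls; each call adds one Osmux
    # header plus 'batch' AMR payloads.
    return IP_HEADER + UDP_HEADER + (OSMUX_HEADER + AMR59_PAYLOAD * batch) * calls

def rtp_size(calls, batch):
    # Plain RTP: one full IP/UDP/RTP packet per call and per payload.
    return calls * batch * (IP_HEADER + UDP_HEADER + RTP_HEADER + AMR59_PAYLOAD)

def saving(calls, batch):
    return 100 - 100.0 * osmux_size(calls, batch) / rtp_size(calls, batch)

if __name__ == "__main__":
    for batch in (1, 4, 8):
        row = "  ".join("%5.1f%%" % saving(c, batch) for c in range(1, 9))
        print("bfactor %d: %s" % (batch, row))
```

With these constants the saving grows with the number of concurrent calls (trunking) and with the batching factor, as in the figure.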

The results show a saving of 15.79% with only one concurrent call, which
quickly improves with more concurrent calls (due to trunking).
The results show a saving of 15.79% with only one concurrent call and with
batching disabled (bfactor 1), which quickly improves with more concurrent
calls (due to trunking).

We also provide the expected results for batching 4 messages for a single
call:
----
Traffic savings (%) vs. concurrent calls (0 to 8), batch factor 4
[ASCII plot: the saving starts just below 60% for a single call and
levels off slightly above 60% with more concurrent calls]
----

By increasing the batching of messages to 4, the results show a saving of
56.68% with only one concurrent call. Trunking slightly improves the
situation with more concurrent calls.
The results show a saving of 56.68% with only one concurrent call. Trunking
slightly improves the situation with more concurrent calls.

We also provide the figure for a batching factor of 8:

----
Traffic savings (%) vs. concurrent calls (0 to 8), batch factor 8
[ASCII plot: the saving starts around 60% for a single call and
approaches 70% with more concurrent calls]
----

That shows very little improvement compared with batching 4 messages.
Still, we risk degrading the user experience. Thus, we consider a batching
factor of 3 or 4 adequate.
A batching factor of 8 provides very little improvement compared with
batching 4 messages. Still, we risk degrading the user experience. Thus,
we consider a batching factor of 3 or 4 adequate.
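The trade-off behind that choice can be sketched numerically: batching lowers the packet rate (and thus the per-packet header overhead) but adds buffering delay. A rough sketch for a single call, reusing the header sizes from the evaluation code; the 20 ms AMR frame duration is an assumption of this sketch, not stated above:

```python
# ASSUMPTION: 20 ms AMR frames (50 frames/s); header sizes as in the
# evaluation code in this document.
FRAME_MS = 20
IP_UDP = 20 + 8
OSMUX_HEADER = 4
AMR59_PAYLOAD = 17

def osmux_kbps(batch, calls=1):
    packet = IP_UDP + (OSMUX_HEADER + AMR59_PAYLOAD * batch) * calls
    packets_per_s = 1000.0 / (FRAME_MS * batch)  # batching lowers the rate
    return packet * 8 * packets_per_s / 1000.0

def added_delay_ms(batch):
    # Extra buffering: the first frame waits for batch-1 later frames.
    return (batch - 1) * FRAME_MS

if __name__ == "__main__":
    for b in (1, 3, 4, 8):
        print("bfactor %d: %4.1f kbit/s, +%3d ms buffering" %
              (b, osmux_kbps(b), added_delay_ms(b)))
```

Under these assumptions, going from bfactor 4 to 8 saves comparatively little bandwidth while more than doubling the added buffering delay, which is consistent with the conclusion above.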

== Other proposed follow-up works

@ -15,7 +15,7 @@ ASCIIDOCSTYLE ?= $(BUILDDIR)/custom-dblatex.sty

cleanfiles += $(ASCIIDOCPDFS)

ASCIIDOC_OPTS := -f $(BUILDDIR)/mscgen-filter.conf -f $(BUILDDIR)/diag-filter.conf -f $(BUILDDIR)/docinfo-releaseinfo.conf
ASCIIDOC_OPTS := -f $(BUILDDIR)/mscgen-filter.conf -f $(BUILDDIR)/diag-filter.conf -f $(BUILDDIR)/docinfo-releaseinfo.conf -f $(BUILDDIR)/python2-filter.conf
DBLATEX_OPTS := -s $(ASCIIDOCSTYLE) -P draft.mode=yes -P draft.watermark=0

ifeq (,$(BUILD_RELEASE))
@ -0,0 +1,21 @@
#
# AsciiDoc python2 filter configuration file.
#

[python2-filter-style]
python2-style=template="python2-block",subs=(),posattrs=("style","target"),filter='../build/filter-wrapper.py python2 - --output="{outdir={indir}}/{imagesdir=}{imagesdir?/}{target}"'

[blockdef-listing]
template::[python2-filter-style]

[paradef-default]
template::[python2-filter-style]

[python2-block]
template::[filter-image-pngsvg-blockmacro]

[filter-image-pngsvg-blockmacro]
{target%}{counter2:target-number}
{target%}{set2:target:{docname}__{target-number}.{format={basebackend-docbook!png}{basebackend-docbook?png}}}

template::[image-blockmacro]
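For context, an AsciiDoc filter like the one configured above pipes the contents of the source block to an external command that renders an image. A hypothetical, much-simplified sketch of such a wrapper follows; the real build/filter-wrapper.py may work quite differently, and the argument handling here is purely illustrative:

```python
import subprocess

def build_cmd(interpreter, extra_args):
    # The filter line above invokes:
    #   filter-wrapper.py python2 - --output="..."
    # i.e. the interpreter name first, then '-' meaning "read the block
    # from stdin", then options passed through to the embedded script.
    return [interpreter] + list(extra_args)

def run_filter(interpreter, block_source, extra_args):
    # Feed the embedded source block to the interpreter on stdin; the
    # block itself (e.g. the pychart code) is responsible for writing
    # the image to whatever --output path it parses.
    proc = subprocess.run(build_cmd(interpreter, extra_args),
                          input=block_source.encode())
    return proc.returncode
```

This is only meant to show the data flow: AsciiDoc substitutes `{target}` into the filter line, the wrapper runs the interpreter, and the resulting PNG is then referenced by the `image-blockmacro` template.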