Data buffering multilevel model at the multiservice traffic service node

2014
Authors: M. M. Klymash, M. I. Kyryk, N. M. Pleskanka, I. O. Kagalo

Lviv Polytechnic National University

Given the bursty behavior of cloud applications, a simple solution to the incast problem would be to overprovision buffer capacity at each network node. The basic principles of data buffering at a multiservice traffic service node are reviewed, and a multilevel data buffering model is proposed for nodes that serve a large number of TCP flows. Each level of the model has its own characteristics and is relatively independent; however, a malfunction at any level may adversely affect the efficiency of the other levels.

The network interface represents the physical level, which is the lowest; packet routing occurs at the protocol level. Two queues provide communication between the physical layer (the network interface card) and the IP module: the backlog queue holds incoming packets, and the txqueue holds outgoing packets (the first sketch below shows how to inspect both on a Linux host).

In current networks, data transmission is based on the TCP/IP protocol stack, which provides a set of tools to deliver data from one application to another. Today, the size of the buffers is determined by the dynamics of TCP's congestion control algorithm. In particular, the goal is to ensure that a congested link is busy 100% of the time, which is equivalent to ensuring that its buffer never goes empty. A widely used rule of thumb states that each link needs a buffer of size B = RTT × C, where RTT is the average round-trip time of a flow passing across the link and C is the data rate of the link (a worked example is given below). Arguably, router buffers are the single biggest contributor to uncertainty in the Internet: buffers cause queueing delay and delay variance; when they overflow they cause packet loss, and when they underflow they can degrade throughput. Given the significance of their role, we might reasonably expect the dynamics and sizing of router buffers to be well understood, based on a well-grounded theory, and supported by extensive simulation and experimentation.

The Smart-Buffer architecture takes into consideration that congestion in a typical data center environment is localized to a subset of egress ports at any given point in time and realistically never happens on all ports simultaneously. This enables its centralized on-chip buffer to be right-sized for overall cost and power; at the same time, the buffer is dynamically shareable and weighted towards congested ports or flows exactly when needed, using self-tuning thresholds. In addition, the centralized buffer can be allocated based on class of service or priority group: available buffer resources can be partitioned into separate virtual buffer pools and assigned to specific traffic classes. This is especially useful in converged I/O scenarios, where some traffic classes (such as storage) may require guaranteed lossless behavior (both mechanisms are sketched below). These properties enable Smart-Buffer technology to strike an optimal balance between silicon efficiency and burst-absorption performance, essential design principles in current and next-generation high-density switches.

This paper presents an optimization of the multilevel data buffering model that ensures a satisfactory quality of service in a multiservice data network.
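The two kernel queues mentioned above can be inspected directly. Below is a minimal sketch, assuming a Linux host; the interface name eth0 is a placeholder, and the /proc and /sys paths are the standard Linux locations for these parameters.

```python
# Minimal sketch: inspect the two queues that sit between the NIC and the
# IP module on a Linux host. The paths are standard Linux kernel interfaces;
# the interface name "eth0" is a placeholder -- substitute your own.

def read_int(path: str) -> int:
    """Read a single integer from a /proc or /sys pseudo-file."""
    with open(path) as f:
        return int(f.read().strip())

if __name__ == "__main__":
    iface = "eth0"  # hypothetical interface name

    # Incoming side: per-CPU limit on packets queued between the driver
    # and the protocol (IP) layer -- the "backlog" queue.
    backlog = read_int("/proc/sys/net/core/netdev_max_backlog")

    # Outgoing side: length of the device transmit queue -- the "txqueue".
    txqueuelen = read_int(f"/sys/class/net/{iface}/tx_queue_len")

    print(f"netdev_max_backlog = {backlog} packets")
    print(f"{iface} tx_queue_len = {txqueuelen} packets")
```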
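The B = RTT × C rule of thumb is easy to evaluate. The following worked example uses assumed values (a 250 ms average RTT and a 10 Gbit/s link) purely for illustration:

```python
# Worked example of the rule-of-thumb buffer size B = RTT x C.
# The RTT and link rate are illustrative assumptions, not measurements.

rtt_s = 0.250          # assumed average round-trip time: 250 ms
link_rate_bps = 10e9   # assumed link data rate C: 10 Gbit/s

buffer_bits = rtt_s * link_rate_bps   # B = RTT x C
buffer_bytes = buffer_bits / 8

print(f"B = {buffer_bits:.2e} bits ({buffer_bytes / 2**20:.0f} MiB)")
# -> B = 2.50e+09 bits (298 MiB)
```

Note how quickly the rule inflates: at 10 Gbit/s, a 250 ms RTT already demands roughly 300 MiB of buffering on a single link.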
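The self-tuning thresholds used for dynamic buffer sharing can be illustrated with the classic dynamic-threshold idea, in which a port may queue at most a fixed fraction α of the currently unused shared buffer. The sketch below is a simplified model of that general scheme, not Broadcom's actual Smart-Buffer algorithm; the buffer size and α are assumptions.

```python
# Simplified model of dynamic buffer sharing with self-tuning thresholds.
# A port may enqueue a packet only while its queue is below
#     threshold = alpha * (free shared buffer),
# so congested ports automatically receive more of the pool as it drains.
# All sizes and alpha are illustrative assumptions.

class SharedBuffer:
    def __init__(self, total_cells: int, alpha: float):
        self.total = total_cells
        self.used = 0
        self.alpha = alpha
        self.queues: dict[int, int] = {}   # port -> cells queued

    def free(self) -> int:
        return self.total - self.used

    def threshold(self) -> float:
        # Self-tuning limit: shrinks as the shared pool fills up.
        return self.alpha * self.free()

    def enqueue(self, port: int, cells: int) -> bool:
        q = self.queues.get(port, 0)
        if q + cells > self.threshold() or cells > self.free():
            return False                   # drop: over the dynamic threshold
        self.queues[port] = q + cells
        self.used += cells
        return True

    def dequeue(self, port: int, cells: int) -> None:
        taken = min(cells, self.queues.get(port, 0))
        self.queues[port] = self.queues.get(port, 0) - taken
        self.used -= taken

if __name__ == "__main__":
    buf = SharedBuffer(total_cells=12_000, alpha=0.5)
    # One congested port can absorb a burst while others are idle...
    print(buf.enqueue(port=1, cells=4000))   # True: 4000 <= 0.5 * 12000
    # ...but its allowance tightens as the shared pool fills.
    print(buf.enqueue(port=1, cells=4000))   # False: 8000 > 0.5 * 8000
```

Because the threshold is proportional to the free pool, a single congested port can absorb a large burst when the chip is idle, yet its allowance shrinks automatically as overall occupancy grows.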
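Partitioning the same centralized buffer into per-class virtual pools can be sketched just as simply; the class names and cell counts below are hypothetical illustration values, with a reserved pool that guarantees lossless behavior for storage traffic:

```python
# Sketch of per-class virtual buffer pools carved out of one on-chip buffer.
# Class names and cell counts are hypothetical illustration values.

TOTAL_CELLS = 12_000

# Reserved pools per traffic class; "storage" gets a guaranteed reservation
# (for lossless behavior in converged I/O) that other classes cannot consume.
pools = {
    "storage":     3_000,   # guaranteed lossless pool
    "voice":       1_000,
    "best_effort": TOTAL_CELLS - 3_000 - 1_000,
}

assert sum(pools.values()) == TOTAL_CELLS
for cls, cells in pools.items():
    print(f"{cls:>11}: {cells} cells")
```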