|dc.description.abstract||In real-time communications it is often vital that data arrive at its destination in a timely fashion.
Whether it is the user experience of online games or the reliability of tele-surgery, a consistent and predictable communications channel between source and destination is essential.
However, the Internet as we know it was designed to ensure that data eventually arrives at its destination, rather than for predictable, low-latency communication.
Data traveling from point to point on the Internet is divided into smaller units known as packets. As these packets traverse the Internet, they encounter routers or similar devices that will often queue them before forwarding them toward their destination.
This queuing introduces a delay that depends greatly on the router configuration and on the amount of other traffic on the network.
In times of high demand, packets may be discarded by the router or lost in transmission. Protocols exist that retransmit lost packets, but they introduce additional overhead and delay, costs that may be prohibitive in some applications.
Being able to predict when packets may be delayed or lost could allow applications to compensate for unreliable data channels.
In this thesis I investigate the effects of cross traffic and router configuration on a low-bandwidth traffic stream, such as is common in online games.
The experiments investigate the effects of cross-traffic packet size, bit-rate, inter-packet timing, and transport protocol. They also investigate router configurations, including the queue management type and the number of queues.
These experiments are compared to real-world data and a mitigation strategy, where n previous packets are bundled with each new packet, is applied to both the simulated data and the real-world captures.
The experiments indicate that most of the parameters explored had an impact on packet loss.
However, the real-world and simulated data differ, and additional work would be required to apply the lessons learned to real-world applications.
The mitigation strategy appeared to work well, allowing 90% of all runs to complete without data loss.
However, the mitigation strategy was evaluated analytically; its actual implementation and testing are left for future work.||