Prediction of waiting time for entities in independent and merged queues using Markov chain and machine learning techniques

dc.contributor.advisor: Samarghandi, Hamed
dc.contributor.committeeMember: Rayan, Steven
dc.contributor.committeeMember: Zhang, W.J. (Chris)
dc.contributor.committeeMember: Spiteri, Ray
dc.creator: Dadgar, Mehrdad
dc.creator.orcid: 0000-0001-7480-5539
dc.date.accessioned: 2020-10-02T21:36:36Z
dc.date.available: 2020-10-02T21:36:36Z
dc.date.created: 2019-09
dc.date.issued: 2020-10-02
dc.date.submitted: September 2019
dc.date.updated: 2020-10-02T21:36:36Z
dc.description.abstract: Serving customers well is a key factor in any organization's profitability and growth. Waiting time has become one of the most critical measures of system performance, and managers try to predict and minimize it. At the same time, a system may serve different classes of customers, so defining priorities can increase profitability and customer loyalty. Two models arise in these situations: independent queue lines and the merged queue line. For the first model, a non-linear programming model is presented by combining resource-planning concepts with queueing theory; it minimizes the weighted sum of average waiting times by assigning servers to different priority groups. We also address a "Memoryless/Memoryless/3 Servers", or M/M/3, queueing system with two priorities; that is, the system contains three servers and the arrival and service times are exponentially distributed. A dimension reduction method based on the phase-type distribution is used to convert the transition matrix from two infinite dimensions to one infinite dimension. After solving the quasi-birth-death process, a simulation is run to compare the results; the average gap between the dimensionality reduction method and the simulation is about 7%. Moreover, two unsupervised learning methods, K-Means and the Gaussian mixture model, are combined with principal component analysis to cluster customers in an example from the banking industry. In the last step of the research, we investigate the M/M/3 queue with three priorities. Because queueing systems with more than two priority groups are highly complex, few methods for them exist in the literature. Therefore, a novel simulation-driven machine learning algorithm is developed to solve the problem: several supervised machine learning algorithms are applied to predict the dynamic waiting time for customers, and among them a random forest regression, with a mean absolute error of 6.95 minutes, performs best. This novel method can analyze any merged queueing system with an arbitrary number of priorities and servers.
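As a rough illustration of the simulation-driven machine learning idea summarized in the abstract, the sketch below simulates an M/M/3 queue with non-preemptive priority classes and then fits a random forest to predict each customer's waiting time from the state observed at arrival. The arrival and service rates, the feature set, and all function names are hypothetical assumptions made for this sketch; it is not the implementation used in the thesis.

# Illustrative only: a toy simulation-driven pipeline for priority queues.
import heapq
import random

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split


def simulate_priority_mmc(n_customers=20000, lam=2.5, mu=1.0, c=3, k=3, seed=0):
    """Event-driven M/M/c queue with k non-preemptive priority classes (0 = highest).

    Returns one feature row per customer (priority, busy servers and per-class
    queue lengths seen on arrival) together with the realised waiting time.
    """
    rng = random.Random(seed)

    # Pre-generate Poisson arrivals; priority classes drawn uniformly (an assumption).
    arrivals, t = [], 0.0
    for _ in range(n_customers):
        t += rng.expovariate(lam)
        arrivals.append((t, rng.randrange(k)))

    busy = 0
    departures = []                        # min-heap of service-completion times
    queues = [[] for _ in range(k)]        # waiting customers per priority class
    features = [None] * n_customers
    waits = [None] * n_customers

    i = 0
    while i < n_customers or departures:
        next_arr = arrivals[i][0] if i < n_customers else float("inf")
        next_dep = departures[0] if departures else float("inf")

        if next_arr <= next_dep:           # arrival event
            now, prio = arrivals[i]
            features[i] = [prio, busy] + [len(q) for q in queues]
            if busy < c:                   # an idle server starts service at once
                busy += 1
                waits[i] = 0.0
                heapq.heappush(departures, now + rng.expovariate(mu))
            else:                          # all servers busy: join own class queue
                queues[prio].append((now, i))
            i += 1
        else:                              # departure event
            now = heapq.heappop(departures)
            for q in queues:               # highest priority first, FIFO within a class
                if q:
                    arr_t, idx = q.pop(0)
                    waits[idx] = now - arr_t
                    heapq.heappush(departures, now + rng.expovariate(mu))
                    break
            else:                          # nobody waiting: the server goes idle
                busy -= 1

    return np.array(features, dtype=float), np.array(waits, dtype=float)


# Fit a random forest to predict waiting time from the state seen at arrival.
X, y = simulate_priority_mmc()
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("mean absolute error:", mean_absolute_error(y_te, model.predict(X_te)))

The thesis works with a calibrated system and a richer feature set; this sketch only shows how simulated queue states can serve as training data for a waiting-time regressor.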
dc.format.mimetype: application/pdf
dc.identifier.uri: http://hdl.handle.net/10388/13081
dc.subject: Queueing systems
dc.subject: Markov Chain
dc.subject: Dimensionality Reduction
dc.subject: Quasi-Birth-Death
dc.subject: Machine Learning
dc.title: Prediction of waiting time for entities in independent and merged queues using Markov chain and machine learning techniques
dc.type: Thesis
dc.type.material: text
thesis.degree.department: Mathematics and Statistics
thesis.degree.discipline: Mathematics
thesis.degree.grantor: University of Saskatchewan
thesis.degree.level: Masters
thesis.degree.name: Master of Science (M.Sc.)

Files

Original bundle
Name: DADGAR-THESIS-2019.pdf
Size: 3.32 MB
Format: Adobe Portable Document Format
License bundle
Name: LICENSE.txt
Size: 2.27 KB
Format: Plain Text
Description: