﻿<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD with MathML3 v1.2 20190208//EN" "http://dtd.nlm.nih.gov/publishing/3.0/journalpublishing3.dtd">
<article
    xmlns:mml="http://www.w3.org/1998/Math/MathML"
    xmlns:xlink="http://www.w3.org/1999/xlink" dtd-version="3.0" xml:lang="en" article-type="article">
  <front>
    <journal-meta>
      <journal-id journal-id-type="publisher-id">JAIBD</journal-id>
      <journal-title-group>
        <journal-title>Journal of Artificial Intelligence and Big Data</journal-title>
      </journal-title-group>
      <issn pub-type="epub">2771-2389</issn>
      <issn pub-type="ppub"></issn>
      <publisher>
        <publisher-name>Science Publications</publisher-name>
      </publisher>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.31586/jaibd.2022.1344</article-id>
      <article-id pub-id-type="publisher-id">JAIBD-1344</article-id>
      <article-categories>
        <subj-group subj-group-type="heading">
          <subject>Article</subject>
        </subj-group>
      </article-categories>
      <title-group>
        <article-title>
          Towards the Efficient Management of Cloud Resource Allocation: A Framework Based on Machine Learning
        </article-title>
      </title-group>
      <contrib-group>
<contrib contrib-type="author">
<name>
<surname>Mamidala</surname>
<given-names>Jaya Vardhani</given-names>
</name>
<xref rid="af1" ref-type="aff">1</xref>
<xref rid="af2" ref-type="aff">2</xref>
<xref rid="af2" ref-type="aff">2</xref>
<xref rid="af2" ref-type="aff">2</xref>
<xref rid="cr1" ref-type="corresp">*</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Enokkaren</surname>
<given-names>Sunil Jacob</given-names>
</name>
<xref rid="af3" ref-type="aff">3</xref>
<xref rid="af2" ref-type="aff">2</xref>
<xref rid="af2" ref-type="aff">2</xref>
<xref rid="af2" ref-type="aff">2</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Attipalli</surname>
<given-names>Avinash</given-names>
</name>
<xref rid="af4" ref-type="aff">4</xref>
<xref rid="af2" ref-type="aff">2</xref>
<xref rid="af2" ref-type="aff">2</xref>
<xref rid="af2" ref-type="aff">2</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Bitkuri</surname>
<given-names>Varun</given-names>
</name>
<xref rid="af5" ref-type="aff">5</xref>
<xref rid="af2" ref-type="aff">2</xref>
<xref rid="af2" ref-type="aff">2</xref>
<xref rid="af2" ref-type="aff">2</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Kendyala</surname>
<given-names>Raghuvaran</given-names>
</name>
<xref rid="af6" ref-type="aff">6</xref>
<xref rid="af2" ref-type="aff">2</xref>
<xref rid="af2" ref-type="aff">2</xref>
<xref rid="af2" ref-type="aff">2</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Kurma</surname>
<given-names>Jagan</given-names>
</name>
<xref rid="af7" ref-type="aff">7</xref>
<xref rid="af2" ref-type="aff">2</xref>
<xref rid="af2" ref-type="aff">2</xref>
<xref rid="af2" ref-type="aff">2</xref>
</contrib>
      </contrib-group>
<aff id="af1"><label>1</label> Department of Computer Science, University of Central Missouri, USA</aff>
<aff id="af2"><label>2</label> ADP, Solution Architect, USA</aff>
<aff id="af3"><label>3</label> Department of Computer Science, University of Bridgeport, USA</aff>
<aff id="af4"><label>4</label> Software Engineer, Stratford University, USA</aff>
<aff id="af5"><label>5</label> Department of Computer Science, University of Illinois at Springfield, USA</aff>
<aff id="af6"><label>6</label> Computer Information Systems, Christian Brothers University, USA</aff>
<author-notes>
<corresp id="c1">
<label>*</label>Corresponding author at: Department of Computer Science, University of Central Missouri, USA
</corresp>
</author-notes>
      <pub-date pub-type="epub">
        <day>27</day>
        <month>12</month>
        <year>2022</year>
      </pub-date>
      <volume>2</volume>
      <issue>1</issue>
      <history>
        <date date-type="received">
          <day>12</day>
          <month>09</month>
          <year>2022</year>
        </date>
        <date date-type="rev-recd">
          <day>30</day>
          <month>10</month>
          <year>2022</year>
        </date>
        <date date-type="accepted">
          <day>29</day>
          <month>11</month>
          <year>2022</year>
        </date>
        <date date-type="pub">
          <day>27</day>
          <month>12</month>
          <year>2022</year>
        </date>
      </history>
      <permissions>
        <copyright-statement>&#xa9; 2022 by the authors and Trend Research Publishing Inc.</copyright-statement>
        <copyright-year>2022</copyright-year>
        <license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
          <license-p>This work is licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0). http://creativecommons.org/licenses/by/4.0/</license-p>
        </license>
      </permissions>
      <abstract>
        In the constantly evolving world of cloud computing, appropriate resource allocation is essential both for keeping costs down and for ensuring an uninterrupted flow of applications and services. Because of its adaptability to specific tasks and human behavior, machine learning (ML) is a desirable choice for fulfilling those needs. Efficient cloud resource allocation is critical for optimizing performance and cost in cloud computing environments, and this study investigates the use of Long Short-Term Memory (LSTM) to improve the precision of resource allocation. According to the experimental data, the LSTM model achieved 97% accuracy, 97.5% precision, 98% recall, and a 97.8% F1-score. The confusion matrix demonstrates strong classification performance across several resource classes, while the accuracy and loss curves verify steady learning with minimal overfitting. A comparative study shows that the suggested LSTM model performs better than more conventional ML models such as Gradient Boosting (GB) and Logistic Regression (LR). These findings underscore the LSTM model&#x02019;s robustness and suitability for dynamic cloud environments, enabling more accurate forecasting and efficient resource management.
      </abstract>
      <kwd-group>
        <kwd>Cloud Computing</kwd>
        <kwd>Resource Allocation</kwd>
        <kwd>Machine Learning</kwd>
        <kwd>Reinforcement Learning</kwd>
        <kwd>Deep Q-Learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec1">
<title>Introduction</title><p>The ability of cloud-based information systems to store, process, and manage vast quantities of data makes them essential in today's digital world. With the ability to adjust to evolving task requirements, these systems provide on-demand, scalable tools [
<xref ref-type="bibr" rid="R1">1</xref>]. This renders them ideal for a diverse array of applications, including enterprise-level software and applications designed for individual users [
<xref ref-type="bibr" rid="R2">2</xref>,<xref ref-type="bibr" rid="R3">3</xref>]. However, this freedom also means that you must be skilled at managing resources to ensure the system operates efficiently, is cost-effective, and maintains user satisfaction. In cloud environments, resource allocation is typically determined by simple formulae or heuristics, however these approaches might not be able to effectively handle fluctuating demand [
<xref ref-type="bibr" rid="R4">4</xref>,<xref ref-type="bibr" rid="R5">5</xref>]. Resource allocation is one of the main issues with cloud computing, i.e., allocating virtualized computing resources (CPU, memory, storage, bandwidth) to contending tasks and users in an optimum manner [
<xref ref-type="bibr" rid="R6">6</xref>,<xref ref-type="bibr" rid="R7">7</xref>]. </p>
<p>In real-world setups, cloud environments are characterized by extremely dynamic and unpredictable workloads, where user demands may fluctuate rapidly over short time intervals. In such settings, traditional resource allocation techniques, which are usually rule-based or threshold-based policies, are not adequate [
<xref ref-type="bibr" rid="R8">8</xref>]. These are not adaptive and are slow to respond to sudden fluctuations in demand, resulting in resource utilization, overprovisioning, latency in services, and increased operational expenses [
<xref ref-type="bibr" rid="R9">9</xref>]. Machine learning (ML) has emerged as an effective solution to these challenges, as it enables the implementation of optimisation strategies, real-time decision-making, and predictive analytics. Additionally, the prediction of patterns and resource optimality can be facilitated by the use of ML algorithms, resulting in improved overall resource efficiency [
<xref ref-type="bibr" rid="R10">10</xref>]. ML has introduced transformative approaches to resource management. Predictive and adaptive resource allocation is facilitated by ML, which enhances efficiency and performance by utilising historical data and advanced algorithms.</p>
<title>1.1. Motivation and Contribution</title><p>The importance of cloud-based information systems in today's digital environment cannot be overstated, as they help organizations handle, store, and process large-scale data effectively. Despite their scalability and flexibility, a key issue remains: the optimal assignment of virtualized computing resources, such as CPU, memory, storage, and bandwidth, under highly dynamic and unpredictable workload patterns. Traditional resource allocation schemes, typically implemented through static assignment rules or threshold-based heuristics, often fail to respond dynamically to sudden demand changes, resulting in inefficient resource usage, overprovisioning, delayed responses, and other operational issues. This shortcoming explains why smarter, dynamic mechanisms should be sought. Driven by these issues, the use of ML has become prominent, providing data-driven methods for predictive analytics and real-time decision-making. By leveraging past usage patterns through ML-based approaches, dynamic and optimized resource administration is achieved, enhancing system efficiency, cost-effectiveness, and service dependability in cloud computing systems. This research makes the following contributions to the cloud environment:</p>
<list list-type="bullet">
<list-item><p>Employed a realistic cloud operations dataset with 19 features and 4,000 records, reflecting diverse resource allocation scenarios.</p></list-item>
<list-item><p>Implemented effective pre-processing steps, including missing-value handling, noise removal, and data standardization, to enhance prediction reliability.</p></list-item>
<list-item><p>Capitalised on the capacity of the LSTM model to learn sequence dynamics and long-term dependencies, which are essential for predicting cloud resource demands.</p></list-item>
<list-item><p>Designed the model to predict and allocate resources dynamically, improving efficiency in real-time cloud environments.</p></list-item>
<list-item><p>Evaluated model performance using multiple metrics (accuracy, precision, recall, and F1-score) for a thorough assessment.</p></list-item>
</list>
<title>1.2. Justification and Novelty</title><p>The use of the LSTM model is justified by its ability to capture temporal dependencies and patterns, which are critical factors in the effective allocation of cloud resources in a constantly evolving environment. In contrast to traditional models such as Logistic Regression (LR) and Gradient Boosting (GB), which do not account for dynamic relationships, LSTM can learn from historical trends and from patterns of usage growth and decline. The novelty of this study lies in the use of a deep learning (DL)-based sequential model to forecast cloud resources more precisely and with broader applicability. This approach enables more adaptive and intelligent allocation, reducing both over-provisioning and under-provisioning of resources, which is critical for optimizing performance and cost in cloud systems.</p>
<title>1.3. Paper Organization</title><p>The paper is organized as follows: Section II presents the literature on resource allocation using ML. Section III presents the research methodology in detail. The experimental results and comparative analysis are presented in Section IV. Section V concludes with limitations and future work.</p>
</sec><sec id="sec2">
<title>Literature Review</title><p>A wide range of significant research studies on Efficient Cloud Resource Allocation have been reviewed and analysed to guide and support the development of this work.</p>
<p>Chudasama and Bhavsar (2020) highlight the importance of resource elasticity in cloud applications. Traditional adaptive policies, such as threshold-based auto-scaling, may not be effective under dynamic workloads. They provide a method based on queuing theory and DL to forecast short-term computing resource consumption. The proposed model improves resource elasticity and performance metrics, outperforming the baseline model by 5% [
<xref ref-type="bibr" rid="R11">11</xref>].</p>
<p>Chen et al. (2019) propose a self-adaptive, self-learning approach to distributing resources for cloud-based software services. Using machine learning to build a QoS model from historical data, the approach predicts the QoS value for a given workload and resource allotment, and a genetic algorithm then makes resource allocation decisions automatically. The method has been tested on the RUBiS benchmark, achieving an accuracy above 90 percent and a 10 to 30 percent improvement in resource utilization [
<xref ref-type="bibr" rid="R12">12</xref>].</p>
<p>Rayan and Nah (2018) develop machine learning-based methodologies to predict the daily operational workload of cloud data centres. Three methodologies are investigated: RFR, SVR, and polynomial regression. According to the findings, RFR performs best, with root-mean-square errors of 11.68 for PMs and 4869.08 for PC. This assists in resource management, energy conservation, CPU and memory savings, and better service [
<xref ref-type="bibr" rid="R13">13</xref>].</p>
<p>Ataie et al. (2017) suggest a model for predicting the performance of jobs that process large data sets on commodity hardware clusters running MapReduce on Apache Hadoop. To optimise precision at minimal expense, the approach combines support vector regression with queuing networks. The experimental results suggest that accuracy is increased by 21% in comparison with machine learning methods that do not employ analytical models [
<xref ref-type="bibr" rid="R14">14</xref>].</p>
<p>Dai et al. (2016) propose a cloud computing multi-objective optimisation technique that aims to balance cost, availability, and performance for large-scale data applications. The method, following the analysis and modeling of the objectives involved, is 20% faster than conventional approaches, 15% more efficient in performance than other heuristic algorithms, and achieves a cost savings of 4-20% [
<xref ref-type="bibr" rid="R15">15</xref>].</p>
<p>A summary of recent studies on the efficient management of cloud resource allocation using ML can be found in Table <xref ref-type="table" rid="tab1">1</xref>, which presents the proposed models, datasets used, main findings, and challenges.</p>
<table-wrap id="tab1">
<label>Table 1</label>
<caption>
<p><b>Overview of Recent Studies on Cloud Resource Allocation Using Machine Learning</b></p>
</caption>

<table>
<thead>
<tr>
<th align="center"><bold>Author</bold></th>
<th align="center"><bold>Proposed Work</bold></th>
<th align="center"><bold>Dataset</bold></th>
<th align="center"><bold>Key Findings</bold></th>
<th align="center"><bold>Challenges/Recommendations</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td align="center">Chudasama and Bhavsar (2020)</td>
<td align="center">DL + queuing theory model for proactive auto-scaling</td>
<td align="center">University server logs</td>
<td align="center">Improved SLA violation prediction by 5%; enhances resource elasticity in hybrid clouds</td>
<td align="center">Static threshold auto-scaling fails under unpredictable loads; proactive, prediction-driven auto-scaling mechanisms are needed in hybrid cloud environments</td>
</tr>
<tr>
<td align="center">Chen et al. (2019)</td>
<td align="center">Self-adaptive, self-learning resource allocation for cloud-based software services, using machine learning for QoS modelling and genetic algorithms for optimization</td>
<td align="center">RUBiS benchmark</td>
<td align="center">QoS prediction accuracy &gt; 90%;<br/> 10&#x02013;30% improvement in resource utilization</td>
<td align="center">Traditional policy-driven methods lead to complexity and high administrative cost; recommends ML-driven automatic decision-making to adapt to dynamic environments</td>
</tr>
<tr>
<td align="center">Rayan and Nah (2018)</td>
<td align="center">ML-based workload prediction for cloud data centers (RFR, SVR, polynomial regression)</td>
<td align="center">Operational workload logs</td>
<td align="center">RFR achieved the lowest RMSE (11.68 for PMs, 4869.08 for PC) with a 2-second training time;<br/> enables proactive allocation and energy/resource efficiency</td>
<td align="center">Focused on prediction rather than dynamic real-time scheduling; accurate workload prediction should be integrated with adaptive scheduling/auto-scaling in large-scale environments</td>
</tr>
<tr>
<td align="center">Ataie et al. (2017)</td>
<td align="center">Hybrid methodology integrating support vector regression (SVR) and queuing networks to forecast job execution time</td>
<td align="center">Hadoop MapReduce job traces</td>
<td align="center">Achieved a 21% improvement in prediction accuracy over standalone ML methods</td>
<td align="center">Accuracy must be balanced against computational cost; integration of analytical models and ML is recommended for better resource management</td>
</tr>
<tr>
<td align="center">Dai et al. (2016)</td>
<td align="center">Multi-objective optimization method balancing cost, availability, and performance for cloud-based Big Data applications; evaluated on a testbed</td>
<td align="center">Experimental setup</td>
<td align="center">Execution time improved by 20% over traditional methods; 15% higher performance than heuristics; 4&#x02013;20% cost savings</td>
<td align="center">Emphasizes the need for fine-grained resource allocation in cloud infrastructure; recommends multi-objective optimization to handle competing objectives</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec><sec id="sec3">
<title>Research Methodology</title><p>The proposed cloud resource allocation methodology begins with data collection, sampling 19 columns and 4,000 rows of data that represent the complexity of cloud operations. This is followed by data pre-processing, which entails treating missing values by deletion or imputation and eliminating noise to remove irrelevant or redundant data. Data standardization is then performed so that differing feature scales do not degrade prediction accuracy. To test the model's performance, the dataset is divided 80:20 into training and test sets. The Long Short-Term Memory (LSTM) model, which offers a significant advantage in handling temporal correlations and sequential patterns, is then applied to identify optimal, adaptive cloud resource allocation. Finally, accuracy, precision, recall, and F1-score are used to evaluate the model's performance in the context of ML-based cloud resource allocation. Figure <xref ref-type="fig" rid="fig1">1</xref> displays the flowchart of the phases of the resource allocation methodology.</p>
<fig id="fig1">
<label>Figure 1</label>
<caption>
<p>Proposed flowchart for Cloud Resource Allocation</p>
</caption>
<graphic xlink:href="1344.fig.001" />
</fig><p>The steps of the proposed flowchart for efficient cloud resource allocation are described below.</p>
<title>3.1. Data Collection</title><p>Data collection involves gathering comprehensive historical data from cloud service providers, comprising operating expenditures, workload profiles, activity levels, and resource utilisation performance data. The dataset, used as the input source for training the proposed method, has 19 columns (features). Its row count of 4,000 reflects the intricacy of the resource allocation mechanism.</p>
<title>3.2. Data Pre-Processing</title><p>After collection, the data undergoes a systematic pre-processing stage. It includes data cleaning to remove missing, noisy, and inconsistent records, and data transformation to turn raw data into meaningful features usable by ML models. To enable accurate model evaluation and ensure that the outcomes generalise to unseen scenarios, the cleaned data is then separated into training, validation, and testing subsets. The pre-processing steps are as follows:</p>
<p><bold>Handle missing values:</bold> Methods for handling missing data include imputation, which substitutes statistical estimators such as the mode, mean, or median for the missing values, and deletion, which removes rows containing missing values. This is a crucial stage in determining the calibre of data used to train ML models.</p>
<p><bold>Remove noise:</bold> Noise elimination detects and deletes irrelevant or unnecessary data points that do not contribute to the overall structure of the dataset. Several strategies can mitigate noise in cloud resource allocation, including characterising the nature of the noise, choosing an appropriate cloud service model, and using dedicated or isolated resources.</p>
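<p>To make these pre-processing steps concrete, the following is a minimal Python sketch of missing-value handling and noise removal. The file name, the use of pandas, and the clipping thresholds are illustrative assumptions; the 19-feature dataset used in this study is not published, so no real column names are hard-coded.</p>
<preformat>
# Illustrative pre-processing sketch (file name and library choice assumed).
import pandas as pd

df = pd.read_csv("cloud_operations.csv")  # assumed: 4,000 rows x 19 columns

# Handle missing values: impute numeric columns with the median and
# categorical columns with the mode, then drop any rows still incomplete.
for col in df.columns:
    if df[col].dtype.kind in "if":                       # int/float columns
        df[col] = df[col].fillna(df[col].median())
    else:
        df[col] = df[col].fillna(df[col].mode().iloc[0])
df = df.dropna()

# Remove noise: drop exact duplicates and clip numeric outliers to the
# 1st/99th percentiles (one simple noise-reduction strategy).
df = df.drop_duplicates()
num_cols = df.select_dtypes(include="number").columns
df[num_cols] = df[num_cols].clip(df[num_cols].quantile(0.01),
                                 df[num_cols].quantile(0.99), axis=1)
</preformat>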
<title>3.3. Data Standardization</title><p>Each numerical feature is standardized using the z-score transformation in Equation (1), which rescales the feature to zero mean and unit variance so that features with large numerical ranges do not dominate the model.</p>

<disp-formula id="FD1"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><msup><mrow><mi>X</mi></mrow><mrow><mi mathvariant="normal">'</mi></mrow></msup><mo>=</mo><mfrac><mrow><mi>x</mi><mo>-</mo><mi>μ</mi></mrow><mrow><mi>σ</mi></mrow></mfrac></mrow></semantics></math></div><div class="l"><label>(1)</label></div></div></disp-formula><p>The original feature value is denoted by x, while the normalised value is represented by <math><semantics><mrow><msup><mrow><mi>X</mi></mrow><mrow><mi>'</mi></mrow></msup></mrow></semantics></math> in this equation. The standard deviation and mean are denoted by &#x26;#x0d835;&#x26;#x0df0e; and &#x26;#x0d835;&#x26;#x0df07;, respectively. The normalisation process can mitigate the detrimental impact of features with high numerical values that would otherwise adversely affect performance.</p>
<title>3.4. Data Splitting</title><p>Data splitting divides the dataset into two subsets: 80% for training and 20% for testing. Training therefore comprises eighty percent of the data, while twenty percent is reserved for assessment; a portion of the training data is additionally held out for validation, as sketched below.</p>
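<p>The split can be reproduced with scikit-learn as follows. Holding out a further 20% of the training portion for validation is an assumption, since the exact validation protocol is not specified in the text.</p>
<preformat>
# 80:20 train/test split, with a validation subset carved from the training data.
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, random_state=42, stratify=y)
X_train, X_val, y_train, y_val = train_test_split(
    X_train, y_train, test_size=0.20, random_state=42, stratify=y_train)
</preformat>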
<title>3.5. Long Short-Term Memory (LSTM) Model</title><p>LSTM is capable of learning long-term dependencies in sequences, which is why it is particularly popular for sequence classification. LSTM classifiers are a type of recurrent neural network (RNN), a layered network in which the outputs of the preceding step are fed as inputs to the subsequent step. LSTM has feedback connections that enable it to operate on sequences of data rather than merely individual data points [
<xref ref-type="bibr" rid="R16">16</xref>,<xref ref-type="bibr" rid="R17">17</xref>]. An LSTM node is composed of a cell, input gate, output gate, and forget gate. Three gates control how information moves through the cell, which is in charge of holding onto values across time. Each memory block in the LSTM layers contains three multiplicative gates and is connected recurrently. To ensure that temporary data is used for a predetermined period, gates continuously write, read, and reset. The input of the unit, <math><semantics><mrow><msub><mrow><mi>x</mi></mrow><mrow><mi>t</mi></mrow></msub><mo>,</mo><msub><mrow><mi>h</mi></mrow><mrow><mi>t</mi><mo>-</mo><mn>1</mn></mrow></msub><mo>,</mo><msub><mrow><mi>c</mi></mrow><mrow><mi>t</mi><mo>-</mo><mn>1</mn></mrow></msub></mrow></semantics></math> and the output of the unit, <math><semantics><mrow><msub><mrow><mi>h</mi></mrow><mrow><mi>t</mi></mrow></msub><mo>,</mo><msub><mrow><mi> </mi><mi>c</mi></mrow><mrow><mi>t</mi></mrow></msub></mrow></semantics></math>We updated as follows Equation from (2) to (7):</p>
<p>Gates: </p>

<disp-formula id="FD2"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><msub><mrow><mi>i</mi></mrow><mrow><mi>t</mi></mrow></msub><mo>=</mo><mi>σ</mi><mfenced separators="|"><mrow><msub><mrow><mi>W</mi></mrow><mrow><mi>i</mi></mrow></msub><msub><mrow><mi>x</mi></mrow><mrow><mi>t</mi></mrow></msub><mo>+</mo><mi mathvariant="normal"> </mi><msub><mrow><mi>U</mi></mrow><mrow><mi>i</mi></mrow></msub><msub><mrow><mi>h</mi></mrow><mrow><mi>t</mi><mo>-</mo><mn>1</mn></mrow></msub><mo>+</mo><mi mathvariant="normal"> </mi><msub><mrow><mi>b</mi></mrow><mrow><mi>i</mi></mrow></msub></mrow></mfenced></mrow></semantics></math></div><div class="l"><label>(2)</label></div></div></disp-formula>
<disp-formula id="FD3"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><msub><mrow><mi>f</mi></mrow><mrow><mi>t</mi></mrow></msub><mo>=</mo><mi>σ</mi><mfenced separators="|"><mrow><msub><mrow><mi>W</mi></mrow><mrow><mi>f</mi></mrow></msub><msub><mrow><mi>x</mi></mrow><mrow><mi>t</mi></mrow></msub><mo>+</mo><mi mathvariant="normal"> </mi><msub><mrow><mi>U</mi></mrow><mrow><mi>i</mi></mrow></msub><msub><mrow><mi>h</mi></mrow><mrow><mi>t</mi><mo>-</mo><mn>1</mn></mrow></msub><mo>+</mo><mi mathvariant="normal"> </mi><msub><mrow><mi>b</mi></mrow><mrow><mi>f</mi></mrow></msub></mrow></mfenced></mrow></semantics></math></div><div class="l"><label>(3)</label></div></div></disp-formula>
<disp-formula id="FD4"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><msub><mrow><mi>o</mi></mrow><mrow><mi>t</mi></mrow></msub><mo>=</mo><mi>σ</mi><mfenced separators="|"><mrow><msub><mrow><mi>W</mi></mrow><mrow><mi>o</mi></mrow></msub><msub><mrow><mi>x</mi></mrow><mrow><mi>t</mi></mrow></msub><mo>+</mo><mi mathvariant="normal"> </mi><msub><mrow><mi>U</mi></mrow><mrow><mi>o</mi></mrow></msub><msub><mrow><mi>h</mi></mrow><mrow><mi>t</mi><mo>-</mo><mn>1</mn></mrow></msub><mo>+</mo><mi mathvariant="normal"> </mi><msub><mrow><mi>b</mi></mrow><mrow><mi>o</mi></mrow></msub></mrow></mfenced></mrow></semantics></math></div><div class="l"><label>(4)</label></div></div></disp-formula><p>Input transform:</p>

<disp-formula id="FD5"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><msub><mrow><mi>g</mi></mrow><mrow><mi>t</mi></mrow></msub><mo>=</mo><mi>t</mi><mi>a</mi><mi>n</mi><mi>h</mi><mfenced separators="|"><mrow><msub><mrow><mi>W</mi></mrow><mrow><mi>g</mi></mrow></msub><msub><mrow><mi>x</mi></mrow><mrow><mi>t</mi></mrow></msub><mo>+</mo><mi mathvariant="normal"> </mi><msub><mrow><mi>U</mi></mrow><mrow><mi>g</mi></mrow></msub><msub><mrow><mi>h</mi></mrow><mrow><mi>t</mi><mo>-</mo><mn>1</mn></mrow></msub><mo>+</mo><mi mathvariant="normal"> </mi><msub><mrow><mi>b</mi></mrow><mrow><mi>g</mi></mrow></msub></mrow></mfenced></mrow></semantics></math></div><div class="l"><label>(5)</label></div></div></disp-formula><p>State update</p>

<disp-formula id="FD6"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><msub><mrow><mi mathvariant="normal">c</mi></mrow><mrow><mi mathvariant="normal">t</mi></mrow></msub><mo>=</mo><mi mathvariant="normal"> </mi><msub><mrow><mi mathvariant="normal">f</mi></mrow><mrow><mi mathvariant="normal">t</mi></mrow></msub><mo>⊙</mo><msub><mrow><mi mathvariant="normal">c</mi></mrow><mrow><mi mathvariant="normal">t</mi><mo>-</mo><mn>1</mn></mrow></msub><mo>+</mo><mi mathvariant="normal"> </mi><msub><mrow><mi mathvariant="normal">i</mi></mrow><mrow><mi mathvariant="normal">t</mi></mrow></msub><mo>⊙</mo><msub><mrow><mi mathvariant="normal">g</mi></mrow><mrow><mi mathvariant="normal">t</mi></mrow></msub></mrow></semantics></math></div><div class="l"><label>(6)</label></div></div></disp-formula>
<disp-formula id="FD7"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><msub><mrow><mi mathvariant="normal">h</mi></mrow><mrow><mi mathvariant="normal">t</mi></mrow></msub><mo>=</mo><mi mathvariant="normal"> </mi><msub><mrow><mi mathvariant="normal">o</mi></mrow><mrow><mi mathvariant="normal">t</mi></mrow></msub><mo>⊙</mo><msub><mrow><mi mathvariant="normal">t</mi><mi mathvariant="normal">a</mi><mi mathvariant="normal">n</mi><mi mathvariant="normal">h</mi><mo>⁡</mo><mo>(</mo><mi mathvariant="normal">c</mi></mrow><mrow><mi mathvariant="normal">t</mi></mrow></msub><mo>)</mo></mrow></semantics></math></div><div class="l"><label>(7)</label></div></div></disp-formula><p>In the previous equations, element-wise multiplication and the logistic sigmoid function are represented by &#x26;#x0d835;&#x26;#x0df0e; and &#x26;#x02299;, respectively. An input gate <math><semantics><mrow><msub><mrow><mi>i</mi></mrow><mrow><mi>t</mi></mrow></msub></mrow></semantics></math>, A forget gate <math><semantics><mrow><msub><mrow><mi>f</mi></mrow><mrow><mi>t</mi></mrow></msub></mrow></semantics></math>, an output gate <math><semantics><mrow><msub><mrow><mi>o</mi></mrow><mrow><mi>t</mi></mrow></msub></mrow></semantics></math>, a hidden unit <math><semantics><mrow><msub><mrow><mi>h</mi></mrow><mrow><mi>t</mi></mrow></msub></mrow></semantics></math> , and a memory cell <math><semantics><mrow><msub><mrow><mi>c</mi></mrow><mrow><mi>t</mi></mrow></msub></mrow></semantics></math> are present in the LSTM unit at each time step t. The learnt parameters are W and U, and the added bias is denoted by (b). The input gate regulates how much each unit is updated, the forget gate regulates how much the memory cell is expunged, and the output gate regulates how much of the internal memory state is disclosed.</p>
<title>3.6. Evaluation Metrics</title><p>After the pre-processing and modelling phases, the results are compared by evaluating accuracy and F1-scores. A confusion matrix is employed to determine the F1-score, recall, accuracy, and precision. These metric values are derived from the model's predictions on the held-out subset [
<xref ref-type="bibr" rid="R18">18</xref>]. The highest True Negative (TN) and True Positive (TP) values are preferred. True Negative is a term that denotes situations in which the actual and anticipated data are both negative (0). True Positive denotes that the actual and anticipated data are both true positives (1). The equations below give the following of the matrix equation.</p>
<p><bold>Accuracy:</bold> This statistic is defined as the number of correctly predicted samples divided by the total number of samples in a dataset; it is given as Equation (8):</p>

<disp-formula id="FD8"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><mi>A</mi><mi>c</mi><mi>c</mi><mi>u</mi><mi>r</mi><mi>a</mi><mi>c</mi><mi>y</mi><mo>=</mo><mfrac><mrow><mi>T</mi><mi>P</mi><mo>+</mo><mi>T</mi><mi>N</mi></mrow><mrow><mi>T</mi><mi>P</mi><mo>+</mo><mi>F</mi><mi>p</mi><mo>+</mo><mi>T</mi><mi>N</mi><mo>+</mo><mi>F</mi><mi>N</mi></mrow></mfrac></mrow></semantics></math></div><div class="l"><label>(8)</label></div></div></disp-formula><p><bold>Precision:</bold> Precision is a measure that determines the precision with which a given model generates optimistic forecasts. The statistic measures the proportion of positively recognised instances relative to the overall count of instances that were anticipated to test positive, it is expressed as Equation (9):</p>

<disp-formula id="FD9"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><mi>P</mi><mi>r</mi><mi>e</mi><mi>c</mi><mi>i</mi><mi>s</mi><mi>i</mi><mi>o</mi><mi>n</mi><mo>=</mo><mfrac><mrow><mi>T</mi><mi>P</mi></mrow><mrow><mi>T</mi><mi>P</mi><mo>+</mo><mi>F</mi><mi>P</mi></mrow></mfrac></mrow></semantics></math></div><div class="l"><label>(9)</label></div></div></disp-formula><p><bold>Recall:</bold> This metric, also known as TPR or sensitivity, is a metric that quantifies the accuracy of the model in classifying positive samples from all possible positive samples. Mathematically, it may be expressed as Equation (10):</p>

<disp-formula id="FD10"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><mi>R</mi><mi>e</mi><mi>c</mi><mi>a</mi><mi>l</mi><mi>l</mi><mo>=</mo><mfrac><mrow><mi>T</mi><mi>P</mi></mrow><mrow><mi>T</mi><mi>P</mi><mo>+</mo><mi>F</mi><mi>N</mi></mrow></mfrac></mrow></semantics></math></div><div class="l"><label>(10)</label></div></div></disp-formula><p><bold>F1 score:</bold> F1 scores combine accuracy and recall into one statistic, making them a suitable way to evaluate the model's performance. Mathematically, it is given as Equation (11):</p>

<disp-formula id="FD11"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><mi>F</mi><mn>1</mn><mo>-</mo><mi>s</mi><mi>c</mi><mi>o</mi><mi>r</mi><mi>e</mi><mo>=</mo><mn>2</mn><mo>×</mo><mfrac><mrow><mi>P</mi><mi>r</mi><mi>e</mi><mi>c</mi><mi>i</mi><mi>s</mi><mi>i</mi><mi>o</mi><mi>n</mi><mo>×</mo><mi>R</mi><mi>e</mi><mi>c</mi><mi>a</mi><mi>l</mi><mi>l</mi></mrow><mrow><mi>P</mi><mi>r</mi><mi>e</mi><mi>c</mi><mi>i</mi><mi>s</mi><mi>i</mi><mi>o</mi><mi>n</mi><mo>+</mo><mi>R</mi><mi>e</mi><mi>c</mi><mi>a</mi><mi>l</mi><mi>l</mi></mrow></mfrac></mrow></semantics></math></div><div class="l"><label>(11)</label></div></div></disp-formula><p>In conclusion, the model's accuracy and its general propensity to accurately predict the objective variable are evaluated by all of these measures.</p>
</sec><sec id="sec4">
<title>Results and Discussion</title><p>This section presents the experimental findings and the resource allocation simulation environment. The experimental platform is configured with an Intel Core i7-6500U CPU, 8 GB RAM, and 1 TB storage. Table <xref ref-type="table" rid="tab2">2</xref> displays the experimental outcomes of the suggested LSTM model for cloud resource allocation. The model is highly efficient on all essential performance indicators, achieving 97% accuracy, 97.5% precision, 98% recall, and a 97.8% F1-score. These results demonstrate that the LSTM model can accurately anticipate resource consumption with high recall and few false positive (FP) and false negative (FN) occurrences, underscoring the model's effectiveness and appropriateness for the dynamic and complex nature of cloud computing environments.</p>
<table-wrap id="tab2">
<label>Table 2</label>
<caption>
<p><b>Experiment Results of the Proposed Model for Cloud Resource Allocation</b></p>
</caption>

<table>
<thead>
<tr>
<th align="center"><bold>Performance Metric</bold></th>
<th align="center"><bold>Long Short-Term Memory (LSTM) (%)</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td align="center">Accuracy</td>
<td align="center">97</td>
</tr>
<tr>
<td align="center">Precision</td>
<td align="center">97.5</td>
</tr>
<tr>
<td align="center">Recall</td>
<td align="center">98</td>
</tr>
<tr>
<td align="center">F1-score</td>
<td align="center">97.8</td>
</tr>
</tbody>
</table>
</table-wrap><fig id="fig2">
<label>Figure 2</label>
<caption>
<p>Accuracy Curves for the LSTM Model</p>
</caption>
<graphic xlink:href="1344.fig.002" />
</fig><p>As shown in Figure <xref ref-type="fig" rid="fig2">2</xref>, the LSTM model's training and validation accuracy are plotted over 100 training epochs. The two curves exhibit a consistent upward trend, indicating that the model was progressively learning and improving over time. Training accuracy is generally higher than validation accuracy, although the difference is minor, as expected; this is a sign that the model is not overfitting to a significant extent. The validation accuracy tracks the training curve and tends to converge to it in the concluding phases of training. This nearly flawless outcome demonstrates how well the LSTM model classifies and generalises to previously unobserved data.</p>
<fig id="fig3">
<label>Figure 3</label>
<caption>
<p>Loss Curves for the LSTM Model</p>
</caption>
<graphic xlink:href="1344.fig.003" />
</fig><p>Figure 3 shows the LSTM model's training and validation loss across 100 epochs. The decreasing trend of both curves shows that learning has occurred and the model is converging. The training and validation losses are initially substantial but decrease rapidly over the first epochs, showing the model's ability to identify significant patterns in the data. As training progresses, the loss reduction becomes more gradual, ultimately stabilising at or near zero. The validation loss is marginally greater than the training loss, which implies adequate generalisation and minimal overfitting. The close alignment of both curves confirms the LSTM model's robust predictive performance and sustained learning.</p>
<fig id="fig4">
<label>Figure 4</label>
<caption>
<p>Confusion Matrix for LSTM Model</p>
</caption>
<graphic xlink:href="1344.fig.004" />
</fig><p>The efficacy of the multi-class classification model across three classes (0, 1, and 2) is illustrated by the confusion matrix in Figure <xref ref-type="fig" rid="fig4">4</xref>. The diagonal elements (118, 91, and 110) represent the correctly predicted cases for each class, indicating strong overall performance. However, some misclassifications are present: 23 instances of class 1 were incorrectly classified as class 0 and 31 as class 2, while 11 class 2 cases were mistaken for class 1 and 8 class 2 cases for class 0. Class 0 showed very little ambiguity, with only five and three instances misclassified. The model performs best at recognising classes 0 and 2, while class 1 accounts for the most classification errors, indicating a possible area for improvement in differentiating this class.</p>
<table-wrap id="tab3">
<label>Table 3</label>
<caption>
<p><b>Comparison of Different Cloud Resource Allocation Models Using Machine Learning</b></p>
</caption>

<table>
<thead>
<tr>
<th align="center"><bold>Models</bold></th>
<th align="center"><bold>Accuracy (%)</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td align="center">Gradient Boosting (GB) [19]</td>
<td align="center">92</td>
</tr>
<tr>
<td align="center">Logistic Regression (LR) [20]</td>
<td align="center">95</td>
</tr>
<tr>
<td align="center">Proposed LSTM Model</td>
<td align="center">97</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>In Table <xref ref-type="table" rid="tab3">3</xref>, the effectiveness of three ML models (GB, LR, and LSTM) for cloud resource allocation is compared, with accuracy as the assessment criterion. With accuracy rates of 92% and 95%, respectively, the GB and LR models demonstrated their capacity to handle structured data and perform reasonably well in resource allocation prediction. Nevertheless, although these models capture linear relationships and boosting-based enhancements accurately, they may be inadequate for the complex temporal patterns observed in dynamic cloud environments. Conversely, the LSTM model performed better, achieving 97% accuracy, which reflects its superior ability to learn and interpret long-term dependencies and nonlinear relationships in sequential data. This makes LSTM especially suitable for forecasting resources whose usage varies over time and depends on specific needs.</p>
<p>The suggested LSTM model offers substantial benefits for cloud resource allocation because of its ability to learn and model temporal patterns in the usage dataset. LSTM handles long-term dependencies better than traditional models, which is essential in dynamic, time-dependent clouds. This results in greater prediction accuracy, as shown in the experimental findings, and more accurate forecasting of resource requirements. Consequently, it helps reduce both over-provisioning and under-provisioning, enabling optimal use of resources.</p>
</sec><sec id="sec5">
<title>Conclusion and Future Study</title><p>Reinforcement learning-based techniques have entered the industry as a result of cloud systems' requirement for dynamic resource allocation. Cloud computing services offer users on-demand resources for diverse workloads that demand differing service performance. Nevertheless, changing workloads, shifting resource needs, and the conflict between effective performance and cost-effectiveness can place a severe burden on resource management on such platforms. This investigation reveals that ML and DL models show tremendous potential for accurate and efficient cloud resource allocation. Among the tested models, LSTM displayed the highest accuracy (97%), compared with GB (92%) and LR (95%). Although the suggested LSTM model for cloud resource allocation has shown promising results, several limitations apply. The model's effectiveness may be affected by the size and quality of the dataset, as no more than 4,000 records were employed in this study. Future research will involve larger real-time datasets and experiments on hybrid models, such as LSTM-Transformer, to improve accuracy and scalability in dynamic cloud settings.</p>
</sec>
  </body>
  <back>
    <ref-list>
      <title>References</title>
      
<ref id="R1">
<label>[1]</label>
<mixed-citation publication-type="other">P. Peddi and D. S. Arumugam, "Comparative study on cloud optimized resource and prediction using machine learning algorithm," Anveshana's Int. J. Res. Eng. Appl. Sci., vol. 1, no. 3, 2016.
</mixed-citation>
</ref>
<ref id="R2">
<label>[2]</label>
<mixed-citation publication-type="other">F. Nzanywayingoma and Y. Yang, "Efficient resource management techniques in cloud computing environment: a review and discussion," Int. J. Comput. Appl., 2019, doi: 10.1080/1206212X.2017.1416558.
</mixed-citation>
</ref>
<ref id="R3">
<label>[3]</label>
<mixed-citation publication-type="other">C. Riedelsheimer and A. E. Melchinger, "Optimizing the allocation of resources for genomic selection in one breeding cycle," Theor. Appl. Genet., 2013, doi: 10.1007/s00122-013-2175-9.
</mixed-citation>
</ref>
<ref id="R4">
<label>[4]</label>
<mixed-citation publication-type="other">M. Abbasi, M. Yaghoobikia, M. Rafiee, A. Jolfaei, and M. R. Khosravi, "Efficient resource management and workload allocation in fog-cloud computing paradigm in IoT using learning classifier systems," Comput. Commun., vol. 153, pp. 217-228, Mar. 2020, doi: 10.1016/j.comcom.2020.02.017.
</mixed-citation>
</ref>
<ref id="R5">
<label>[5]</label>
<mixed-citation publication-type="other">M. Aibin, "LSTM for Cloud Data Centers Resource Allocation in Software-Defined Optical Networks," in 2020 11th IEEE Annual Ubiquitous Computing, Electronics &#x00026; Mobile Communication Conference (UEMCON), 2020, pp. 162-167. doi: 10.1109/UEMCON51285.2020.9298133.
</mixed-citation>
</ref>
<ref id="R6">
<label>[6]</label>
<mixed-citation publication-type="other">A. Beloglazov, J. Abawajy, and R. Buyya, "Energy-aware resource allocation heuristics for efficient management of data centers for Cloud computing," Futur. Gener. Comput. Syst., 2012, doi: 10.1016/j.future.2011.04.017.
</mixed-citation>
</ref>
<ref id="R7">
<label>[7]</label>
<mixed-citation publication-type="other">M. Zamzam, T. Elshabrawy, and M. Ashour, "Resource Management using Machine Learning in Mobile Edge Computing: A Survey," in Proceedings - 2019 IEEE 9th International Conference on Intelligent Computing and Information Systems, ICICIS 2019, 2019. doi: 10.1109/ICICIS46948.2019.9014733.
</mixed-citation>
</ref>
<ref id="R8">
<label>[8]</label>
<mixed-citation publication-type="other">N. Liu et al., "A Hierarchical Framework of Cloud Resource Allocation and Power Management Using Deep Reinforcement Learning," in Proceedings - International Conference on Distributed Computing Systems, 2017. doi: 10.1109/ICDCS.2017.123.
</mixed-citation>
</ref>
<ref id="R9">
<label>[9]</label>
<mixed-citation publication-type="other">A. Yousafzai et al., "Cloud resource allocation schemes: review, taxonomy, and opportunities," Knowl. Inf. Syst., 2017, doi: 10.1007/s10115-016-0951-y.
</mixed-citation>
</ref>
<ref id="R10">
<label>[10]</label>
<mixed-citation publication-type="other">F. D. la Prieta, S. Rodr&#x000ed;guez-Gonz&#x000e1;lez, P. Chamoso, Y. Demazeau, and J. M. Corchado, "An Intelligent Approach to Allocating Resources within an Agent-Based Cloud Computing Platform," Appl. Sci., vol. 10, no. 12, p. 4361, Jun. 2020, doi: 10.3390/app10124361.
</mixed-citation>
</ref>
<ref id="R11">
<label>[11]</label>
<mixed-citation publication-type="other">V. Chudasama and M. Bhavsar, "A dynamic prediction for elastic resource allocation in hybrid cloud environment," Scalable Comput., vol. 21, no. 4, pp. 661-672, 2020, doi: 10.12694:/scpe.v21i4.1805.
</mixed-citation>
</ref>
<ref id="R12">
<label>[12]</label>
<mixed-citation publication-type="other">X. Chen, J. Lin, B. Lin, T. Xiang, Y. Zhang, and G. Huang, "Self&#x02010;learning and self&#x02010;adaptive resource allocation for cloud&#x02010;based software services," Concurr. Comput. Pract. Exp., vol. 31, no. 23, Dec. 2019, doi: 10.1002/cpe.4463.
</mixed-citation>
</ref>
<ref id="R13">
<label>[13]</label>
<mixed-citation publication-type="other">A. Rayan and Y. Nah, "Resource prediction for big data processing in a cloud data center: A machine learning approach," IEIE Trans. Smart Process. Comput., vol. 7, no. 6, pp. 478-488, 2018, doi: 10.5573/IEIESPC.2018.7.6.478.
</mixed-citation>
</ref>
<ref id="R14">
<label>[14]</label>
<mixed-citation publication-type="other">E. Ataie, E. Gianniti, D. Ardagna, and A. Movaghar, "A combined analytical modeling machine learning approach for performance prediction of MapReduce jobs in cloud environment," Proc. - 18th Int. Symp. Symb. Numer. Algorithms Sci. Comput. SYNASC 2016, pp. 431-439, 2017, doi: 10.1109/SYNASC.2016.072.
</mixed-citation>
</ref>
<ref id="R15">
<label>[15]</label>
<mixed-citation publication-type="other">W. Dai, L. Qiu, A. Wu, and M. Qiu, "Cloud Infrastructure Resource Allocation for Big Data Applications," IEEE Trans. Big Data, vol. 4, no. 3, pp. 313-324, Sep. 2018, doi: 10.1109/TBDATA.2016.2597149.
</mixed-citation>
</ref>
<ref id="R16">
<label>[16]</label>
<mixed-citation publication-type="other">Y. Liu, L. L. Njilla, J. Wang, and H. Song, "An LSTM Enabled Dynamic Stackelberg Game Theoretic Method for Resource Allocation in the Cloud," in 2019 International Conference on Computing, Networking and Communications, ICNC 2019, 2019. doi: 10.1109/ICCNC.2019.8685670.
</mixed-citation>
</ref>
<ref id="R17">
<label>[17]</label>
<mixed-citation publication-type="other">G. Park and M. Song, "Prediction-based resource allocation using LSTM and minimum cost and maximum flow algorithm," in Proceedings - 2019 International Conference on Process Mining, ICPM 2019, 2019. doi: 10.1109/ICPM.2019.00027.
</mixed-citation>
</ref>
<ref id="R18">
<label>[18]</label>
<mixed-citation publication-type="other">J. B. Wang et al., "A Machine Learning Framework for Resource Allocation Assisted by Cloud Computing," IEEE Netw., vol. 32, no. 2, pp. 144-151, 2018, doi: 10.1109/MNET.2018.1700293.
</mixed-citation>
</ref>
<ref id="R19">
<label>[19]</label>
<mixed-citation publication-type="other">S. D. Pasham, "Dynamic Resource Provisioning in Cloud Environments Using Predictive Analytics," vol. 4, no. 2, pp. 1-28, 2018.
</mixed-citation>
</ref>
<ref id="R20">
<label>[20]</label>
<mixed-citation publication-type="other">J. Zhang, N. Xie, X. Zhang, K. Yue, W. Li, and D. Kumar, "Machine learning based resource allocation of cloud computing in auction," Comput. Mater. Contin., 2018, doi: 10.3970/cmc.2018.03728.
</mixed-citation>
</ref>
    </ref-list>
  </back>
</article>