Authors:

(1) Rasoul Samani, School of Electrical and Computer Engineering, Isfahan University of Technology (this author contributed equally to this work);

(2) Mohammad Dehghani, School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran (this author contributed equally to this work) (dehghani.mohammad@ut.ac.ir);

(3) Fahime Shahrokh, School of Electrical and Computer Engineering, Isfahan University of Technology.

Table of Links

Abstract and 1. Introduction
2. Related Works
3. Methodology and 3.1 Data
3.2 Data preprocessing
3.3. Predictive models
4. Evaluation
4.1. Evaluation metrics
4.2. Results and discussion
5. Conclusion and References

3.3. Predictive models

In this study, multiple data mining and deep learning algorithms were employed to build the prediction models.

Logistic regression: Logistic regression, a statistical technique, is widely used for binary classification tasks, particularly in health sciences studies where the outcome of interest is a disease state (diseased or healthy) or a decision (yes or no) [38].

Random forest: Random forest combines numerous decision trees, aggregating the predictions of the individual trees to forecast the value of a variable [39].

KNN: KNN determines the category or value of a given sample from the values of the nearest samples in the training data [40].

SVM: SVM transforms the data into a high-dimensional feature space and constructs separating hyperplanes that maximize the margin between the data points and the hyperplane, delineating the data into distinct classes [41]. Ensuring clear boundaries between classes in the feature space yields robust classification and improves the model's ability to generalize to unseen data.

Gaussian Naive Bayes: Naive Bayes is a probabilistic classifier that applies Bayes' theorem to estimate the probability that a given set of features belongs to a specific label. It computes the conditional probability of event A given event B from the individual probabilities of A and B and the conditional probability of B given A, under the assumption that the features are independent [42]. Gaussian Naive Bayes is a variant of the Naive Bayes classifier that assumes the features follow a Gaussian distribution.

By employing these diverse algorithms, this study aims to explore their effectiveness in predicting patient readmission rates and to identify the most suitable approach for the given dataset and research objectives.

This paper is available on arxiv under CC BY-NC-ND 4.0 DEED license.
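As a concrete illustration of the classifiers listed in Section 3.3, the sketch below shows how the five models could be trained and compared with scikit-learn. The file name, column names, and split settings are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch: fitting the classifiers from Section 3.3 with scikit-learn.
# The CSV path, "readmitted" label column, and split parameters are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical preprocessed dataset with a binary "readmitted" label.
data = pd.read_csv("readmission_preprocessed.csv")
X = data.drop(columns=["readmitted"])
y = data["readmitted"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Standardize features; KNN and SVM are sensitive to feature magnitudes.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

models = {
    "Logistic regression": LogisticRegression(max_iter=1000),
    "Random forest": RandomForestClassifier(n_estimators=100, random_state=42),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "Gaussian Naive Bayes": GaussianNB(),
}

# Fit each model and report accuracy and F1 on the held-out test split.
for name, model in models.items():
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    print(f"{name}: accuracy={accuracy_score(y_test, y_pred):.3f}, "
          f"F1={f1_score(y_test, y_pred):.3f}")
```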