DATA SCIENCE CRASH COURSE: Thyroid Disease Classification and Prediction Using Machine Learning and Deep Learning with Python GUI PDF Download

DATA SCIENCE CRASH COURSE: Thyroid Disease Classification and Prediction Using Machine Learning and Deep Learning with Python GUI

DATA SCIENCE CRASH COURSE: Thyroid Disease Classification and Prediction Using Machine Learning and Deep Learning with Python GUI PDF Author: Vivian Siahaan
Publisher: BALIGE PUBLISHING
ISBN:
Category : Computers
Languages : en
Pages : 412

Book Description
Thyroid disease is a prevalent condition that affects the thyroid gland, leading to various health issues. In this session of the Data Science Crash Course, we will explore the classification and prediction of thyroid disease using machine learning and deep learning techniques, all implemented with the power of Python and a user-friendly GUI built with PyQt. We will start by conducting data exploration on a comprehensive dataset containing relevant features and thyroid disease labels. Through analysis and pattern recognition, we will gain insights into the underlying factors contributing to thyroid disease. Next, we will delve into the machine learning phase, where we will implement popular algorithms including Support Vector Machine, Logistic Regression, K-Nearest Neighbors (KNN), Decision Tree, Random Forest, Gradient Boosting, Light Gradient Boosting, Naive Bayes, Adaboost, Extreme Gradient Boosting, and Multi-Layer Perceptron. These models will be trained using different preprocessing techniques, including raw data, normalization, and standardization, to evaluate their performance and accuracy. We train each model on the training dataset and evaluate its performance using appropriate metrics such as accuracy, precision, recall, and F1-score. This helps us assess how well the models can predict thyroid disease based on the given features. To optimize the models' performance, we perform hyperparameter tuning using techniques like grid search or randomized search. This involves systematically exploring different combinations of hyperparameters to find the best configuration for each model. After training and tuning the models, we save them to disk using joblib. This allows us to reuse the trained models for future predictions without having to train them again. Moving beyond traditional machine learning, we will build an artificial neural network (ANN) using TensorFlow. This ANN will capture complex relationships within the data and provide accurate predictions of thyroid disease. To ensure the effectiveness of our ANN, we will train it using a curated dataset split into training and testing sets. This will allow us to evaluate the model's performance and its ability to generalize predictions. To provide an interactive and user-friendly experience, we will develop a Graphical User Interface (GUI) using PyQt. The GUI will allow users to input data, select prediction methods (machine learning or deep learning), and visualize the results. Through the GUI, users can explore different prediction methods, compare performance, and gain insights into thyroid disease classification. Visualizations of training and validation loss, accuracy, and confusion matrices will enhance understanding and model evaluation. Line plots comparing true values and predicted values will further aid interpretation and insights into classification outcomes. Throughout the project, we will emphasize the importance of preprocessing techniques, feature selection, and model evaluation in building reliable and effective thyroid disease classification and prediction models. By the end of the project, readers will have gained practical knowledge in data exploration, machine learning, deep learning, and GUI development. They will be equipped to apply these techniques to other domains and real-world challenges. The project’s comprehensive approach, from data exploration to model development and GUI implementation, ensures a holistic understanding of thyroid disease classification and prediction.
It empowers readers to explore applications of data science in healthcare and beyond. The combination of machine learning and deep learning techniques, coupled with the intuitive GUI, offers a powerful framework for thyroid disease classification and prediction. This project serves as a stepping stone for readers to contribute to the field of medical data science. Data-driven approaches in healthcare have the potential to unlock valuable insights and improve outcomes. The focus on thyroid disease classification and prediction in this session showcases the transformative impact of data science in the medical field. Together, let us embark on this journey to advance our understanding of thyroid disease and make a difference in the lives of individuals affected by this condition. Welcome to the Data Science Crash Course on Thyroid Disease Classification and Prediction!
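
The core training workflow described above (raw, normalized, and standardized inputs; multiple classifiers; grid search; saving with joblib) can be sketched roughly as follows. This is a minimal illustration on synthetic data with a reduced model list; the actual thyroid dataset, feature columns, and hyperparameter grids used in the book will differ.

```python
# Minimal sketch of the training workflow: three preprocessing variants,
# several classifiers, grid search on one of them, and saving with joblib.
# Uses a synthetic dataset so the snippet is self-contained.
import joblib
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

# Three preprocessing variants: raw data, normalization, standardization.
preprocessors = {
    "raw": None,
    "normalized": MinMaxScaler(),
    "standardized": StandardScaler(),
}
models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "knn": KNeighborsClassifier(),
    "random_forest": RandomForestClassifier(random_state=42),
}

for prep_name, scaler in preprocessors.items():
    if scaler is None:
        Xtr, Xte = X_train, X_test
    else:
        Xtr = scaler.fit_transform(X_train)
        Xte = scaler.transform(X_test)
    for model_name, model in models.items():
        model.fit(Xtr, y_train)
        print(prep_name, model_name)
        print(classification_report(y_test, model.predict(Xte)))

# Hyperparameter tuning with grid search (shown for Random Forest only).
grid = GridSearchCV(
    RandomForestClassifier(random_state=42),
    {"n_estimators": [100, 200], "max_depth": [None, 10]},
    cv=5, scoring="accuracy")
grid.fit(X_train, y_train)
print("best params:", grid.best_params_)

# Persist the tuned model for later reuse without retraining.
joblib.dump(grid.best_estimator_, "random_forest_thyroid.joblib")
```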

PYTHON GUI PROJECTS WITH MACHINE LEARNING AND DEEP LEARNING

PYTHON GUI PROJECTS WITH MACHINE LEARNING AND DEEP LEARNING PDF Author: Vivian Siahaan
Publisher: BALIGE PUBLISHING
ISBN:
Category : Computers
Languages : en
Pages : 917

Book Description
PROJECT 1: THE APPLIED DATA SCIENCE WORKSHOP: Prostate Cancer Classification and Recognition Using Machine Learning and Deep Learning with Python GUI Prostate cancer is cancer that occurs in the prostate. The prostate is a small walnut-shaped gland in males that produces the seminal fluid that nourishes and transports sperm. Prostate cancer is one of the most common types of cancer. Many prostate cancers grow slowly and are confined to the prostate gland, where they may not cause serious harm. However, while some types of prostate cancer grow slowly and may need minimal or even no treatment, other types are aggressive and can spread quickly. The dataset used in this project consists of 100 patients, which can be used to implement the machine learning and deep learning algorithms. The dataset consists of 100 observations and 10 variables (8 numeric variables, one categorical variable, and an ID) which are as follows: Id, Radius, Texture, Perimeter, Area, Smoothness, Compactness, Diagnosis Result, Symmetry, and Fractal Dimension. The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, Adaboost, LGBM classifier, Gradient Boosting, XGB classifier, MLP classifier, and CNN 1D. Finally, you will develop a GUI using PyQt5 to plot decision boundary, ROC, distribution of features, feature importance, cross validation score, and predicted values versus true values, confusion matrix, learning curve, performance of the model, scalability of the model, training loss, and training accuracy. PROJECT 2: THE APPLIED DATA SCIENCE WORKSHOP: Urinary Biomarkers Based Pancreatic Cancer Classification and Prediction Using Machine Learning with Python GUI Pancreatic cancer is an extremely deadly type of cancer. Once diagnosed, patients have a five-year survival rate of less than 10%. However, if pancreatic cancer is caught early, the odds of surviving are much better. Unfortunately, many cases of pancreatic cancer show no symptoms until the cancer has spread throughout the body. A diagnostic test to identify people with pancreatic cancer could be enormously helpful. In a paper by Silvana Debernardi and colleagues, published in 2020 in the journal PLOS Medicine, a multi-national team of researchers sought to develop an accurate diagnostic test for the most common type of pancreatic cancer, called pancreatic ductal adenocarcinoma or PDAC. They gathered a series of biomarkers from the urine of three groups of patients: Healthy controls, Patients with non-cancerous pancreatic conditions, like chronic pancreatitis, and Patients with pancreatic ductal adenocarcinoma. When possible, these patients were age- and sex-matched. The goal was to develop an accurate way to identify patients with pancreatic cancer. The key features are four urinary biomarkers: creatinine, LYVE1, REG1B, and TFF1. Creatinine is a protein that is often used as an indicator of kidney function. LYVE1 is lymphatic vessel endothelial hyaluronan receptor 1, a protein that may play a role in tumor metastasis. REG1B is a protein that may be associated with pancreas regeneration. TFF1 is trefoil factor 1, which may be related to regeneration and repair of the urinary tract. The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, Adaboost, LGBM classifier, Gradient Boosting, XGB classifier, and MLP classifier.
Finally, you will develop a GUI using PyQt5 to plot decision boundary, ROC, distribution of features, feature importance, cross validation score, and predicted values versus true values, confusion matrix, learning curve, performance of the model, scalability of the model, training loss, and training accuracy. PROJECT 3: DATA SCIENCE CRASH COURSE: Voice Based Gender Classification and Prediction Using Machine Learning and Deep Learning with Python GUI This dataset was created to identify a voice as male or female, based upon acoustic properties of the voice and speech. The dataset consists of 3,168 recorded voice samples, collected from male and female speakers. The voice samples are pre-processed by acoustic analysis in R using the seewave and tuneR packages, with an analyzed frequency range of 0Hz-280Hz (human vocal range). The following acoustic properties of each voice are measured and included within the CSV: meanfreq: mean frequency (in kHz); sd: standard deviation of frequency; median: median frequency (in kHz); Q25: first quartile (in kHz); Q75: third quartile (in kHz); IQR: interquartile range (in kHz); skew: skewness; kurt: kurtosis; sp.ent: spectral entropy; sfm: spectral flatness; mode: mode frequency; centroid: frequency centroid (see specprop); peakf: peak frequency (frequency with highest energy); meanfun: average of fundamental frequency measured across acoustic signal; minfun: minimum fundamental frequency measured across acoustic signal; maxfun: maximum fundamental frequency measured across acoustic signal; meandom: average of dominant frequency measured across acoustic signal; mindom: minimum of dominant frequency measured across acoustic signal; maxdom: maximum of dominant frequency measured across acoustic signal; dfrange: range of dominant frequency measured across acoustic signal; modindx: modulation index, calculated as the accumulated absolute difference between adjacent measurements of fundamental frequencies divided by the frequency range; and label: male or female. The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, Adaboost, LGBM classifier, Gradient Boosting, XGB classifier, MLP classifier, and CNN 1D. Finally, you will develop a GUI using PyQt5 to plot decision boundary, ROC, distribution of features, feature importance, cross validation score, and predicted values versus true values, confusion matrix, learning curve, performance of the model, scalability of the model, training loss, and training accuracy. PROJECT 4: DATA SCIENCE CRASH COURSE: Thyroid Disease Classification and Prediction Using Machine Learning and Deep Learning with Python GUI Thyroid disease is a general term for a medical condition that keeps your thyroid from making the right amount of hormones. The thyroid typically makes hormones that keep the body functioning normally. When the thyroid makes too much thyroid hormone, the body uses energy too quickly. The two main types of thyroid disease are hypothyroidism and hyperthyroidism. Both conditions can be caused by other diseases that impact the way the thyroid gland works. The dataset used in this project comes from the Garavan Institute documentation supplied by Ross Quinlan: six databases from the Garavan Institute in Sydney, Australia, each with approximately 2800 training instances and 972 test instances. The dataset contains plenty of missing data and around 29 attributes, either Boolean or continuously valued.
The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, Adaboost, LGBM classifier, Gradient Boosting, XGB classifier, MLP classifier, and CNN 1D. Finally, you will develop a GUI using PyQt5 to plot decision boundary, ROC, distribution of features, feature importance, cross validation score, and predicted values versus true values, confusion matrix, learning curve, performance of the model, scalability of the model, training loss, and training accuracy.
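
Several of these projects list "CNN 1D" among their models. The sketch below is one plausible way, using Keras on synthetic tabular data, to apply a one-dimensional convolutional network to feature vectors by adding a channel axis; the layer sizes and training settings are illustrative, not the book's exact configuration.

```python
# Rough sketch of a 1D CNN applied to tabular data (features treated as a
# length-N sequence with one channel). Synthetic data keeps it self-contained.
import numpy as np
import tensorflow as tf
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Conv1D expects (samples, timesteps, channels); add a channel axis.
X_train = X_train[..., np.newaxis]
X_test = X_test[..., np.newaxis]

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu",
                           input_shape=(X_train.shape[1], 1)),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, validation_split=0.2, epochs=20, batch_size=32, verbose=0)
print("test accuracy:", model.evaluate(X_test, y_test, verbose=0)[1])
```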

DATA SCIENCE WORKSHOP: Chronic Kidney Disease Classification and Prediction Using Machine Learning and Deep Learning with Python GUI

DATA SCIENCE WORKSHOP: Chronic Kidney Disease Classification and Prediction Using Machine Learning and Deep Learning with Python GUI PDF Author: Vivian Siahaan
Publisher: BALIGE PUBLISHING
ISBN:
Category : Computers
Languages : en
Pages : 361

Book Description
In the captivating journey of our data science workshop, we embarked on the exploration of Chronic Kidney Disease classification and prediction. Our quest began with a thorough dive into data exploration, where we meticulously delved into the dataset's intricacies to unearth hidden patterns and insights. We analyzed the distribution of categorized features, unraveling the nuances that underlie chronic kidney disease. Guided by the principles of machine learning, we embarked on the quest to build predictive models. With the aid of grid search, we fine-tuned our machine learning algorithms, optimizing their hyperparameters for peak performance. Each model, whether K-Nearest Neighbors, Decision Trees, Random Forests, Gradient Boosting, Naive Bayes, Extreme Gradient Boosting, Light Gradient Boosting, or Multi-Layer Perceptron, was meticulously trained and tested, paving the way for robust predictions. The voyage into the realm of deep learning took us further, as we harnessed the power of Artificial Neural Networks (ANNs). By constructing intricate architectures, we designed ANNs to discern intricate patterns from the data. Leveraging the prowess of TensorFlow, we artfully crafted layers, each contributing to the ANN's comprehension of the underlying dynamics. This marked our initial foray into the world of deep learning. Our expedition, however, did not conclude with ANNs. We ventured deeper into the abyss of deep learning, uncovering the potential of Long Short-Term Memory (LSTM) networks. These networks, attuned to sequential data, unraveled temporal dependencies within the dataset, fortifying our predictive capabilities. Diving even further, we encountered Self-Organizing Maps (SOMs) and Restricted Boltzmann Machines (RBMs). These innovative models, rooted in unsupervised learning, unmasked underlying structures in the dataset. As our understanding of the data deepened, so did our repertoire of tools for prediction. Autoencoders, our final frontier in deep learning, emerged as our champions in dimensionality reduction and feature learning. These unsupervised neural networks transformed complex data into compact, meaningful representations, guiding our predictive models with newfound efficiency. To furnish a granular understanding of model behavior, we employed the classification report, which delineated precision, recall, and F1-Score for each class, providing a comprehensive snapshot of the model's predictive capacity across diverse categories. The confusion matrix emerged as a tangible visualization, detailing the interplay between true positives, true negatives, false positives, and false negatives. We also harnessed ROC and precision-recall curves to illuminate the dynamic interplay between true positive rate and false positive rate, vital when tackling imbalanced datasets. For regression tasks, MSE and its counterpart RMSE quantified the average squared differences between predictions and actual values, facilitating an insightful assessment of model fit. Further enhancing our toolkit, the R-squared (R2) score unveiled the extent to which the model explained variance in the dependent variable, offering a valuable gauge of overall performance. Collectively, this ensemble of metrics enabled us to make astute model decisions, optimize hyperparameters, and gauge the models' fitness for accurate disease prognosis in a clinical context. Amidst this whirlwind of data exploration and model construction, our GUI using PyQt emerged as a beacon of user-friendly interaction. 
Through its intuitive interface, users navigated seamlessly between model selection, training, and prediction. Our GUI encapsulated the intricacies of our journey, bridging the gap between data science and user experience. In the end, our odyssey illuminated the intricate landscape of Chronic Kidney Disease classification and prediction. We harnessed the power of both machine learning and deep learning, uncovering hidden insights and propelling our predictive capabilities to new heights. Our journey transcended the realms of data, algorithms, and interfaces, leaving an indelible mark on the crossroads of science and innovation.
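
As one concrete illustration of the autoencoder stage mentioned above, the following Keras sketch compresses a feature matrix into a small latent representation and reconstructs it, so the encoded features can feed a downstream classifier. The dimensions, activations, and synthetic data are assumptions, not the book's exact architecture.

```python
# Sketch of an autoencoder for dimensionality reduction / feature learning.
# The encoder output can be reused as compact input features for a classifier.
import tensorflow as tf
from sklearn.datasets import make_classification
from sklearn.preprocessing import MinMaxScaler

X, _ = make_classification(n_samples=1000, n_features=24, random_state=1)
X = MinMaxScaler().fit_transform(X)  # scale to [0, 1] for sigmoid reconstruction

n_features, latent_dim = X.shape[1], 4
inputs = tf.keras.Input(shape=(n_features,))
encoded = tf.keras.layers.Dense(16, activation="relu")(inputs)
encoded = tf.keras.layers.Dense(latent_dim, activation="relu")(encoded)
decoded = tf.keras.layers.Dense(16, activation="relu")(encoded)
decoded = tf.keras.layers.Dense(n_features, activation="sigmoid")(decoded)

autoencoder = tf.keras.Model(inputs, decoded)
encoder = tf.keras.Model(inputs, encoded)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=50, batch_size=32, verbose=0)

X_compressed = encoder.predict(X)  # low-dimensional representation
print(X_compressed.shape)          # (1000, 4)
```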

DATA SCIENCE CRASH COURSE: Skin Cancer Classification and Prediction Using Machine Learning and Deep Learning

DATA SCIENCE CRASH COURSE: Skin Cancer Classification and Prediction Using Machine Learning and Deep Learning PDF Author: Vivian Siahaan
Publisher: BALIGE PUBLISHING
ISBN:
Category : Computers
Languages : en
Pages : 85

Book Description
Skin cancer develops primarily on areas of sun-exposed skin, including the scalp, face, lips, ears, neck, chest, arms and hands, and on the legs in women. But it can also form on areas that rarely see the light of day — your palms, beneath your fingernails or toenails, and your genital area. Skin cancer affects people of all skin tones, including those with darker complexions. When melanoma occurs in people with dark skin tones, it's more likely to occur in areas not normally exposed to the sun, such as the palms of the hands and soles of the feet. The dataset used in this project is a balanced collection of images of benign skin moles and malignant skin moles. The data consists of two folders, each containing 1800 pictures (224x244 pixels) of the two types of moles. The machine learning models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, Adaboost, LGBM classifier, Gradient Boosting, XGB classifier, MLP classifier, and CNN 1D. The deep learning models used are CNN and MobileNet.
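
A MobileNet transfer-learning setup along these lines might look roughly like the sketch below, which assumes the two image folders are arranged under a hypothetical data/ directory (data/benign and data/malignant) and resizes images to 224x224 for the pretrained backbone; the book's actual paths, image size, and training settings may differ.

```python
# Rough sketch of binary skin-mole classification with a pretrained MobileNet.
# Assumes images live in data/benign and data/malignant (hypothetical paths).
import tensorflow as tf

IMG_SIZE = (224, 224)  # MobileNet's standard input size

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=32, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data", validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=32, label_mode="binary")

base = tf.keras.applications.MobileNet(
    weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False  # freeze the pretrained convolutional backbone

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNet expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```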

DATA SCIENCE WORKSHOP: Parkinson Classification and Prediction Using Machine Learning and Deep Learning with Python GUI

DATA SCIENCE WORKSHOP: Parkinson Classification and Prediction Using Machine Learning and Deep Learning with Python GUI PDF Author: Vivian Siahaan
Publisher: BALIGE PUBLISHING
ISBN:
Category : Computers
Languages : en
Pages : 373

Book Description
In this data science workshop focused on Parkinson's disease classification and prediction, we begin by exploring the dataset containing features relevant to the disease. We perform data exploration to understand the structure of the dataset, check for missing values, and gain insights into the distribution of features. Visualizations are used to analyze the distribution of features and their relationship with the target variable, which is whether an individual has Parkinson's disease or not. After data exploration, we preprocess the dataset to prepare it for machine learning models. This involves handling missing values, scaling numerical features, and encoding categorical variables if necessary. We ensure that the dataset is split into training and testing sets to evaluate model performance effectively. With the preprocessed dataset, we move on to the classification task. Using various machine learning algorithms such as Logistic Regression, K-Nearest Neighbors, Decision Trees, Random Forests, Gradient Boosting, Naive Bayes, Adaboost, Extreme Gradient Boosting, Light Gradient Boosting, and Multi-Layer Perceptron (MLP), we train multiple models on the training data. To optimize the hyperparameters of these models, we utilize Grid Search, a technique to exhaustively search for the best combination of hyperparameters. For each machine learning model, we evaluate their performance on the test set using various metrics such as accuracy, precision, recall, and F1-score. These metrics help us understand the model's ability to correctly classify individuals with and without Parkinson's disease. Next, we delve into building an Artificial Neural Network (ANN) for Parkinson's disease prediction. The ANN architecture is designed with input, hidden, and output layers. We utilize the TensorFlow library to construct the neural network with appropriate activation functions, dropout layers, and optimizers. The ANN is trained on the preprocessed data for a fixed number of epochs, and we monitor its training and validation loss and accuracy to ensure proper training. After training the ANN, we evaluate its performance using the same metrics as the machine learning models, comparing its accuracy, precision, recall, and F1-score against the previous models. This comparison helps us understand the benefits and limitations of using deep learning for Parkinson's disease prediction. To provide a user-friendly interface for the classification and prediction process, we design a Python GUI using PyQt. The GUI allows users to load their own dataset, choose data preprocessing options, select machine learning classifiers, train models, and predict using the ANN. The GUI provides visualizations of the data distribution, model performance, and prediction results for better understanding and decision-making. In the GUI, users have the option to choose different data preprocessing techniques, such as raw data, normalization, and standardization, to observe how these techniques impact model performance. The choice of classifiers is also available, allowing users to compare different models and select the one that suits their needs best. Throughout the workshop, we emphasize the importance of proper evaluation metrics and the significance of choosing the right model for Parkinson's disease classification and prediction. We highlight the strengths and weaknesses of each model, enabling users to make informed decisions based on their specific requirements and data characteristics. 
Overall, this data science workshop provides participants with a comprehensive understanding of Parkinson's disease classification and prediction using machine learning and deep learning techniques. Participants gain hands-on experience in data preprocessing, model training, hyperparameter tuning, and designing a user-friendly GUI for efficient and effective data analysis and prediction.
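
The ANN stage described above can be sketched in TensorFlow roughly as follows; synthetic data stands in for the preprocessed Parkinson's features, and the layer widths, dropout rates, and epoch count are placeholders rather than the book's exact settings.

```python
# Sketch of a small ANN with dropout, trained while monitoring
# training/validation loss and accuracy through the History object.
import tensorflow as tf
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=800, n_features=22, random_state=7)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=7)
scaler = StandardScaler()
X_train, X_test = scaler.fit_transform(X_train), scaler.transform(X_test)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(X_train.shape[1],)),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
history = model.fit(X_train, y_train, validation_split=0.2,
                    epochs=100, batch_size=16, verbose=0)

# history.history holds per-epoch loss/accuracy curves for plotting.
print("final val_accuracy:", history.history["val_accuracy"][-1])
print("test accuracy:", model.evaluate(X_test, y_test, verbose=0)[1])
```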

DATA SCIENCE WORKSHOP: Lung Cancer Classification and Prediction Using Machine Learning and Deep Learning with Python GUI

DATA SCIENCE WORKSHOP: Lung Cancer Classification and Prediction Using Machine Learning and Deep Learning with Python GUI PDF Author: Vivian Siahaan
Publisher: BALIGE PUBLISHING
ISBN:
Category : Computers
Languages : en
Pages : 294

Book Description
This Data Science Workshop presents a comprehensive journey through lung cancer analysis. Beginning with data exploration, the dataset is thoroughly examined to uncover insights into its structure and contents. The focus then shifts to categorizing features and understanding their distribution patterns, revealing key trends and relationships that could impact the predictive models. To predict lung cancer using machine learning models, an extensive grid search is conducted, fine-tuning model hyperparameters for optimal performance. The iterative process involves training various models, such as K-Nearest Neighbors, Decision Trees, Random Forests, Gradient Boosting, Naive Bayes, Extreme Gradient Boosting, Light Gradient Boosting, and Multi-Layer Perceptron, and evaluating their outcomes to select the best-performing approach. Utilizing GridSearchCV aids in systematically optimizing parameters to enhance predictive accuracy. Deep Learning is harnessed through Artificial Neural Networks (ANN), which involve building multi-layered models capable of learning intricate patterns from data. The ANN architecture, comprising input, hidden, and output layers, is designed to capture the complex relationships within the dataset. Metrics like accuracy, precision, recall, and F1-score are employed to comprehensively evaluate model performance. These metrics provide a holistic view of the model's ability to classify lung cancer cases accurately and minimize false positives or negatives. The Graphical User Interface (GUI) aspect of the project is developed using PyQt, enabling user-friendly interactions with the predictive models. The GUI design includes features such as radio buttons for selecting preprocessing options (Raw, Normalization, or Standardization), a combobox for choosing the ANN model type (e.g., CNN 1D), and buttons to initiate training and prediction. The PyQt interface enhances usability by allowing users to visualize predictions, classification reports, confusion matrices, and loss-accuracy plots. The GUI's functionality expands to encompass the entire workflow. It enables data preprocessing by loading and splitting the dataset into training and testing subsets. Users can then select machine learning or deep learning models for training. The trained models are saved for future use to avoid retraining. The interface also facilitates model evaluation, showcasing accuracy scores, classification reports detailing precision and recall, and visualizations depicting loss and accuracy trends over epochs. The project's educational value lies in its comprehensive approach, taking participants through every step of a data science pipeline. Attendees gain insights into data preprocessing, model selection, hyperparameter tuning, and performance evaluation. The integration of machine learning and deep learning methodologies, along with GUI development, provides a well-rounded understanding of creating predictive tools for real-world applications. Participants leave the workshop empowered with the skills to explore and analyze medical datasets, implement machine learning and deep learning models, and build user-friendly interfaces for effective interaction. The workshop bridges the gap between theoretical knowledge and practical implementation, fostering a deeper understanding of data-driven decision-making in the realm of medical diagnostics and classification.
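
The GUI controls described above (preprocessing radio buttons, a model combobox, and a train button) could be skeletonized in PyQt5 roughly as below; the widget names and the placeholder train handler are illustrative, and the book's actual interface is considerably richer.

```python
# Minimal PyQt5 skeleton with the kinds of controls described above:
# preprocessing radio buttons, a model combobox, and a train button.
import sys
from PyQt5.QtWidgets import (QApplication, QWidget, QVBoxLayout, QRadioButton,
                             QComboBox, QPushButton, QLabel, QButtonGroup)

class TrainerWindow(QWidget):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Lung Cancer Classifier (sketch)")
        layout = QVBoxLayout(self)

        # Preprocessing choice: Raw, Normalization, or Standardization.
        self.prep_group = QButtonGroup(self)
        for name in ("Raw", "Normalization", "Standardization"):
            rb = QRadioButton(name)
            self.prep_group.addButton(rb)
            layout.addWidget(rb)
        self.prep_group.buttons()[0].setChecked(True)

        # Model choice, e.g. a classical classifier or CNN 1D.
        self.model_box = QComboBox()
        self.model_box.addItems(["Logistic Regression", "Random Forest", "CNN 1D"])
        layout.addWidget(self.model_box)

        self.status = QLabel("Ready")
        train_btn = QPushButton("Train")
        train_btn.clicked.connect(self.on_train)
        layout.addWidget(train_btn)
        layout.addWidget(self.status)

    def on_train(self):
        # Placeholder: real code would preprocess, fit the chosen model,
        # and plot the confusion matrix / loss-accuracy curves.
        prep = self.prep_group.checkedButton().text()
        self.status.setText(f"Training {self.model_box.currentText()} on {prep} data...")

if __name__ == "__main__":
    app = QApplication(sys.argv)
    win = TrainerWindow()
    win.show()
    sys.exit(app.exec_())
```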

The Applied Data Science Workshop On Medical Datasets Using Machine Learning and Deep Learning with Python GUI

The Applied Data Science Workshop On Medical Datasets Using Machine Learning and Deep Learning with Python GUI PDF Author: Vivian Siahaan
Publisher: BALIGE PUBLISHING
ISBN:
Category : Computers
Languages : en
Pages : 1574

Book Description
Workshop 1: Heart Failure Analysis and Prediction Using Scikit-Learn, Keras, and TensorFlow with Python GUI Cardiovascular diseases (CVDs) are the number 1 cause of death globally, taking an estimated 17.9 million lives each year, which accounts for 31% of all deaths worldwide. Heart failure is a common event caused by CVDs and this dataset contains 12 features that can be used to predict mortality by heart failure. People with cardiovascular disease or who are at high cardiovascular risk (due to the presence of one or more risk factors such as hypertension, diabetes, hyperlipidaemia or already established disease) need early detection and management, wherein machine learning models can be of great help. The dataset used in this project is from Davide Chicco and Giuseppe Jurman, "Machine learning can predict survival of patients with heart failure from serum creatinine and ejection fraction alone," BMC Medical Informatics and Decision Making 20, 16 (2020). Attribute information in the dataset is as follows: age: Age; anaemia: Decrease of red blood cells or hemoglobin (boolean); creatinine_phosphokinase: Level of the CPK enzyme in the blood (mcg/L); diabetes: If the patient has diabetes (boolean); ejection_fraction: Percentage of blood leaving the heart at each contraction (percentage); high_blood_pressure: If the patient has hypertension (boolean); platelets: Platelets in the blood (kiloplatelets/mL); serum_creatinine: Level of serum creatinine in the blood (mg/dL); serum_sodium: Level of serum sodium in the blood (mEq/L); sex: Woman or man (binary); smoking: If the patient smokes or not (boolean); time: Follow-up period (days); and DEATH_EVENT: If the patient deceased during the follow-up period (boolean). The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, Adaboost, LGBM classifier, Gradient Boosting, XGB classifier, MLP classifier, and CNN 1D. Finally, you will develop a GUI using PyQt5 to plot decision boundary, ROC, distribution of features, feature importance, cross validation score, and predicted values versus true values, confusion matrix, learning curve, performance of the model, scalability of the model, training loss, and training accuracy. WORKSHOP 2: Cervical Cancer Classification and Prediction Using Machine Learning and Deep Learning with Python GUI About 11,000 new cases of invasive cervical cancer are diagnosed each year in the U.S. However, the number of new cervical cancer cases has been declining steadily over the past decades. Although it is the most preventable type of cancer, each year cervical cancer kills about 4,000 women in the U.S. and about 300,000 women worldwide. Numerous studies report that high poverty levels are linked with low screening rates. In addition, lack of health insurance, limited transportation, and language difficulties hinder a poor woman’s access to screening services. Human papilloma virus (HPV) is the main risk factor for cervical cancer. In adults, the most important risk factor for HPV is sexual activity with an infected person. Women most at risk for cervical cancer are those with a history of multiple sexual partners, sexual intercourse at age 17 years or younger, or both. A woman who has never been sexually active has a very low risk for developing cervical cancer. Sexual activity with multiple partners increases the likelihood of many other sexually transmitted infections (chlamydia, gonorrhea, syphilis).
Studies have found an association between chlamydia and cervical cancer risk, including the possibility that chlamydia may prolong HPV infection. Therefore, early detection of cervical cancer using machine and deep learning models can be of great help. The dataset used in this project is obtained from the UCI Repository and kindly acknowledged. This file contains a List of Risk Factors for Cervical Cancer leading to a Biopsy Examination. The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, Adaboost, LGBM classifier, Gradient Boosting, XGB classifier, MLP classifier, and CNN 1D. Finally, you will develop a GUI using PyQt5 to plot decision boundary, ROC, distribution of features, feature importance, cross validation score, and predicted values versus true values, confusion matrix, learning curve, performance of the model, scalability of the model, training loss, and training accuracy. WORKSHOP 3: Chronic Kidney Disease Classification and Prediction Using Machine Learning and Deep Learning with Python GUI Chronic kidney disease is a longstanding disease of the kidneys leading to renal failure. The kidneys filter waste and excess fluid from the blood. As kidneys fail, waste builds up. Symptoms develop slowly and aren't specific to the disease. Some people have no symptoms at all and are diagnosed by a lab test. Medication helps manage symptoms. In later stages, filtering the blood with a machine (dialysis) or a transplant may be required. The dataset used in this project was taken over a 2-month period in India with 25 features (e.g., red blood cell count, white blood cell count, etc.). The target is the 'classification', which is either 'ckd' or 'notckd' - ckd=chronic kidney disease. It contains measures of 24 features for 400 people. That is quite a lot of features for just 400 samples. There are 14 categorical features, while 10 are numerical. The dataset needs cleaning: it has NaNs, and the numeric features need to be forced to floats. Attribute Information: Age(numerical) age in years; Blood Pressure(numerical) bp in mm/Hg; Specific Gravity(categorical) sg - (1.005,1.010,1.015,1.020,1.025); Albumin(categorical) al - (0,1,2,3,4,5); Sugar(categorical) su - (0,1,2,3,4,5); Red Blood Cells(categorical) rbc - (normal,abnormal); Pus Cell (categorical) pc - (normal,abnormal); Pus Cell clumps(categorical) pcc - (present, notpresent); Bacteria(categorical) ba - (present,notpresent); Blood Glucose Random(numerical) bgr in mgs/dl; Blood Urea(numerical) bu in mgs/dl; Serum Creatinine(numerical) sc in mgs/dl; Sodium(numerical) sod in mEq/L; Potassium(numerical) pot in mEq/L; Hemoglobin(numerical) hemo in gms; Packed Cell Volume(numerical); White Blood Cell Count(numerical) wc in cells/cumm; Red Blood Cell Count(numerical) rc in millions/cmm; Hypertension(categorical) htn - (yes,no); Diabetes Mellitus(categorical) dm - (yes,no); Coronary Artery Disease(categorical) cad - (yes,no); Appetite(categorical) appet - (good,poor); Pedal Edema(categorical) pe - (yes,no); Anemia(categorical) ane - (yes,no); and Class (categorical) class - (ckd,notckd). The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, Adaboost, LGBM classifier, Gradient Boosting, XGB classifier, MLP classifier, and CNN 1D.
Finally, you will develop a GUI using PyQt5 to plot decision boundary, ROC, distribution of features, feature importance, cross validation score, and predicted values versus true values, confusion matrix, learning curve, performance of the model, scalability of the model, training loss, and training accuracy. WORKSHOP 4: Lung Cancer Classification and Prediction Using Machine Learning and Deep Learning with Python GUI An effective cancer prediction system helps people learn their cancer risk at low cost and take appropriate decisions based on their risk status. The data was collected from an online lung cancer prediction system website. The total number of attributes in the dataset is 16, and the number of instances is 309. Following is the attribute information of the dataset: Gender: M(male), F(female); Age: Age of the patient; Smoking: YES=2 , NO=1; Yellow fingers: YES=2 , NO=1; Anxiety: YES=2 , NO=1; Peer_pressure: YES=2 , NO=1; Chronic Disease: YES=2 , NO=1; Fatigue: YES=2 , NO=1; Allergy: YES=2 , NO=1; Wheezing: YES=2 , NO=1; Alcohol: YES=2 , NO=1; Coughing: YES=2 , NO=1; Shortness of Breath: YES=2 , NO=1; Swallowing Difficulty: YES=2 , NO=1; Chest pain: YES=2 , NO=1; and Lung Cancer: YES , NO. The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, Adaboost, LGBM classifier, Gradient Boosting, XGB classifier, MLP classifier, and CNN 1D. Finally, you will develop a GUI using PyQt5 to plot decision boundary, ROC, distribution of features, feature importance, cross validation score, and predicted values versus true values, confusion matrix, learning curve, performance of the model, scalability of the model, training loss, and training accuracy. WORKSHOP 5: Alzheimer’s Disease Classification and Prediction Using Machine Learning and Deep Learning with Python GUI Alzheimer's is a type of dementia that causes problems with memory, thinking and behavior. Symptoms usually develop slowly and get worse over time, becoming severe enough to interfere with daily tasks. Alzheimer's is not a normal part of aging. The greatest known risk factor is increasing age, and the majority of people with Alzheimer's are 65 and older. But Alzheimer's is not just a disease of old age. Approximately 200,000 Americans under the age of 65 have younger-onset Alzheimer’s disease (also known as early-onset Alzheimer’s). The dataset consists of longitudinal MRI data of 374 subjects aged 60 to 96. Each subject was scanned at least once. Everyone is right-handed. 206 of the subjects were grouped as 'Nondemented' throughout the study. 107 of the subjects were grouped as 'Demented' at the time of their initial visits and remained so throughout the study. 14 subjects were grouped as 'Nondemented' at the time of their initial visit and were subsequently characterized as 'Demented' at a later visit. These fall under the 'Converted' category. Following are some important features in the dataset: EDUC: Years of Education; SES: Socioeconomic Status; MMSE: Mini Mental State Examination; CDR: Clinical Dementia Rating; eTIV: Estimated Total Intracranial Volume; nWBV: Normalized Whole Brain Volume; and ASF: Atlas Scaling Factor. The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, Adaboost, LGBM classifier, Gradient Boosting, XGB classifier, MLP classifier, and CNN 1D.
Finally, you will develop a GUI using PyQt5 to plot decision boundary, ROC, distribution of features, feature importance, cross validation score, and predicted values versus true values, confusion matrix, learning curve, performance of the model, scalability of the model, training loss, and training accuracy. WORKSHOP 6: Parkinson Classification and Prediction Using Machine Learning and Deep Learning with Python GUI The dataset was created by Max Little of the University of Oxford, in collaboration with the National Centre for Voice and Speech, Denver, Colorado, who recorded the speech signals. The original study published the feature extraction methods for general voice disorders. This dataset is composed of a range of biomedical voice measurements from 31 people, 23 with Parkinson's disease (PD). Each column in the table is a particular voice measure, and each row corresponds to one of 195 voice recordings from these individuals ("name" column). The main aim of the data is to discriminate healthy people from those with PD, according to the "status" column, which is set to 0 for healthy and 1 for PD. The data is in ASCII CSV format. The rows of the CSV file contain an instance corresponding to one voice recording. There are around six recordings per patient; the name of the patient is identified in the first column. Attribute information of this dataset is as follows: name - ASCII subject name and recording number; MDVP:Fo(Hz) - Average vocal fundamental frequency; MDVP:Fhi(Hz) - Maximum vocal fundamental frequency; MDVP:Flo(Hz) - Minimum vocal fundamental frequency; MDVP:Jitter(%); MDVP:Jitter(Abs); MDVP:RAP; MDVP:PPQ; Jitter:DDP – Several measures of variation in fundamental frequency; MDVP:Shimmer; MDVP:Shimmer(dB); Shimmer:APQ3; Shimmer:APQ5; MDVP:APQ; Shimmer:DDA - Several measures of variation in amplitude; NHR; HNR - Two measures of ratio of noise to tonal components in the voice; status - Health status of the subject (one) - Parkinson's, (zero) – healthy; RPDE,D2 - Two nonlinear dynamical complexity measures; DFA - Signal fractal scaling exponent; and spread1,spread2,PPE - Three nonlinear measures of fundamental frequency variation. The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, Adaboost, LGBM classifier, Gradient Boosting, XGB classifier, MLP classifier, and CNN 1D. Finally, you will develop a GUI using PyQt5 to plot decision boundary, ROC, distribution of features, feature importance, cross validation score, and predicted values versus true values, confusion matrix, learning curve, performance of the model, scalability of the model, training loss, and training accuracy. WORKSHOP 7: Liver Disease Classification and Prediction Using Machine Learning and Deep Learning with Python GUI The number of patients with liver disease has been continuously increasing because of excessive consumption of alcohol, inhalation of harmful gases, and intake of contaminated food, pickles, and drugs. This dataset was used to evaluate prediction algorithms in an effort to reduce the burden on doctors. This dataset contains 416 liver patient records and 167 non-liver patient records collected from the North East of Andhra Pradesh, India. The "Dataset" column is a class label used to divide groups into liver patient (liver disease) or not (no disease). This data set contains 441 male patient records and 142 female patient records. Any patient whose age exceeded 89 is listed as being of age "90".
Columns in the dataset: Age of the patient; Gender of the patient; Total Bilirubin; Direct Bilirubin; Alkaline Phosphotase; Alamine Aminotransferase; Aspartate Aminotransferase; Total Protiens; Albumin; Albumin and Globulin Ratio; and Dataset: field used to split the data into two sets (patient with liver disease, or no disease). The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, Adaboost, LGBM classifier, Gradient Boosting, XGB classifier, MLP classifier, and CNN 1D. Finally, you will develop a GUI using PyQt5 to plot decision boundary, ROC, distribution of features, feature importance, cross validation score, and predicted values versus true values, confusion matrix, learning curve, performance of the model, scalability of the model, training loss, and training accuracy.
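
The recurring "learning curve, performance of the model, scalability of the model" plots in these GUIs correspond closely to what scikit-learn's learning_curve utility produces; the sketch below builds those three plots on synthetic data, and the exact plotting code in the book may differ.

```python
# Sketch of the learning-curve / scalability / performance plots that recur in
# these workshops, built from scikit-learn's learning_curve on synthetic data.
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=1000, n_features=12, random_state=3)
train_sizes, train_scores, test_scores, fit_times, _ = learning_curve(
    RandomForestClassifier(random_state=3), X, y, cv=5,
    train_sizes=np.linspace(0.1, 1.0, 5), return_times=True)

fig, axes = plt.subplots(1, 3, figsize=(15, 4))
axes[0].plot(train_sizes, train_scores.mean(axis=1), "o-", label="training score")
axes[0].plot(train_sizes, test_scores.mean(axis=1), "o-", label="cross-validation score")
axes[0].set_title("Learning curve")
axes[0].legend()

axes[1].plot(train_sizes, fit_times.mean(axis=1), "o-")
axes[1].set_title("Scalability of the model")   # fit time vs. training size

axes[2].plot(fit_times.mean(axis=1), test_scores.mean(axis=1), "o-")
axes[2].set_title("Performance of the model")   # score vs. fit time
plt.tight_layout()
plt.show()
```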

DATA SCIENCE WORKSHOP: Liver Disease Classification and Prediction Using Machine Learning and Deep Learning with Python GUI

DATA SCIENCE WORKSHOP: Liver Disease Classification and Prediction Using Machine Learning and Deep Learning with Python GUI PDF Author: Vivian Siahaan
Publisher: BALIGE PUBLISHING
ISBN:
Category : Computers
Languages : en
Pages : 353

Book Description
In this Data Science Workshop project, focused on Liver Disease Classification and Prediction, we embarked on a comprehensive journey through various stages of data analysis, model development, and performance evaluation. The workshop aimed to utilize Python and its associated libraries to create a Graphical User Interface (GUI) that facilitates the classification and prediction of liver disease cases. Our exploration began with a thorough examination of the dataset. This entailed importing necessary libraries such as NumPy, Pandas, and Matplotlib for data manipulation, visualization, and preprocessing. The dataset, representing liver-related attributes, was read and its dimensions were checked to ensure data integrity. To gain a preliminary understanding, the dataset's initial rows and column information were displayed. We identified key features such as 'Age', 'Gender', and various biochemical attributes relevant to liver health. The dataset's structure, including data types and non-null counts, was inspected to identify any potential data quality issues. We detected that the 'Albumin_and_Globulin_Ratio' feature had a few missing values, which were subsequently filled with the median value. Our exploration extended to visualizing categorical distributions. Pie charts provided insights into the proportions of healthy and unhealthy liver cases among different gender categories. Stacked bar plots further delved into the connections between 'Total_Bilirubin' categories and the prevalence of liver disease, fostering a deeper understanding of these relationships. Transitioning to predictive modeling, we embarked on constructing machine learning models. Our arsenal included a range of algorithms such as Logistic Regression, Support Vector Machines, K-Nearest Neighbors, Decision Trees, Random Forests, Gradient Boosting, Extreme Gradient Boosting, and Light Gradient Boosting. The data was split into training and testing sets, and each model underwent rigorous evaluation using metrics like accuracy, precision, recall, F1-score, and ROC-AUC. Hyperparameter tuning played a pivotal role in model enhancement. We leveraged grid search and cross-validation techniques to identify the best combination of hyperparameters, optimizing model performance. Our focus shifted towards assessing the significance of each feature, using techniques such as feature importance from tree-based models. The workshop didn't halt at machine learning; it delved into deep learning as well. We implemented an Artificial Neural Network (ANN) using the Keras library. This powerful model demonstrated its ability to capture complex relationships within the data. With distinct layers, activation functions, and dropout layers to prevent overfitting, the ANN achieved impressive results in liver disease prediction. Our journey culminated with a comprehensive analysis of model performance. The metrics chosen for evaluation included accuracy, precision, recall, F1-score, and confusion matrix visualizations. These metrics provided a comprehensive view of the model's capability to correctly classify both healthy and unhealthy liver cases. In summary, the Data Science Workshop on Liver Disease Classification and Prediction was a holistic exploration into data preprocessing, feature categorization, machine learning, and deep learning techniques. The culmination of these efforts resulted in the creation of a Python GUI that empowers users to input patient attributes and receive predictions regarding liver health.
Through this workshop, participants gained a well-rounded understanding of data science techniques and their application in the field of healthcare.
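
As a rough illustration of the preprocessing recounted above (median imputation of 'Albumin_and_Globulin_Ratio', encoding 'Gender', and a train/test split), the sketch below assumes the liver dataset is available locally as indian_liver_patient.csv with the column names used in the text; the file name and label mapping are assumptions.

```python
# Sketch of the preprocessing described above, assuming the liver dataset is
# available locally as 'indian_liver_patient.csv' (hypothetical path).
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("indian_liver_patient.csv")

# Fill the few missing Albumin_and_Globulin_Ratio values with the median.
df["Albumin_and_Globulin_Ratio"] = df["Albumin_and_Globulin_Ratio"].fillna(
    df["Albumin_and_Globulin_Ratio"].median())

# Encode Gender as 0/1 and map the 'Dataset' label to 1 = liver disease, 0 = none.
df["Gender"] = df["Gender"].map({"Male": 1, "Female": 0})
df["Dataset"] = df["Dataset"].map({1: 1, 2: 0})

X = df.drop(columns=["Dataset"])
y = df["Dataset"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)
print(X_train.shape, X_test.shape)
```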

THE APPLIED DATA SCIENCE WORKSHOP: Prostate Cancer Classification and Recognition Using Machine Learning and Deep Learning with Python GUI

THE APPLIED DATA SCIENCE WORKSHOP: Prostate Cancer Classification and Recognition Using Machine Learning and Deep Learning with Python GUI PDF Author: Vivian Siahaan
Publisher: BALIGE PUBLISHING
ISBN:
Category : Computers
Languages : en
Pages : 357

Book Description
The Applied Data Science Workshop on Prostate Cancer Classification and Recognition using Machine Learning and Deep Learning with Python GUI involved several steps and components. The project aimed to analyze prostate cancer data, explore the features, develop machine learning models, and create a graphical user interface (GUI) using PyQt5. The project began with data exploration, where the prostate cancer dataset was examined to understand its structure and content. Various statistical techniques were employed to gain insights into the data, such as checking the dimensions, identifying missing values, and examining the distribution of the target variable. The next step involved exploring the distribution of features in the dataset. Visualizations were created to analyze the characteristics and relationships between different features. Histograms, scatter plots, and correlation matrices were used to uncover patterns and identify potential variables that may contribute to the classification of prostate cancer. Machine learning models were then developed to classify prostate cancer based on the available features. Several algorithms, including Logistic Regression, K-Nearest Neighbors, Decision Trees, Random Forests, Gradient Boosting, Naive Bayes, Adaboost, Extreme Gradient Boosting, Light Gradient Boosting, and Multi-Layer Perceptron (MLP), were implemented. Each model was trained and evaluated using appropriate techniques such as cross-validation and grid search for hyperparameter tuning. The performance of each machine learning model was assessed using evaluation metrics such as accuracy, precision, recall, and F1-score. These metrics provided insights into the effectiveness of the models in accurately classifying prostate cancer cases. Model comparison and selection were based on their performance and the specific requirements of the project. In addition to the machine learning models, a deep learning model based on an Artificial Neural Network (ANN) was implemented. The ANN architecture consisted of multiple layers, including input, hidden, and output layers. The ANN model was trained using the dataset, and its performance was evaluated using accuracy and loss metrics. To provide a user-friendly interface for the project, a GUI was designed using PyQt, a Python library for creating desktop applications. The GUI allowed users to interact with the machine learning models and perform tasks such as selecting the prediction method, loading data, training models, and displaying results. The GUI included various graphical components such as buttons, combo boxes, input fields, and plot windows. These components were designed to facilitate data loading, model training, and result visualization. Users could choose the prediction method, view accuracy scores, classification reports, and confusion matrices, and explore the predicted values compared to the actual values. The GUI also incorporated interactive features such as real-time updates of prediction results based on user selections and dynamic plot generation for visualizing model performance. Users could switch between different prediction methods, observe changes in accuracy, and examine the history of training loss and accuracy through plotted graphs. Data preprocessing techniques, such as standardization and normalization, were applied to ensure the consistency and reliability of the machine learning and deep learning models. 
The dataset was divided into training and testing sets to assess model performance on unseen data and detect overfitting or underfitting. Model persistence was implemented to save the trained machine learning and deep learning models to disk, allowing for easy retrieval and future use. The saved models could be loaded and utilized within the GUI for prediction tasks without the need for retraining. Overall, the Applied Data Science Workshop on Prostate Cancer Classification and Recognition provided a comprehensive framework for analyzing prostate cancer data, developing machine learning and deep learning models, and creating an interactive GUI. The project aimed to assist in the accurate classification and recognition of prostate cancer cases, facilitating informed decision-making and potentially contributing to improved patient outcomes.
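
The model-persistence step mentioned above typically looks something like the following sketch, with scikit-learn estimators saved via joblib and the Keras ANN saved with model.save; the file names and toy data are placeholders.

```python
# Sketch of model persistence: joblib for scikit-learn estimators and the
# HDF5 Keras format for the ANN. File names are placeholders.
import joblib
import tensorflow as tf
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# Scikit-learn model: dump after training, load later for prediction only.
rf = RandomForestClassifier(random_state=0).fit(X, y)
joblib.dump(rf, "prostate_rf.joblib")
rf_loaded = joblib.load("prostate_rf.joblib")
print("reloaded RF accuracy:", rf_loaded.score(X, y))

# Keras ANN: save the architecture + weights, then reload without retraining.
ann = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(X.shape[1],)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
ann.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
ann.fit(X, y, epochs=5, verbose=0)
ann.save("prostate_ann.h5")
ann_loaded = tf.keras.models.load_model("prostate_ann.h5")
print("reloaded ANN accuracy:", ann_loaded.evaluate(X, y, verbose=0)[1])
```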

DATA SCIENCE WORKSHOP: Heart Failure Analysis and Prediction Using Scikit-Learn, Keras, and TensorFlow with Python GUI

DATA SCIENCE WORKSHOP: Heart Failure Analysis and Prediction Using Scikit-Learn, Keras, and TensorFlow with Python GUI PDF Author: Vivian Siahaan
Publisher: BALIGE PUBLISHING
ISBN:
Category : Computers
Languages : en
Pages : 398

Book Description
In this "Heart Failure Analysis and Prediction" data science workshop, we embarked on a comprehensive journey through the intricacies of cardiovascular health assessment using machine learning and deep learning techniques. Our journey began with an in-depth exploration of the dataset, where we meticulously studied its characteristics, dimensions, and underlying patterns. This initial step laid the foundation for our subsequent analyses. We delved into a detailed examination of the distribution of categorized features, meticulously dissecting variables such as age, sex, serum sodium levels, diabetes status, high blood pressure, smoking habits, and anemia. This critical insight enabled us to comprehend how these features relate to each other and potentially impact the occurrence of heart failure, providing valuable insights for subsequent modeling. Subsequently, we engaged in the heart of the project: predicting heart failure. Employing machine learning models, we harnessed the power of grid search to optimize model parameters, meticulously fine-tuning algorithms to achieve the best predictive performance. Through an array of models including Logistic Regression, KNeighbors Classifier, DecisionTrees Classifier, Random Forest Classifier, Gradient Boosting Classifier, XGB Classifier, LGBM Classifier, and MLP Classifier, we harnessed metrics like accuracy, precision, recall, and F1-score to meticulously evaluate each model's efficacy. Venturing further into the realm of deep learning, we embarked on an exploration of neural networks, striving to capture intricate patterns in the data. Our arsenal included diverse architectures such as Artificial Neural Networks (ANN), Long Short-Term Memory (LSTM) networks, Self Organizing Maps (SOMs), Recurrent Neural Networks (RNN), Deep Belief Networks (DBN), and Autoencoders. These architectures enabled us to unravel complex relationships within the data, yielding nuanced insights into the dynamics of heart failure prediction. Our approach to evaluating model performance was rigorous and thorough. By scrutinizing metrics such as accuracy, recall, precision, and F1-score, we gained a comprehensive understanding of the models' strengths and limitations. These metrics enabled us to make informed decisions about model selection and refinement, ensuring that our predictions were as accurate and reliable as possible. The evaluation phase emerges as a pivotal aspect, accentuated by an array of comprehensive metrics. Performance assessment encompasses metrics such as accuracy, precision, recall, F1-score, and ROC-AUC. Cross-validation and learning curves are strategically employed to mitigate overfitting and ensure model generalization. Furthermore, visual aids such as ROC curves and confusion matrices provide a lucid depiction of the models' interplay between sensitivity and specificity. Complementing our advanced analytical endeavors, we also embarked on the creation of a Python GUI using PyQt. This intuitive graphical interface provided an accessible platform for users to interact with the developed models and gain meaningful insights into heart health. The GUI streamlined the prediction process, making it user-friendly and facilitating the application of our intricate models to real-world scenarios. In conclusion, the "Heart Failure Analysis and Prediction" data science workshop was a journey through the realms of data exploration, feature distribution analysis, and the application of cutting-edge machine learning and deep learning techniques. 
By meticulously evaluating model performance, harnessing the capabilities of neural networks, and culminating in the creation of a user-friendly Python GUI, we armed participants with a comprehensive toolkit to analyze and predict heart failure with precision and innovation.
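
Of the deep architectures listed above, the LSTM is the least obvious to apply to a flat clinical table; one common approach, sketched below on synthetic data, is to reshape each 12-feature record into a one-step sequence so a recurrent layer can be used. The reshaping choice and layer sizes are assumptions, not the book's exact setup.

```python
# Sketch of applying an LSTM to tabular records by treating each feature
# vector as a 1-step sequence of 12 features. Synthetic data used here.
import tensorflow as tf
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=600, n_features=12, random_state=5)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=5)
scaler = StandardScaler()
X_train, X_test = scaler.fit_transform(X_train), scaler.transform(X_test)

# LSTM expects (samples, timesteps, features): here 1 timestep of 12 features.
X_train = X_train.reshape(-1, 1, X_train.shape[1])
X_test = X_test.reshape(-1, 1, X_test.shape[1])

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(1, 12)),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, validation_split=0.2, epochs=30, batch_size=16, verbose=0)
print("test accuracy:", model.evaluate(X_test, y_test, verbose=0)[1])
```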