
ClassiSage: Terraform IaC Automated AWS SageMaker based HDFS Log classification Model


ClassiSage

A machine learning model built with AWS SageMaker and its Python SDK for classification of HDFS logs, using Terraform to automate the infrastructure setup.

Link: GitHub
Language: HCL (terraform), Python

Content

  • Overview: Project Overview.
  • System Architecture: System Architecture Diagram
  • ML Model: Model Overview.
  • Getting Started: How to run the project.
  • Console Observations: Changes in instances and infrastructure that can be observed while running the project.
  • Ending and Cleanup: Ensuring no additional charges.
  • Auto Created Objects: Files and Folders created during execution process.

  • First, follow the Directory Structure for a better project setup.
  • For a better understanding, refer primarily to ClassiSage's project repository on GitHub.

Overview

  • The model is built with AWS SageMaker for classification of HDFS logs, together with S3 for storing the dataset, the notebook file (containing the code for the SageMaker instance), and the model output.
  • The infrastructure setup is automated using Terraform, an infrastructure-as-code tool created by HashiCorp.
  • The dataset used is HDFS_v1.
  • The project uses the SageMaker Python SDK with the XGBoost model, version 1.2.

System Architecture

(System architecture diagram)

ML Model

  • Image URI
  # Looks for the XGBoost image URI and builds an XGBoost container. Specify the repo_version depending on preference.
  import boto3
  from sagemaker.amazon.amazon_estimator import get_image_uri

  container = get_image_uri(boto3.Session().region_name,
                            'xgboost',
                            repo_version='1.0-1')
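Note: get_image_uri belongs to version 1.x of the SageMaker Python SDK. If your environment runs SDK v2, the equivalent call is sagemaker.image_uris.retrieve; a minimal sketch, assuming the same region and an XGBoost 1.2 container, would be:

  # SageMaker Python SDK v2 equivalent (sketch): retrieve the XGBoost image URI.
  import boto3
  import sagemaker

  container = sagemaker.image_uris.retrieve(framework='xgboost',
                                            region=boto3.Session().region_name,
                                            version='1.2-1')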


  • Initializing hyperparameters and the estimator call to the container
  hyperparameters = {
        "max_depth":"5",                ## Maximum depth of a tree. Higher means more complex models but risk of overfitting.
        "eta":"0.2",                    ## Learning rate. Lower values make the learning process slower but more precise.
        "gamma":"4",                    ## Minimum loss reduction required to make a further partition on a leaf node. Controls the model’s complexity.
        "min_child_weight":"6",         ## Minimum sum of instance weight (hessian) needed in a child. Higher values prevent overfitting.
        "subsample":"0.7",              ## Fraction of training data used. Reduces overfitting by sampling part of the data. 
        "objective":"binary:logistic",  ## Specifies the learning task and corresponding objective. binary:logistic is for binary classification.
        "num_round":50                  ## Number of boosting rounds, essentially how many times the model is trained.
        }
  # A SageMaker estimator that calls the xgboost-container
  estimator = sagemaker.estimator.Estimator(image_uri=container,                  # Points to the XGBoost container we previously set up. This tells SageMaker which algorithm container to use.
                                          hyperparameters=hyperparameters,      # Passes the defined hyperparameters to the estimator. These are the settings that guide the training process.
                                          role=sagemaker.get_execution_role(),  # Specifies the IAM role that SageMaker assumes during the training job. This role allows access to AWS resources like S3.
                                          train_instance_count=1,               # Sets the number of training instances. Here, it’s using a single instance.
                                          train_instance_type='ml.m5.large',    # Specifies the type of instance to use for training. ml.m5.large is a general-purpose instance with a balance of compute, memory, and network resources.
                                          train_volume_size=5, # 5GB            # Sets the size of the storage volume attached to the training instance, in GB. Here, it’s 5 GB.
                                          output_path=output_path,              # Defines where the model artifacts and output of the training job will be saved in S3.
                                          train_use_spot_instances=True,        # Utilizes spot instances for training, which can be significantly cheaper than on-demand instances. Spot instances are spare EC2 capacity offered at a lower price.
                                          train_max_run=300,                    # Specifies the maximum runtime for the training job in seconds. Here, it's 300 seconds (5 minutes).
                                          train_max_wait=600)                   # Sets the maximum time to wait for the job to complete, including the time waiting for spot instances, in seconds. Here, it's 600 seconds (10 minutes).
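Most of the parameter names above (train_instance_count, train_instance_type, train_volume_size, train_use_spot_instances, train_max_run, train_max_wait) come from SageMaker Python SDK v1; in SDK v2 the train_ prefixes are dropped. A hedged sketch of the same estimator under SDK v2 naming, with container, hyperparameters, and output_path assumed to be defined as above:

  # SageMaker Python SDK v2 naming (sketch): the train_* prefixes are dropped.
  import sagemaker

  estimator = sagemaker.estimator.Estimator(image_uri=container,
                                            hyperparameters=hyperparameters,
                                            role=sagemaker.get_execution_role(),
                                            instance_count=1,
                                            instance_type='ml.m5.large',
                                            volume_size=5,            # in GB
                                            output_path=output_path,
                                            use_spot_instances=True,
                                            max_run=300,              # seconds
                                            max_wait=600)             # seconds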


  • Training Job
  estimator.fit({'train': s3_input_train,'validation': s3_input_test})
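s3_input_train and s3_input_test are the S3 input channels the estimator trains on; they are not defined in this excerpt. A minimal sketch of how such channels are typically built, assuming bucket_name and prefix are defined in the notebook and that the train/test prefixes and CSV content type match the upload step:

  # Sketch: wrap the S3 locations of the train/test data as SageMaker input channels.
  # (SDK v2; in SDK v1 the equivalent helper is sagemaker.s3_input.)
  from sagemaker.inputs import TrainingInput

  s3_input_train = TrainingInput(s3_data='s3://{}/{}/train'.format(bucket_name, prefix),
                                 content_type='csv')
  s3_input_test = TrainingInput(s3_data='s3://{}/{}/test'.format(bucket_name, prefix),
                                content_type='csv')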


  • Deployment
  xgb_predictor = estimator.deploy(initial_instance_count=1,instance_type='ml.m5.large')


  • Validation
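The deployed endpoint is typically validated by sending the test features to xgb_predictor as CSV, thresholding the returned scores, and comparing them with the test labels. The sketch below is illustrative rather than the repository's exact code; test_data, its 'Label' column, and the use of SDK v2 serializers are assumptions.

  # Sketch: score the test set on the deployed endpoint and evaluate the predictions.
  import numpy as np
  import pandas as pd
  from sagemaker.serializers import CSVSerializer

  xgb_predictor.serializer = CSVSerializer()                  # send payloads as CSV
  test_features = test_data.drop(['Label'], axis=1).values    # features only

  raw_preds = xgb_predictor.predict(test_features).decode('utf-8')
  predictions = np.array(raw_preds.split(','), dtype=float)   # comma-separated scores

  # Confusion matrix and accuracy against the true labels.
  cm = pd.crosstab(index=test_data['Label'], columns=np.round(predictions),
                   rownames=['actual'], colnames=['predicted'])
  print(cm)
  print('Accuracy:', (np.round(predictions) == test_data['Label']).mean())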


Getting Started

  • Clone the repository using Git Bash, download it as a .zip file, or fork the repository.
  • Go to your AWS Management Console, click on your account profile in the top-right corner and select My Security Credentials from the dropdown.
  • Create Access Key: In the Access keys section, click on Create New Access Key; a dialog will appear with your Access Key ID and Secret Access Key.
  • Download or Copy Keys: (IMPORTANT) Download the .csv file or copy the keys to a secure location. This is the only time you can view the secret access key.
  • Open the cloned repository in VS Code.
  • Create a file under ClassiSage named terraform.tfvars that holds your AWS credentials (a hedged sketch of its content is shown below).
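The exact content of terraform.tfvars depends on the variable names declared in the repository's variables.tf; as a hedged sketch with assumed names, it holds the access keys created above (and possibly the region). Do not commit this file to version control.

  # terraform.tfvars (sketch; the variable names must match the repository's variables.tf)
  access_key = "<YOUR_AWS_ACCESS_KEY_ID>"
  secret_key = "<YOUR_AWS_SECRET_ACCESS_KEY>"
  region     = "<YOUR_AWS_REGION>"   # e.g. us-east-1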
  • Download and install all the dependencies needed for using Terraform and Python.
  • In the terminal type/paste terraform init to initialize the backend.

  • Then type/paste terraform plan to view the plan, or simply terraform validate to ensure that there are no errors.

  • Finally in the terminal type/paste terraform apply --auto-approve

  • This will show two outputs, one as bucket_name and the other as pretrained_ml_instance_name (the third resource is the variable name given to the bucket, since bucket names are global resources).


  • After the command has completed in the terminal, navigate to ClassiSage/ml_ops/function.py and, on the 11th line of the file, change the path to where the project directory is present on your system, then save it.

  • Then open ClassiSage/ml_ops/data_upload.ipynb and run all code cells up to cell number 25 to upload the dataset to the S3 bucket (a hedged sketch of the upload call is shown below).
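Uploading a file from the notebook to the bucket typically comes down to a boto3 call like the sketch below; the local file name and object key are assumptions, and the actual cells in data_upload.ipynb may differ.

  # Sketch: upload the dataset file to the Terraform-created S3 bucket.
  import boto3

  s3 = boto3.client('s3')
  s3.upload_file('final_dataset.csv',   # local path of the dataset (assumed)
                 bucket_name,           # bucket created by terraform apply
                 'final_dataset.csv')   # object key inside the bucket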

  • Output of the code cell execution


  • After the execution of the notebook re-open your AWS Management Console.
  • Search for the S3 and SageMaker services; you will see an instance of each service initiated (an S3 bucket and a SageMaker notebook).

An S3 bucket named 'data-bucket-' with 2 objects uploaded: the dataset and the pretrained_sm.ipynb file containing the model code.



  • Go to the notebook instances in AWS SageMaker, click on the created instance, and click Open Jupyter.
  • After that, click New on the top-right side of the window and select Terminal.
  • This will create a new terminal.

  • In the terminal, paste the following command, replacing <bucket_name> with the bucket_name output shown in VS Code's terminal:
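The command copies pretrained_sm.ipynb from the bucket into the notebook instance's Jupyter home directory; the destination below is the default home directory of a SageMaker notebook instance, so treat this as a sketch.

  # Replace <bucket_name> with the bucket_name output from terraform apply.
  aws s3 cp s3://<bucket_name>/pretrained_sm.ipynb /home/ec2-user/SageMaker/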

Terminal command to upload the pretrained_sm.ipynb from S3 to Notebook's Jupyter environment



  • Go back to the opened Jupyter instance, click on the pretrained_sm.ipynb file to open it, and assign it the conda_python3 kernel.
  • Scroll down to the 4th cell and replace the value of the variable bucket_name with the bucket_name from VS Code's terminal output (bucket_name = ""), as sketched below.
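A minimal sketch of what the 4th cell then looks like; the value is whatever terraform apply printed as bucket_name.

  # Paste the bucket_name printed by terraform apply between the quotes.
  bucket_name = "<bucket_name output from the VS Code terminal>"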

Output of the code cell execution



  • At the top of the file, do a restart from the Kernel tab.
  • Execute the notebook up to code cell number 27, which contains the code
  estimator.fit({'train': s3_input_train,'validation': s3_input_test})
  • You will get the intended result: the data will be fetched, adjusted for labels and features, and split into train and test sets with a defined output path; a model will then be trained with SageMaker's Python SDK, deployed as an endpoint, and validated to produce different metrics.

Console Observation Notes

Execution of 8th cell

  • An output path will be set up in S3 to store the model data (a hedged sketch of such a cell is shown below).
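As an illustrative sketch (the prefix name is an assumption, although a 'pretrained-algo' folder is mentioned later in this article), an output-path cell typically looks like:

  # Sketch: build the S3 output path where SageMaker stores the trained model artifacts.
  prefix = 'pretrained-algo'
  output_path = 's3://{}/{}/output'.format(bucket_name, prefix)
  print(output_path)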


Execution of 23rd cell

  • A training job will start; you can check it under the Training tab.


  • After some time (est. 3 minutes) it will be completed and shown as such.


Execution of 24th code cell

  • An endpoint will be deployed under the Inference tab.


Additional Console Observation:

  • Creation of an Endpoint Configuration under the Inference tab.


  • Creation of a model, also under the Inference tab.



Ending and Cleanup

  • In VS Code, come back to data_upload.ipynb and execute the last 2 code cells to download the S3 bucket's data to the local system.
  • The folder will be named downloaded_bucket_content. Directory structure of the downloaded folder:


  • You will get a log of downloaded files in the output cell. It will contain a raw pretrained_sm.ipynb, final_dataset.csv, and a model output folder named 'pretrained-algo' with the execution data of the SageMaker code file.
  • Finally, go into pretrained_sm.ipynb inside the SageMaker instance and execute the final 2 code cells. The endpoint and the resources within the S3 bucket will be deleted to ensure no additional charges.
  • Deleting the endpoint (a minimal sketch is shown below):
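With the SageMaker Python SDK, tearing down the deployed predictor can be done roughly as follows; this is a sketch, not necessarily the repository's exact cell.

  # Sketch: tear down the deployed endpoint so it stops accruing charges.
  xgb_predictor.delete_endpoint()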


  • Clearing S3 (needed before the bucket can be destroyed; a sketch is shown below):
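Emptying the bucket with boto3 typically looks like this sketch, assuming bucket_name is still defined in the notebook.

  # Sketch: delete every object in the bucket so that terraform destroy can remove it.
  import boto3

  bucket_to_clear = boto3.resource('s3').Bucket(bucket_name)
  bucket_to_clear.objects.all().delete()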
  • Come back to the VS Code terminal for the project and then type/paste terraform destroy --auto-approve.
  • All the created resource instances will be deleted.

Auto Created Objects

ClassiSage/downloaded_bucket_content
ClassiSage/.terraform
ClassiSage/ml_ops/__pycache__
ClassiSage/.terraform.lock.hcl
ClassiSage/terraform.tfstate
ClassiSage/terraform.tfstate.backup

NOTE:
If you liked the idea and the implementation of this machine learning project, using AWS S3 and SageMaker for HDFS log classification and Terraform for IaC (infrastructure setup automation), kindly consider liking this post and starring the project repository on GitHub after checking it out.
