
We all love traffic, right? It's the only time I get to think about how I totally messed up that presentation (overthinking is a pain).
All jokes aside, I've wanted to build a project that looks at traffic in real time as a proof of concept, something I can enhance further down the line. Meet the traffic congestion predictor.
I'll walk through deploying the traffic congestion predictor using AWS Bedrock. Bedrock is a fully managed service for foundation models, which makes it a good fit for this kind of AI application. We'll cover everything from initial setup to final deployment and testing.
First, set up your development environment:
# Create a new virtual environment
python -m venv bedrock-env
source bedrock-env/bin/activate  # On Windows use: bedrock-env\Scripts\activate

# Install required packages
pip install boto3 pandas numpy scikit-learn streamlit plotly
Navigate to the AWS Console, open Amazon Bedrock, and request access to the foundation model you want to use (Anthropic Claude in this walkthrough). Model access is granted per region, so request it in the same region your code will call.
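Once access is granted, you can sanity-check it from Python. A minimal sketch, assuming your AWS credentials are configured and you requested Claude access in us-east-1:

import boto3

# Control-plane client ('bedrock'), as opposed to 'bedrock-runtime' used for inference
bedrock = boto3.client('bedrock', region_name='us-east-1')

# List the Claude models visible to your account in this region
for model in bedrock.list_foundation_models()['modelSummaries']:
    if 'claude' in model['modelId']:
        print(model['modelId'])

If the model you requested shows up in the output, you're ready to call it.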
Create a new file "bedrock_integration.py":
import boto3
import json
import numpy as np
import pandas as pd
from typing import Dict, Any

class TrafficPredictor:
    def __init__(self):
        self.bedrock = boto3.client(
            service_name='bedrock-runtime',
            region_name='us-east-1'  # Change to your region
        )

    def prepare_features(self, input_data: Dict[str, Any]) -> pd.DataFrame:
        # Encode hour and day as cyclical features so 23:00 sits next to 00:00
        hour = input_data['hour']
        day = input_data['day']
        features = pd.DataFrame({
            'hour_sin': [np.sin(2 * np.pi * hour / 24)],
            'hour_cos': [np.cos(2 * np.pi * hour / 24)],
            'day_sin': [np.sin(2 * np.pi * day / 7)],
            'day_cos': [np.cos(2 * np.pi * day / 7)],
            'temperature': [input_data['temperature']],
            'precipitation': [input_data['precipitation']],
            'special_event': [input_data['special_event']],
            'road_work': [input_data['road_work']],
            'vehicle_count': [input_data['vehicle_count']]
        })
        return features

    def predict(self, input_data: Dict[str, Any]) -> float:
        # Feature engineering is kept around for a future trained-model path;
        # the prompt below works directly from the raw inputs
        features = self.prepare_features(input_data)

        # Prepare prompt for Claude. The Claude v2 text API expects the
        # "\n\nHuman: ... \n\nAssistant:" turn format
        prompt = f"""

Human: Based on the following traffic conditions, predict the congestion level (0-10):
- Time: {input_data['hour']}:00
- Day of week: {input_data['day']}
- Temperature: {input_data['temperature']}°C
- Precipitation: {input_data['precipitation']}mm
- Special event: {'Yes' if input_data['special_event'] else 'No'}
- Road work: {'Yes' if input_data['road_work'] else 'No'}
- Vehicle count: {input_data['vehicle_count']}
Return only the numerical prediction.

Assistant:"""

        # Call Bedrock (Claude v2 expects max_tokens_to_sample, not max_tokens)
        response = self.bedrock.invoke_model(
            modelId='anthropic.claude-v2',
            body=json.dumps({
                "prompt": prompt,
                "max_tokens_to_sample": 10,
                "temperature": 0
            })
        )

        # Parse the completion and clamp it to the 0-10 scale
        response_body = json.loads(response['body'].read())
        prediction = float(response_body['completion'].strip())
        return float(np.clip(prediction, 0, 10))
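Before wiring this into an API, it's worth a quick local smoke test. A minimal sketch, assuming your AWS credentials are configured and Bedrock access is in place (the input values are just examples):

from bedrock_integration import TrafficPredictor

predictor = TrafficPredictor()
sample = {
    "hour": 8, "day": 1, "temperature": 22.0, "precipitation": 0.0,
    "special_event": False, "road_work": True, "vehicle_count": 1500
}
# Should print a value in the 0-10 range
print(f"Predicted congestion level: {predictor.predict(sample)}")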
Create "api.py:"
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from bedrock_integration import TrafficPredictor
from typing import Dict

app = FastAPI()
predictor = TrafficPredictor()

class PredictionInput(BaseModel):
    hour: int
    day: int
    temperature: float
    precipitation: float
    special_event: bool
    road_work: bool
    vehicle_count: int

@app.post("/predict")
async def predict_traffic(input_data: PredictionInput) -> Dict[str, float]:
    try:
        prediction = predictor.predict(input_data.dict())
        return {"congestion_level": prediction}
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
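To try the API locally, start it with `uvicorn api:app --reload` and post a prediction request. A minimal sketch using requests, assuming the server is on port 8000 (the payload values are just examples):

import requests

payload = {
    "hour": 17, "day": 5, "temperature": 28.5, "precipitation": 2.0,
    "special_event": True, "road_work": False, "vehicle_count": 3200
}

# Assumes the API is running locally on port 8000
response = requests.post("http://localhost:8000/predict", json=payload)
response.raise_for_status()
print(response.json())  # e.g. {"congestion_level": <some value in 0-10>}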
Next, create the AWS infrastructure.
Create "infrastructure.py":
import boto3

def create_infrastructure():
    # Create ECR repository (idempotent: ignore if it already exists)
    ecr = boto3.client('ecr')
    try:
        ecr.create_repository(repositoryName='traffic-predictor')
    except ecr.exceptions.RepositoryAlreadyExistsException:
        pass

    # Look up the repository URI by name rather than grabbing the first result
    repo = ecr.describe_repositories(repositoryNames=['traffic-predictor'])
    image_uri = f"{repo['repositories'][0]['repositoryUri']}:latest"

    # Create ECS cluster
    ecs = boto3.client('ecs')
    ecs.create_cluster(clusterName='traffic-predictor-cluster')

    # Register a Fargate task definition.
    # NOTE: in practice you'll also need an executionRoleArn so Fargate
    # can pull the image from ECR and write logs
    task_def = {
        'family': 'traffic-predictor',
        'containerDefinitions': [{
            'name': 'traffic-predictor',
            'image': image_uri,
            'memory': 512,
            'cpu': 256,
            'essential': True,
            'portMappings': [{
                'containerPort': 8000,
                'hostPort': 8000,
                'protocol': 'tcp'
            }]
        }],
        'requiresCompatibilities': ['FARGATE'],
        'networkMode': 'awsvpc',
        'cpu': '256',
        'memory': '512'
    }
    ecs.register_task_definition(**task_def)

if __name__ == '__main__':
    create_infrastructure()
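Note that registering a task definition doesn't run anything by itself; you still need an ECS service (or a one-off task) pointing at it. A hedged sketch, assuming you already have a VPC subnet and a security group that allows inbound traffic on port 8000 (the IDs below are placeholders):

import boto3

ecs = boto3.client('ecs')

# Run the registered task definition as a long-lived Fargate service
ecs.create_service(
    cluster='traffic-predictor-cluster',
    serviceName='traffic-predictor-service',
    taskDefinition='traffic-predictor',
    desiredCount=1,
    launchType='FARGATE',
    networkConfiguration={
        'awsvpcConfiguration': {
            'subnets': ['subnet-xxxxxxxx'],     # placeholder: your subnet ID
            'securityGroups': ['sg-xxxxxxxx'],  # placeholder: must allow port 8000
            'assignPublicIp': 'ENABLED'
        }
    }
)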
Create "Dockerfile:"
FROM python:3.9-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD ["uvicorn", "api:app", "--host", "0.0.0.0", "--port", "8000"]
Create "requirements.txt:"
fastapi
uvicorn
boto3
pandas
numpy
scikit-learn
Run these commands:
# Authenticate Docker with ECR
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com

# Build and push the Docker image
docker build -t traffic-predictor .
docker tag traffic-predictor:latest $AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/traffic-predictor:latest
docker push $AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/traffic-predictor:latest

# Create infrastructure
python infrastructure.py
Modify "app.py" to connect to the API:
import streamlit as st
import requests
import plotly.graph_objects as go
import plotly.express as px

API_ENDPOINT = "your-api-endpoint"

def predict_traffic(input_data):
    # Send the inputs to the FastAPI service instead of calling the model directly
    response = requests.post(f"{API_ENDPOINT}/predict", json=input_data)
    response.raise_for_status()
    return response.json()["congestion_level"]

# The rest of the Streamlit code stays the same; just replace direct model
# calls with API calls via predict_traffic()
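For reference, the wiring on the Streamlit side might look like the sketch below; it's only an illustration, since the exact widgets depend on your existing app.py:

# Hypothetical Streamlit inputs wired to the API
hour = st.slider("Hour of day", 0, 23, 8)
day = st.selectbox("Day of week (0=Mon)", list(range(7)))
temperature = st.number_input("Temperature (°C)", value=20.0)
precipitation = st.number_input("Precipitation (mm)", value=0.0)
special_event = st.checkbox("Special event")
road_work = st.checkbox("Road work")
vehicle_count = st.number_input("Vehicle count", value=1000, step=100)

if st.button("Predict congestion"):
    level = predict_traffic({
        "hour": hour, "day": day, "temperature": temperature,
        "precipitation": precipitation, "special_event": special_event,
        "road_work": road_work, "vehicle_count": vehicle_count
    })
    st.metric("Congestion level (0-10)", level)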
Test the API endpoint:
curl -X POST "your-api-endpoint/predict" \
-H "Content-Type: application/json" \
-d '{"hour":12,"day":1,"temperature":25,"precipitation":0,"special_event":false,"road_work":false,"vehicle_count":1000}'
Monitor using AWS CloudWatch:
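One approach is to tail your service's logs programmatically. A sketch, assuming your ECS task definition ships logs to a log group named /ecs/traffic-predictor (the group name is an assumption; use whatever your task definition configures):

import time
import boto3

logs = boto3.client('logs', region_name='us-east-1')

# Fetch events from the last 15 minutes; the log group name is hypothetical
events = logs.filter_log_events(
    logGroupName='/ecs/traffic-predictor',
    startTime=int((time.time() - 900) * 1000)  # CloudWatch expects milliseconds
)
for event in events['events']:
    print(event['message'])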
If everything went well, congratulations! You've successfully deployed a traffic congestion predictor. Pat yourself on the back for that one! Make sure you monitor costs and performance, update the model regularly, and implement a CI/CD pipeline. Natural next steps are adding user authentication, enhancing monitoring and alerting, optimising model performance, and adding more features based on user feedback.
Thanks for reading. Let me know if you have any thoughts, questions or observations!