Welcome to the first part of our comprehensive blog series on building a sophisticated order processing system with Temporal for microservice orchestration. In this series, we'll explore the intricacies of building a robust, scalable, and maintainable system capable of handling complex, long-running workflows.
Our journey begins with laying the foundations of the project. By the end of this post, you will have a fully functional CRUD REST API implemented in Go, integrated with Temporal for workflow orchestration, and backed by a Postgres database. We'll use modern tooling and best practices to keep our codebase clean, efficient, and easy to maintain.
Let's dive in and start building our order processing system!
Before we start the implementation, let's briefly review the key technologies and concepts we'll be using:
Go is a statically typed, compiled language known for its simplicity, efficiency, and excellent support for concurrent programming. Its standard library and robust ecosystem make it a great choice for building microservices.
Temporal is a microservice orchestration platform that simplifies the development of distributed applications. It lets us write complex, long-running workflows as straightforward procedural code, handling failures and retries automatically.
Gin is a high-performance HTTP web framework written in Go. It provides a Martini-like API with much better performance and lower memory usage.
OpenAPI (formerly known as Swagger) is a specification for machine-readable interface files used to describe, produce, consume, and visualize RESTful web services. oapi-codegen is a tool that generates Go code from OpenAPI 3.0 specifications, allowing us to define our API contract first and generate server stubs and client code from it.
sqlc generates type-safe Go code from SQL. It lets us write plain SQL queries and generates fully type-safe Go code for interacting with our database, reducing the risk of runtime errors and improving maintainability.
PostgreSQL is a powerful, open-source object-relational database system known for its reliability, feature robustness, and performance.
Docker lets us package our application and its dependencies into containers, ensuring consistency across environments. docker-compose is a tool for defining and running multi-container Docker applications, which we'll use to set up our local development environment.
Now that we've covered the basics, let's start implementing our system.
First, let's create our project directory and set up the basic structure:
mkdir order-processing-system
cd order-processing-system

# Create directory structure
mkdir -p cmd/api \
  internal/api \
  internal/db \
  internal/models \
  internal/service \
  internal/workflow \
  migrations \
  pkg/logger \
  scripts

# Initialize Go module
go mod init github.com/yourusername/order-processing-system

# Create main.go file
touch cmd/api/main.go
This structure follows the standard Go project layout: application entry points live under cmd/, private application code under internal/, reusable packages under pkg/, and database migrations under migrations/.
Let's create a Makefile to simplify common tasks:
touch Makefile
Add the following content to the Makefile:
.PHONY: generate build run test clean

generate:
	@echo "Generating code..."
	go generate ./...

build:
	@echo "Building..."
	go build -o bin/api cmd/api/main.go

run:
	@echo "Running..."
	go run cmd/api/main.go

test:
	@echo "Running tests..."
	go test -v ./...

clean:
	@echo "Cleaning..."
	rm -rf bin

.DEFAULT_GOAL := build
This Makefile provides targets for generating code, building the application, running it, running tests, and cleaning up build artifacts. (Remember that Make recipe lines must be indented with a tab, not spaces.)
Create a file named api/openapi.yaml and define our API specification:
openapi: 3.0.0
info:
  title: Order Processing API
  version: 1.0.0
  description: API for managing orders in our processing system
paths:
  /orders:
    get:
      summary: List all orders
      responses:
        '200':
          description: Successful response
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/Order'
    post:
      summary: Create a new order
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/CreateOrderRequest'
      responses:
        '201':
          description: Created
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Order'
  /orders/{id}:
    get:
      summary: Get an order by ID
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: integer
      responses:
        '200':
          description: Successful response
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Order'
        '404':
          description: Order not found
    put:
      summary: Update an order
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: integer
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/UpdateOrderRequest'
      responses:
        '200':
          description: Successful response
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Order'
        '404':
          description: Order not found
    delete:
      summary: Delete an order
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: integer
      responses:
        '204':
          description: Successful response
        '404':
          description: Order not found
components:
  schemas:
    Order:
      type: object
      properties:
        id:
          type: integer
        customer_id:
          type: integer
        status:
          type: string
          enum: [pending, processing, completed, cancelled]
        total_amount:
          type: number
        created_at:
          type: string
          format: date-time
        updated_at:
          type: string
          format: date-time
    CreateOrderRequest:
      type: object
      required:
        - customer_id
        - total_amount
      properties:
        customer_id:
          type: integer
        total_amount:
          type: number
    UpdateOrderRequest:
      type: object
      properties:
        status:
          type: string
          enum: [pending, processing, completed, cancelled]
        total_amount:
          type: number
This specification defines our basic CRUD operations for orders.
Install oapi-codegen:
go install github.com/deepmap/oapi-codegen/cmd/oapi-codegen@latest
Generate the server code:
oapi-codegen -package api -generate types,gin,spec api/openapi.yaml > internal/api/api.gen.go
This command generates the Go code for our API, including types, server interfaces, and the OpenAPI specification.
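If you prefer to drive this through the Makefile's go generate ./... target, one option (an assumption on file placement, not part of the original setup) is to add a small generate.go file to the api package carrying a //go:generate directive:

package api

// Hypothetical internal/api/generate.go: lets `go generate ./...` regenerate the API stubs.
// The relative path assumes this file lives in internal/api.
//go:generate oapi-codegen -package api -generate types,gin,spec -o api.gen.go ../../api/openapi.yaml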
Create a new file internal/api/handler.go:
package api

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

type Handler struct {
	// We'll add dependencies here later
}

func NewHandler() *Handler {
	return &Handler{}
}

func (h *Handler) RegisterRoutes(r *gin.Engine) {
	RegisterHandlers(r, h)
}

// Implement the ServerInterface methods

func (h *Handler) GetOrders(c *gin.Context) {
	// TODO: Implement
	c.JSON(http.StatusOK, []Order{})
}

func (h *Handler) CreateOrder(c *gin.Context) {
	var req CreateOrderRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
		return
	}
	// TODO: Implement order creation logic
	order := Order{
		Id:          1,
		CustomerId:  req.CustomerId,
		Status:      "pending",
		TotalAmount: req.TotalAmount,
	}
	c.JSON(http.StatusCreated, order)
}

func (h *Handler) GetOrder(c *gin.Context, id int) {
	// TODO: Implement
	c.JSON(http.StatusOK, Order{Id: id})
}

func (h *Handler) UpdateOrder(c *gin.Context, id int) {
	var req UpdateOrderRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
		return
	}
	// TODO: Implement order update logic
	order := Order{
		Id:     id,
		Status: *req.Status,
	}
	c.JSON(http.StatusOK, order)
}

func (h *Handler) DeleteOrder(c *gin.Context, id int) {
	// TODO: Implement
	c.Status(http.StatusNoContent)
}
This implementation provides a basic structure for our API handlers. We’ll flesh out the actual logic when we integrate with the database and Temporal workflows.
Create a docker-compose.yml file in the project root:
version: '3.8'

services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_USER: orderuser
      POSTGRES_PASSWORD: orderpass
      POSTGRES_DB: orderdb
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:
This sets up a Postgres container for our local development environment.
Install golang-migrate:
go install -tags 'postgres' github.com/golang-migrate/migrate/v4/cmd/migrate@latest
Create our first migration:
migrate create -ext sql -dir migrations -seq create_orders_table
Edit the migrations/000001_create_orders_table.up.sql file:
CREATE TABLE orders (
    id SERIAL PRIMARY KEY,
    customer_id INTEGER NOT NULL,
    status VARCHAR(20) NOT NULL,
    total_amount DECIMAL(10, 2) NOT NULL,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX idx_orders_customer_id ON orders(customer_id);
CREATE INDEX idx_orders_status ON orders(status);
Edit the migrations/000001_create_orders_table.down.sql file:
DROP TABLE IF EXISTS orders;
Add migration targets to our Makefile:
migrate-up:
	@echo "Running migrations..."
	migrate -path migrations -database "postgresql://orderuser:orderpass@localhost:5432/orderdb?sslmode=disable" up

migrate-down:
	@echo "Reverting migrations..."
	migrate -path migrations -database "postgresql://orderuser:orderpass@localhost:5432/orderdb?sslmode=disable" down
Now we can run migrations with:
make migrate-up
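Assuming the Postgres container from docker-compose.yml is running, you can quickly verify that the table was created by inspecting the schema with psql (a manual check, using the credentials configured above):

docker-compose exec postgres psql -U orderuser -d orderdb -c '\d orders'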
Next, install sqlc:
go install github.com/kyleconroy/sqlc/cmd/sqlc@latest
Create a sqlc.yaml file in the project root:
version: "2" sql: - engine: "postgresql" queries: "internal/db/queries.sql" schema: "migrations" gen: go: package: "db" out: "internal/db" emit_json_tags: true emit_prepared_queries: false emit_interface: true emit_exact_table_names: false
Create a file internal/db/queries.sql:
-- name: GetOrder :one
SELECT * FROM orders
WHERE id = $1 LIMIT 1;

-- name: ListOrders :many
SELECT * FROM orders
ORDER BY id;

-- name: CreateOrder :one
INSERT INTO orders (
    customer_id, status, total_amount
) VALUES (
    $1, $2, $3
)
RETURNING *;

-- name: UpdateOrder :one
UPDATE orders
SET status = $2,
    total_amount = $3,
    updated_at = CURRENT_TIMESTAMP
WHERE id = $1
RETURNING *;

-- name: DeleteOrder :exec
DELETE FROM orders
WHERE id = $1;
Add a new target to our Makefile:
generate-sqlc:
	@echo "Generating sqlc code..."
	sqlc generate
Run the code generation:
make generate-sqlc
This will generate Go code for interacting with our database in the internal/db directory.
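Because emit_interface is enabled, sqlc also emits a Querier interface whose methods mirror the query names in queries.sql. As a rough usage sketch (the exact field names and types depend on what sqlc infers from the schema, so treat this as illustrative rather than copy-paste code):

package main

import (
	"context"
	"database/sql"
	"log"

	_ "github.com/lib/pq"

	"github.com/yourusername/order-processing-system/internal/db"
)

func main() {
	// Connect to the local Postgres started by docker-compose.
	conn, err := sql.Open("postgres", "postgresql://orderuser:orderpass@localhost:5432/orderdb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	queries := db.New(conn)

	// Call a generated, type-safe query method. Field types are whatever sqlc
	// inferred from the schema (e.g. DECIMAL columns may be mapped to string).
	order, err := queries.CreateOrder(context.Background(), db.CreateOrderParams{
		CustomerID:  42,
		Status:      "pending",
		TotalAmount: "99.99",
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("created order %v", order.ID)
}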
Add Temporal to our docker-compose.yml:
  temporal:
    image: temporalio/auto-setup:1.13.0
    ports:
      - "7233:7233"
    environment:
      - DB=postgresql
      - DB_PORT=5432
      - POSTGRES_USER=orderuser
      - POSTGRES_PWD=orderpass
      - POSTGRES_SEEDS=postgres
    depends_on:
      - postgres

  temporal-admin-tools:
    image: temporalio/admin-tools:1.13.0
    depends_on:
      - temporal
Create a file internal/workflow/order_workflow.go:
package workflow

import (
	"context"
	"time"

	"go.temporal.io/sdk/workflow"

	"github.com/yourusername/order-processing-system/internal/db"
)

func OrderWorkflow(ctx workflow.Context, order db.Order) error {
	logger := workflow.GetLogger(ctx)
	logger.Info("OrderWorkflow started", "OrderID", order.ID)

	// Simulate order processing
	if err := workflow.Sleep(ctx, 5*time.Second); err != nil {
		return err
	}

	// Update order status via an activity so it gets retries and timeouts.
	ao := workflow.ActivityOptions{
		StartToCloseTimeout: time.Minute,
	}
	ctx = workflow.WithActivityOptions(ctx, ao)

	if err := workflow.ExecuteActivity(ctx, UpdateOrderStatus, order.ID, "completed").Get(ctx, nil); err != nil {
		return err
	}

	logger.Info("OrderWorkflow completed", "OrderID", order.ID)
	return nil
}

// UpdateOrderStatus is an activity that updates the status of an order.
func UpdateOrderStatus(ctx context.Context, orderID int64, status string) error {
	// TODO: Implement database update
	return nil
}
This basic workflow simulates order processing by waiting for 5 seconds and then updating the order status to “completed”.
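Note that nothing will actually execute OrderWorkflow until a worker process polls the order-processing task queue; we haven't created one yet in this part. A minimal sketch of such a worker (a hypothetical cmd/worker/main.go, not one of the files created above) could look like this:

package main

import (
	"log"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/worker"

	"github.com/yourusername/order-processing-system/internal/workflow"
)

func main() {
	// Connect to the Temporal server started by docker-compose.
	c, err := client.NewClient(client.Options{HostPort: "localhost:7233"})
	if err != nil {
		log.Fatalf("Failed to create Temporal client: %v", err)
	}
	defer c.Close()

	// Register the workflow and activity on the queue used by the API handler.
	w := worker.New(c, "order-processing", worker.Options{})
	w.RegisterWorkflow(workflow.OrderWorkflow)
	w.RegisterActivity(workflow.UpdateOrderStatus)

	if err := w.Run(worker.InterruptCh()); err != nil {
		log.Fatalf("Worker failed: %v", err)
	}
}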
Update the internal/api/handler.go file to include Temporal client and start the workflow:
package api

import (
	"context"
	"fmt"
	"net/http"

	"github.com/gin-gonic/gin"
	"go.temporal.io/sdk/client"

	"github.com/yourusername/order-processing-system/internal/db"
	"github.com/yourusername/order-processing-system/internal/workflow"
)

type Handler struct {
	queries        *db.Queries
	temporalClient client.Client
}

func NewHandler(queries *db.Queries, temporalClient client.Client) *Handler {
	return &Handler{
		queries:        queries,
		temporalClient: temporalClient,
	}
}

// ... (previous handler methods)

func (h *Handler) CreateOrder(c *gin.Context) {
	var req CreateOrderRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
		return
	}

	order, err := h.queries.CreateOrder(c, db.CreateOrderParams{
		CustomerID:  req.CustomerId,
		Status:      "pending",
		TotalAmount: req.TotalAmount,
	})
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	// Start Temporal workflow
	workflowOptions := client.StartWorkflowOptions{
		ID:        fmt.Sprintf("order-%d", order.ID),
		TaskQueue: "order-processing",
	}
	_, err = h.temporalClient.ExecuteWorkflow(context.Background(), workflowOptions, workflow.OrderWorkflow, order)
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to start workflow"})
		return
	}

	c.JSON(http.StatusCreated, order)
}

// ... (implement other handler methods)
Create a new file internal/service/service.go:
package service

import (
	"database/sql"

	"go.temporal.io/sdk/client"

	"github.com/yourusername/order-processing-system/internal/api"
	"github.com/yourusername/order-processing-system/internal/db"
)

type Service struct {
	DB             *sql.DB
	Queries        *db.Queries
	TemporalClient client.Client
	Handler        *api.Handler
}

func NewService() (*Service, error) {
	// Initialize database connection
	conn, err := sql.Open("postgres", "postgresql://orderuser:orderpass@localhost:5432/orderdb?sslmode=disable")
	if err != nil {
		return nil, err
	}

	// Initialize Temporal client
	temporalClient, err := client.NewClient(client.Options{
		HostPort: "localhost:7233",
	})
	if err != nil {
		return nil, err
	}

	// Initialize queries
	queries := db.New(conn)

	// Initialize handler
	handler := api.NewHandler(queries, temporalClient)

	return &Service{
		DB:             conn,
		Queries:        queries,
		TemporalClient: temporalClient,
		Handler:        handler,
	}, nil
}

func (s *Service) Close() {
	s.DB.Close()
	s.TemporalClient.Close()
}
Update the cmd/api/main.go file:
package main

import (
	"log"

	"github.com/gin-gonic/gin"
	_ "github.com/lib/pq"

	"github.com/yourusername/order-processing-system/internal/service"
)

func main() {
	svc, err := service.NewService()
	if err != nil {
		log.Fatalf("Failed to initialize service: %v", err)
	}
	defer svc.Close()

	r := gin.Default()
	svc.Handler.RegisterRoutes(r)

	if err := r.Run(":8080"); err != nil {
		log.Fatalf("Failed to run server: %v", err)
	}
}
Create a Dockerfile in the project root:
# Build stage
FROM golang:1.17-alpine AS build

WORKDIR /app

COPY go.mod go.sum ./
RUN go mod download

COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /order-processing-system ./cmd/api

# Run stage
FROM alpine:latest

WORKDIR /

COPY --from=build /order-processing-system /order-processing-system

EXPOSE 8080

ENTRYPOINT ["/order-processing-system"]
Update the docker-compose.yml file to include our application:
version: '3.8'

services:
  postgres:
    # ... (previous postgres configuration)

  temporal:
    # ... (previous temporal configuration)

  temporal-admin-tools:
    # ... (previous temporal-admin-tools configuration)

  app:
    build: .
    ports:
      - "8080:8080"
    depends_on:
      - postgres
      - temporal
    environment:
      - DB_HOST=postgres
      - DB_USER=orderuser
      - DB_PASSWORD=orderpass
      - DB_NAME=orderdb
      - TEMPORAL_HOST=temporal:7233
Throughout the implementation guide, we’ve provided code snippets with explanations. Here’s a more detailed look at a key part of our system: the Order Workflow.
package workflow

import (
	"context"
	"time"

	"go.temporal.io/sdk/workflow"

	"github.com/yourusername/order-processing-system/internal/db"
)

// OrderWorkflow defines the workflow for processing an order
func OrderWorkflow(ctx workflow.Context, order db.Order) error {
	logger := workflow.GetLogger(ctx)
	logger.Info("OrderWorkflow started", "OrderID", order.ID)

	// Simulate order processing
	// In a real-world scenario, this could involve multiple activities such as
	// inventory check, payment processing, shipping arrangement, etc.
	if err := workflow.Sleep(ctx, 5*time.Second); err != nil {
		return err
	}

	// Update order status
	// We run the status update as an activity, which gives us automatic
	// retries and error handling. Activity options are applied to the context.
	ao := workflow.ActivityOptions{
		StartToCloseTimeout: time.Minute,
	}
	ctx = workflow.WithActivityOptions(ctx, ao)

	if err := workflow.ExecuteActivity(ctx, UpdateOrderStatus, order.ID, "completed").Get(ctx, nil); err != nil {
		return err
	}

	logger.Info("OrderWorkflow completed", "OrderID", order.ID)
	return nil
}

// UpdateOrderStatus is an activity that updates the status of an order
func UpdateOrderStatus(ctx context.Context, orderID int64, status string) error {
	// TODO: Implement database update
	// In a real implementation, this would use db.Queries to update the order status
	return nil
}
This workflow demonstrates several key concepts: long-running steps expressed as plain procedural code (the workflow.Sleep call standing in for real processing), side effects delegated to activities via workflow.ExecuteActivity so they benefit from automatic retries and timeouts, and structured logging through the workflow logger.
For this initial setup, we’ll focus on manual testing to ensure our system is working as expected. In future posts, we’ll dive into unit testing, integration testing, and end-to-end testing strategies.
To manually test our system:
docker-compose up
Use a tool like cURL or Postman to send requests to our API:
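For example (illustrative requests against the endpoints defined in openapi.yaml, with made-up values):

# Create an order
curl -X POST http://localhost:8080/orders \
  -H "Content-Type: application/json" \
  -d '{"customer_id": 42, "total_amount": 99.99}'

# List all orders
curl http://localhost:8080/orders

# Get, update, and delete a single order
curl http://localhost:8080/orders/1
curl -X PUT http://localhost:8080/orders/1 \
  -H "Content-Type: application/json" \
  -d '{"status": "processing"}'
curl -X DELETE http://localhost:8080/orders/1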
Check the logs to ensure the Temporal workflow is being triggered and completed successfully.
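With the docker-compose setup above, one way to follow these logs (service names match the compose file) is:

docker-compose logs -f app temporal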
While setting up this initial version of our order processing system, we encountered several challenges and considerations:
Database Schema Design: Designing a flexible yet efficient schema for orders is crucial. We kept it simple for now, but in a real-world scenario, we might need to consider additional tables for order items, customer information, etc.
Error Handling: Our current implementation has basic error handling. In a production system, we'd need more robust error handling and logging, especially for the Temporal workflows.
Configuration Management: We hardcoded configuration values for simplicity. In a real-world scenario, we'd use environment variables or a configuration management system.
Security: Our current setup does not include any authentication or authorization. In a production system, we would need to implement proper security measures.
Scalability: While Temporal helps with workflow scalability, we would also need to consider database scalability and API performance for high-traffic systems.
Monitoring and Observability: We have not implemented any monitoring or observability tooling yet. In a production system, these are essential for maintaining and troubleshooting the application.
In this first part of our series, we've laid the foundation for our order processing system. We have a basic CRUD API, database integration, and a simple Temporal workflow.
In the next part, we'll take a deeper look at Temporal workflows and activities.
We'll also start fleshing out our API with more realistic order processing logic and explore patterns for keeping the code clean and maintainable as the system's complexity grows.
Stay tuned for Part 2, where we'll take our order processing system to the next level!
Are you facing a challenging problem, or do you need an outside perspective on a new idea or project? I can help! Whether you want to build a technical proof of concept before making a bigger investment or need guidance on a difficult problem, I'm here for you.
If you're interested in working with me, reach out via email at hungaikevin@gmail.com.
Let's turn your challenges into opportunities!