In today's digital world, every action, whether swiping on a dating app or completing a purchase, relies on APIs working efficiently behind the scenes. As backend developers, we know that every millisecond counts. But how do we make APIs respond faster? The answer lies in caching.
Caching is the technique of storing frequently accessed data in memory so that your API can respond instantly instead of querying a slow database every time. Think of it like keeping your staple ingredients (salt, pepper, oil) on the kitchen counter rather than fetching them from the pantry every time you cook: it saves time and makes the process more efficient. In the same way, caching stores commonly requested data in a fast, accessible location such as Redis, cutting down API response times.
To connect FastAPI to a Redis cache, the following libraries need to be installed first:
pip install fastapi uvicorn aiocache pydantic
Pydantic is used to define and validate the structure of our data models (the shape of the API responses). aiocache performs asynchronous operations against the cache, and uvicorn runs the server.
Redis cannot currently be set up natively on Windows, so it has to be installed and run inside the Windows Subsystem for Linux (WSL). Once WSL is installed, follow the steps below.
After installing WSL, run the following commands to install and start Redis:
sudo apt update
sudo apt install redis-server
sudo systemctl start redis
To test connectivity to the Redis server, run:
redis-cli
This command opens an interactive prompt connected to the Redis server on its default port, 6379. Redis commands can be typed and tested directly in that prompt.
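For example, a quick sanity check inside the prompt might look like this (the key name `greeting` is arbitrary, chosen just for illustration):

```
127.0.0.1:6379> PING
PONG
127.0.0.1:6379> SET greeting "hello"
OK
127.0.0.1:6379> GET greeting
"hello"
```

If PING comes back with PONG, the server is up and reachable.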
Let’s create a simple FastAPI app that retrieves user information and caches it for future requests. We will use Redis for storing cached responses.
We’ll use Pydantic to define our User model, which represents the structure of the API response.
from pydantic import BaseModel

class User(BaseModel):
    id: int
    name: str
    email: str
    age: int
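To see what the model buys us, note that Pydantic validates and coerces incoming data into typed attributes. A minimal sketch (the sample values below are made up for illustration):

```python
from pydantic import BaseModel, ValidationError

class User(BaseModel):
    id: int
    name: str
    email: str
    age: int

# Numeric strings are coerced to ints by Pydantic's default (non-strict) mode
user = User(id="1", name="Alice", email="alice@example.com", age=25)
assert user.id == 1

# Data that cannot be coerced raises a ValidationError
try:
    User(id="not-a-number", name="Bob", email="bob@example.com", age=30)
except ValidationError:
    print("invalid payload rejected")
```

This validation happens automatically on every response FastAPI serializes through the model.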
To avoid repeating the caching logic for each endpoint, we’ll create a reusable caching decorator using the aiocache library. This decorator will attempt to retrieve the response from Redis before calling the actual function.
import json
from functools import wraps

from aiocache import Cache
from fastapi import HTTPException

def cache_response(ttl: int = 60, namespace: str = "main"):
    """
    Caching decorator for FastAPI endpoints.

    ttl: Time to live for the cache in seconds.
    namespace: Namespace for cache keys in Redis.
    """
    def decorator(func):
        @wraps(func)
        async def wrapper(*args, **kwargs):
            user_id = kwargs.get('user_id') or args[0]  # Assuming the user ID is the first argument
            cache_key = f"{namespace}:user:{user_id}"

            cache = Cache.REDIS(endpoint="localhost", port=6379, namespace=namespace)

            # Try to retrieve data from cache
            cached_value = await cache.get(cache_key)
            if cached_value:
                return json.loads(cached_value)  # Return cached data

            # Call the actual function if cache is not hit
            response = await func(*args, **kwargs)

            try:
                # Store the response in Redis with a TTL
                await cache.set(cache_key, json.dumps(response), ttl=ttl)
            except Exception as e:
                raise HTTPException(status_code=500, detail=f"Error caching data: {e}")

            return response
        return wrapper
    return decorator
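The decorator above needs a running Redis server, but the hit/miss/store pattern itself can be seen in isolation. Here is a minimal standard-library sketch of the same flow, with a plain dict standing in for Redis (the `fetch_user` function and its call counter are hypothetical, added only to demonstrate that the wrapped function runs just once per TTL window):

```python
import asyncio
import time
from functools import wraps

def memory_cache(ttl: float = 60.0):
    """Same hit/miss/store flow as cache_response, but backed by a dict."""
    store = {}  # cache_key -> (expires_at, value)

    def decorator(func):
        @wraps(func)
        async def wrapper(user_id):
            cache_key = f"user:{user_id}"
            entry = store.get(cache_key)
            if entry and entry[0] > time.monotonic():
                return entry[1]                      # cache hit
            response = await func(user_id)           # cache miss: call through
            store[cache_key] = (time.monotonic() + ttl, response)
            return response
        return wrapper
    return decorator

db_calls = 0  # counts how often the "database" is actually touched

@memory_cache(ttl=60)
async def fetch_user(user_id: int) -> dict:
    global db_calls
    db_calls += 1
    return {"id": user_id, "name": "Alice"}

result = asyncio.run(fetch_user(1))
result_again = asyncio.run(fetch_user(1))  # served from the dict; no second "db" call
print(db_calls)  # prints 1
```

The Redis version in the article follows exactly this shape, swapping the dict for `cache.get`/`cache.set` calls and letting Redis handle expiry.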
We’ll now implement a FastAPI route that retrieves user information based on a user ID. The response will be cached using Redis for faster access in subsequent requests.
from fastapi import FastAPI

app = FastAPI()

# Sample data representing users in a database
users_db = {
    1: {"id": 1, "name": "Alice", "email": "alice@example.com", "age": 25},
    2: {"id": 2, "name": "Bob", "email": "bob@example.com", "age": 30},
    3: {"id": 3, "name": "Charlie", "email": "charlie@example.com", "age": 22},
}

@app.get("/users/{user_id}")
@cache_response(ttl=120, namespace="users")
async def get_user_details(user_id: int):
    # Simulate a database call by retrieving data from users_db
    user = users_db.get(user_id)
    if not user:
        raise HTTPException(status_code=404, detail="User not found")
    return user
Start your FastAPI application by running:
uvicorn main:app --reload
Now, you can test the API by fetching user details via:
http://127.0.0.1:8000/users/1
The first request will fetch the data from the users_db, but subsequent requests will retrieve the data from Redis.
You can verify the cache by inspecting the keys stored in Redis. From your shell, run:
redis-cli KEYS '*'
(The quotes prevent the shell from expanding the * before it reaches Redis.)
This lists every key currently stored in Redis; each cached entry disappears automatically once its TTL expires.
When the user data is requested for the first time, the API fetches it from the database (users_db) and stores the result in Redis with a time-to-live (TTL) of 120 seconds.
Any subsequent requests for the same user within the TTL period are served directly from Redis, making the response faster and reducing the load on the database.
After 120 seconds, the cache entry expires, and the data is fetched from the database again on the next request, refreshing the cache.
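This lifecycle can be sketched with a tiny in-memory stand-in for Redis (the short TTL and sample data below are made up for illustration; the real app uses a 120-second TTL):

```python
import time

TTL = 0.1   # illustrative TTL in seconds
cache = {}  # user_id -> (expires_at, value)
db = {1: {"id": 1, "name": "Alice"}}

def get_user(user_id):
    entry = cache.get(user_id)
    if entry and entry[0] > time.monotonic():
        return entry[1], "cache"                      # served from cache
    value = db[user_id]                               # "database" lookup
    cache[user_id] = (time.monotonic() + TTL, value)  # refresh the cache
    return value, "db"

_, first = get_user(1)    # first request hits the database
_, second = get_user(1)   # within the TTL: served from cache
time.sleep(0.15)
_, third = get_user(1)    # cache expired: database again, cache refreshed
print(first, second, third)  # prints: db cache db
```

With Redis, the expiry step is handled server-side: the key simply vanishes after its TTL, so the next request falls through to the database.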
In this tutorial, we’ve demonstrated how to implement Redis caching in a FastAPI application using a simple user details example. By caching API responses, you can significantly improve the performance of your application, particularly for data that doesn't change frequently.
Please do upvote and share if you find this article useful.