Automatically reload Celery workers with a custom Django command


Celery previously had an --autoreload flag that has since been removed. However, Django has automatic reloading built into its manage.py runserver command. The absence of automatic reloading in Celery workers creates a confusing development experience: updating Python code causes the Django server to reload with the current code, but any tasks that the server fires will run stale code in the Celery worker.

This post will show you how to build a custom manage.py runworker command that automatically reloads Celery workers during development. The command will be modeled after runserver, and we will take a look at how Django's automatic reloading works under the hood.

Before we begin

This post assumes that you have a Django app with Celery already installed (guide). It also assumes you have an understanding of the differences between projects and applications in Django.

All links to source code and documentation point to the versions of Django and Celery that are current at the time of publication (July 2024). If you're reading this in the distant future, things may have changed.

Finally, the main project directory will be named my_project in the post's examples.

Solution: a custom command

We will create a custom manage.py command called runworker. Because Django provides automatic reloading via its runserver command, we will use runserver's source code as the basis of our custom command.

You can create a command in Django by making a management/commands/ directory within any of your project's applications. Once the directories have been created, put a Python file named after the command you'd like to create inside that directory (docs).
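In other words, the layout inside the application ends up looking like this (the names here are placeholders, and the Django docs also show an __init__.py file in both new directories so that they are importable packages):

your_app/
    __init__.py
    management/
        __init__.py
        commands/
            __init__.py
            your_command.py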

Assuming your project has an application named polls, we will create a file at polls/management/commands/runworker.py and add the following code:

# polls/management/commands/runworker.py

import sys
from datetime import datetime

from celery.signals import worker_init

from django.conf import settings
from django.core.management.base import BaseCommand
from django.utils import autoreload

from my_project.celery import app as celery_app


class Command(BaseCommand):
    help = "Starts a Celery worker instance with auto-reloading for development."

    # Validation is called explicitly each time the worker instance is reloaded.
    requires_system_checks = []
    suppressed_base_arguments = {"--verbosity", "--traceback"}

    def add_arguments(self, parser):
        parser.add_argument(
            "--skip-checks",
            action="store_true",
            help="Skip system checks.",
        )
        parser.add_argument(
            "--loglevel",
            choices=("DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL", "FATAL"),
            type=str.upper,  # Transforms user input to uppercase.
            default="INFO",
        )

    def handle(self, *args, **options):
        autoreload.run_with_reloader(self.run_worker, **options)

    def run_worker(self, **options):
        # If an exception was silenced in ManagementUtility.execute in order
        # to be raised in the child process, raise it now.
        autoreload.raise_last_exception()

        if not options["skip_checks"]:
            self.stdout.write("Performing system checks...\n\n")
            self.check(display_num_errors=True)

        # Need to check migrations here, so can't use the
        # requires_migrations_check attribute.
        self.check_migrations()

        # Print Django info to console when the worker initializes.
        worker_init.connect(self.on_worker_init)

        # Start the Celery worker.
        celery_app.worker_main(
            [
                "--app",
                "my_project",
                "--skip-checks",
                "worker",
                "--loglevel",
                options["loglevel"],
            ]
        )

    def on_worker_init(self, sender, **kwargs):
        quit_command = "CTRL-BREAK" if sys.platform == "win32" else "CONTROL-C"

        now = datetime.now().strftime("%B %d, %Y - %X")
        version = self.get_version()
        print(
            f"{now}\n"
            f"Django version {version}, using settings {settings.SETTINGS_MODULE!r}\n"
            f"Quit the worker instance with {quit_command}.",
            file=self.stdout,
        )

IMPORTANT: Be sure to replace all instances of my_project with the name of your Django project.
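Once the file is saved (and my_project has been replaced), you can start the auto-reloading worker from the directory that contains manage.py:

python manage.py runworker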

If you want to copy-and-paste this code and continue with your programming, you can safely stop here without reading the rest of this post. This is an elegant solution that will serve you well as you develop your Django & Celery project. However, if you want to learn more about how it works then keep reading.

How it works (optional)

Rather than review this code line-by-line, I'll discuss the most interesting parts by topic. If you aren't already familiar with Django custom commands, you may want to review the docs before proceeding.

Automatic reloading

This part feels the most magical. Within the body of the command's handle() method, there is a call to Django's internal autoreload.run_with_reloader(). It accepts a callback function that will execute every time a Python file is changed in the project. How does that actually work?

Let's take a look at a simplified version of the autoreload.run_with_reloader() function's source code. This simplified version rewrites, inlines, and deletes parts of the original to make its operation clearer.

# NOTE: This has been dramatically pared down for clarity.
# get_reloader() and get_child_arguments() are helpers defined elsewhere
# in django.utils.autoreload.

import os
import subprocess
import sys
import threading


def run_with_reloader(callback_func, *args, **kwargs):
    # NOTE: This will evaluate to False the first time it is run.
    is_inside_subprocess = os.getenv("RUN_MAIN") == "true"

    if is_inside_subprocess:
        # The reloader watches for Python file changes.
        reloader = get_reloader()

        django_main_thread = threading.Thread(
            target=callback_func, args=args, kwargs=kwargs
        )
        django_main_thread.daemon = True
        django_main_thread.start()

        # When the code changes, the reloader exits with return code 3.
        reloader.run(django_main_thread)

    else:
        # Returns Python path and the arguments passed to the command.
        # Example output: ['/path/to/python', './manage.py', 'runworker']
        args = get_child_arguments()

        subprocess_env = {**os.environ, "RUN_MAIN": "true"}
        while True:
            # Rerun the manage.py command in a subprocess.
            p = subprocess.run(args, env=subprocess_env, close_fds=False)
            if p.returncode != 3:
                sys.exit(p.returncode)

When manage.py runworker is run in the command line, it will first call the handle() method which will call run_with_reloader().

Inside run_with_reloader(), it will check to see if an environment variable called RUN_MAIN has a value of "true". When the function is first called, RUN_MAIN should have no value.

When RUN_MAIN is not set to "true", run_with_reloader() enters a loop. Inside the loop, it starts a subprocess that reruns the same manage.py command that was originally invoked, then waits for that subprocess to exit. If the subprocess exits with return code 3, the next iteration of the loop starts a new subprocess and waits again. The loop runs until a subprocess returns an exit code other than 3 (or until the user quits with Ctrl-C). Once it gets a non-3 return code, it exits the program completely.

The spawned subprocess runs the manage.py command again (in our case manage.py runworker), and again the command will call run_with_reloader(). This time, RUN_MAIN will be set to "true" because the command is running in a subprocess.

Now that run_with_reloader() knows it is in a subprocess, it will get a reloader that watches for file changes, put the provided callback function in a thread, and pass it to the reloader which begins watching for changes.

When the reloader detects a file change, it calls sys.exit(3). This exits the subprocess, which triggers the next iteration of the loop in the parent process. In turn, a new subprocess is launched that picks up the updated code.
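If it helps to see the parent/child trick outside of Django, here is a minimal, self-contained sketch of the same pattern. It is purely illustrative and is not Django's actual implementation:

# reload_demo.py -- a standalone sketch of the RUN_MAIN / exit-code-3 pattern.
# This is NOT Django's code; it only illustrates the restart loop.
import os
import subprocess
import sys

if os.getenv("RUN_MAIN") == "true":
    # Child process: do the real work. Exiting with code 3 here would make
    # the parent loop below restart us, picking up any code changes.
    print("child started with pid", os.getpid())
    sys.exit(0)
else:
    # Parent process: rerun this same script in a subprocess, and keep
    # restarting it for as long as it exits with return code 3.
    subprocess_env = {**os.environ, "RUN_MAIN": "true"}
    while True:
        p = subprocess.run([sys.executable, *sys.argv], env=subprocess_env)
        if p.returncode != 3:
            sys.exit(p.returncode)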

System checks & migrations

By default, Django commands perform system checks before they run their handle() method. However, in the case of runserver and our custom runworker command, we want to postpone those checks until we're inside the callback that we provide to run_with_reloader(), which in our case is the run_worker() method. This lets the command keep running with automatic reloading while you fix any failing system checks.

To postpone running the system checks, the value of the requires_system_checks attribute is set to an empty list, and the checks are performed by calling self.check() in the body of run_worker(). Like runserver, our custom runworker command also checks to see if all migrations have been run, and it displays a warning if there are pending migrations.

Because we're already performing Django's system checks within the run_worker() method, we disable the system checks in Celery by passing it the --skip-checks flag to prevent duplicate work.

All of the code that pertains to system checks and migrations was lifted directly from the runserver command source code.

celery_app.worker_main()

Our implementation launches the Celery worker directly from Python using celery_app.worker_main() rather than shelling out to the celery CLI.
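The argument list passed to worker_main() mirrors what you would pass to the celery executable on the command line. A shelled-out version of run_worker() might look roughly like the sketch below; it is included only for comparison and is not part of the command above:

# Hypothetical alternative: launch the worker by shelling out to the
# celery CLI instead of calling celery_app.worker_main() in-process.
import subprocess

subprocess.run(
    ["celery", "--app", "my_project", "--skip-checks", "worker", "--loglevel", "INFO"]
)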

on_worker_init()

This code executes when the worker is initialized, displaying the date & time, Django version, and the command to quit. It is modeled after the information that displays when runserver boots.
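With the actual date, Django version, and settings module filled in at runtime, the startup message looks something like this (the values below are placeholders):

July 22, 2024 - 09:40:11
Django version X.Y, using settings 'my_project.settings'
Quit the worker instance with CONTROL-C.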

Other runserver boilerplate

The following lines were also lifted from the runserver source:

  • suppressed_base_arguments = {"--verbosity", "--traceback"}
  • autoreload.raise_last_exception()

Log level

Our custom command accepts a configurable log level in case the developer wants to adjust it from the CLI without modifying the code.
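For example, to get more verbose output from the worker than the default INFO level:

python manage.py runworker --loglevel DEBUG

Because the argument uses type=str.upper, a lowercase value such as --loglevel debug is accepted as well.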

Going further

I poked and prodded at the source code for Django and Celery to build this implementation, and there are many opportunities to extend it. You could configure the command to accept more of Celery's worker arguments. Alternatively, you could create a custom manage.py command that automatically reloads any shell command, like David Browne did in this Gist.

If you found this useful, feel free to leave a like or a comment. Thanks for reading.
