Good afternoon. I have been struggling with this for a week already; maybe my eyes have just glazed over, so perhaps someone can point out what I am missing.
Initial setup:
1) application structure
-Project
--docker
--Pref_Project
---manage.py
---.env
---Pref_project
----__init__.py
----settings.py
---Project
----IntegrationData
-----tasks.py
----__init__.py
----celeryapp.py
--docker-compose.yml
2) I filled in Project.Pref_Project.Project.__init__.py:
from .celeryapp import app as celery_app
__all__ = ['celery_app']
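(As far as I understand, this is the standard Django/Celery wiring: re-exporting the app in the package's __init__.py is what lets celery -A <that package> find the instance, roughly like this sketch, where "Project" is my assumption for the importable package name, relative to the directory containing manage.py.)
# sketch of what "celery -A Project" relies on
import Project
app = Project.celery_app  # exposed by the __init__.py above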
3) I filled in Project.Pref_Project.Project.celeryapp.py:
import os
from celery import Celery
from celery.schedules import crontab
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "Pref_Project.settings")
app = Celery('main', include=["Pref_Project.Project.integrationData.tasks"])
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
app.conf.beat_schedule = {
    "task_store_loader": {
        "task": "Pref_Project.Project.integrationData.tasks.store_loader",
        "schedule": crontab(minute=1, hour=0),
    }
}
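One thing I am not sure about: as far as I understand, with name="store_loader" set in the decorator (step 4 below), Celery registers the task under that short name, and the "task" field in beat_schedule is matched against the registered name, so the full dotted path above would not find it. The include/schedule paths also spell integrationData with a lowercase i while the directory in the tree is IntegrationData, which would not import on a case-sensitive filesystem. A variant consistent with the decorator would look roughly like this (just a sketch, maybe I am wrong about the intended naming):
app.conf.beat_schedule = {
    "task_store_loader": {
        "task": "store_loader",                 # the name registered by the decorator in tasks.py
        "schedule": crontab(minute=1, hour=0),  # daily at 00:01
    }
}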
4) I filled in the file Project.Pref_Project.Project.IntegrationData.tasks.py:
from ..celeryapp import app
from ..integrationProcessing import IntegrationProcessing
@app.task(name="store_loader", bind=True)
def store_loader():
    integrationProcessing = IntegrationProcessing()
    integrationProcessing.load_store()
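Another point I am unsure about: bind=True passes the task instance as the first positional argument, so as far as I understand the function would need to accept it, roughly like this (sketch):
@app.task(name="store_loader", bind=True)
def store_loader(self):  # with bind=True the bound task instance arrives as "self"
    integrationProcessing = IntegrationProcessing()
    integrationProcessing.load_store()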
5) I filled in Project.docker-compose.yml:
version: "3.4"
services:
  db:
    image: mdillon/postgis:11
    container_name: same_name_db
    env_file: ./.env
    volumes:
      - ./db/data:/var/lib/postgresql/data
      - ./db/backup:/backup
      - ./docker/db/init-user.sql:/docker-entrypoint-initdb.d/init-user.sql
    ports:
      - "same-port"
  app:
    container_name: same_name_app
    image: same-image
    env_file: ./.env
    restart: unless-stopped
    ports:
      - "same-port"
    command: /start
    links:
      - db
      - redis
    depends_on:
      - db
      - redis
  celeryworker:
    image: same-image:celeryworker-test
    container_name: same-name_celeryworker
    env_file: ./.env
    environment:
      - CELERY_BROKER_URL=redis://redis:6379
    restart: unless-stopped
    links:
      - db
      - redis
    depends_on:
      - db
      - redis
      - app
    command:
      celery -A celery worker
      --loglevel INFO
      --concurrency 4
      --max-tasks-per-child 100
  celerybeat:
    image: same-image:celerybeat-test
    container_name: same-name_celerybeat
    env_file: ./.env
    environment:
      - CELERY_BROKER_URL=redis://redis:6379
    restart: unless-stopped
    links:
      - db
      - redis
    depends_on:
      - db
      - redis
      - app
    command:
      celery -A celery beat
      --loglevel INFO
  redis:
    container_name: name_redis
    image: redis:latest
    privileged: true
    command:
      redis-server --port 6379
      --appendonly yes
      --maxmemory 1gb
      --maxmemory-policy allkeys-lru
    expose:
      - "6379"
    ports:
      - "6379:6379"
  pgadmin:
    container_name: name_pgadmin
    image: dpage/pgadmin4
    env_file: ./.env
    volumes:
      - ./pgadmin:/var/lib/pgadmin
    ports:
      - "5050:80"
    user: root
volumes:
  static:
  media:
  protected:
  pgadmin:
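What I am least sure about is the -A argument of the worker/beat commands: celery -A celery worker points -A at a module literally named celery, i.e. the library itself, rather than at the package that defines my app, and the worker banner below indeed shows app: default ... (.default.Loader) with an empty [tasks] list. If the working directory in the image is the one containing manage.py, I would expect something like -A Project (or -A Project.celeryapp) instead. A manual check along these lines could be run inside the celeryworker container (a sketch; the import paths are my assumptions based on the tree above):
# sketch of a manual check from inside the celeryworker container;
# import paths are assumptions based on the project tree above
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "Pref_Project.settings")

import django
django.setup()

from Project.celeryapp import app as celery_app
from Project.IntegrationData import tasks  # importing the module registers store_loader

print(celery_app.main)                     # expecting 'main' here, not a default app
print("store_loader" in celery_app.tasks)  # expecting True once the task module is imported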
In the end I got 3 containers running; their logs:
redis:
1:C 23 Mar 2023 03:25:16.781 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 23 Mar 2023 03:25:16.781 # Redis version=7.0.10, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 23 Mar 2023 03:25:16.781 # Configuration loaded
1:M 23 Mar 2023 03:25:16.783 * monotonic clock: POSIX clock_gettime
1:M 23 Mar 2023 03:25:16.784 * Running mode=standalone, port=6379.
1:M 23 Mar 2023 03:25:16.784 # Server initialized
1:M 23 Mar 2023 03:25:16.784 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 23 Mar 2023 03:25:16.873 * Creating AOF base file appendonly.aof.1.base.rdb on server start
1:M 23 Mar 2023 03:25:17.278 * Creating AOF incr file appendonly.aof.1.incr.aof on server start
1:M 23 Mar 2023 03:25:17.278 * Ready to accept connections
celery worker:
-------------- celery@0daef9123e89 v5.2.7 (dawn-chorus)
--- ***** -----
-- ******* ---- Linux-5.15.0-58-generic-x86_64-with-glibc2.2.5 2023-03-23 03:25:30
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: default:0x7f47c782f280 (.default.Loader)
- ** ---------- .> transport: redis://redis:6379//
- ** ---------- .> results: disabled://
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
/usr/local/lib/python3.8/site-packages/celery/platforms.py:840: SecurityWarning: You're running the worker with superuser privileges: this is
absolutely not recommended!
Please specify a different user using the --uid option.
User information: uid=0 euid=0 gid=0 egid=0
warnings.warn(SecurityWarning(ROOT_DISCOURAGED.format(
[2023-03-23 03:25:31,267: INFO/MainProcess] Connected to redis://redis:6379//
[2023-03-23 03:25:31,271: INFO/MainProcess] mingle: searching for neighbors
[2023-03-23 03:25:32,285: INFO/MainProcess] mingle: all alone
[2023-03-23 03:25:32,303: INFO/MainProcess] celery@0daef9123e89 ready.
[2023-03-23 04:00:00,126: INFO/MainProcess] Task celery.backend_cleanup[ed7cbc7f-16f6-47e9-ba17-45d4171168cd] received
/usr/local/lib/python3.8/site-packages/celery/platforms.py:840: SecurityWarning: You're running the worker with superuser privileges: this is
absolutely not recommended!
Please specify a different user using the --uid option.
User information: uid=0 euid=0 gid=0 egid=0
warnings.warn(SecurityWarning(ROOT_DISCOURAGED.format(
[2023-03-23 04:00:00,129: INFO/ForkPoolWorker-4] Task celery.backend_cleanup[ed7cbc7f-16f6-47e9-ba17-45d4171168cd] succeeded in 0.0005096299573779106s: None
celery beat:
[2023-03-23 03:25:30,764: INFO/MainProcess] beat: Starting...
[2023-03-23 04:00:00,114: INFO/MainProcess] Scheduler: Sending due task celery.backend_cleanup (celery.backend_cleanup)
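One way to double-check what the running worker has actually registered, e.g. from a Django shell in the app container (a sketch; assumes the broker settings resolve there the same way as in the worker, and the import path is again my assumption):
from Project.celeryapp import app  # import path assumed from the tree above

replies = app.control.inspect().registered()
print(replies)  # 'store_loader' should show up in the worker's list of tasks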
But in the end my task never runs. I would appreciate any help.