In the first part of this tutorial, we created a function that replies to a message. When a new request comes in, we call this function, and the function calls the Telegram API with the reply text. You may have noticed a problem with this approach: the whole process is synchronous. If it takes a few seconds to compute the reply and call the Telegram API, the request may time out.
In this section, we will see how to make the reply process asynchronous. We will be using Celery for this.
This is the second part of a three-part tutorial. Here are the links to the other parts of this tutorial.
Setting Up Celery Tasks
We need to install the required packages first: Celery to run the background tasks, and Redis to act as the message broker that stores the task queue. Run the following command to install both through Pipenv.
pipenv install celery redis
Next, create a celery.py file in your project configuration directory (the directory which contains the settings file) and add the following contents to it.
import os

from celery import Celery

# Make sure the Django settings are importable before the app is configured.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "<your-project-name>.settings")

app = Celery("<your-project-name>")

# Read every setting prefixed with CELERY_ from the Django settings file.
app.config_from_object("django.conf:settings", namespace="CELERY")

# Look for a tasks.py module in each installed app.
app.autodiscover_tasks()
Make sure you put in the correct project name.
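One more piece of wiring from the standard Celery–Django setup is worth adding here, in case your project doesn't have it yet: import the Celery app in the package's __init__.py (the same directory as celery.py) so it is loaded when Django starts and the shared_task decorator we use below can find it.

# <your-project-name>/__init__.py
# Load the Celery app when Django starts so @shared_task binds to it.
from .celery import app as celery_app

__all__ = ("celery_app",)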
Celery Task for Sending Reply
We need to convert our reply function into a Celery task. For this, create a new tasks.py file in our chat app and add the following content.
import requests
from celery import shared_task
from django.conf import settings
@shared_task
def send_telegram_reply(message):
    name = message["message"]["from"]["first_name"]
    text = message["message"]["text"]
    chat_id = message["message"]["chat"]["id"]
    reply = f"Hi {name}! Got your message: {text}"
    reply_url = f"https://api.telegram.org/bot{settings.TELEGRAM_API_TOKEN}/sendMessage"
    data = {"chat_id": chat_id, "text": reply}
    requests.post(reply_url, data=data)
We have added the shared_task decorator to convert the function into a Celery task. Next, update our view to call this task instead of the function in utils.py.
from django.conf import settings
from rest_framework import status
from rest_framework.response import Response
from rest_framework.views import APIView
from chat.tasks import send_telegram_reply
class TelegramWebhook(APIView):
    def post(self, request, token):
        if token != settings.TELEGRAM_WEBHOOK_TOKEN:
            return Response(
                {"error": "Unauthorized"}, status=status.HTTP_401_UNAUTHORIZED
            )
        send_telegram_reply.delay(request.data)
        return Response({"success": True})
Note that we are calling the task through its delay method. If you don't do this, the task will not go to the message queue for Celery to process; it will simply run synchronously inside the request.
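To make the difference concrete, here is a small sketch. The payload is a made-up, trimmed example of what Telegram sends; only the call styles matter.

from chat.tasks import send_telegram_reply

# Hypothetical payload, trimmed to the fields the task reads.
message = {"message": {"from": {"first_name": "Ada"}, "text": "hi", "chat": {"id": 42}}}

send_telegram_reply(message)        # plain call: runs inline and blocks the request
send_telegram_reply.delay(message)  # queued: returns immediately, a worker runs it

# delay(...) is shorthand for apply_async(...), which accepts extra options:
send_telegram_reply.apply_async(args=(message,), countdown=5)  # run ~5 seconds later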
Adding Docker
You can install Redis on your machine and then run a Celery worker process from a terminal, but it is cumbersome to do all of this manually. Also, if you are on Windows, running Redis natively is a bit tricky. We will be using Docker to solve all of these problems.

Go to the Docker website and follow the install instructions for your operating system if you don't have Docker installed on your machine.
Create a new file named Dockerfile in your project directory (the directory which contains the manage.py file) and add the following contents.
FROM python:3.10
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
RUN pip3 install pipenv
COPY Pipfile /code/
RUN pipenv install --dev
COPY . /code/
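As an optional extra (my suggestion, not part of the original setup), you can add a .dockerignore file next to the Dockerfile so that the local database files we will create under ./data, along with other noise, are not sent to the build context:

# .dockerignore
.git
data/
__pycache__/
*.pyc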
The Dockerfile should be self-explanatory. We will also need a docker-compose file to orchestrate all the processes we run. Create a new docker-compose.yml next to the Dockerfile and add the following code.
version: "3.9"
services:
  database_server:
    image: postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  redis_server:
    image: redis
  celery:
    build: .
    command: pipenv run celery -A <your-project-name> worker -l info
    volumes:
      - .:/code
    depends_on:
      - database_server
      - redis_server
    restart: always
  django_server:
    build: .
    command: pipenv run python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - database_server
      - redis_server
      - celery
The first service we add is a PostgreSQL database. We will be using this database in the next part of this tutorial. Then we add a Redis service for Celery to store the task information in.
The celery service we added builds from the current directory. Then we run the Celery worker process using Pipenv. Don't forget to put the correct project name in this command; it is the name of the directory which contains the celery.py file. We are mounting the current directory as a volume so that the changes we make to our code are visible inside the container; we don't have to rebuild the image to run the updated code. The depends_on setting makes sure both PostgreSQL and Redis are started before Celery starts.
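One caveat: depends_on only waits for the containers to start, not for PostgreSQL or Redis to actually accept connections. If you run into startup races, a health check is one option. This is a sketch that assumes a Compose version supporting depends_on conditions; merge it into the existing services rather than copying it verbatim.

services:
  database_server:
    image: postgres
    # ... volumes and environment as above ...
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 5
  celery:
    # ... build, command, and volumes as above ...
    depends_on:
      database_server:
        condition: service_healthy
      redis_server:
        condition: service_started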
The django_server service should be self-explanatory at this point. We are using Pipenv to run the Django development server.
In the last section, we configured Celery to read from the Django settings file. This means we do not have to maintain a separate settings file for Celery; we can put the Celery configuration in the Django settings file and Celery will load it. Open your settings file and add the following to it.
CELERY_BROKER_URL = "redis://redis_server:6379/0"
CELERY_RESULT_BACKEND = "redis://redis_server:6379/0"
We are using Redis for both the task queue (the broker) and the result store. Note that we are using the docker-compose service name instead of an IP address or hostname; this is how processes reach each other when run with Docker, since containers on the same compose network can resolve each other by service name.
You should also update the DATABASES section in your settings file to use the PostgreSQL service.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "postgres",
        "USER": "postgres",
        "PASSWORD": "postgres",
        "HOST": "database_server",
        "PORT": "5432",
    }
}
You may have noticed the credentials we set as environment variables in the docker-compose file. We are using the same credentials here.
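One thing we haven't installed yet: Django's PostgreSQL backend requires a database driver. If it isn't already in your Pipfile, install one; psycopg2-binary is the common choice for development setups like this.

pipenv install psycopg2-binary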
Run the following command to start all of our services using docker-compose.
docker-compose up --build
This will take some time depending on the speed of your internet connection. Note that you don't need to add the --build option every time you run this; you only need it when you change the Dockerfile or docker-compose.yml, or when you have changes in the Pipfile. Also, if you have changed the Celery task code, you have to restart docker-compose, because the Celery worker won't pick up code changes automatically. For Django, you don't have to restart anything; the development server reloads changed code on its own.
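Restarting only the worker is usually enough for that, for example:

docker-compose restart celery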
Once the build is complete, go to Telegram and send a message to your bot. You should get a reply back. (Make sure you run the management command to update the webhook if the ngrok URL has changed.)
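If the reply doesn't arrive, the worker's logs are the first place to look:

docker-compose logs -f celery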
A note on running management commands: you can still run the management command to set the webhook from a terminal after activating pipenv. But if you want to run it through Docker, you can use the following command.
docker-compose run django_server pipenv run python manage.py <your-command>
For example, to run the set webhook command, you should use the following.
docker-compose run django_server pipenv run python manage.py set_telegram_webhook <ngrok url>
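Similarly, since we pointed Django at PostgreSQL above, you will most likely need to run the migrations through Docker as well:

docker-compose run django_server pipenv run python manage.py migrate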