diff --git a/README.md b/README.md
index e093b8b17..32cd76f34 100644
--- a/README.md
+++ b/README.md
@@ -6,54 +6,308 @@ Repository for Shoonya's backend.
The project was created using [Python 3.7](https://www.python.org/downloads/). All major dependencies along with the versions are listed in the `backend/deploy/requirements.txt` file.
-## Installation
-The installation and setup instructions have been tested on the following platforms:
+The installation and setup instructions have been tested on the following platforms:
+
+| System | Requirements |
+| --------- | ------------ |
+| Windows | Windows 10 or later with at least 8 GB of RAM. |
+| Ubuntu | Ubuntu 20.04 or later with at least 4 GB of RAM. |
+| Unix/Mac | macOS 10.13 or later with at least 4 GB of RAM. |
-It can be done in 2 ways:-
-1) using docker by running all services in containers or
-2) directly running the services on local one by one (like, celery and redis).
-
-If your preference is 1 over 2 please be fixed to this environment and follow the steps below it:
-- Docker
-- Ubuntu 20.04 OR macOs
-If you are using a different operating system, you will have to look at external resources (eg. StackOverflow) to correct any errors.
+## Backend Components and Services
+The backend setup is divided into five main components:
+```
+Backend Components
+|
+|-- 1. Default Components
+|   |-- a) Django
+|   |-- b) Celery
+|   |-- c) Redis
+|
+|-- 2. Elastic Logstash Kibana (ELK) & Flower Configuration
+|
+|-- 3. Nginx-Certbot
+|
+|-- 4. Additional Services
+|   |-- a) Google Application Credentials
+|   |-- b) Ask_Dhruva
+|   |-- c) Indic_Trans_V2
+|   |-- d) Email Service
+|   |-- e) Logging
+|   |-- f) MinIO
+```
+
+
+
+
+### 1. Default Components
+
+
+This section outlines the essential setup needed for the application to function properly. It encompasses Docker deployments for Django, Celery, and Redis, which form the core components of our application infrastructure.
+
+#### a) Django
+- **Description:** Django is a Python-based framework for web development, providing tools and features to build robust web applications quickly and efficiently.
+- Configuration:
+  - `SECRET_KEY`: Django secret key; either enter one manually or generate one using the commands below.
+
+ To create a new secret key, run the following commands (within the virtual environment):
+ ```
+ # Open a Python shell
+ python backend/manage.py shell
+
+ >> from django.core.management.utils import get_random_secret_key
+ >> get_random_secret_key()
+ ```
+
+#### b) Celery
+- **Description:** Celery is a system for asynchronous task processing based on distributed message passing, allowing computationally intensive operations to run in the background without impacting the main application's performance.
+- Configuration:
+ - `CELERY_BROKER_URL`: Broker for Celery tasks
+
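+A minimal sketch of how a Celery app might consume this broker URL (the module name `shoonya_backend` mirrors this repository's layout; the wiring itself is illustrative, not the project's exact code):
+
+```python
+# Illustrative sketch: a Celery app reading CELERY_BROKER_URL from the environment.
+import os
+
+from celery import Celery
+
+app = Celery(
+    "shoonya_backend",
+    broker=os.environ.get("CELERY_BROKER_URL", "redis://localhost:6379/0"),
+)
+
+@app.task
+def ping():
+    # Trivial task, handy for checking that a worker picks up jobs
+    return "pong"
+```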
+
+#### c) Redis
+- Description: Redis is an open-source, in-memory data structure store used as a database, cache, and message broker.
+- Configuration:
+  - `REDIS_HOST`: Hostname of the Redis server
+  - `REDIS_PORT`: Port of the Redis server
+
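+For example, a settings module could assemble a Redis connection URL from these two variables (a sketch, assuming the variable names above):
+
+```python
+# Sketch: build a Redis connection URL from REDIS_HOST and REDIS_PORT.
+import os
+
+REDIS_HOST = os.environ.get("REDIS_HOST", "redis")
+REDIS_PORT = int(os.environ.get("REDIS_PORT", "6379"))
+REDIS_URL = f"redis://{REDIS_HOST}:{REDIS_PORT}/0"
+```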
+
+
+
+
+### 2. ELK & Flower Configuration
+
+
+
+#### a) Elasticsearch-Logstash-Kibana
+This section contains the ELK stack, used for logging, monitoring, etc.
+
+- Elasticsearch
+ - Description: Elasticsearch is a distributed, RESTful search and analytics engine.
+ - Configuration:
+ - `ELASTICSEARCH_URL`: URL for Elasticsearch
+ - `INDEX_NAME`: Index name
+
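+A quick way to sanity-check the configured endpoint is a ping (a sketch using the `elasticsearch` Python client; the client usage is an assumption, the stack itself only needs the variables above):
+
+```python
+# Sketch: verify that the configured Elasticsearch endpoint is reachable.
+import os
+
+from elasticsearch import Elasticsearch
+
+es = Elasticsearch([os.environ.get("ELASTICSEARCH_URL", "http://elasticsearch:9200")])
+print(es.ping())  # True if the cluster responds
+```
+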
+- Logstash
+ - Description: Logstash is a server-side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a "stash" like Elasticsearch.
+ - Configuration: None
+
+- Kibana
+ - Description: Kibana is an open-source data visualization dashboard for Elasticsearch.
+ - Configuration: None
+
+
+
+#### b) Flower Configuration
+- Flower is a web-based tool for monitoring and administering Celery clusters. It allows you to keep track of tasks as they flow through your system, inspect the system's health, and perform administrative operations like shutting down workers.
+
+- Additionally, Flower monitors the tasks defined in the `tasks/` directory under `backend/`.
+
+- Configuration:
+ - `FLOWER_USERNAME`: Flower username
+ - `FLOWER_PASSWORD`: Flower password
+  - `FLOWER_PORT`: Port on which Flower runs
+
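+As an illustration, Flower could be launched with these values roughly as follows (a sketch; `--port` and `--basic_auth` are Flower CLI options, the env var wiring is an assumption):
+
+```python
+# Sketch: start Flower with the configured port and credentials.
+import os
+import subprocess
+
+subprocess.run(
+    [
+        "celery", "-A", "shoonya_backend.celery", "flower",
+        f"--port={os.environ.get('FLOWER_PORT', '5555')}",
+        f"--basic_auth={os.environ['FLOWER_USERNAME']}:{os.environ['FLOWER_PASSWORD']}",
+    ]
+)
+```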
+
+
+
+
+### 3. Nginx-Certbot
+
+
+This section contains Nginx and Certbot setup for serving HTTPS traffic.
+
+- Nginx
+ - Description: Nginx is a web server that can also be used as a reverse proxy, load balancer, mail proxy, and HTTP cache.
+ - Configuration: None
+
+- Certbot
+  - Description: Certbot is a free, open-source software tool for automatically using Let's Encrypt certificates on manually administered websites to enable HTTPS.
+ - Configuration: None
+
+
+
+
+
+### 4. Additional Services
+
+
+
+These additional services are not tied to any single configuration; they are used throughout the codebase. Some are global variables and some are standalone services.
+
+#### a) Google Application Credentials
+- **Description**: Google Application Credentials are used to authenticate and authorize applications to use Google Cloud APIs. They are a key part of Google Cloud's IAM (Identity and Access Management) system, and they allow the application to interact with Google's services securely.
+- Parameters:
+ - `type`
+ - `project_id`
+ - `private_key_id`
+ - `private_key`
+ - `client_email`
+ - `client_id`
+ - `auth_uri`
+ - `token_uri`
+ - `auth_provider_x509_cert_url`
+ - `client_x509_cert_url`
+ - `universe_domain`
+
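+These fields together form a standard service-account JSON key. A minimal sketch of pointing Google's client libraries at such a key (the file path is hypothetical):
+
+```python
+# Sketch: point Google Cloud client libraries at a service-account key file.
+import os
+
+# Hypothetical path; use the location of your downloaded JSON key.
+os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/gcloud-key.json"
+
+# Any Google Cloud client now picks the credentials up automatically, e.g.:
+# from google.cloud import logging
+# client = logging.Client()
+```
+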
+#### b) Ask_Dhruva
+- Description: Component for interacting with the Dhruva ASR service, which is used to convert spoken language into written text.
+- Parameters:
+  - `ASR_DHRUVA_URL`: URL to which speech-recognition requests are sent.
+ - `ASR_DHRUVA_AUTHORIZATION`: Authorization token for Dhruva ASR service.
+
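+A rough sketch of what a call to the service might look like (the payload shape is an assumption, not Dhruva's documented contract):
+
+```python
+# Hypothetical sketch: send an ASR request to the configured Dhruva endpoint.
+import os
+
+import requests
+
+response = requests.post(
+    os.environ["ASR_DHRUVA_URL"],
+    headers={"Authorization": os.environ["ASR_DHRUVA_AUTHORIZATION"]},
+    json={"audio": "<base64-encoded audio>", "language": "hi"},  # assumed payload
+)
+print(response.status_code)
+```
+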
+#### c) Indic_Trans_V2
+- Description: Component for interacting with Indic Trans V2 service.
+- Parameters:
+ - `INDIC_TRANS_V2_KEY`: API key for Indic Trans V2 service
+ - `INDIC_TRANS_V2_URL`: URL for Indic Trans V2 service
+
+#### d) Email Service
+- **Description**: The Email Service is used to send emails for a variety of purposes, such as notifications, password resets, confirmation emails, and reports.
+- Parameters:
+ - `EMAIL_HOST`
+ - `SMTP_USERNAME`
+ - `SMTP_PASSWORD`
+ - `DEFAULT_FROM_EMAIL`
+
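+In a Django project these typically map onto the standard e-mail settings; a sketch (the SMTP_* to EMAIL_* mapping is an assumption):
+
+```python
+# Sketch: wire the env vars into Django's e-mail settings (settings.py).
+import os
+
+EMAIL_HOST = os.environ.get("EMAIL_HOST", "")
+EMAIL_HOST_USER = os.environ.get("SMTP_USERNAME", "")
+EMAIL_HOST_PASSWORD = os.environ.get("SMTP_PASSWORD", "")
+DEFAULT_FROM_EMAIL = os.environ.get("DEFAULT_FROM_EMAIL", "")
+EMAIL_USE_TLS = True  # assumption; depends on your SMTP provider
+```
+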
+#### e) Logging
+- Description: Logging records events and actions that occur during the execution of the program. It is a crucial part of software development for debugging and monitoring purposes.
+
+- Parameters:
+  - `LOGGING`: A boolean value (either 'true' or 'false') that determines whether logging is enabled.
+  - `LOG_LEVEL`: Sets the level of logging. 'INFO' will log all INFO, WARNING, ERROR, and CRITICAL level logs.
+
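+A sketch of how these two variables could drive Python's standard logging module (the wiring is illustrative):
+
+```python
+# Sketch: enable logging at the configured level when LOGGING is 'true'.
+import logging
+import os
+
+if os.environ.get("LOGGING", "false").lower() == "true":
+    logging.basicConfig(level=getattr(logging, os.environ.get("LOG_LEVEL", "INFO")))
+```
+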
+#### f) MinIO
+- Description: MinIO is an open-source, high-performance, AWS S3 compatible object storage system. It is typically used in applications for storing unstructured data like photos, videos, log files, backups, and container/VM images.
+- Parameters:
+ - `MINIO_ACCESS_KEY`
+ - `MINIO_SECRET_KEY`
+ - `MINIO_ENDPOINT`
+
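+A minimal connection sketch using the `minio` Python client (the bucket name is hypothetical):
+
+```python
+# Sketch: connect to MinIO with the configured credentials and check a bucket.
+import os
+
+from minio import Minio
+
+client = Minio(
+    os.environ["MINIO_ENDPOINT"],  # e.g. "localhost:9000" (no scheme)
+    access_key=os.environ["MINIO_ACCESS_KEY"],
+    secret_key=os.environ["MINIO_SECRET_KEY"],
+    secure=False,  # assumption: set True when MinIO is served over TLS
+)
+print(client.bucket_exists("shoonya-media"))  # hypothetical bucket name
+```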
+
+
+## Setup Instructions
+
+The installation can be done in two ways, each with its own advantages:
+
+1. **Dockerized Installation**: This is the latest, easiest, and most hassle-free installation method. All services run in Docker containers and volumes, ensuring a consistent environment and simplifying setup.
+
+2. **Default Installation (Without Docker)**: This method involves running the services locally one by one (e.g. Celery and Redis). While it gives you the most control and visibility into each service, it is more complex and time-consuming, and asynchronous Celery tasks may not work out of the box in this setup.
+
+### 1. Dockerized Installation
+
+### Pre-requisites
+
+If you choose the Dockerized method, you need to install the following:
+
+- **Docker Engine/Docker Desktop running**
+  You may download Docker Desktop from the table below:
+
+ | Systems| Link |
+ | ---------- | ------ |
+ | Windows | https://docs.docker.com/desktop/install/windows-install/ |
+ | Ubuntu | https://docs.docker.com/desktop/install/ubuntu/ |
+ | Unix/Mac | https://docs.docker.com/desktop/install/mac-install/|
+- Python version 3.7 or above
+- An Azure account subscription
+- A Google Cloud subscription (for the services mentioned in the backend components above)
+
+
+### Running the Setup Script
+
+To run the setup script:
+1. Clone this repository to your local machine.
-### Create a Virtual Environment
+ ```bash
+ git clone "https://github.com/AI4Bharat/Shoonya-Backend"
+ ```
+2. Navigate to the root directory of the project and install the dependencies.
+ ```bash
+ cd Shoonya-Backend
+ pip install -r ./backend/deploy/requirements.txt
+ ```
+
+3. Run `python docker-setup.py`, making sure the Docker engine is running on your system.
+
+ ![installation image](public/image.png)
+
+4. Provide the details asked for in the prompts, and the script will automatically create and run the Docker containers, volumes, and processes.
+ ![installation image](public/image-2.png)
+
+5. Once all the prompts have been answered, the script creates the containers and volumes; the server starts on `http://localhost:8000`, and the respective services run on the ports you provided.
+
+### What the script does
+- Automatically creates a Docker network named `shoonya_backend`.
+- Prompts the user to choose whether to run the application in production mode.
+- Guides the user through setting up a PostgreSQL container if desired.
+- Allows selection of components and sets up Docker Compose files accordingly.
+- Manages environment variables in the `.env` file for each selected component.
+- Deploys Docker containers for selected components.
+- Provides feedback and error handling throughout the setup process.
+
+
+### 2. Default Installation (Without Docker)
+
+### Pre-requisites
+
+#### Create a Virtual Environment
We recommend you to create a virtual environment to install all the dependencies required for the project.
+For **Ubuntu/Mac**:
+
```bash
python3 -m venv venv
-source /bin/activate # this command may be different based on your OS
-
-# Install dependencies
-pip install -r deploy/requirements-dev.txt
+source venv/bin/activate
```
-
-### Environment file
-
-To set up the environment variables needed for the project, run the following lines:
+For **Windows**:
```bash
-cp .env.example ./backend/.env
+python -m venv venv
+venv\Scripts\activate
```
-This creates an `.env` file at the root of the project. It is needed to make sure that the project runs correctly. Please go through the file and set the parameters according to your installation.
+#### Running the Setup Script
-To create a new secret key, run the following commands (within the virtual environment):
-```bash
-# Open a Python shell
-python backend/manage.py shell
+ 1. Clone this repository to your local machine.
->> from django.core.management.utils import get_random_secret_key
->> get_random_secret_key()
-```
+ ```bash
+ git clone "https://github.com/AI4Bharat/Shoonya-Backend"
+ ```
+ 2. Navigate to the root directory of the project and install the dependencies.
+ ```bash
+ cd Shoonya-Backend
+ pip install -r ./backend/deploy/requirements.txt
+ ```
+
+ 3. To set up the environment variables needed for the project, run the following lines:
+
+ ```bash
+ cp .env.example ./backend/.env
+ ```
-Paste the value you get there into the `.env` file.
+ 4. Update the `./backend/.env` file with your own credentials.
-#### Google Cloud Logging (Optional)
+ 5. Run the following command to start the server; it will run on port `8000`:
+ ```bash
+ python ./backend/manage.py runserver
+ ```
+
+### Google Cloud Logging (Optional)
If Google Cloud Logging is being used, please follow these additional steps:
@@ -69,21 +323,6 @@ pip install google-cloud-logging
GOOGLE_APPLICATION_CREDENTIALS="/path/to/gcloud-key.json"
```
-### Docker Installation
-
-`cd` back to the root folder .Once inside, build the docker containers:
-
-```bash
-docker-compose -f docker-compose-local.yml build
-```
-
-To run the containers:
-
-```bash
-docker-compose -f docker-compose-local.yml up -d
-```
-
-To share the database with others, just share the postgres_data and the media folder with others.
### Run Migrations (required only for the first time running the project or if you make any changes in the models)
Run the following commands:
@@ -96,8 +335,9 @@ docker-compose exec web python manage.py migrate
# Create a superuser
docker-compose exec web python manage.py createsuperuser
+```
+
-```
If there were no errors, congratulations! The project is up and running.
@@ -150,7 +390,7 @@ celery command - celery -A shoonya_backend.celery worker -l info
celery command - celery -A shoonya_backend.celery beat --loglevel=info
```
-You can set use the celery to local by modifying CELERY_BROKER_URL = "redis://localhost:6379/0" in ./backend/shoonya_backend/settings.py.
+You can point Celery at a local Redis instance by setting `CELERY_BROKER_URL = "redis://localhost:6379/0"` in `./backend/shoonya_backend/settings.py`.
We can set the concurrency and autoscale in the process as well to manage the number of worker processes in the background. Read more [here](https://stackoverflow.com/a/72366865/9757174).
@@ -170,3 +410,4 @@ black ./backend/
```
Happy Coding!!
+
diff --git a/cron-docker-compose.yml b/cron-docker-compose.yml
new file mode 100644
index 000000000..a06202124
--- /dev/null
+++ b/cron-docker-compose.yml
@@ -0,0 +1,19 @@
+version: '3.3'
+
+services:
+ cron:
+ build: ./cron
+ image: evgeniy-khyst/cron
+ environment:
+ COMPOSE_PROJECT_NAME: "${COMPOSE_PROJECT_NAME}"
+ volumes:
+ - /var/run/docker.sock:/var/run/docker.sock
+ - ./:/workdir:ro
+ restart: unless-stopped
+ networks:
+ - shoonya_backend
+
+
+networks:
+ shoonya_backend:
+ external: true
\ No newline at end of file
diff --git a/default-docker-compose.yml b/default-docker-compose.yml
new file mode 100644
index 000000000..fe57908a2
--- /dev/null
+++ b/default-docker-compose.yml
@@ -0,0 +1,81 @@
+version: '3.3'
+
+services:
+ web:
+ build: ./backend
+ command: python manage.py runserver 0.0.0.0:8000
+ volumes:
+ - ./backend/:/usr/src/backend/
+ - static_volume:/usr/src/backend/static
+ ports:
+ - 8000:8000
+ depends_on:
+ - redis
+ networks:
+ - shoonya_backend
+ redis:
+ container_name: redis
+ image: "redis"
+ ports:
+ - 6379:6379
+ networks:
+ - shoonya_backend
+
+ celery:
+ container_name: celery-default
+ restart: always
+ build: ./backend
+ command: celery -A shoonya_backend.celery worker -Q default --concurrency=2 --loglevel=info
+ volumes:
+ - ./backend/:/usr/src/backend/
+ depends_on:
+ - redis
+ - web
+ networks:
+ - shoonya_backend
+
+  # This is the additional queue which contains the low-priority celery tasks. We can adjust the concurrency and workers allotted to this container.
+ celery2:
+ container_name: celery-low-priority
+ restart: always
+ build: ./backend
+ command: celery -A shoonya_backend.celery worker -Q functions --concurrency=2 --loglevel=info
+ volumes:
+ - ./backend/:/usr/src/backend/
+ depends_on:
+ - redis
+ - web
+ networks:
+ - shoonya_backend
+
+ # Celery beats - for scheduling daily e-mails
+ celery-beat:
+ build: ./backend
+ command: celery -A shoonya_backend.celery beat --loglevel=info
+ volumes:
+ - ./backend/:/usr/src/backend
+ depends_on:
+ - redis
+ - web
+ networks:
+ - shoonya_backend
+
+ celery3:
+ container_name: celery-reports
+ restart: always
+ build: ./backend
+ command: celery -A shoonya_backend.celery worker -Q reports --concurrency=2 --loglevel=info
+ volumes:
+ - ./backend/:/usr/src/backend/
+ depends_on:
+ - redis
+ - web
+ networks:
+ - shoonya_backend
+
+volumes:
+ static_volume:
+
+networks:
+ shoonya_backend:
+ external: true
\ No newline at end of file
diff --git a/default-setup.py b/default-setup.py
new file mode 100644
index 000000000..9edd27ce3
--- /dev/null
+++ b/default-setup.py
@@ -0,0 +1,110 @@
+import os
+import subprocess
+import sys
+import time
+import shutil
+import string
+import random
+
+
+def generate_secret_key(length=50):
+ """
+ Generate a random Django secret key.
+ """
+ characters = string.ascii_letters + string.digits + "!@#$%^&*(-_=+)"
+ return "".join(random.choice(characters) for _ in range(length))
+
+
+def install_dependencies():
+    # Create a virtual environment using the current interpreter
+    subprocess.run([sys.executable, "-m", "venv", "venv"])
+    # Use the virtual environment's own pip; "activating" a venv in a
+    # child process would not carry over to this script.
+    if sys.platform == "win32":
+        pip_path = os.path.join("venv", "Scripts", "pip")
+    else:
+        pip_path = os.path.join("venv", "bin", "pip")
+
+    # Install dependencies into the virtual environment
+    subprocess.run([pip_path, "install", "-r", "./backend/deploy/requirements.txt"])
+
+
+def setup_env_file():
+ # Copy .env.example to .env
+ shutil.copy(".env.example", "./backend/.env")
+
+ # Generate a new secret key
+ new_secret_key = generate_secret_key()
+ # Update .env with the new secret key
+ with open("./backend/.env", "a") as env_file:
+ env_file.write(f"\nSECRET_KEY='{new_secret_key}'\n")
+ print("New secret key has been generated and updated in .env")
+
+
+def run_celery_instances():
+    # Use the celery executable from the virtual environment directly;
+    # running the activate script in a subprocess would not affect this process.
+    if sys.platform == "win32":
+        celery_path = os.path.abspath(os.path.join("venv", "Scripts", "celery"))
+    else:
+        celery_path = os.path.abspath(os.path.join("venv", "bin", "celery"))
+
+    os.chdir("backend")
+
+    # Start a Celery worker; its output streams to this console
+    celery_worker_process = subprocess.Popen(
+        [
+            celery_path,
+            "-A",
+            "shoonya_backend.celery",
+            "worker",
+            "--concurrency=2",
+            "--loglevel=info",
+        ]
+    )
+    # Start Celery beat
+    celery_beat_process = subprocess.Popen(
+        [celery_path, "-A", "shoonya_backend.celery", "beat", "--loglevel=info"]
+    )
+
+    print(f"Celery worker started (PID {celery_worker_process.pid})")
+    print(f"Celery beat started (PID {celery_beat_process.pid})")
+
+    # Return to the repository root so later steps can use relative paths
+    os.chdir("..")
+
+
+def start_django_server():
+    # Use the Python interpreter from the virtual environment directly.
+    if sys.platform == "win32":
+        python_path = os.path.join("venv", "Scripts", "python")
+    else:
+        python_path = os.path.join("venv", "bin", "python")
+
+    # Start Django server
+    print("Starting Django server...")
+    subprocess.Popen([python_path, "./backend/manage.py", "runserver"])
+
+
+def main():
+ install_dependencies()
+ setup_env_file()
+ run_celery_instances()
+ start_django_server()
+
+
+if __name__ == "__main__":
+ main()
diff --git a/docker-setup.py b/docker-setup.py
new file mode 100644
index 000000000..75edcac5b
--- /dev/null
+++ b/docker-setup.py
@@ -0,0 +1,401 @@
+import click
+import json
+import subprocess
+
+component_mapping = {
+ "Default Setup": {
+ "file": "default-docker-compose.yml",
+ "description": "Required for the application to work. Contains a docker deployment of Django, Celery, and Redis",
+ "parameters": {
+ "DB_NAME": {
+ "help": "If you set up a PG installation, leave these to default",
+ "default": "postgres",
+ "warning": "Please provide a valid database name",
+ },
+ "DB_USER": {
+ "help": "If you set up a PG installation, leave these to default",
+ "default": "postgres",
+ "warning": "Please provide a valid database user",
+ },
+ "DB_PASSWORD": {
+ "help": "If you set up a PG installation, leave these to default",
+ "default": "postgres",
+ "warning": "Please provide a valid database password",
+ },
+ "DB_HOST": {
+ "help": "If you set up a PG installation, leave these to default",
+ "default": "db",
+ "warning": "Please provide a valid database host",
+ },
+ "REDIS_HOST": {
+ "help": "If you are using the inbuilt redis as broker, leave these to default",
+ "default": "redis",
+ "warning": "Please provide a valid redis host",
+ },
+ "REDIS_PORT": {
+ "help": "If you are using the inbuilt redis as broker, leave these to default",
+ "default": "6379",
+ "warning": "Please provide a valid redis port",
+ },
+ "CELERY_BROKER_URL": {
+ "help": "Broker for celery tasks",
+ "default": "redis://redis:6379/0",
+ "warning": "",
+ },
+ "SECRET_KEY": {
+ "help": "Django secret key",
+ "default": "abcd1234",
+ "warning": "Please provide a valid secret key",
+ },
+ },
+ },
+ "Nginx_Certbot": {
+ "description": "Nginx and Certbot setup for serving HTTPS traffic",
+ "parameters": {
+ "DOMAINS": {
+ "help": "Domains for which the certificates will be obtained",
+ "default": "",
+ "warning": "Please provide valid domains",
+ },
+ "CERTBOT_TEST_CERT": {
+ "help": "Whether to use Certbot's test certificate",
+ "default": "0",
+ "warning": "",
+ },
+ "CERTBOT_RSA_KEY_SIZE": {
+ "help": "Size of RSA key for the certificate",
+ "default": "4096",
+ "warning": "",
+ },
+ },
+ },
+ "Elasticsearch-Logstash-Kibana": {
+ "file": "elk-docker-compose.yml",
+ "description": "ELK stack for logging monitoring etc",
+ "parameters": {
+ "ELASTICSEARCH_URL": {
+ "help": "url for elasticsearch",
+ "default": "elasticsearch:9200",
+ "warning": "Please provide a valid elasticsearch endpoint",
+ },
+ "INDEX_NAME": {
+ "help": "",
+ "default": "",
+ "warning": "",
+ },
+ },
+ },
+ "Flower": {
+ "file": "flower-docker-compose.yml",
+ "description": "Flower for monitoring celery tasks",
+ "parameters": {
+ "FLOWER_ADDRESS": {
+ "help": "Address for accessing Flower",
+ "default": "flower",
+ "warning": "Please provide a valid Flower address",
+ },
+ "FLOWER_PORT": {
+ "help": "Port for accessing Flower",
+ "default": "5555",
+ "warning": "Please provide a valid port number for Flower",
+ },
+ "FLOWER_USERNAME": {
+ "help": "Username for Flower",
+ "default": "shoonya",
+ "warning": "Please provide a valid username for Flower",
+ },
+ "FLOWER_PASSWORD": {
+ "help": "Password for Flower",
+ "default": "flower123",
+ "warning": "Please provide a valid password for Flower",
+ },
+ },
+ },
+ "Google_Application_Credentials": {
+ "description": "Google Application Credentials for accessing Google APIs",
+ "parameters": {
+ "type": {
+ "help": "Type of service account",
+ "default": "",
+ "warning": "Please provide a valid type",
+ },
+ "project_id": {
+ "help": "Project ID",
+ "default": "",
+ "warning": "Please provide a valid project ID",
+ },
+ "private_key_id": {
+ "help": "Private key ID",
+ "default": "",
+ "warning": "Please provide a valid private key ID",
+ },
+ "private_key": {
+ "help": "Private key",
+ "default": "",
+ "warning": "Please provide a valid private key",
+ },
+ "client_email": {
+ "help": "Client email",
+ "default": "",
+ "warning": "Please provide a valid client email",
+ },
+ "client_id": {
+ "help": "Client ID",
+ "default": "",
+ "warning": "Please provide a valid client ID",
+ },
+ "auth_uri": {
+ "help": "Authorization URI",
+ "default": "",
+ "warning": "Please provide a valid authorization URI",
+ },
+ "token_uri": {
+ "help": "Token URI",
+ "default": "",
+ "warning": "Please provide a valid token URI",
+ },
+ "auth_provider_x509_cert_url": {
+ "help": "Auth provider X.509 certificate URL",
+ "default": "",
+ "warning": "Please provide a valid Auth provider X.509 certificate URL",
+ },
+ "client_x509_cert_url": {
+ "help": "Client X.509 certificate URL",
+ "default": "",
+ "warning": "Please provide a valid Client X.509 certificate URL",
+ },
+ "universe_domain": {
+ "help": "Universe domain",
+ "default": "",
+ "warning": "Please provide a valid universe domain",
+ },
+ },
+ },
+ "Ask_Dhruva": {
+ "description": "Component for interacting with Dhruva ASR service",
+ "parameters": {
+ "ASR_DHRUVA_URL": {
+ "help": "URL for Dhruva ASR service",
+ "default": "",
+ "warning": "Please provide a valid Dhruva ASR service URL",
+ },
+ "ASR_DHRUVA_AUTHORIZATION": {
+ "help": "Authorization token for Dhruva ASR service",
+ "default": "",
+ "warning": "Please provide a valid authorization token for Dhruva ASR service",
+ },
+ },
+ },
+ "Indic_Trans_V2": {
+ "description": "Component for interacting with Indic Trans V2 service",
+ "parameters": {
+ "INDIC_TRANS_V2_KEY": {
+ "help": "API key for Indic Trans V2 service",
+ "default": "",
+ "warning": "Please provide a valid API key for Indic Trans V2 service",
+ },
+ "INDIC_TRANS_V2_URL": {
+ "help": "URL for Indic Trans V2 service",
+ "default": "",
+ "warning": "Please provide a valid URL for Indic Trans V2 service",
+ },
+ },
+ },
+    "Email Service": {
+        "description": "SMTP settings used by the application to send e-mails",
+        "parameters": {
+            "EMAIL_HOST": {
+                "help": "Hostname of the SMTP server",
+                "default": "",
+                "warning": "Please provide a valid SMTP host",
+            },
+            "SMTP_USERNAME": {
+                "help": "Username for the SMTP server",
+                "default": "",
+                "warning": "Please provide a valid SMTP username",
+            },
+            "SMTP_PASSWORD": {
+                "help": "Password for the SMTP server",
+                "default": "",
+                "warning": "Please provide a valid SMTP password",
+            },
+            "DEFAULT_FROM_EMAIL": {
+                "help": "Default From address for outgoing e-mails",
+                "default": "",
+                "warning": "Please provide a valid sender address",
+            },
+        },
+    },
+    "Logging": {
+        "description": "Controls whether and how the application writes logs",
+        "parameters": {
+            "LOGGING": {
+                "help": "Whether logging is enabled ('true' or 'false')",
+                "default": "true",
+                "warning": "Please provide either 'true' or 'false'",
+            },
+            "LOG_LEVEL": {
+                "help": "Log level; 'INFO' logs INFO, WARNING, ERROR and CRITICAL",
+                "default": "INFO",
+                "warning": "Please provide a valid log level",
+            },
+        },
+    },
+    "MINIO": {
+        "description": "MinIO object storage settings",
+        "parameters": {
+            "MINIO_ACCESS_KEY": {
+                "help": "Access key for MinIO",
+                "default": "",
+                "warning": "Please provide a valid MinIO access key",
+            },
+            "MINIO_SECRET_KEY": {
+                "help": "Secret key for MinIO",
+                "default": "",
+                "warning": "Please provide a valid MinIO secret key",
+            },
+            "MINIO_ENDPOINT": {
+                "help": "Endpoint for MinIO, e.g. host:port",
+                "default": "",
+                "warning": "Please provide a valid MinIO endpoint",
+            },
+        },
+    },
+}
+
+environment = {
+ "ENV": {
+ "default": "dev",
+ "help": "The environment in which the application is running. PROD : Production, DEV : Development",
+ },
+ "AZURE_CONNECTION_STRING": {
+ "help": "AZURE storage string",
+        "default": "DefaultEndpointsProtocol=https;AccountName=dummydeveloper;AccountKey=hello/Jm+uq4gvGgd5aloGrqVxYnRs/dgPHX0G6U4XmLCtZCIeKyNNK0n3Q9oRDNE+AStMDbqXg==;EndpointSuffix=core.windows.net",
+ },
+ "LOGS_CONTAINER_NAME": {
+ "help": "Logs container name",
+ "default": "logs",
+ },
+}
+
+
+def echo_error(error_message):
+ click.secho(error_message, fg="red", bold=True)
+ exit(1)
+
+
+def echo_success(success_message):
+ click.secho(success_message, fg="green", bold=True)
+
+
+def echo_warning(warning_message):
+ click.secho(warning_message, fg="yellow", bold=True)
+
+
+@click.command()
+def run_application():
+ echo_success("Welcome to the application setup CLI!")
+
+ try:
+ subprocess.run(["docker", "network", "create", "shoonya_backend"], check=True)
+ echo_success("Network created with the name shoonya_backend")
+ except subprocess.CalledProcessError:
+        echo_warning("Network already exists with the name shoonya_backend. Skipping creation.")
+
+ selected_components = []
+ docker_compose_files = []
+ parameters_dict = {}
+
+ try:
+ production = click.prompt(
+ "Do you want to run the application in production mode? (Y/N)", default="N"
+ )
+ if production.upper() == "N":
+ click.echo("Running in development mode")
+            parameters_dict["ENVIRONMENT"] = {"ENV": "dev"}
+ # Ask user if they want PostgreSQL installation
+ install_postgres = click.prompt(
+ "Do you want to include PostgreSQL installation? (Y/N)", default="N"
+ )
+ if install_postgres.upper() == "Y":
+ subprocess.run(
+ [
+ "docker-compose",
+ "-f",
+ "postgres-docker-compose.yml",
+ "up",
+ "--build",
+ "-d",
+ ],
+ check=True,
+ )
+
+ for key, value in component_mapping.items():
+ choice = click.prompt(
+ f"Do you want to include {key}? ({value['description']}) (Y/N)",
+ default="N",
+ )
+ if choice.upper() == "Y":
+ selected_components.append(key)
+                # Append the component's compose file only if it defines one
+ if "file" in value:
+ docker_compose_files.append(value["file"])
+
+ parameters = value.get("parameters")
+ if parameters:
+ click.echo(f"Please provide values for parameters for {key}:")
+ component_params = {}
+                for param, details in parameters.items():
+                    help_message = details.get("help", "")
+                    default_value = details.get("default", "")
+                    warning = details.get("warning", "")
+                    # A distinct name avoids shadowing the outer loop's `value`
+                    param_value = click.prompt(
+                        f"Enter value for {param} ({help_message})",
+                        default=default_value,
+                    )
+
+                    if not param_value:
+                        param_value = default_value
+                    component_params[param] = param_value
+ parameters_dict[key] = component_params
+ if parameters_dict:
+ with open("backend/.env", "w") as env_file:
+ for component, params in parameters_dict.items():
+ for param, value in params.items():
+ env_file.write(f"{param}='{value}'\n")
+
+ if docker_compose_files:
+ click.echo("Running Docker Compose...")
+ for file in docker_compose_files:
+ subprocess.run(
+ ["docker-compose", "-f", file, "up", "--build", "-d"], check=True
+ )
+
+ # Run docker-compose logs -f for each file
+ for file in docker_compose_files:
+ subprocess.run(["docker-compose", "-f", file, "logs", "-f"], check=True)
+
+ echo_success("Application setup complete!")
+ subprocess.run(["docker", "ps"], check=True)
+ else:
+ echo_error("No components selected. Exiting.")
+
+ except Exception as e:
+ print(f"An error occurred: {e}")
+        # echo_error() exits the program immediately, so warn first and
+        # bring the containers down before the final error message.
+        echo_warning("An error occurred. Stopping all running containers...")
+        if docker_compose_files:
+            for file in docker_compose_files:
+                subprocess.run(["docker-compose", "-f", file, "down"], check=True)
+        echo_error("Setup aborted. Exiting.")
+
+
+if __name__ == "__main__":
+ run_application()
diff --git a/elk-docker-compose.yml b/elk-docker-compose.yml
new file mode 100644
index 000000000..68bb52fcf
--- /dev/null
+++ b/elk-docker-compose.yml
@@ -0,0 +1,41 @@
+version: '3.3'
+
+services:
+ elasticsearch:
+ container_name: elasticsearch
+ image: docker.elastic.co/elasticsearch/elasticsearch:7.14.0
+ volumes:
+ - ./elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
+ - elasticsearch_vol:/elasticsearch_data
+ environment:
+ - discovery.type=single-node
+ ports:
+ - "9200:9200"
+      - "9300:9300"
+    networks:
+      - shoonya_backend
+
+ kibana:
+ container_name: kibana
+ image: docker.elastic.co/kibana/kibana:7.14.0
+ ports:
+ - 5601:5601
+ depends_on:
+      - elasticsearch
+    networks:
+      - shoonya_backend
+
+ logstash:
+ container_name: logstash
+ image: docker.elastic.co/logstash/logstash:7.14.0
+ hostname: shoonya_dev_logger
+ volumes:
+ - ./logstash_dev.conf:/usr/share/logstash/pipeline/logstash.conf
+ - logs_vol:/logs
+ extra_hosts:
+ - "elasticsearch:elasticsearch"
+    command: logstash -f /usr/share/logstash/pipeline/logstash.conf
+    networks:
+      - shoonya_backend
+
+volumes:
+ elasticsearch_vol:
+ logs_vol:
+
+networks:
+ shoonya_backend:
+ external: true
\ No newline at end of file
diff --git a/nginx-docker-compose.yml b/nginx-docker-compose.yml
new file mode 100644
index 000000000..cb4be6d4b
--- /dev/null
+++ b/nginx-docker-compose.yml
@@ -0,0 +1,44 @@
+version: '3.3'
+
+services:
+ nginx:
+ build: ./nginx
+ image: evgeniy-khyst/nginx
+ env_file:
+ - ./config.env
+ volumes:
+ - nginx_conf:/etc/nginx/sites
+ - letsencrypt_certs:/etc/letsencrypt
+ - certbot_acme_challenge:/var/www/certbot
+ - ./vhosts:/etc/nginx/vhosts
+ - static_volume:/backend/static
+ ports:
+ - "80:80"
+ - "443:443"
+ restart: unless-stopped
+ networks:
+ - shoonya_backend
+
+ certbot:
+ build: ./certbot
+ image: evgeniy-khyst/certbot
+ env_file:
+ - ./config.env
+ volumes:
+ - letsencrypt_certs:/etc/letsencrypt
+ - certbot_acme_challenge:/var/www/certbot
+ networks:
+ - shoonya_backend
+
+volumes:
+ nginx_conf:
+ external: true
+ letsencrypt_certs:
+ external: true
+ certbot_acme_challenge:
+ static_volume:
+
+
+networks:
+ shoonya_backend:
+ external: true
\ No newline at end of file
diff --git a/oldReadme.md b/oldReadme.md
new file mode 100644
index 000000000..4889df792
--- /dev/null
+++ b/oldReadme.md
@@ -0,0 +1,226 @@
+# Shoonya Backend
+
+Repository for Shoonya's backend.
+
+## Pre-requisites
+
+The project was created using [Python 3.7](https://www.python.org/downloads/). All major dependencies along with the versions are listed in the `backend/deploy/requirements.txt` file.
+
+## Installation
+
+The installation and setup instructions have been tested on the following platforms:
+
+It can be done in 2 ways:-
+1) using docker by running all services in containers or
+2) directly running the services on local one by one (like, celery and redis).
+
+If your preference is 1 over 2 please be fixed to this environment and follow the steps below it:
+
+- Docker
+- Ubuntu 20.04 OR macOs
+
+If you are using a different operating system, you will have to look at external resources (eg. StackOverflow) to correct any errors.
+
+### Create a Virtual Environment
+
+We recommend you to create a virtual environment to install all the dependencies required for the project.
+
+```bash
+python3 -m venv
+source /bin/activate # this command may be different based on your OS
+
+# Install dependencies
+pip install -r deploy/requirements-dev.txt
+```
+
+### Environment file
+
+To set up the environment variables needed for the project, run the following lines:
+```bash
+cp .env.example ./backend/.env
+```
+
+This creates an `.env` file at the root of the project. It is needed to make sure that the project runs correctly. Please go through the file and set the parameters according to your installation.
+
+To create a new secret key, run the following commands (within the virtual environment):
+```bash
+# Open a Python shell
+python backend/manage.py shell
+
+>> from django.core.management.utils import get_random_secret_key
+>> get_random_secret_key()
+```
+
+Paste the value you get there into the `.env` file.
+
+#### Google Cloud Logging (Optional)
+
+If Google Cloud Logging is being used, please follow these additional steps:
+
+1. Install the `google-cloud-logging` library using the following command:
+```bash
+pip install google-cloud-logging
+```
+2. Follow the steps to create a Service Account from the following [Google Cloud Documentation Page](https://cloud.google.com/docs/authentication/production#create_service_account). This will create a Service Account and generate a JSON Key for the Service Account.
+3. Ensure that atleast the Project Logs Writer role (`roles/logging.logWriter`) is assigned to the created Service Account.
+4. Add the `GOOGLE_APPLICATION_CREDENTIALS` variable to the `.env` file. This value of this variable should be the path to the JSON Key generated in Step 2. For example,
+
+```bash
+GOOGLE_APPLICATION_CREDENTIALS="/path/to/gcloud-key.json"
+```
+
+### Docker Installation
+
+
+```markdown
+# Application Setup Guide
+
+This guide will walk you through setting up the application using different methods and components.
+
+## Running the Script
+
+To run the setup script, follow these steps:
+
+1. Clone this repository to your local machine.
+2. Navigate to the repository directory.
+3. Run the following command:
+
+ ```bash
+ python setup.py
+ ```
+
+4. Follow the prompts to select components and provide required parameters.
+
+## Components
+
+### Default Setup
+
+- **Description:** Required for the application to work. Contains a docker deployment of Django, Celery, and Redis.
+
+ **Parameters:**
+ - `DB_NAME`
+ - **Description:** Database name.
+ - **Default:** `postgres`
+ - `DB_USER`
+ - **Description:** Database user.
+ - **Default:** `postgres`
+ - `DB_PASSWORD`
+ - **Description:** Database password.
+ - **Default:** `postgres`
+ - `DB_HOST`
+ - **Description:** Database host.
+ - **Default:** `db`
+ - `SECRET_KEY`
+ - **Description:** Django secret key.
+ - **Default:** `abcd1234`
+ - `AZURE_CONNECTION_STRING`
+ - **Description:** Azure storage string.
+ - **Default:** `AZURE_CONNECTION_STRING=DefaultEndpointsProtocol=https;AccountName=dummydeveloper;AccountKey=hello/Jm+uq4gvGgd5aloGrqVxYnRs/dgPHX0G6U4XmLCtZCIeKyNNK0n3Q9oRDNE+AStMDbqXg==;EndpointSuffix=core.windows.net`
+ - `LOGS_CONTAINER_NAME`
+ - **Description:** Logs container name.
+ - **Default:** `logs`
+
+
+
+
+
+
+
+
+If there were no errors, congratulations! The project is up and running.
+
+If your preference is 2 over 1 please be fixed to this environment and follow the steps below it:
+
+- Ubuntu 20.04 OR macOs
+
+You can run the following script and each and every step for setting the codebase will be done directly. Please move to a folder in your local where you would like to store the code and run the script given below there:
+```bash
+os=$(uname)
+
+if [ "$os" = "Linux" ] || [ "$os" = "Darwin" ]; then
+
+git clone https://github.com/AI4Bharat/Shoonya-Backend.git
+cd Shoonya-Backend
+git checkout dev
+git pull origin dev
+cp .env.example ./backend/.env
+cd backend
+python3 -m venv venv
+source venv/bin/activate
+
+pip install -r ./deploy/requirements.txt
+
+new_secret_key=$(python3 -c "from django.core.management.utils import get_random_secret_key; print(get_random_secret_key())")
+
+env_file=".env"
+if sed --version 2>&1 | grep -q 'GNU sed'; then
+ sed -i "/^SECRET_KEY=/d" "$env_file"
+else
+ sed -i.bak "/^SECRET_KEY=/d" "$env_file"
+ rm -f "$env_file.bak"
+fi
+
+echo "SECRET_KEY='$new_secret_key'" >> "$env_file"
+
+echo "New secret key has been generated and updated in $env_file"
+
+else
+ echo "Cannot run this script on: $os"
+fi
+ ```
+
+### Running background tasks
+Please install and run redis from https://redis.io/download/ on port 6379 before starting celery.
+
+To run background tasks for project creation, we need to run the following command in the terminal. This has also been added into the `docker-compose.yml` file.
+```bash
+celery command - celery -A shoonya_backend.celery worker -l info
+celery command - celery -A shoonya_backend.celery beat --loglevel=info
+```
+
+You can set use the celery to local by modifying CELERY_BROKER_URL = "redis://localhost:6379/0" in ./backend/shoonya_backend/settings.py.
+
+We can set the concurrency and autoscale in the process as well to manage the number of worker processes in the background. Read more [here](https://stackoverflow.com/a/72366865/9757174).
+
+The commands will be as follows
+```bash
+celery -A shoonya_backend.celery worker --concurrency=2 --loglevel=info
+celery -A shoonya_backend.celery worker --autoscale=10,3 --loglevel=info
+```
+
+### Running Linters
+
+In case you want to raise a PR, kindly run linters as specified below. You can install black by running pip install black and use `black`
+To run `black` do:
+
+```bash
+black ./backend/
+```
+
+Happy Coding!!
diff --git a/postgres-docker-compose.yml b/postgres-docker-compose.yml
new file mode 100644
index 000000000..dde19b29b
--- /dev/null
+++ b/postgres-docker-compose.yml
@@ -0,0 +1,27 @@
+version: '3.9'
+
+services:
+
+ db:
+ image: postgres
+ restart: always
+ # set shared memory limit when using docker-compose
+ shm_size: 128mb
+ # or set shared memory limit when deploy via swarm stack
+ volumes:
+ - type: tmpfs
+ target: /dev/shm
+ tmpfs:
+ size: 134217728 # 128*2^20 bytes = 128Mb
+ environment:
+ POSTGRES_PASSWORD: postgres
+ POSTGRES_USER: postgres
+ POSTGRES_DB: postgres
+ ports:
+ - 5432:5432
+ networks:
+ - shoonya_backend
+
+networks:
+ shoonya_backend:
+ external: true
\ No newline at end of file
diff --git a/public/image-2.png b/public/image-2.png
new file mode 100644
index 000000000..644d35d9a
Binary files /dev/null and b/public/image-2.png differ
diff --git a/public/image.png b/public/image.png
new file mode 100644
index 000000000..78b8c5378
Binary files /dev/null and b/public/image.png differ