Backend service for the Walk a Story application. It manages users, videos, comments, ratings, and walks. In addition, it uses Google Cloud Storage as a file store to simplify the handling of large files.
- Install the dependencies from requirements.txt.
- Run main.py (see the example below).
- Edit config.py if necessary.
- Required environment variable at run time:
  - GOOGLE_APPLICATION_CREDENTIALS={{PATH/API-cloud.json}}
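A minimal local run could look as follows; the credential path is only a placeholder for your own service-account file, and the exact commands are a sketch:

$ pip install -r requirements.txt
$ export GOOGLE_APPLICATION_CREDENTIALS=/path/to/API-cloud.json
$ python main.py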
Config file with all the default environment variables.
Environment variables that can be overridden in the compose file:
- GOOGLE_APPLICATION_CREDENTIALS: Absolute path to the Google Cloud JSON credential file.
- REST_URL: Internal URL of the service.
- REST_PORT: Internal port of the service.
- BUCKET_NAME: Google Cloud Storage bucket to use:
  - bememories (testing)
  - co-crew (production; it should be emptied first to avoid conflicts)
- DEBUG_FRONTEND: If True, a simple frontend is served so you can check that the service works.
- SECURE_API: If True, cookie-based login is required between the frontend and the backend.
- BASE_PATH: Main directory inside the selected bucket.
  - Example: gs://{{BUCKET_NAME}}/{{BASE_PATH}}/{{place_id}}/
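As a rough sketch of how these variables could be read (the real defaults live in config.py and may differ):

```python
# Sketch only: the real defaults live in config.py and may differ.
import os

GOOGLE_APPLICATION_CREDENTIALS = os.environ.get("GOOGLE_APPLICATION_CREDENTIALS", "")
REST_URL = os.environ.get("REST_URL", "0.0.0.0")            # assumed default
REST_PORT = int(os.environ.get("REST_PORT", "5000"))        # assumed default
BUCKET_NAME = os.environ.get("BUCKET_NAME", "bememories")   # testing bucket by default
DEBUG_FRONTEND = os.environ.get("DEBUG_FRONTEND", "False") == "True"
SECURE_API = os.environ.get("SECURE_API", "False") == "True"
BASE_PATH = os.environ.get("BASE_PATH", "")

def object_prefix(place_id):
    """Object prefix inside the bucket: gs://BUCKET_NAME/BASE_PATH/place_id/."""
    return f"{BASE_PATH}/{place_id}/" if BASE_PATH else f"{place_id}/"
```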
The co-crew bucket is already configured with an event trigger that fires on changes. This is used to detect when the Video Analyzer has generated the automatic analysis for an uploaded video.
Documentation and code of the Google Cloud Function are available here.
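The actual function lives in the linked repository; as a minimal sketch, assuming a background Cloud Function bound to google.storage.object.finalize events on the bucket and a hypothetical /notifications endpoint on the backend, it could look like this:

```python
# Sketch only: the real function lives in the repository linked above.
# Assumes a background Cloud Function bound to google.storage.object.finalize
# events, plus a hypothetical /notifications endpoint on the backend
# (BACKEND_URL is an assumed variable).
import os
import requests

BACKEND_URL = os.environ.get("BACKEND_URL", "http://backend:5000")

def notify_new_file(event, context):
    """Entry point invoked by Cloud Storage when a new object is finalized."""
    payload = {
        "bucket": event["bucket"],
        "name": event["name"],  # e.g. BASE_PATH/place_id/analysis.json
    }
    # Tell the backend that the Video Analyzer output for this video is ready.
    requests.post(f"{BACKEND_URL}/notifications", json=payload, timeout=10)
```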
$ docker build -t "registry.hopu.eu/bememories-record/backend:0.5.1" .
$ docker push registry.hopu.eu/bememories-record/backend:0.5.1
Swarm compose file.
- Configs:
- google-cloud-json (credential file, here)
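The config has to exist on the swarm before the stack is deployed; assuming the credential file is available locally and the stack is called backend (both placeholders), something like this would register it:

$ docker config create google-cloud-json /path/to/API-cloud.json
$ docker stack deploy -c docker-compose.yml backend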
Postman collection with all the calls to the REST API.
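For a quick smoke test without Postman, any endpoint can be called with curl; the /videos path below is purely illustrative and may not match the real routes:

$ curl http://{{REST_URL}}:{{REST_PORT}}/videos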
- None
- https://cloud.google.com/storage/docs/authentication
- https://cloud.google.com/storage/docs/uploading-objects
- https://www.codementor.io/@sheena/understanding-sqlalchemy-cheat-sheet-du107lawl
- https://pypi.org/project/Flask-Cors/1.10.3/
- https://aukera.es/blog/tracking-video-html5-gtm/
Documentation here
Documentation here
Notifies about new files in Google Cloud Storage. Documentation here
- None