Distributed Multi-bitrate Video Transcoding

Distributed Multi-bitrate Video Transcoding on CentOS / Ubuntu / SUSE / Red Hat (Bash scripts)

Multi-bitrate video processing requires a lot of computing power and time to process a full movie. Several open-source video transcoding and processing tools are freely available on Linux, such as libav-tools, ffmpeg, mencoder, and HandBrake. However, none of these tools makes PARALLEL computing easy.

After some research, I found an excellent solution designed by Dustin Kirkland, based on Ubuntu Juju and avconv. However, our requirement was a little different from Dustin's: we needed to convert a single video into multiple bitrates and formats such as 3GP and FLV, and upload the results to one or more CDNs (such as Akamai or Tata). We also wanted to build the solution on top of CentOS and ffmpeg. So I decided to develop a simple, scalable, parallel, multi-bitrate video transcoding system myself. Here is my solution.

The algorithm is the same as in Dustin's solution, with some changes:

  1. Upload the file to FTP. After a successful upload, CallUploadScript (a pure-ftpd feature) invokes a script that:
    • syncs the file to all nodes (SSH encryption is disabled to speed up the transfer)
    • divides the video duration by the number of available nodes and adds each segment's start time and length to a MySQL queue table
    • records the video's duration, file path, and filename, plus the number of nodes available for transcoding, in MySQL
  2. Transcode nodes pick jobs from the queue.
  3. Each node processes its segment of the video and raises a flag when done.
  4. The master nodes wait for all of the done flags; then any master worker picks up the job of concatenating the results.
  5. The converted files are uploaded to the different CDNs.

Fault Tolerance

To make this process tolerant of node failures, I have written a small script, checkNodeFailed.sh, which checks for failed nodes and tries to reassign their jobs to other nodes. It needs to run from cron every minute.
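A minimal sketch of the failure-check idea follows. The real checkNodeFailed.sh may differ; the table, column, and heartbeat names here are assumptions.

```shell
#!/bin/bash
# Sketch: requeue jobs whose node has stopped reporting. Once a row is
# 'pending' again, any healthy node's worker loop can pick it up, which
# is how reassignment happens.
DB_IP="192.168.1.10"   # assumed master DB address; match your DB_IP setting
TIMEOUT_MIN=2          # minutes of silence before a node counts as failed

SQL="UPDATE queue SET node_id = NULL, status = 'pending' \
WHERE status = 'processing' \
AND last_heartbeat < NOW() - INTERVAL $TIMEOUT_MIN MINUTE;"

# Run against the master database, e.g. from the every-minute cron:
#   mysql -h "$DB_IP" -u transcode -ptranscode transcoding -e "$SQL"
echo "$SQL"
```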

Pre-requisites:

  1. bc
  2. nproc
  3. ffmpeg
  4. mysql
  5. mysql-server (master node only)
  6. mplayer
  7. rsync
  8. Passwordless SSH login
  9. NFS server and client
  10. supervisord
  11. ffprobe

Installation:

  1. Install ffmpeg (ffprobe ships with it)

  2. Download all scripts (.sh files) and copy them to the /srv directory

  3. Change their file permissions to 755

  4. Install Pure-FTPd and set the CallUploadScript directive to yes in /etc/pure-ftpd.conf

  5. Create test user for FTP and set password

    # useradd -m ftptest; passwd ftptest

  6. Run the commands below to modify the pure-ftpd init script

    # sed -i 's#start() {#start() {\n\t/usr/sbin/pure-uploadscript -B -r /srv/CallUpload.sh#g' /etc/init.d/pure-ftpd

    # sed -i 's#stop() {#stop() {\n\tkillall -9 pure-uploadscript#g' /etc/init.d/pure-ftpd

  7. Restart the pure-ftpd service

  8. Make sure to change the database IP (the DB_IP variable) in all three scripts
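For example, the DB_IP assignment can be changed in one pass with sed. The snippet demonstrates the edit on a scratch copy; point the same sed at the real /srv/*.sh files with your actual database address.

```shell
# Demonstration on a scratch copy; run the same sed against /srv/*.sh.
# The address 192.168.1.10 is a placeholder for your master DB.
mkdir -p /tmp/srv-demo
printf 'DB_IP="127.0.0.1"\n' > /tmp/srv-demo/example.sh

sed -i 's/^DB_IP=.*/DB_IP="192.168.1.10"/' /tmp/srv-demo/example.sh
```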

  9. Install mysql-server and import the SQL file 'transcoding.sql'. Create a 'transcode' user with the password the same as the username. Make sure the user can connect from all of the worker nodes.
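The user-creation step might look like the following; the exact GRANT statements are an assumption. The '%' host lets any worker node connect, which you may want to restrict to your subnet in production.

```shell
# Assumed SQL for the 'transcode' user; '%' allows logins from any
# worker node (restrict to your subnet in production).
GRANT_SQL="CREATE USER 'transcode'@'%' IDENTIFIED BY 'transcode';
GRANT ALL PRIVILEGES ON transcoding.* TO 'transcode'@'%';
FLUSH PRIVILEGES;"

# On the master, after importing the schema:
#   mysql -u root -p transcoding < transcoding.sql
#   echo "$GRANT_SQL" | mysql -u root -p
echo "$GRANT_SQL"
```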

  10. NFS-export the /srv directory and mount it on all nodes with the NFS client option "lookupcache=none", so files written by one node are immediately visible to the others
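The export and mount might look like this; the hostname and network range are assumptions. lookupcache=none stops the NFS client from caching directory lookups, so segment files and done-flags created by one node are seen by the rest right away.

```shell
# On the master (network range assumed), export /srv:
#   echo '/srv 192.168.1.0/24(rw,sync,no_root_squash)' >> /etc/exports
#   exportfs -ra
# On each worker, the corresponding /etc/fstab entry:
FSTAB_LINE="master.example.com:/srv /srv nfs lookupcache=none,rw 0 0"
echo "$FSTAB_LINE"    # append to /etc/fstab, then run: mount /srv
```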

  11. Install supervisord on all servers and copy supervisord.conf from the download directory to /etc/supervisord.conf. Restart the supervisord service.
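The shipped supervisord.conf presumably contains a program entry along these lines; the worker script name here is hypothetical, so check the downloaded file for the real one.

```ini
[program:transcode-worker]
; assumed script name -- replace with the real worker script
command=/srv/transcodeWorker.sh
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/transcode-worker.log
```

supervisord keeps the worker loop alive on every node and restarts it if it crashes, which is what lets idle nodes keep polling the queue.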

  12. Add an every-minute cron entry for the checkNodeFailed.sh script: */1 * * * * /srv/checkNodeFailed.sh

  13. To check the status of jobs you can use the dashboard. Copy the frontend folder to your Apache DocumentRoot; in my case it's /var/www/html/

    # cp -a frontend/ /var/www/html/
