
# Mocking AWS Elastic Transcoder Locally

This Python example is a mock implementation of AWS Elastic Transcoder built with Flask and FFmpeg. It simulates the AWS Elastic Transcoder service locally, which is useful for testing and development without incurring AWS costs or requiring internet access.

## Overview

The mock implementation provides a simple API similar to AWS Elastic Transcoder: you submit a transcoding job with input files, output specifications, and playlist configurations. It uses FFmpeg to perform the actual video transcoding and to package the output in the desired format.

## Dependencies

  1. Flask: A micro web framework for Python.
  2. FFmpeg: A complete, cross-platform solution to record, convert, and stream audio and video.

Ensure that FFmpeg is installed on your system. You can install it using a package manager:

```sh
sudo apt-get install ffmpeg
```

You can install Flask using pip:

```sh
pip install Flask
```

## Code Structure for Video

  1. The script starts a Flask web server.
  2. There is a POST endpoint (/2012-09-25/jobs) that simulates the creation of a new Elastic Transcoder job.
  3. There is a GET endpoint (/2012-09-25/jobs/<job_id>) to retrieve the status and information of a specific job.
  4. When a job is created, the script uses FFmpeg to transcode the video based on the provided parameters.
  5. The HLS playlist files (.m3u8) and segments (.ts) are generated by FFmpeg and stored locally.
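The actual script is more involved, but the following is a minimal sketch of those two endpoints, assuming an in-memory job store and a synchronous FFmpeg call. All names here are illustrative, not the real implementation:

```python
import subprocess
import uuid
from flask import Flask, jsonify, request

app = Flask(__name__)
jobs = {}  # in-memory job store: job_id -> job record

@app.route('/2012-09-25/jobs', methods=['POST'])
def create_job():
    payload = request.get_json()
    job_id = str(uuid.uuid4())
    jobs[job_id] = {'Id': job_id, 'Status': 'Progressing', **payload}
    for output in payload.get('Outputs', []):
        # Transcode each output to HLS; FFmpeg infers the HLS muxer
        # from the .m3u8 extension and emits .ts segments alongside it.
        subprocess.run([
            'ffmpeg', '-i', payload['Inputs'][0]['Key'],
            '-hls_time', output.get('SegmentDuration', '3'),
            '-hls_list_size', '0',
            output['Key'] + '.m3u8'
        ], check=True)
    jobs[job_id]['Status'] = 'Complete'
    return jsonify({'Job': jobs[job_id]}), 201

@app.route('/2012-09-25/jobs/<job_id>', methods=['GET'])
def read_job(job_id):
    job = jobs.get(job_id)
    if job is None:
        return jsonify({'Error': 'Job not found'}), 404
    return jsonify({'Job': job})

if __name__ == '__main__':
    app.run(port=5000)
```

A real transcoder would run FFmpeg asynchronously and report `Progressing` until the work finishes; the sketch transcodes inline to keep the flow easy to follow.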

### Sample Inputs

When creating a job, you can send a POST request with JSON data that includes inputs, outputs, and playlists.

Example:

```json
{
  "Inputs": [
    {
      "Key": "path_to_input_file_1"
    }
  ],
  "Outputs": [
    {
      "Key": "path_to_input_file_1_av720p",
      "PresetId": "preset-id",
      "SegmentDuration": "3"
    },
    {
      "Key": "path_to_input_file_1_av480p",
      "PresetId": "preset-id",
      "SegmentDuration": "3"
    },
    {
      "Key": "path_to_input_file_1_av360p",
      "PresetId": "preset-id",
      "SegmentDuration": "3"
    }
  ],
  "Playlists": [
    {
      "Name": "myplaylist",
      "Format": "HLSv3",
      "OutputKeys": [
        "path_to_input_file_1_av720p",
        "path_to_input_file_1_av480p",
        "path_to_input_file_1_av360p"
      ]
    }
  ]
}
```

## Running the Mock Transcoder

```sh
python mock_transcoder.py
```

This starts a Flask web server listening on http://127.0.0.1:5000.

  1. Use a tool like curl or Postman to send a POST request to http://127.0.0.1:5000/2012-09-25/jobs with the JSON payload described above.
  2. You can retrieve job status by sending a GET request to http://127.0.0.1:5000/2012-09-25/jobs/<JOB_ID>.
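If you prefer a scripted client over curl or Postman, here is a rough equivalent using the `requests` library. The response shape assumes the mock wraps the job under a `Job` key, as the real API does:

```python
import requests

BASE_URL = 'http://127.0.0.1:5000'

# Submit a job (payload trimmed to one output; see the full sample above)
job_payload = {
    'Inputs': [{'Key': 'path_to_input_file_1'}],
    'Outputs': [{
        'Key': 'path_to_input_file_1_av720p',
        'PresetId': 'preset-id',
        'SegmentDuration': '3'
    }]
}
response = requests.post(f'{BASE_URL}/2012-09-25/jobs', json=job_payload)

# Assumes the mock mirrors the real API's {"Job": {...}} envelope
job_id = response.json()['Job']['Id']

# Retrieve the job's status and information
status = requests.get(f'{BASE_URL}/2012-09-25/jobs/{job_id}')
print(status.json())
```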

## Note

This example is a basic simulation of AWS Elastic Transcoder for educational purposes and is not meant for production.

## Adjust AWS SDK Calls in Your Application

In the application where you use the AWS SDK to interact with Elastic Transcoder, point the SDK's endpoint at your local Flask server. Here's an example in Python using boto3:

```python
import boto3

# Create a client and point it at the local mock server instead of AWS.
# The dummy credentials satisfy boto3's request signing; the mock does
# not validate them.
client = boto3.client(
    'elastictranscoder',
    region_name='us-east-1',
    endpoint_url='http://localhost:5000',
    aws_access_key_id='test',
    aws_secret_access_key='test'
)

# Example 1: a minimal CreateJob request
response = client.create_job(
    PipelineId='your-pipeline-id',
    Inputs=[{'Key': 'path_to_input_file'}],
    Outputs=[{'Key': 'path_to_output_file', 'PresetId': 'preset-id'}]
)

# Example 2: a CreateJob request with outputs and a playlist, as it might
# appear inside a class that stores the client and pipeline id on self
response = self.client.create_job(
    PipelineId=self.videoPipeline,
    Inputs=inputs,
    Outputs=outputs,
    Playlists=playlists
)

print(response)
```

## Code Structure for Audio

  1. The script starts a Flask web server.
  2. There is a POST endpoint (/job) that simulates the creation of a new Elastic Transcoder job.
  3. There is a GET endpoint (/job/<job_id>) to retrieve the status and information of a specific job.

### Sample Inputs

When creating a job, you can send a POST request with JSON data that includes inputs, outputs, and playlists.

Example:

```json
{
  "input": {
    "bucket": "etl-bucket-input",
    "key": "FILESET_ID"
  },
  "output": [
    {
      "bucket": "dbp-bucket",
      "key": "audio/BIBLE_ID/FILESET_ID-opus16",
      "bitrate": 16,
      "container": "webm",
      "codec": "opus"
    }
  ]
}
```
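As a sketch of how such an audio output spec might be translated into an FFmpeg invocation (the function and path names here are illustrative, not the actual implementation):

```python
import subprocess

def transcode_audio(input_path, output):
    # Map the output spec onto FFmpeg flags: the opus codec in a WebM
    # container at the requested bitrate.
    output_path = f"{output['key']}.{output['container']}"
    subprocess.run([
        'ffmpeg', '-i', input_path,
        '-c:a', 'libopus',                # "codec": "opus"
        '-b:a', f"{output['bitrate']}k",  # "bitrate": 16 -> 16k
        output_path
    ], check=True)

# Hypothetical local paths standing in for the S3 bucket/key pairs
transcode_audio('etl-bucket-input/FILESET_ID', {
    'key': 'audio/BIBLE_ID/FILESET_ID-opus16',
    'bitrate': 16,
    'container': 'webm',
    'codec': 'opus'
})
```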