I can't manage to get Mangum to work with AWS Lambda in Docker. Everything works if I just run it as a Python process, but not in a Docker container.
Error:
{"errorMessage": "The adapter was unable to infer a handler to use for the event. This is likely related to how the Lambda function was invoked. (Are you testing locally? Make sure the request payload is valid for a supported handler.)", "errorType": "RuntimeError", "requestId": "8a98acd4-aab9-47bd-a21c-180f2c3fb483", "stackTrace": [" File \"/var/lang/lib/python3.10/site-packages/mangum/adapter.py\", line 76, in __call__\n handler = self.infer(event, context)\n", " File \"/var/lang/lib/python3.10/site-packages/mangum/adapter.py\", line 68, in infer\n raise RuntimeError( # pragma: no cover\n"]}
I'm also getting the same error when I deploy on AWS Lambda.
My server:
import joblib
import uvicorn
from api_models import Properties
from fastapi import FastAPI
from mangum import Mangum
from starlette.middleware.cors import CORSMiddleware

app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

model = joblib.load(filename="./saved_models/current_model.pkl")


@app.get("/")
def read_root():
    return {"Hello": "World"}


@app.post("/make_prediction")
def make_prediction(properties: Properties):
    # some calculations
    return result


handler = Mangum(app)

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
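As far as I understand, Mangum infers which handler to use from the shape of the event, so a v2.0-style payload needs at least the version and requestContext keys. Here is a minimal sketch of invoking the handler directly with such an event (the field values are placeholders I pieced together from the API Gateway v2.0 payload format, not a captured payload, and local_smoke_test.py is my own file name):

# local_smoke_test.py -- invoke the Mangum handler directly with a hand-built
# API Gateway HTTP API v2.0-style event, the same payload shape a Lambda
# Function URL sends. Field values are placeholders.
from server import handler

event = {
    "version": "2.0",
    "routeKey": "GET /",
    "rawPath": "/",
    "rawQueryString": "",
    "headers": {"host": "localhost"},
    "requestContext": {
        "http": {
            "method": "GET",
            "path": "/",
            "protocol": "HTTP/1.1",
            "sourceIp": "127.0.0.1",
            "userAgent": "curl/8.0.1",
        },
    },
    "isBase64Encoded": False,
}

# Mangum only stores the Lambda context in the ASGI scope, so None seems to
# be enough for a local test.
print(handler(event, None))

If the app itself is fine, this should print a response dict with "statusCode": 200 and the {"Hello": "World"} body.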
Dockerfile:
FROM public.ecr.aws/lambda/python:3.10
RUN yum install libgomp git -y && \
    yum clean all -y && \
    rm -rf /var/cache/yum
COPY requirements.txt ${LAMBDA_TASK_ROOT}
RUN pip install -r requirements.txt
COPY serving/backend ${LAMBDA_TASK_ROOT}
COPY saved_models ${LAMBDA_TASK_ROOT}/saved_models
CMD ["server.handler"]
I run it as docker run -p 9000:8080 prediction_lambda. I want to deploy this as a Lambda Function URL, which uses API Gateway 2.0-style requests. My request:

So I'm trying to reach at least the / endpoint with GET to debug this, but eventually for production I want to use the POST method. How can I fix this?
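For comparison, this is roughly how the container can be invoked through the Runtime Interface Emulator that ships with the public.ecr.aws/lambda base image; the /2015-03-31/functions/function/invocations path is the emulator's invoke endpoint, and the event is the same v2.0-style sketch as above (invoke_container.py is my own file name):

# invoke_container.py -- POST an event to the container started with
# `docker run -p 9000:8080 prediction_lambda`. The Runtime Interface
# Emulator listens on 8080 inside the container.
import json

import requests

from local_smoke_test import event  # the v2.0-style event sketched above

url = "http://localhost:9000/2015-03-31/functions/function/invocations"
response = requests.post(url, data=json.dumps(event), timeout=30)
print(response.status_code, response.text)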