Issue Description: When running a Dockerized FastAPI application on Hugging Face Spaces, I get an error about caching a function '__o_fold'. The application runs without error locally but fails when deployed to Hugging Face. Below are the Python code, the Dockerfile, and the error message received.
Python File (app/model.py):
from fastapi import APIRouter, UploadFile
import pickle
import wave
import numpy as np
import librosa
from pathlib import Path
import os
# Load the trained model
BASE_DIR = Path(__file__).resolve(strict=True).parent
# Get the model file path from the environment variable or use a default path
MODEL_PATH = os.environ.get("NUMBA_CACHE_DIR", str(BASE_DIR / "BabyCry.pkl"))
with open(f"{BASE_DIR}/BabyCry.pkl", "rb") as f:
model = pickle.load(f)
photo = {
"tired": "https://res.cloudinary.com/dkeeazjre/image/upload/v1709667113/Photos/jctslebxolctwgsuras5.jpg",
"belly pain": "https://res.cloudinary.com/dkeeazjre/image/upload/v1709667122/Photos/tcyaxoef4hww6lhwklzt.jpg",
"hungry": "https://res.cloudinary.com/dkeeazjre/image/upload/v1709667131/Photos/x7p2xw9hhs00yijhsoyt.jpg",
"discomfort": "https://res.cloudinary.com/dkeeazjre/image/upload/v1709667141/Photos/dwjt39eidjg29abchtof.jpg",
"burping": "https://res.cloudinary.com/dkeeazjre/image/upload/v1709667151/Photos/tftrw6gelgl1ucpy0ozb.jpg"
}
router = APIRouter(prefix="/baby_cry_predictor", tags=["Baby Cry Predictor"])
@router.post("/")
def predicting_emotions(baby_cry_audio: UploadFile):
    try:
        # Read audio data
        with wave.open(baby_cry_audio.file, 'rb') as audio_file:
            audio_data = audio_file.readframes(-1)
            sr = audio_file.getframerate()
        # Extract features
        audio = np.frombuffer(audio_data, dtype=np.int16)
        audio = audio.astype(np.float64)
        mfccs = librosa.feature.mfcc(y=audio, sr=sr)
        mfccs_mean = np.mean(mfccs, axis=1)
        # Make prediction
        prediction = model.predict([mfccs_mean])
        if prediction[0] == "belly_pain":
            prediction[0] = "belly pain"
        # Return the predicted feeling and its matching photo
        return {"feeling": prediction[0].title(), "photo": photo[prediction[0]]}
    except Exception as e:
        # Handle any potential errors during processing
        print(os.environ.get("NUMBA_DISABLE_JIT_CACHE"))
        print(os.environ.get("NUMBA_CACHE_DIR"))
        return {"Error": str(e)}
Dockerfile:
# Use the official Python image as the base image
FROM python:3.10
# Set the working directory inside the container
WORKDIR /code
ENV NUMBA_DISABLE_JIT_CACHE="1"
# Copy the requirements file to the working directory
COPY ./requirements.txt /code/requirements.txt
# Install dependencies
RUN pip install --no-cache-dir --upgrade -r requirements.txt
# Copy the app directory into the container at /code/app
COPY ./app /code/app
# Set the environment variable for the model file path
ENV NUMBA_CACHE_DIR=/code/app/model/BabyCry.pkl
# Command to run the FastAPI application with Uvicorn
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "7860"]
Error Message:
{ "Error": "cannot cache function '__o_fold': no locator available for file '/usr/local/lib/python3.10/site-packages/librosa/core/notation.py'" }
Additional Details: When running the Docker command provided by Hugging Face locally, no errors occur. However, deploying the Docker image to Hugging Face results in the above error. The Docker command used is:
docker run -it -p 7860:7860 --platform=linux/amd64 \
-e NUMBA_CACHE_DIR="/code/app/model/BabyCry.pkl" \
registry.hf.space/ahmed-muqawi-baby-cry-predictor:latest
I expected the Docker image to run on Hugging Face just as it does with that command on my local machine, but deployment instead produces the '__o_fold' caching error above.
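In case it helps with debugging, this is a small check I could run inside the container to compare the raw environment variable with the value Numba actually resolves (numba.config.CACHE_DIR is where I believe Numba exposes that setting; treat the attribute name as an assumption):

import os
import numba
import librosa

# The raw environment variable versus the value Numba resolved at import time
print("NUMBA_CACHE_DIR env:", os.environ.get("NUMBA_CACHE_DIR"))
print("numba.config.CACHE_DIR:", numba.config.CACHE_DIR)

# Without a usable cache dir, Numba falls back to caching beside the library's
# own source files, which would not be writable on the Space
librosa_dir = os.path.dirname(librosa.__file__)
print("librosa install dir writable:", os.access(librosa_dir, os.W_OK))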