When creating an image using locally developed models, ensure that they meet the specifications defined by ModelArts.
The external service port of the image must be 8080, and the inference interface must match the URL defined by `apis` in the config.json file. The inference interface must be directly accessible once the image is started. The following is an example of accessing the mnist image, which contains a model trained on the MNIST dataset that identifies handwritten digits in images. In this example, listen_ip indicates the IP address of the container.
{"mnist_result": 7}
A custom image must provide a health check interface for ModelArts to call. The health check interface is configured in the config.json file; for details, see the description of compiling the model configuration file. A sample health check interface is as follows:
```
GET /health
```

A successful response:

```json
{"health": "true"}
```
| Status Code | Message | Description |
|---|---|---|
| 200 | OK | Successful request |
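Putting the port, inference, and health check requirements together, the following is a minimal sketch of such a service using only the Python standard library. The route names follow the examples above; `predict()` is a hypothetical placeholder for real model inference, and the request path handled by `do_POST` would in practice be the URL defined by `apis` in config.json.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def predict(payload: bytes) -> dict:
    # Hypothetical placeholder: a real service would run the model here.
    return {"mnist_result": 7}


class InferenceHandler(BaseHTTPRequestHandler):
    def _send_json(self, body: dict) -> None:
        data = json.dumps(body).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def do_GET(self):
        # Health check interface declared in config.json.
        if self.path == "/health":
            self._send_json({"health": "true"})
        else:
            self.send_error(404)

    def do_POST(self):
        # Inference interface; the path must match the URL defined by apis.
        length = int(self.headers.get("Content-Length", 0))
        payload = self.rfile.read(length)
        self._send_json(predict(payload))

    def log_message(self, fmt, *args):
        # Write access logs to standard output, as the platform requires.
        print(fmt % args)


def serve(port: int = 8080) -> None:
    # The external service port of the image must be 8080.
    HTTPServer(("0.0.0.0", port), InferenceHandler).serve_forever()
```

Calling `serve()` from the container's boot script would start the listener on port 8080.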
To ensure that log content is displayed correctly, write all logs to standard output.
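As a minimal sketch of this requirement in Python (the logger name and format are assumptions), the service can route its logging to standard output instead of the default standard error:

```python
import logging
import sys


def configure_stdout_logging(name: str = "inference") -> logging.Logger:
    """Route service logs to standard output so the platform can collect them."""
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    # StreamHandler writes to stderr by default; pass sys.stdout explicitly.
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(message)s")
    )
    logger.handlers = [handler]
    return logger


logger = configure_stdout_logging()
logger.info("model loaded")
```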
To deploy a batch service, set the boot file of the image to /home/run.sh and use CMD to set the default boot command. The following is a sample Dockerfile instruction:
```dockerfile
CMD /bin/sh /home/run.sh
```
To deploy a batch service, install component packages such as Python, JRE/JDK, and ZIP in the image.
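Putting the batch-service requirements together, a Dockerfile might look like the following sketch. The base image, package names, and the existence of a local run.sh are assumptions for illustration; any base image with a package manager works similarly.

```dockerfile
# Hypothetical base image chosen for illustration.
FROM ubuntu:18.04

# Install the component packages a batch service needs
# (assumed package names for Python, a JRE, and zip).
RUN apt-get update && \
    apt-get install -y python3 default-jre zip && \
    rm -rf /var/lib/apt/lists/*

# Copy the boot script to the path the platform expects.
COPY run.sh /home/run.sh

# Use CMD to set the default boot command.
CMD /bin/sh /home/run.sh
```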