From c77e557422f4970e6fed1cf583e77c19f164469f Mon Sep 17 00:00:00 2001
From: "Lai, Weijian"
Date: Tue, 9 May 2023 10:07:18 +0000
Subject: [PATCH] modelarts_umn_20230504

Reviewed-by: Jiang, Beibei
Reviewed-by: Rechenburg, Matthias
Co-authored-by: Lai, Weijian
Co-committed-by: Lai, Weijian
---
 docs/modelarts/umn/modelarts_23_0091.html | 16 ++++-----
 docs/modelarts/umn/modelarts_23_0092.html | 42 ++++++++---------
 2 files changed, 22 insertions(+), 36 deletions(-)

diff --git a/docs/modelarts/umn/modelarts_23_0091.html b/docs/modelarts/umn/modelarts_23_0091.html
index a2c25f38..2f833c02 100644
--- a/docs/modelarts/umn/modelarts_23_0091.html
+++ b/docs/modelarts/umn/modelarts_23_0091.html
@@ -2,7 +2,7 @@

Model Package Specifications

When you import models in Model Management, if the meta model is imported from OBS or a container image, the model package must meet the following specifications:

-
  • The model package must contain the model directory. The model directory stores the model file, model configuration file, and model inference code.
  • The model configuration file must exist and its name is fixed to config.json. There exists only one model configuration file. For details about how to compile the model configuration file, see Specifications for Compiling the Model Configuration File.
  • The model inference code file is optional. If this file is required, the file name is fixed to customize_service.py. There must be one and only one such file. For details about how to compile the model inference code, see Specifications for Compiling Model Inference Code.
    • The .py file on which customize_service.py depends can be directly stored in the model directory. Use the Python import mode to import the custom package.
    • The other files on which customize_service.py depends can be stored in the model directory. You must use absolute paths to access these files. For more details, see Obtaining an Absolute Path.
    +
    • The model package must contain the model directory. The model directory stores the model file, model configuration file, and model inference code.
    • The model configuration file must exist and its name is fixed to config.json. There exists only one model configuration file. For details about how to compile the model configuration file, see Specifications for Compiling the Model Configuration File.
    • The model inference code file is mandatory. The file name is fixed to customize_service.py. There must be one and only one such file. For details about how to compile the model inference code, see Specifications for Compiling Model Inference Code.
      • The .py file on which customize_service.py depends can be directly stored in the model directory. Use the Python import mode to import the custom package.
      • The other files on which customize_service.py depends can be stored in the model directory. You must use absolute paths to access these files. For more details, see Obtaining an Absolute Path.

    ModelArts also provides custom script examples of common AI engines. For details, see Examples of Custom Scripts.
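    The inference code contract described above can be sketched as follows. This is a minimal illustration only: the base class here is a hypothetical stand-in, and the real class and hook names are defined in Specifications for Compiling Model Inference Code.

    ```python
    # Minimal sketch of a customize_service.py-style handler.
    # BaseInferenceService is a hypothetical stand-in for the ModelArts
    # inference base class; real class and hook names may differ.

    class BaseInferenceService:
        """Hypothetical stand-in for the serving framework's base class."""

        def __init__(self, model_name, model_path):
            self.model_name = model_name
            self.model_path = model_path

        def inference(self, data):
            # The serving framework typically chains the three hooks.
            return self._postprocess(self._inference(self._preprocess(data)))


    class CustomizeService(BaseInferenceService):
        def _preprocess(self, data):
            # Turn the raw request into model input (e.g. decode images).
            return data

        def _inference(self, data):
            # Run the model; a placeholder echo stands in for a real model.
            return {"predicted_label": "flower"}

        def _postprocess(self, result):
            # Shape the model output into the response structure.
            return result
    ```

    The .py files the handler depends on can sit next to it in the model directory and be imported with plain Python import statements, as noted above.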

    @@ -16,7 +16,7 @@
     | │ │ ├── variables.index Mandatory
     | │ │ ├── variables.data-00000-of-00001 Mandatory
     | │ ├──config.json (Mandatory) Model configuration file. The file name is fixed to config.json. Only one model configuration file is supported.
    -| │ ├──customize_service.py (Optional) Model inference code. The file name is fixed to customize_service.py. Only one model inference code file exists. The files on which customize_service.py depends can be directly stored in the model directory.
    +| │ ├──customize_service.py (Mandatory) Model inference code. The file name is fixed to customize_service.py. Only one model inference code file exists. The files on which customize_service.py depends can be directly stored in the model directory.
  • Structure of the MXNet-based model package

    When publishing the model, you only need to specify the resnet directory.

    OBS bucket/directory name
     |── resnet
    @@ -25,7 +25,7 @@
     |   │   ├── resnet-50-symbol.json (Mandatory) Model definition file, which contains the neural network description of the model
     |   │   ├── resnet-50-0000.params (Mandatory) Model variable parameter file, which contains parameter and weight information
     |   │   ├──config.json (Mandatory) Model configuration file. The file name is fixed to config.json. Only one model configuration file is supported.
    -|   │   ├──customize_service.py (Optional) Model inference code. The file name is fixed to customize_service.py. Only one model inference code file exists. The files on which customize_service.py depends can be directly stored in the model directory.
    +|   │   ├──customize_service.py (Mandatory) Model inference code. The file name is fixed to customize_service.py. Only one model inference code file exists. The files on which customize_service.py depends can be directly stored in the model directory.
  • Structure of the Image-based model package

    When publishing the model, you only need to specify the resnet directory.

    OBS bucket/directory name
     |── resnet
    @@ -38,7 +38,7 @@
     |   │  ├── <<Custom Python package>> (Optional) User's Python package, which can be directly referenced in the model inference code
     |   │  ├── spark_model (Mandatory) Model directory, which contains the model content saved by PySpark
     |   │  ├──config.json (Mandatory) Model configuration file. The file name is fixed to config.json. Only one model configuration file is supported.
    -|   │  ├──customize_service.py (Optional) Model inference code. The file name is fixed to customize_service.py. Only one model inference code file exists. The files on which customize_service.py depends can be directly stored in the model directory.
    +|   │  ├──customize_service.py (Mandatory) Model inference code. The file name is fixed to customize_service.py. Only one model inference code file exists. The files on which customize_service.py depends can be directly stored in the model directory.
  • Structure of the PyTorch-based model package

    When publishing the model, you only need to specify the resnet directory.

    OBS bucket/directory name
     |── resnet
    @@ -46,7 +46,7 @@
     |   │  ├── <<Custom Python package>> (Optional) User's Python package, which can be directly referenced in the model inference code
     |   │  ├── resnet50.pth (Mandatory) PyTorch model file, which contains variable and weight information and is saved as state_dict
     |   │  ├──config.json (Mandatory) Model configuration file. The file name is fixed to config.json. Only one model configuration file is supported.
    -|   │  ├──customize_service.py (Optional) Model inference code. The file name is fixed to customize_service.py. Only one model inference code file exists. The files on which customize_service.py depends can be directly stored in the model directory.
    +|   │  ├──customize_service.py (Mandatory) Model inference code. The file name is fixed to customize_service.py. Only one model inference code file exists. The files on which customize_service.py depends can be directly stored in the model directory.
  • Structure of the Caffe-based model package
    When publishing the model, you only need to specify the resnet directory.
    OBS bucket/directory name
     |── resnet
     |   |── model (Mandatory) Name of a fixed subdirectory, which is used to store model-related files
    @@ -54,7 +54,7 @@
     |   |   |── deploy.prototxt (Mandatory) Caffe model file, which contains information such as the model network structure
     |   |   |── resnet.caffemodel (Mandatory) Caffe model file, which contains variable and weight information
     |   |   |── config.json (Mandatory) Model configuration file. The file name is fixed to config.json. Only one model configuration file is supported.
    -|   |   |── customize_service.py  (Optional) Model inference code. The file name is fixed to customize_service.py. Only one model inference code file exists. The files on which customize_service.py depends can be directly stored in the model directory. 
    +|   |   |── customize_service.py (Mandatory) Model inference code. The file name is fixed to customize_service.py. Only one model inference code file exists. The files on which customize_service.py depends can be directly stored in the model directory.
  • Structure of the XGBoost-based model package
    When publishing the model, you only need to specify the resnet directory.
    OBS bucket/directory name
     |── resnet
    @@ -62,7 +62,7 @@
     |   |   |── <<Custom Python package>> (Optional) User's Python package, which can be directly referenced in the model inference code
     |   |   |── *.m (Mandatory): Model file whose extension name is .m
     |   |   |── config.json (Mandatory) Model configuration file. The file name is fixed to config.json. Only one model configuration file is supported.
    -|   |   |── customize_service.py  (Optional) Model inference code. The file name is fixed to customize_service.py. Only one model inference code file exists. The files on which customize_service.py depends can be directly stored in the model directory. 
    +|   |   |── customize_service.py (Mandatory) Model inference code. The file name is fixed to customize_service.py. Only one model inference code file exists. The files on which customize_service.py depends can be directly stored in the model directory.
  • Structure of the Scikit_Learn-based model package
    When publishing the model, you only need to specify the resnet directory.
    OBS bucket/directory name
     |── resnet
    @@ -70,7 +70,7 @@
     |   |   |── <<Custom Python package>> (Optional) User's Python package, which can be directly referenced in the model inference code
     |   |   |── *.m (Mandatory): Model file whose extension name is .m
     |   |   |── config.json (Mandatory) Model configuration file. The file name is fixed to config.json. Only one model configuration file is supported.
    -|   |   |── customize_service.py  (Optional) Model inference code. The file name is fixed to customize_service.py. Only one model inference code file exists. The files on which customize_service.py depends can be directly stored in the model directory. 
    +|   |   |── customize_service.py (Mandatory) Model inference code. The file name is fixed to customize_service.py. Only one model inference code file exists. The files on which customize_service.py depends can be directly stored in the model directory.
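Across all of the engine-specific structures above, the same invariants hold: a fixed model subdirectory containing exactly one config.json and one customize_service.py. A small sketch of a pre-upload check, assuming plain filesystem access (the helper name is illustrative):

```python
# Sketch: check a model package against the layout rules above --
# a fixed "model" subdirectory that must contain one config.json
# and one customize_service.py. Helper name is illustrative.
import os

def validate_model_package(package_dir):
    model_dir = os.path.join(package_dir, "model")
    if not os.path.isdir(model_dir):
        return False, "missing fixed 'model' subdirectory"
    for required in ("config.json", "customize_service.py"):
        if not os.path.isfile(os.path.join(model_dir, required)):
            return False, "missing " + required
    return True, "ok"
```

Running such a check locally before uploading to OBS catches layout mistakes early, since the import fails otherwise.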
diff --git a/docs/modelarts/umn/modelarts_23_0092.html b/docs/modelarts/umn/modelarts_23_0092.html
index 13370eb1..c34b615e 100644
--- a/docs/modelarts/umn/modelarts_23_0092.html
+++ b/docs/modelarts/umn/modelarts_23_0092.html
@@ -411,8 +411,7 @@

Example of the Object Detection Model Configuration File

The following code uses the TensorFlow engine as an example. You can modify the model_type parameter based on the actual engine type.

  • Model input

    Key: images

    Value: image files

-
-• Model output
-```
-{
+
+• Model output
+{
         "detection_classes": [
             "face",
             "arm"
@@ -432,10 +431,8 @@
         ]
     ],
     "detection_scores": [0.99, 0.73]
-}
-```
-
-• Configuration file
-```
-{
+}
+
+• Configuration file
+{
         "model_type": "TensorFlow",
         "model_algorithm": "object_detection",
         "metrics": {
    @@ -505,24 +502,20 @@
                 }
             ]
         }]
    -}
    -```
    +}
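The object detection output above returns detection_classes, detection_boxes, and detection_scores as parallel lists. A client might pair them like this; the helper and the sample box coordinates are illustrative, not taken from the document:

```python
# Pair the parallel lists of an object detection response and drop
# low-confidence results. Field names follow the example above; the
# helper and the sample box coordinates are made-up illustrations.
def pair_detections(resp, threshold=0.5):
    return [
        {"class": c, "box": b, "score": s}
        for c, b, s in zip(
            resp["detection_classes"],
            resp["detection_boxes"],
            resp["detection_scores"],
        )
        if s >= threshold
    ]

sample = {
    "detection_classes": ["face", "arm"],
    "detection_boxes": [[33.6, 42.6, 104.5, 203.4],
                        [103.1, 92.8, 765.6, 945.7]],
    "detection_scores": [0.99, 0.73],
}
```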

Example of the Image Classification Model Configuration File

The following code uses the TensorFlow engine as an example. You can modify the model_type parameter based on the actual engine type.

  • Model input

    Key: images

    Value: image files

-
-• Model output
-```
-{
+
+• Model output
+{
         "predicted_label": "flower",
         "scores": [
            ["rose", 0.99],
            ["begonia", 0.01]
         ]
-}
-```
-
-• Configuration file
-```
-{
+}
+
+• Configuration file
+{
         "model_type": "TensorFlow",
         "model_algorithm": "image_classification",
         "metrics": {
    @@ -588,13 +581,11 @@
                 }
             ]
         }]
    -}
    -```
    +}
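In the image classification output above, scores is a list of [label, probability] pairs alongside the overall predicted_label. A minimal sketch of reading the top prediction from such a response:

```python
# Read the top prediction from an image classification response of
# the shape shown above, where "scores" holds [label, prob] pairs.
def top_label(resp):
    label, _score = max(resp["scores"], key=lambda pair: pair[1])
    return label

sample = {
    "predicted_label": "flower",
    "scores": [["rose", 0.99], ["begonia", 0.01]],
}
```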

Example of the Predictive Analytics Model Configuration File

The following code uses the TensorFlow engine as an example. You can modify the model_type parameter based on the actual engine type.

-
-• Model input
-```
-{
+
+• Model input
+{
           "data": {
               "req_data": [
                   {
      @@ -617,10 +608,8 @@
                   }
               ]
           }
-}
-```
-
-• Model output
-```
-{
+}
+
+• Model output
+{
           "data": {
               "resp_data": [
                   {
      @@ -631,10 +620,8 @@
                   }
               ]
           }
-}
-```
-
-• Configuration file
-```
-{
+}
+
+• Configuration file
+{
           "model_type": "TensorFlow",
           "model_algorithm": "predict_analysis",
           "metrics": {
      @@ -708,8 +695,7 @@
                           "package_name": "Pillow"
                       }]
               }]
      -}
      -```
      +}
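The predictive analytics service above exchanges bodies in the {"data": {"req_data": [...]}} shape. A minimal sketch of building such a request; the feature names are hypothetical placeholders, since the actual fields depend on the model's input schema:

```python
# Build a predictive analytics request in the
# {"data": {"req_data": [...]}} shape shown above. The feature
# names "attr_1"/"attr_2" are hypothetical placeholders.
import json

def build_request(records):
    return json.dumps({"data": {"req_data": records}})

body = build_request([{"attr_1": 5.1, "attr_2": 3.5}])
```

The response arrives in the matching {"data": {"resp_data": [...]}} shape shown in the model output above.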

Example of the Custom Image Model Configuration File

The model input and output are similar to those in Example of the Object Detection Model Configuration File.