
Model Deployment

Deploying AI models and rolling them out at scale is generally complex.

ModelArts resolves this issue by letting you deploy a trained model to devices in various scenarios with only a few clicks. This secure and reliable one-stop deployment is available for individual developers, enterprises, and device manufacturers.

Figure 1 Process of deploying a model
  • Models can be deployed as real-time inference services or batch inference tasks (see the sketch after this list for calling a real-time service).
  • The real-time inference service features high concurrency, low latency, and elastic scaling.
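
Once a model is deployed as a real-time service, it exposes a REST endpoint that applications can call. The following minimal Python sketch illustrates one way to invoke such an endpoint with token-based authentication; the endpoint URL, token value, and request payload schema are assumptions and must be replaced with the values shown on the details page of your own deployed service.

import json

import requests

# Assumed values: copy the real endpoint URL from the service details page
# and obtain a valid IAM token for your account.
ENDPOINT = "https://<modelarts-inference-endpoint>/v1/infers/<service-id>"
TOKEN = "<your-iam-token>"


def predict(payload: dict) -> dict:
    """Send one inference request to the deployed real-time service."""
    headers = {
        "Content-Type": "application/json",
        # Token-based authentication header (assumed configuration).
        "X-Auth-Token": TOKEN,
    }
    resp = requests.post(ENDPOINT, headers=headers, data=json.dumps(payload), timeout=10)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    # Example payload; the actual schema depends on the model's input definition.
    print(predict({"data": {"req_data": [{"feature_1": 0.5, "feature_2": 1.2}]}}))

Batch inference tasks, by contrast, read their input from storage and write results back in bulk, so they are configured through the console or SDK rather than called per request.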