Reviewed-by: gtema <artem.goncharov@gmail.com> Co-authored-by: Jiang, Beibei <beibei.jiang@t-systems.com> Co-committed-by: Jiang, Beibei <beibei.jiang@t-systems.com>
Model Deployment
Deploying AI models and putting them into large-scale use is typically a complex undertaking.
ModelArts simplifies this by letting you deploy a trained model to different devices and scenarios in only a few clicks. This secure and reliable one-stop deployment is available to individual developers, enterprises, and device manufacturers alike.
- Models can be deployed as real-time inference services or as batch inference tasks.
- Real-time inference services feature high concurrency, low latency, and elastic scaling.
Parent topic: Basic Knowledge