This is a built-in input and output mode for predictive analytics. Models using this mode are identified as predictive analytics models. The prediction request path is /, the request protocol is HTTP, the request method is POST, and the Content-Type is application/json. The request body is in JSON format. For details about the JSON fields, see Table 1. Before selecting this mode, ensure that your model can process input data in the JSON format defined by this mode's JSON Schema.
Field | Type | Description
---|---|---
data | Data structure | Inference data. For details, see Table 2.
ReqData is of the Object type and indicates the inference data. The data structure is determined by the application scenario. For models using this mode, the preprocessing logic in the custom model inference code must be able to correctly parse data provided in the format defined by the mode.
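As a rough illustration, the preprocessing step of custom inference code could unpack req_data as follows. This is a minimal sketch; the function name and return type are illustrative, not a fixed API of the platform.

```python
# Illustrative preprocessing sketch for this mode (the function name is hypothetical).
# Expected request body shape: {"data": {"req_data": [{...}, {...}]}}

def preprocess(request_body: dict) -> list:
    """Return the list of records to run inference on."""
    records = request_body.get("data", {}).get("req_data", [])
    # Each record is a scenario-specific JSON object, for example one row of feature values.
    return list(records)
```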
The JSON Schema of a prediction request is as follows:
{ "type": "object", "properties": { "data": { "type": "object", "properties": { "req_data": { "items": [{ "type": "object", "properties": {} }], "type": "array" } } } } }
The inference result is returned in JSON format. For details about the JSON fields, see Table 3.
Field | Type | Description
---|---|---
data | Data structure | Inference data. For details, see Table 4.
Similar to ReqData, RespData is also of the Object type and indicates the prediction result. Its structure is determined by the application scenario. For models using this mode, the postprocessing logic in the custom model inference code should be able to correctly output data in the format defined by the mode.
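As a sketch of such postprocessing, the model output could be wrapped into the expected structure as follows. The function and field names are illustrative only.

```python
# Illustrative postprocessing sketch for this mode (the function name is hypothetical).
# Target response body shape: {"data": {"resp_data": [{...}, {...}]}}

def postprocess(predictions: list) -> dict:
    """Wrap per-record predictions into the response format of this mode."""
    # "prediction" is an example key; the actual fields depend on the scenario.
    resp_data = [{"prediction": value} for value in predictions]
    return {"data": {"resp_data": resp_data}}
```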
The JSON Schema of a prediction result is as follows:
{ "type": "object", "properties": { "data": { "type": "object", "properties": { "resp_data": { "type": "array", "items": [{ "type": "object", "properties": {} }] } } } } }
In this mode, input the data to be predicted in JSON format. The prediction result is returned in JSON format. The following are examples:
On the Prediction tab page of the service details page, enter the prediction request in JSON format and click Predict to obtain the prediction result.
After a model is deployed as a service, you can obtain the API URL on the Usage Guides tab page of the service details page.
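For instance, with that API URL, a prediction request could be sent as in the following sketch. The URL, token, and the X-Auth-Token header are placeholders and assumptions; the exact authentication method depends on how the service is deployed.

```python
import requests

API_URL = "https://<api-url-from-usage-guides>"  # placeholder: copy from the Usage Guides tab
TOKEN = "<your-auth-token>"                      # placeholder: obtain per your deployment

# Illustrative payload in the format defined by this mode.
payload = {"data": {"req_data": [{"feature_1": 5.1, "feature_2": "A"}]}}

# POST a JSON body to the service, as described above (path "/", Content-Type application/json).
response = requests.post(
    API_URL,
    json=payload,                     # serializes the dict and sets Content-Type: application/json
    headers={"X-Auth-Token": TOKEN},  # assumed header name; adjust to your auth scheme
    timeout=30,
)
response.raise_for_status()
print(response.json()["data"]["resp_data"])
```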