diff --git a/docs/modelarts/umn/ALL_META.TXT.json b/docs/modelarts/umn/ALL_META.TXT.json index 982e94ef..9adc8b16 100644 --- a/docs/modelarts/umn/ALL_META.TXT.json +++ b/docs/modelarts/umn/ALL_META.TXT.json @@ -123,7 +123,7 @@ "uri":"modelarts_01_0006.html", "product_code":"modelarts", "code":"13", - "des":"ModelArts uses Object Storage Service (OBS) to store data and model backups and snapshots. OBS provides secure, reliable, low-cost storage. For more details, see Object S", + "des":"ModelArts uses Identity and Access Management (IAM) for authentication and authorization. For more information about IAM, see Identity and Access Management User Guide.Mo", "doc_type":"usermanual", "kw":"Related Services,Service Overview,User Guide", "title":"Related Services", @@ -549,20 +549,10 @@ "title":"Object Detection", "githuburl":"" }, - { - "uri":"modelarts_23_0345.html", - "product_code":"modelarts", - "code":"56", - "des":"Training a model uses a large number of labeled images. Therefore, label images before the model training. You can label images on the ModelArts management console. Alter", - "doc_type":"usermanual", - "kw":"Image Segmentation,Labeling Data,User Guide", - "title":"Image Segmentation", - "githuburl":"" - }, { "uri":"modelarts_23_0013.html", "product_code":"modelarts", - "code":"57", + "code":"56", "des":"Model training requires a large amount of labeled data. Therefore, before the model training, add labels to the files that are not labeled. In addition, you can modify, d", "doc_type":"usermanual", "kw":"Text Classification,Labeling Data,User Guide", @@ -572,7 +562,7 @@ { "uri":"modelarts_23_0014.html", "product_code":"modelarts", - "code":"58", + "code":"57", "des":"Named entity recognition assigns labels to named entities in text, such as time and locations. Before labeling, you need to understand the following:A label name can cont", "doc_type":"usermanual", "kw":"Named Entity Recognition,Labeling Data,User Guide", @@ -582,7 +572,7 @@ { "uri":"modelarts_23_0211.html", "product_code":"modelarts", - "code":"59", + "code":"58", "des":"Triplet labeling is suitable for scenarios where structured information, such as subjects, predicates, and objects, needs to be labeled in statements. With this function,", "doc_type":"usermanual", "kw":"Text Triplet,Labeling Data,User Guide", @@ -592,7 +582,7 @@ { "uri":"modelarts_23_0015.html", "product_code":"modelarts", - "code":"60", + "code":"59", "des":"Model training requires a large amount of labeled data. Therefore, before the model training, label the unlabeled audio files. ModelArts enables you to label audio files ", "doc_type":"usermanual", "kw":"Sound Classification,Labeling Data,User Guide", @@ -602,7 +592,7 @@ { "uri":"modelarts_23_0016.html", "product_code":"modelarts", - "code":"61", + "code":"60", "des":"Model training requires a large amount of labeled data. Therefore, before the model training, label the unlabeled audio files. ModelArts enables you to label audio files ", "doc_type":"usermanual", "kw":"Speech Labeling,Labeling Data,User Guide", @@ -612,27 +602,17 @@ { "uri":"modelarts_23_0017.html", "product_code":"modelarts", - "code":"62", + "code":"61", "des":"Model training requires a large amount of labeled data. Therefore, before the model training, label the unlabeled audio files. 
ModelArts enables you to label audio files.", "doc_type":"usermanual", "kw":"Speech Paragraph Labeling,Labeling Data,User Guide", "title":"Speech Paragraph Labeling", "githuburl":"" }, - { - "uri":"modelarts_23_0282.html", - "product_code":"modelarts", - "code":"63", - "des":"Model training requires a large amount of labeled video data. Therefore, before the model training, label the unlabeled video files. ModelArts enables you to label video ", - "doc_type":"usermanual", - "kw":"Video Labeling,Labeling Data,User Guide", - "title":"Video Labeling", - "githuburl":"" - }, { "uri":"modelarts_23_0005.html", "product_code":"modelarts", - "code":"64", + "code":"62", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"Importing Data", @@ -642,7 +622,7 @@ { "uri":"modelarts_23_0006.html", "product_code":"modelarts", - "code":"65", + "code":"63", "des":"After a dataset is created, you can directly synchronize data from the dataset. Alternatively, you can import more data by importing the dataset. Data can be imported fro", "doc_type":"usermanual", "kw":"Import Operation,Importing Data,User Guide", @@ -652,7 +632,7 @@ { "uri":"modelarts_23_0008.html", "product_code":"modelarts", - "code":"66", + "code":"64", "des":"When a dataset is imported, the data storage directory and file name must comply with the ModelArts specifications if the data to be used is stored in OBS.Only the follow", "doc_type":"usermanual", "kw":"Specifications for Importing Data from an OBS Directory,Importing Data,User Guide", @@ -662,7 +642,7 @@ { "uri":"modelarts_23_0009.html", "product_code":"modelarts", - "code":"67", + "code":"65", "des":"The manifest file defines the mapping between labeling objects and content. The Manifest file import mode means that the manifest file is used for dataset import. The man", "doc_type":"usermanual", "kw":"Specifications for Importing the Manifest File,Importing Data,User Guide", @@ -672,7 +652,7 @@ { "uri":"modelarts_23_0214.html", "product_code":"modelarts", - "code":"68", + "code":"66", "des":"A dataset includes labeled and unlabeled data. You can select images or filter data based on the filter criteria and export to a new dataset or the specified OBS director", "doc_type":"usermanual", "kw":"Exporting Data,Data Management,User Guide", @@ -682,7 +662,7 @@ { "uri":"modelarts_23_0020.html", "product_code":"modelarts", - "code":"69", + "code":"67", "des":"For a created dataset, you can modify its basic information to match service changes.You have created a dataset.Log in to the ModelArts management console. 
In the left na", "doc_type":"usermanual", "kw":"Modifying a Dataset,Data Management,User Guide", @@ -692,7 +672,7 @@ { "uri":"modelarts_23_0018.html", "product_code":"modelarts", - "code":"70", + "code":"68", "des":"ModelArts distinguishes data of the same source according to versions labeled at different time, which facilitates the selection of dataset versions during subsequent mod", "doc_type":"usermanual", "kw":"Publishing a Dataset,Data Management,User Guide", @@ -702,7 +682,7 @@ { "uri":"modelarts_23_0021.html", "product_code":"modelarts", - "code":"71", + "code":"69", "des":"If a dataset is no longer in use, you can delete it to release resources.After a dataset is deleted, if you need to delete the data in the dataset input and output paths ", "doc_type":"usermanual", "kw":"Deleting a Dataset,Data Management,User Guide", @@ -712,7 +692,7 @@ { "uri":"modelarts_23_0019.html", "product_code":"modelarts", - "code":"72", + "code":"70", "des":"After labeling data, you can publish the dataset to multiple versions for management. For the published versions, you can view the dataset version updates, set the curren", "doc_type":"usermanual", "kw":"Managing Dataset Versions,Data Management,User Guide", @@ -720,9 +700,59 @@ "githuburl":"" }, { - "uri":"modelarts_23_0032.html", + "uri":"modelarts_23_0180.html", + "product_code":"modelarts", + "code":"71", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"usermanual", + "kw":"Team Labeling", + "title":"Team Labeling", + "githuburl":"" + }, + { + "uri":"modelarts_23_0181.html", + "product_code":"modelarts", + "code":"72", + "des":"Generally, a small data labeling task can be completed by an individual. However, team work is required to label a large dataset. ModelArts provides the team labeling fun", + "doc_type":"usermanual", + "kw":"Introduction to Team Labeling,Team Labeling,User Guide", + "title":"Introduction to Team Labeling", + "githuburl":"" + }, + { + "uri":"modelarts_23_0182.html", "product_code":"modelarts", "code":"73", + "des":"Team labeling is managed in a unit of teams. To enable team labeling for a dataset, a team must be specified. Multiple members can be added to a team.An account can have ", + "doc_type":"usermanual", + "kw":"Team Management,Team Labeling,User Guide", + "title":"Team Management", + "githuburl":"" + }, + { + "uri":"modelarts_23_0183.html", + "product_code":"modelarts", + "code":"74", + "des":"There is no member in a new team. You need to add members who will participate in a team labeling task.A maximum of 100 members can be added to a team. 
If there are more ", + "doc_type":"usermanual", + "kw":"Member Management,Team Labeling,User Guide", + "title":"Member Management", + "githuburl":"" + }, + { + "uri":"modelarts_23_0210.html", + "product_code":"modelarts", + "code":"75", + "des":"For datasets with team labeling enabled, you can create team labeling tasks and assign the labeling tasks to different teams so that team members can complete the labelin", + "doc_type":"usermanual", + "kw":"Managing Team Labeling Tasks,Team Labeling,User Guide", + "title":"Managing Team Labeling Tasks", + "githuburl":"" + }, + { + "uri":"modelarts_23_0032.html", + "product_code":"modelarts", + "code":"76", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"DevEnviron (Notebook)", @@ -732,7 +762,7 @@ { "uri":"modelarts_23_0033.html", "product_code":"modelarts", - "code":"74", + "code":"77", "des":"ModelArts integrates the open-source Jupyter Notebook to provide you with online interactive development and debugging environments. You can use the Notebook on the Model", "doc_type":"usermanual", "kw":"Introduction to Notebook,DevEnviron (Notebook),User Guide", @@ -742,7 +772,7 @@ { "uri":"modelarts_23_0111.html", "product_code":"modelarts", - "code":"75", + "code":"78", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"Managing Notebook Instances", @@ -752,7 +782,7 @@ { "uri":"modelarts_23_0034.html", "product_code":"modelarts", - "code":"76", + "code":"79", "des":"Before developing a model, create a notebook instance, open it, and perform encoding.You will be charged as long as your notebook instance is in the Running status. We re", "doc_type":"usermanual", "kw":"Creating a Notebook Instance,Managing Notebook Instances,User Guide", @@ -762,7 +792,7 @@ { "uri":"modelarts_23_0325.html", "product_code":"modelarts", - "code":"77", + "code":"80", "des":"You can open a created notebook instance (that is, an instance in the Running state) and start coding in the development environment.Go to the Jupyter Notebook page.In th", "doc_type":"usermanual", "kw":"Opening a Notebook Instance,Managing Notebook Instances,User Guide", @@ -772,7 +802,7 @@ { "uri":"modelarts_23_0041.html", "product_code":"modelarts", - "code":"78", + "code":"81", "des":"You can stop unwanted notebook instances to prevent unnecessary fees. You can also start a notebook instance that is in the Stopped state to use it again.Log in to the Mo", "doc_type":"usermanual", "kw":"Starting or Stopping a Notebook Instance,Managing Notebook Instances,User Guide", @@ -782,7 +812,7 @@ { "uri":"modelarts_23_0042.html", "product_code":"modelarts", - "code":"79", + "code":"82", "des":"You can delete notebook instances that are no longer used to release resources.Log in to the ModelArts management console. 
In the left navigation pane, choose DevEnviron ", "doc_type":"usermanual", "kw":"Deleting a Notebook Instance,Managing Notebook Instances,User Guide", @@ -792,7 +822,7 @@ { "uri":"modelarts_23_0035.html", "product_code":"modelarts", - "code":"80", + "code":"83", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"Using Jupyter Notebook", @@ -802,7 +832,7 @@ { "uri":"modelarts_23_0326.html", "product_code":"modelarts", - "code":"81", + "code":"84", "des":"Jupyter Notebook is a web-based application for interactive computing. It can be applied to full-process computing: development, documentation, running code, and presenti", "doc_type":"usermanual", "kw":"Introduction to Jupyter Notebook,Using Jupyter Notebook,User Guide", @@ -812,7 +842,7 @@ { "uri":"modelarts_23_0120.html", "product_code":"modelarts", - "code":"82", + "code":"85", "des":"This section describes common operations on Jupyter Notebook.In the notebook instance list, locate the row where the target notebook instance resides and click Open in th", "doc_type":"usermanual", "kw":"Common Operations on Jupyter Notebook,Using Jupyter Notebook,User Guide", @@ -822,7 +852,7 @@ { "uri":"modelarts_23_0327.html", "product_code":"modelarts", - "code":"83", + "code":"86", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"Configuring the Jupyter Notebook Environment", @@ -832,7 +862,7 @@ { "uri":"modelarts_23_0117.html", "product_code":"modelarts", - "code":"84", + "code":"87", "des":"For developers who are used to coding, the terminal function is very convenient and practical. This section describes how to enable the terminal function in a notebook in", "doc_type":"usermanual", "kw":"Using the Notebook Terminal Function,Configuring the Jupyter Notebook Environment,User Guide", @@ -842,7 +872,7 @@ { "uri":"modelarts_23_0280.html", "product_code":"modelarts", - "code":"85", + "code":"88", "des":"For a GPU-based notebook instance, you can switch different versions of CUDA on the Terminal page of Jupyter.CPU-based notebook instances do not use CUDA. Therefore, the ", "doc_type":"usermanual", "kw":"Switching the CUDA Version on the Terminal Page of a GPU-based Notebook Instance,Configuring the Jup", @@ -852,7 +882,7 @@ { "uri":"modelarts_23_0040.html", "product_code":"modelarts", - "code":"86", + "code":"89", "des":"Multiple environments have been installed in ModelArts notebook instances, including TensorFlow. 
You can use pip install to install external libraries from a Jupyter note", "doc_type":"usermanual", "kw":"Installing External Libraries and Kernels in Notebook Instances,Configuring the Jupyter Notebook Env", @@ -862,7 +892,7 @@ { "uri":"modelarts_23_0039.html", "product_code":"modelarts", - "code":"87", + "code":"90", "des":"In notebook instances, you can use ModelArts SDKs to manage OBS, training jobs, models, and real-time services.For details about how to use ModelArts SDKs, see ModelArts ", "doc_type":"usermanual", "kw":"Using ModelArts SDKs,Using Jupyter Notebook,User Guide", @@ -872,7 +902,7 @@ { "uri":"modelarts_23_0038.html", "product_code":"modelarts", - "code":"88", + "code":"91", "des":"If you specify Storage Path during notebook instance creation, your compiled code will be automatically stored in your specified OBS bucket. If code invocation among diff", "doc_type":"usermanual", "kw":"Synchronizing Files with OBS,Using Jupyter Notebook,User Guide", @@ -882,7 +912,7 @@ { "uri":"modelarts_23_0037.html", "product_code":"modelarts", - "code":"89", + "code":"92", "des":"After code compiling is finished, you can save the entered code as a .py file which can be used for starting training jobs.Create and open a notebook instance or open an ", "doc_type":"usermanual", "kw":"Using the Convert to Python File Function,Using Jupyter Notebook,User Guide", @@ -892,7 +922,7 @@ { "uri":"modelarts_23_0330.html", "product_code":"modelarts", - "code":"90", + "code":"93", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"Using JupyterLab", @@ -902,7 +932,7 @@ { "uri":"modelarts_23_0209.html", "product_code":"modelarts", - "code":"91", + "code":"94", "des":"JupyterLab is an interactive development environment. It is a next-generation product of Jupyter Notebook. JupyterLab enables you to compile notebooks, operate terminals,", "doc_type":"usermanual", "kw":"Introduction to JupyterLab and Common Operations,Using JupyterLab,User Guide", @@ -912,7 +942,7 @@ { "uri":"modelarts_23_0331.html", "product_code":"modelarts", - "code":"92", + "code":"95", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"Uploading and Downloading Data", @@ -922,7 +952,7 @@ { "uri":"modelarts_23_0332.html", "product_code":"modelarts", - "code":"93", + "code":"96", "des":"On the JupyterLab page, click Upload Files to upload a file. For details, see Uploading a File in Introduction to JupyterLab and Common Operations. If a message is displa", "doc_type":"usermanual", "kw":"Uploading Data to JupyterLab,Uploading and Downloading Data,User Guide", @@ -932,7 +962,7 @@ { "uri":"modelarts_23_0333.html", "product_code":"modelarts", - "code":"94", + "code":"97", "des":"Only files within 100 MB in JupyterLab can be downloaded to a local PC. 
You can perform operations in different scenarios based on the storage location selected when crea", "doc_type":"usermanual", "kw":"Downloading a File from JupyterLab,Uploading and Downloading Data,User Guide", @@ -942,7 +972,7 @@ { "uri":"modelarts_23_0335.html", "product_code":"modelarts", - "code":"95", + "code":"98", "des":"In notebook instances, you can use ModelArts SDKs to manage OBS, training jobs, models, and real-time services.For details about how to use ModelArts SDKs, see ModelArts ", "doc_type":"usermanual", "kw":"Using ModelArts SDKs,Using JupyterLab,User Guide", @@ -952,7 +982,7 @@ { "uri":"modelarts_23_0336.html", "product_code":"modelarts", - "code":"96", + "code":"99", "des":"If you specify Storage Path during notebook instance creation, your compiled code will be automatically stored in your specified OBS bucket. If code invocation among diff", "doc_type":"usermanual", "kw":"Synchronizing Files with OBS,Using JupyterLab,User Guide", @@ -962,7 +992,7 @@ { "uri":"modelarts_23_0043.html", "product_code":"modelarts", - "code":"97", + "code":"100", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"Training Management", @@ -972,7 +1002,7 @@ { "uri":"modelarts_23_0044.html", "product_code":"modelarts", - "code":"98", + "code":"101", "des":"ModelArts provides model training for you to view the training effect, based on which you can adjust your model parameters. You can select resource pools (CPU or GPU) wit", "doc_type":"usermanual", "kw":"Introduction to Model Training,Training Management,User Guide", @@ -982,7 +1012,7 @@ { "uri":"modelarts_23_0156.html", "product_code":"modelarts", - "code":"99", + "code":"102", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"Built-in Algorithms", @@ -992,7 +1022,7 @@ { "uri":"modelarts_23_0045.html", "product_code":"modelarts", - "code":"100", + "code":"103", "des":"Based on the frequently-used AI engines in the industry, ModelArts provides built-in algorithms to meet a wide range of your requirements. You can directly select the alg", "doc_type":"usermanual", "kw":"Introduction to Built-in Algorithms,Built-in Algorithms,User Guide", @@ -1002,7 +1032,7 @@ { "uri":"modelarts_23_0157.html", "product_code":"modelarts", - "code":"101", + "code":"104", "des":"The built-in algorithms provided by ModelArts can be used for image classification, object detection, and image semantic segmentation. The requirements for the datasets v", "doc_type":"usermanual", "kw":"Requirements on Datasets,Built-in Algorithms,User Guide", @@ -1012,7 +1042,7 @@ { "uri":"modelarts_23_0158.html", "product_code":"modelarts", - "code":"102", + "code":"105", "des":"This section describes the built-in algorithms supported by ModelArts and the running parameters supported by each algorithm. 
You can set running parameters for a trainin", "doc_type":"usermanual", "kw":"Algorithms and Their Running Parameters,Built-in Algorithms,User Guide", @@ -1022,7 +1052,7 @@ { "uri":"modelarts_23_0235.html", "product_code":"modelarts", - "code":"103", + "code":"106", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"Creating a Training Job", @@ -1032,7 +1062,7 @@ { "uri":"modelarts_23_0046.html", "product_code":"modelarts", - "code":"104", + "code":"107", "des":"ModelArts supports multiple types of training jobs during the entire AI development process. Select a creation mode based on the algorithm source.Built-inIf you do not kn", "doc_type":"usermanual", "kw":"Introduction to Training Jobs,Creating a Training Job,User Guide", @@ -1042,7 +1072,7 @@ { "uri":"modelarts_23_0237.html", "product_code":"modelarts", - "code":"105", + "code":"108", "des":"If you do not have the algorithm development capability, you can use the built-in algorithms of ModelArts. After simple parameter adjustment, you can create a training jo", "doc_type":"usermanual", "kw":"Using Built-in Algorithms to Train Models,Creating a Training Job,User Guide", @@ -1052,7 +1082,7 @@ { "uri":"modelarts_23_0238.html", "product_code":"modelarts", - "code":"106", + "code":"109", "des":"If you use frequently-used frameworks, such as TensorFlow and MXNet, to develop algorithms locally, you can select Frequently-used to create training jobs and build model", "doc_type":"usermanual", "kw":"Using Frequently-used Frameworks to Train Models,Creating a Training Job,User Guide", @@ -1062,7 +1092,7 @@ { "uri":"modelarts_23_0239.html", "product_code":"modelarts", - "code":"107", + "code":"110", "des":"If the framework used for algorithm development is not a frequently-used framework, you can build an algorithm into a custom image and use the custom image to create a tr", "doc_type":"usermanual", "kw":"Using Custom Images to Train Models,Creating a Training Job,User Guide", @@ -1072,7 +1102,7 @@ { "uri":"modelarts_23_0159.html", "product_code":"modelarts", - "code":"108", + "code":"111", "des":"In the training job list, click Stop in the Operation column for a training job in the Running state to stop a running training job.If you have selected Save Training Par", "doc_type":"usermanual", "kw":"Stopping or Deleting a Job,Training Management,User Guide", @@ -1082,7 +1112,7 @@ { "uri":"modelarts_23_0047.html", "product_code":"modelarts", - "code":"109", + "code":"112", "des":"During model building, you may need to frequently tune the data, training parameters, or the model based on the training results to obtain a satisfactory model. 
ModelArts", "doc_type":"usermanual", "kw":"Managing Training Job Versions,Training Management,User Guide", @@ -1092,7 +1122,7 @@ { "uri":"modelarts_23_0048.html", "product_code":"modelarts", - "code":"110", + "code":"113", "des":"After a training job finishes, you can manage the training job versions and check whether the training result of the job is satisfactory by viewing the job details.In the", "doc_type":"usermanual", "kw":"Viewing Job Details,Training Management,User Guide", @@ -1102,7 +1132,7 @@ { "uri":"modelarts_23_0049.html", "product_code":"modelarts", - "code":"111", + "code":"114", "des":"You can store the parameter settings in ModelArts during job creation so that you can use the stored settings to create follow-up training jobs, which makes job creation ", "doc_type":"usermanual", "kw":"Managing Job Parameters,Training Management,User Guide", @@ -1112,7 +1142,7 @@ { "uri":"modelarts_23_0050.html", "product_code":"modelarts", - "code":"112", + "code":"115", "des":"You can create visualization jobs of TensorBoard and MindInsight types on ModelArts.TensorBoard supports training jobs based on the TensorFlow engine, and MindInsight sup", "doc_type":"usermanual", "kw":"Managing Visualization Jobs,Training Management,User Guide", @@ -1122,7 +1152,7 @@ { "uri":"modelarts_23_0051.html", "product_code":"modelarts", - "code":"113", + "code":"116", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"Model Management", @@ -1132,7 +1162,7 @@ { "uri":"modelarts_23_0052.html", "product_code":"modelarts", - "code":"114", + "code":"117", "des":"AI model development and optimization require frequent iterations and debugging. Changes in datasets, training code, or parameters may affect the quality of models. If th", "doc_type":"usermanual", "kw":"Introduction to Model Management,Model Management,User Guide", @@ -1142,7 +1172,7 @@ { "uri":"modelarts_23_0204.html", "product_code":"modelarts", - "code":"115", + "code":"118", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"Importing a Model", @@ -1152,7 +1182,7 @@ { "uri":"modelarts_23_0054.html", "product_code":"modelarts", - "code":"116", + "code":"119", "des":"You can create a training job on ModelArts and perform training to obtain a satisfactory model. Then import the model to Model Management for unified management. In addit", "doc_type":"usermanual", "kw":"Importing a Meta Model from a Training Job,Importing a Model,User Guide", @@ -1162,7 +1192,7 @@ { "uri":"modelarts_23_0205.html", "product_code":"modelarts", - "code":"117", + "code":"120", "des":"Because the configurations of models with the same functions are similar, ModelArts integrates the configurations of such models into a common template. 
By using this tem", "doc_type":"usermanual", "kw":"Importing a Meta Model from a Template,Importing a Model,User Guide", @@ -1172,7 +1202,7 @@ { "uri":"modelarts_23_0206.html", "product_code":"modelarts", - "code":"118", + "code":"121", "des":"For AI engines that are not supported by ModelArts, you can import the model you compile to ModelArts from custom images.For details about the specifications and descript", "doc_type":"usermanual", "kw":"Importing a Meta Model from a Container Image,Importing a Model,User Guide", @@ -1182,7 +1212,7 @@ { "uri":"modelarts_23_0207.html", "product_code":"modelarts", - "code":"119", + "code":"122", "des":"In scenarios where frequently-used frameworks are used for model development and training, you can import the model to ModelArts for unified management.The model has been", "doc_type":"usermanual", "kw":"Importing a Meta Model from OBS,Importing a Model,User Guide", @@ -1192,67 +1222,17 @@ { "uri":"modelarts_23_0055.html", "product_code":"modelarts", - "code":"120", + "code":"123", "des":"To facilitate source tracing and repeated model tuning, ModelArts provides the model version management function. You can manage models based on versions.You have importe", "doc_type":"usermanual", "kw":"Managing Model Versions,Model Management,User Guide", "title":"Managing Model Versions", "githuburl":"" }, - { - "uri":"modelarts_23_0106.html", - "product_code":"modelarts", - "code":"121", - "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", - "doc_type":"usermanual", - "kw":"Model Compression and Conversion", - "title":"Model Compression and Conversion", - "githuburl":"" - }, - { - "uri":"modelarts_23_0107.html", - "product_code":"modelarts", - "code":"122", - "des":"To obtain higher computing power, you can deploy the models created on ModelArts or a local PC on the Ascend chip. In this case, you need to compress or convert the model", - "doc_type":"usermanual", - "kw":"Compressing and Converting Models,Model Compression and Conversion,User Guide", - "title":"Compressing and Converting Models", - "githuburl":"" - }, - { - "uri":"modelarts_23_0108.html", - "product_code":"modelarts", - "code":"123", - "des":"During model conversion, the model input directory must comply with certain specifications. 
This section describes how to upload your model package to OBS.The requirement", - "doc_type":"usermanual", - "kw":"Model Input Path Specifications,Model Compression and Conversion,User Guide", - "title":"Model Input Path Specifications", - "githuburl":"" - }, - { - "uri":"modelarts_23_0109.html", - "product_code":"modelarts", - "code":"124", - "des":"The following describes the output path of the model run on the Ascend chip after conversion:For TensorFlow-based models, the output path must comply with the following s", - "doc_type":"usermanual", - "kw":"Model Output Path Description,Model Compression and Conversion,User Guide", - "title":"Model Output Path Description", - "githuburl":"" - }, - { - "uri":"modelarts_23_0110.html", - "product_code":"modelarts", - "code":"125", - "des":"ModelArts provides the following conversion templates based on different AI frameworks:TF-FrozenGraph-To-Ascend-C32Convert the model trained by the TensorFlow framework a", - "doc_type":"usermanual", - "kw":"Conversion Templates,Model Compression and Conversion,User Guide", - "title":"Conversion Templates", - "githuburl":"" - }, { "uri":"modelarts_23_0057.html", "product_code":"modelarts", - "code":"126", + "code":"124", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"Model Deployment", @@ -1262,7 +1242,7 @@ { "uri":"modelarts_23_0058.html", "product_code":"modelarts", - "code":"127", + "code":"125", "des":"After a training job is complete and a model is generated, you can deploy the model on the Service Deployment page. You can also deploy the model imported from OBS. Model", "doc_type":"usermanual", "kw":"Introduction to Model Deployment,Model Deployment,User Guide", @@ -1272,7 +1252,7 @@ { "uri":"modelarts_23_0059.html", "product_code":"modelarts", - "code":"128", + "code":"126", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"Real-Time Services", @@ -1282,7 +1262,7 @@ { "uri":"modelarts_23_0060.html", "product_code":"modelarts", - "code":"129", + "code":"127", "des":"After a model is prepared, you can deploy the model as a real-time service and predict and call the service.A maximum of one real-time service can be deployed.Data has be", "doc_type":"usermanual", "kw":"Deploying a Model as a Real-Time Service,Real-Time Services,User Guide", @@ -1292,7 +1272,7 @@ { "uri":"modelarts_23_0061.html", "product_code":"modelarts", - "code":"130", + "code":"128", "des":"After a model is deployed as a real-time service, you can access the service page to view its details.Log in to the ModelArts management console and choose Service Deploy", "doc_type":"usermanual", "kw":"Viewing Service Details,Real-Time Services,User Guide", @@ -1302,7 +1282,7 @@ { "uri":"modelarts_23_0062.html", "product_code":"modelarts", - "code":"131", + "code":"129", "des":"After a model is deployed as a real-time service, you can debug code or add files for testing on the Prediction tab page. 
Based on the input request (JSON text or file) d", "doc_type":"usermanual", "kw":"Testing a Service,Real-Time Services,User Guide", @@ -1312,7 +1292,7 @@ { "uri":"modelarts_23_0063.html", "product_code":"modelarts", - "code":"132", + "code":"130", "des":"If a real-time service is in the Running state, the real-time service has been deployed successfully. This service provides a standard RESTful API for users to call. Befo", "doc_type":"usermanual", "kw":"Accessing a Real-Time Service (Token-based Authentication),Real-Time Services,User Guide", @@ -1322,7 +1302,7 @@ { "uri":"modelarts_23_0065.html", "product_code":"modelarts", - "code":"133", + "code":"131", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"Batch Services", @@ -1332,7 +1312,7 @@ { "uri":"modelarts_23_0066.html", "product_code":"modelarts", - "code":"134", + "code":"132", "des":"After a model is prepared, you can deploy it as a batch service. The Service Deployment > Batch Services page lists all batch services. You can enter a service name in th", "doc_type":"usermanual", "kw":"Deploying a Model as a Batch Service,Batch Services,User Guide", @@ -1342,7 +1322,7 @@ { "uri":"modelarts_23_0067.html", "product_code":"modelarts", - "code":"135", + "code":"133", "des":"When deploying a batch service, you can select the location of the output data directory. You can view the running result of the batch service that is in the Running comp", "doc_type":"usermanual", "kw":"Viewing the Batch Service Prediction Result,Batch Services,User Guide", @@ -1352,7 +1332,7 @@ { "uri":"modelarts_23_0071.html", "product_code":"modelarts", - "code":"136", + "code":"134", "des":"For a deployed service, you can modify its basic information to match service changes. You can modify the basic information about a service in either of the following way", "doc_type":"usermanual", "kw":"Modifying a Service,Model Deployment,User Guide", @@ -1362,7 +1342,7 @@ { "uri":"modelarts_23_0072.html", "product_code":"modelarts", - "code":"137", + "code":"135", "des":"You can start services in the Successful, Abnormal, or Stopped status. Services in the Deploying status cannot be started. A service is billed when it is started and in t", "doc_type":"usermanual", "kw":"Starting or Stopping a Service,Model Deployment,User Guide", @@ -1372,7 +1352,7 @@ { "uri":"modelarts_23_0073.html", "product_code":"modelarts", - "code":"138", + "code":"136", "des":"If a service is no longer in use, you can delete it to release resources.Log in to the ModelArts management console and choose Service Deployment from the left navigation", "doc_type":"usermanual", "kw":"Deleting a Service,Model Deployment,User Guide", @@ -1382,7 +1362,7 @@ { "uri":"modelarts_23_0076.html", "product_code":"modelarts", - "code":"139", + "code":"137", "des":"When using ModelArts to implement AI Development Lifecycle, you can use two different resource pools to train and deploy models.Public Resource Pool: provides public larg", "doc_type":"usermanual", "kw":"Resource Pools,User Guide", @@ -1392,7 +1372,7 @@ { "uri":"modelarts_23_0083.html", "product_code":"modelarts", - "code":"140", + "code":"138", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"Custom Images", @@ -1402,7 +1382,7 @@ { "uri":"modelarts_23_0084.html", "product_code":"modelarts", - "code":"141", + "code":"139", "des":"ModelArts provides multiple frequently-used built-in engines. However, when users have special requirements for the deep learning engine and development library, the buil", "doc_type":"usermanual", "kw":"Introduction to Custom Images,Custom Images,User Guide", @@ -1412,7 +1392,7 @@ { "uri":"modelarts_23_0085.html", "product_code":"modelarts", - "code":"142", + "code":"140", "des":"ModelArts allows you to use custom images to create training jobs and import models. Before creating and uploading a custom image, understand the following information:So", "doc_type":"usermanual", "kw":"Creating and Uploading a Custom Image,Custom Images,User Guide", @@ -1422,7 +1402,7 @@ { "uri":"modelarts_23_0216.html", "product_code":"modelarts", - "code":"143", + "code":"141", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"For Training Models", @@ -1432,7 +1412,7 @@ { "uri":"modelarts_23_0217.html", "product_code":"modelarts", - "code":"144", + "code":"142", "des":"When creating an image using locally developed models and training scripts, ensure that they meet the specifications defined by ModelArts.Custom images cannot contain mal", "doc_type":"usermanual", "kw":"Specifications for Custom Images Used for Training Jobs,For Training Models,User Guide", @@ -1442,7 +1422,7 @@ { "uri":"modelarts_23_0087.html", "product_code":"modelarts", - "code":"145", + "code":"143", "des":"After creating and uploading a custom image to SWR, you can use the image to create a training job on the ModelArts management console to complete model training.You have", "doc_type":"usermanual", "kw":"Creating a Training Job Using a Custom Image (GPU),For Training Models,User Guide", @@ -1452,7 +1432,7 @@ { "uri":"modelarts_23_0218.html", "product_code":"modelarts", - "code":"146", + "code":"144", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"For Importing Models", @@ -1462,7 +1442,7 @@ { "uri":"modelarts_23_0219.html", "product_code":"modelarts", - "code":"147", + "code":"145", "des":"When creating an image using locally developed models, ensure that they meet the specifications defined by ModelArts.Custom images cannot contain malicious code.The size ", "doc_type":"usermanual", "kw":"Specifications for Custom Images Used for Importing Models,For Importing Models,User Guide", @@ -1472,7 +1452,7 @@ { "uri":"modelarts_23_0086.html", "product_code":"modelarts", - "code":"148", + "code":"146", "des":"After creating and uploading a custom image to SWR, you can use the image to import a model and deploy the model as a service on the ModelArts management console.You have", "doc_type":"usermanual", "kw":"Importing a Model Using a Custom Image,For Importing Models,User Guide", @@ -1482,7 +1462,7 @@ { "uri":"modelarts_23_0090.html", "product_code":"modelarts", - "code":"149", + "code":"147", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"Model Package Specifications", @@ -1492,7 +1472,7 @@ { "uri":"modelarts_23_0091.html", "product_code":"modelarts", - "code":"150", + "code":"148", "des":"When you import models in Model Management, if the meta model is imported from OBS or a container image, the model package must meet the following specifications:The mode", "doc_type":"usermanual", "kw":"Model Package Specifications,Model Package Specifications,User Guide", @@ -1502,7 +1482,7 @@ { "uri":"modelarts_23_0092.html", "product_code":"modelarts", - "code":"151", + "code":"149", "des":"A model developer needs to compile a configuration file when publishing a model. The model configuration file describes the model usage, computing framework, precision, i", "doc_type":"usermanual", "kw":"Specifications for Compiling the Model Configuration File,Model Package Specifications,User Guide", @@ -1512,7 +1492,7 @@ { "uri":"modelarts_23_0093.html", "product_code":"modelarts", - "code":"152", + "code":"150", "des":"This section describes how to compile model inference code in ModelArts. The following also provides an example of inference code for the TensorFlow engine and an example", "doc_type":"usermanual", "kw":"Specifications for Compiling Model Inference Code,Model Package Specifications,User Guide", @@ -1522,7 +1502,7 @@ { "uri":"modelarts_23_0097.html", "product_code":"modelarts", - "code":"153", + "code":"151", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"Model Templates", @@ -1532,7 +1512,7 @@ { "uri":"modelarts_23_0098.html", "product_code":"modelarts", - "code":"154", + "code":"152", "des":"Because the configurations of models with the same functions are similar, ModelArts integrates the configurations of such models into a common template. 
By using this tem", "doc_type":"usermanual", "kw":"Introduction to Model Templates,Model Templates,User Guide", @@ -1542,7 +1522,7 @@ { "uri":"modelarts_23_0118.html", "product_code":"modelarts", - "code":"155", + "code":"153", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"Template Description", @@ -1552,7 +1532,7 @@ { "uri":"modelarts_23_0161.html", "product_code":"modelarts", - "code":"156", + "code":"154", "des":"AI engine: TensorFlow 1.8; Environment: Python 2.7; Input and output mode: undefined mode. Select an appropriate input and output mode based on the model function or appl", "doc_type":"usermanual", "kw":"TensorFlow-py27 General Template,Template Description,User Guide", @@ -1562,7 +1542,7 @@ { "uri":"modelarts_23_0162.html", "product_code":"modelarts", - "code":"157", + "code":"155", "des":"AI engine: TensorFlow 1.8; Environment: Python 3.6; Input and output mode: undefined mode. Select an appropriate input and output mode based on the model function or appl", "doc_type":"usermanual", "kw":"TensorFlow-py36 General Template,Template Description,User Guide", @@ -1572,7 +1552,7 @@ { "uri":"modelarts_23_0163.html", "product_code":"modelarts", - "code":"158", + "code":"156", "des":"AI engine: MXNet 1.2.1; Environment: Python 2.7; Input and output mode: undefined mode. Select an appropriate input and output mode based on the model function or applica", "doc_type":"usermanual", "kw":"MXNet-py27 General Template,Template Description,User Guide", @@ -1582,7 +1562,7 @@ { "uri":"modelarts_23_0164.html", "product_code":"modelarts", - "code":"159", + "code":"157", "des":"AI engine: MXNet 1.2.1; Environment: Python 3.7; Input and output mode: undefined mode. Select an appropriate input and output mode based on the model function or applica", "doc_type":"usermanual", "kw":"MXNet-py37 General Template,Template Description,User Guide", @@ -1592,7 +1572,7 @@ { "uri":"modelarts_23_0165.html", "product_code":"modelarts", - "code":"160", + "code":"158", "des":"AI engine: PyTorch 1.0; Environment: Python 2.7; Input and output mode: undefined mode. Select an appropriate input and output mode based on the model function or applica", "doc_type":"usermanual", "kw":"PyTorch-py27 General Template,Template Description,User Guide", @@ -1602,7 +1582,7 @@ { "uri":"modelarts_23_0166.html", "product_code":"modelarts", - "code":"161", + "code":"159", "des":"AI engine: PyTorch 1.0; Environment: Python 3.7; Input and output mode: undefined mode. Select an appropriate input and output mode based on the model function or applica", "doc_type":"usermanual", "kw":"PyTorch-py37 General Template,Template Description,User Guide", @@ -1612,7 +1592,7 @@ { "uri":"modelarts_23_0167.html", "product_code":"modelarts", - "code":"162", + "code":"160", "des":"AI engine: CPU-based Caffe 1.0; Environment: Python 2.7; Input and output mode: undefined mode. Select an appropriate input and output mode based on the model function or", "doc_type":"usermanual", "kw":"Caffe-CPU-py27 General Template,Template Description,User Guide", @@ -1622,7 +1602,7 @@ { "uri":"modelarts_23_0168.html", "product_code":"modelarts", - "code":"163", + "code":"161", "des":"AI engine: GPU-based Caffe 1.0; Environment: Python 2.7; Input and output mode: undefined mode. 
Select an appropriate input and output mode based on the model function or", "doc_type":"usermanual", "kw":"Caffe-GPU-py27 General Template,Template Description,User Guide", @@ -1632,7 +1612,7 @@ { "uri":"modelarts_23_0169.html", "product_code":"modelarts", - "code":"164", + "code":"162", "des":"AI engine: CPU-based Caffe 1.0; Environment: Python 3.7; Input and output mode: undefined mode. Select an appropriate input and output mode based on the model function or", "doc_type":"usermanual", "kw":"Caffe-CPU-py37 General Template,Template Description,User Guide", @@ -1642,7 +1622,7 @@ { "uri":"modelarts_23_0170.html", "product_code":"modelarts", - "code":"165", + "code":"163", "des":"AI engine: GPU-based Caffe 1.0; Environment: Python 3.7; Input and output mode: undefined mode. Select an appropriate input and output mode based on the model function or", "doc_type":"usermanual", "kw":"Caffe-GPU-py37 General Template,Template Description,User Guide", @@ -1652,7 +1632,7 @@ { "uri":"modelarts_23_0099.html", "product_code":"modelarts", - "code":"166", + "code":"164", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"Input and Output Modes", @@ -1662,7 +1642,7 @@ { "uri":"modelarts_23_0100.html", "product_code":"modelarts", - "code":"167", + "code":"165", "des":"This is a built-in input and output mode for object detection. The models using this mode are identified as object detection models. The prediction request path is /, the", "doc_type":"usermanual", "kw":"Built-in Object Detection Mode,Input and Output Modes,User Guide", @@ -1672,7 +1652,7 @@ { "uri":"modelarts_23_0101.html", "product_code":"modelarts", - "code":"168", + "code":"166", "des":"The built-in image processing input and output mode can be applied to models such as image classification, object detection, and image semantic segmentation. The predicti", "doc_type":"usermanual", "kw":"Built-in Image Processing Mode,Input and Output Modes,User Guide", @@ -1682,7 +1662,7 @@ { "uri":"modelarts_23_0102.html", "product_code":"modelarts", - "code":"169", + "code":"167", "des":"This is a built-in input and output mode for predictive analytics. The models using this mode are identified as predictive analytics models. The prediction request path i", "doc_type":"usermanual", "kw":"Built-in Predictive Analytics Mode,Input and Output Modes,User Guide", @@ -1692,7 +1672,7 @@ { "uri":"modelarts_23_0103.html", "product_code":"modelarts", - "code":"170", + "code":"168", "des":"The undefined mode does not define the input and output mode. The input and output mode is determined by the model. Select this mode only when the existing input and outp", "doc_type":"usermanual", "kw":"Undefined Mode,Input and Output Modes,User Guide", @@ -1702,7 +1682,7 @@ { "uri":"modelarts_23_0172.html", "product_code":"modelarts", - "code":"171", + "code":"169", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"Examples of Custom Scripts", @@ -1712,7 +1692,7 @@ { "uri":"modelarts_23_0173.html", "product_code":"modelarts", - "code":"172", + "code":"170", "des":"TensorFlow has two types of APIs: Keras and tf. Keras and tf use different code for training and saving models, but the same code for inference.", "doc_type":"usermanual", "kw":"TensorFlow,Examples of Custom Scripts,User Guide", @@ -1722,7 +1702,7 @@ { "uri":"modelarts_23_0175.html", "product_code":"modelarts", - "code":"173", + "code":"171", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"PyTorch,Examples of Custom Scripts,User Guide", @@ -1732,7 +1712,7 @@ { "uri":"modelarts_23_0176.html", "product_code":"modelarts", - "code":"174", + "code":"172", "des":"lenet_train_test.prototxt filelenet_solver.prototxt fileTrain the model.The caffemodel file is generated after model training. Rewrite the lenet_train_test.prototxt file ", "doc_type":"usermanual", "kw":"Caffe,Examples of Custom Scripts,User Guide", @@ -1742,8 +1722,8 @@ { "uri":"modelarts_23_0177.html", "product_code":"modelarts", - "code":"175", - "des":"After the model is saved, it must be uploaded to the OBS directory before being published. The config.json and customize_service.py files must be contained during publish", + "code":"173", + "des":"Before training, download the iris.csv dataset, decompress it, and upload it to the /home/ma-user/work/ directory of the notebook instance. Download the iris.csv dataset ", "doc_type":"usermanual", "kw":"XGBoost,Examples of Custom Scripts,User Guide", "title":"XGBoost", @@ -1752,7 +1732,7 @@ { "uri":"modelarts_23_0178.html", "product_code":"modelarts", - "code":"176", + "code":"174", "des":"After the model is saved, it must be uploaded to the OBS directory before being published. The config.json configuration and customize_service.py must be contained during", "doc_type":"usermanual", "kw":"PySpark,Examples of Custom Scripts,User Guide", @@ -1762,8 +1742,8 @@ { "uri":"modelarts_23_0179.html", "product_code":"modelarts", - "code":"177", - "des":"After the model is saved, it must be uploaded to the OBS directory before being published. The config.json and customize_service.py files must be contained during publish", + "code":"175", + "des":"Before training, download the iris.csv dataset, decompress it, and upload it to the /home/ma-user/work/ directory of the notebook instance. Download the iris.csv dataset ", "doc_type":"usermanual", "kw":"Scikit Learn,Examples of Custom Scripts,User Guide", "title":"Scikit Learn", @@ -1772,7 +1752,7 @@ { "uri":"modelarts_23_0077.html", "product_code":"modelarts", - "code":"178", + "code":"176", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"Permissions Management", @@ -1782,7 +1762,7 @@ { "uri":"modelarts_23_0078.html", "product_code":"modelarts", - "code":"179", + "code":"177", "des":"A fine-grained policy is a set of permissions defining which operations on which cloud services can be performed. Each policy can define multiple permissions. After a pol", "doc_type":"usermanual", "kw":"Basic Concepts,Permissions Management,User Guide", @@ -1792,7 +1772,7 @@ { "uri":"modelarts_23_0079.html", "product_code":"modelarts", - "code":"180", + "code":"178", "des":"A fine-grained policy consists of the policy version (the Version field) and statement (the Statement field).Version: Distinguishes between role-based access control (RBA", "doc_type":"usermanual", "kw":"Creating a User and Granting Permissions,Permissions Management,User Guide", @@ -1802,7 +1782,7 @@ { "uri":"modelarts_23_0080.html", "product_code":"modelarts", - "code":"181", + "code":"179", "des":"If default policies cannot meet the requirements on fine-grained access control, you can create custom policies and assign the policies to the user group.You can create c", "doc_type":"usermanual", "kw":"Creating a Custom Policy,Permissions Management,User Guide", @@ -1812,7 +1792,7 @@ { "uri":"modelarts_23_0186.html", "product_code":"modelarts", - "code":"182", + "code":"180", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"Monitoring", @@ -1822,7 +1802,7 @@ { "uri":"modelarts_23_0187.html", "product_code":"modelarts", - "code":"183", + "code":"181", "des":"The cloud service platform provides Cloud Eye to help you better understand the status of your ModelArts real-time services and models. You can use Cloud Eye to automatic", "doc_type":"usermanual", "kw":"ModelArts Metrics,Monitoring,User Guide", @@ -1832,7 +1812,7 @@ { "uri":"modelarts_23_0188.html", "product_code":"modelarts", - "code":"184", + "code":"182", "des":"Setting alarm rules allows you to customize the monitored objects and notification policies so that you can know the status of ModelArts real-time services and models in ", "doc_type":"usermanual", "kw":"Setting Alarm Rules,Monitoring,User Guide", @@ -1842,7 +1822,7 @@ { "uri":"modelarts_23_0189.html", "product_code":"modelarts", - "code":"185", + "code":"183", "des":"Cloud Eye on the cloud service platform monitors the status of ModelArts real-time services and model loads. You can obtain the monitoring metrics of each ModelArts real-", "doc_type":"usermanual", "kw":"Viewing Monitoring Metrics,Monitoring,User Guide", @@ -1852,7 +1832,7 @@ { "uri":"modelarts_23_0249.html", "product_code":"modelarts", - "code":"186", + "code":"184", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"Audit Logs", @@ -1862,7 +1842,7 @@ { "uri":"modelarts_23_0250.html", "product_code":"modelarts", - "code":"187", + "code":"185", "des":"CTS is available on the public cloud platform. 
With CTS, you can record operations associated with ModelArts for later query, audit, and backtrack operations.CTS has been", "doc_type":"usermanual", "kw":"Key Operations Recorded by CTS,Audit Logs,User Guide", @@ -1872,7 +1852,7 @@ { "uri":"modelarts_23_0251.html", "product_code":"modelarts", - "code":"188", + "code":"186", "des":"After CTS is enabled, CTS starts recording operations related to ModelArts. The CTS management console stores the last seven days of operation records. This section descr", "doc_type":"usermanual", "kw":"Viewing Audit Logs,Audit Logs,User Guide", @@ -1882,7 +1862,7 @@ { "uri":"modelarts_05_0000.html", "product_code":"modelarts", - "code":"189", + "code":"187", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"FAQs", @@ -1892,7 +1872,7 @@ { "uri":"modelarts_05_0014.html", "product_code":"modelarts", - "code":"190", + "code":"188", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"General Issues", @@ -1902,7 +1882,7 @@ { "uri":"modelarts_05_0001.html", "product_code":"modelarts", - "code":"191", + "code":"189", "des":"ModelArts is a one-stop development platform for AI developers. With data preprocessing, semi-automated data labeling, distributed training, automated model building, and", "doc_type":"usermanual", "kw":"What Is ModelArts?,General Issues,User Guide", @@ -1912,17 +1892,17 @@ { "uri":"modelarts_05_0003.html", "product_code":"modelarts", - "code":"192", - "des":"ModelArts uses Object Storage Service (OBS) to store data and model backups and snapshots. OBS provides secure, reliable, low-cost storage. For more details, see Object S", + "code":"190", + "des":"ModelArts uses Identity and Access Management (IAM) for authentication and authorization. For more information about IAM, see Identity and Access Management User Guide.Mo", "doc_type":"usermanual", - "kw":"What Are the Relationships Between ModelArts and Other Services,General Issues,User Guide", - "title":"What Are the Relationships Between ModelArts and Other Services", + "kw":"What Are The Relationships Between ModelArts And Other Services,General Issues,User Guide", + "title":"What Are The Relationships Between ModelArts And Other Services", "githuburl":"" }, { "uri":"modelarts_05_0004.html", "product_code":"modelarts", - "code":"193", + "code":"191", "des":"Log in to the console, enter the My Credentials page, and choose Access Keys > Create Access Key.In the Create Access Key dialog box that is displayed, use the login pass", "doc_type":"usermanual", "kw":"How Do I Obtain Access Keys?,General Issues,User Guide", @@ -1932,7 +1912,7 @@ { "uri":"modelarts_05_0013.html", "product_code":"modelarts", - "code":"194", + "code":"192", "des":"Before using ModelArts to develop AI models, data needs to be uploaded to an OBS bucket. 
You can log in to the OBS console to create an OBS bucket, create a folder, and u", "doc_type":"usermanual", "kw":"How Do I Upload Data to OBS?,General Issues,User Guide", @@ -1942,7 +1922,7 @@ { "uri":"modelarts_05_0128.html", "product_code":"modelarts", - "code":"195", + "code":"193", "des":"Supported AI frameworks and versions of ModelArts vary slightly based on the development environment, training jobs, and model inference (model management and deployment)", "doc_type":"usermanual", "kw":"Which AI Frameworks Does ModelArts Support?,General Issues,User Guide", @@ -1952,28 +1932,18 @@ { "uri":"modelarts_21_0055.html", "product_code":"modelarts", - "code":"196", + "code":"194", "des":"For common users, ModelArts provides the predictive analytics function of ExeML to train models based on structured data.For advanced users, ModelArts provides the notebo", "doc_type":"usermanual", "kw":"How Do I Use ModelArts to Train Models Based on Structured Data?,General Issues,User Guide", "title":"How Do I Use ModelArts to Train Models Based on Structured Data?", "githuburl":"" }, - { - "uri":"modelarts_21_0056.html", - "product_code":"modelarts", - "code":"197", - "des":"If an OBS directory needs to be specified for using ModelArts functions, such as creating training jobs and datasets, ensure that the OBS bucket and ModelArts are in the ", - "doc_type":"usermanual", - "kw":"Why Cannot I Find the OBS Bucket on ModelArts After Uploading Data to OBS?,General Issues,User Guide", - "title":"Why Cannot I Find the OBS Bucket on ModelArts After Uploading Data to OBS?", - "githuburl":"" - }, { "uri":"modelarts_21_0057.html", "product_code":"modelarts", - "code":"198", - "des":"No. The current ModelArts version does not support multiple projects. Customers can only use it in the default eu-de project.", + "code":"195", + "des":"The current version supports multiple projects.", "doc_type":"usermanual", "kw":"Does ModelArts Support Multiple Projects?,General Issues,User Guide", "title":"Does ModelArts Support Multiple Projects?", @@ -1982,7 +1952,7 @@ { "uri":"modelarts_21_0058.html", "product_code":"modelarts", - "code":"199", + "code":"196", "des":"To view all files stored in OBS when using notebook instances or training jobs, use either of the following methods:OBS consoleLog in to OBS console using the current acc", "doc_type":"usermanual", "kw":"How Do I View All Files in an OBS Directory on ModelArts?,General Issues,User Guide", @@ -1992,7 +1962,7 @@ { "uri":"modelarts_21_0059.html", "product_code":"modelarts", - "code":"200", + "code":"197", "des":"No. The current ModelArts version does not support encrypted files stored in OBS.", "doc_type":"usermanual", "kw":"Does ModelArts Support Encrypted Files Stored in OBS?,General Issues,User Guide", @@ -2002,7 +1972,7 @@ { "uri":"modelarts_05_0015.html", "product_code":"modelarts", - "code":"201", + "code":"198", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"ExeML", @@ -2012,7 +1982,7 @@ { "uri":"modelarts_05_0002.html", "product_code":"modelarts", - "code":"202", + "code":"199", "des":"ExeML is the process of automating model design, parameter tuning, and model training, compression, and deployment with the labeled data. 
The process is free of coding an", "doc_type":"usermanual", "kw":"What Is ExeML?,ExeML,User Guide", @@ -2022,7 +1992,7 @@ { "uri":"modelarts_05_0018.html", "product_code":"modelarts", - "code":"203", + "code":"200", "des":"Image classification is an image processing method that separates different classes of targets according to the features reflected in the images. With quantitative analys", "doc_type":"usermanual", "kw":"What Are Image Classification and Object Detection?,ExeML,User Guide", @@ -2032,7 +2002,7 @@ { "uri":"modelarts_05_0005.html", "product_code":"modelarts", - "code":"204", + "code":"201", "des":"The Train button turns to be available when the training images for an image classification project are classified into at least two categories, and each category contain", "doc_type":"usermanual", "kw":"What Should I Do When the Train Button Is Unavailable After I Create an Image Classification Project", @@ -2042,7 +2012,7 @@ { "uri":"modelarts_05_0006.html", "product_code":"modelarts", - "code":"205", + "code":"202", "des":"Yes. You can add multiple labels to an image.", "doc_type":"usermanual", "kw":"Can I Add Multiple Labels to an Image for an Object Detection Project?,ExeML,User Guide", @@ -2052,7 +2022,7 @@ { "uri":"modelarts_05_0008.html", "product_code":"modelarts", - "code":"206", + "code":"203", "des":"Models created in ExeML are deployed as real-time services. You can add images or compile code to test the services, as well as call the APIs using the URLs.After model d", "doc_type":"usermanual", "kw":"What Type of Service Is Deployed in ExeML?,ExeML,User Guide", @@ -2062,7 +2032,7 @@ { "uri":"modelarts_05_0010.html", "product_code":"modelarts", - "code":"207", + "code":"204", "des":"Images in JPG, JPEG, PNG, or BMP format are supported.", "doc_type":"usermanual", "kw":"What Formats of Images Are Supported by Object Detection or Image Classification Projects?,ExeML,Use", @@ -2072,7 +2042,7 @@ { "uri":"modelarts_21_0062.html", "product_code":"modelarts", - "code":"208", + "code":"205", "des":"Data files cannot be stored in the root directory of an OBS bucket.The name of files in a dataset consists of letters, digits, hyphens (-), and underscores (_), and the f", "doc_type":"usermanual", "kw":"What Are the Requirements for Training Data When You Create a Predictive Analytics Project in ExeML?", @@ -2082,7 +2052,7 @@ { "uri":"modelarts_21_0061.html", "product_code":"modelarts", - "code":"209", + "code":"206", "des":"The model cannot be downloaded. However, you can view the model or deploy the model as a real-time service on the model management page.", "doc_type":"usermanual", "kw":"Can I Download a Model After It Is Automatically Trained?,ExeML,User Guide", @@ -2092,7 +2062,7 @@ { "uri":"modelarts_21_0060.html", "product_code":"modelarts", - "code":"210", + "code":"207", "des":"Each round of training generates a training version in an ExeML project. If a training result is unsatisfactory (for example, unsatisfactory about the training precision)", "doc_type":"usermanual", "kw":"How Do I Perform Incremental Training in an ExeML Project?,ExeML,User Guide", @@ -2102,7 +2072,7 @@ { "uri":"modelarts_05_0101.html", "product_code":"modelarts", - "code":"211", + "code":"208", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"Data Management", @@ -2112,7 +2082,7 @@ { "uri":"modelarts_21_0063.html", "product_code":"modelarts", - "code":"212", + "code":"209", "des":"For the data management function, there are limits on the image size when you upload images to the datasets whose labeling type is object detection or image classificatio", "doc_type":"usermanual", "kw":"Are There Size Limits for Images to be Uploaded?,Data Management,User Guide", @@ -2122,7 +2092,7 @@ { "uri":"modelarts_05_0103.html", "product_code":"modelarts", - "code":"213", + "code":"210", "des":"Failed to use the manifest file of the published dataset to import data again.Data has been changed in the OBS directory of the published dataset, for example, images hav", "doc_type":"usermanual", "kw":"Why Does Data Fail to Be Imported Using the Manifest File?,Data Management,User Guide", @@ -2132,7 +2102,7 @@ { "uri":"modelarts_05_0067.html", "product_code":"modelarts", - "code":"214", + "code":"211", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"Notebook", @@ -2142,7 +2112,7 @@ { "uri":"modelarts_05_0071.html", "product_code":"modelarts", - "code":"215", + "code":"212", "des":"Log in to the ModelArts management console, and choose DevEnviron > Notebooks.In the notebook list, click Open in the Operation column of the target notebook instance to ", "doc_type":"usermanual", "kw":"How Do I Enable the Terminal Function in DevEnviron of ModelArts?,Notebook,User Guide", @@ -2150,10 +2120,10 @@ "githuburl":"" }, { - "uri":"modelarts_21_0064.html", + "uri":"modelarts_05_0022.html", "product_code":"modelarts", - "code":"216", - "des":"Log in to the ModelArts management console, and choose DevEnviron > Notebooks.In the notebook list, click Open in the Operation column of the target notebook instance to ", + "code":"213", + "des":"Multiple environments have been integrated into ModelArts Notebook. These environments contain Jupyter Notebook and Python packages, including TensorFlow, MXNet, Caffe, P", "doc_type":"usermanual", "kw":"How Do I Install External Libraries in a Notebook Instance?,Notebook,User Guide", "title":"How Do I Install External Libraries in a Notebook Instance?", @@ -2162,7 +2132,7 @@ { "uri":"modelarts_21_0065.html", "product_code":"modelarts", - "code":"217", + "code":"214", "des":"Notebook instances in DevEnviron support the Keras engine. 
The Keras engine is not supported in job training and model deployment (inference).Keras is an advanced neural ", "doc_type":"usermanual", "kw":"Is the Keras Engine Supported?,Notebook,User Guide", @@ -2172,7 +2142,7 @@ { "uri":"modelarts_21_0066.html", "product_code":"modelarts", - "code":"218", + "code":"215", "des":"After the training code is debugged in a notebook instance, if you need to use the training code for training jobs on ModelArts, convert the ipynb file into a Python file", "doc_type":"usermanual", "kw":"How Do I Use Training Code in Training Jobs After Debugging the Code in a Notebook Instance?,Noteboo", @@ -2182,17 +2152,27 @@ { "uri":"modelarts_21_0067.html", "product_code":"modelarts", - "code":"219", + "code":"216", "des":"In the notebook instance, error message \"No Space left...\" is displayed after the pip install command is run.You are advised to run the pip install --no-cache ** command", "doc_type":"usermanual", "kw":"What Should I Do When the System Displays an Error Message Indicating that No Space Left After I Run", "title":"What Should I Do When the System Displays an Error Message Indicating that No Space Left After I Run the pip install Command?", "githuburl":"" }, + { + "uri":"modelarts_05_0024.html", + "product_code":"modelarts", + "code":"217", + "des":"In a notebook instance, you can call the ModelArts MoXing API or SDK to exchange data with OBS for uploading a file to OBS or downloading a file from OBS to the notebook ", + "doc_type":"faq", + "kw":"How Do I Upload a File from a Notebook Instance to OBS or Download a File from OBS to a Notebook Ins", + "title":"How Do I Upload a File from a Notebook Instance to OBS or Download a File from OBS to a Notebook Instance?", + "githuburl":"" + }, { "uri":"modelarts_21_0068.html", "product_code":"modelarts", - "code":"220", + "code":"218", "des":"Small files (files smaller than 100 MB)Open a notebook instance and click Upload in the upper right corner to upload a local file to the notebook instance.Upload a small ", "doc_type":"usermanual", "kw":"How Do I Upload Local Files to a Notebook Instance?,Notebook,User Guide", @@ -2202,7 +2182,7 @@ { "uri":"modelarts_05_0045.html", "product_code":"modelarts", - "code":"221", + "code":"219", "des":"If you use OBS to store the notebook instance, after you click upload, the data is directly uploaded to the target OBS path, that is, the OBS path specified when the note", "doc_type":"usermanual", "kw":"Where Will the Data Be Uploaded to?,Notebook,User Guide", @@ -2212,7 +2192,7 @@ { "uri":"modelarts_21_0069.html", "product_code":"modelarts", - "code":"222", + "code":"220", "des":"The following uses the TensorFlow-1.8 engine as an example. The operations on other engines are similar. You only need to replace the engine name and version number in th", "doc_type":"usermanual", "kw":"Should I Access the Python Environment Same as the Notebook Kernel of the Current Instance in the Te", @@ -2222,7 +2202,7 @@ { "uri":"modelarts_21_0070.html", "product_code":"modelarts", - "code":"223", + "code":"221", "des":"If a notebook instance fails to execute code, you can locate and rectify the fault based on the following scenarios:If the execution of a cell is suspended or lasts for a", "doc_type":"usermanual", "kw":"What Do I Do If a Notebook Instance Fails to Execute Code?,Notebook,User Guide", @@ -2232,7 +2212,7 @@ { "uri":"modelarts_21_0071.html", "product_code":"modelarts", - "code":"224", + "code":"222", "des":"Currently, Terminal in ModelArts DevEnviron does not support apt-get. 
You can use a custom image to support it.", "doc_type":"usermanual", "kw":"Does ModelArts DevEnviron Support apt-get?,Notebook,User Guide", @@ -2242,7 +2222,7 @@ { "uri":"modelarts_05_0080.html", "product_code":"modelarts", - "code":"225", + "code":"223", "des":"/cache is a temporary directory and will not be saved. After an instance using OBS storage is stopped, data in the ~work directory will be deleted. After a notebook insta", "doc_type":"usermanual", "kw":"Do Files in /cache Still Exist After a Notebook Instance is Stopped or Restarted? How Do I Avoid a R", @@ -2252,7 +2232,7 @@ { "uri":"modelarts_05_0081.html", "product_code":"modelarts", - "code":"226", + "code":"224", "des":"Log in to the ModelArts management console, and choose DevEnviron > Notebooks.In the Operation column of the target notebook instance in the notebook list, click Open to ", "doc_type":"usermanual", "kw":"Where Is Data Stored After the Sync OBS Function Is Used?,Notebook,User Guide", @@ -2262,7 +2242,7 @@ { "uri":"modelarts_21_0072.html", "product_code":"modelarts", - "code":"227", + "code":"225", "des":"If you select GPU when creating a notebook instance, perform the following operations to view GPU usage:Log in to the ModelArts management console, and choose DevEnviron ", "doc_type":"usermanual", "kw":"How Do I View GPU Usage on the Notebook?,Notebook,User Guide", @@ -2272,7 +2252,7 @@ { "uri":"modelarts_21_0073.html", "product_code":"modelarts", - "code":"228", + "code":"226", "des":"When creating a notebook instance, select the target Python development environment. Python2 and Python3 are supported, corresponding to Python 2.7 and Python 3.6, respec", "doc_type":"usermanual", "kw":"What Python Development Environments Does Notebook Support?,Notebook,User Guide", @@ -2282,7 +2262,7 @@ { "uri":"modelarts_21_0074.html", "product_code":"modelarts", - "code":"229", + "code":"227", "des":"The python2 environment of ModelArts supports Caffe, but the python3 environment does not support it.", "doc_type":"usermanual", "kw":"Does ModelArts Support the Caffe Engine?,Notebook,User Guide", @@ -2292,7 +2272,7 @@ { "uri":"modelarts_21_0075.html", "product_code":"modelarts", - "code":"230", + "code":"228", "des":"For security purposes, notebook instances do not support sudo privilege escalation.", "doc_type":"usermanual", "kw":"Is sudo Privilege Escalation Supported?,Notebook,User Guide", @@ -2302,7 +2282,7 @@ { "uri":"modelarts_05_0030.html", "product_code":"modelarts", - "code":"231", + "code":"229", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"Training Jobs", @@ -2312,7 +2292,7 @@ { "uri":"modelarts_05_0031.html", "product_code":"modelarts", - "code":"232", + "code":"230", "des":"The code directory for creating a training job has limits on the size and number of files.Delete the files except the code from the code directory or save the files in ot", "doc_type":"usermanual", "kw":"What Can I Do If the Message \"Object directory size/quantity exceeds the limit\" Is Displayed When I ", @@ -2322,7 +2302,7 @@ { "uri":"modelarts_05_0032.html", "product_code":"modelarts", - "code":"233", + "code":"231", "des":"When you use ModelArts, your data is stored in the OBS bucket. The data has a corresponding OBS path, for example, bucket_name/dir/image.jpg. 
ModelArts training jobs run ", "doc_type":"usermanual", "kw":"What Can I Do If \"No such file or directory\" Is Displayed In the Training Job Log?,Training Jobs,Use", @@ -2332,7 +2312,7 @@ { "uri":"modelarts_05_0063.html", "product_code":"modelarts", - "code":"234", + "code":"232", "des":"When a model references a dependency package, select a frequently-used framework to create training jobs. In addition, place the required file or installation package in ", "doc_type":"usermanual", "kw":"How Do I Create a Training Job When a Dependency Package Is Referenced in a Model?,Training Jobs,Use", @@ -2342,7 +2322,7 @@ { "uri":"modelarts_21_0077.html", "product_code":"modelarts", - "code":"235", + "code":"233", "des":"Pay attention to the following when setting training parameters:When setting running parameters for creating a training job, you only need to set the corresponding parame", "doc_type":"usermanual", "kw":"What Should I Know When Setting Training Parameters?,Training Jobs,User Guide", @@ -2352,7 +2332,7 @@ { "uri":"modelarts_21_0078.html", "product_code":"modelarts", - "code":"236", + "code":"234", "des":"In the left navigation pane of the ModelArts management console, choose Training Management > Training Jobs to go to the Training Jobs page. In the training job list, cli", "doc_type":"usermanual", "kw":"How Do I Check Resource Usage of a Training Job?,Training Jobs,User Guide", @@ -2362,8 +2342,8 @@ { "uri":"modelarts_05_0090.html", "product_code":"modelarts", - "code":"237", - "des":"When creating a training job, you can select CPU, GPU, or Ascend resources based on the size of the training job.ModelArts mounts the disk to the /cache directory. You ca", + "code":"235", + "des":"When creating a training job, you can select CPU or GPU resources based on the size of the training job.ModelArts mounts the disk to the /cache directory. You can use this d", "doc_type":"usermanual", "kw":"What Are Sizes of the /cache Directories for Different Resource Specifications in the Training Envir", "title":"What Are Sizes of the /cache Directories for Different Resource Specifications in the Training Environment?", "githuburl":"" }, { "uri":"modelarts_21_0079.html", "product_code":"modelarts", - "code":"238", + "code":"236", "des":"In the script of the training job boot file, run the following commands to obtain the sizes of the to-be-copied and copied folders. Then determine whether folder copy is ", "doc_type":"usermanual", "kw":"How Do I Check Whether Folder Copy Is Complete During Job Training?,Training Jobs,User Guide", @@ -2382,7 +2362,7 @@ { "uri":"modelarts_21_0080.html", "product_code":"modelarts", - "code":"239", + "code":"237", "des":"Training job parameters can be automatically generated in the background or manually entered by users. Perform the following operations to obtain training job parameters:", "doc_type":"usermanual", "kw":"How Do I Obtain Training Job Parameters from the Boot File of the Training Job?,Training Jobs,User G", @@ -2392,7 +2372,7 @@ { "uri":"modelarts_21_0081.html", "product_code":"modelarts", - "code":"240", + "code":"238", "des":"ModelArts does not support access to the background of a training job.", "doc_type":"usermanual", "kw":"How Do I Access the Background of a Training Job?,Training Jobs,User Guide", @@ -2402,7 +2382,7 @@ { "uri":"modelarts_21_0082.html", "product_code":"modelarts", - "code":"241", + "code":"239", "des":"Storage directories of ModelArts training jobs do not affect each other. 
Environments are isolated from each other, and data of other jobs cannot be viewed.", "doc_type":"usermanual", "kw":"Is There Any Conflict When Models of Two Training Jobs Are Saved in the Same Directory of a Containe", @@ -2412,7 +2392,7 @@ { "uri":"modelarts_21_0083.html", "product_code":"modelarts", - "code":"242", + "code":"240", "des":"In a training job, only three valid digits are retained in a training output log. When the value of loss is too small, the value is displayed as 0.000. Log content is as ", "doc_type":"usermanual", "kw":"Only Three Valid Digits Are Retained in a Training Output Log. Can the Value of loss Be Changed?,Tra", @@ -2422,7 +2402,7 @@ { "uri":"modelarts_21_0084.html", "product_code":"modelarts", - "code":"243", + "code":"241", "des":"If you cannot access the corresponding folder by using os.system('cd xxx') in the boot script of the training job, you are advised to use the following method:", "doc_type":"usermanual", "kw":"Why Can't I Use os.system ('cd xxx') to Access the Corresponding Folder During Job Training?,Trainin", @@ -2432,7 +2412,7 @@ { "uri":"modelarts_21_0085.html", "product_code":"modelarts", - "code":"244", + "code":"242", "des":"ModelArts enables you to invoke a shell script, and you can use Python to invoke .sh. The procedure is as follows:Upload the .sh script to an OBS bucket. For example, upl", "doc_type":"usermanual", "kw":"How Do I Invoke a Shell Script in a Training Job to Execute the .sh File?,Training Jobs,User Guide", @@ -2442,7 +2422,7 @@ { "uri":"modelarts_05_0016.html", "product_code":"modelarts", - "code":"245", + "code":"243", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"Model Management", @@ -2452,7 +2432,7 @@ { "uri":"modelarts_21_0086.html", "product_code":"modelarts", - "code":"246", + "code":"244", "des":"ModelArts does not support the import of models in .h5 format. You can convert the models in .h5 format of Keras to the TensorFlow format and then import the models to Mo", "doc_type":"usermanual", "kw":"How Do I Import the .h5 Model of Keras to ModelArts?,Model Management,User Guide", @@ -2462,7 +2442,7 @@ { "uri":"modelarts_05_0124.html", "product_code":"modelarts", - "code":"247", + "code":"245", "des":"ModelArts allows you to upload local models to OBS or import models stored in OBS directly into ModelArts.For details about how to import a model from OBS, see Importing ", "doc_type":"usermanual", "kw":"How Do I Import a Model Downloaded from OBS to ModelArts?,Model Management,User Guide", @@ -2472,7 +2452,7 @@ { "uri":"modelarts_05_0017.html", "product_code":"modelarts", - "code":"248", + "code":"246", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"Service Deployment", @@ -2482,8 +2462,8 @@ { "uri":"modelarts_05_0012.html", "product_code":"modelarts", - "code":"249", - "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "code":"247", + "des":"Currently, models can only be deployed as real-time services and batch services.", "doc_type":"usermanual", "kw":"What Types of Services Can Models Be Deployed as on ModelArts?,Service Deployment,User Guide", "title":"What Types of Services Can Models Be Deployed as on ModelArts?", @@ -2492,7 +2472,7 @@ { "uri":"modelarts_05_0100.html", "product_code":"modelarts", - "code":"250", + "code":"248", "des":"Before importing a model, you need to place the corresponding inference code and configuration file in the model folder. When encoding with Python, you are advised to use", "doc_type":"usermanual", "kw":"What Should I Do If a Conflict Occurs When Deploying a Model As a Real-Time Service?,Service Deploym", @@ -2502,7 +2482,7 @@ { "uri":"modelarts_04_0099.html", "product_code":"modelarts", - "code":"251", + "code":"249", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"usermanual", "kw":"Change History,User Guide", diff --git a/docs/modelarts/umn/CLASS.TXT.json b/docs/modelarts/umn/CLASS.TXT.json index fb1da540..61f078be 100644 --- a/docs/modelarts/umn/CLASS.TXT.json +++ b/docs/modelarts/umn/CLASS.TXT.json @@ -108,7 +108,7 @@ "code":"12" }, { - "desc":"ModelArts uses Object Storage Service (OBS) to store data and model backups and snapshots. OBS provides secure, reliable, low-cost storage. For more details, see Object S", + "desc":"ModelArts uses Identity and Access Management (IAM) for authentication and authorization. For more information about IAM, see Identity and Access Management User Guide.Mo", "product_code":"modelarts", "title":"Related Services", "uri":"modelarts_01_0006.html", @@ -494,15 +494,6 @@ "p_code":"53", "code":"55" }, - { - "desc":"Training a model uses a large number of labeled images. Therefore, label images before the model training. You can label images on the ModelArts management console. Alter", - "product_code":"modelarts", - "title":"Image Segmentation", - "uri":"modelarts_23_0345.html", - "doc_type":"usermanual", - "p_code":"53", - "code":"56" - }, { "desc":"Model training requires a large amount of labeled data. Therefore, before the model training, add labels to the files that are not labeled. In addition, you can modify, d", "product_code":"modelarts", @@ -510,7 +501,7 @@ "uri":"modelarts_23_0013.html", "doc_type":"usermanual", "p_code":"53", - "code":"57" + "code":"56" }, { "desc":"Named entity recognition assigns labels to named entities in text, such as time and locations. Before labeling, you need to understand the following:A label name can cont", @@ -519,7 +510,7 @@ "uri":"modelarts_23_0014.html", "doc_type":"usermanual", "p_code":"53", - "code":"58" + "code":"57" }, { "desc":"Triplet labeling is suitable for scenarios where structured information, such as subjects, predicates, and objects, needs to be labeled in statements. With this function,", @@ -528,7 +519,7 @@ "uri":"modelarts_23_0211.html", "doc_type":"usermanual", "p_code":"53", - "code":"59" + "code":"58" }, { "desc":"Model training requires a large amount of labeled data. Therefore, before the model training, label the unlabeled audio files. 
ModelArts enables you to label audio files ", @@ -537,7 +528,7 @@ "uri":"modelarts_23_0015.html", "doc_type":"usermanual", "p_code":"53", - "code":"60" + "code":"59" }, { "desc":"Model training requires a large amount of labeled data. Therefore, before the model training, label the unlabeled audio files. ModelArts enables you to label audio files ", @@ -546,7 +537,7 @@ "uri":"modelarts_23_0016.html", "doc_type":"usermanual", "p_code":"53", - "code":"61" + "code":"60" }, { "desc":"Model training requires a large amount of labeled data. Therefore, before the model training, label the unlabeled audio files. ModelArts enables you to label audio files.", @@ -555,16 +546,7 @@ "uri":"modelarts_23_0017.html", "doc_type":"usermanual", "p_code":"53", - "code":"62" - }, - { - "desc":"Model training requires a large amount of labeled video data. Therefore, before the model training, label the unlabeled video files. ModelArts enables you to label video ", - "product_code":"modelarts", - "title":"Video Labeling", - "uri":"modelarts_23_0282.html", - "doc_type":"usermanual", - "p_code":"53", - "code":"63" + "code":"61" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -573,7 +555,7 @@ "uri":"modelarts_23_0005.html", "doc_type":"usermanual", "p_code":"50", - "code":"64" + "code":"62" }, { "desc":"After a dataset is created, you can directly synchronize data from the dataset. Alternatively, you can import more data by importing the dataset. Data can be imported fro", @@ -581,8 +563,8 @@ "title":"Import Operation", "uri":"modelarts_23_0006.html", "doc_type":"usermanual", - "p_code":"64", - "code":"65" + "p_code":"62", + "code":"63" }, { "desc":"When a dataset is imported, the data storage directory and file name must comply with the ModelArts specifications if the data to be used is stored in OBS.Only the follow", @@ -590,8 +572,8 @@ "title":"Specifications for Importing Data from an OBS Directory", "uri":"modelarts_23_0008.html", "doc_type":"usermanual", - "p_code":"64", - "code":"66" + "p_code":"62", + "code":"64" }, { "desc":"The manifest file defines the mapping between labeling objects and content. The Manifest file import mode means that the manifest file is used for dataset import. The man", @@ -599,8 +581,8 @@ "title":"Specifications for Importing the Manifest File", "uri":"modelarts_23_0009.html", "doc_type":"usermanual", - "p_code":"64", - "code":"67" + "p_code":"62", + "code":"65" }, { "desc":"A dataset includes labeled and unlabeled data. You can select images or filter data based on the filter criteria and export to a new dataset or the specified OBS director", @@ -609,7 +591,7 @@ "uri":"modelarts_23_0214.html", "doc_type":"usermanual", "p_code":"50", - "code":"68" + "code":"66" }, { "desc":"For a created dataset, you can modify its basic information to match service changes.You have created a dataset.Log in to the ModelArts management console. 
In the left na", @@ -618,7 +600,7 @@ "uri":"modelarts_23_0020.html", "doc_type":"usermanual", "p_code":"50", - "code":"69" + "code":"67" }, { "desc":"ModelArts distinguishes data of the same source according to versions labeled at different time, which facilitates the selection of dataset versions during subsequent mod", @@ -627,7 +609,7 @@ "uri":"modelarts_23_0018.html", "doc_type":"usermanual", "p_code":"50", - "code":"70" + "code":"68" }, { "desc":"If a dataset is no longer in use, you can delete it to release resources.After a dataset is deleted, if you need to delete the data in the dataset input and output paths ", @@ -636,7 +618,7 @@ "uri":"modelarts_23_0021.html", "doc_type":"usermanual", "p_code":"50", - "code":"71" + "code":"69" }, { "desc":"After labeling data, you can publish the dataset to multiple versions for management. For the published versions, you can view the dataset version updates, set the curren", @@ -645,8 +627,53 @@ "uri":"modelarts_23_0019.html", "doc_type":"usermanual", "p_code":"50", + "code":"70" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"modelarts", + "title":"Team Labeling", + "uri":"modelarts_23_0180.html", + "doc_type":"usermanual", + "p_code":"50", + "code":"71" + }, + { + "desc":"Generally, a small data labeling task can be completed by an individual. However, team work is required to label a large dataset. ModelArts provides the team labeling fun", + "product_code":"modelarts", + "title":"Introduction to Team Labeling", + "uri":"modelarts_23_0181.html", + "doc_type":"usermanual", + "p_code":"71", "code":"72" }, + { + "desc":"Team labeling is managed in a unit of teams. To enable team labeling for a dataset, a team must be specified. Multiple members can be added to a team.An account can have ", + "product_code":"modelarts", + "title":"Team Management", + "uri":"modelarts_23_0182.html", + "doc_type":"usermanual", + "p_code":"71", + "code":"73" + }, + { + "desc":"There is no member in a new team. You need to add members who will participate in a team labeling task.A maximum of 100 members can be added to a team. If there are more ", + "product_code":"modelarts", + "title":"Member Management", + "uri":"modelarts_23_0183.html", + "doc_type":"usermanual", + "p_code":"71", + "code":"74" + }, + { + "desc":"For datasets with team labeling enabled, you can create team labeling tasks and assign the labeling tasks to different teams so that team members can complete the labelin", + "product_code":"modelarts", + "title":"Managing Team Labeling Tasks", + "uri":"modelarts_23_0210.html", + "doc_type":"usermanual", + "p_code":"71", + "code":"75" + }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "product_code":"modelarts", @@ -654,7 +681,7 @@ "uri":"modelarts_23_0032.html", "doc_type":"usermanual", "p_code":"", - "code":"73" + "code":"76" }, { "desc":"ModelArts integrates the open-source Jupyter Notebook to provide you with online interactive development and debugging environments. 
You can use the Notebook on the Model", @@ -662,8 +689,8 @@ "title":"Introduction to Notebook", "uri":"modelarts_23_0033.html", "doc_type":"usermanual", - "p_code":"73", - "code":"74" + "p_code":"76", + "code":"77" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -671,8 +698,8 @@ "title":"Managing Notebook Instances", "uri":"modelarts_23_0111.html", "doc_type":"usermanual", - "p_code":"73", - "code":"75" + "p_code":"76", + "code":"78" }, { "desc":"Before developing a model, create a notebook instance, open it, and perform encoding.You will be charged as long as your notebook instance is in the Running status. We re", @@ -680,8 +707,8 @@ "title":"Creating a Notebook Instance", "uri":"modelarts_23_0034.html", "doc_type":"usermanual", - "p_code":"75", - "code":"76" + "p_code":"78", + "code":"79" }, { "desc":"You can open a created notebook instance (that is, an instance in the Running state) and start coding in the development environment.Go to the Jupyter Notebook page.In th", @@ -689,8 +716,8 @@ "title":"Opening a Notebook Instance", "uri":"modelarts_23_0325.html", "doc_type":"usermanual", - "p_code":"75", - "code":"77" + "p_code":"78", + "code":"80" }, { "desc":"You can stop unwanted notebook instances to prevent unnecessary fees. You can also start a notebook instance that is in the Stopped state to use it again.Log in to the Mo", @@ -698,8 +725,8 @@ "title":"Starting or Stopping a Notebook Instance", "uri":"modelarts_23_0041.html", "doc_type":"usermanual", - "p_code":"75", - "code":"78" + "p_code":"78", + "code":"81" }, { "desc":"You can delete notebook instances that are no longer used to release resources.Log in to the ModelArts management console. In the left navigation pane, choose DevEnviron ", @@ -707,8 +734,8 @@ "title":"Deleting a Notebook Instance", "uri":"modelarts_23_0042.html", "doc_type":"usermanual", - "p_code":"75", - "code":"79" + "p_code":"78", + "code":"82" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -716,8 +743,8 @@ "title":"Using Jupyter Notebook", "uri":"modelarts_23_0035.html", "doc_type":"usermanual", - "p_code":"73", - "code":"80" + "p_code":"76", + "code":"83" }, { "desc":"Jupyter Notebook is a web-based application for interactive computing. It can be applied to full-process computing: development, documentation, running code, and presenti", @@ -725,8 +752,8 @@ "title":"Introduction to Jupyter Notebook", "uri":"modelarts_23_0326.html", "doc_type":"usermanual", - "p_code":"80", - "code":"81" + "p_code":"83", + "code":"84" }, { "desc":"This section describes common operations on Jupyter Notebook.In the notebook instance list, locate the row where the target notebook instance resides and click Open in th", @@ -734,8 +761,8 @@ "title":"Common Operations on Jupyter Notebook", "uri":"modelarts_23_0120.html", "doc_type":"usermanual", - "p_code":"80", - "code":"82" + "p_code":"83", + "code":"85" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -743,8 +770,8 @@ "title":"Configuring the Jupyter Notebook Environment", "uri":"modelarts_23_0327.html", "doc_type":"usermanual", - "p_code":"80", - "code":"83" + "p_code":"83", + "code":"86" }, { "desc":"For developers who are used to coding, the terminal function is very convenient and practical. This section describes how to enable the terminal function in a notebook in", @@ -752,8 +779,8 @@ "title":"Using the Notebook Terminal Function", "uri":"modelarts_23_0117.html", "doc_type":"usermanual", - "p_code":"83", - "code":"84" + "p_code":"86", + "code":"87" }, { "desc":"For a GPU-based notebook instance, you can switch different versions of CUDA on the Terminal page of Jupyter.CPU-based notebook instances do not use CUDA. Therefore, the ", @@ -761,8 +788,8 @@ "title":"Switching the CUDA Version on the Terminal Page of a GPU-based Notebook Instance", "uri":"modelarts_23_0280.html", "doc_type":"usermanual", - "p_code":"83", - "code":"85" + "p_code":"86", + "code":"88" }, { "desc":"Multiple environments have been installed in ModelArts notebook instances, including TensorFlow. You can use pip install to install external libraries from a Jupyter note", @@ -770,8 +797,8 @@ "title":"Installing External Libraries and Kernels in Notebook Instances", "uri":"modelarts_23_0040.html", "doc_type":"usermanual", - "p_code":"83", - "code":"86" + "p_code":"86", + "code":"89" }, { "desc":"In notebook instances, you can use ModelArts SDKs to manage OBS, training jobs, models, and real-time services.For details about how to use ModelArts SDKs, see ModelArts ", @@ -779,8 +806,8 @@ "title":"Using ModelArts SDKs", "uri":"modelarts_23_0039.html", "doc_type":"usermanual", - "p_code":"80", - "code":"87" + "p_code":"83", + "code":"90" }, { "desc":"If you specify Storage Path during notebook instance creation, your compiled code will be automatically stored in your specified OBS bucket. If code invocation among diff", @@ -788,8 +815,8 @@ "title":"Synchronizing Files with OBS", "uri":"modelarts_23_0038.html", "doc_type":"usermanual", - "p_code":"80", - "code":"88" + "p_code":"83", + "code":"91" }, { "desc":"After code compiling is finished, you can save the entered code as a .py file which can be used for starting training jobs.Create and open a notebook instance or open an ", @@ -797,8 +824,8 @@ "title":"Using the Convert to Python File Function", "uri":"modelarts_23_0037.html", "doc_type":"usermanual", - "p_code":"80", - "code":"89" + "p_code":"83", + "code":"92" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -806,8 +833,8 @@ "title":"Using JupyterLab", "uri":"modelarts_23_0330.html", "doc_type":"usermanual", - "p_code":"73", - "code":"90" + "p_code":"76", + "code":"93" }, { "desc":"JupyterLab is an interactive development environment. It is a next-generation product of Jupyter Notebook. 
JupyterLab enables you to compile notebooks, operate terminals,", @@ -815,8 +842,8 @@ "title":"Introduction to JupyterLab and Common Operations", "uri":"modelarts_23_0209.html", "doc_type":"usermanual", - "p_code":"90", - "code":"91" + "p_code":"93", + "code":"94" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -824,8 +851,8 @@ "title":"Uploading and Downloading Data", "uri":"modelarts_23_0331.html", "doc_type":"usermanual", - "p_code":"90", - "code":"92" + "p_code":"93", + "code":"95" }, { "desc":"On the JupyterLab page, click Upload Files to upload a file. For details, see Uploading a File in Introduction to JupyterLab and Common Operations. If a message is displa", @@ -833,8 +860,8 @@ "title":"Uploading Data to JupyterLab", "uri":"modelarts_23_0332.html", "doc_type":"usermanual", - "p_code":"92", - "code":"93" + "p_code":"95", + "code":"96" }, { "desc":"Only files within 100 MB in JupyterLab can be downloaded to a local PC. You can perform operations in different scenarios based on the storage location selected when crea", @@ -842,8 +869,8 @@ "title":"Downloading a File from JupyterLab", "uri":"modelarts_23_0333.html", "doc_type":"usermanual", - "p_code":"92", - "code":"94" + "p_code":"95", + "code":"97" }, { "desc":"In notebook instances, you can use ModelArts SDKs to manage OBS, training jobs, models, and real-time services.For details about how to use ModelArts SDKs, see ModelArts ", @@ -851,8 +878,8 @@ "title":"Using ModelArts SDKs", "uri":"modelarts_23_0335.html", "doc_type":"usermanual", - "p_code":"90", - "code":"95" + "p_code":"93", + "code":"98" }, { "desc":"If you specify Storage Path during notebook instance creation, your compiled code will be automatically stored in your specified OBS bucket. If code invocation among diff", @@ -860,8 +887,8 @@ "title":"Synchronizing Files with OBS", "uri":"modelarts_23_0336.html", "doc_type":"usermanual", - "p_code":"90", - "code":"96" + "p_code":"93", + "code":"99" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -870,7 +897,7 @@ "uri":"modelarts_23_0043.html", "doc_type":"usermanual", "p_code":"", - "code":"97" + "code":"100" }, { "desc":"ModelArts provides model training for you to view the training effect, based on which you can adjust your model parameters. You can select resource pools (CPU or GPU) wit", @@ -878,8 +905,8 @@ "title":"Introduction to Model Training", "uri":"modelarts_23_0044.html", "doc_type":"usermanual", - "p_code":"97", - "code":"98" + "p_code":"100", + "code":"101" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -887,8 +914,8 @@ "title":"Built-in Algorithms", "uri":"modelarts_23_0156.html", "doc_type":"usermanual", - "p_code":"97", - "code":"99" + "p_code":"100", + "code":"102" }, { "desc":"Based on the frequently-used AI engines in the industry, ModelArts provides built-in algorithms to meet a wide range of your requirements. 
You can directly select the alg", @@ -896,8 +923,8 @@ "title":"Introduction to Built-in Algorithms", "uri":"modelarts_23_0045.html", "doc_type":"usermanual", - "p_code":"99", - "code":"100" + "p_code":"102", + "code":"103" }, { "desc":"The built-in algorithms provided by ModelArts can be used for image classification, object detection, and image semantic segmentation. The requirements for the datasets v", @@ -905,8 +932,8 @@ "title":"Requirements on Datasets", "uri":"modelarts_23_0157.html", "doc_type":"usermanual", - "p_code":"99", - "code":"101" + "p_code":"102", + "code":"104" }, { "desc":"This section describes the built-in algorithms supported by ModelArts and the running parameters supported by each algorithm. You can set running parameters for a trainin", @@ -914,8 +941,8 @@ "title":"Algorithms and Their Running Parameters", "uri":"modelarts_23_0158.html", "doc_type":"usermanual", - "p_code":"99", - "code":"102" + "p_code":"102", + "code":"105" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -923,8 +950,8 @@ "title":"Creating a Training Job", "uri":"modelarts_23_0235.html", "doc_type":"usermanual", - "p_code":"97", - "code":"103" + "p_code":"100", + "code":"106" }, { "desc":"ModelArts supports multiple types of training jobs during the entire AI development process. Select a creation mode based on the algorithm source.Built-inIf you do not kn", @@ -932,8 +959,8 @@ "title":"Introduction to Training Jobs", "uri":"modelarts_23_0046.html", "doc_type":"usermanual", - "p_code":"103", - "code":"104" + "p_code":"106", + "code":"107" }, { "desc":"If you do not have the algorithm development capability, you can use the built-in algorithms of ModelArts. After simple parameter adjustment, you can create a training jo", @@ -941,8 +968,8 @@ "title":"Using Built-in Algorithms to Train Models", "uri":"modelarts_23_0237.html", "doc_type":"usermanual", - "p_code":"103", - "code":"105" + "p_code":"106", + "code":"108" }, { "desc":"If you use frequently-used frameworks, such as TensorFlow and MXNet, to develop algorithms locally, you can select Frequently-used to create training jobs and build model", @@ -950,8 +977,8 @@ "title":"Using Frequently-used Frameworks to Train Models", "uri":"modelarts_23_0238.html", "doc_type":"usermanual", - "p_code":"103", - "code":"106" + "p_code":"106", + "code":"109" }, { "desc":"If the framework used for algorithm development is not a frequently-used framework, you can build an algorithm into a custom image and use the custom image to create a tr", @@ -959,8 +986,8 @@ "title":"Using Custom Images to Train Models", "uri":"modelarts_23_0239.html", "doc_type":"usermanual", - "p_code":"103", - "code":"107" + "p_code":"106", + "code":"110" }, { "desc":"In the training job list, click Stop in the Operation column for a training job in the Running state to stop a running training job.If you have selected Save Training Par", @@ -968,8 +995,8 @@ "title":"Stopping or Deleting a Job", "uri":"modelarts_23_0159.html", "doc_type":"usermanual", - "p_code":"97", - "code":"108" + "p_code":"100", + "code":"111" }, { "desc":"During model building, you may need to frequently tune the data, training parameters, or the model based on the training results to obtain a satisfactory model. 
ModelArts", @@ -977,8 +1004,8 @@ "title":"Managing Training Job Versions", "uri":"modelarts_23_0047.html", "doc_type":"usermanual", - "p_code":"97", - "code":"109" + "p_code":"100", + "code":"112" }, { "desc":"After a training job finishes, you can manage the training job versions and check whether the training result of the job is satisfactory by viewing the job details.In the", @@ -986,8 +1013,8 @@ "title":"Viewing Job Details", "uri":"modelarts_23_0048.html", "doc_type":"usermanual", - "p_code":"97", - "code":"110" + "p_code":"100", + "code":"113" }, { "desc":"You can store the parameter settings in ModelArts during job creation so that you can use the stored settings to create follow-up training jobs, which makes job creation ", @@ -995,8 +1022,8 @@ "title":"Managing Job Parameters", "uri":"modelarts_23_0049.html", "doc_type":"usermanual", - "p_code":"97", - "code":"111" + "p_code":"100", + "code":"114" }, { "desc":"You can create visualization jobs of TensorBoard and MindInsight types on ModelArts.TensorBoard supports training jobs based on the TensorFlow engine, and MindInsight sup", @@ -1004,8 +1031,8 @@ "title":"Managing Visualization Jobs", "uri":"modelarts_23_0050.html", "doc_type":"usermanual", - "p_code":"97", - "code":"112" + "p_code":"100", + "code":"115" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -1014,7 +1041,7 @@ "uri":"modelarts_23_0051.html", "doc_type":"usermanual", "p_code":"", - "code":"113" + "code":"116" }, { "desc":"AI model development and optimization require frequent iterations and debugging. Changes in datasets, training code, or parameters may affect the quality of models. If th", @@ -1022,8 +1049,8 @@ "title":"Introduction to Model Management", "uri":"modelarts_23_0052.html", "doc_type":"usermanual", - "p_code":"113", - "code":"114" + "p_code":"116", + "code":"117" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -1031,8 +1058,8 @@ "title":"Importing a Model", "uri":"modelarts_23_0204.html", "doc_type":"usermanual", - "p_code":"113", - "code":"115" + "p_code":"116", + "code":"118" }, { "desc":"You can create a training job on ModelArts and perform training to obtain a satisfactory model. Then import the model to Model Management for unified management. In addit", @@ -1040,8 +1067,8 @@ "title":"Importing a Meta Model from a Training Job", "uri":"modelarts_23_0054.html", "doc_type":"usermanual", - "p_code":"115", - "code":"116" + "p_code":"118", + "code":"119" }, { "desc":"Because the configurations of models with the same functions are similar, ModelArts integrates the configurations of such models into a common template. 
By using this tem", @@ -1049,8 +1076,8 @@ "title":"Importing a Meta Model from a Template", "uri":"modelarts_23_0205.html", "doc_type":"usermanual", - "p_code":"115", - "code":"117" + "p_code":"118", + "code":"120" }, { "desc":"For AI engines that are not supported by ModelArts, you can import the model you compile to ModelArts from custom images.For details about the specifications and descript", @@ -1058,8 +1085,8 @@ "title":"Importing a Meta Model from a Container Image", "uri":"modelarts_23_0206.html", "doc_type":"usermanual", - "p_code":"115", - "code":"118" + "p_code":"118", + "code":"121" }, { "desc":"In scenarios where frequently-used frameworks are used for model development and training, you can import the model to ModelArts for unified management.The model has been", @@ -1067,8 +1094,8 @@ "title":"Importing a Meta Model from OBS", "uri":"modelarts_23_0207.html", "doc_type":"usermanual", - "p_code":"115", - "code":"119" + "p_code":"118", + "code":"122" }, { "desc":"To facilitate source tracing and repeated model tuning, ModelArts provides the model version management function. You can manage models based on versions.You have importe", @@ -1076,54 +1103,9 @@ "title":"Managing Model Versions", "uri":"modelarts_23_0055.html", "doc_type":"usermanual", - "p_code":"113", - "code":"120" - }, - { - "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", - "product_code":"modelarts", - "title":"Model Compression and Conversion", - "uri":"modelarts_23_0106.html", - "doc_type":"usermanual", - "p_code":"113", - "code":"121" - }, - { - "desc":"To obtain higher computing power, you can deploy the models created on ModelArts or a local PC on the Ascend chip. In this case, you need to compress or convert the model", - "product_code":"modelarts", - "title":"Compressing and Converting Models", - "uri":"modelarts_23_0107.html", - "doc_type":"usermanual", - "p_code":"121", - "code":"122" - }, - { - "desc":"During model conversion, the model input directory must comply with certain specifications. This section describes how to upload your model package to OBS.The requirement", - "product_code":"modelarts", - "title":"Model Input Path Specifications", - "uri":"modelarts_23_0108.html", - "doc_type":"usermanual", - "p_code":"121", + "p_code":"116", "code":"123" }, - { - "desc":"The following describes the output path of the model run on the Ascend chip after conversion:For TensorFlow-based models, the output path must comply with the following s", - "product_code":"modelarts", - "title":"Model Output Path Description", - "uri":"modelarts_23_0109.html", - "doc_type":"usermanual", - "p_code":"121", - "code":"124" - }, - { - "desc":"ModelArts provides the following conversion templates based on different AI frameworks:TF-FrozenGraph-To-Ascend-C32Convert the model trained by the TensorFlow framework a", - "product_code":"modelarts", - "title":"Conversion Templates", - "uri":"modelarts_23_0110.html", - "doc_type":"usermanual", - "p_code":"121", - "code":"125" - }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "product_code":"modelarts", @@ -1131,7 +1113,7 @@ "uri":"modelarts_23_0057.html", "doc_type":"usermanual", "p_code":"", - "code":"126" + "code":"124" }, { "desc":"After a training job is complete and a model is generated, you can deploy the model on the Service Deployment page. You can also deploy the model imported from OBS. Model", @@ -1139,8 +1121,8 @@ "title":"Introduction to Model Deployment", "uri":"modelarts_23_0058.html", "doc_type":"usermanual", - "p_code":"126", - "code":"127" + "p_code":"124", + "code":"125" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -1148,8 +1130,8 @@ "title":"Real-Time Services", "uri":"modelarts_23_0059.html", "doc_type":"usermanual", - "p_code":"126", - "code":"128" + "p_code":"124", + "code":"126" }, { "desc":"After a model is prepared, you can deploy the model as a real-time service and predict and call the service.A maximum of one real-time service can be deployed.Data has be", @@ -1157,8 +1139,8 @@ "title":"Deploying a Model as a Real-Time Service", "uri":"modelarts_23_0060.html", "doc_type":"usermanual", - "p_code":"128", - "code":"129" + "p_code":"126", + "code":"127" }, { "desc":"After a model is deployed as a real-time service, you can access the service page to view its details.Log in to the ModelArts management console and choose Service Deploy", @@ -1166,8 +1148,8 @@ "title":"Viewing Service Details", "uri":"modelarts_23_0061.html", "doc_type":"usermanual", - "p_code":"128", - "code":"130" + "p_code":"126", + "code":"128" }, { "desc":"After a model is deployed as a real-time service, you can debug code or add files for testing on the Prediction tab page. Based on the input request (JSON text or file) d", @@ -1175,8 +1157,8 @@ "title":"Testing a Service", "uri":"modelarts_23_0062.html", "doc_type":"usermanual", - "p_code":"128", - "code":"131" + "p_code":"126", + "code":"129" }, { "desc":"If a real-time service is in the Running state, the real-time service has been deployed successfully. This service provides a standard RESTful API for users to call. Befo", @@ -1184,8 +1166,8 @@ "title":"Accessing a Real-Time Service (Token-based Authentication)", "uri":"modelarts_23_0063.html", "doc_type":"usermanual", - "p_code":"128", - "code":"132" + "p_code":"126", + "code":"130" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -1193,8 +1175,8 @@ "title":"Batch Services", "uri":"modelarts_23_0065.html", "doc_type":"usermanual", - "p_code":"126", - "code":"133" + "p_code":"124", + "code":"131" }, { "desc":"After a model is prepared, you can deploy it as a batch service. The Service Deployment > Batch Services page lists all batch services. You can enter a service name in th", @@ -1202,8 +1184,8 @@ "title":"Deploying a Model as a Batch Service", "uri":"modelarts_23_0066.html", "doc_type":"usermanual", - "p_code":"133", - "code":"134" + "p_code":"131", + "code":"132" }, { "desc":"When deploying a batch service, you can select the location of the output data directory. 
You can view the running result of the batch service that is in the Running comp", @@ -1211,8 +1193,8 @@ "title":"Viewing the Batch Service Prediction Result", "uri":"modelarts_23_0067.html", "doc_type":"usermanual", - "p_code":"133", - "code":"135" + "p_code":"131", + "code":"133" }, { "desc":"For a deployed service, you can modify its basic information to match service changes. You can modify the basic information about a service in either of the following way", @@ -1220,8 +1202,8 @@ "title":"Modifying a Service", "uri":"modelarts_23_0071.html", "doc_type":"usermanual", - "p_code":"126", - "code":"136" + "p_code":"124", + "code":"134" }, { "desc":"You can start services in the Successful, Abnormal, or Stopped status. Services in the Deploying status cannot be started. A service is billed when it is started and in t", @@ -1229,8 +1211,8 @@ "title":"Starting or Stopping a Service", "uri":"modelarts_23_0072.html", "doc_type":"usermanual", - "p_code":"126", - "code":"137" + "p_code":"124", + "code":"135" }, { "desc":"If a service is no longer in use, you can delete it to release resources.Log in to the ModelArts management console and choose Service Deployment from the left navigation", @@ -1238,8 +1220,8 @@ "title":"Deleting a Service", "uri":"modelarts_23_0073.html", "doc_type":"usermanual", - "p_code":"126", - "code":"138" + "p_code":"124", + "code":"136" }, { "desc":"When using ModelArts to implement AI Development Lifecycle, you can use two different resource pools to train and deploy models.Public Resource Pool: provides public larg", @@ -1248,7 +1230,7 @@ "uri":"modelarts_23_0076.html", "doc_type":"usermanual", "p_code":"", - "code":"139" + "code":"137" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -1257,7 +1239,7 @@ "uri":"modelarts_23_0083.html", "doc_type":"usermanual", "p_code":"", - "code":"140" + "code":"138" }, { "desc":"ModelArts provides multiple frequently-used built-in engines. However, when users have special requirements for the deep learning engine and development library, the buil", @@ -1265,8 +1247,8 @@ "title":"Introduction to Custom Images", "uri":"modelarts_23_0084.html", "doc_type":"usermanual", - "p_code":"140", - "code":"141" + "p_code":"138", + "code":"139" }, { "desc":"ModelArts allows you to use custom images to create training jobs and import models. Before creating and uploading a custom image, understand the following information:So", @@ -1274,8 +1256,8 @@ "title":"Creating and Uploading a Custom Image", "uri":"modelarts_23_0085.html", "doc_type":"usermanual", - "p_code":"140", - "code":"142" + "p_code":"138", + "code":"140" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -1283,8 +1265,8 @@ "title":"For Training Models", "uri":"modelarts_23_0216.html", "doc_type":"usermanual", - "p_code":"140", - "code":"143" + "p_code":"138", + "code":"141" }, { "desc":"When creating an image using locally developed models and training scripts, ensure that they meet the specifications defined by ModelArts.Custom images cannot contain mal", @@ -1292,8 +1274,8 @@ "title":"Specifications for Custom Images Used for Training Jobs", "uri":"modelarts_23_0217.html", "doc_type":"usermanual", - "p_code":"143", - "code":"144" + "p_code":"141", + "code":"142" }, { "desc":"After creating and uploading a custom image to SWR, you can use the image to create a training job on the ModelArts management console to complete model training.You have", @@ -1301,8 +1283,8 @@ "title":"Creating a Training Job Using a Custom Image (GPU)", "uri":"modelarts_23_0087.html", "doc_type":"usermanual", - "p_code":"143", - "code":"145" + "p_code":"141", + "code":"143" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -1310,8 +1292,8 @@ "title":"For Importing Models", "uri":"modelarts_23_0218.html", "doc_type":"usermanual", - "p_code":"140", - "code":"146" + "p_code":"138", + "code":"144" }, { "desc":"When creating an image using locally developed models, ensure that they meet the specifications defined by ModelArts.Custom images cannot contain malicious code.The size ", @@ -1319,8 +1301,8 @@ "title":"Specifications for Custom Images Used for Importing Models", "uri":"modelarts_23_0219.html", "doc_type":"usermanual", - "p_code":"146", - "code":"147" + "p_code":"144", + "code":"145" }, { "desc":"After creating and uploading a custom image to SWR, you can use the image to import a model and deploy the model as a service on the ModelArts management console.You have", @@ -1328,8 +1310,8 @@ "title":"Importing a Model Using a Custom Image", "uri":"modelarts_23_0086.html", "doc_type":"usermanual", - "p_code":"146", - "code":"148" + "p_code":"144", + "code":"146" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -1338,7 +1320,7 @@ "uri":"modelarts_23_0090.html", "doc_type":"usermanual", "p_code":"", - "code":"149" + "code":"147" }, { "desc":"When you import models in Model Management, if the meta model is imported from OBS or a container image, the model package must meet the following specifications:The mode", @@ -1346,8 +1328,8 @@ "title":"Model Package Specifications", "uri":"modelarts_23_0091.html", "doc_type":"usermanual", - "p_code":"149", - "code":"150" + "p_code":"147", + "code":"148" }, { "desc":"A model developer needs to compile a configuration file when publishing a model. 
The model configuration file describes the model usage, computing framework, precision, i", @@ -1355,8 +1337,8 @@ "title":"Specifications for Compiling the Model Configuration File", "uri":"modelarts_23_0092.html", "doc_type":"usermanual", - "p_code":"149", - "code":"151" + "p_code":"147", + "code":"149" }, { "desc":"This section describes how to compile model inference code in ModelArts. The following also provides an example of inference code for the TensorFlow engine and an example", @@ -1364,8 +1346,8 @@ "title":"Specifications for Compiling Model Inference Code", "uri":"modelarts_23_0093.html", "doc_type":"usermanual", - "p_code":"149", - "code":"152" + "p_code":"147", + "code":"150" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -1374,7 +1356,7 @@ "uri":"modelarts_23_0097.html", "doc_type":"usermanual", "p_code":"", - "code":"153" + "code":"151" }, { "desc":"Because the configurations of models with the same functions are similar, ModelArts integrates the configurations of such models into a common template. By using this tem", @@ -1382,8 +1364,8 @@ "title":"Introduction to Model Templates", "uri":"modelarts_23_0098.html", "doc_type":"usermanual", - "p_code":"153", - "code":"154" + "p_code":"151", + "code":"152" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -1391,8 +1373,8 @@ "title":"Template Description", "uri":"modelarts_23_0118.html", "doc_type":"usermanual", - "p_code":"153", - "code":"155" + "p_code":"151", + "code":"153" }, { "desc":"AI engine: TensorFlow 1.8; Environment: Python 2.7; Input and output mode: undefined mode. Select an appropriate input and output mode based on the model function or appl", @@ -1400,8 +1382,8 @@ "title":"TensorFlow-py27 General Template", "uri":"modelarts_23_0161.html", "doc_type":"usermanual", - "p_code":"155", - "code":"156" + "p_code":"153", + "code":"154" }, { "desc":"AI engine: TensorFlow 1.8; Environment: Python 3.6; Input and output mode: undefined mode. Select an appropriate input and output mode based on the model function or appl", @@ -1409,8 +1391,8 @@ "title":"TensorFlow-py36 General Template", "uri":"modelarts_23_0162.html", "doc_type":"usermanual", - "p_code":"155", - "code":"157" + "p_code":"153", + "code":"155" }, { "desc":"AI engine: MXNet 1.2.1; Environment: Python 2.7; Input and output mode: undefined mode. Select an appropriate input and output mode based on the model function or applica", @@ -1418,8 +1400,8 @@ "title":"MXNet-py27 General Template", "uri":"modelarts_23_0163.html", "doc_type":"usermanual", - "p_code":"155", - "code":"158" + "p_code":"153", + "code":"156" }, { "desc":"AI engine: MXNet 1.2.1; Environment: Python 3.7; Input and output mode: undefined mode. Select an appropriate input and output mode based on the model function or applica", @@ -1427,8 +1409,8 @@ "title":"MXNet-py37 General Template", "uri":"modelarts_23_0164.html", "doc_type":"usermanual", - "p_code":"155", - "code":"159" + "p_code":"153", + "code":"157" }, { "desc":"AI engine: PyTorch 1.0; Environment: Python 2.7; Input and output mode: undefined mode. 
Select an appropriate input and output mode based on the model function or applica", @@ -1436,8 +1418,8 @@ "title":"PyTorch-py27 General Template", "uri":"modelarts_23_0165.html", "doc_type":"usermanual", - "p_code":"155", - "code":"160" + "p_code":"153", + "code":"158" }, { "desc":"AI engine: PyTorch 1.0; Environment: Python 3.7; Input and output mode: undefined mode. Select an appropriate input and output mode based on the model function or applica", @@ -1445,8 +1427,8 @@ "title":"PyTorch-py37 General Template", "uri":"modelarts_23_0166.html", "doc_type":"usermanual", - "p_code":"155", - "code":"161" + "p_code":"153", + "code":"159" }, { "desc":"AI engine: CPU-based Caffe 1.0; Environment: Python 2.7; Input and output mode: undefined mode. Select an appropriate input and output mode based on the model function or", @@ -1454,8 +1436,8 @@ "title":"Caffe-CPU-py27 General Template", "uri":"modelarts_23_0167.html", "doc_type":"usermanual", - "p_code":"155", - "code":"162" + "p_code":"153", + "code":"160" }, { "desc":"AI engine: GPU-based Caffe 1.0; Environment: Python 2.7; Input and output mode: undefined mode. Select an appropriate input and output mode based on the model function or", @@ -1463,8 +1445,8 @@ "title":"Caffe-GPU-py27 General Template", "uri":"modelarts_23_0168.html", "doc_type":"usermanual", - "p_code":"155", - "code":"163" + "p_code":"153", + "code":"161" }, { "desc":"AI engine: CPU-based Caffe 1.0; Environment: Python 3.7; Input and output mode: undefined mode. Select an appropriate input and output mode based on the model function or", @@ -1472,8 +1454,8 @@ "title":"Caffe-CPU-py37 General Template", "uri":"modelarts_23_0169.html", "doc_type":"usermanual", - "p_code":"155", - "code":"164" + "p_code":"153", + "code":"162" }, { "desc":"AI engine: GPU-based Caffe 1.0; Environment: Python 3.7; Input and output mode: undefined mode. Select an appropriate input and output mode based on the model function or", @@ -1481,8 +1463,8 @@ "title":"Caffe-GPU-py37 General Template", "uri":"modelarts_23_0170.html", "doc_type":"usermanual", - "p_code":"155", - "code":"165" + "p_code":"153", + "code":"163" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -1490,8 +1472,8 @@ "title":"Input and Output Modes", "uri":"modelarts_23_0099.html", "doc_type":"usermanual", - "p_code":"153", - "code":"166" + "p_code":"151", + "code":"164" }, { "desc":"This is a built-in input and output mode for object detection. The models using this mode are identified as object detection models. The prediction request path is /, the", @@ -1499,8 +1481,8 @@ "title":"Built-in Object Detection Mode", "uri":"modelarts_23_0100.html", "doc_type":"usermanual", - "p_code":"166", - "code":"167" + "p_code":"164", + "code":"165" }, { "desc":"The built-in image processing input and output mode can be applied to models such as image classification, object detection, and image semantic segmentation. The predicti", @@ -1508,8 +1490,8 @@ "title":"Built-in Image Processing Mode", "uri":"modelarts_23_0101.html", "doc_type":"usermanual", - "p_code":"166", - "code":"168" + "p_code":"164", + "code":"166" }, { "desc":"This is a built-in input and output mode for predictive analytics. The models using this mode are identified as predictive analytics models. 
The prediction request path i", @@ -1517,8 +1499,8 @@ "title":"Built-in Predictive Analytics Mode", "uri":"modelarts_23_0102.html", "doc_type":"usermanual", - "p_code":"166", - "code":"169" + "p_code":"164", + "code":"167" }, { "desc":"The undefined mode does not define the input and output mode. The input and output mode is determined by the model. Select this mode only when the existing input and outp", @@ -1526,8 +1508,8 @@ "title":"Undefined Mode", "uri":"modelarts_23_0103.html", "doc_type":"usermanual", - "p_code":"166", - "code":"170" + "p_code":"164", + "code":"168" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -1536,7 +1518,7 @@ "uri":"modelarts_23_0172.html", "doc_type":"usermanual", "p_code":"", - "code":"171" + "code":"169" }, { "desc":"TensorFlow has two types of APIs: Keras and tf. Keras and tf use different code for training and saving models, but the same code for inference.", @@ -1544,8 +1526,8 @@ "title":"TensorFlow", "uri":"modelarts_23_0173.html", "doc_type":"usermanual", - "p_code":"171", - "code":"172" + "p_code":"169", + "code":"170" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -1553,8 +1535,8 @@ "title":"PyTorch", "uri":"modelarts_23_0175.html", "doc_type":"usermanual", - "p_code":"171", - "code":"173" + "p_code":"169", + "code":"171" }, { "desc":"lenet_train_test.prototxt filelenet_solver.prototxt fileTrain the model.The caffemodel file is generated after model training. Rewrite the lenet_train_test.prototxt file ", @@ -1562,17 +1544,17 @@ "title":"Caffe", "uri":"modelarts_23_0176.html", "doc_type":"usermanual", - "p_code":"171", - "code":"174" + "p_code":"169", + "code":"172" }, { - "desc":"After the model is saved, it must be uploaded to the OBS directory before being published. The config.json and customize_service.py files must be contained during publish", + "desc":"Before training, download the iris.csv dataset, decompress it, and upload it to the /home/ma-user/work/ directory of the notebook instance. Download the iris.csv dataset ", "product_code":"modelarts", "title":"XGBoost", "uri":"modelarts_23_0177.html", "doc_type":"usermanual", - "p_code":"171", - "code":"175" + "p_code":"169", + "code":"173" }, { "desc":"After the model is saved, it must be uploaded to the OBS directory before being published. The config.json configuration and customize_service.py must be contained during", @@ -1580,17 +1562,17 @@ "title":"PySpark", "uri":"modelarts_23_0178.html", "doc_type":"usermanual", - "p_code":"171", - "code":"176" + "p_code":"169", + "code":"174" }, { - "desc":"After the model is saved, it must be uploaded to the OBS directory before being published. The config.json and customize_service.py files must be contained during publish", + "desc":"Before training, download the iris.csv dataset, decompress it, and upload it to the /home/ma-user/work/ directory of the notebook instance. 
Download the iris.csv dataset ", "product_code":"modelarts", "title":"Scikit Learn", "uri":"modelarts_23_0179.html", "doc_type":"usermanual", - "p_code":"171", - "code":"177" + "p_code":"169", + "code":"175" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -1599,7 +1581,7 @@ "uri":"modelarts_23_0077.html", "doc_type":"usermanual", "p_code":"", - "code":"178" + "code":"176" }, { "desc":"A fine-grained policy is a set of permissions defining which operations on which cloud services can be performed. Each policy can define multiple permissions. After a pol", @@ -1607,8 +1589,8 @@ "title":"Basic Concepts", "uri":"modelarts_23_0078.html", "doc_type":"usermanual", - "p_code":"178", - "code":"179" + "p_code":"176", + "code":"177" }, { "desc":"A fine-grained policy consists of the policy version (the Version field) and statement (the Statement field).Version: Distinguishes between role-based access control (RBA", @@ -1616,8 +1598,8 @@ "title":"Creating a User and Granting Permissions", "uri":"modelarts_23_0079.html", "doc_type":"usermanual", - "p_code":"178", - "code":"180" + "p_code":"176", + "code":"178" }, { "desc":"If default policies cannot meet the requirements on fine-grained access control, you can create custom policies and assign the policies to the user group.You can create c", @@ -1625,8 +1607,8 @@ "title":"Creating a Custom Policy", "uri":"modelarts_23_0080.html", "doc_type":"usermanual", - "p_code":"178", - "code":"181" + "p_code":"176", + "code":"179" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -1635,7 +1617,7 @@ "uri":"modelarts_23_0186.html", "doc_type":"usermanual", "p_code":"", - "code":"182" + "code":"180" }, { "desc":"The cloud service platform provides Cloud Eye to help you better understand the status of your ModelArts real-time services and models. You can use Cloud Eye to automatic", @@ -1643,8 +1625,8 @@ "title":"ModelArts Metrics", "uri":"modelarts_23_0187.html", "doc_type":"usermanual", - "p_code":"182", - "code":"183" + "p_code":"180", + "code":"181" }, { "desc":"Setting alarm rules allows you to customize the monitored objects and notification policies so that you can know the status of ModelArts real-time services and models in ", @@ -1652,8 +1634,8 @@ "title":"Setting Alarm Rules", "uri":"modelarts_23_0188.html", "doc_type":"usermanual", - "p_code":"182", - "code":"184" + "p_code":"180", + "code":"182" }, { "desc":"Cloud Eye on the cloud service platform monitors the status of ModelArts real-time services and model loads. You can obtain the monitoring metrics of each ModelArts real-", @@ -1661,8 +1643,8 @@ "title":"Viewing Monitoring Metrics", "uri":"modelarts_23_0189.html", "doc_type":"usermanual", - "p_code":"182", - "code":"185" + "p_code":"180", + "code":"183" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -1671,7 +1653,7 @@ "uri":"modelarts_23_0249.html", "doc_type":"usermanual", "p_code":"", - "code":"186" + "code":"184" }, { "desc":"CTS is available on the public cloud platform. With CTS, you can record operations associated with ModelArts for later query, audit, and backtrack operations.CTS has been", @@ -1679,8 +1661,8 @@ "title":"Key Operations Recorded by CTS", "uri":"modelarts_23_0250.html", "doc_type":"usermanual", - "p_code":"186", - "code":"187" + "p_code":"184", + "code":"185" }, { "desc":"After CTS is enabled, CTS starts recording operations related to ModelArts. The CTS management console stores the last seven days of operation records. This section descr", @@ -1688,8 +1670,8 @@ "title":"Viewing Audit Logs", "uri":"modelarts_23_0251.html", "doc_type":"usermanual", - "p_code":"186", - "code":"188" + "p_code":"184", + "code":"186" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -1698,7 +1680,7 @@ "uri":"modelarts_05_0000.html", "doc_type":"usermanual", "p_code":"", - "code":"189" + "code":"187" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -1706,8 +1688,8 @@ "title":"General Issues", "uri":"modelarts_05_0014.html", "doc_type":"usermanual", - "p_code":"189", - "code":"190" + "p_code":"187", + "code":"188" }, { "desc":"ModelArts is a one-stop development platform for AI developers. With data preprocessing, semi-automated data labeling, distributed training, automated model building, and", @@ -1715,17 +1697,17 @@ "title":"What Is ModelArts?", "uri":"modelarts_05_0001.html", "doc_type":"usermanual", - "p_code":"190", - "code":"191" + "p_code":"188", + "code":"189" }, { - "desc":"ModelArts uses Object Storage Service (OBS) to store data and model backups and snapshots. OBS provides secure, reliable, low-cost storage. For more details, see Object S", + "desc":"ModelArts uses Identity and Access Management (IAM) for authentication and authorization. For more information about IAM, see Identity and Access Management User Guide.Mo", "product_code":"modelarts", - "title":"What Are the Relationships Between ModelArts and Other Services", + "title":"What Are The Relationships Between ModelArts And Other Services", "uri":"modelarts_05_0003.html", "doc_type":"usermanual", - "p_code":"190", - "code":"192" + "p_code":"188", + "code":"190" }, { "desc":"Log in to the console, enter the My Credentials page, and choose Access Keys > Create Access Key.In the Create Access Key dialog box that is displayed, use the login pass", @@ -1733,8 +1715,8 @@ "title":"How Do I Obtain Access Keys?", "uri":"modelarts_05_0004.html", "doc_type":"usermanual", - "p_code":"190", - "code":"193" + "p_code":"188", + "code":"191" }, { "desc":"Before using ModelArts to develop AI models, data needs to be uploaded to an OBS bucket. 
You can log in to the OBS console to create an OBS bucket, create a folder, and u", @@ -1742,8 +1724,8 @@ "title":"How Do I Upload Data to OBS?", "uri":"modelarts_05_0013.html", "doc_type":"usermanual", - "p_code":"190", - "code":"194" + "p_code":"188", + "code":"192" }, { "desc":"Supported AI frameworks and versions of ModelArts vary slightly based on the development environment, training jobs, and model inference (model management and deployment)", @@ -1751,8 +1733,8 @@ "title":"Which AI Frameworks Does ModelArts Support?", "uri":"modelarts_05_0128.html", "doc_type":"usermanual", - "p_code":"190", - "code":"195" + "p_code":"188", + "code":"193" }, { "desc":"For common users, ModelArts provides the predictive analytics function of ExeML to train models based on structured data.For advanced users, ModelArts provides the notebo", @@ -1760,26 +1742,17 @@ "title":"How Do I Use ModelArts to Train Models Based on Structured Data?", "uri":"modelarts_21_0055.html", "doc_type":"usermanual", - "p_code":"190", - "code":"196" + "p_code":"188", + "code":"194" }, { - "desc":"If an OBS directory needs to be specified for using ModelArts functions, such as creating training jobs and datasets, ensure that the OBS bucket and ModelArts are in the ", - "product_code":"modelarts", - "title":"Why Cannot I Find the OBS Bucket on ModelArts After Uploading Data to OBS?", - "uri":"modelarts_21_0056.html", - "doc_type":"usermanual", - "p_code":"190", - "code":"197" - }, - { - "desc":"No. The current ModelArts version does not support multiple projects. Customers can only use it in the default eu-de project.", + "desc":"The current version supports multiple projects.", "product_code":"modelarts", "title":"Does ModelArts Support Multiple Projects?", "uri":"modelarts_21_0057.html", "doc_type":"usermanual", - "p_code":"190", - "code":"198" + "p_code":"188", + "code":"195" }, { "desc":"To view all files stored in OBS when using notebook instances or training jobs, use either of the following methods:OBS consoleLog in to OBS console using the current acc", @@ -1787,8 +1760,8 @@ "title":"How Do I View All Files in an OBS Directory on ModelArts?", "uri":"modelarts_21_0058.html", "doc_type":"usermanual", - "p_code":"190", - "code":"199" + "p_code":"188", + "code":"196" }, { "desc":"No. The current ModelArts version does not support encrypted files stored in OBS.", @@ -1796,8 +1769,8 @@ "title":"Does ModelArts Support Encrypted Files Stored in OBS?", "uri":"modelarts_21_0059.html", "doc_type":"usermanual", - "p_code":"190", - "code":"200" + "p_code":"188", + "code":"197" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -1805,8 +1778,8 @@ "title":"ExeML", "uri":"modelarts_05_0015.html", "doc_type":"usermanual", - "p_code":"189", - "code":"201" + "p_code":"187", + "code":"198" }, { "desc":"ExeML is the process of automating model design, parameter tuning, and model training, compression, and deployment with the labeled data. The process is free of coding an", @@ -1814,8 +1787,8 @@ "title":"What Is ExeML?", "uri":"modelarts_05_0002.html", "doc_type":"usermanual", - "p_code":"201", - "code":"202" + "p_code":"198", + "code":"199" }, { "desc":"Image classification is an image processing method that separates different classes of targets according to the features reflected in the images. 
With quantitative analys", @@ -1823,8 +1796,8 @@ "title":"What Are Image Classification and Object Detection?", "uri":"modelarts_05_0018.html", "doc_type":"usermanual", - "p_code":"201", - "code":"203" + "p_code":"198", + "code":"200" }, { "desc":"The Train button turns to be available when the training images for an image classification project are classified into at least two categories, and each category contain", @@ -1832,8 +1805,8 @@ "title":"What Should I Do When the Train Button Is Unavailable After I Create an Image Classification Project and Label the Images?", "uri":"modelarts_05_0005.html", "doc_type":"usermanual", - "p_code":"201", - "code":"204" + "p_code":"198", + "code":"201" }, { "desc":"Yes. You can add multiple labels to an image.", @@ -1841,8 +1814,8 @@ "title":"Can I Add Multiple Labels to an Image for an Object Detection Project?", "uri":"modelarts_05_0006.html", "doc_type":"usermanual", - "p_code":"201", - "code":"205" + "p_code":"198", + "code":"202" }, { "desc":"Models created in ExeML are deployed as real-time services. You can add images or compile code to test the services, as well as call the APIs using the URLs.After model d", @@ -1850,8 +1823,8 @@ "title":"What Type of Service Is Deployed in ExeML?", "uri":"modelarts_05_0008.html", "doc_type":"usermanual", - "p_code":"201", - "code":"206" + "p_code":"198", + "code":"203" }, { "desc":"Images in JPG, JPEG, PNG, or BMP format are supported.", @@ -1859,8 +1832,8 @@ "title":"What Formats of Images Are Supported by Object Detection or Image Classification Projects?", "uri":"modelarts_05_0010.html", "doc_type":"usermanual", - "p_code":"201", - "code":"207" + "p_code":"198", + "code":"204" }, { "desc":"Data files cannot be stored in the root directory of an OBS bucket.The name of files in a dataset consists of letters, digits, hyphens (-), and underscores (_), and the f", @@ -1868,8 +1841,8 @@ "title":"What Are the Requirements for Training Data When You Create a Predictive Analytics Project in ExeML?", "uri":"modelarts_21_0062.html", "doc_type":"usermanual", - "p_code":"201", - "code":"208" + "p_code":"198", + "code":"205" }, { "desc":"The model cannot be downloaded. However, you can view the model or deploy the model as a real-time service on the model management page.", @@ -1877,8 +1850,8 @@ "title":"Can I Download a Model After It Is Automatically Trained?", "uri":"modelarts_21_0061.html", "doc_type":"usermanual", - "p_code":"201", - "code":"209" + "p_code":"198", + "code":"206" }, { "desc":"Each round of training generates a training version in an ExeML project. If a training result is unsatisfactory (for example, unsatisfactory about the training precision)", @@ -1886,8 +1859,8 @@ "title":"How Do I Perform Incremental Training in an ExeML Project?", "uri":"modelarts_21_0060.html", "doc_type":"usermanual", - "p_code":"201", - "code":"210" + "p_code":"198", + "code":"207" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -1895,8 +1868,8 @@ "title":"Data Management", "uri":"modelarts_05_0101.html", "doc_type":"usermanual", - "p_code":"189", - "code":"211" + "p_code":"187", + "code":"208" }, { "desc":"For the data management function, there are limits on the image size when you upload images to the datasets whose labeling type is object detection or image classificatio", @@ -1904,8 +1877,8 @@ "title":"Are There Size Limits for Images to be Uploaded?", "uri":"modelarts_21_0063.html", "doc_type":"usermanual", - "p_code":"211", - "code":"212" + "p_code":"208", + "code":"209" }, { "desc":"Failed to use the manifest file of the published dataset to import data again.Data has been changed in the OBS directory of the published dataset, for example, images hav", @@ -1913,8 +1886,8 @@ "title":"Why Does Data Fail to Be Imported Using the Manifest File?", "uri":"modelarts_05_0103.html", "doc_type":"usermanual", - "p_code":"211", - "code":"213" + "p_code":"208", + "code":"210" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -1922,8 +1895,8 @@ "title":"Notebook", "uri":"modelarts_05_0067.html", "doc_type":"usermanual", - "p_code":"189", - "code":"214" + "p_code":"187", + "code":"211" }, { "desc":"Log in to the ModelArts management console, and choose DevEnviron > Notebooks.In the notebook list, click Open in the Operation column of the target notebook instance to ", @@ -1931,17 +1904,17 @@ "title":"How Do I Enable the Terminal Function in DevEnviron of ModelArts?", "uri":"modelarts_05_0071.html", "doc_type":"usermanual", - "p_code":"214", - "code":"215" + "p_code":"211", + "code":"212" }, { - "desc":"Log in to the ModelArts management console, and choose DevEnviron > Notebooks.In the notebook list, click Open in the Operation column of the target notebook instance to ", + "desc":"Multiple environments have been integrated into ModelArts Notebook. These environments contain Jupyter Notebook and Python packages, including TensorFlow, MXNet, Caffe, P", "product_code":"modelarts", "title":"How Do I Install External Libraries in a Notebook Instance?", - "uri":"modelarts_21_0064.html", + "uri":"modelarts_05_0022.html", "doc_type":"usermanual", - "p_code":"214", - "code":"216" + "p_code":"211", + "code":"213" }, { "desc":"Notebook instances in DevEnviron support the Keras engine. 
The Keras engine is not supported in job training and model deployment (inference).Keras is an advanced neural ", @@ -1949,8 +1922,8 @@ "title":"Is the Keras Engine Supported?", "uri":"modelarts_21_0065.html", "doc_type":"usermanual", - "p_code":"214", - "code":"217" + "p_code":"211", + "code":"214" }, { "desc":"After the training code is debugged in a notebook instance, if you need to use the training code for training jobs on ModelArts, convert the ipynb file into a Python file", @@ -1958,8 +1931,8 @@ "title":"How Do I Use Training Code in Training Jobs After Debugging the Code in a Notebook Instance?", "uri":"modelarts_21_0066.html", "doc_type":"usermanual", - "p_code":"214", - "code":"218" + "p_code":"211", + "code":"215" }, { "desc":"In the notebook instance, error message \"No Space left...\" is displayed after the pip install command is run.You are advised to run the pip install --no-cache ** command", @@ -1967,8 +1940,17 @@ "title":"What Should I Do When the System Displays an Error Message Indicating that No Space Left After I Run the pip install Command?", "uri":"modelarts_21_0067.html", "doc_type":"usermanual", - "p_code":"214", - "code":"219" + "p_code":"211", + "code":"216" + }, + { + "desc":"In a notebook instance, you can call the ModelArts MoXing API or SDK to exchange data with OBS for uploading a file to OBS or downloading a file from OBS to the notebook ", + "product_code":"modelarts", + "title":"How Do I Upload a File from a Notebook Instance to OBS or Download a File from OBS to a Notebook Instance?", + "uri":"modelarts_05_0024.html", + "doc_type":"faq", + "p_code":"211", + "code":"217" }, { "desc":"Small files (files smaller than 100 MB)Open a notebook instance and click Upload in the upper right corner to upload a local file to the notebook instance.Upload a small ", @@ -1976,8 +1958,8 @@ "title":"How Do I Upload Local Files to a Notebook Instance?", "uri":"modelarts_21_0068.html", "doc_type":"usermanual", - "p_code":"214", - "code":"220" + "p_code":"211", + "code":"218" }, { "desc":"If you use OBS to store the notebook instance, after you click upload, the data is directly uploaded to the target OBS path, that is, the OBS path specified when the note", @@ -1985,8 +1967,8 @@ "title":"Where Will the Data Be Uploaded to?", "uri":"modelarts_05_0045.html", "doc_type":"usermanual", - "p_code":"214", - "code":"221" + "p_code":"211", + "code":"219" }, { "desc":"The following uses the TensorFlow-1.8 engine as an example. The operations on other engines are similar. You only need to replace the engine name and version number in th", @@ -1994,8 +1976,8 @@ "title":"Should I Access the Python Environment Same as the Notebook Kernel of the Current Instance in the Terminal?", "uri":"modelarts_21_0069.html", "doc_type":"usermanual", - "p_code":"214", - "code":"222" + "p_code":"211", + "code":"220" }, { "desc":"If a notebook instance fails to execute code, you can locate and rectify the fault based on the following scenarios:If the execution of a cell is suspended or lasts for a", @@ -2003,8 +1985,8 @@ "title":"What Do I Do If a Notebook Instance Fails to Execute Code?", "uri":"modelarts_21_0070.html", "doc_type":"usermanual", - "p_code":"214", - "code":"223" + "p_code":"211", + "code":"221" }, { "desc":"Currently, Terminal in ModelArts DevEnviron does not support apt-get. 
You can use a custom imagecustom image to support it.", @@ -2012,8 +1994,8 @@ "title":"Does ModelArts DevEnviron Support apt-get?", "uri":"modelarts_21_0071.html", "doc_type":"usermanual", - "p_code":"214", - "code":"224" + "p_code":"211", + "code":"222" }, { "desc":"/cache is a temporary directory and will not be saved. After an instance using OBS storage is stopped, data in the ~work directory will be deleted. After a notebook insta", @@ -2021,8 +2003,8 @@ "title":"Do Files in /cache Still Exist After a Notebook Instance is Stopped or Restarted? How Do I Avoid a Restart?", "uri":"modelarts_05_0080.html", "doc_type":"usermanual", - "p_code":"214", - "code":"225" + "p_code":"211", + "code":"223" }, { "desc":"Log in to the ModelArts management console, and choose DevEnviron > Notebooks.In the Operation column of the target notebook instance in the notebook list, click Open to ", @@ -2030,8 +2012,8 @@ "title":"Where Is Data Stored After the Sync OBS Function Is Used?", "uri":"modelarts_05_0081.html", "doc_type":"usermanual", - "p_code":"214", - "code":"226" + "p_code":"211", + "code":"224" }, { "desc":"If you select GPU when creating a notebook instance, perform the following operations to view GPU usage:Log in to the ModelArts management console, and choose DevEnviron ", @@ -2039,8 +2021,8 @@ "title":"How Do I View GPU Usage on the Notebook?", "uri":"modelarts_21_0072.html", "doc_type":"usermanual", - "p_code":"214", - "code":"227" + "p_code":"211", + "code":"225" }, { "desc":"When creating a notebook instance, select the target Python development environment. Python2 and Python3 are supported, corresponding to Python 2.7 and Python 3.6, respec", @@ -2048,8 +2030,8 @@ "title":"What Python Development Environments Does Notebook Support?", "uri":"modelarts_21_0073.html", "doc_type":"usermanual", - "p_code":"214", - "code":"228" + "p_code":"211", + "code":"226" }, { "desc":"The python2 environment of ModelArts supports Caffe, but the python3 environment does not support it.", @@ -2057,8 +2039,8 @@ "title":"Does ModelArts Support the Caffe Engine?", "uri":"modelarts_21_0074.html", "doc_type":"usermanual", - "p_code":"214", - "code":"229" + "p_code":"211", + "code":"227" }, { "desc":"For security purposes, notebook instances do not support sudo privilege escalation.", @@ -2066,8 +2048,8 @@ "title":"Is sudo Privilege Escalation Supported?", "uri":"modelarts_21_0075.html", "doc_type":"usermanual", - "p_code":"214", - "code":"230" + "p_code":"211", + "code":"228" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -2075,8 +2057,8 @@ "title":"Training Jobs", "uri":"modelarts_05_0030.html", "doc_type":"usermanual", - "p_code":"189", - "code":"231" + "p_code":"187", + "code":"229" }, { "desc":"The code directory for creating a training job has limits on the size and number of files.Delete the files except the code from the code directory or save the files in ot", @@ -2084,8 +2066,8 @@ "title":"What Can I Do If the Message \"Object directory size/quantity exceeds the limit\" Is Displayed When I Create a Training Job?", "uri":"modelarts_05_0031.html", "doc_type":"usermanual", - "p_code":"231", - "code":"232" + "p_code":"229", + "code":"230" }, { "desc":"When you use ModelArts, your data is stored in the OBS bucket. 
The data has a corresponding OBS path, for example, bucket_name/dir/image.jpg. ModelArts training jobs run ", @@ -2093,8 +2075,8 @@ "title":"What Can I Do If \"No such file or directory\" Is Displayed In the Training Job Log?", "uri":"modelarts_05_0032.html", "doc_type":"usermanual", - "p_code":"231", - "code":"233" + "p_code":"229", + "code":"231" }, { "desc":"When a model references a dependency package, select a frequently-used framework to create training jobs. In addition, place the required file or installation package in ", @@ -2102,8 +2084,8 @@ "title":"How Do I Create a Training Job When a Dependency Package Is Referenced in a Model?", "uri":"modelarts_05_0063.html", "doc_type":"usermanual", - "p_code":"231", - "code":"234" + "p_code":"229", + "code":"232" }, { "desc":"Pay attention to the following when setting training parameters:When setting running parameters for creating a training job, you only need to set the corresponding parame", @@ -2111,8 +2093,8 @@ "title":"What Should I Know When Setting Training Parameters?", "uri":"modelarts_21_0077.html", "doc_type":"usermanual", - "p_code":"231", - "code":"235" + "p_code":"229", + "code":"233" }, { "desc":"In the left navigation pane of the ModelArts management console, choose Training Management > Training Jobs to go to the Training Jobs page. In the training job list, cli", @@ -2120,17 +2102,17 @@ "title":"How Do I Check Resource Usage of a Training Job?", "uri":"modelarts_21_0078.html", "doc_type":"usermanual", - "p_code":"231", - "code":"236" + "p_code":"229", + "code":"234" }, { - "desc":"When creating a training job, you can select CPU, GPU, or Ascend resources based on the size of the training job.ModelArts mounts the disk to the /cache directory. You ca", + "desc":"When creating a training job, you can select CPU, GPUresources based on the size of the training job.ModelArts mounts the disk to the /cache directory. You can use this d", "product_code":"modelarts", "title":"What Are Sizes of the /cache Directories for Different Resource Specifications in the Training Environment?", "uri":"modelarts_05_0090.html", "doc_type":"usermanual", - "p_code":"231", - "code":"237" + "p_code":"229", + "code":"235" }, { "desc":"In the script of the training job boot file, run the following commands to obtain the sizes of the to-be-copied and copied folders. Then determine whether folder copy is ", @@ -2138,8 +2120,8 @@ "title":"How Do I Check Whether Folder Copy Is Complete During Job Training?", "uri":"modelarts_21_0079.html", "doc_type":"usermanual", - "p_code":"231", - "code":"238" + "p_code":"229", + "code":"236" }, { "desc":"Training job parameters can be automatically generated in the background or manually entered by users. Perform the following operations to obtain training job parameters:", @@ -2147,8 +2129,8 @@ "title":"How Do I Obtain Training Job Parameters from the Boot File of the Training Job?", "uri":"modelarts_21_0080.html", "doc_type":"usermanual", - "p_code":"231", - "code":"239" + "p_code":"229", + "code":"237" }, { "desc":"ModelArts does not support access to the background of a training job.", @@ -2156,8 +2138,8 @@ "title":"How Do I Access the Background of a Training Job?", "uri":"modelarts_21_0081.html", "doc_type":"usermanual", - "p_code":"231", - "code":"240" + "p_code":"229", + "code":"238" }, { "desc":"Storage directories of ModelArts training jobs do not affect each other. 
Environments are isolated from each other, and data of other jobs cannot be viewed.", @@ -2165,8 +2147,8 @@ "title":"Is There Any Conflict When Models of Two Training Jobs Are Saved in the Same Directory of a Container?", "uri":"modelarts_21_0082.html", "doc_type":"usermanual", - "p_code":"231", - "code":"241" + "p_code":"229", + "code":"239" }, { "desc":"In a training job, only three valid digits are retained in a training output log. When the value of loss is too small, the value is displayed as 0.000. Log content is as ", @@ -2174,8 +2156,8 @@ "title":"Only Three Valid Digits Are Retained in a Training Output Log. Can the Value of loss Be Changed?", "uri":"modelarts_21_0083.html", "doc_type":"usermanual", - "p_code":"231", - "code":"242" + "p_code":"229", + "code":"240" }, { "desc":"If you cannot access the corresponding folder by using os.system('cd xxx') in the boot script of the training job, you are advised to use the following method:", @@ -2183,8 +2165,8 @@ "title":"Why Can't I Use os.system ('cd xxx') to Access the Corresponding Folder During Job Training?", "uri":"modelarts_21_0084.html", "doc_type":"usermanual", - "p_code":"231", - "code":"243" + "p_code":"229", + "code":"241" }, { "desc":"ModelArts enables you to invoke a shell script, and you can use Python to invoke .sh. The procedure is as follows:Upload the .sh script to an OBS bucket. For example, upl", @@ -2192,8 +2174,8 @@ "title":"How Do I Invoke a Shell Script in a Training Job to Execute the .sh File?", "uri":"modelarts_21_0085.html", "doc_type":"usermanual", - "p_code":"231", - "code":"244" + "p_code":"229", + "code":"242" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -2201,8 +2183,8 @@ "title":"Model Management", "uri":"modelarts_05_0016.html", "doc_type":"usermanual", - "p_code":"189", - "code":"245" + "p_code":"187", + "code":"243" }, { "desc":"ModelArts does not support the import of models in .h5 format. You can convert the models in .h5 format of Keras to the TensorFlow format and then import the models to Mo", @@ -2210,8 +2192,8 @@ "title":"How Do I Import the .h5 Model of Keras to ModelArts?", "uri":"modelarts_21_0086.html", "doc_type":"usermanual", - "p_code":"245", - "code":"246" + "p_code":"243", + "code":"244" }, { "desc":"ModelArts allows you to upload local models to OBS or import models stored in OBS directly into ModelArts.For details about how to import a model from OBS, see Importing ", @@ -2219,8 +2201,8 @@ "title":"How Do I Import a Model Downloaded from OBS to ModelArts?", "uri":"modelarts_05_0124.html", "doc_type":"usermanual", - "p_code":"245", - "code":"247" + "p_code":"243", + "code":"245" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -2228,17 +2210,17 @@ "title":"Service Deployment", "uri":"modelarts_05_0017.html", "doc_type":"usermanual", - "p_code":"189", - "code":"248" + "p_code":"187", + "code":"246" }, { - "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "desc":"Currently, models can only be deployed as real-time services and batch services.", "product_code":"modelarts", "title":"What Types of Services Can Models Be Deployed as on ModelArts?", "uri":"modelarts_05_0012.html", "doc_type":"usermanual", - "p_code":"248", - "code":"249" + "p_code":"246", + "code":"247" }, { "desc":"Before importing a model, you need to place the corresponding inference code and configuration file in the model folder. When encoding with Python, you are advised to use", @@ -2246,8 +2228,8 @@ "title":"What Should I Do If a Conflict Occurs When Deploying a Model As a Real-Time Service?", "uri":"modelarts_05_0100.html", "doc_type":"usermanual", - "p_code":"248", - "code":"250" + "p_code":"246", + "code":"248" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", @@ -2256,6 +2238,6 @@ "uri":"modelarts_04_0099.html", "doc_type":"usermanual", "p_code":"", - "code":"251" + "code":"249" } ] \ No newline at end of file diff --git a/docs/modelarts/umn/en-us_image_0000001156920825.png b/docs/modelarts/umn/en-us_image_0000001156920825.png new file mode 100644 index 00000000..51988655 Binary files /dev/null and b/docs/modelarts/umn/en-us_image_0000001156920825.png differ diff --git a/docs/modelarts/umn/en-us_image_0000001157080841.png b/docs/modelarts/umn/en-us_image_0000001157080841.png new file mode 100644 index 00000000..c11a172a Binary files /dev/null and b/docs/modelarts/umn/en-us_image_0000001157080841.png differ diff --git a/docs/modelarts/umn/en-us_image_0000001157080843.png b/docs/modelarts/umn/en-us_image_0000001157080843.png new file mode 100644 index 00000000..93720353 Binary files /dev/null and b/docs/modelarts/umn/en-us_image_0000001157080843.png differ diff --git a/docs/modelarts/umn/en-us_image_0000001157080915.png b/docs/modelarts/umn/en-us_image_0000001157080915.png new file mode 100644 index 00000000..39ab8f14 Binary files /dev/null and b/docs/modelarts/umn/en-us_image_0000001157080915.png differ diff --git a/docs/modelarts/umn/en-us_image_0000001278234781.png b/docs/modelarts/umn/en-us_image_0000001278234781.png new file mode 100644 index 00000000..334fea84 Binary files /dev/null and b/docs/modelarts/umn/en-us_image_0000001278234781.png differ diff --git a/docs/modelarts/umn/en-us_image_0000001281686748.png b/docs/modelarts/umn/en-us_image_0000001281686748.png new file mode 100644 index 00000000..7eee8739 Binary files /dev/null and b/docs/modelarts/umn/en-us_image_0000001281686748.png differ diff --git a/docs/modelarts/umn/en-us_image_0000001290603082.png b/docs/modelarts/umn/en-us_image_0000001290603082.png new file mode 100644 index 00000000..8df2101c Binary files /dev/null and b/docs/modelarts/umn/en-us_image_0000001290603082.png differ diff --git a/docs/modelarts/umn/en-us_image_0000001340184197.png b/docs/modelarts/umn/en-us_image_0000001340184197.png new file mode 100644 index 00000000..49333286 Binary files /dev/null and b/docs/modelarts/umn/en-us_image_0000001340184197.png differ diff --git a/docs/modelarts/umn/en-us_image_0000001340265309.png b/docs/modelarts/umn/en-us_image_0000001340265309.png new file mode 100644 index 00000000..8094320e Binary files /dev/null and 
b/docs/modelarts/umn/en-us_image_0000001340265309.png differ diff --git a/docs/modelarts/umn/modelarts_01_0001.html b/docs/modelarts/umn/modelarts_01_0001.html index b4229559..1c1552f5 100644 --- a/docs/modelarts/umn/modelarts_01_0001.html +++ b/docs/modelarts/umn/modelarts_01_0001.html @@ -4,6 +4,7 @@
ModelArts is a one-stop development platform for AI developers. With distributed training, automated model building, and model deployment, ModelArts helps AI developers quickly build models and efficiently manage the AI development lifecycle.
ModelArts covers all stages of AI development, including data processing, model training, and model deployment. The underlying technologies of ModelArts support various heterogeneous computing resources, allowing developers to flexibly select and use resources. In addition, ModelArts supports popular open-source AI development frameworks such as TensorFlow and MXNet. Developers can also use self-developed algorithm frameworks that match their own usage habits.
ModelArts aims to simplify AI development.
+ModelArts supports the entire development process, including data processing, model training, model management, and model deployment.
ModelArts supports various AI application scenarios, such as image classification and object detection.
Deploys models in various production environments, and supports real-time and batch inference.
Enables model building without coding and supports image classification, object detection, and predictive analytics.
ModelArts uses Object Storage Service (OBS) to store data and model backups and snapshots. OBS provides secure, reliable, low-cost storage. For more details, see Object Storage Service Console Function Overview.
+ModelArts uses Identity and Access Management (IAM) for authentication and authorization. For more information about IAM, see Identity and Access Management User Guide.
ModelArts uses Cloud Container Engine (CCE) to deploy models as real-time services. CCE enables high concurrency and provides elastic scaling. For more information about CCE, see Cloud Container Engine User Guide.
+ModelArts uses Object Storage Service (OBS) to store data and model backups and snapshots. OBS provides secure, reliable, low-cost storage. For more details, see Object Storage Service Console Function Overview.
To use an AI framework that is not supported by ModelArts, use SoftWare Repository for Container (SWR) to customize an image and import the image to ModelArts for training or inference. For more details, see .
+ModelArts uses Cloud Container Engine (CCE) to deploy models as real-time services. CCE enables high concurrency and provides elastic scaling. For more information about CCE, see Cloud Container Engine User Guide.
+To use an AI framework that is not supported by ModelArts, use SoftWare Repository for Container (SWR) to customize an image and import the image to ModelArts for training or inference. For more details, see SoftWare Repository for Container User Guide.
During AI development, massive volumes of data need to be processed, and data preparation and labeling usually take more than half of the development time. ModelArts data management provides an efficient data management and labeling framework. It supports various data types such as image, text, audio, and video in a range of labeling scenarios such as image classification, object detection, speech paragraph labeling, and text classification. ModelArts data management can be used in AI projects of computer vision, natural language processing, and audio and video analysis. In addition, it provides functions such as data filtering, data analysis, team labeling, and version management for full-process data labeling.
+During AI development, massive volumes of data need to be processed, and data preparation and labeling usually take more than half of the development time. ModelArts data management provides an efficient data management and labeling framework. It supports various data types such as image, text, audio, and video in a range of labeling scenarios such as image classification, object detection, speech paragraph labeling, and text classification. ModelArts data management can be used in AI projects of computer vision, natural language processing, and audio and video analysis. In addition, it provides functions such as data filtering, data analysis, team labeling, and version management for full-process data labeling.
Team labeling enables multiple members to label a dataset, improving labeling efficiency. ModelArts allows project-based management for labeling by individual developers, small-scale labeling by small teams, and large-scale labeling by professional teams.
For large-scale team labeling, ModelArts provides team management, personnel management, and data management to cover the entire process, from project creation and allocation through management and labeling to acceptance. For small-scale labeling by individuals and small teams, ModelArts provides an easy-to-use labeling tool to minimize project management costs.
In addition, the labeling platform ensures data security. User data is used only within the authorized scope. The labeling object allocation policy ensures user data privacy and implements data anonymization.
diff --git a/docs/modelarts/umn/modelarts_01_0013.html b/docs/modelarts/umn/modelarts_01_0013.html index e9b0704d..a65b6b02 100644 --- a/docs/modelarts/umn/modelarts_01_0013.html +++ b/docs/modelarts/umn/modelarts_01_0013.html @@ -1,10 +1,10 @@It is challenging to set up a development environment, select an AI algorithm framework and algorithm, debug code, install software, and accelerate hardware. To address these challenges, ModelArts provides DevEnviron to simplify the entire development process.
+It is challenging to set up a development environment, select an AI algorithm framework and algorithm, debug code, install software, and accelerate hardware. To address these challenges, ModelArts provides DevEnviron to simplify the entire development process.
In the machine learning and deep learning fields, popular open-source training and inference frameworks include TensorFlow, PyTorch, MXNet, and MindSpore. ModelArts supports all popular AI computing frameworks and provides a user-friendly development and debugging environment. It supports traditional machine learning algorithms, such as logistic regression, decision tree, and clustering, as well as multiple types of deep learning algorithms, such as the convolutional neural network (CNN), recurrent neural network (RNN), and long short-term memory (LSTM).
-Deep learning generally requires large-scale GPU clusters for distributed acceleration. For existing open-source frameworks, algorithm developers need to write a large amount of code for distributed training on different hardware, and the acceleration code varies depending on the framework. To resolve these issues, a distributed lightweight framework or SDK is required. The framework or SDK is built on deep learning engines such as TensorFlow, PyTorch, MXNet, and MindSpore to improve the distributed performance and usability of these engines. ModelArts MoXing perfectly suits the needs. The easy-to-use MoXing API/SDK enables you to develop deep learning at low costs.
+Deep learning generally requires large-scale GPU clusters for distributed acceleration. For existing open-source frameworks, algorithm developers need to write a large amount of code for distributed training on different hardware, and the acceleration code varies depending on the framework. To resolve these issues, a distributed lightweight framework or SDK is required. The framework or SDK is built on deep learning engines such as TensorFlow, PyTorch, MXNet, and MindSpore to improve the distributed performance and usability of these engines. ModelArts MoXing perfectly suits the needs. The easy-to-use MoXing API/SDK enables you to develop deep learning at low costs.
Generally, AI model deployment and large-scale implementation are complex.
+Generally, AI model deployment and large-scale implementation are complex.
ModelArts resolves this issue by deploying a trained model on different devices in various scenarios with only a few clicks. This secure and reliable one-stop deployment is available for individual developers, enterprises, and device manufacturers.
ModelArts uses Object Storage Service (OBS) to store data and model backups and snapshots. OBS provides secure, reliable, low-cost storage. For more details, see Object Storage Service Console Function Overview.
+ModelArts uses Identity and Access Management (IAM) for authentication and authorization. For more information about IAM, see Identity and Access Management User Guide.
ModelArts uses Cloud Container Engine (CCE) to deploy models as real-time services. CCE enables high concurrency and provides elastic scaling. For more information about CCE, see Cloud Container Engine User Guide.
+ModelArts uses Object Storage Service (OBS) to store data and model backups and snapshots. OBS provides secure, reliable, low-cost storage. For more details, see Object Storage Service Console Function Overview.
To use an AI framework that is not supported by ModelArts, use SoftWare Repository for Container (SWR) to customize an image and import the image to ModelArts for training or inference. For more details, see .
+ModelArts uses Cloud Container Engine (CCE) to deploy models as real-time services. CCE enables high concurrency and provides elastic scaling. For more information about CCE, see Cloud Container Engine User Guide.
+To use an AI framework that is not supported by ModelArts, use SoftWare Repository for Container (SWR) to customize an image and import the image to ModelArts for training or inference. For more details, see SoftWare Repository for Container User Guide.
Currently, models can only be deployed as real-time services and batch services.
+Before using ModelArts to develop AI models, data needs to be uploaded to an OBS bucket. You can log in to the OBS console to create an OBS bucket, create a folder, and upload data. For details about how to upload data, see .
+Before using ModelArts to develop AI models, data needs to be uploaded to an OBS bucket. You can log in to the OBS console to create an OBS bucket, create a folder, and upload data. For details about how to upload data, see Object Storage Service Getting Started.
Multiple environments have been integrated into ModelArts Notebook. These environments contain Jupyter Notebook and Python packages, including TensorFlow, MXNet, Caffe, PyTorch, and Spark. You can use pip install to install external libraries in Jupyter Notebook or on the Terminal page.
+For example, use Jupyter Notebook to install Shapely in the TensorFlow-1.8 environment.
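A minimal sketch of this, run in a Jupyter Notebook code cell (assuming the kernel has network access to the pip repository; the package is installed under its PyPI name Shapely, while the module is imported in lowercase):
!pip install Shapely
import shapely
print(shapely.__version__)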
+ +For example, use pip to install Shapely in the TensorFlow-1.8 environment on the Terminal page.
+cat /home/ma-user/README
+source /home/ma-user/anaconda3/bin/activate TensorFlow-1.8
+If you use another engine, replace TensorFlow-1.8 in the command with the name and version of the engine.
+A new independent running environment is opened when a ModelArts training job is created. The new environment is not associated with the packages installed in the notebook environment. Therefore, add os.system('pip install xxx') to the boot code before importing the installation package.
+For example, even if the Shapely dependency package has been installed in the notebook instance, you still need to add the following code to the boot code so that the training job can use it:
+import os
+os.system('pip install Shapely')
+import shapely
In a notebook instance, you can call the ModelArts MoXing API or SDK to exchange data with OBS for uploading a file to OBS or downloading a file from OBS to the notebook instance.
+Developed by the ModelArts team, MoXing is a distributed training acceleration framework built on open-source deep learning engines such as TensorFlow and PyTorch. MoXing makes model coding easier and more efficient.
+MoXing provides a set of file object APIs for reading and writing OBS files.
+Sample code:
+import moxing as mox
+
+# Download the OBS folder sub_dir_0 from OBS to a notebook instance.
+mox.file.copy_parallel('obs://bucket_name/sub_dir_0', '/home/ma-user/work/sub_dir_0')
+# Download the OBS file obs_file.txt from OBS to a notebook instance.
+mox.file.copy('obs://bucket_name/obs_file.txt', '/home/ma-user/work/obs_file.txt')
+
+# Upload the local folder sub_dir_0 from a notebook instance to OBS.
+mox.file.copy_parallel('/home/ma-user/work/sub_dir_0', 'obs://bucket_name/sub_dir_0')
+# Upload the local file obs_file.txt from a notebook instance to OBS.
+mox.file.copy('/home/ma-user/work/obs_file.txt', 'obs://bucket_name/obs_file.txt')
Call the ModelArts SDK for downloading a file from OBS.
+Sample code: Download file1.txt from OBS to /home/ma-user/work/ in the notebook instance. The bucket, folder, and file names are all customizable.
+from modelarts.session import Session
+session = Session()
+session.obs.download_file(src_obs_file="obs://bucket-name/dir1/file1.txt", dst_local_dir="/home/ma-user/work/")
Call the ModelArts SDK for downloading a folder from OBS.
+Sample code: Download dir1 from OBS to /home/ma-user/work/ in the notebook instance. The bucket name and folder name are customizable.
+from modelarts.session import Session
+session = Session()
+session.obs.download_dir(src_obs_dir="obs://bucket-name/dir1/", dst_local_dir="/home/ma-user/work/")
Call the ModelArts SDK for uploading a file to OBS.
+Sample code: Upload file1.txt in the notebook instance to OBS bucket obs://bucket-name/dir1/. The bucket, folder, and file names are all customizable.
+from modelarts.session import Session
+session = Session()
+session.obs.upload_file(src_local_file='/home/ma-user/work/file1.txt', dst_obs_dir='obs://bucket-name/dir1/')
Call the ModelArts SDK for uploading a folder to OBS.
+Sample code: Upload /work/ in the notebook instance to obs://bucket-name/dir1/work/ of bucket-name. The bucket name and folder name are customizable.
+from modelarts.session import Session
+session = Session()
+session.obs.upload_dir(src_local_dir='/home/ma-user/work/', dst_obs_dir='obs://bucket-name/dir1/')
Locate the incorrect OBS path in the log, for example, obs-test/ModelArts/examples/. There are two methods to check whether it exists.
Log in to OBS console using the current account, and check whether the OBS buckets, folders, and files exist in the OBS path displayed in the log. For example, you can confirm that a given bucket is there and then check if that bucket contains the folder you are looking for based on the configured path.
Alternatively, run the following code to check whether the OBS path exists:
import moxing as mox
mox.file.exists('obs://obs-test/ModelArts/examples/')
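mox.file.exists returns True when the path exists. As a small usage sketch (bucket and folder names taken from the example above), you can check each level of the path to see where it stops existing:
import moxing as mox

for path in ('obs://obs-test/', 'obs://obs-test/ModelArts/', 'obs://obs-test/ModelArts/examples/'):
    print(path, mox.file.exists(path))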
Create a file named pip-requirements.txt in the code directory. In this file, specify the name and version of the dependency package in the format of Package name==Version.
For example, the OBS path specified by Code Directory contains model files and the pip-requirements.txt file. The following shows the code directory structure:
-|---OBS path to the model boot file +|---OBS path to the model boot file |---model.py #Model boot file |---pip-requirements.txt #Customized configuration file, which specifies the name and version of the dependency packageThe following shows the content of the pip-requirements.txt file:
-alembic==0.8.6 +alembic==0.8.6 bleach==1.4.3 click==6.6
When you use a customized .whl file, the system cannot automatically download and install the file. Place the .whl file in the code directory, create a file named pip-requirements.txt, and specify the name of the .whl file in the created file. The dependency package must be a .whl file.
For example, the OBS path specified by Code Directory contains model files, .whl file, and pip-requirements.txt file. The following shows the code directory structure:
-|---OBS path to the model boot file +|---OBS path to the model boot file |---model.py #Model boot file |---XXX.whl #Dependency package. If multiple dependencies are required, place all of them here. |---pip-requirements.txt #Customized configuration file, which specifies the name of the dependency packageThe following shows the content of the pip-requirements.txt file:
-numpy-1.15.4-cp36-cp36m-manylinux1_x86_64.whl +numpy-1.15.4-cp36-cp36m-manylinux1_x86_64.whl tensorflow-1.8.0-cp36-cp36m-manylinux1_x86_64.whl
cd work+
cd work
When creating a training job, you can select CPU, GPU, or Ascend resources based on the size of the training job.
+When creating a training job, you can select CPU or GPU resources based on the size of the training job.
ModelArts mounts the disk to the /cache directory. You can use this directory to store temporary files. The /cache directory shares resources with the code directory. The directory has different capacities for different resource specifications.
GPU Specifications @@ -49,22 +49,6 @@ |
---|
Ascend Specifications - |
-cache Directory Capacity - |
-
---|---|
Ascend 910 - |
-3T - |
-
AI Engine and Version
Supported CUDA or Ascend Version
+Supported CUDA Version
After the configuration is complete, you can view the access key configurations of an account or IAM user on the Settings page.
ModelArts uses OBS to store data and model backups and snapshots, achieving secure, reliable, and low-cost storage. Therefore, before using ModelArts, create an OBS bucket and folders for storing data.
-The created OBS bucket and ModelArts are in the same region.
+The created OBS bucket and ModelArts must be in the same region. Create a folder for storing data. For details, see Creating a Folder. For example, create a folder named flowers in the created c-flowers OBS bucket.
When you use ExeML, data management, notebook instances, training jobs, models, and services, ModelArts may need to access dependent services such as OBS and Software Repository for Container (SWR). If ModelArts is not authorized to access the services, these functions cannot be used.
You can configure access authorization in either of the following ways:
-After agency authorization is configured, the dependent service operation permissions are delegated to ModelArts so that ModelArts can use the dependent services and perform operations on resources on your behalf.
+After agency authorization is configured, the dependent service operation permissions are delegated to ModelArts so that ModelArts can use the dependent services and perform operations on resources on your behalf.
You can use the obtained access key pair (AK/SK) to authorize ModelArts to access dependent services and perform operations on resources.
Agency
After the configuration is complete, you can view the agency configurations of an account or IAM user on the Settings page.
+After the configuration is complete, you can view the agency configurations of an account or IAM user on the Settings page.
To better manage your authorization, you can delete the authorization of an IAM user or delete the authorizations of all users in batches.
On the Settings page, the authorizations configured for IAM users under the current account are displayed. You can click Delete in the Operation column to delete the authorization of a user. After the deletion takes effect, the user cannot use ModelArts functions.
diff --git a/docs/modelarts/umn/modelarts_21_0003.html b/docs/modelarts/umn/modelarts_21_0003.html index 8797718a..4597339d 100644 --- a/docs/modelarts/umn/modelarts_21_0003.html +++ b/docs/modelarts/umn/modelarts_21_0003.html @@ -8,7 +8,7 @@├─<dataset-import-path> +Requirements for Files Uploaded to OBS
- If you do not need to upload training data in advance, create an empty folder to store files generated in the future, for example, /bucketName/data-cat.
- If you need to upload images to be labeled in advance, create an empty folder and save the images in the folder. An example of the image directory structure is /bucketName/data-cat/cat.jpg.
- If you want to upload labeled images to the OBS bucket, upload them according to the following specifications:
diff --git a/docs/modelarts/umn/modelarts_21_0004.html b/docs/modelarts/umn/modelarts_21_0004.html index 4bed9bdf..731a3d42 100644 --- a/docs/modelarts/umn/modelarts_21_0004.html +++ b/docs/modelarts/umn/modelarts_21_0004.html @@ -46,7 +46,7 @@
- The dataset for image classification requires storing labeled objects and their label files (in one-to-one relationship with the labeled objects) in the same directory. For example, if the name of the labeled object is 10.jpg, the name of the label file must be 10.txt.
Example of data files:├─<dataset-import-path> │ 10.jpg │ 10.txt │ 11.jpg @@ -17,7 +17,7 @@ │ 12.txt- Images in JPG, JPEG, PNG, and BMP formats are supported. When uploading images on the ModelArts management console, ensure that the size of an image does not exceed 5 MB and the total size of images to be uploaded in one attempt does not exceed 8 MB. If the data volume is large, use OBS Browser+ to upload images.
- A label name can contain a maximum of 32 characters, including Chinese characters, letters, digits, hyphens (-), and underscores (_).
- Image classification label file (.txt) rule:
Each row contains only one label.
-cat +cat dog ...diff --git a/docs/modelarts/umn/modelarts_21_0006.html b/docs/modelarts/umn/modelarts_21_0006.html index bc845273..b3a9d568 100644 --- a/docs/modelarts/umn/modelarts_21_0006.html +++ b/docs/modelarts/umn/modelarts_21_0006.html @@ -51,10 +51,10 @@ - Label Set
- Label Name: Enter a label name. The label name can contain only Chinese characters, letters, digits, underscores (_), and hyphens (-), which contains 1 to 32 characters.
- Add Label: Click Add Label to add one or more labels.
- Set the label color: You need to set label colors for object detection datasets, but you do not need to set label colors for image classification datasets. Select a color from the color palette on the right of a label, or enter the hexadecimal color code to set the color. +
- Label Name: Enter a label name. The label name can contain only Chinese characters, letters, digits, underscores (_), and hyphens (-), and must contain 1 to 64 characters.
- Add Label: Click Add Label to add one or more labels.
- Set the label color: You need to set label colors for object detection datasets, but you do not need to set label colors for image classification datasets. Select a color from the color palette on the right of a label, or enter the hexadecimal color code to set the color.
diff --git a/docs/modelarts/umn/modelarts_21_0009.html b/docs/modelarts/umn/modelarts_21_0009.html index 3bc66f49..f74180b0 100644 --- a/docs/modelarts/umn/modelarts_21_0009.html +++ b/docs/modelarts/umn/modelarts_21_0009.html @@ -9,7 +9,7 @@ Instance Flavor
- Select the resource specifications used for training. By default, the following specifications are supported:
-+
- Compute-intensive 1 instance (GPU): This flavor is billed on a pay-per-use basis.
- Compute-intensive 1 instance (CPU)
The compute flavors are for reference only. Obtain the flavors on the management console.
ExeML (GPU)
+ExeML (CPU)
Requirements on Datasets
- The name of files in a dataset cannot contain Chinese characters, plus signs (+), spaces, or tabs.
- Ensure that no damaged image exists. The supported image formats include JPG, JPEG, BMP, and PNG.
- Do not store data of different projects in the same dataset.
- To ensure the prediction accuracy of models, the training samples must be similar to the actual application scenarios.
- To ensure the generalization capability of models, datasets should cover all possible scenarios.
- In an object detection dataset, if the coordinates of the bounding box exceed the boundaries of an image, the image cannot be identified as a labeled image.
Requirements for Files Uploaded to OBS
- If you do not need to upload training data in advance, create an empty folder to store files generated in the future, for example, /bucketName/data-cat.
- If you need to upload images to be labeled in advance, create an empty folder and save the images in the folder. An example of the image directory structure is /bucketName/data-cat/cat.jpg.
- If you want to upload labeled images to the OBS bucket, upload them according to the following specifications:
- The dataset for object detection requires storing labeled objects and their label files (in one-to-one relationship with the labeled objects) in the same directory. For example, if the name of the labeled object is IMG_20180919_114745.jpg, the name of the label file must be IMG_20180919_114745.xml.
The label files for object detection must be in PASCAL VOC format. For details about the format, see Table 1.
-Example of data files:├─<dataset-import-path> +Example of data files:-├─<dataset-import-path> │ IMG_20180919_114732.jpg │ IMG_20180919_114732.xml │ IMG_20180919_114745.jpg @@ -90,7 +90,7 @@Example of the label file in KITTI format:<annotation> +Example of the label file in KITTI format:<annotation> <folder>test_data</folder> <filename>260730932.jpg</filename> <size> diff --git a/docs/modelarts/umn/modelarts_21_0010.html b/docs/modelarts/umn/modelarts_21_0010.html index 9c3fffb3..bd2653ea 100644 --- a/docs/modelarts/umn/modelarts_21_0010.html +++ b/docs/modelarts/umn/modelarts_21_0010.html @@ -46,7 +46,7 @@diff --git a/docs/modelarts/umn/modelarts_21_0012.html b/docs/modelarts/umn/modelarts_21_0012.html index b913beb7..07772b80 100644 --- a/docs/modelarts/umn/modelarts_21_0012.html +++ b/docs/modelarts/umn/modelarts_21_0012.html @@ -51,10 +51,10 @@ - Label Set
- Label Name: Enter a label name. The label name can contain only Chinese characters, letters, digits, underscores (_), and hyphens (-), which contains 1 to 32 characters.
- Add Label: Click Add Label to add one or more labels.
- Set the label color: You need to set label colors for object detection datasets, but you do not need to set label colors for image classification datasets. Select a color from the color palette on the right of a label, or enter the hexadecimal color code to set the color. +
- Label Name: Enter a label name. The label name can contain only Chinese characters, letters, digits, underscores (_), and hyphens (-), which contains 1 to 64 characters.
- Add Label: Click Add Label to add one or more labels.
- Set the label color: You need to set label colors for object detection datasets, but you do not need to set label colors for image classification datasets. Select a color from the color palette on the right of a label, or enter the hexadecimal color code to set the color.
diff --git a/docs/modelarts/umn/modelarts_21_0016.html b/docs/modelarts/umn/modelarts_21_0016.html index fc04f39e..70cacd4a 100644 --- a/docs/modelarts/umn/modelarts_21_0016.html +++ b/docs/modelarts/umn/modelarts_21_0016.html @@ -2,7 +2,7 @@ Instance Flavor
- Select the resource specifications used for training. By default, the following specifications are supported:
-+
- Compute-intensive 1 instance (GPU): This flavor is billed on a pay-per-use basis.
- Compute-intensive 1 instance (CPU)
The compute flavors are for reference only. Obtain the flavors on the management console.
ExeML (GPU)
+ExeML (CPU)
Creating a Project
ModelArts ExeML supports image classification, object detection, and predictive analytics projects. You can create any of them based on your needs. Perform the following operations to create an ExeML project.
-Procedure
- Log in to the ModelArts management console. In the left navigation pane, click ExeML. The ExeML page is displayed.
- Click Create Project in the box of your desired project. The page for creating an ExeML project is displayed.
- Enter a project name and set Training Data to the OBS path of the training data. A data file must be specified in the path. +
Procedure
- Log in to the ModelArts management console. In the left navigation pane, click ExeML. The ExeML page is displayed.
- Click Create Project in the box of your desired project. The page for creating an ExeML project is displayed.
- Enter a project name and set Training Data to the OBS path of the training data. A data file must be specified in the path.
Table 1 Parameter description Parameter
Description
@@ -12,7 +12,7 @@Name
Name of an ExeML project
-+
- Enter a maximum of 20 characters. Only digits, letters, underscores (_), and hyphens (-) are allowed. This parameter is mandatory.
- The name must start with a letter.
- Enter a maximum of 32 characters. Only digits, letters, underscores (_), and hyphens (-) are allowed. This parameter is mandatory.
- The name must start with a letter.
- Training Data
diff --git a/docs/modelarts/umn/modelarts_21_0019.html b/docs/modelarts/umn/modelarts_21_0019.html index 0231b6a0..89143c5d 100644 --- a/docs/modelarts/umn/modelarts_21_0019.html +++ b/docs/modelarts/umn/modelarts_21_0019.html @@ -8,43 +8,24 @@- After the model is deployed, view the model deployment status on the Service Deployment page.
-The deployment takes a certain period of time. If the status in the Version Manager pane changes from Deploying to Running, the deployment is complete.
Testing a Service
- On the Service Deployment page, select a service type. For example, on the ExeML page, the predictive analytics model is deployed as a real-time service by default. On the Real-Time Services page, click Prediction in the Operation column of the target service to perform a service test. For details, see Testing a Service.
- You can also use code to test a service. For details, see Accessing a Real-Time Service.
- The following describes the procedure for performing a service test after the predictive analytics model is deployed as a service on the ExeML page.
- After the model is deployed, you can test the model using code. On the ExeML page, click the target project, go to the Deployment Online tab page, select the service version in the Running state, and enter the code in the Code area.
- Click Prediction to perform the test. After the prediction is complete, the result is displayed in the Return Result area on the right. If the model accuracy does not meet your expectation, train and deploy the model again on the Label Data tab page. If you are satisfied with the model prediction result, call the API to access the real-time service as prompted. For details, see Accessing a Real-Time Service.
- attr_1 to attr_7 indicate the input data. On the Label Data tab page, the selected label column is attr_7, that is, attr_7 is the target column to be predicted. The value of attr_7 can be set to any value or left blank, which does not affect the prediction result.
+
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -{ - "data": - { - "count": 1, - "req_data": - [ - { - "attr_1": "58", - "attr_2": "management", - "attr_3": "married", - "attr_4": "tertiary", - "attr_5": "yes", - "attr_6": "no", - "attr_7": "" - } - ] - } -} -Testing a Service
- On the Service Deployment page, select a service type. For example, on the ExeML page, the predictive analytics model is deployed as a real-time service by default. On the Real-Time Services page, click Prediction in the Operation column of the target service to perform a service test. For details, see Testing a Service.
- You can also use code to test a service. For details, see Accessing a Real-Time Service.
- The following describes the procedure for performing a service test after the predictive analytics model is deployed as a service on the ExeML page.
- After the model is deployed, you can test the model using code. On the ExeML page, click the target project, go to the Deployment Online tab page, select the service version in the Running state, and enter the code in the Code area.
- Click Prediction to perform the test. After the prediction is complete, the result is displayed in the Return Result area on the right. If the model accuracy does not meet your expectation, train and deploy the model again on the Label Data tab page. If you are satisfied with the model prediction result, call the API to access the real-time service as prompted. For details, see Accessing a Real-Time Service.
- attr_1 to attr_7 indicate the input data. On the Label Data tab page, the selected label column is attr_7, that is, attr_7 is the target column to be predicted. The value of attr_7 can be set to any value or left blank, which does not affect the prediction result.
{ + "data": + { + "count": 1, + "req_data": + [ + { + "attr_1": "58", + "attr_2": "management", + "attr_3": "married", + "attr_4": "tertiary", + "attr_5": "yes", + "attr_6": "no", + "attr_7": "" + } + ] + } +}- In the preceding code snippet, predictioncol is the inference result of label column attr_7.
diff --git a/docs/modelarts/umn/modelarts_21_0038.html b/docs/modelarts/umn/modelarts_21_0038.html index 013124fc..b0be0d4e 100644 --- a/docs/modelarts/umn/modelarts_21_0038.html +++ b/docs/modelarts/umn/modelarts_21_0038.html @@ -170,403 +170,205 @@![]()
A running real-time service keeps consuming the resources. If you do not need to use the real-time service, you are advised to click Stop in the Version Manager pane to stop the service. If you want to use the service again, click Start.
Training Script (train_mnist_tf.py)
Copy the following code and name the code file train_mnist_tf.py. The code is a training script compiled based on the TensorFlow engine in Python.
-+if __name__ == '__main__': + tf.app.run(main=main)
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19 -20 -21 -22 -23 -24 -25 -26 -27 -28 -29 -30 -31 -32 -33 -34 -35 -36 -37 -38 -39 -40 -41 -42 -43 -44 -45 -46 -47 -48 -49 -50 -51 -52 -53 -54 -55 -56 -57 -58 -59 -60 -61 -62 -63 -64 -65 -66 -67 -68 -69 -70 -71 -72 -73 -74 -75 -76 -77 -78 -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function -import os +import os -import tensorflow as tf -from tensorflow.examples.tutorials.mnist import input_data +import tensorflow as tf +from tensorflow.examples.tutorials.mnist import input_data -tf.flags.DEFINE_integer('max_steps', 1000, 'number of training iterations.') -tf.flags.DEFINE_string('data_url', '/home/jnn/nfs/mnist', 'dataset directory.') -tf.flags.DEFINE_string('train_url', '/home/jnn/temp/delete', 'saved model directory.') +tf.flags.DEFINE_integer('max_steps', 1000, 'number of training iterations.') +tf.flags.DEFINE_string('data_url', '/home/jnn/nfs/mnist', 'dataset directory.') +tf.flags.DEFINE_string('train_url', '/home/jnn/temp/delete', 'saved model directory.') -FLAGS = tf.flags.FLAGS +FLAGS = tf.flags.FLAGS -def main(*args): - # Train model - print('Training model...') - mnist = input_data.read_data_sets(FLAGS.data_url, one_hot=True) - sess = tf.InteractiveSession() - serialized_tf_example = tf.placeholder(tf.string, name='tf_example') - feature_configs = {'x': tf.FixedLenFeature(shape=[784], dtype=tf.float32),} - tf_example = tf.parse_example(serialized_tf_example, feature_configs) - x = tf.identity(tf_example['x'], name='x') - y_ = tf.placeholder('float', shape=[None, 10]) - w = tf.Variable(tf.zeros([784, 10])) - b = tf.Variable(tf.zeros([10])) - sess.run(tf.global_variables_initializer()) - y = tf.nn.softmax(tf.matmul(x, w) + b, name='y') - cross_entropy = -tf.reduce_sum(y_ * tf.log(y)) +def main(*args): + # Train model + print('Training model...') + mnist = input_data.read_data_sets(FLAGS.data_url, one_hot=True) + sess = tf.InteractiveSession() + serialized_tf_example = tf.placeholder(tf.string, name='tf_example') + feature_configs = {'x': tf.FixedLenFeature(shape=[784], dtype=tf.float32),} + tf_example = tf.parse_example(serialized_tf_example, feature_configs) + x = tf.identity(tf_example['x'], name='x') + y_ = tf.placeholder('float', shape=[None, 10]) + w = tf.Variable(tf.zeros([784, 10])) + b = tf.Variable(tf.zeros([10])) + sess.run(tf.global_variables_initializer()) + y = tf.nn.softmax(tf.matmul(x, w) + b, name='y') + cross_entropy = -tf.reduce_sum(y_ * tf.log(y)) - tf.summary.scalar('cross_entropy', cross_entropy) + tf.summary.scalar('cross_entropy', cross_entropy) - train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy) + train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy) - correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) - accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float')) - tf.summary.scalar('accuracy', accuracy) - merged = tf.summary.merge_all() - test_writer = tf.summary.FileWriter(FLAGS.train_url, flush_secs=1) + correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) + accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float')) + tf.summary.scalar('accuracy', accuracy) + merged = tf.summary.merge_all() + test_writer = tf.summary.FileWriter(FLAGS.train_url, flush_secs=1) - for step in range(FLAGS.max_steps): - batch = mnist.train.next_batch(50) - 
train_step.run(feed_dict={x: batch[0], y_: batch[1]}) - if step % 10 == 0: - summary, acc = sess.run([merged, accuracy], feed_dict={x: mnist.test.images, y_: mnist.test.labels}) - test_writer.add_summary(summary, step) - print('training accuracy is:', acc) - print('Done training!') + for step in range(FLAGS.max_steps): + batch = mnist.train.next_batch(50) + train_step.run(feed_dict={x: batch[0], y_: batch[1]}) + if step % 10 == 0: + summary, acc = sess.run([merged, accuracy], feed_dict={x: mnist.test.images, y_: mnist.test.labels}) + test_writer.add_summary(summary, step) + print('training accuracy is:', acc) + print('Done training!') - builder = tf.saved_model.builder.SavedModelBuilder(os.path.join(FLAGS.train_url, 'model')) + builder = tf.saved_model.builder.SavedModelBuilder(os.path.join(FLAGS.train_url, 'model')) - tensor_info_x = tf.saved_model.utils.build_tensor_info(x) - tensor_info_y = tf.saved_model.utils.build_tensor_info(y) + tensor_info_x = tf.saved_model.utils.build_tensor_info(x) + tensor_info_y = tf.saved_model.utils.build_tensor_info(y) - prediction_signature = ( - tf.saved_model.signature_def_utils.build_signature_def( - inputs={'images': tensor_info_x}, - outputs={'scores': tensor_info_y}, - method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)) + prediction_signature = ( + tf.saved_model.signature_def_utils.build_signature_def( + inputs={'images': tensor_info_x}, + outputs={'scores': tensor_info_y}, + method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)) - builder.add_meta_graph_and_variables( - sess, [tf.saved_model.tag_constants.SERVING], - signature_def_map={ - 'predict_images': - prediction_signature, - }, - main_op=tf.tables_initializer(), - strip_default_attrs=True) + builder.add_meta_graph_and_variables( + sess, [tf.saved_model.tag_constants.SERVING], + signature_def_map={ + 'predict_images': + prediction_signature, + }, + main_op=tf.tables_initializer(), + strip_default_attrs=True) - builder.save() + builder.save() - print('Done exporting!') + print('Done exporting!') -if __name__ == '__main__': - tf.app.run(main=main) -Inference Code (customize_service.py)
Copy the following code and name the code file customize_service.py. The following inference code meets the ModelArts model package specifications.
-+ return outputs
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19 -20 -21 -22 -23 -24 -25 -26 -27 -28 -29 -30 -31 -from PIL import Image -import numpy as np -from model_service.tfserving_model_service import TfServingBaseService +from PIL import Image +import numpy as np +from model_service.tfserving_model_service import TfServingBaseService -class mnist_service(TfServingBaseService): - def _preprocess(self, data): - preprocessed_data = {} +class mnist_service(TfServingBaseService): + def _preprocess(self, data): + preprocessed_data = {} - for k, v in data.items(): - for file_name, file_content in v.items(): - image1 = Image.open(file_content) - image1 = np.array(image1, dtype=np.float32) - image1.resize((1, 784)) - preprocessed_data[k] = image1 + for k, v in data.items(): + for file_name, file_content in v.items(): + image1 = Image.open(file_content) + image1 = np.array(image1, dtype=np.float32) + image1.resize((1, 784)) + preprocessed_data[k] = image1 - return preprocessed_data + return preprocessed_data - def _postprocess(self, data): + def _postprocess(self, data): - outputs = {} - logits = data['scores'][0] - label = logits.index(max(logits)) - logits = ['%.3f' % logit for logit in logits] - outputs['predicted_label'] = str(label) - label_list = [str(label) for label in list(range(10))] - scores = dict(zip(label_list, logits)) - scores = sorted(scores.items(), key=lambda item: item[1], reverse=True)[:5] - outputs['scores'] = scores + outputs = {} + logits = data['scores'][0] + label = logits.index(max(logits)) + logits = ['%.3f' % logit for logit in logits] + outputs['predicted_label'] = str(label) + label_list = [str(label) for label in list(range(10))] + scores = dict(zip(label_list, logits)) + scores = sorted(scores.items(), key=lambda item: item[1], reverse=True)[:5] + outputs['scores'] = scores - return outputs -Configuration File (config.json)
Copy the following code and name the code file config.json. The configuration file meets the ModelArts model package specifications.
-+
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19 -20 -21 -22 -23 -24 -25 -26 -27 -28 -29 -30 -31 -32 -33 -34 -35 -36 -37 -38 -39 -40 -41 -42 -43 -44 -45 -46 -47 -48 -49 -50 -51 -52 -53 -54 -55 -56 -57 -58 -59 -60 -61 -62 -63 -64 -65 -66 -67 -68 -69 -70 -71 -72 -73 -74 -75 -76 -77 -78 -79 -80 -81 -82 -83 -84 -85 -86 -{ - "model_type":"TensorFlow", - "metrics":{ - "f1":0, - "accuracy":0, - "precision":0, - "recall":0 - }, - "dependencies":[ - { - "installer":"pip", - "packages":[ - { - "restraint":"ATLEAST", - "package_version":"1.15.0", - "package_name":"numpy" - }, - { - "restraint":"", - "package_version":"", - "package_name":"h5py" - }, - { - "restraint":"ATLEAST", - "package_version":"1.8.0", - "package_name":"tensorflow" - }, - { - "restraint":"ATLEAST", - "package_version":"5.2.0", - "package_name":"Pillow" - } - ] - } - ], - "model_algorithm":"image_classification", - "apis":[ - { - "procotol":"http", - "url":"/", - "request":{ - "Content-type":"multipart/form-data", - "data":{ - "type":"object", - "properties":{ - "images":{ - "type":"file" - } - } - } - }, - "method":"post", - "response":{ - "Content-type":"multipart/form-data", - "data":{ - "required":[ - "predicted_label", - "scores" - ], - "type":"object", - "properties":{ - "predicted_label":{ - "type":"string" - }, - "scores":{ - "items":{ - "minItems":2, - "items":[ - { - "type":"string" - }, - { - "type":"number" - } - ], - "type":"array", - "maxItems":2 - }, - "type":"array" - } - } - } - } - } - ] -} -{ + "model_type":"TensorFlow", + "metrics":{ + "f1":0, + "accuracy":0, + "precision":0, + "recall":0 + }, + "dependencies":[ + { + "installer":"pip", + "packages":[ + { + "restraint":"ATLEAST", + "package_version":"1.15.0", + "package_name":"numpy" + }, + { + "restraint":"", + "package_version":"", + "package_name":"h5py" + }, + { + "restraint":"ATLEAST", + "package_version":"1.8.0", + "package_name":"tensorflow" + }, + { + "restraint":"ATLEAST", + "package_version":"5.2.0", + "package_name":"Pillow" + } + ] + } + ], + "model_algorithm":"image_classification", + "apis":[ + { + "procotol":"http", + "url":"/", + "request":{ + "Content-type":"multipart/form-data", + "data":{ + "type":"object", + "properties":{ + "images":{ + "type":"file" + } + } + } + }, + "method":"post", + "response":{ + "Content-type":"multipart/form-data", + "data":{ + "required":[ + "predicted_label", + "scores" + ], + "type":"object", + "properties":{ + "predicted_label":{ + "type":"string" + }, + "scores":{ + "items":{ + "minItems":2, + "items":[ + { + "type":"string" + }, + { + "type":"number" + } + ], + "type":"array", + "maxItems":2 + }, + "type":"array" + } + } + } + } + } + ] +}diff --git a/docs/modelarts/umn/modelarts_21_0057.html b/docs/modelarts/umn/modelarts_21_0057.html index 7e9a3759..58fbbcbb 100644 --- a/docs/modelarts/umn/modelarts_21_0057.html +++ b/docs/modelarts/umn/modelarts_21_0057.html @@ -1,7 +1,7 @@Does ModelArts Support Multiple Projects?
-No. The current ModelArts version does not support multiple projects. Customers can only use it in the default eu-de project.
+The current version supports multiple projects.
diff --git a/docs/modelarts/umn/modelarts_21_0072.html b/docs/modelarts/umn/modelarts_21_0072.html index 888e002f..ee106b9d 100644 --- a/docs/modelarts/umn/modelarts_21_0072.html +++ b/docs/modelarts/umn/modelarts_21_0072.html @@ -2,7 +2,7 @@How Do I View GPU Usage on the Notebook?
If you select GPU when creating a notebook instance, perform the following operations to view GPU usage:
-
- Log in to the ModelArts management console, and choose
.- In the Operation column of the target notebook instance in the notebook list, click Open to go to the Jupyter page.
- On the Files tab page of the Jupyter page, click New and select Terminal. The Terminal page is displayed.
- Run the following command to view GPU usage:
nvidia-smi+
- Log in to the ModelArts management console, and choose
.- In the Operation column of the target notebook instance in the notebook list, click Open to go to the Jupyter page.
- On the Files tab page of the Jupyter page, click New and select Terminal. The Terminal page is displayed.
- Run the following command to view GPU usage:
nvidia-smidiff --git a/docs/modelarts/umn/modelarts_21_0077.html b/docs/modelarts/umn/modelarts_21_0077.html index e78da939..4b960472 100644 --- a/docs/modelarts/umn/modelarts_21_0077.html +++ b/docs/modelarts/umn/modelarts_21_0077.html @@ -4,7 +4,7 @@diff --git a/docs/modelarts/umn/modelarts_21_0079.html b/docs/modelarts/umn/modelarts_21_0079.html index 58291022..05ec5546 100644 --- a/docs/modelarts/umn/modelarts_21_0079.html +++ b/docs/modelarts/umn/modelarts_21_0079.html @@ -2,7 +2,7 @@Pay attention to the following when setting training parameters:
- When setting running parameters for creating a training job, you only need to set the corresponding parameter names and values. See Figure 1.
- If a parameter value is an OBS bucket path, use the path to the data. See Figure 2. -
- When creating an OBS folder in code, you need to call a MoXing API as follows:
import moxing as mox +- When creating an OBS folder in code, you need to call a MoXing API as follows:
import moxing as mox mox.file.make_dirs('obs://bucket_name/sub_dir_0/sub_dir_1')How Do I Check Whether Folder Copy Is Complete During Job Training?
In the script of the training job boot file, run the following commands to obtain the sizes of the to-be-copied and copied folders. Then determine whether folder copy is complete based on the command output.
-import moxing as mox +import moxing as mox mox.file.get_size('obs://bucket_name/obs_file',recursive=True)get_size indicates the size of the file or folder to be obtained. recursive=True indicates that the type is folder. True indicates that the type is folder, and False indicates that the type is file.
If the command output is consistent, the folder copy is complete. If the command output is inconsistent, the folder copy is not complete.
diff --git a/docs/modelarts/umn/modelarts_21_0080.html b/docs/modelarts/umn/modelarts_21_0080.html index 52fa7c7f..5605cea3 100644 --- a/docs/modelarts/umn/modelarts_21_0080.html +++ b/docs/modelarts/umn/modelarts_21_0080.html @@ -3,7 +3,7 @@How Do I Obtain Training Job Parameters from the Boot File of the Training Job?
Training job parameters can be automatically generated in the background or manually entered by users. Perform the following operations to obtain training job parameters:
- When a training job is created, train_url in the running parameters of the training job indicates a training output location, and data_url indicates a data source, which is automatically generated in the background. The test parameter is manually entered.
- After the training job is executed, you can click the job name in the training job list to view its details. You can obtain the parameter input mode from logs, as shown in Figure 1. -
- To obtain the values of train_url, data_url, and test during training, add the following code to the boot file of the training job:
import argparse +- To obtain the values of train_url, data_url, and test during training, add the following code to the boot file of the training job:
import argparse parser = argparse.ArgumentParser() parser.add_argument('--data_url', type=str, default=None, help='test') parser.add_argument('--train_url', type=str, default=None, help='test') diff --git a/docs/modelarts/umn/modelarts_21_0083.html b/docs/modelarts/umn/modelarts_21_0083.html index 945358c5..09028c63 100644 --- a/docs/modelarts/umn/modelarts_21_0083.html +++ b/docs/modelarts/umn/modelarts_21_0083.html @@ -2,7 +2,7 @@Only Three Valid Digits Are Retained in a Training Output Log. Can the Value of loss Be Changed?
In a training job, only three valid digits are retained in a training output log. When the value of loss is too small, the value is displayed as 0.000. Log content is as follows:
-INFO:tensorflow:global_step/sec: 0.382191 +INFO:tensorflow:global_step/sec: 0.382191 INFO:tensorflow:step: 81600(global step: 81600) sample/sec: 12.098 loss: 0.000 INFO:tensorflow:global_step/sec: 0.382876 INFO:tensorflow:step: 81700(global step: 81700) sample/sec: 12.298 loss: 0.000diff --git a/docs/modelarts/umn/modelarts_21_0084.html b/docs/modelarts/umn/modelarts_21_0084.html index 2c981cf4..fc190e37 100644 --- a/docs/modelarts/umn/modelarts_21_0084.html +++ b/docs/modelarts/umn/modelarts_21_0084.html @@ -2,7 +2,7 @@Why Can't I Use os.system ('cd xxx') to Access the Corresponding Folder During Job Training?
If you cannot access the corresponding folder by using os.system('cd xxx') in the boot script of the training job, you are advised to use the following method:
-import os +import os os.chdir('/home/work/user-job-dir/xxx')diff --git a/docs/modelarts/umn/modelarts_21_0085.html b/docs/modelarts/umn/modelarts_21_0085.html index a83387d5..9544c198 100644 --- a/docs/modelarts/umn/modelarts_21_0085.html +++ b/docs/modelarts/umn/modelarts_21_0085.html @@ -2,7 +2,7 @@How Do I Invoke a Shell Script in a Training Job to Execute the .sh File?
ModelArts enables you to invoke a shell script, and you can use Python to invoke .sh. The procedure is as follows:
-
- Upload the .sh script to an OBS bucket. For example, upload the .sh script to /bucket-name/code/test.sh.
- Create the .py file on a local PC, for example, test.py. The background automatically downloads the code directory to the /home/work/user-job-dir/ directory of the container. Therefore, you can invoke the .sh file in the test.py boot file as follows:
import os +
- Upload the .sh script to an OBS bucket. For example, upload the .sh script to /bucket-name/code/test.sh.
- Create the .py file on a local PC, for example, test.py. The background automatically downloads the code directory to the /home/work/user-job-dir/ directory of the container. Therefore, you can invoke the .sh file in the test.py boot file as follows:
import os os.system('bash /home/work/user-job-dir/code/test.sh')- Upload test.py to OBS. Then the file storage path is /bucket-name/code/test.py.
- When creating a training job, set the code directory to /bucket-name/code/, and the boot file directory to /bucket-name/code/test.py.
After the training job is created, you can use Python to invoke the .sh file.
diff --git a/docs/modelarts/umn/modelarts_23_0002.html b/docs/modelarts/umn/modelarts_23_0002.html index 0c662ac7..013d0803 100644 --- a/docs/modelarts/umn/modelarts_23_0002.html +++ b/docs/modelarts/umn/modelarts_23_0002.html @@ -1,124 +1,7 @@Data Management
-+ModelArts is easy to use for users with different experience.
--
- For service developers without AI development experience, you can use ExeML of ModelArts to build AI models without coding.
- For developers who are familiar with code compilation, debugging, and common AI engines, ModelArts provides online code compiling environments as well as AI development lifecycle that covers data preparation, model training, model management, and service deployment, helping the developers build models efficiently and quickly.
-ExeML
ExeML is a customized code-free model development tool that helps users start AI application development from scratch with high flexibility. ExeML automates model design, parameter tuning and training, and model compression and deployment with the labeled data. Developers do not need to develop basic and encoding capabilities, but only to upload data and complete model training and deployment as prompted by ExeML.
-For details about how to use ExeML, see Introduction to ExeML.
--AI Development Lifecycle
The AI development lifecycle on ModelArts takes developers' habits into consideration and provides a variety of engines and scenarios for developers to choose. The following describes the entire process from data preparation to service development using ModelArts.
-Figure 1 Process of using ModelArts- ---
Table 1 Process description - - - Task
-- Sub Task
-- Description
-- Reference
-- - Data preparation
-- Creating a dataset
-- Create a dataset in ModelArts to manage and preprocess your business data.
-- -- - Labeling data
-- Label and preprocess the data in your dataset based on the business logic to facilitate subsequent training. Data labeling affects the model training performance.
-- -- - Publishing a dataset
-- After labeling data, publish the database to generate a dataset version that can be used for model training.
-- -- - Development
-- Creating a notebook instance
-- Create a notebook instance as the development environment.
-- -- - Compiling code
-- Compile code in an existing notebook to directly build a model.
-- - -- - Exporting the .py file
-- Export the compiled training script as a .py file for subsequent operations, such as model training and management.
-- -- - Model training
-- Creating a training job
-- Create a training job, upload and use the compiled training script. After the training is complete, a model is generated and stored in OBS.
-- -- - (Optional) Creating a visualization job
-- Create a visualization job (TensorBoard type) to view the model training process, learn about the model, and adjust and optimize the model. Currently, visualization jobs only support the MXNet and TensorFlow engines.
-- -- - Model management
-- Compiling inference code and configuration files
-- Following the model package specifications provided by ModelArts, compile inference code and configuration files for your model, and save the inference code and configuration files to the training output location.
-- -- - Importing a model
-- Import the training model to ModelArts to facilitate service deployment.
-- -- - Model deployment
-- Deploying a model as a service
-- Deploy a model as a real-time or batch service.
-- -- - - Accessing the service
-- After the service is deployed, access the real-time service, or view the prediction result of the batch service.
-- -diff --git a/docs/modelarts/umn/modelarts_23_0004.html b/docs/modelarts/umn/modelarts_23_0004.html index 480de1fe..44ee5a72 100644 --- a/docs/modelarts/umn/modelarts_23_0004.html +++ b/docs/modelarts/umn/modelarts_23_0004.html @@ -2,16 +2,16 @@
- Introduction to Data Management
@@ -139,6 +22,8 @@- Managing Dataset Versions
+
- Team Labeling
+Creating a Dataset
To manage data using ModelArts, create a dataset. Then you can perform operations on the dataset, such as labeling data, importing data, and publishing the dataset.
-Prerequisites
+
- Before using the data management function, you need permissions to access OBS. This function cannot be used if you are not authorized to access OBS. Before using the data management function, go to the Settings page and complete access authorization using an agency.
- You have created OBS buckets and folders for storing data. In addition, the OBS buckets and ModelArts are in the same region.
- You have uploaded data to be used to OBS.
Prerequisites
- Before using the data management function, you need permissions to access OBS. This function cannot be used if you are not authorized to access OBS. Before using the data management function, go to the Settings page and complete access authorization using an agency.
- You have created OBS buckets and folders for storing data.
- You have uploaded data to be used to OBS.
-Procedure
- Log in to the ModelArts management console. In the left navigation pane, choose Data Management > Datasets. The Datasets page is displayed.
- Click Create Dataset. On the Create Dataset page, create datasets of different types based on the data type and data labeling requirements.
- Set the basic information, the name and description of the dataset.
Figure 1 Basic information about a dataset-- Select a labeling scene and type as required. For details about the types supported by ModelArts, see Dataset Types.
Figure 2 Selecting a labeling scene and type-- Set the parameters based on the dataset type. For details, see the parameters of the following dataset types: +
- Select a labeling scene and type as required. For details about the types supported by ModelArts, see Dataset Types.
Figure 2 Selecting a labeling scene and type+- Set the parameters based on the dataset type. For details, see the parameters of the following dataset types:
- Click Create in the lower right corner of the page.
After the dataset is created, the dataset management page is displayed. You can perform the following operations on the dataset: label data, publish dataset versions, manage dataset versions, modify the dataset, import data, and delete the dataset. For details about the operations supported by different types of datasets, see .
Images (Image Classification, Object Detection, and Image Segmentation)
Figure 3 Parameters of datasets for image classification and object detection++Images (Image Classification, Object Detection, )
Figure 3 Parameters of datasets for image classification and object detection++
Table 1 Dataset parameters + @@ -37,6 +37,130 @@ Parameter
- Setting label attributes: For an object detection dataset, you can click the plus sign (+) on the right to add label attributes after setting a label color. Label attributes are used to distinguish different attributes of the objects with the same label. For example, yellow kittens and black kittens have the same label cat and their label attribute is color.
+ + + Team Labeling
++ Enable or disable team labeling. Image segmentation does not support team labeling. Therefore, this parameter is unavailable when you use image segmentation.
+After enabling team labeling, enter the name and type of the team labeling task, and select the labeling team and team members. For details about the parameter settings, see Creating Team Labeling Tasks.
+Before enabling team labeling, ensure that you have added a team and members on the Labeling Teams page. If no labeling team is available, click the link on the page to go to the Labeling Teams page, and add your team and members. For details, see Introduction to Team Labeling.
+After a dataset is created with team labeling enabled, you can view the Team Labeling mark in Labeling Type.
++Audio (Sound Classification, Speech Labeling, and Speech Paragraph Labeling)
Figure 4 Parameters of datasets for sound classification, speech labeling, and speech paragraph labeling+ +++
+ + + Parameter
++ Description
++ + Input Dataset Path
++ Select the OBS path to the input dataset.
++ + Output Dataset Path
++ Select the OBS path to the output dataset.
+NOTE:+The output dataset path cannot be the same as the input dataset path or cannot be the subdirectory of the input dataset path. Select an empty directory as the Output Dataset Path.
++ + Label Set (Sound Classification)
++ Set labels only for datasets of the sound classification type.
++
- Label Name: Enter a label name. The label name can contain only letters, digits, underscores (_), and hyphens (-). The name contains 1 to 32 characters.
- Add Label: Click Add Label to add more labels.
+ + Label Management (Speech Paragraph Labeling)
++ Only datasets for speech paragraph labeling support multiple labels.
++
- Single Label
A single label is used to label a piece of audio that has only one class.++
- Label Name: Enter a label name. The label name can contain 1 to 32 characters. Only letters, digits, underscores (_), and hyphens (-) are allowed.
- Label Color: Set the label color in the Label Color column. You can select a color from the color palette or enter a hexadecimal color code to set the color.
- Multiple Labels
Multiple labels are suitable for multi-dimensional labeling. For example, you can label a piece of audio as both noise and speech. For speech, you can label the audio with different speakers. You can click Add Label Class to add multiple label classes. A label class can contain multiple labels. The label class and name can contain contains 1 to 32 characters. Only letters, digits, underscores (_), and hyphens (-) are allowed.++
- Label Class: Set a label class.
- Label Name: Enter a label name.
- Add Label: Click Add Label to add more labels.
+ + + Speech Labeling (Speech Paragraph Labeling)
++ Only datasets for speech paragraph labeling support speech labeling. By default, speech labeling is disabled. If this function is enabled, you can label speech content.
++Text (Text Classification, Named Entity Recognition, and Text Triplet)
Figure 5 Parameters of datasets for text classification, named entity recognition, and text triplet+ +++
Table 2 Dataset parameters + + + Parameter
++ Description
++ + Input Dataset Path
++ Select the OBS path to the input dataset.
+NOTE:+Labeled text classification data can be identified only when you import data. When creating a dataset, set an empty OBS directory. After the dataset is created, import the labeled data into it. For details about the format of the data to be imported, see Specifications for Importing Data from an OBS Directory.
++ + Output Dataset Path
++ Select the OBS path to the output dataset.
+NOTE:+The output dataset path cannot be the same as the input dataset path or cannot be the subdirectory of the input dataset path. Select an empty directory as the Output Dataset Path.
++ + Label Set (for text classification and named entity recognition)
++ +
- Label Name: Enter a label name. The label name can contain only letters, digits, underscores (_), and hyphens (-). The name contains 1 to 32 characters.
- Add Label: Click Add Label to add more labels.
- Setting a label color: Select a color from the color palette or enter the hexadecimal color code to set the color. +
+ + Label Set (for text triplet)
++ For datasets of the text triplet type, set entity labels and relationship labels.
++
- Entity Label: Set the label name and label color. You can click the plus sign (+) on the right of the color area to add multiple labels.
- Relationship Label: a relationship between two entities. Set the source entity and target entity. Therefore, add at least two entity labels before adding a relationship label.
+
+ + + Team Labeling
++ Enable or disable team labeling.
+After enabling team labeling, enter the name and type of the team labeling task, and select the labeling team and team members. For details about the parameter settings, see Creating Team Labeling Tasks.
+Before enabling team labeling, ensure that you have added a team and members on the Labeling Teams page. If no labeling team is available, click the link on the page to go to the Labeling Teams page, and add your team and members. For details, see Introduction to Team Labeling.
+After a dataset is created with team labeling enabled, you can view the Team Labeling mark in Labeling Type.
+Other (Free Format)
Figure 6 Parameters of datasets of the free format type+ +diff --git a/docs/modelarts/umn/modelarts_23_0006.html b/docs/modelarts/umn/modelarts_23_0006.html index 9325794b..8ed4e85f 100644 --- a/docs/modelarts/umn/modelarts_23_0006.html +++ b/docs/modelarts/umn/modelarts_23_0006.html @@ -35,15 +35,6 @@
Table 3 Dataset parameters + + + Parameter
++ Description
++ + Input Dataset Path
++ Select the OBS path to the input dataset.
++ Output Dataset Path
++ Select the OBS path to the output dataset.
+NOTE:+The output dataset path cannot be the same as the input dataset path or cannot be the subdirectory of the input dataset path. Select an empty directory as the Output Dataset Path.
+Follow the format specifications described in Object Detection.
- Image segmentation
-- Supported
-Follow the format specifications described in Image Segmentation.
-- Supported
-Follow the format specifications described in Image Segmentation.
-- Sound classification
Supported
@@ -94,14 +85,6 @@Follow the format specifications described in Text Triplet.
- Video
-- N/A
-- Supported
-Follow the format specifications described in Video Labeling.
-Free format
N/A
diff --git a/docs/modelarts/umn/modelarts_23_0008.html b/docs/modelarts/umn/modelarts_23_0008.html index 81552ff1..eeb609ad 100644 --- a/docs/modelarts/umn/modelarts_23_0008.html +++ b/docs/modelarts/umn/modelarts_23_0008.html @@ -6,7 +6,7 @@![]()
To import data from an OBS directory, you must have the read permission on the OBS directory.
-Image Classification
- Image classification data can be in two modes. The first mode (directory mode) supports only single labels. The second mode (.txt label files) supports multiple labels.
- Images with the same label must be stored in the same directory, and the label name is the directory name. If there are multiple levels of directories, the last level is used as the label name.
In the following example, Cat and Dog are label names.
-dataset-import-example +dataset-import-example ├─Cat │ 10.jpg │ 11.jpg @@ -17,7 +17,7 @@ 2.jpg 3.jpg- If .txt files exist in the directory, the content in the .txt files is used as the image label. This mode is better than the previous one.
In the following example, import-dir-1 and import-dir-2 are the imported subdirectories:
-dataset-import-example +dataset-import-example ├─import-dir-1 │ 10.jpg │ 10.txt @@ -38,9 +38,9 @@ Dog- Only images in JPG, JPEG, PNG, and BMP formats are supported. The size of a single image cannot exceed 5 MB, and the total size of all images uploaded at a time cannot exceed 8 MB.
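The layout above can be prepared with a small helper before uploading to OBS. The following is a minimal sketch, assuming one label per line inside each .txt file; the directory and label names come from the example, and the exact .txt layout should be checked against this section before relying on it.
import os

def add_label_file(image_path, labels):
    # Write a .txt label file with the same base name as the image (e.g., 10.jpg -> 10.txt).
    txt_path = os.path.splitext(image_path)[0] + ".txt"
    with open(txt_path, "w", encoding="utf-8") as f:
        f.write("\n".join(labels))

# Hypothetical usage: label 10.jpg with two labels before uploading dataset-import-example to OBS.
os.makedirs("dataset-import-example/import-dir-1", exist_ok=True)
add_label_file("dataset-import-example/import-dir-1/10.jpg", ["Cat", "Dog"])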
Object Detection
- The simple mode of object detection requires users to store labeled objects and their label files (in one-to-one relationship with the labeled objects) in the same directory. For example, if the name of the labeled object file is IMG_20180919_114745.jpg, the name of the label file must be IMG_20180919_114745.xml.
The label files for object detection must be in PASCAL VOC format. For details about the format, see Table 8.
+-Object Detection
- The simple mode of object detection requires users to store labeled objects and their label files (in one-to-one relationship with the labeled objects) in the same directory. For example, if the name of the labeled object file is IMG_20180919_114745.jpg, the name of the label file must be IMG_20180919_114745.xml.
The label files for object detection must be in PASCAL VOC format. For details about the format, see Table 6.
Example:
-├─dataset-import-example +├─dataset-import-example │ IMG_20180919_114732.jpg │ IMG_20180919_114732.xml │ IMG_20180919_114745.jpg @@ -48,200 +48,68 @@ Dog│ IMG_20180919_114945.jpg │ IMG_20180919_114945.xmlA label file example is as follows:
-+
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19 -20 -21 -22 -23 -24 -25 -26 -27 -28 -29 -30 -31 -32 -33 -34 -35 -36 -37 -38 -39 -40 -<?xml version="1.0" encoding="UTF-8" standalone="no"?> -<annotation> - <folder>NA</folder> - <filename>bike_1_1593531469339.png</filename> - <source> - <database>Unknown</database> - </source> - <size> - <width>554</width> - <height>606</height> - <depth>3</depth> - </size> - <segmented>0</segmented> - <object> - <name>Dog</name> - <pose>Unspecified</pose> - <truncated>0</truncated> - <difficult>0</difficult> - <occluded>0</occluded> - <bndbox> - <xmin>279</xmin> - <ymin>52</ymin> - <xmax>474</xmax> - <ymax>278</ymax> - </bndbox> - </object> - <object> - <name>Cat</name> - <pose>Unspecified</pose> - <truncated>0</truncated> - <difficult>0</difficult> - <occluded>0</occluded> - <bndbox> - <xmin>279</xmin> - <ymin>198</ymin> - <xmax>456</xmax> - <ymax>421</ymax> - </bndbox> - </object> -</annotation> -<?xml version="1.0" encoding="UTF-8" standalone="no"?> +<annotation> + <folder>NA</folder> + <filename>bike_1_1593531469339.png</filename> + <source> + <database>Unknown</database> + </source> + <size> + <width>554</width> + <height>606</height> + <depth>3</depth> + </size> + <segmented>0</segmented> + <object> + <name>Dog</name> + <pose>Unspecified</pose> + <truncated>0</truncated> + <difficult>0</difficult> + <occluded>0</occluded> + <bndbox> + <xmin>279</xmin> + <ymin>52</ymin> + <xmax>474</xmax> + <ymax>278</ymax> + </bndbox> + </object> + <object> + <name>Cat</name> + <pose>Unspecified</pose> + <truncated>0</truncated> + <difficult>0</difficult> + <occluded>0</occluded> + <bndbox> + <xmin>279</xmin> + <ymin>198</ymin> + <xmax>456</xmax> + <ymax>421</ymax> + </bndbox> + </object> +</annotation>
- Only images in JPG, JPEG, PNG, and BMP formats are supported. The size of a single image cannot exceed 5 MB, and the total size of all images uploaded at a time cannot exceed 8 MB.
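As a quick check of a PASCAL VOC label file such as the example above, the following sketch reads the object names and bounding boxes with Python's standard library; the file name is a placeholder.
import xml.etree.ElementTree as ET

def read_voc_boxes(xml_path):
    # Return a list of (name, xmin, ymin, xmax, ymax) tuples from a PASCAL VOC label file.
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.findall("object"):
        bndbox = obj.find("bndbox")
        coords = tuple(int(bndbox.findtext(tag)) for tag in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((obj.findtext("name"),) + coords)
    return boxes

# For the label file shown above, this returns [("Dog", 279, 52, 474, 278), ("Cat", 279, 198, 456, 421)].
print(read_voc_boxes("IMG_20180919_114745.xml"))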
Image Segmentation
-
- The simple mode of image segmentation requires users store labeled objects and their label files (in one-to-one relationship with the labeled objects) in the same directory. For example, if the name of the labeled object file is IMG_20180919_114746.jpg, the name of the label file must be IMG_20180919_114746.xml.
Fields mask_source and mask_color are added to the label file in PASCAL VOC format. For details about the format, see Table 4.
-Example:
-├─dataset-import-example -│ IMG_20180919_114732.jpg -│ IMG_20180919_114732.xml -│ IMG_20180919_114745.jpg -│ IMG_20180919_114745.xml -│ IMG_20180919_114945.jpg -│ IMG_20180919_114945.xml-A label file example is as follows:
--
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19 -20 -21 -22 -23 -24 -25 -26 -27 -28 -29 -30 -31 -32 -33 -34 -35 -36 -37 -38 -39 -<?xml version="1.0" encoding="UTF-8" standalone="no"?> -<annotation> - <folder>NA</folder> - <filename>image_0006.jpg</filename> - <source> - <database>Unknown</database> - </source> - <size> - <width>230</width> - <height>300</height> - <depth>3</depth> - </size> - <segmented>1</segmented> - <mask_source>obs://xianao/out/dataset-8153-Jmf5ylLjRmSacj9KevS/annotation/V001/segmentationClassRaw/image_0006.png</mask_source> - <object> - <name>bike</name> - <pose>Unspecified</pose> - <truncated>0</truncated> - <difficult>0</difficult> - <mask_color>193,243,53</mask_color> - <occluded>0</occluded> - <polygon> - <x1>71</x1> - <y1>48</y1> - <x2>75</x2> - <y2>73</y2> - <x3>49</x3> - <y3>69</y3> - <x4>68</x4> - <y4>92</y4> - <x5>90</x5> - <y5>101</y5> - <x6>45</x6> - <y6>110</y6> - <x7>71</x7> - <y7>48</y7> - </polygon> - </object> -</annotation> -Text Classification
Text classification supports two import modes.
-
- The labeled objects and labels for text classification are in the same text file. You can specify a separator to separate the labeled objects and labels, as well as multiple labeled objects.
For example, the following shows an example text file. The Tab key is used to separate the labeled object from the label.It touches good and responds quickly. I don't know how it performs in the future. positive +
- The labeled objects and labels for text classification are in the same text file. You can specify a separator to separate the labeled objects and labels, as well as multiple labeled objects.
For example, the following shows an example text file. The Tab key is used to separate the labeled object from the label.It touches good and responds quickly. I don't know how it performs in the future. positive Three months ago, I bought a very good phone and replaced my old one with it. It can operate longer between charges. positive Why does my phone heat up if I charge it for a while? The volume button stuck after being pressed down. negative It's a gift for Father's Day. The logistics is fast and I received it in 24 hours. I like the earphones because the bass sounds feel good and they would not fall off. positive- The labeled objects and label files for text classification are text files, and correspond to each other based on the rows. For example, the first row in a label file indicates the label of the first row in the file of the labeled object.
For example, the content of labeled object COMMENTS_20180919_114745.txt is as follows:
-It touches good and responds quickly. I don't know how it performs in the future. +It touches good and responds quickly. I don't know how it performs in the future. Three months ago, I bought a very good phone and replaced my old one with it. It can operate longer between charges. Why does my phone heat up if I charge it for a while? The volume button stuck after being pressed down. It's a gift for Father's Day. The logistics is fast and I received it in 24 hours. I like the earphones because the bass sounds feel good and they would not fall off.The content of label file COMMENTS_20180919_114745_result.txt is as follows:
-positive +positive negative negative positiveThe data format requires users to store labeled objects and their label files (in one-to-one relationship with the labeled objects) in the same directory. For example, if the name of the labeled object file is COMMENTS_20180919_114745.txt, the name of the label file must be COMMENTS _20180919_114745_result.txt.
Example of data file storage:
-├─dataset-import-example +├─dataset-import-example │ COMMENTS_20180919_114732.txt │ COMMENTS _20180919_114732_result.txt │ COMMENTS _20180919_114745.txt @@ -252,7 +120,7 @@ positiveSound Classification
For sound classification, sound files with the same label must be stored in the same directory, and the label name is the directory name.
Example:
-dataset-import-example +dataset-import-example ├─Cat │ 10.wav │ 11.wav diff --git a/docs/modelarts/umn/modelarts_23_0009.html b/docs/modelarts/umn/modelarts_23_0009.html index 7fab0d5c..55205abe 100644 --- a/docs/modelarts/umn/modelarts_23_0009.html +++ b/docs/modelarts/umn/modelarts_23_0009.html @@ -5,60 +5,35 @@![]()
There are many requirements on manifest file compilation, so importing new data from an OBS directory is recommended. Generally, manifest file import is used to migrate ModelArts data across regions or accounts. If you have labeled data in a region using ModelArts, you can obtain the manifest file of the published dataset from the output path and then use that manifest file to import the dataset into ModelArts in another region or under another account. The imported data carries the labeling information and does not need to be labeled again, improving development efficiency.
The manifest file that contains information about the original file and labeling can be used in labeling, training, and inference scenarios. The manifest file that contains only information about the original file can be used in inference scenarios or used to generate an unlabeled dataset. The manifest file must meet the following requirements:
-
- The manifest file uses the UTF-8 encoding format. The source value of text classification can contain Chinese characters. However, Chinese characters are not recommended for other parameters.
- The manifest file uses the JSON Lines format (jsonlines.org). A line contains one JSON object.
{"source": "/path/to/image1.jpg", "annotation": ... } +-
- The manifest file uses the UTF-8 encoding format. The source value of text classification can contain Chinese characters. However, Chinese characters are not recommended for other parameters.
- The manifest file uses the JSON Lines format (jsonlines.org). A line contains one JSON object.
{"source": "/path/to/image1.jpg", "annotation": ... } {"source": "/path/to/image2.jpg", "annotation": ... } {"source": "/path/to/image3.jpg", "annotation": ... }In the preceding example, the manifest file contains multiple lines of JSON object.
- The manifest file can be generated by users, third-party tools, or ModelArts Data Labeling. The file name can be any valid file name. To facilitate the internal use of the ModelArts system, the file name generated by the ModelArts Data Labeling function consists of the following character strings: DatasetName-VersionName.manifest. For example, animal-v201901231130304123.manifest.
-Image Classification
+ "inference-loc":"/path/to/inference-output" +}-Image Segmentation
{ - "annotation": [{ - "annotation-format": "PASCAL VOC", - "type": "modelarts/image_segmentation", - "annotation-loc": "s3://path/to/annotation/image1.xml", - "creation-time": "2020-12-16 21:36:27", - "annotated-by": "human" - }], - "usage": "train", - "source": "s3://path/to/image1.jpg", - "id": "16d196c19bf61994d7deccafa435398c", - "sample-type": 0 -}-- -
- The parameters such as source, usage, and annotation are the same as those described in Image Classification. For details, see Table 1.
- annotation-loc indicates the path for saving the label file. This parameter is mandatory for image segmentation and object detection but optional for other labeling types.
- annotation-format indicates the format of the label file. This parameter is optional. The default value is PASCAL VOC. Only PASCAL VOC is supported.
- sample-type indicates a sample format. Value 0 indicates image, 1 text, 2 audio, 4 table, and 6 video.
- --
Table 4 PASCAL VOC format parameters - - - Parameter
-- Mandatory
-- Description
-- - folder
-- Yes
-- Directory where the data source is located
-- - filename
-- Yes
-- Name of the file to be labeled
-- - size
-- Yes
-- Image pixel
--
- width: image width. This parameter is mandatory.
- height: image height. This parameter is mandatory.
- depth: number of image channels. This parameter is mandatory.
- - segmented
-- Yes
-- Segmented or not
-- - mask_source
-- No
-- Segmentation mask path
-- - - object
-- Yes
-- Object detection information. Multiple object{} functions are generated for multiple objects.
--
- name: class of the labeled content. This parameter is mandatory.
- pose: shooting angle of the labeled content. This parameter is mandatory.
- truncated: whether the labeled content is truncated (0 indicates that the content is not truncated). This parameter is mandatory.
- occluded: whether the labeled content is occluded (0 indicates that the content is not occluded). This parameter is mandatory.
- difficult: whether the labeled object is difficult to identify (0 indicates that the object is easy to identify). This parameter is mandatory.
- confidence: confidence score of the labeled object. The value ranges from 0 to 1. This parameter is optional.
- bndbox: bounding box type. This parameter is mandatory. For details about the possible values, see Table 5.
- mask_color: label color, which is represented by the RGB value. This parameter is mandatory.
--
Table 5 Bounding box types - - - Type
-- Shape
-- Labeling Information
-- - - polygon
-- Polygon
-- Coordinates of points
-<x1>100<x1>
-<y1>100<y1>
-<x2>200<x2>
-<y2>100<y2>
-<x3>250<x3>
-<y3>150<y3>
-<x4>200<x4>
-<y4>200<y4>
-<x5>100<x5>
-<y5>200<y5>
-<x6>50<x6>
-<y6>150<y6>
-<x7>100<x7>
-<y7>100<y7>
-Example:-<?xml version="1.0" encoding="UTF-8" standalone="no"?> -<annotation> - <folder>NA</folder> - <filename>image_0006.jpg</filename> - <source> - <database>Unknown</database> - </source> - <size> - <width>230</width> - <height>300</height> - <depth>3</depth> - </size> - <segmented>1</segmented> - <mask_source>obs://xianao/out/dataset-8153-Jmf5ylLjRmSacj9KevS/annotation/V001/segmentationClassRaw/image_0006.png</mask_source> - <object> - <name>bike</name> - <pose>Unspecified</pose> - <truncated>0</truncated> - <difficult>0</difficult> - <mask_color>193,243,53</mask_color> - <occluded>0</occluded> - <polygon> - <x1>71</x1> - <y1>48</y1> - <x2>75</x2> - <y2>73</y2> - <x3>49</x3> - <y3>69</y3> - <x4>68</x4> - <y4>92</y4> - <x5>90</x5> - <y5>101</y5> - <x6>45</x6> - <y6>110</y6> - <x7>71</x7> - <y7>48</y7> - </polygon> - </object> -</annotation>-Text Classification
{ +-Text Classification
{ "source": "content://I like this product ", "id":"XGDVGS", "annotation": [ @@ -349,7 +180,7 @@ }The content parameter indicates the text to be labeled (in UTF-8 encoding format, which can be Chinese). The other parameters are the same as those described in Image Classification. For details, see Table 1.
Named Entity Recognition
{ +Named Entity Recognition
{ "source":"content://Michael Jordan is the most famous basketball player in the world.", "usage":"TRAIN", "annotation":[ @@ -377,35 +208,35 @@ }The parameters such as source, usage, and annotation are the same as those described in Image Classification. For details, see Table 1.
-Table 6 describes the property parameters. For example, if you want to extract Michael from "source":"content://Michael Jordan", the value of start_index is 0 and that of end_index is 7.
+Table 4 describes the property parameters. For example, if you want to extract Michael from "source":"content://Michael Jordan", the value of start_index is 0 and that of end_index is 7.
-
Table 6 Description of property parameters Parameter
+-
Table 4 Description of property parameters - - Parameter
Data Type
+- Data Type
Description
+Description
@modelarts:start_index
+- - @modelarts:start_index
Integer
+- Integer
Start position of the text. The value starts from 0, including the characters specified by start_index.
+Start position of the text. The value starts from 0, including the characters specified by start_index.
@modelarts:end_index
+- @modelarts:end_index
Integer
+- Integer
End position of the text, excluding the characters specified by end_index.
+End position of the text, excluding the characters specified by end_index.
Text Triplet
{ +Text Triplet
{ "source":"content://"Three Body" is a series of long science fiction novels created by Liu Cix.", "usage":"TRAIN", "annotation":[ @@ -459,46 +290,46 @@The parameters such as source, usage, and annotation are the same as those described in Image Classification. For details, see Table 1.
Table 5 describes the property parameters. @modelarts:start_index and @modelarts:end_index are the same as those of named entity recognition. For example, when source is set to content://"Three Body" is a series of long science fiction novels created by Liu Cix., Liu Cix is an entity person, Three Body is an entity book, the person is the author of the book, and the book is a work of the person.
-
Table 7 Description of property parameters Parameter
+-
Table 5 Description of property parameters - - Parameter
Data Type
+- Data Type
Description
+Description
@modelarts:start_index
+- - @modelarts:start_index
Integer
+- Integer
Start position of the triplet entities. The value starts from 0, including the characters specified by start_index.
+Start position of the triplet entities. The value starts from 0, including the characters specified by start_index.
@modelarts:end_index
+- - @modelarts:end_index
Integer
+- Integer
End position of the triplet entities, excluding the characters specified by end_index.
+End position of the triplet entities, excluding the characters specified by end_index.
@modelarts:from
+- - @modelarts:from
String
+- String
Start entity ID of the triplet relationship.
+Start entity ID of the triplet relationship.
@modelarts:to
+- @modelarts:to
String
+- String
Entity ID pointed to in the triplet relationship.
+Entity ID pointed to in the triplet relationship.
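The following sketch shows how these fields fit together, using only the documented keys; the entity IDs "E1" and "E2" and the helper function are illustrative assumptions, not part of the official schema.
sentence = '"Three Body" is a series of long science fiction novels created by Liu Cix.'

def span(text, entity):
    # Compute the documented start/end indexes for an entity occurrence.
    start = text.find(entity)
    return {"@modelarts:start_index": start, "@modelarts:end_index": start + len(entity)}

book = span(sentence, "Three Body")   # hypothetical entity ID "E1"
person = span(sentence, "Liu Cix")    # hypothetical entity ID "E2"
relation = {"@modelarts:from": "E2", "@modelarts:to": "E1"}  # the person is the author of the book
print(book, person, relation)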
Object Detection
{ +Object Detection
{ "source":"s3://path/to/image1.jpg", "usage":"TRAIN", "annotation": [ @@ -512,99 +343,99 @@ }-
- The parameters such as source, usage, and annotation are the same as those described in Image Classification. For details, see Table 1.
- annotation-loc indicates the path for saving the label file. This parameter is mandatory for object detection and image segmentation but optional for other labeling types.
- annotation-format indicates the format of the label file. This parameter is optional. The default value is PASCAL VOC. Only PASCAL VOC is supported.
Table 8 PASCAL VOC format parameters Parameter
+-
Table 6 PASCAL VOC format parameters - - Parameter
Mandatory
+- Mandatory
Description
+Description
folder
+- - folder
Yes
+- Yes
Directory where the data source is located
+Directory where the data source is located
filename
+- - filename
Yes
+- Yes
Name of the file to be labeled
+Name of the file to be labeled
size
+- - size
Yes
+- Yes
Image pixel
+Image pixel
- width: image width. This parameter is mandatory.
- height: image height. This parameter is mandatory.
- depth: number of image channels. This parameter is mandatory.
segmented
+- - segmented
Yes
+- Yes
Segmented or not
+Segmented or not
object
+- object
Yes
+- Yes
Object detection information. Multiple object{} functions are generated for multiple objects.
-+
- name: class of the labeled content. This parameter is mandatory.
- pose: shooting angle of the labeled content. This parameter is mandatory.
- truncated: whether the labeled content is truncated (0 indicates that the content is not truncated). This parameter is mandatory.
- occluded: whether the labeled content is occluded (0 indicates that the content is not occluded). This parameter is mandatory.
- difficult: whether the labeled object is difficult to identify (0 indicates that the object is easy to identify). This parameter is mandatory.
- confidence: confidence score of the labeled object. The value ranges from 0 to 1. This parameter is optional.
- bndbox: bounding box type. This parameter is mandatory. For details about the possible values, see Table 9.
Object detection information. Multiple object{} functions are generated for multiple objects.
+
- name: class of the labeled content. This parameter is mandatory.
- pose: shooting angle of the labeled content. This parameter is mandatory.
- truncated: whether the labeled content is truncated (0 indicates that the content is not truncated). This parameter is mandatory.
- occluded: whether the labeled content is occluded (0 indicates that the content is not occluded). This parameter is mandatory.
- difficult: whether the labeled object is difficult to identify (0 indicates that the object is easy to identify). This parameter is mandatory.
- confidence: confidence score of the labeled object. The value ranges from 0 to 1. This parameter is optional.
- bndbox: bounding box type. This parameter is mandatory. For details about the possible values, see Table 7.
Table 9 Description of bounding box types Type
+-
Table 7 Description of bounding box types - - Type
Shape
+- Shape
Labeling Information
+Labeling Information
point
+- - point
Point
+- Point
Coordinates of a point
+Coordinates of a point
<x>100<x>
<y>100<y>
line
+- - line
Line
+- Line
Coordinates of points
+Coordinates of points
<x1>100<x1>
<y1>100<y1>
<x2>200<x2>
<y2>200<y2>
bndbox
+- - bndbox
Rectangle
+- Rectangle
Coordinates of the upper left and lower right points
+Coordinates of the upper left and lower right points
<xmin>100<xmin>
<ymin>100<ymin>
<xmax>200<xmax>
<ymax>200<ymax>
polygon
+- - polygon
Polygon
+- Polygon
Coordinates of points
+Coordinates of points
<x1>100<x1>
<y1>100<y1>
<x2>200<x2>
@@ -619,11 +450,11 @@<y6>150<y6>
circle
+- circle
Circle
+- Circle
Center coordinates and radius
+Center coordinates and radius
<cx>100<cx>
<cy>100<cy>
<r>50<r>
@@ -632,7 +463,7 @@Example:-<annotation> +Example:<annotation> <folder>test_data</folder> <filename>260730932.jpg</filename> <size> @@ -711,7 +542,7 @@ </annotation>Sound Classification
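Pulling the pieces together, an object-detection manifest entry can be emitted as below; the OBS paths are placeholders, and only the fields discussed in this section (source, usage, annotation-loc, annotation-format) are filled in.
import json

entry = {
    "source": "s3://path/to/image1.jpg",
    "usage": "TRAIN",
    "annotation": [
        {
            "annotation-loc": "s3://path/to/image1.xml",   # PASCAL VOC label file for the image
            "annotation-format": "PASCAL VOC",
        }
    ],
}
print(json.dumps(entry))  # one line of the manifest file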
{ +-Sound Classification
{ "source": "s3://path/to/pets.wav", "annotation": [ @@ -725,7 +556,7 @@ }The parameters such as source, usage, and annotation are the same as those described in Image Classification. For details, see Table 1.
Speech Labeling
{ +-Speech Labeling
{ "source":"s3://path/to/audio1.wav", "annotation":[ { @@ -740,7 +571,7 @@ }
- The parameters such as source, usage, and annotation are the same as those described in Image Classification. For details, see Table 1.
- The @modelarts:content parameter in property indicates speech labeling. The data type is String.
Speech Paragraph Labeling
{ +Speech Paragraph Labeling
{ "source":"s3://path/to/audio1.wav", "usage":"TRAIN", "annotation":[ @@ -771,43 +602,43 @@ } ] }-
- The parameters such as source, usage, and annotation are the same as those described in Image Classification. For details, see Table 1.
- Table 10 describes the property parameters. -
Table 10 Description of property parameters Parameter
+
- The parameters such as source, usage, and annotation are the same as those described in Image Classification. For details, see Table 1.
- Table 8 describes the property parameters. +
Table 8 Description of property parameters - - Parameter
Data Type
+- Data Type
Description
+Description
@modelarts:start_time
+- - @modelarts:start_time
String
+- String
Start time of the sound. The format is hh:mm:ss.SSS.
+Start time of the sound. The format is hh:mm:ss.SSS.
hh indicates the hour, mm indicates the minute, ss indicates the second, and SSS indicates the millisecond.
@modelarts:end_time
+- - @modelarts:end_time
String
+- String
End time of the sound. The format is hh:mm:ss.SSS.
+End time of the sound. The format is hh:mm:ss.SSS.
hh indicates the hour, mm indicates the minute, ss indicates the second, and SSS indicates the millisecond.
@modelarts:source
+- - @modelarts:source
String
+- String
Sound source
+Sound source
@modelarts:content
+@@ -815,7 +646,7 @@ - - @modelarts:content
String
+- String
Sound content
+Sound content
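The hh:mm:ss.SSS strings can be produced or checked with a small helper; a minimal sketch follows (the sample durations are illustrative).
def to_hhmmss_sss(seconds):
    # Format a duration in seconds as hh:mm:ss.SSS.
    millis_total = int(round(seconds * 1000))
    s, millis = divmod(millis_total, 1000)
    h, rem = divmod(s, 3600)
    m, s = divmod(rem, 60)
    return "{:02d}:{:02d}:{:02d}.{:03d}".format(h, m, s, millis)

print(to_hhmmss_sss(3.5))    # 00:00:03.500
print(to_hhmmss_sss(75.25))  # 00:01:15.250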
Video Labeling
{ +Video Labeling
{ "annotation": [{ "annotation-format": "PASCAL VOC", "type": "modelarts/object_detection", @@ -835,132 +666,132 @@ }-
- The parameters such as source, usage, and annotation are the same as those described in Image Classification. For details, see Table 1.
- annotation-loc indicates the path for saving the label file. This parameter is mandatory for object detection but optional for other labeling types.
- annotation-format indicates the format of the label file. This parameter is optional. The default value is PASCAL VOC. Only PASCAL VOC is supported.
- sample-type indicates a sample format. Value 0 indicates image, 1 text, 2 audio, 4 table, and 6 video.
Table 11 property parameters Parameter
+-
Table 9 property parameters - - Parameter
Data Type
+- Data Type
Description
+Description
@modelarts:parent_duration
+- - @modelarts:parent_duration
Double
+- Double
Duration of the labeled video, in seconds
+Duration of the labeled video, in seconds
@modelarts:time_in_video
+- - @modelarts:time_in_video
Double
+- Double
Timestamp of the labeled video frame, in seconds
+Timestamp of the labeled video frame, in seconds
@modelarts:parent_source
+- @modelarts:parent_source
String
+- String
OBS path of the labeled video
+OBS path of the labeled video
Table 12 PASCAL VOC format parameters Parameter
+-
Table 10 PASCAL VOC format parameters - - Parameter
Mandatory
+- Mandatory
Description
+Description
folder
+- - folder
Yes
+- Yes
Directory where the data source is located
+Directory where the data source is located
filename
+- - filename
Yes
+- Yes
Name of the file to be labeled
+Name of the file to be labeled
size
+- - size
Yes
+- Yes
Image pixel
+Image pixel
- width: image width. This parameter is mandatory.
- height: image height. This parameter is mandatory.
- depth: number of image channels. This parameter is mandatory.
segmented
+- - segmented
Yes
+- Yes
Segmented or not
+Segmented or not
object
+- object
Yes
+- Yes
Object detection information. Multiple object{} functions are generated for multiple objects.
-+
- name: class of the labeled content. This parameter is mandatory.
- pose: shooting angle of the labeled content. This parameter is mandatory.
- truncated: whether the labeled content is truncated (0 indicates that the content is not truncated). This parameter is mandatory.
- occluded: whether the labeled content is occluded (0 indicates that the content is not occluded). This parameter is mandatory.
- difficult: whether the labeled object is difficult to identify (0 indicates that the object is easy to identify). This parameter is mandatory.
- confidence: confidence score of the labeled object. The value ranges from 0 to 1. This parameter is optional.
- bndbox: bounding box type. This parameter is mandatory. For details about the possible values, see Table 13.
Object detection information. Multiple object{} functions are generated for multiple objects.
+
- name: class of the labeled content. This parameter is mandatory.
- pose: shooting angle of the labeled content. This parameter is mandatory.
- truncated: whether the labeled content is truncated (0 indicates that the content is not truncated). This parameter is mandatory.
- occluded: whether the labeled content is occluded (0 indicates that the content is not occluded). This parameter is mandatory.
- difficult: whether the labeled object is difficult to identify (0 indicates that the object is easy to identify). This parameter is mandatory.
- confidence: confidence score of the labeled object. The value ranges from 0 to 1. This parameter is optional.
- bndbox: bounding box type. This parameter is mandatory. For details about the possible values, see Table 11.
Table 13 Bounding box types Type
+-
Table 11 Bounding box types - - Type
Shape
+- Shape
Labeling Information
+Labeling Information
point
+- - point
Point
+- Point
Coordinates of a point
+Coordinates of a point
<x>100<x>
<y>100<y>
line
+- - line
Line
+- Line
Coordinates of points
+Coordinates of points
<x1>100<x1>
<y1>100<y1>
<x2>200<x2>
<y2>200<y2>
bndbox
+- - bndbox
Rectangle
+- Rectangle
Coordinates of the upper left and lower right points
+Coordinates of the upper left and lower right points
<xmin>100<xmin>
<ymin>100<ymin>
<xmax>200<xmax>
<ymax>200<ymax>
polygon
+- - polygon
Polygon
+- Polygon
Coordinates of points
+Coordinates of points
<x1>100<x1>
<y1>100<y1>
<x2>200<x2>
@@ -975,11 +806,11 @@<y6>150<y6>
circle
+- circle
Circle
+- Circle
Center coordinates and radius
+Center coordinates and radius
<cx>100<cx>
<cy>100<cy>
<r>50<r>
@@ -988,7 +819,7 @@Example:<annotation> +Example:<annotation> <folder>test_data</folder> <filename>260730932_t1.473722.jpg.jpg</filename> <size> diff --git a/docs/modelarts/umn/modelarts_23_0010.html b/docs/modelarts/umn/modelarts_23_0010.html index 63167dd3..42fdc399 100644 --- a/docs/modelarts/umn/modelarts_23_0010.html +++ b/docs/modelarts/umn/modelarts_23_0010.html @@ -9,8 +9,6 @@- Object Detection
-
- Image Segmentation
-- Text Classification
- Named Entity Recognition
@@ -23,8 +21,6 @@- Speech Paragraph Labeling
-
- Video Labeling
-diff --git a/docs/modelarts/umn/modelarts_23_0011.html b/docs/modelarts/umn/modelarts_23_0011.html index fd5a51f7..625d3983 100644 --- a/docs/modelarts/umn/modelarts_23_0011.html +++ b/docs/modelarts/umn/modelarts_23_0011.html @@ -15,7 +15,7 @@-The following filter criteria are supported. You can set one or more filter criteria.
- Label: Select All or one or more labels you specified.
- Sample Creation Time: Select Within 1 month, Within 1 day, or Custom to customize a time range.
- File Name or Path: Filter files by file name or file storage path.
- Labeled By: Select the name of the user who performs the labeling operation.
Labeling Images (Manually)
The dataset details page displays images on the All, Labeled, and Unlabeled tabs. Images on the All tab page are displayed by default. Click an image to preview it. For the images that have been labeled, the label information is displayed at the bottom of the preview page.
+-Labeling Images (Manually)
The dataset details page displays images on the All, Labeled, and Unlabeled tabs. Images on the All tab page are displayed by default. Click an image to preview it. For the images that have been labeled, the label information is displayed at the bottom of the preview page.
- On the Unlabeled tab page, select the images to be labeled.
- Manual selection: In the image list, click the selection box in the upper left corner of an image to enter the selection mode, indicating that the image is selected. You can select multiple images of the same type and add labels to them together.
- Batch selection: If all the images on the current page of the image list belong to the same type, you can click Select Images on Current Page in the upper right corner to select all the images on the current page.
- Add labels to the selected images.
diff --git a/docs/modelarts/umn/modelarts_23_0012.html b/docs/modelarts/umn/modelarts_23_0012.html index d1d6b600..8d502192 100644 --- a/docs/modelarts/umn/modelarts_23_0012.html +++ b/docs/modelarts/umn/modelarts_23_0012.html @@ -17,11 +17,11 @@
- In the label adding area on the right, set the label in the Label text box.
Click the Label text box and select an existing label from the drop-down list. If the existing labels cannot meet the requirements, you can go to the page for modifying the dataset and add labels.
- Confirm the Labels of Selected Image information and click OK. The selected image is automatically moved to the Labeled tab page. On the Unlabeled and All tab pages, the labeling information is updated along with the labeling process, including the added label names and the number of images for each label.
Labeling Images (Manually)
The dataset details page provides the Labeled and Unlabeled tabs. The All tab page is displayed by default.
+Labeling Images (Manually)
The dataset details page provides the Labeled and Unlabeled tabs. The All tab page is displayed by default.
- On the Unlabeled tab page, click an image. The image labeling page is displayed. For details about how to use common buttons on the Labeled tab page, see Table 2.
- In the left tool bar, select a proper labeling shape. The default labeling shape is a rectangle. In this example, the rectangle is used for labeling.
-![]()
On the left of the page, multiple tools are provided for you to label images. However, you can use only one tool at a time.
Table 1 Supported bounding box Icon
+-
Table 1 Supported bounding box Icon
diff --git a/docs/modelarts/umn/modelarts_23_0013.html b/docs/modelarts/umn/modelarts_23_0013.html index 80ed9262..bd97afcc 100644 --- a/docs/modelarts/umn/modelarts_23_0013.html +++ b/docs/modelarts/umn/modelarts_23_0013.html @@ -7,7 +7,7 @@ Description
-Starting Labeling
- Log in to the ModelArts management console. In the left navigation pane, choose Data Management > Datasets. The Datasets page is displayed.
- In the dataset list, select the dataset to be labeled based on the labeling type, and click the dataset name to go to the Dashboard tab page of the dataset.
By default, the Dashboard tab page of the current dataset version is displayed. If you need to label the dataset of another version, click the Versions tab and then click Set to Current Version in the right pane. For details, see Managing Dataset Versions.
- On the Dashboard page of the dataset, click Label in the upper right corner. The dataset details page is displayed. By default, all data of the dataset is displayed on the dataset details page.
Labeling Content
The dataset details page displays the labeled and unlabeled text files in the dataset. The Unlabeled tab page is displayed by default.
+Labeling Content
The dataset details page displays the labeled and unlabeled text files in the dataset. The Unlabeled tab page is displayed by default.
diff --git a/docs/modelarts/umn/modelarts_23_0014.html b/docs/modelarts/umn/modelarts_23_0014.html index 965b46cd..827b0855 100644 --- a/docs/modelarts/umn/modelarts_23_0014.html +++ b/docs/modelarts/umn/modelarts_23_0014.html @@ -6,7 +6,7 @@
- On the Unlabeled tab page, the objects to be labeled are listed in the left pane. In the list, click the text object to be labeled, and select a label in the Label Set area in the right pane. Multiple labels can be added to a labeling object.
You can repeat this operation to select objects and add labels to the objects.
Figure 1 Labeling for text classification- After all objects are labeled, click Save Current Page at the bottom of the page to complete labeling text files on the Unlabeled tab page.
-Starting Labeling
- Log in to the ModelArts management console. In the left navigation pane, choose Data Management > Datasets. The Datasets page is displayed.
- In the dataset list, select the dataset to be labeled based on the labeling type, and click the dataset name to go to the Dashboard tab page of the dataset.
By default, the Dashboard tab page of the current dataset version is displayed. If you need to label the dataset of another version, click the Versions tab and then click Set to Current Version in the right pane. For details, see Managing Dataset Versions.
- On the Dashboard page of the dataset, click Label in the upper right corner. The dataset details page is displayed. By default, all data of the dataset is displayed on the dataset details page.
Labeling Content
The dataset details page displays the labeled and unlabeled text files in the dataset. The Unlabeled tab page is displayed by default.
+Labeling Content
The dataset details page displays the labeled and unlabeled text files in the dataset. The Unlabeled tab page is displayed by default.
- On the Unlabeled tab page, the objects to be labeled are listed in the left pane. In the list, click the text object to be labeled, select a part of text displayed under Label Set for labeling, and select a label in the Label Set area in the right pane. Multiple labels can be added to a labeling object.
You can repeat this operation to select objects and add labels to the objects.
Figure 1 Labeling for named entity recognitiondiff --git a/docs/modelarts/umn/modelarts_23_0018.html b/docs/modelarts/umn/modelarts_23_0018.html index a7e998f1..9421a582 100644 --- a/docs/modelarts/umn/modelarts_23_0018.html +++ b/docs/modelarts/umn/modelarts_23_0018.html @@ -47,7 +47,7 @@Directory Structure of Related Files After the Dataset Is Published
Datasets are managed based on OBS directories. After a new version is published, the directory is generated based on the new version in the output dataset path.
Take an image classification dataset as an example. After the dataset is published, the directory structure of related files generated in OBS is as follows:
-|-- user-specified-output-path +|-- user-specified-output-path |-- DatasetName-datasetId |-- annotation |-- VersionMame1 @@ -56,7 +56,7 @@ ... |-- ...The following uses object detection as an example. If a manifest file is imported to the dataset, the following provides the directory structure of related files after the dataset is published:
-|-- user-specified-output-path +|-- user-specified-output-path |-- DatasetName-datasetId |-- annotation |-- VersionMame1 diff --git a/docs/modelarts/umn/modelarts_23_0033.html b/docs/modelarts/umn/modelarts_23_0033.html index f3f3a0b1..bb9a2311 100644 --- a/docs/modelarts/umn/modelarts_23_0033.html +++ b/docs/modelarts/umn/modelarts_23_0033.html @@ -104,7 +104,7 @@Constraints
+
- For security purposes, the root permission is not granted to the notebook instances integrated in ModelArts. You can use the non-privileged user jovyan or ma-user (using Multi-Engine) to perform operations. Therefore, you cannot use apt-get to install the OS software.
- Notebook instances support only standalone training under the current AI engine framework. If you need to use distributed training, use ModelArts training jobs and specify multiple nodes in the resource pool.
- ModelArts DevEnviron does not support apt-get. You can use a custom image to train a model.
- Notebook instances do not support GUI-related libraries, such as PyQt.
- Notebook instances created using Ascend specifications cannot be attached to EVS disks.
- Notebook instances cannot be connected to DWS and database services.
- Notebook instances cannot directly read files in OBS. You need to download the files to the local host. To access data in OBS, use or for interaction.
- DevEnviron does not support TensorBoard. Use the visualization job function under Training Jobs.
- After a notebook instance is created, you cannot modify its specifications. For example, you cannot change the CPU specifications to GPU specifications or change the work environment. Therefore, select the specifications required by the service when creating a notebook instance, or save your code and data to OBS in a timely manner during development so that you can quickly upload the code and data to a new notebook instance.
- If the code output is still displayed after you close the page and open it again, use Terminal.
Constraints
- For security purposes, the root permission is not granted to the notebook instances integrated in ModelArts. You can use the non-privileged user jovyan or ma-user (using Multi-Engine) to perform operations. Therefore, you cannot use apt-get to install the OS software.
- Notebook instances support only standalone training under the current AI engine framework. If you need to use distributed training, use ModelArts training jobs and specify multiple nodes in the resource pool.
- ModelArts DevEnviron does not support apt-get. You can use a custom image to train a model.
- Notebook instances do not support GUI-related libraries, such as PyQt.
- Notebook instances cannot be connected to DWS and database services.
- Notebook instances cannot directly read files in OBS. You need to download the files to the local host. To access data in OBS, use or for interaction.
- DevEnviron does not support TensorBoard. Use the visualization job function under Training Jobs.
- After a notebook instance is created, you cannot modify its specifications. For example, you cannot change the CPU specifications to GPU specifications or change the work environment. Therefore, select the specifications required by the service when creating a notebook instance, or save your code and data to OBS in a timely manner during development so that you can quickly upload the code and data to a new notebook instance.
- If the code output is still displayed after you close the page and open it again, use Terminal.
diff --git a/docs/modelarts/umn/modelarts_23_0034.html b/docs/modelarts/umn/modelarts_23_0034.html index b86f220a..51c430aa 100644 --- a/docs/modelarts/umn/modelarts_23_0034.html +++ b/docs/modelarts/umn/modelarts_23_0034.html @@ -58,7 +58,7 @@Instance Flavor
If you select a public resource pool, available flavors vary depending on the selected type.
-+
- If you select CPU for Type, available options include 2 vCPUs | 8 GiB and 8 vCPUs | 32 GiB.
- If you select GPU for Type, the available option is GPU: 1 x v100NV32 CPU: 8 vCPUs | 64 GiB.
- If you select Ascend for Type, available options include Ascend: 1 x Ascend 910 CPU: 24 vCPUs | 96 GiB and Ascend: 8 x Ascend 910 CPU: 192 vCPUs | 720 GiB.
- If you select CPU for Type, available options include 2 vCPUs | 8 GiB and 8 vCPUs | 32 GiB.
- If you select GPU for Type, the available option is GPU: 1 x v100NV32 CPU: 8 vCPUs | 64 GiB.
Storage
diff --git a/docs/modelarts/umn/modelarts_23_0039.html b/docs/modelarts/umn/modelarts_23_0039.html index a6d67ba9..eb01c754 100644 --- a/docs/modelarts/umn/modelarts_23_0039.html +++ b/docs/modelarts/umn/modelarts_23_0039.html @@ -4,70 +4,37 @@In notebook instances, you can use ModelArts SDKs to manage OBS, training jobs, models, and real-time services.
For details about how to use ModelArts SDKs, see ModelArts SDK Reference.
Notebooks carry the authentication (AK/SK) and region information about login users. Therefore, SDK session authentication can be completed without entering parameters.
-diff --git a/docs/modelarts/umn/modelarts_23_0044.html b/docs/modelarts/umn/modelarts_23_0044.html index f87d8c88..7e00067c 100644 --- a/docs/modelarts/umn/modelarts_23_0044.html +++ b/docs/modelarts/umn/modelarts_23_0044.html @@ -3,37 +3,44 @@Example Code
- Creating a training job
+
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19 -20 -21 -from modelarts.session import Session -from modelarts.estimator import Estimator -session = Session() -estimator = Estimator( - modelarts_session=session, - framework_type='PyTorch', # AI engine name - framework_version='PyTorch-1.0.0-python3.6', # AI engine version - code_dir='/obs-bucket-name/src/', # Training script directory - boot_file='/obs-bucket-name/src/pytorch_sentiment.py', # Training startup script directory - log_url='/obs-bucket-name/log/', # Training log directory - hyperparameters=[ - {"label":"classes", - "value": "10"}, - {"label":"lr", - "value": "0.001"} - ], - output_path='/obs-bucket-name/output/', # Training output directory - train_instance_type='modelarts.vm.gpu.p100', # Training environment specifications - train_instance_count=1, # Number of training nodes - job_description='pytorch-sentiment with ModelArts SDK') # Training job description -job_instance = estimator.fit(inputs='/obs-bucket-name/data/train/', wait=False, job_name='my_training_job') -Example Code
-
- Creating a training job
from modelarts.session import Session +from modelarts.estimator import Estimator +session = Session() +estimator = Estimator( + modelarts_session=session, + framework_type='PyTorch', # AI engine name + framework_version='PyTorch-1.0.0-python3.6', # AI engine version + code_dir='/obs-bucket-name/src/', # Training script directory + boot_file='/obs-bucket-name/src/pytorch_sentiment.py', # Training startup script directory + log_url='/obs-bucket-name/log/', # Training log directory + hyperparameters=[ + {"label":"classes", + "value": "10"}, + {"label":"lr", + "value": "0.001"} + ], + output_path='/obs-bucket-name/output/', # Training output directory + train_instance_type='modelarts.vm.gpu.p100', # Training environment specifications + train_instance_count=1, # Number of training nodes + job_description='pytorch-sentiment with ModelArts SDK') # Training job description +job_instance = estimator.fit(inputs='/obs-bucket-name/data/train/', wait=False, job_name='my_training_job')
- Querying a model list
-
1 -2 -3 -4 -from modelarts.session import Session -from modelarts.model import Model -session = Session() -model_list_resp = Model.get_model_list(session, model_status="published", model_name="digit", order="desc") -- Querying service details
+
1 -2 -3 -4 -5 -from modelarts.session import Session -from modelarts.model import Predictor -session = Session() -predictor_instance = Predictor(session, service_id="input your service_id") -predictor_info_resp = predictor_instance.get_service_info() -
- Querying a model list
from modelarts.session import Session +from modelarts.model import Model +session = Session() +model_list_resp = Model.get_model_list(session, model_status="published", model_name="digit", order="desc")+- Querying service details
from modelarts.session import Session +from modelarts.model import Predictor +session = Session() +predictor_instance = Predictor(session, service_id="input your service_id") +predictor_info_resp = predictor_instance.get_service_info()Introduction to Model Training
ModelArts provides model training for you to view the training effect, based on which you can adjust your model parameters. You can select resource pools (CPU or GPU) with different instance flavors for model training. In addition to the models developed by users, ModelArts also provides built-in algorithms. You can directly adjust parameters of the built-in algorithms, instead of developing a model by yourself, to obtain a satisfactory model.
Description of the Model Training Function
-
Table 1 Function description Function
+diff --git a/docs/modelarts/umn/modelarts_23_0051.html b/docs/modelarts/umn/modelarts_23_0051.html index 0463c178..651657bc 100644 --- a/docs/modelarts/umn/modelarts_23_0051.html +++ b/docs/modelarts/umn/modelarts_23_0051.html @@ -10,8 +10,6 @@
Table 1 Function description - - Function
Description
+Description
Reference
Built-in algorithms
+- - Built-in algorithms
Based on the frequently-used AI engines in the industry, ModelArts provides built-in algorithms to meet a wide range of your requirements. You can directly select the algorithms for training jobs, without concerning model development.
+Based on the frequently-used AI engines in the industry, ModelArts provides built-in algorithms to meet a wide range of your requirements. You can directly select the algorithms for training jobs, without concerning model development.
Training job management
+- - Training job management
You can create training jobs, manage training job versions, and view details of training jobs, and evaluation details.
+You can create training jobs, manage training job versions, and view details of training jobs, and evaluation details.
Job parameter management
++ - Job parameter management
You can save the parameter settings of a training job (including the data source, algorithm source, running parameters, resource pool parameters, and more) as a job parameter, which can be directly used when you create a training job, eliminating the need to set parameters one by one. As such, the configuration efficiency can be greatly improved.
+You can save the parameter settings of a training job (including the data source, algorithm source, running parameters, resource pool parameters, and more) as a job parameter, which can be directly used when you create a training job, eliminating the need to set parameters one by one. As such, the configuration efficiency can be greatly improved.
+ Model training visualization
++ TensorBoard and MindInsight effectively display the computational graph of a model in the running process, the trend of all metrics in time, and the data used in the training.
++ +- Managing Model Versions
-
- Model Compression and Conversion
diff --git a/docs/modelarts/umn/modelarts_23_0052.html b/docs/modelarts/umn/modelarts_23_0052.html index 309172f4..26aefb70 100644 --- a/docs/modelarts/umn/modelarts_23_0052.html +++ b/docs/modelarts/umn/modelarts_23_0052.html @@ -1,26 +1,28 @@
-Introduction to Model Management
-AI model development and optimization require frequent iterations and debugging. Changes in datasets, training code, or parameters may affect the quality of models. If the metadata of the development process cannot be managed in a unified manner, the optimal model may fail to be reproduced.
+AI model development and optimization require frequent iterations and debugging. Changes in datasets, training code, or parameters may affect the quality of models. If the metadata of the development process cannot be managed in a unified manner, the optimal model may fail to be reproduced.
ModelArts model management allows you to import models generated with all training versions to manage all iterated and debugged models in a unified manner.
+Usage Restrictions
- In an automatic learning project, after a model is deployed, the model is automatically uploaded to the model management list. However, models generated by automatic learning cannot be downloaded and can be used only for deployment and rollout.
Methods of Importing a Model
+
- Importing from Trained Models: You can create a training job on ModelArts and complete model training. After obtaining a satisfactory model, import the model to the Model Management page for model deployment.
- Importing from a Template: Because the configurations of models with the same functions are similar, ModelArts integrates the configurations of such models into a common template. By using this template, you can easily and quickly import models without compiling the config.json configuration file.
- Importing from a Container Image: For AI engines that are not supported by ModelArts, you can import the model you compile to ModelArts using custom images.
- Importing from OBS: If you use a frequently-used framework to develop and train a model locally, you can import the model to ModelArts for model deployment.
Model Management Functions
-
Table 1 Model management functions - Supported Function
+
Table 1 Model management functions - - Supported Function
Description
+Description
+ - - Import the trained models to ModelArts for unified management. You can import models using four methods. The following provides the operation guide for each method.
- +Import the trained models to ModelArts for unified management. You can import models using four methods. The following provides the operation guide for each method.
++ diff --git a/docs/modelarts/umn/modelarts_23_0060.html b/docs/modelarts/umn/modelarts_23_0060.html index 19ef6604..a943e104 100644 --- a/docs/modelarts/umn/modelarts_23_0060.html +++ b/docs/modelarts/umn/modelarts_23_0060.html @@ -113,9 +113,20 @@ - - To facilitate source tracing and repeated model tuning, ModelArts provides the model version management function. You can manage models based on versions.
+To facilitate source tracing and repeated model tuning, ModelArts provides the model version management function. You can manage models based on versions.
CPU: 8 vCPUs | 64 GiB GPU: 1 x V100
++ - ExeML specifications (CPU)
+ExeML specifications (GPU)
Suitable for running GPU models.
++ Only be used by models trained in ExeML projects.
++ + CPU: 2 vCPUs | 8 GiB
++ Suitable for models with only CPU loads.
+diff --git a/docs/modelarts/umn/modelarts_23_0061.html b/docs/modelarts/umn/modelarts_23_0061.html index 449d02d9..59b4c485 100644 --- a/docs/modelarts/umn/modelarts_23_0061.html +++ b/docs/modelarts/umn/modelarts_23_0061.html @@ -201,12 +201,12 @@ + CPU: 8 vCPUs | 32 GiB GPU: 1 x T4
+Suitable for models requiring CPU and GPU (NVIDIA T4) resources.
Pound key (#) indicates that a variable is referenced. The matched character string must be enclosed in single quotation marks.
-#{Built-in variable} == 'Character string' +#{Built-in variable} == 'Character string' #{Built-in variable} matches 'Regular expression'
- Example 1:
If the account name for invoking the inference request is User A, the specified version is matched.
-#DOMAIN_NAME == 'User A'+#DOMAIN_NAME == 'User A'- Example 2:
If the account name in the inference request starts with op, the specified version is matched.
-#DOMAIN_NAME matches 'op.*'+#DOMAIN_NAME matches 'op.*'
Table 5 Common regular expressions - @@ -264,13 +264,13 @@ Character
Figure 1 Traffic distribution by user-- If multiple versions of a real-time service are deployed for dark launch, customized settings can be used to access different versions through the header.
Start with #HEADER_, indicating that the header is referenced as a condition.#HEADER_{key} == '{value}' +- If multiple versions of a real-time service are deployed for dark launch, customized settings can be used to access different versions through the header.
Start with #HEADER_, indicating that the header is referenced as a condition.#HEADER_{key} == '{value}' #HEADER_{key} matches '{value}'
- Example 1:
If the header of an inference HTTP request contains a version and the value is 0.0.1, the condition is met. Otherwise, the condition is not met.
-#HEADER_version == '0.0.1'+#HEADER_version == '0.0.1'- Example 2:
If the header of an inference HTTP request contains testheader and the value starts with mock, the rule is matched.
-#HEADER_testheader matches 'mock.*'+#HEADER_testheader matches 'mock.*'Figure 2 Using the header to access different versions- If a real-time service version supports different running configurations, you can use Setting Name and Setting Value to specify customized running parameters so that different users can use different running configurations.
Example:
diff --git a/docs/modelarts/umn/modelarts_23_0063.html b/docs/modelarts/umn/modelarts_23_0063.html index 5211411c..493592b6 100644 --- a/docs/modelarts/umn/modelarts_23_0063.html +++ b/docs/modelarts/umn/modelarts_23_0063.html @@ -8,7 +8,7 @@For details about how to obtain a token, see Obtaining a User Token.
- On the Body tab page, file input and text input are available.
- File input
Select form-data. Set KEY to the input parameter of the model, for example, images. Set VALUE to an image to be inferred (only one image can be inferred).
- Text input
Select raw and then JSON(application/json). Enter the request body in the text box below. An example request body is as follows:
-{ +{ "meta": { "uuid": "10eb0091-887f-4839-9929-cbc884f1e20e" }, @@ -31,11 +31,11 @@-Method 2: Run the cURL Command to Send an Inference Request
The command for sending inference requests can be input as a file or text.
- File input
curl -k -F 'images=@Image path' -H 'X-Auth-Token:Token value' -X POST Real-time service URL+diff --git a/docs/modelarts/umn/modelarts_23_0066.html b/docs/modelarts/umn/modelarts_23_0066.html index 21a75d86..4f3b0a4d 100644 --- a/docs/modelarts/umn/modelarts_23_0066.html +++ b/docs/modelarts/umn/modelarts_23_0066.html @@ -67,20 +67,20 @@
- File input
curl -k -F 'images=@Image path' -H 'X-Auth-Token:Token value' -X POST Real-time service URL
- -k indicates that SSL websites can be accessed without using a security certificate.
- -F indicates file input. In this example, the parameter name is images, which can be changed as required. The image storage path follows @.
- -H indicates the header of the POST command. X-Auth-Token is the KEY value on the Headers page. Token value indicates the obtained token.
- POST is followed by the API URL of the real-time service.
The following is an example of the cURL command for inference with file input:
-curl -k -F 'images=@/home/data/test.png' -H 'X-Auth-Token:MIISkAY***80T9wHQ==' -X POST https://modelarts-infers-1.xxx/v1/infers/eb3e0c54-3dfa-4750-af0c-95c45e5d3e83-- Text input
curl -k -d '{"data":{"req_data":[{"sepal_length":3,"sepal_width":1,"petal_length":2.2,"petal_width":4}]}}' -H 'X-Auth-Token:MIISkAY***80T9wHQ==' -H 'Content-type: application/json' -X POST https://modelarts-infers-1.xxx/v1/infers/eb3e0c54-3dfa-4750-af0c-95c45e5d3e83+curl -k -F 'images=@/home/data/test.png' -H 'X-Auth-Token:MIISkAY***80T9wHQ==' -X POST https://modelarts-infers-1.xxx/v1/infers/eb3e0c54-3dfa-4750-af0c-95c45e5d3e83+- Text input
curl -k -d '{"data":{"req_data":[{"sepal_length":3,"sepal_width":1,"petal_length":2.2,"petal_width":4}]}}' -H 'X-Auth-Token:MIISkAY***80T9wHQ==' -H 'Content-type: application/json' -X POST https://modelarts-infers-1.xxx/v1/infers/eb3e0c54-3dfa-4750-af0c-95c45e5d3e83-d indicates the text input of the request body.
-Manifest File Specifications
Batch services on the inference platform support manifest files. A manifest file describes the input and output data.
Example input manifest file
- File name: test.manifest
- File content:
{"source": "<obs path>/test/data/1.jpg"} +Example input manifest file
- File name: test.manifest
- File content:
{"source": "<obs path>/test/data/1.jpg"} {"source": "https://infers-data.obs.xxx.com:443/xgboosterdata/data.csv?AccessKeyId=2Q0V0TQ461N26DDL18RB&Expires=1550611914&Signature=wZBttZj5QZrReDhz1uDzwve8GpY%3D&x-obs-security-token=gQpzb3V0aGNoaW5hixvY8V9a1SnsxmGoHYmB1SArYMyqnQT-ZaMSxHvl68kKLAy5feYvLDM..."}-- File requirements:
+
- The file name extension must be .manifest.
- The file content is in JSON format. Each row describes one piece of input data, which must point to a specific file rather than a folder.
- File requirements:
- The file name extension must be .manifest.
- The file content is in JSON format. Each row describes one piece of input data, which must point to a specific file rather than a folder.
- A source field must be defined for the JSON content. The field value is the OBS URL of the file: <obs path>/{{Bucket name}}/{{Object name}}.
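As an illustration of these requirements, the following minimal Python sketch writes an input manifest file for a list of OBS objects; the bucket and object names are placeholders:
import json

# Hypothetical OBS objects to be inferred; each entry must point to a file, not a folder.
obs_files = [
    "obs://test-bucket/test/data/1.jpg",
    "obs://test-bucket/test/data/2.jpg",
]

# Each line of the .manifest file is a JSON object with a source field.
with open("test.manifest", "w") as f:
    for path in obs_files:
        f.write(json.dumps({"source": path}) + "\n")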
Example output manifest file
-If you use an input manifest file, the output directory will contain an output manifest file.
- Assume that the output path is //test-bucket/test/. The result is stored in the following path:
OBS bucket/directory name +If you use an input manifest file, the output directory will contain an output manifest file.@@ -89,72 +89,34 @@
- Assume that the output path is //test-bucket/test/. The result is stored in the following path:
OBS bucket/directory name ├── test-bucket │ ├── test │ │ ├── infer-result-0.manifest │ │ ├── infer-result │ │ │ ├── 1.jpg_result.txt │ │ │ ├── 2.jpg_result.txt-- Content of the infer-result-0.manifest file:
{"source": "<obs path>/obs-data-bucket/test/data/1.jpg", "inference-loc": "<obs path>/test-bucket/test/infer-result/1.jpg_result.txt"} +- Content of the infer-result-0.manifest file:
{"source": "<obs path>/obs-data-bucket/test/data/1.jpg", "inference-loc": "<obs path>/test-bucket/test/infer-result/1.jpg_result.txt"} {"source ": "https://infers-data.obs.xxx.com:443/xgboosterdata/2.jpg?AccessKeyId=2Q0V0TQ461N26DDL18RB&Expires=1550611914&Signature=wZBttZj5QZrReDhz1uDzwve8GpY%3D&x-obs-security-token=gQpzb3V0aGNoaW5hixvY8V9a1SnsxmGoHYmB1SArYMyqnQT-ZaMSxHvl68kKLAy5feYvLDMNZWxzhBZ6Q-3HcoZMh9gISwQOVBwm4ZytB_m8sg1fL6isU7T3CnoL9jmvDGgT9VBC7dC1EyfSJrUcqfB...", "inference-loc": "obs://test-bucket/test/infer-result/2.jpg_result.txt"}Example Mapping
The following example shows the relationship between the configuration file, mapping rule, CSV data, and inference request.
Assume that the apis parameter in the configuration file used by your model is as follows:
-+]
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19 -20 -21 -22 -23 -24 -25 -26 -27 -28 -29 -30 -31 -32 -33 -34 -35 -36 -37 -38 -39 -[ +[ { - "protocol": "http", - "method": "post", - "url": "/", - "request": { - "type": "object", - "properties": { - "data": { - "type": "object", - "properties": { - "req_data": { - "type": "array", - "items": [ + "protocol": "http", + "method": "post", + "url": "/", + "request": { + "type": "object", + "properties": { + "data": { + "type": "object", + "properties": { + "req_data": { + "type": "array", + "items": [ { - "type": "object", - "properties": { - "input_1": { - "type": "number" + "type": "object", + "properties": { + "input_1": { + "type": "number" }, - "input_2": { - "type": "number" + "input_2": { + "type": "number" }, - "input_3": { - "type": "number" + "input_3": { + "type": "number" }, - "input_4": { - "type": "number" + "input_4": { + "type": "number" } } } @@ -165,11 +127,9 @@ } } } -] -At this point, the corresponding mapping relationship is shown below. The ModelArts management console automatically resolves the mapping relationship from the configuration file. When calling a ModelArts API, write the mapping relationship by yourself according to the rule.
-{ +{ "type": "object", "properties": { "data": { @@ -206,11 +166,11 @@ } }The data for inference, that is, the CSV data, is in the following format. The data must be separated by commas (,).
-5.1,3.5,1.4,0.2 +5.1,3.5,1.4,0.2 4.9,3.0,1.4,0.2 4.7,3.2,1.3,0.2Depending on the defined mapping relationship, the inference request is shown below. The format is similar to the format used by the real-time service.
-{ +{ "data": { "req_data": [{ "input_1": 5.1, diff --git a/docs/modelarts/umn/modelarts_23_0076.html b/docs/modelarts/umn/modelarts_23_0076.html index bfcd4f83..d3ee6cc0 100644 --- a/docs/modelarts/umn/modelarts_23_0076.html +++ b/docs/modelarts/umn/modelarts_23_0076.html @@ -23,11 +23,6 @@The default value is Dedicated for Service Deployment and cannot be changed.
- Billing Mode
-- Select a billing mode, Yearly/Monthly or Pay-per-use. The Yearly/Monthly billing mode is supported only in CN North-Beijing4.
-Name
Name of a dedicated resource pool.
diff --git a/docs/modelarts/umn/modelarts_23_0079.html b/docs/modelarts/umn/modelarts_23_0079.html index 8f5599ef..e50605be 100644 --- a/docs/modelarts/umn/modelarts_23_0079.html +++ b/docs/modelarts/umn/modelarts_23_0079.html @@ -13,7 +13,7 @@ -Example Policies
- A policy can define a single permission, such as the permission to deny ExeML project deletion.
{ +Example Policies
- A policy can define a single permission, such as the permission to deny ExeML project deletion.
{ "Version": "1.1", "Statement": [ { @@ -24,7 +24,7 @@ } ] }-- A policy can define multiple permissions, such as the permissions to delete an ExeML version and an ExeML project.
{ +- A policy can define multiple permissions, such as the permissions to delete an ExeML version and an ExeML project.
{ "Version": "1.1", "Statement": [ { diff --git a/docs/modelarts/umn/modelarts_23_0080.html b/docs/modelarts/umn/modelarts_23_0080.html index df047ada..8a62d791 100644 --- a/docs/modelarts/umn/modelarts_23_0080.html +++ b/docs/modelarts/umn/modelarts_23_0080.html @@ -8,7 +8,7 @@Precautions
- The permissions to use ModelArts depend on OBS authorization. Therefore, you need to grant OBS system permissions to users.
- A custom policy can contain actions of multiple services that are globally accessible or accessible through region-specific projects.
- To define permissions required to access both global and project-level services, create two custom policies and specify the scope as Global services and Project-level services. Then grant the two policies to the users.
Example Custom Policies of OBS
ModelArts is a project-level service, and OBS is a global service. Therefore, you need to create custom policies for the two services respectively and grant them to users. The permissions to use ModelArts depend on OBS authorization. The following example shows the minimum permissions for OBS, including the permissions for OBS buckets and objects. After being granted the minimum permissions for OBS, users can access OBS from ModelArts without restrictions.
-{ +{ "Version": "1.1", "Statement": [ { @@ -35,7 +35,7 @@Example Custom Policies of ModelArts
-
- Example: Denying ExeML project deletion
A deny policy must be used in conjunction with other policies to take effect. If the permissions assigned to a user contain both Allow and Deny actions, the Deny actions take precedence over the Allow actions.
The following method can be used if you need to assign permissions of the ModelArts FullAccess policy to a user but also forbid the user from deleting ExeML projects. Create a custom policy for denying ExeML project deletion, and assign both policies to the group the user belongs to. Then the user can perform all operations on ModelArts except deleting ExeML projects. The following is an example deny policy:
-{ +{ "Version": "1.1", "Statement": [ { @@ -47,7 +47,7 @@ ] }- Example: Allowing users to use only development environments
The following is a policy configuration example for this user:
-{ +{ "Version": "1.1", "Statement": [ diff --git a/docs/modelarts/umn/modelarts_23_0085.html b/docs/modelarts/umn/modelarts_23_0085.html index 9359b04c..763f75b8 100644 --- a/docs/modelarts/umn/modelarts_23_0085.html +++ b/docs/modelarts/umn/modelarts_23_0085.html @@ -6,7 +6,7 @@Obtain the custom images used by ModelArts for model training and import from the SWR service management list. Upload the custom images you create to SWR.
- Specifications for custom images. For details about how to use a custom image for a training job, see Specifications for Custom Images Used for Training Jobs. For details about how to use a custom image for model import, see Specifications for Custom Images Used for Importing Models.
diff --git a/docs/modelarts/umn/modelarts_23_0087.html b/docs/modelarts/umn/modelarts_23_0087.html index 82487337..362857d6 100644 --- a/docs/modelarts/umn/modelarts_23_0087.html +++ b/docs/modelarts/umn/modelarts_23_0087.html @@ -2,7 +2,7 @@Creating and Uploading a Custom Image
- Purchase a cloud server or use a local host to set up the Docker environment.
- Obtain the basic image from the local environment.
- Compile a Dockerfile based on your requirements to build a custom image. For details about how to efficiently compile a Dockerfile, see .
+
- After customizing an image, upload the image to SWR by referring to .
Creating and Uploading a Custom Image
- Purchase a cloud server or use a local host to set up the Docker environment.
- Obtain the basic image from the local environment.
- Compile a Dockerfile based on your requirements to build a custom image.
- After customizing an image, upload the image to SWR by referring to "Uploading an Image Through a Container Engine Client" in Software Repository for Container User Guide.
Creating a Training Job Using a Custom Image (GPU)
After creating and uploading a custom image to SWR, you can use the image to create a training job on the ModelArts management console to complete model training.
-Prerequisites
+
- You have created a custom image package based on ModelArts specifications. For details about the specifications you need to comply with when using a custom image to create training jobs, see Specifications for Custom Images Used for Training Jobs.
- You have uploaded the custom image to SWR. For details, see .
Prerequisites
- You have created a custom image package based on ModelArts specifications. For details about the specifications you need to comply with when using a custom image to create training jobs, see Specifications for Custom Images Used for Training Jobs.
- You have uploaded the custom image to SWR. For details, see Creating and Uploading a Custom Image.
diff --git a/docs/modelarts/umn/modelarts_23_0092.html b/docs/modelarts/umn/modelarts_23_0092.html index 60c79b30..7df81b79 100644 --- a/docs/modelarts/umn/modelarts_23_0092.html +++ b/docs/modelarts/umn/modelarts_23_0092.html @@ -411,337 +411,161 @@Creating a Training Job
Log in to the ModelArts management console and create a training job according to Creating a Training Job. When using a custom image to create a job, pay attention to the settings of Algorithm Source, Environment Variable, and Resource Pool.
- Algorithm Source
Select Custom.
diff --git a/docs/modelarts/umn/modelarts_23_0091.html b/docs/modelarts/umn/modelarts_23_0091.html index c4c1dfbe..a2c25f38 100644 --- a/docs/modelarts/umn/modelarts_23_0091.html +++ b/docs/modelarts/umn/modelarts_23_0091.html @@ -7,21 +7,71 @@ModelArts also provides custom script examples of common AI engines. For details, see Examples of Custom Scripts.
Model Package Example
- Structure of the TensorFlow-based model package
When publishing the model, you only need to specify the ocr directory.
-OBS bucket/directory name +OBS bucket/directory name |── ocr | ├── model (Mandatory) Name of a fixed subdirectory, which is used to store model-related files | │ ├── <<Custom Python package>> (Optional) User's Python package, which can be directly referenced in the model inference code | │ ├── saved_model.pb (Mandatory) Protocol buffer file, which contains the diagram description of the model -| │ ├── variables Name of a fixed sub-directory, which contains the weight and deviation rate of the model. It is mandatory for the main file of the *.pb model. +| │ ├── variables Name of a fixed sub-directory, which contains the weight and deviation rate of the model. It is mandatory for the main file of the *.pb model. | │ │ ├── variables.index Mandatory | │ │ ├── variables.data-00000-of-00001 Mandatory -| │ ├──config.json (Mandatory) Model configuration file. The file name is fixed to config.json. Only one model configuration file is supported. +| │ ├──config.json (Mandatory) Model configuration file. The file name is fixed to config.json. Only one model configuration file is supported. | │ ├──customize_service.py (Optional) Model inference code. The file name is fixed to customize_service.py. Only one model inference code file exists. The files on which customize_service.py depends can be directly stored in the model directory.-- Structure of the Image-based model package
When publishing the model, you only need to specify the resnet directory.
-OBS bucket/directory name +- Structure of the MXNet-based model package
When publishing the model, you only need to specify the resnet directory.
+OBS bucket/directory name |── resnet | ├── model (Mandatory) Name of a fixed subdirectory, which is used to store model-related files -| │ ├──config.json (Mandatory) Model configuration file (the address of the SWR image must be configured). The file name is fixed to config.json. Only one model configuration file is supported.+| │ ├── <<Custom Python package>> (Optional) User's Python package, which can be directly referenced in the model inference code +| │ ├── resnet-50-symbol.json (Mandatory) Model definition file, which contains the neural network description of the model +| │ ├── resnet-50-0000.params (Mandatory) Model variable parameter file, which contains parameter and weight information +| │ ├──config.json (Mandatory) Model configuration file. The file name is fixed to config.json. Only one model configuration file is supported. +| │ ├──customize_service.py (Optional) Model inference code. The file name is fixed to customize_service.py. Only one model inference code file exists. The files on which customize_service.py depends can be directly stored in the model directory. +- Structure of the Image-based model package
When publishing the model, you only need to specify the resnet directory.
+OBS bucket/directory name +|── resnet +| ├── model (Mandatory) Name of a fixed subdirectory, which is used to store model-related files +| │ ├──config.json (Mandatory) Model configuration file (the address of the SWR image must be configured). The file name is fixed to config.json. Only one model configuration file is supported.+- Structure of the PySpark-based model package
When publishing the model, you only need to specify the resnet directory.
+OBS bucket/directory name +|── resnet +| ├── model (Mandatory) Name of a fixed subdirectory, which is used to store model-related files +| │ ├── <<Custom Python package>> (Optional) User's Python package, which can be directly referenced in the model inference code +| │ ├── spark_model (Mandatory) Model directory, which contains the model content saved by PySpark +| │ ├──config.json (Mandatory) Model configuration file. The file name is fixed to config.json. Only one model configuration file is supported. +| │ ├──customize_service.py (Optional) Model inference code. The file name is fixed to customize_service.py. Only one model inference code file exists. The files on which customize_service.py depends can be directly stored in the model directory.+- Structure of the PyTorch-based model package
When publishing the model, you only need to specify the resnet directory.
+OBS bucket/directory name +|── resnet +| ├── model (Mandatory) Name of a fixed subdirectory, which is used to store model-related files +| │ ├── <<Custom Python package>> (Optional) User's Python package, which can be directly referenced in the model inference code +| │ ├── resnet50.pth (Mandatory) PyTorch model file, which contains variable and weight information and is saved as state_dict +| │ ├──config.json (Mandatory) Model configuration file. The file name is fixed to config.json. Only one model configuration file is supported. +| │ ├──customize_service.py (Optional) Model inference code. The file name is fixed to customize_service.py. Only one model inference code file exists. The files on which customize_service.py depends can be directly stored in the model directory.+- Structure of the Caffe-based model package
When publishing the model, you only need to specify the resnet directory.+OBS bucket/directory name +|── resnet +| |── model (Mandatory) Name of a fixed subdirectory, which is used to store model-related files +| | |── <<Custom Python package>> (Optional) User's Python package, which can be directly referenced in the model inference code +| | |── deploy.prototxt (Mandatory) Caffe model file, which contains information such as the model network structure +| | |── resnet.caffemodel (Mandatory) Caffe model file, which contains variable and weight information +| | |── config.json (Mandatory) Model configuration file. The file name is fixed to config.json. Only one model configuration file is supported. +| | |── customize_service.py (Optional) Model inference code. The file name is fixed to customize_service.py. Only one model inference code file exists. The files on which customize_service.py depends can be directly stored in the model directory.+- Structure of the XGBoost-based model package
When publishing the model, you only need to specify the resnet directory.+OBS bucket/directory name +|── resnet +| |── model (Mandatory) Name of a fixed subdirectory, which is used to store model-related files +| | |── <<Custom Python package>> (Optional) User's Python package, which can be directly referenced in the model inference code +| | |── *.m (Mandatory): Model file whose extension name is .m +| | |── config.json (Mandatory) Model configuration file. The file name is fixed to config.json. Only one model configuration file is supported. +| | |── customize_service.py (Optional) Model inference code. The file name is fixed to customize_service.py. Only one model inference code file exists. The files on which customize_service.py depends can be directly stored in the model directory.+- Structure of the Scikit_Learn-based model package
When publishing the model, you only need to specify the resnet directory.OBS bucket/directory name +|── resnet +| |── model (Mandatory) Name of a fixed subdirectory, which is used to store model-related files +| | |── <<Custom Python package>> (Optional) User's Python package, which can be directly referenced in the model inference code +| | |── *.m (Mandatory): Model file whose extension name is .m +| | |── config.json (Mandatory) Model configuration file. The file name is fixed to config.json. Only one model configuration file is supported. +| | |── customize_service.py (Optional) Model inference code. The file name is fixed to customize_service.py. Only one model inference code file exists. The files on which customize_service.py depends can be directly stored in the model directory.+Example of the Object Detection Model Configuration File
The following code uses the TensorFlow engine as an example. You can modify the model_type parameter based on the actual engine type.
- Model input
Value: image files
-- Model output
-
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19 -20 -21 -22 -23 -``` -{ - "detection_classes": [ - "face", - "arm" - ], - "detection_boxes": [ - [ - 33.6, - 42.6, - 104.5, - 203.4 - ], - [ - 103.1, - 92.8, - 765.6, - 945.7 - ] - ], - "detection_scores": [0.99, 0.73] -} -``` -- Configuration file
+```
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19 -20 -21 -22 -23 -24 -25 -26 -27 -28 -29 -30 -31 -32 -33 -34 -35 -36 -37 -38 -39 -40 -41 -42 -43 -44 -45 -46 -47 -48 -49 -50 -51 -52 -53 -54 -55 -56 -57 -58 -59 -60 -61 -62 -63 -64 -65 -66 -67 -68 -69 -70 -71 -72 -73 -``` +- Model output
``` { - "model_type": "TensorFlow", - "model_algorithm": "object_detection", - "metrics": { - "f1": 0.345294, - "accuracy": 0.462963, - "precision": 0.338977, - "recall": 0.351852 + "detection_classes": [ + "face", + "arm" + ], + "detection_boxes": [ + [ + 33.6, + 42.6, + 104.5, + 203.4 + ], + [ + 103.1, + 92.8, + 765.6, + 945.7 + ] + ], + "detection_scores": [0.99, 0.73] +} +```+- Configuration file
``` +{ + "model_type": "TensorFlow", + "model_algorithm": "object_detection", + "metrics": { + "f1": 0.345294, + "accuracy": 0.462963, + "precision": 0.338977, + "recall": 0.351852 }, - "apis": [{ - "protocol": "http", - "url": "/", - "method": "post", - "request": { - "Content-type": "multipart/form-data", - "data": { - "type": "object", - "properties": { - "images": { - "type": "file" + "apis": [{ + "protocol": "http", + "url": "/", + "method": "post", + "request": { + "Content-type": "multipart/form-data", + "data": { + "type": "object", + "properties": { + "images": { + "type": "file" } } } }, - "response": { - "Content-type": "multipart/form-data", - "data": { - "type": "object", - "properties": { - "detection_classes": { - "type": "array", - "items": [{ - "type": "string" + "response": { + "Content-type": "multipart/form-data", + "data": { + "type": "object", + "properties": { + "detection_classes": { + "type": "array", + "items": [{ + "type": "string" }] }, - "detection_boxes": { - "type": "array", - "items": [{ - "type": "array", - "minItems": 4, - "maxItems": 4, - "items": [{ - "type": "number" + "detection_boxes": { + "type": "array", + "items": [{ + "type": "array", + "minItems": 4, + "maxItems": 4, + "items": [{ + "type": "number" }] }] }, - "detection_scores": { - "type": "array", - "items": [{ - "type": "number" + "detection_scores": { + "type": "array", + "items": [{ + "type": "number" }] } } } } }], - "dependencies": [{ - "installer": "pip", - "packages": [{ - "restraint": "EXACT", - "package_version": "1.15.0", - "package_name": "numpy" + "dependencies": [{ + "installer": "pip", + "packages": [{ + "restraint": "EXACT", + "package_version": "1.15.0", + "package_name": "numpy" }, { - "restraint": "EXACT", - "package_version": "5.2.0", - "package_name": "Pillow" + "restraint": "EXACT", + "package_version": "5.2.0", + "package_name": "Pillow" } ] }] } -``` -Example of the Image Classification Model Configuration File
The following code uses the TensorFlow engine as an example. You can modify the model_type parameter based on the actual engine type.
- Model input
Value: image files
-- Model output
-
1 -2 -3 -4 -5 -6 -7 -8 -9 -``` -{ - "predicted_label": "flower", - "scores": [ - ["rose", 0.99], - ["begonia", 0.01] - ] -} -``` -- Configuration file
+```
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19 -20 -21 -22 -23 -24 -25 -26 -27 -28 -29 -30 -31 -32 -33 -34 -35 -36 -37 -38 -39 -40 -41 -42 -43 -44 -45 -46 -47 -48 -49 -50 -51 -52 -53 -54 -55 -56 -57 -58 -59 -60 -61 -62 -63 -64 -65 -66 -67 -68 -69 -``` +- Model output
``` { - "model_type": "TensorFlow", - "model_algorithm": "image_classification", - "metrics": { - "f1": 0.345294, - "accuracy": 0.462963, - "precision": 0.338977, - "recall": 0.351852 + "predicted_label": "flower", + "scores": [ + ["rose", 0.99], + ["begonia", 0.01] + ] +} +```+- Configuration file
``` +{ + "model_type": "TensorFlow", + "model_algorithm": "image_classification", + "metrics": { + "f1": 0.345294, + "accuracy": 0.462963, + "precision": 0.338977, + "recall": 0.351852 }, - "apis": [{ - "protocol": "http", - "url": "/", - "method": "post", - "request": { - "Content-type": "multipart/form-data", - "data": { - "type": "object", - "properties": { - "images": { - "type": "file" + "apis": [{ + "protocol": "http", + "url": "/", + "method": "post", + "request": { + "Content-type": "multipart/form-data", + "data": { + "type": "object", + "properties": { + "images": { + "type": "file" } } } }, - "response": { - "Content-type": "multipart/form-data", - "data": { - "type": "object", - "properties": { - "predicted_label": { - "type": "string" + "response": { + "Content-type": "multipart/form-data", + "data": { + "type": "object", + "properties": { + "predicted_label": { + "type": "string" }, - "scores": { - "type": "array", - "items": [{ - "type": "array", - "minItems": 2, - "maxItems": 2, - "items": [ + "scores": { + "type": "array", + "items": [{ + "type": "array", + "minItems": 2, + "maxItems": 2, + "items": [ { - "type": "string" + "type": "string" }, { - "type": "number" + "type": "number" } ] }] @@ -750,236 +574,116 @@ } } }], - "dependencies": [{ - "installer": "pip", - "packages": [{ - "restraint": "ATLEAST", - "package_version": "1.15.0", - "package_name": "numpy" + "dependencies": [{ + "installer": "pip", + "packages": [{ + "restraint": "ATLEAST", + "package_version": "1.15.0", + "package_name": "numpy" }, { - "restraint": "", - "package_version": "", - "package_name": "Pillow" + "restraint": "", + "package_version": "", + "package_name": "Pillow" } ] }] } -``` -Example of the Predictive Analytics Model Configuration File
The following code uses the TensorFlow engine as an example. You can modify the model_type parameter based on the actual engine type.
-
- Model input
-
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19 -20 -21 -22 -23 -24 -25 -26 -``` -{ - "data": { - "req_data": [ - { - "buying_price": "high", - "maint_price": "high", - "doors": "2", - "persons": "2", - "lug_boot": "small", - "safety": "low", - "acceptability": "acc" - }, - { - "buying_price": "high", - "maint_price": "high", - "doors": "2", - "persons": "2", - "lug_boot": "small", - "safety": "low", - "acceptability": "acc" - } - ] - } -} -``` -- Model output
-
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 -10 -11 -12 -13 -14 -``` -{ - "data": { - "resp_data": [ - { - "predict_result": "unacc" - }, - { - "predict_result": "unacc" - } - ] - } -} -``` -- Configuration file
+```
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19 -20 -21 -22 -23 -24 -25 -26 -27 -28 -29 -30 -31 -32 -33 -34 -35 -36 -37 -38 -39 -40 -41 -42 -43 -44 -45 -46 -47 -48 -49 -50 -51 -52 -53 -54 -55 -56 -57 -58 -59 -60 -61 -62 -63 -64 -65 -66 -67 -68 -69 -70 -71 -72 -73 -74 -75 -76 -77 -``` +
- Model input
``` { - "model_type": "TensorFlow", - "model_algorithm": "predict_analysis", - "metrics": { - "f1": 0.345294, - "accuracy": 0.462963, - "precision": 0.338977, - "recall": 0.351852 + "data": { + "req_data": [ + { + "buying_price": "high", + "maint_price": "high", + "doors": "2", + "persons": "2", + "lug_boot": "small", + "safety": "low", + "acceptability": "acc" + }, + { + "buying_price": "high", + "maint_price": "high", + "doors": "2", + "persons": "2", + "lug_boot": "small", + "safety": "low", + "acceptability": "acc" + } + ] + } +} +```+- Model output
``` +{ + "data": { + "resp_data": [ + { + "predict_result": "unacc" + }, + { + "predict_result": "unacc" + } + ] + } +} +```+- Configuration file
``` +{ + "model_type": "TensorFlow", + "model_algorithm": "predict_analysis", + "metrics": { + "f1": 0.345294, + "accuracy": 0.462963, + "precision": 0.338977, + "recall": 0.351852 }, - "apis": [ + "apis": [ { - "protocol": "http", - "url": "/", - "method": "post", - "request": { - "Content-type": "application/json", - "data": { - "type": "object", - "properties": { - "data": { - "type": "object", - "properties": { - "req_data": { - "items": [ + "protocol": "http", + "url": "/", + "method": "post", + "request": { + "Content-type": "application/json", + "data": { + "type": "object", + "properties": { + "data": { + "type": "object", + "properties": { + "req_data": { + "items": [ { - "type": "object", - "properties": { + "type": "object", + "properties": { } }], - "type": "array" + "type": "array" } } } } } }, - "response": { - "Content-type": "multipart/form-data", - "data": { - "type": "object", - "properties": { - "data": { - "type": "object", - "properties": { - "resp_data": { - "type": "array", - "items": [ + "response": { + "Content-type": "multipart/form-data", + "data": { + "type": "object", + "properties": { + "data": { + "type": "object", + "properties": { + "resp_data": { + "type": "array", + "items": [ { - "type": "object", - "properties": { + "type": "object", + "properties": { } }] } @@ -989,132 +693,74 @@ } } }], - "dependencies": [ + "dependencies": [ { - "installer": "pip", - "packages": [ + "installer": "pip", + "packages": [ { - "restraint": "EXACT", - "package_version": "1.15.0", - "package_name": "numpy" + "restraint": "EXACT", + "package_version": "1.15.0", + "package_name": "numpy" }, { - "restraint": "EXACT", - "package_version": "5.2.0", - "package_name": "Pillow" + "restraint": "EXACT", + "package_version": "5.2.0", + "package_name": "Pillow" }] }] } -``` -Example of the Custom Image Model Configuration File
The model input and output are similar to those in Example of the Object Detection Model Configuration File.
-+}
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19 -20 -21 -22 -23 -24 -25 -26 -27 -28 -29 -30 -31 -32 -33 -34 -35 -36 -37 -38 -39 -40 -41 -42 -43 -44 -45 -46 -47 -48 -49 -50 -51 -52 -53 -54 -55 -56 -57 -{ - "model_algorithm": "image_classification", - "model_type": "Image", +{ + "model_algorithm": "image_classification", + "model_type": "Image", - "metrics": { - "f1": 0.345294, - "accuracy": 0.462963, - "precision": 0.338977, - "recall": 0.351852 + "metrics": { + "f1": 0.345294, + "accuracy": 0.462963, + "precision": 0.338977, + "recall": 0.351852 }, - "apis": [{ - "protocol": "http", - "url": "/", - "method": "post", - "request": { - "Content-type": "multipart/form-data", - "data": { - "type": "object", - "properties": { - "images": { - "type": "file" + "apis": [{ + "protocol": "http", + "url": "/", + "method": "post", + "request": { + "Content-type": "multipart/form-data", + "data": { + "type": "object", + "properties": { + "images": { + "type": "file" } } } }, - "response": { - "Content-type": "multipart/form-data", - "data": { - "type": "object", - "required": [ - "predicted_label", - "scores" + "response": { + "Content-type": "multipart/form-data", + "data": { + "type": "object", + "required": [ + "predicted_label", + "scores" ], - "properties": { - "predicted_label": { - "type": "string" + "properties": { + "predicted_label": { + "type": "string" }, - "scores": { - "type": "array", - "items": [{ - "type": "array", - "minItems": 2, - "maxItems": 2, - "items": [{ - "type": "string" + "scores": { + "type": "array", + "items": [{ + "type": "array", + "minItems": 2, + "maxItems": 2, + "items": [{ + "type": "string" }, { - "type": "number" + "type": "number" } ] }] @@ -1123,13 +769,11 @@ } } }] -} -Example of the Machine Learning Model Configuration File
The following uses XGBoost as an example:
-
- Model input
{ +{ "data": { "req_data": [{ "sepal_length": 5, @@ -1150,7 +794,7 @@ } }-
- Model output
{ +{ "data": { "resp_data": [{ "predict_result": "Iris-setosa" @@ -1160,7 +804,7 @@ } }-
- Configuration file
{ +{ "model_type": "XGBoost", "model_algorithm": "xgboost_iris_test", "runtime": "python2.7", @@ -1224,84 +868,34 @@Example of a Model Configuration File Using a Custom Dependency Package
The following example defines the NumPy 1.16.4 dependency environment.
-+ }
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19 -20 -21 -22 -23 -24 -25 -26 -27 -28 -29 -30 -31 -32 -33 -34 -35 -36 -37 -38 -39 -40 -41 -42 -43 -44 -45 -46 -47 -48 -49 -50 -51 -{ - "model_algorithm": "image_classification", - "model_type": "TensorFlow", - "runtime": "python3.6", - "apis": [{ - "procotol": "http", - "url": "/", - "method": "post", - "request": { - "Content-type": "multipart/form-data", - "data": { - "type": "object", - "properties": { - "images": { - "type": "file" +{ + "model_algorithm": "image_classification", + "model_type": "TensorFlow", + "runtime": "python3.6", + "apis": [{ + "procotol": "http", + "url": "/", + "method": "post", + "request": { + "Content-type": "multipart/form-data", + "data": { + "type": "object", + "properties": { + "images": { + "type": "file" } } } }, - "response": { - "Content-type": "applicaton/json", - "data": { - "type": "object", - "properties": { - "mnist_result": { - "type": "array", - "item": [{ - "type": "string" + "response": { + "Content-type": "applicaton/json", + "data": { + "type": "object", + "properties": { + "mnist_result": { + "type": "array", + "item": [{ + "type": "string" }] } } @@ -1309,24 +903,22 @@ } } ], - "metrics": { - "f1": 0.124555, - "recall": 0.171875, - "precision": 0.0023493892851938493, - "accuracy": 0.00746268656716417 + "metrics": { + "f1": 0.124555, + "recall": 0.171875, + "precision": 0.0023493892851938493, + "accuracy": 0.00746268656716417 }, - "dependencies": [{ - "installer": "pip", - "packages": [{ - "restraint": "EXACT", - "package_version": "1.16.4", - "package_name": "numpy" + "dependencies": [{ + "installer": "pip", + "packages": [{ + "restraint": "EXACT", + "package_version": "1.16.4", + "package_name": "numpy" } ] }] - } -diff --git a/docs/modelarts/umn/modelarts_23_0093.html b/docs/modelarts/umn/modelarts_23_0093.html index d8f3449e..0561ad9d 100644 --- a/docs/modelarts/umn/modelarts_23_0093.html +++ b/docs/modelarts/umn/modelarts_23_0093.html @@ -101,22 +101,22 @@-![]()
- You can choose to rewrite the preprocess and postprocess methods to implement preprocessing of the API input and postprocessing of the inference output.
- Rewriting the init method of the class that inherits from BaseService may cause the model to run abnormally.
- The attribute that can be used is the local path where the model resides. The attribute name is self.model_path. In addition, PySpark-based models can use self.spark to obtain the SparkSession object in customize_service.py.
-![]()
An absolute path is required for reading files in the inference code. You can obtain the absolute path of the model from the self.model_path attribute.
-
- When TensorFlow, Caffe, or MXNet is used, self.model_path indicates the path of the model file. See the following example:
# Store the label.json file in the model directory. The following information is read: +-
- When TensorFlow, Caffe, or MXNet is used, self.model_path indicates the path of the model file. See the following example:
# Store the label.json file in the model directory. The following information is read: with open(os.path.join(self.model_path, 'label.json')) as f: self.label = json.load(f)
- When PyTorch, Scikit_Learn, or PySpark is used, self.model_path indicates the path of the model file. See the following example:
# Store the label.json file in the model directory. The following information is read: +
- When PyTorch, Scikit_Learn, or PySpark is used, self.model_path indicates the path of the model file. See the following example:
# Store the label.json file in the model directory. The following information is read: dir_path = os.path.dirname(os.path.realpath(self.model_path)) with open(os.path.join(dir_path, 'label.json')) as f: self.label = json.load(f)- Two types of content-type APIs can be used for inputting data: multipart/form-data and application/json
- multipart/form-data request
curl -X POST \ +- Two types of content-type APIs can be used for inputting data: multipart/form-data and application/json
- multipart/form-data request
curl -X POST \ <modelarts-inference-endpoint> \ -F image1=@cat.jpg \ -F images2=@horse.jpgThe corresponding input data is as follows:
-[ +[ { "image1":{ "cat.jpg":"<cat..jpg file io>" @@ -128,81 +128,54 @@ with open(os.path.join(dir_path, 'label.json')) as f: } } ]-- application/json request
curl -X POST \ +- application/json request
curl -X POST \ <modelarts-inference-endpoint> \ -d '{ "images":"base64 encode image" }'The corresponding input data is python dict.
-{ +{ "images":"base64 encode image" }-TensorFlow Inference Script Example
The following is an example of TensorFlow MnistService.
- Inference code
-
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19 -20 -21 -22 -23 -24 -25 -26 -27 from PIL import Image -import numpy as np -from model_service.tfserving_model_service import TfServingBaseService +TensorFlow Inference Script Example
The following is an example of TensorFlow MnistService.-
- Inference code
+from PIL import Image +import numpy as np +from model_service.tfserving_model_service import TfServingBaseService -class mnist_service(TfServingBaseService): +class mnist_service(TfServingBaseService): - def _preprocess(self, data): - preprocessed_data = {} + def _preprocess(self, data): + preprocessed_data = {} - for k, v in data.items(): - for file_name, file_content in v.items(): - image1 = Image.open(file_content) - image1 = np.array(image1, dtype=np.float32) - image1.resize((1, 784)) - preprocessed_data[k] = image1 + for k, v in data.items(): + for file_name, file_content in v.items(): + image1 = Image.open(file_content) + image1 = np.array(image1, dtype=np.float32) + image1.resize((1, 784)) + preprocessed_data[k] = image1 - return preprocessed_data + return preprocessed_data - def _postprocess(self, data): + def _postprocess(self, data): - infer_output = {} + infer_output = {} - for output_name, result in data.items(): + for output_name, result in data.items(): - infer_output["mnist_result"] = result[0].index(max(result[0])) + infer_output["mnist_result"] = result[0].index(max(result[0])) - return infer_output -- Request
curl -X POST \ Real-time service address \ -F images=@test.jpg-- Response
{"mnist_result": 7}+ return infer_output +- Request
curl -X POST \ Real-time service address \ -F images=@test.jpg+- Response
{"mnist_result": 7}The preceding code example resizes images imported to the user's form to adapt to the model input shape. The 32×32 image is read from the Pillow library and resized to 1×784 to match the model input. In subsequent processing, convert the model output into a list for the RESTful API to display.
XGBoost Inference Script Example
# coding:utf-8 +XGBoost Inference Script Example
# coding:utf-8 import collections import json import xgboost as xgb @@ -239,233 +212,119 @@ class user_Service(XgSklServingBaseService):Inference Script Example of the Custom Inference Logic
First, define a dependency package in the configuration file. For details, see Example of a Model Configuration File Using a Custom Dependency Package. Then, use the following code example to implement the loading and inference of the model in saved_model format.
-+ def __del__(self): + self.sess.close()
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 - 10 - 11 - 12 - 13 - 14 - 15 - 16 - 17 - 18 - 19 - 20 - 21 - 22 - 23 - 24 - 25 - 26 - 27 - 28 - 29 - 30 - 31 - 32 - 33 - 34 - 35 - 36 - 37 - 38 - 39 - 40 - 41 - 42 - 43 - 44 - 45 - 46 - 47 - 48 - 49 - 50 - 51 - 52 - 53 - 54 - 55 - 56 - 57 - 58 - 59 - 60 - 61 - 62 - 63 - 64 - 65 - 66 - 67 - 68 - 69 - 70 - 71 - 72 - 73 - 74 - 75 - 76 - 77 - 78 - 79 - 80 - 81 - 82 - 83 - 84 - 85 - 86 - 87 - 88 - 89 - 90 - 91 - 92 - 93 - 94 - 95 - 96 - 97 - 98 - 99 -100 -101 -102 -103 -104 -105 -106 -107 -108 -109 -110 -111 -112 -113 -# -*- coding: utf-8 -*- -import json -import os -import threading +# -*- coding: utf-8 -*- +import json +import os +import threading -import numpy as np -import tensorflow as tf -from PIL import Image +import numpy as np +import tensorflow as tf +from PIL import Image -from model_service.tfserving_model_service import TfServingBaseService -import logging +from model_service.tfserving_model_service import TfServingBaseService +import logging -logger = logging.getLogger(__name__) +logger = logging.getLogger(__name__) -class MnistService(TfServingBaseService): +class MnistService(TfServingBaseService): - def __init__(self, model_name, model_path): - self.model_name = model_name - self.model_path = model_path - self.model_inputs = {} - self.model_outputs = {} + def __init__(self, model_name, model_path): + self.model_name = model_name + self.model_path = model_path + self.model_inputs = {} + self.model_outputs = {} - # The label file can be loaded here and used in the post-processing function. - # Directories for storing the label.txt file on OBS and in the model package + # The label file can be loaded here and used in the post-processing function. + # Directories for storing the label.txt file on OBS and in the model package - # with open(os.path.join(self.model_path, 'label.txt')) as f: - # self.label = json.load(f) + # with open(os.path.join(self.model_path, 'label.txt')) as f: + # self.label = json.load(f) - # Load the model in saved_model format in non-blocking mode to prevent blocking timeout. - thread = threading.Thread(target=self.get_tf_sess) - thread.start() + # Load the model in saved_model format in non-blocking mode to prevent blocking timeout. + thread = threading.Thread(target=self.get_tf_sess) + thread.start() - def get_tf_sess(self): - # Load the model in saved_model format. + def get_tf_sess(self): + # Load the model in saved_model format. - # The session will be reused. Do not use the with statement. - sess = tf.Session(graph=tf.Graph()) - meta_graph_def = tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], self.model_path) - signature_defs = meta_graph_def.signature_def + # The session will be reused. Do not use the with statement. 
+ sess = tf.Session(graph=tf.Graph()) + meta_graph_def = tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], self.model_path) + signature_defs = meta_graph_def.signature_def - self.sess = sess + self.sess = sess - signature = [] + signature = [] - # only one signature allowed - for signature_def in signature_defs: - signature.append(signature_def) - if len(signature) == 1: - model_signature = signature[0] - else: - logger.warning("signatures more than one, use serving_default signature") - model_signature = tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY + # only one signature allowed + for signature_def in signature_defs: + signature.append(signature_def) + if len(signature) == 1: + model_signature = signature[0] + else: + logger.warning("signatures more than one, use serving_default signature") + model_signature = tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY - logger.info("model signature: %s", model_signature) + logger.info("model signature: %s", model_signature) - for signature_name in meta_graph_def.signature_def[model_signature].inputs: - tensorinfo = meta_graph_def.signature_def[model_signature].inputs[signature_name] - name = tensorinfo.name - op = self.sess.graph.get_tensor_by_name(name) - self.model_inputs[signature_name] = op + for signature_name in meta_graph_def.signature_def[model_signature].inputs: + tensorinfo = meta_graph_def.signature_def[model_signature].inputs[signature_name] + name = tensorinfo.name + op = self.sess.graph.get_tensor_by_name(name) + self.model_inputs[signature_name] = op - logger.info("model inputs: %s", self.model_inputs) + logger.info("model inputs: %s", self.model_inputs) - for signature_name in meta_graph_def.signature_def[model_signature].outputs: - tensorinfo = meta_graph_def.signature_def[model_signature].outputs[signature_name] - name = tensorinfo.name - op = self.sess.graph.get_tensor_by_name(name) + for signature_name in meta_graph_def.signature_def[model_signature].outputs: + tensorinfo = meta_graph_def.signature_def[model_signature].outputs[signature_name] + name = tensorinfo.name + op = self.sess.graph.get_tensor_by_name(name) - self.model_outputs[signature_name] = op + self.model_outputs[signature_name] = op - logger.info("model outputs: %s", self.model_outputs) + logger.info("model outputs: %s", self.model_outputs) - def _preprocess(self, data): - # Two request modes using HTTPS - # 1. The request in form-data file format is as follows: data = {"Request key value":{"File name":<File io>}} - # 2. Request in JSON format is as follows: data = json.loads("JSON body transferred by the API") - preprocessed_data = {} + def _preprocess(self, data): + # Two request modes using HTTPS + # 1. The request in form-data file format is as follows: data = {"Request key value":{"File name":<File io>}} + # 2. 
Request in JSON format is as follows: data = json.loads("JSON body transferred by the API") + preprocessed_data = {} - for k, v in data.items(): - for file_name, file_content in v.items(): - image1 = Image.open(file_content) - image1 = np.array(image1, dtype=np.float32) - image1.resize((1, 28, 28)) - preprocessed_data[k] = image1 + for k, v in data.items(): + for file_name, file_content in v.items(): + image1 = Image.open(file_content) + image1 = np.array(image1, dtype=np.float32) + image1.resize((1, 28, 28)) + preprocessed_data[k] = image1 - return preprocessed_data + return preprocessed_data - def _inference(self, data): + def _inference(self, data): - feed_dict = {} - for k, v in data.items(): - if k not in self.model_inputs.keys(): - logger.error("input key %s is not in model inputs %s", k, list(self.model_inputs.keys())) - raise Exception("input key %s is not in model inputs %s" % (k, list(self.model_inputs.keys()))) - feed_dict[self.model_inputs[k]] = v + feed_dict = {} + for k, v in data.items(): + if k not in self.model_inputs.keys(): + logger.error("input key %s is not in model inputs %s", k, list(self.model_inputs.keys())) + raise Exception("input key %s is not in model inputs %s" % (k, list(self.model_inputs.keys()))) + feed_dict[self.model_inputs[k]] = v - result = self.sess.run(self.model_outputs, feed_dict=feed_dict) - logger.info('predict result : ' + str(result)) + result = self.sess.run(self.model_outputs, feed_dict=feed_dict) + logger.info('predict result : ' + str(result)) - return result + return result - def _postprocess(self, data): - infer_output = {"mnist_result": []} - for output_name, results in data.items(): + def _postprocess(self, data): + infer_output = {"mnist_result": []} + for output_name, results in data.items(): - for result in results: - infer_output["mnist_result"].append(np.argmax(result)) + for result in results: + infer_output["mnist_result"].append(np.argmax(result)) - return infer_output + return infer_output - def __del__(self): - self.sess.close() -diff --git a/docs/modelarts/umn/modelarts_23_0100.html b/docs/modelarts/umn/modelarts_23_0100.html index 242da471..77464bb4 100644 --- a/docs/modelarts/umn/modelarts_23_0100.html +++ b/docs/modelarts/umn/modelarts_23_0100.html @@ -38,7 +38,7 @@The JSON Schema of the inference result is as follows:
-{ +{ "type": "object", "properties": { "detection_classes": { diff --git a/docs/modelarts/umn/modelarts_23_0102.html b/docs/modelarts/umn/modelarts_23_0102.html index 6e911894..6fd687e6 100644 --- a/docs/modelarts/umn/modelarts_23_0102.html +++ b/docs/modelarts/umn/modelarts_23_0102.html @@ -42,7 +42,7 @@ReqData is of the Object type and indicates the inference data. The data structure is determined by the application scenario. For models using this mode, the preprocessing logic in the custom model inference code should be able to correctly process the data inputted in the format defined by the mode.
The JSON Schema of a prediction request is as follows:
-{ +{ "type": "object", "properties": { "data": { @@ -101,7 +101,7 @@Similar to ReqData, RespData is also of the Object type and indicates the prediction result. Its structure is determined by the application scenario. For models using this mode, the postprocessing logic in the custom model inference code should be able to correctly output data in the format defined by the mode.
The JSON Schema of a prediction result is as follows:
-{ +{ "type": "object", "properties": { "data": { diff --git a/docs/modelarts/umn/modelarts_23_0157.html b/docs/modelarts/umn/modelarts_23_0157.html index 3a5563e6..743c5043 100644 --- a/docs/modelarts/umn/modelarts_23_0157.html +++ b/docs/modelarts/umn/modelarts_23_0157.html @@ -3,14 +3,14 @@Requirements on Datasets
The built-in algorithms provided by ModelArts can be used for image classification, object detection, and image semantic segmentation. The requirements for the datasets vary according to the built-in algorithms used for different purposes. Before using a built-in algorithm to create a training job, you are advised to prepare a dataset based on the requirements of the algorithm.
Image Classification
The training dataset must be stored in the OBS bucket. The following shows the OBS path structure of the dataset:
-|-- data_url +|-- data_url |--a.jpg |--a.txt |--b.jpg |--b.txt ...
- data_url indicates the folder name. You can customize the folder name. Images and label files cannot be stored in the root directory of an OBS bucket.
- Images and label files must have the same name. The .txt files are label files for image classification. The images can be in JPG, JPEG, PNG, or BMP format.
- The first row of label files for image classification indicates the category name of images, which can be Chinese characters, English letters, or digits. The following provides an example of file content:
cat-- In addition to the preceding files and folders, no other files or folders can exist in the data_url folder.
- You can directly use an existing image classification dataset with published versions in Data Management of ModelArts.
- You can also name sub-folders in the data_url directory by label, as shown in the following:
|-- data_url +- In addition to the preceding files and folders, no other files or folders can exist in the data_url folder.
- You can directly use an existing image classification dataset with published versions in Data Management of ModelArts.
- You can also name sub-folders in the data_url directory by label, as shown in the following:
|-- data_url |--cat |--a.jpg |--a.txt @@ -21,69 +21,42 @@Object Detection and Locating
The training dataset must be stored in the OBS bucket. The following shows the OBS path structure of the dataset:
-|-- data_url +|-- data_url |--a.jpg |--a.xml |--b.jpg |--b.xml ...-
- data_url indicates the folder name. You can customize the folder name. Images and label files cannot be stored in the root directory of an OBS bucket.
- Images and label files must have the same name. The .xml files are label files for object detection. The images can be in JPG, JPEG, PNG, or BMP format.
- In addition to the preceding files and folders, no other files or folders can exist in the data_url folder.
- You can directly use an existing object detection dataset with published versions in Data Management of ModelArts.
- The following provides a label file for object detection. The key parameters are size (image size), object (object information), and name (label name, which can be Chinese characters, English letters, or digits). Note that the values of xmin, ymin, xmax, and ymax in the bndbox field cannot exceed the value of size. That is, the value of min cannot be less than 0, and the value of max cannot be greater than the value of width or height.
+
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19 -20 -21 -22 -23 -24 -25 -26 -<?xml version="1.0" encoding="UTF-8" standalone="no"?> -<annotation> - <folder>Images</folder> - <filename>IMG_20180919_120022.jpg</filename> - <source> - <database>Unknown</database> - </source> - <size> - <width>800</width> - <height>600</height> - <depth>1</depth> - </size> - <segmented>0</segmented> - <object> - <name>yunbao</name> - <pose>Unspecified</pose> - <truncated>0</truncated> - <difficult>0</difficult> - <bndbox> - <xmin>216.00</xmin> - <ymin>108.00</ymin> - <xmax>705.00</xmax> - <ymax>488.00</ymax> - </bndbox> - </object> -</annotation> -
- data_url indicates the folder name. You can customize the folder name. Images and label files cannot be stored in the root directory of an OBS bucket.
- Images and label files must have the same name. The .xml files are label files for object detection. The images can be in JPG, JPEG, PNG, or BMP format.
- In addition to the preceding files and folders, no other files or folders can exist in the data_url folder.
- You can directly use an existing object detection dataset with published versions in Data Management of ModelArts.
- The following provides a label file for object detection. The key parameters are size (image size), object (object information), and name (label name, which can be Chinese characters, English letters, or digits). Note that the values of xmin, ymin, xmax, and ymax in the bndbox field cannot exceed the value of size. That is, the value of min cannot be less than 0, and the value of max cannot be greater than the value of width or height.
<?xml version="1.0" encoding="UTF-8" standalone="no"?> +<annotation> + <folder>Images</folder> + <filename>IMG_20180919_120022.jpg</filename> + <source> + <database>Unknown</database> + </source> + <size> + <width>800</width> + <height>600</height> + <depth>1</depth> + </size> + <segmented>0</segmented> + <object> + <name>yunbao</name> + <pose>Unspecified</pose> + <truncated>0</truncated> + <difficult>0</difficult> + <bndbox> + <xmin>216.00</xmin> + <ymin>108.00</ymin> + <xmax>705.00</xmax> + <ymax>488.00</ymax> + </bndbox> + </object> +</annotation>diff --git a/docs/modelarts/umn/modelarts_23_0162.html b/docs/modelarts/umn/modelarts_23_0162.html index cbd6f063..d2fce226 100644 --- a/docs/modelarts/umn/modelarts_23_0162.html +++ b/docs/modelarts/umn/modelarts_23_0162.html @@ -8,23 +8,23 @@Image Semantic Segmentation
The training dataset must be stored in the OBS bucket. The following shows the OBS path structure of the dataset:
-|-- data_url +|-- data_url |--Image |--a.jpg |--b.jpg @@ -96,7 +69,7 @@ |--val.txtDescription:
diff --git a/docs/modelarts/umn/modelarts_23_0161.html b/docs/modelarts/umn/modelarts_23_0161.html index c2474992..090cf33c 100644 --- a/docs/modelarts/umn/modelarts_23_0161.html +++ b/docs/modelarts/umn/modelarts_23_0161.html @@ -8,23 +8,23 @@
- data_url, Image, and Label indicate the OBS folder names. The Image folder stores images for semantic segmentation, and the Label folder stores labeled images.
- The name and format of the images for semantic segmentation must be the same as those of the corresponding labeled images. Images in JPG, JPEG, PNG, and BMP formats are supported.
- In the preceding code snippet, train.txt and val.txt are two list files. train.txt is the list file of the training set, and val.txt is the list file of the validation set. It is recommended that the ratio of the training set to the validation set be 8:2.
In the list file, the relative paths of images and labels are separated by spaces. Different pieces of data are separated by newline characters. The following gives an example:
-Image/a.jpg Label/a.jpg +Image/a.jpg Label/a.jpg Image/b.jpg Label/b.jpg ...Input and Output Mode
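The train.txt and val.txt list files described in the Image Semantic Segmentation section above can be generated with a short script. A minimal sketch, assuming the Image/ and Label/ folder layout shown there (labeled images share the image file names) and the recommended 8:2 split:
import os
import random

image_dir, label_dir = "Image", "Label"

# Each labeled image in Label/ has the same file name as its image in Image/.
names = sorted(os.listdir(image_dir))
pairs = [f"{image_dir}/{n} {label_dir}/{n}" for n in names]

random.shuffle(pairs)
split = int(len(pairs) * 0.8)  # recommended 8:2 training/validation ratio
with open("train.txt", "w") as f:
    f.write("\n".join(pairs[:split]) + "\n")
with open("val.txt", "w") as f:
    f.write("\n".join(pairs[split:]) + "\n")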
Undefined Mode can be overwritten. That is, you can select another input and output mode during model creation.
Model Package Specifications
-
- The model package must be stored in the OBS folder named model. Model files and the model inference code file are stored in the model folder.
- The model inference code file is optional. If the file exists, the file name must be customize_service.py. Only one inference code file can exist in the model folder. For details about how to compile the model inference code file, see Specifications for Compiling Model Inference Code.
- The structure of the model package imported using the template is as follows:
model/ +
- The structure of the model package imported using the template is as follows:
model/ │ ├── Model file //(Mandatory) The model file format varies according to the engine. For details, see the model package example. ├── Custom Python package //(Optional) User's Python package, which can be directly referenced in the model inference code -├── customize_service.py //(Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.+├── customize_service.py //(Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.Model Package Example
Structure of the TensorFlow-based model package
When publishing the model, you only need to specify the model directory.
-OBS bucket/directory name -|── model (Mandatory) The folder must be named model and is used to store model-related files. +OBS bucket/directory name +|── model (Mandatory) The folder must be named model and is used to store model-related files. ├── <<Custom Python package>> (Optional) User's Python package, which can be directly referenced in the model inference code ├── saved_model.pb (Mandatory) Protocol buffer file, which contains the diagram description of the model - ├── variables Mandatory for the main file of the *.pb model. The folder must be named variables and contains the weight deviation of the model. + ├── variables Mandatory for the main file of the *.pb model. The folder must be named variables and contains the weight deviation of the model. ├── variables.index Mandatory ├── variables.data-00000-of-00001 Mandatory - ├──customize_service.py (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file exists. The .py file on which customize_service.py depends can be directly put in the model directory.+ ├──customize_service.py (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file exists. The .py file on which customize_service.py depends can be directly put in the model directory.Input and Output Mode
Undefined Mode can be overwritten. That is, you can select another input and output mode during model creation.
Model Package Specifications
-
- The model package must be stored in the OBS folder named model. Model files and the model inference code file are stored in the model folder.
- The model inference code file is optional. If the file exists, the file name must be customize_service.py. Only one inference code file can exist in the model folder. For details about how to compile the model inference code file, see Specifications for Compiling Model Inference Code.
- The structure of the model package imported using the template is as follows:
model/ +
- The structure of the model package imported using the template is as follows:
model/ │ ├── Model file //(Mandatory) The model file format varies according to the engine. For details, see the model package example. ├── Custom Python package //(Optional) User's Python package, which can be directly referenced in the model inference code -├── customize_service.py //(Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.+├── customize_service.py //(Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.Model Package Example
Structure of the TensorFlow-based model package
When publishing the model, you only need to specify the model directory.
-OBS bucket/directory name -|── model (Mandatory) The folder must be named model and is used to store model-related files. +OBS bucket/directory name +|── model (Mandatory) The folder must be named model and is used to store model-related files. ├── <<Custom Python package>> (Optional) User's Python package, which can be directly referenced in the model inference code ├── saved_model.pb (Mandatory) Protocol buffer file, which contains the diagram description of the model - ├── variables Mandatory for the main file of the *.pb model. The folder must be named variables and contains the weight deviation of the model. + ├── variables Mandatory for the main file of the *.pb model. The folder must be named variables and contains the weight deviation of the model. ├── variables.index Mandatory ├── variables.data-00000-of-00001 Mandatory - ├──customize_service.py (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file exists. The .py file on which customize_service.py depends can be directly put in the model directory.+ ├──customize_service.py (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file exists. The .py file on which customize_service.py depends can be directly put in the model directory.diff --git a/docs/modelarts/umn/modelarts_23_0163.html b/docs/modelarts/umn/modelarts_23_0163.html index 513c844a..90ebd007 100644 --- a/docs/modelarts/umn/modelarts_23_0163.html +++ b/docs/modelarts/umn/modelarts_23_0163.html @@ -8,21 +8,21 @@diff --git a/docs/modelarts/umn/modelarts_23_0164.html b/docs/modelarts/umn/modelarts_23_0164.html index 7c03e233..18469fc0 100644 --- a/docs/modelarts/umn/modelarts_23_0164.html +++ b/docs/modelarts/umn/modelarts_23_0164.html @@ -8,21 +8,21 @@Input and Output Mode
Undefined Mode can be overwritten. That is, you can select another input and output mode during model creation.
Model Package Specifications
-
- The model package must be stored in the OBS folder named model. Model files and the model inference code file are stored in the model folder.
- The model inference code file is optional. If the file exists, the file name must be customize_service.py. Only one inference code file can exist in the model folder. For details about how to compile the model inference code file, see Specifications for Compiling Model Inference Code.
- The structure of the model package imported using the template is as follows:
model/ +
- The structure of the model package imported using the template is as follows:
model/ │ ├── Model file //(Mandatory) The model file format varies according to the engine. For details, see the model package example. ├── Custom Python package //(Optional) User's Python package, which can be directly referenced in the model inference code -├── customize_service.py //(Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.+├── customize_service.py //(Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.Model Package Example
Structure of the MXNet-based model package
When publishing the model, you only need to specify the model directory.
-OBS bucket/directory name -|── model (Mandatory) The folder must be named model and is used to store model-related files. +OBS bucket/directory name +|── model (Mandatory) The folder must be named model and is used to store model-related files. ├── <<Custom Python package>> (Optional) User's Python package, which can be directly referenced in the model inference code ├── resnet-50-symbol.json (Mandatory) Model definition file, which contains the neural network description of the model ├── resnet-50-0000.params (Mandatory) Model variable parameter file, which contains parameter and weight information - ├──customize_service.py (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file exists. The .py file on which customize_service.py depends can be directly put in the model directory.+ ├──customize_service.py (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file exists. The .py file on which customize_service.py depends can be directly put in the model directory.Input and Output Mode
Undefined Mode can be overwritten. That is, you can select another input and output mode during model creation.
Model Package Specifications
-
- The model package must be stored in the OBS folder named model. Model files and the model inference code file are stored in the model folder.
- The model inference code file is optional. If the file exists, the file name must be customize_service.py. Only one inference code file can exist in the model folder. For details about how to compile the model inference code file, see Specifications for Compiling Model Inference Code.
- The structure of the model package imported using the template is as follows:
model/ +
- The structure of the model package imported using the template is as follows:
model/ │ ├── Model file //(Mandatory) The model file format varies according to the engine. For details, see the model package example. ├── Custom Python package //(Optional) User's Python package, which can be directly referenced in the model inference code -├── customize_service.py //(Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.+├── customize_service.py //(Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.Model Package Example
Structure of the MXNet-based model package
When publishing the model, you only need to specify the model directory.
-OBS bucket/directory name -|── model (Mandatory) The folder must be named model and is used to store model-related files. +OBS bucket/directory name +|── model (Mandatory) The folder must be named model and is used to store model-related files. ├── <<Custom Python package>> (Optional) User's Python package, which can be directly referenced in the model inference code ├── resnet-50-symbol.json (Mandatory) Model definition file, which contains the neural network description of the model ├── resnet-50-0000.params (Mandatory) Model variable parameter file, which contains parameter and weight information - ├──customize_service.py (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file exists. The .py file on which customize_service.py depends can be directly put in the model directory.+ ├──customize_service.py (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file exists. The .py file on which customize_service.py depends can be directly put in the model directory.diff --git a/docs/modelarts/umn/modelarts_23_0165.html b/docs/modelarts/umn/modelarts_23_0165.html index 68c47ec2..a9c0515a 100644 --- a/docs/modelarts/umn/modelarts_23_0165.html +++ b/docs/modelarts/umn/modelarts_23_0165.html @@ -8,20 +8,20 @@Input and Output Mode
Undefined Mode can be overwritten. That is, you can select another input and output mode during model creation.
Model Package Specifications
-
- The model package must be stored in the OBS folder named model. Model files and the model inference code file are stored in the model folder.
- The model inference code file is optional. If the file exists, the file name must be customize_service.py. Only one inference code file can exist in the model folder. For details about how to compile the model inference code file, see Specifications for Compiling Model Inference Code.
- The structure of the model package imported using the template is as follows:
model/ +
- The structure of the model package imported using the template is as follows:
model/ │ ├── Model file //(Mandatory) The model file format varies according to the engine. For details, see the model package example. ├── Custom Python package //(Optional) User's Python package, which can be directly referenced in the model inference code -├── customize_service.py //(Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.+├── customize_service.py //(Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.Model Package Example
Structure of the PyTorch-based model package
When publishing the model, you only need to specify the model directory.
-OBS bucket/directory name -|── model (Mandatory) The folder must be named model and is used to store model-related files. +OBS bucket/directory name +|── model (Mandatory) The folder must be named model and is used to store model-related files. ├── <<Custom Python package>> (Optional) User's Python package, which can be directly referenced in the model inference code ├── resnet50.pth (Mandatory) PyTorch model file, which contains variable and weight information - ├──customize_service.py (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file exists. The .py file on which customize_service.py depends can be directly put in the model directory.+ ├──customize_service.py (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file exists. The .py file on which customize_service.py depends can be directly put in the model directory.diff --git a/docs/modelarts/umn/modelarts_23_0166.html b/docs/modelarts/umn/modelarts_23_0166.html index 4a4555cf..1b4a0e52 100644 --- a/docs/modelarts/umn/modelarts_23_0166.html +++ b/docs/modelarts/umn/modelarts_23_0166.html @@ -8,20 +8,20 @@Input and Output Mode
Undefined Mode can be overwritten. That is, you can select another input and output mode during model creation.
Model Package Specifications
-
- The model package must be stored in the OBS folder named model. Model files and the model inference code file are stored in the model folder.
- The model inference code file is optional. If the file exists, the file name must be customize_service.py. Only one inference code file can exist in the model folder. For details about how to compile the model inference code file, see Specifications for Compiling Model Inference Code.
- The structure of the model package imported using the template is as follows:
model/ +
- The structure of the model package imported using the template is as follows:
model/ │ ├── Model file //(Mandatory) The model file format varies according to the engine. For details, see the model package example. ├── Custom Python package //(Optional) User's Python package, which can be directly referenced in the model inference code -├── customize_service.py //(Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.+├── customize_service.py //(Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.Model Package Example
Structure of the PyTorch-based model package
When publishing the model, you only need to specify the model directory.
-OBS bucket/directory name -|── model (Mandatory) The folder must be named model and is used to store model-related files. +OBS bucket/directory name +|── model (Mandatory) The folder must be named model and is used to store model-related files. ├── <<Custom Python package>> (Optional) User's Python package, which can be directly referenced in the model inference code ├── resnet50.pth (Mandatory) PyTorch model file, which contains variable and weight information - ├──customize_service.py (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file exists. The .py file on which customize_service.py depends can be directly put in the model directory.+ ├──customize_service.py (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file exists. The .py file on which customize_service.py depends can be directly put in the model directory.diff --git a/docs/modelarts/umn/modelarts_23_0167.html b/docs/modelarts/umn/modelarts_23_0167.html index 45c7c32e..aa5cef74 100644 --- a/docs/modelarts/umn/modelarts_23_0167.html +++ b/docs/modelarts/umn/modelarts_23_0167.html @@ -8,20 +8,20 @@Input and Output Mode
Undefined Mode can be overwritten. That is, you can select another input and output mode during model creation.
Model Package Specifications
-
- The model package must be stored in the OBS folder named model. Model files and the model inference code file are stored in the model folder.
- The model inference code file is optional. If the file exists, the file name must be customize_service.py. Only one inference code file can exist in the model folder. For details about how to compile the model inference code file, see Specifications for Compiling Model Inference Code.
- The structure of the model package imported using the template is as follows:
model/ +
- The structure of the model package imported using the template is as follows:
model/ │ ├── Model file //(Mandatory) The model file format varies according to the engine. For details, see the model package example. ├── Custom Python package //(Optional) User's Python package, which can be directly referenced in the model inference code -├── customize_service.py //(Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.+├── customize_service.py //(Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.Model Package Example
Structure of the Caffe-based model package
-When publishing the model, you only need to specify the model directory.diff --git a/docs/modelarts/umn/modelarts_23_0168.html b/docs/modelarts/umn/modelarts_23_0168.html index 29da8ca1..6dc45112 100644 --- a/docs/modelarts/umn/modelarts_23_0168.html +++ b/docs/modelarts/umn/modelarts_23_0168.html @@ -8,20 +8,20 @@OBS bucket/directory name -|── model (Mandatory) The folder must be named model and is used to store model-related files. +When publishing the model, you only need to specify the model directory.OBS bucket/directory name +|── model (Mandatory) The folder must be named model and is used to store model-related files. |── <<Custom Python package>> (Optional) User's Python package, which can be directly referenced in the model inference code |── deploy.prototxt (Mandatory) Caffe model file, which contains information such as the model network structure |── resnet.caffemodel (Mandatory) Caffe model file, which contains variable and weight information - |── customize_service.py (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file exists. The .py file on which customize_service.py depends can be directly put in the model directory.+ |── customize_service.py (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file exists. The .py file on which customize_service.py depends can be directly put in the model directory.Input and Output Mode
Undefined Mode can be overwritten. That is, you can select another input and output mode during model creation.
Model Package Specifications
-
- The model package must be stored in the OBS folder named model. Model files and the model inference code file are stored in the model folder.
- The model inference code file is optional. If the file exists, the file name must be customize_service.py. Only one inference code file can exist in the model folder. For details about how to compile the model inference code file, see Specifications for Compiling Model Inference Code.
- The structure of the model package imported using the template is as follows:
model/ +
- The structure of the model package imported using the template is as follows:
model/ │ ├── Model file //(Mandatory) The model file format varies according to the engine. For details, see the model package example. ├── Custom Python package //(Optional) User's Python package, which can be directly referenced in the model inference code -├── customize_service.py //(Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.+├── customize_service.py //(Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.Model Package Example
Structure of the Caffe-based model package
-When publishing the model, you only need to specify the model directory.diff --git a/docs/modelarts/umn/modelarts_23_0169.html b/docs/modelarts/umn/modelarts_23_0169.html index 0dcadea7..5d4575a1 100644 --- a/docs/modelarts/umn/modelarts_23_0169.html +++ b/docs/modelarts/umn/modelarts_23_0169.html @@ -8,20 +8,20 @@OBS bucket/directory name -|── model (Mandatory) The folder must be named model and is used to store model-related files. +When publishing the model, you only need to specify the model directory.OBS bucket/directory name +|── model (Mandatory) The folder must be named model and is used to store model-related files. |── <<Custom Python package>> (Optional) User's Python package, which can be directly referenced in the model inference code |── deploy.prototxt (Mandatory) Caffe model file, which contains information such as the model network structure |── resnet.caffemodel (Mandatory) Caffe model file, which contains variable and weight information - |── customize_service.py (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file exists. The .py file on which customize_service.py depends can be directly put in the model directory.+ |── customize_service.py (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file exists. The .py file on which customize_service.py depends can be directly put in the model directory.Input and Output Mode
Undefined Mode can be overwritten. That is, you can select another input and output mode during model creation.
Model Package Specifications
-
- The model package must be stored in the OBS folder named model. Model files and the model inference code file are stored in the model folder.
- The model inference code file is optional. If the file exists, the file name must be customize_service.py. Only one inference code file can exist in the model folder. For details about how to compile the model inference code file, see Specifications for Compiling Model Inference Code.
- The structure of the model package imported using the template is as follows:
model/ +
- The structure of the model package imported using the template is as follows:
model/ │ ├── Model file //(Mandatory) The model file format varies according to the engine. For details, see the model package example. ├── Custom Python package //(Optional) User's Python package, which can be directly referenced in the model inference code -├── customize_service.py //(Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.+├── customize_service.py //(Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.diff --git a/docs/modelarts/umn/modelarts_23_0170.html b/docs/modelarts/umn/modelarts_23_0170.html index 9b70110f..0a1d4cfe 100644 --- a/docs/modelarts/umn/modelarts_23_0170.html +++ b/docs/modelarts/umn/modelarts_23_0170.html @@ -8,20 +8,20 @@Model Package Example
Structure of the Caffe-based model package
-When publishing the model, you only need to specify the model directory.OBS bucket/directory name -|── model (Mandatory) The folder must be named model and is used to store model-related files. +When publishing the model, you only need to specify the model directory.OBS bucket/directory name +|── model (Mandatory) The folder must be named model and is used to store model-related files. |── <<Custom Python package>> (Optional) User's Python package, which can be directly referenced in the model inference code |── deploy.prototxt (Mandatory) Caffe model file, which contains information such as the model network structure |── resnet.caffemodel (Mandatory) Caffe model file, which contains variable and weight information - |── customize_service.py (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file exists. The .py file on which customize_service.py depends can be directly put in the model directory.+ |── customize_service.py (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file exists. The .py file on which customize_service.py depends can be directly put in the model directory.Input and Output Mode
Undefined Mode can be overwritten. That is, you can select another input and output mode during model creation.
Model Package Specifications
-
- The model package must be stored in the OBS folder named model. Model files and the model inference code file are stored in the model folder.
- The model inference code file is optional. If the file exists, the file name must be customize_service.py. Only one inference code file can exist in the model folder. For details about how to compile the model inference code file, see Specifications for Compiling Model Inference Code.
- The structure of the model package imported using the template is as follows:
model/ +
- The structure of the model package imported using the template is as follows:
model/ │ ├── Model file //(Mandatory) The model file format varies according to the engine. For details, see the model package example. ├── Custom Python package //(Optional) User's Python package, which can be directly referenced in the model inference code -├── customize_service.py //(Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.+├── customize_service.py //(Optional) Model inference code file. The file name must be customize_service.py. Otherwise, the code is not considered as inference code.diff --git a/docs/modelarts/umn/modelarts_23_0173.html b/docs/modelarts/umn/modelarts_23_0173.html index a26e12d7..dd3cef5e 100644 --- a/docs/modelarts/umn/modelarts_23_0173.html +++ b/docs/modelarts/umn/modelarts_23_0173.html @@ -2,733 +2,369 @@Model Package Example
Structure of the Caffe-based model package
-When publishing the model, you only need to specify the model directory.OBS bucket/directory name -|── model (Mandatory) The folder must be named model and is used to store model-related files. +When publishing the model, you only need to specify the model directory.OBS bucket/directory name +|── model (Mandatory) The folder must be named model and is used to store model-related files. |── <<Custom Python package>> (Optional) User's Python package, which can be directly referenced in the model inference code |── deploy.prototxt (Mandatory) Caffe model file, which contains information such as the model network structure |── resnet.caffemodel (Mandatory) Caffe model file, which contains variable and weight information - |── customize_service.py (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file exists. The .py file on which customize_service.py depends can be directly put in the model directory.+ |── customize_service.py (Optional) Model inference code file. The file must be named customize_service.py. Only one inference code file exists. The .py file on which customize_service.py depends can be directly put in the model directory.TensorFlow
TensorFlow has two types of APIs: Keras and tf. Keras and tf use different code for training and saving models, but the same code for inference.
--Training a Model (Keras API)
+model.summary() +# Train the model. +model.fit(x_train, y_train, epochs=2) +# Evaluate the model. +model.evaluate(x_test, y_test)
from keras.models import Sequential +model = Sequential() +from keras.layers import Dense +import tensorflow as tf -# Import a training dataset. -mnist = tf.keras.datasets.mnist -(x_train, y_train),(x_test, y_test) = mnist.load_data() -x_train, x_test = x_train / 255.0, x_test / 255.0 +# Import a training dataset. +mnist = tf.keras.datasets.mnist +(x_train, y_train),(x_test, y_test) = mnist.load_data() +x_train, x_test = x_train / 255.0, x_test / 255.0 -print(x_train.shape) +print(x_train.shape) -from keras.layers import Dense -from keras.models import Sequential -import keras -from keras.layers import Dense, Activation, Flatten, Dropout +from keras.layers import Dense +from keras.models import Sequential +import keras +from keras.layers import Dense, Activation, Flatten, Dropout -# Define a model network. -model = Sequential() -model.add(Flatten(input_shape=(28,28))) -model.add(Dense(units=5120,activation='relu')) -model.add(Dropout(0.2)) +# Define a model network. +model = Sequential() +model.add(Flatten(input_shape=(28,28))) +model.add(Dense(units=5120,activation='relu')) +model.add(Dropout(0.2)) -model.add(Dense(units=10, activation='softmax')) +model.add(Dense(units=10, activation='softmax')) -# Define an optimizer and loss functions. -model.compile(optimizer='adam', - loss='sparse_categorical_crossentropy', - metrics=['accuracy']) +# Define an optimizer and loss functions. +model.compile(optimizer='adam', + loss='sparse_categorical_crossentropy', + metrics=['accuracy']) -model.summary() -# Train the model. -model.fit(x_train, y_train, epochs=2) -# Evaluate the model. -model.evaluate(x_test, y_test) --Saving a Model (Keras API)
+) +builder.save()
from keras import backend as K -# K.get_session().run(tf.global_variables_initializer()) +# K.get_session().run(tf.global_variables_initializer()) -# Define the inputs and outputs of the prediction API. -# The key values of the inputs and outputs dictionaries are used as the index keys for the input and output tensors of the model. - # The input and output definitions of the model must match the custom inference script. -predict_signature = tf.saved_model.signature_def_utils.predict_signature_def( - inputs={"images" : model.input}, - outputs={"scores" : model.output} -) +# Define the inputs and outputs of the prediction API. +# The key values of the inputs and outputs dictionaries are used as the index keys for the input and output tensors of the model. + # The input and output definitions of the model must match the custom inference script. +predict_signature = tf.saved_model.signature_def_utils.predict_signature_def( + inputs={"images" : model.input}, + outputs={"scores" : model.output} +) -# Define a save path. -builder = tf.saved_model.builder.SavedModelBuilder('./mnist_keras/') +# Define a save path. +builder = tf.saved_model.builder.SavedModelBuilder('./mnist_keras/') -builder.add_meta_graph_and_variables( +builder.add_meta_graph_and_variables( - sess = K.get_session(), - # The tf.saved_model.tag_constants.SERVING tag needs to be defined for inference and deployment. - tags=[tf.saved_model.tag_constants.SERVING], - """ - signature_def_map: Only single items can exist, or the corresponding key needs to be defined as follows: - tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY - """ - signature_def_map={ - tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: - predict_signature - } + sess = K.get_session(), + # The tf.saved_model.tag_constants.SERVING tag needs to be defined for inference and deployment. + tags=[tf.saved_model.tag_constants.SERVING], + """ + signature_def_map: Only single items can exist, or the corresponding key needs to be defined as follows: + tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY + """ + signature_def_map={ + tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: + predict_signature + } -) -builder.save() --Training a Model (tf API)
+print('Training model...') +mnist = read_data_sets(data_path, one_hot=True) +sess = tf.InteractiveSession() +serialized_tf_example = tf.placeholder(tf.string, name='tf_example') +feature_configs = {'x': tf.FixedLenFeature(shape=[784], dtype=tf.float32), } +tf_example = tf.parse_example(serialized_tf_example, feature_configs) +x = tf.identity(tf_example['x'], name='x') # use tf.identity() to assign name +y_ = tf.placeholder('float', shape=[None, 10]) +w = tf.Variable(tf.zeros([784, 10])) +b = tf.Variable(tf.zeros([10])) +sess.run(tf.global_variables_initializer()) +y = tf.nn.softmax(tf.matmul(x, w) + b, name='y') +cross_entropy = -tf.reduce_sum(y_ * tf.log(y)) +train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy) +values, indices = tf.nn.top_k(y, 10) +table = tf.contrib.lookup.index_to_string_table_from_tensor( + tf.constant([str(i) for i in range(10)])) +prediction_classes = table.lookup(tf.to_int64(indices)) +for _ in range(training_iteration): + batch = mnist.train.next_batch(50) + train_step.run(feed_dict={x: batch[0], y_: batch[1]}) +correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) +accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float')) +print('training accuracy %g' % sess.run( + accuracy, feed_dict={ + x: mnist.test.images, + y_: mnist.test.labels + })) +print('Done training!')
from __future__ import print_function -import gzip -import os -import urllib +import gzip +import os +import urllib -import numpy -import tensorflow as tf -from six.moves import urllib +import numpy +import tensorflow as tf +from six.moves import urllib -# Training data is obtained from the Yann LeCun official website http://yann.lecun.com/exdb/mnist/. -SOURCE_URL = 'http://yann.lecun.com/exdb/mnist/' -TRAIN_IMAGES = 'train-images-idx3-ubyte.gz' -TRAIN_LABELS = 'train-labels-idx1-ubyte.gz' -TEST_IMAGES = 't10k-images-idx3-ubyte.gz' -TEST_LABELS = 't10k-labels-idx1-ubyte.gz' -VALIDATION_SIZE = 5000 +# Training data is obtained from the Yann LeCun official website http://yann.lecun.com/exdb/mnist/. +SOURCE_URL = 'http://yann.lecun.com/exdb/mnist/' +TRAIN_IMAGES = 'train-images-idx3-ubyte.gz' +TRAIN_LABELS = 'train-labels-idx1-ubyte.gz' +TEST_IMAGES = 't10k-images-idx3-ubyte.gz' +TEST_LABELS = 't10k-labels-idx1-ubyte.gz' +VALIDATION_SIZE = 5000 -def maybe_download(filename, work_directory): - """Download the data from Yann's website, unless it's already here.""" - if not os.path.exists(work_directory): - os.mkdir(work_directory) - filepath = os.path.join(work_directory, filename) - if not os.path.exists(filepath): - filepath, _ = urllib.request.urlretrieve(SOURCE_URL + filename, filepath) - statinfo = os.stat(filepath) - print('Successfully downloaded %s %d bytes.' % (filename, statinfo.st_size)) - return filepath +def maybe_download(filename, work_directory): + """Download the data from Yann's website, unless it's already here.""" + if not os.path.exists(work_directory): + os.mkdir(work_directory) + filepath = os.path.join(work_directory, filename) + if not os.path.exists(filepath): + filepath, _ = urllib.request.urlretrieve(SOURCE_URL + filename, filepath) + statinfo = os.stat(filepath) + print('Successfully downloaded %s %d bytes.' 
% (filename, statinfo.st_size)) + return filepath -def _read32(bytestream): - dt = numpy.dtype(numpy.uint32).newbyteorder('>') - return numpy.frombuffer(bytestream.read(4), dtype=dt)[0] +def _read32(bytestream): + dt = numpy.dtype(numpy.uint32).newbyteorder('>') + return numpy.frombuffer(bytestream.read(4), dtype=dt)[0] -def extract_images(filename): - """Extract the images into a 4D uint8 numpy array [index, y, x, depth].""" - print('Extracting %s' % filename) - with gzip.open(filename) as bytestream: - magic = _read32(bytestream) - if magic != 2051: - raise ValueError( - 'Invalid magic number %d in MNIST image file: %s' % - (magic, filename)) - num_images = _read32(bytestream) - rows = _read32(bytestream) - cols = _read32(bytestream) - buf = bytestream.read(rows * cols * num_images) - data = numpy.frombuffer(buf, dtype=numpy.uint8) - data = data.reshape(num_images, rows, cols, 1) - return data +def extract_images(filename): + """Extract the images into a 4D uint8 numpy array [index, y, x, depth].""" + print('Extracting %s' % filename) + with gzip.open(filename) as bytestream: + magic = _read32(bytestream) + if magic != 2051: + raise ValueError( + 'Invalid magic number %d in MNIST image file: %s' % + (magic, filename)) + num_images = _read32(bytestream) + rows = _read32(bytestream) + cols = _read32(bytestream) + buf = bytestream.read(rows * cols * num_images) + data = numpy.frombuffer(buf, dtype=numpy.uint8) + data = data.reshape(num_images, rows, cols, 1) + return data -def dense_to_one_hot(labels_dense, num_classes=10): - """Convert class labels from scalars to one-hot vectors.""" - num_labels = labels_dense.shape[0] - index_offset = numpy.arange(num_labels) * num_classes - labels_one_hot = numpy.zeros((num_labels, num_classes)) - labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1 - return labels_one_hot +def dense_to_one_hot(labels_dense, num_classes=10): + """Convert class labels from scalars to one-hot vectors.""" + num_labels = labels_dense.shape[0] + index_offset = numpy.arange(num_labels) * num_classes + labels_one_hot = numpy.zeros((num_labels, num_classes)) + labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1 + return labels_one_hot -def extract_labels(filename, one_hot=False): - """Extract the labels into a 1D uint8 numpy array [index].""" - print('Extracting %s' % filename) - with gzip.open(filename) as bytestream: - magic = _read32(bytestream) - if magic != 2049: - raise ValueError( - 'Invalid magic number %d in MNIST label file: %s' % - (magic, filename)) - num_items = _read32(bytestream) - buf = bytestream.read(num_items) - labels = numpy.frombuffer(buf, dtype=numpy.uint8) - if one_hot: - return dense_to_one_hot(labels) - return labels +def extract_labels(filename, one_hot=False): + """Extract the labels into a 1D uint8 numpy array [index].""" + print('Extracting %s' % filename) + with gzip.open(filename) as bytestream: + magic = _read32(bytestream) + if magic != 2049: + raise ValueError( + 'Invalid magic number %d in MNIST label file: %s' % + (magic, filename)) + num_items = _read32(bytestream) + buf = bytestream.read(num_items) + labels = numpy.frombuffer(buf, dtype=numpy.uint8) + if one_hot: + return dense_to_one_hot(labels) + return labels -class DataSet(object): - """Class encompassing test, validation and training MNIST data set.""" +class DataSet(object): + """Class encompassing test, validation and training MNIST data set.""" - def __init__(self, images, labels, fake_data=False, one_hot=False): - """Construct a DataSet. 
one_hot arg is used only if fake_data is true.""" + def __init__(self, images, labels, fake_data=False, one_hot=False): + """Construct a DataSet. one_hot arg is used only if fake_data is true.""" - if fake_data: - self._num_examples = 10000 - self.one_hot = one_hot - else: - assert images.shape[0] == labels.shape[0], ( - 'images.shape: %s labels.shape: %s' % (images.shape, - labels.shape)) - self._num_examples = images.shape[0] + if fake_data: + self._num_examples = 10000 + self.one_hot = one_hot + else: + assert images.shape[0] == labels.shape[0], ( + 'images.shape: %s labels.shape: %s' % (images.shape, + labels.shape)) + self._num_examples = images.shape[0] - # Convert shape from [num examples, rows, columns, depth] - # to [num examples, rows*columns] (assuming depth == 1) - assert images.shape[3] == 1 - images = images.reshape(images.shape[0], - images.shape[1] * images.shape[2]) - # Convert from [0, 255] -> [0.0, 1.0]. - images = images.astype(numpy.float32) - images = numpy.multiply(images, 1.0 / 255.0) - self._images = images - self._labels = labels - self._epochs_completed = 0 - self._index_in_epoch = 0 + # Convert shape from [num examples, rows, columns, depth] + # to [num examples, rows*columns] (assuming depth == 1) + assert images.shape[3] == 1 + images = images.reshape(images.shape[0], + images.shape[1] * images.shape[2]) + # Convert from [0, 255] -> [0.0, 1.0]. + images = images.astype(numpy.float32) + images = numpy.multiply(images, 1.0 / 255.0) + self._images = images + self._labels = labels + self._epochs_completed = 0 + self._index_in_epoch = 0 - @property - def images(self): - return self._images + @property + def images(self): + return self._images - @property - def labels(self): - return self._labels + @property + def labels(self): + return self._labels - @property - def num_examples(self): - return self._num_examples + @property + def num_examples(self): + return self._num_examples - @property - def epochs_completed(self): - return self._epochs_completed + @property + def epochs_completed(self): + return self._epochs_completed - def next_batch(self, batch_size, fake_data=False): - """Return the next `batch_size` examples from this data set.""" - if fake_data: - fake_image = [1] * 784 - if self.one_hot: - fake_label = [1] + [0] * 9 - else: - fake_label = 0 - return [fake_image for _ in range(batch_size)], [ - fake_label for _ in range(batch_size) - ] - start = self._index_in_epoch - self._index_in_epoch += batch_size - if self._index_in_epoch > self._num_examples: - # Finished epoch - self._epochs_completed += 1 - # Shuffle the data - perm = numpy.arange(self._num_examples) - numpy.random.shuffle(perm) - self._images = self._images[perm] - self._labels = self._labels[perm] - # Start next epoch - start = 0 - self._index_in_epoch = batch_size - assert batch_size <= self._num_examples - end = self._index_in_epoch - return self._images[start:end], self._labels[start:end] + def next_batch(self, batch_size, fake_data=False): + """Return the next `batch_size` examples from this data set.""" + if fake_data: + fake_image = [1] * 784 + if self.one_hot: + fake_label = [1] + [0] * 9 + else: + fake_label = 0 + return [fake_image for _ in range(batch_size)], [ + fake_label for _ in range(batch_size) + ] + start = self._index_in_epoch + self._index_in_epoch += batch_size + if self._index_in_epoch > self._num_examples: + # Finished epoch + self._epochs_completed += 1 + # Shuffle the data + perm = numpy.arange(self._num_examples) + numpy.random.shuffle(perm) + self._images = 
self._images[perm] + self._labels = self._labels[perm] + # Start next epoch + start = 0 + self._index_in_epoch = batch_size + assert batch_size <= self._num_examples + end = self._index_in_epoch + return self._images[start:end], self._labels[start:end] -def read_data_sets(train_dir, fake_data=False, one_hot=False): - """Return training, validation and testing data sets.""" +def read_data_sets(train_dir, fake_data=False, one_hot=False): + """Return training, validation and testing data sets.""" - class DataSets(object): - pass + class DataSets(object): + pass - data_sets = DataSets() + data_sets = DataSets() - if fake_data: - data_sets.train = DataSet([], [], fake_data=True, one_hot=one_hot) - data_sets.validation = DataSet([], [], fake_data=True, one_hot=one_hot) - data_sets.test = DataSet([], [], fake_data=True, one_hot=one_hot) - return data_sets + if fake_data: + data_sets.train = DataSet([], [], fake_data=True, one_hot=one_hot) + data_sets.validation = DataSet([], [], fake_data=True, one_hot=one_hot) + data_sets.test = DataSet([], [], fake_data=True, one_hot=one_hot) + return data_sets - local_file = maybe_download(TRAIN_IMAGES, train_dir) - train_images = extract_images(local_file) + local_file = maybe_download(TRAIN_IMAGES, train_dir) + train_images = extract_images(local_file) - local_file = maybe_download(TRAIN_LABELS, train_dir) - train_labels = extract_labels(local_file, one_hot=one_hot) + local_file = maybe_download(TRAIN_LABELS, train_dir) + train_labels = extract_labels(local_file, one_hot=one_hot) - local_file = maybe_download(TEST_IMAGES, train_dir) - test_images = extract_images(local_file) + local_file = maybe_download(TEST_IMAGES, train_dir) + test_images = extract_images(local_file) - local_file = maybe_download(TEST_LABELS, train_dir) - test_labels = extract_labels(local_file, one_hot=one_hot) + local_file = maybe_download(TEST_LABELS, train_dir) + test_labels = extract_labels(local_file, one_hot=one_hot) - validation_images = train_images[:VALIDATION_SIZE] - validation_labels = train_labels[:VALIDATION_SIZE] - train_images = train_images[VALIDATION_SIZE:] - train_labels = train_labels[VALIDATION_SIZE:] + validation_images = train_images[:VALIDATION_SIZE] + validation_labels = train_labels[:VALIDATION_SIZE] + train_images = train_images[VALIDATION_SIZE:] + train_labels = train_labels[VALIDATION_SIZE:] - data_sets.train = DataSet(train_images, train_labels) - data_sets.validation = DataSet(validation_images, validation_labels) - data_sets.test = DataSet(test_images, test_labels) - return data_sets + data_sets.train = DataSet(train_images, train_labels) + data_sets.validation = DataSet(validation_images, validation_labels) + data_sets.test = DataSet(test_images, test_labels) + return data_sets -training_iteration = 1000 +training_iteration = 1000 -modelarts_example_path = './modelarts-mnist-train-save-deploy-example' +modelarts_example_path = './modelarts-mnist-train-save-deploy-example' -export_path = modelarts_example_path + '/model/' -data_path = './' +export_path = modelarts_example_path + '/model/' +data_path = './' -print('Training model...') -mnist = read_data_sets(data_path, one_hot=True) -sess = tf.InteractiveSession() -serialized_tf_example = tf.placeholder(tf.string, name='tf_example') -feature_configs = {'x': tf.FixedLenFeature(shape=[784], dtype=tf.float32), } -tf_example = tf.parse_example(serialized_tf_example, feature_configs) -x = tf.identity(tf_example['x'], name='x') # use tf.identity() to assign name -y_ = tf.placeholder('float', shape=[None, 10]) -w = 
tf.Variable(tf.zeros([784, 10])) -b = tf.Variable(tf.zeros([10])) -sess.run(tf.global_variables_initializer()) -y = tf.nn.softmax(tf.matmul(x, w) + b, name='y') -cross_entropy = -tf.reduce_sum(y_ * tf.log(y)) -train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy) -values, indices = tf.nn.top_k(y, 10) -table = tf.contrib.lookup.index_to_string_table_from_tensor( - tf.constant([str(i) for i in range(10)])) -prediction_classes = table.lookup(tf.to_int64(indices)) -for _ in range(training_iteration): - batch = mnist.train.next_batch(50) - train_step.run(feed_dict={x: batch[0], y_: batch[1]}) -correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) -accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float')) -print('training accuracy %g' % sess.run( - accuracy, feed_dict={ - x: mnist.test.images, - y_: mnist.test.labels - })) -print('Done training!') --Saving a Model (tf API)
+print('Done exporting!')
# Export the model. +# The model needs to be saved using the saved_model API. +print('Exporting trained model to', export_path) +builder = tf.saved_model.builder.SavedModelBuilder(export_path) -tensor_info_x = tf.saved_model.utils.build_tensor_info(x) -tensor_info_y = tf.saved_model.utils.build_tensor_info(y) +tensor_info_x = tf.saved_model.utils.build_tensor_info(x) +tensor_info_y = tf.saved_model.utils.build_tensor_info(y) -# Define the inputs and outputs of the prediction API. -# The key values of the inputs and outputs dictionaries are used as the index keys for the input and output tensors of the model. - # The input and output definitions of the model must match the custom inference script. -prediction_signature = ( - tf.saved_model.signature_def_utils.build_signature_def( - inputs={'images': tensor_info_x}, - outputs={'scores': tensor_info_y}, - method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)) +# Define the inputs and outputs of the prediction API. +# The key values of the inputs and outputs dictionaries are used as the index keys for the input and output tensors of the model. + # The input and output definitions of the model must match the custom inference script. +prediction_signature = ( + tf.saved_model.signature_def_utils.build_signature_def( + inputs={'images': tensor_info_x}, + outputs={'scores': tensor_info_y}, + method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)) -legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op') -builder.add_meta_graph_and_variables( - # Set tag to serve/tf.saved_model.tag_constants.SERVING. - sess, [tf.saved_model.tag_constants.SERVING], - signature_def_map={ - 'predict_images': - prediction_signature, - }, - legacy_init_op=legacy_init_op) +legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op') +builder.add_meta_graph_and_variables( + # Set tag to serve/tf.saved_model.tag_constants.SERVING. + sess, [tf.saved_model.tag_constants.SERVING], + signature_def_map={ + 'predict_images': + prediction_signature, + }, + legacy_init_op=legacy_init_op) -builder.save() +builder.save() -print('Done exporting!') -Inference Code (Keras and tf APIs)
+ # The output corresponding to model saving in the preceding training part is {"scores":<array>}. + # Postprocess the HTTPS output. + def _postprocess(self, data): + infer_output = {"mnist_result": []} + # Iterate the model output. + for output_name, results in data.items(): + for result in results: + infer_output["mnist_result"].append(result.index(max(result))) + return infer_output
from PIL import Image +import numpy as np +from model_service.tfserving_model_service import TfServingBaseService -class mnist_service(TfServingBaseService): +class mnist_service(TfServingBaseService): - # Match the model input with the user's HTTPS API input during preprocessing. - # The model input corresponding to the preceding training part is {"images":<array>}. - def _preprocess(self, data): + # Match the model input with the user's HTTPS API input during preprocessing. + # The model input corresponding to the preceding training part is {"images":<array>}. + def _preprocess(self, data): - preprocessed_data = {} - images = [] - # Iterate the input data. - for k, v in data.items(): - for file_name, file_content in v.items(): - image1 = Image.open(file_content) - image1 = np.array(image1, dtype=np.float32) - image1.resize((1,784)) - images.append(image1) - # Return the numpy array. - images = np.array(images,dtype=np.float32) - # Perform batch processing on multiple input samples and ensure that the shape is the same as that inputted during training. - images.resize((len(data), 784)) - preprocessed_data['images'] = images - return preprocessed_data + preprocessed_data = {} + images = [] + # Iterate the input data. + for k, v in data.items(): + for file_name, file_content in v.items(): + image1 = Image.open(file_content) + image1 = np.array(image1, dtype=np.float32) + image1.resize((1,784)) + images.append(image1) + # Return the numpy array. + images = np.array(images,dtype=np.float32) + # Perform batch processing on multiple input samples and ensure that the shape is the same as that inputted during training. + images.resize((len(data), 784)) + preprocessed_data['images'] = images + return preprocessed_data - # Processing logic of the inference for invoking the parent class. + # Processing logic of the inference for invoking the parent class. - # The output corresponding to model saving in the preceding training part is {"scores":<array>}. - # Postprocess the HTTPS output. - def _postprocess(self, data): - infer_output = {"mnist_result": []} - # Iterate the model output. - for output_name, results in data.items(): - for result in results: - infer_output["mnist_result"].append(result.index(max(result))) - return infer_output -diff --git a/docs/modelarts/umn/modelarts_23_0175.html b/docs/modelarts/umn/modelarts_23_0175.html index 7d0d52a8..9f10159c 100644 --- a/docs/modelarts/umn/modelarts_23_0175.html +++ b/docs/modelarts/umn/modelarts_23_0175.html @@ -1,375 +1,190 @@PyTorch
--Training a Model
+for epoch in range(1, 2 + 1): + train(model, device, train_loader, optimizer, epoch) + test(model, device, test_loader)
from __future__ import print_function +import argparse +import torch +import torch.nn as nn +import torch.nn.functional as F +import torch.optim as optim +from torchvision import datasets, transforms -# Define a network structure. -class Net(nn.Module): - def __init__(self): - super(Net, self).__init__() -# The second dimension of the input must be 784. - self.hidden1 = nn.Linear(784, 5120, bias=False) - self.output = nn.Linear(5120, 10, bias=False) +# Define a network structure. +class Net(nn.Module): + def __init__(self): + super(Net, self).__init__() +# The second dimension of the input must be 784. + self.hidden1 = nn.Linear(784, 5120, bias=False) + self.output = nn.Linear(5120, 10, bias=False) - def forward(self, x): - x = x.view(x.size()[0], -1) - x = F.relu((self.hidden1(x))) - x = F.dropout(x, 0.2) - x = self.output(x) - return F.log_softmax(x) + def forward(self, x): + x = x.view(x.size()[0], -1) + x = F.relu((self.hidden1(x))) + x = F.dropout(x, 0.2) + x = self.output(x) + return F.log_softmax(x) -def train(model, device, train_loader, optimizer, epoch): - model.train() - for batch_idx, (data, target) in enumerate(train_loader): - data, target = data.to(device), target.to(device) - optimizer.zero_grad() - output = model(data) - loss = F.cross_entropy(output, target) - loss.backward() - optimizer.step() - if batch_idx % 10 == 0: - print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( - epoch, batch_idx * len(data), len(train_loader.dataset), - 100. * batch_idx / len(train_loader), loss.item())) +def train(model, device, train_loader, optimizer, epoch): + model.train() + for batch_idx, (data, target) in enumerate(train_loader): + data, target = data.to(device), target.to(device) + optimizer.zero_grad() + output = model(data) + loss = F.cross_entropy(output, target) + loss.backward() + optimizer.step() + if batch_idx % 10 == 0: + print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( + epoch, batch_idx * len(data), len(train_loader.dataset), + 100. * batch_idx / len(train_loader), loss.item())) -def test( model, device, test_loader): - model.eval() - test_loss = 0 - correct = 0 - with torch.no_grad(): - for data, target in test_loader: - data, target = data.to(device), target.to(device) - output = model(data) - test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss - pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability - correct += pred.eq(target.view_as(pred)).sum().item() +def test( model, device, test_loader): + model.eval() + test_loss = 0 + correct = 0 + with torch.no_grad(): + for data, target in test_loader: + data, target = data.to(device), target.to(device) + output = model(data) + test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss + pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability + correct += pred.eq(target.view_as(pred)).sum().item() - test_loss /= len(test_loader.dataset) + test_loss /= len(test_loader.dataset) - print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format( - test_loss, correct, len(test_loader.dataset), - 100. * correct / len(test_loader.dataset))) + print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format( + test_loss, correct, len(test_loader.dataset), + 100. 
* correct / len(test_loader.dataset))) -device = torch.device("cpu") +device = torch.device("cpu") -batch_size=64 +batch_size=64 -kwargs={} +kwargs={} -train_loader = torch.utils.data.DataLoader( - datasets.MNIST('.', train=True, download=True, - transform=transforms.Compose([ - transforms.ToTensor() - ])), - batch_size=batch_size, shuffle=True, **kwargs) -test_loader = torch.utils.data.DataLoader( - datasets.MNIST('.', train=False, transform=transforms.Compose([ - transforms.ToTensor() - ])), - batch_size=1000, shuffle=True, **kwargs) +train_loader = torch.utils.data.DataLoader( + datasets.MNIST('.', train=True, download=True, + transform=transforms.Compose([ + transforms.ToTensor() + ])), + batch_size=batch_size, shuffle=True, **kwargs) +test_loader = torch.utils.data.DataLoader( + datasets.MNIST('.', train=False, transform=transforms.Compose([ + transforms.ToTensor() + ])), + batch_size=1000, shuffle=True, **kwargs) -model = Net().to(device) -optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5) -optimizer = optim.Adam(model.parameters()) +model = Net().to(device) +optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5) +optimizer = optim.Adam(model.parameters()) -for epoch in range(1, 2 + 1): - train(model, device, train_loader, optimizer, epoch) - test(model, device, test_loader) -Saving a Model
+
1 -2 -# The model must be saved using state_dict and can be deployed remotely. -torch.save(model.state_dict(), "pytorch_mnist/mnist_mlp.pt") --Saving a Model
# The model must be saved using state_dict and can be deployed remotely. +torch.save(model.state_dict(), "pytorch_mnist/mnist_mlp.pt")Inference Code
+ return model
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 - 10 - 11 - 12 - 13 - 14 - 15 - 16 - 17 - 18 - 19 - 20 - 21 - 22 - 23 - 24 - 25 - 26 - 27 - 28 - 29 - 30 - 31 - 32 - 33 - 34 - 35 - 36 - 37 - 38 - 39 - 40 - 41 - 42 - 43 - 44 - 45 - 46 - 47 - 48 - 49 - 50 - 51 - 52 - 53 - 54 - 55 - 56 - 57 - 58 - 59 - 60 - 61 - 62 - 63 - 64 - 65 - 66 - 67 - 68 - 69 - 70 - 71 - 72 - 73 - 74 - 75 - 76 - 77 - 78 - 79 - 80 - 81 - 82 - 83 - 84 - 85 - 86 - 87 - 88 - 89 - 90 - 91 - 92 - 93 - 94 - 95 - 96 - 97 - 98 - 99 -100 from PIL import Image -import log -from model_service.pytorch_model_service import PTServingBaseService -import torch.nn.functional as F +-Inference Code
from PIL import Image +import log +from model_service.pytorch_model_service import PTServingBaseService +import torch.nn.functional as F -import torch.nn as nn -import torch -import json +import torch.nn as nn +import torch +import json -import numpy as np +import numpy as np -logger = log.getLogger(__name__) +logger = log.getLogger(__name__) -import torchvision.transforms as transforms +import torchvision.transforms as transforms -# Define model preprocessing. -infer_transformation = transforms.Compose([ - transforms.Resize((28,28)), - # Transform to a PyTorch tensor. - transforms.ToTensor() -]) +# Define model preprocessing. +infer_transformation = transforms.Compose([ + transforms.Resize((28,28)), + # Transform to a PyTorch tensor. + transforms.ToTensor() +]) -import os +import os -class PTVisionService(PTServingBaseService): +class PTVisionService(PTServingBaseService): - def __init__(self, model_name, model_path): - # Call the constructor of the parent class. - super(PTVisionService, self).__init__(model_name, model_path) - # Call the customized function to load the model. - self.model = Mnist(model_path) - # Load tags. - self.label = [0,1,2,3,4,5,6,7,8,9] - # Labels can also be loaded by label file. - # Store the label.json file in the model directory. The following information is read: - dir_path = os.path.dirname(os.path.realpath(self.model_path)) - with open(os.path.join(dir_path, 'label.json')) as f: - self.label = json.load(f) + def __init__(self, model_name, model_path): + # Call the constructor of the parent class. + super(PTVisionService, self).__init__(model_name, model_path) + # Call the customized function to load the model. + self.model = Mnist(model_path) + # Load tags. + self.label = [0,1,2,3,4,5,6,7,8,9] + # Labels can also be loaded by label file. + # Store the label.json file in the model directory. 
The following information is read: + dir_path = os.path.dirname(os.path.realpath(self.model_path)) + with open(os.path.join(dir_path, 'label.json')) as f: + self.label = json.load(f) - def _preprocess(self, data): + def _preprocess(self, data): - preprocessed_data = {} - for k, v in data.items(): - input_batch = [] - for file_name, file_content in v.items(): - with Image.open(file_content) as image1: - # Gray processing - image1 = image1.convert("L") - if torch.cuda.is_available(): - input_batch.append(infer_transformation(image1).cuda()) - else: - input_batch.append(infer_transformation(image1)) - input_batch_var = torch.autograd.Variable(torch.stack(input_batch, dim=0), volatile=True) - print(input_batch_var.shape) - preprocessed_data[k] = input_batch_var + preprocessed_data = {} + for k, v in data.items(): + input_batch = [] + for file_name, file_content in v.items(): + with Image.open(file_content) as image1: + # Gray processing + image1 = image1.convert("L") + if torch.cuda.is_available(): + input_batch.append(infer_transformation(image1).cuda()) + else: + input_batch.append(infer_transformation(image1)) + input_batch_var = torch.autograd.Variable(torch.stack(input_batch, dim=0), volatile=True) + print(input_batch_var.shape) + preprocessed_data[k] = input_batch_var - return preprocessed_data + return preprocessed_data - def _postprocess(self, data): - results = [] - for k, v in data.items(): - result = torch.argmax(v[0]) - result = {k: self.label[result]} - results.append(result) - return results + def _postprocess(self, data): + results = [] + for k, v in data.items(): + result = torch.argmax(v[0]) + result = {k: self.label[result]} + results.append(result) + return results -class Net(nn.Module): - def __init__(self): - super(Net, self).__init__() - self.hidden1 = nn.Linear(784, 5120, bias=False) - self.output = nn.Linear(5120, 10, bias=False) +class Net(nn.Module): + def __init__(self): + super(Net, self).__init__() + self.hidden1 = nn.Linear(784, 5120, bias=False) + self.output = nn.Linear(5120, 10, bias=False) - def forward(self, x): - x = x.view(x.size()[0], -1) - x = F.relu((self.hidden1(x))) - x = F.dropout(x, 0.2) - x = self.output(x) - return F.log_softmax(x) + def forward(self, x): + x = x.view(x.size()[0], -1) + x = F.relu((self.hidden1(x))) + x = F.dropout(x, 0.2) + x = self.output(x) + return F.log_softmax(x) -def Mnist(model_path, **kwargs): - # Generate a network. - model = Net() - # Load the model. - if torch.cuda.is_available(): - device = torch.device('cuda') - model.load_state_dict(torch.load(model_path, map_location="cuda:0")) - else: - device = torch.device('cpu') - model.load_state_dict(torch.load(model_path, map_location=device)) - # CPU or GPU mapping - model.to(device) - # Declare an inference mode. - model.eval() +def Mnist(model_path, **kwargs): + # Generate a network. + model = Net() + # Load the model. + if torch.cuda.is_available(): + device = torch.device('cuda') + model.load_state_dict(torch.load(model_path, map_location="cuda:0")) + else: + device = torch.device('cpu') + model.load_state_dict(torch.load(model_path, map_location=device)) + # CPU or GPU mapping + model.to(device) + # Declare an inference mode. + model.eval() - return model -diff --git a/docs/modelarts/umn/modelarts_23_0176.html b/docs/modelarts/umn/modelarts_23_0176.html index ed2bf77a..b695babd 100644 --- a/docs/modelarts/umn/modelarts_23_0176.html +++ b/docs/modelarts/umn/modelarts_23_0176.html @@ -2,757 +2,382 @@Caffe
-Training and Saving a Model
lenet_train_test.prototxt file
-+
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 - 10 - 11 - 12 - 13 - 14 - 15 - 16 - 17 - 18 - 19 - 20 - 21 - 22 - 23 - 24 - 25 - 26 - 27 - 28 - 29 - 30 - 31 - 32 - 33 - 34 - 35 - 36 - 37 - 38 - 39 - 40 - 41 - 42 - 43 - 44 - 45 - 46 - 47 - 48 - 49 - 50 - 51 - 52 - 53 - 54 - 55 - 56 - 57 - 58 - 59 - 60 - 61 - 62 - 63 - 64 - 65 - 66 - 67 - 68 - 69 - 70 - 71 - 72 - 73 - 74 - 75 - 76 - 77 - 78 - 79 - 80 - 81 - 82 - 83 - 84 - 85 - 86 - 87 - 88 - 89 - 90 - 91 - 92 - 93 - 94 - 95 - 96 - 97 - 98 - 99 -100 -101 -102 -103 -104 -105 -106 -107 -108 -109 -110 -111 -112 -113 -114 -115 -116 -117 -118 -119 -120 -121 -122 -123 -124 -125 -126 -127 -128 -129 -130 -131 -132 -133 -134 -135 -136 -137 -138 -139 -140 -141 -142 -143 -144 -145 -146 -147 -148 -149 -150 -151 -152 -153 -154 -155 -156 -157 -158 -159 -160 -161 -162 -163 -164 -165 -166 -167 -168 -name: "LeNet" -layer { - name: "mnist" - type: "Data" - top: "data" - top: "label" - include { - phase: TRAIN - } - transform_param { - scale: 0.00390625 - } - data_param { - source: "examples/mnist/mnist_train_lmdb" - batch_size: 64 - backend: LMDB - } -} -layer { - name: "mnist" - type: "Data" - top: "data" - top: "label" - include { - phase: TEST - } - transform_param { - scale: 0.00390625 - } - data_param { - source: "examples/mnist/mnist_test_lmdb" - batch_size: 100 - backend: LMDB - } -} -layer { - name: "conv1" - type: "Convolution" - bottom: "data" - top: "conv1" - param { - lr_mult: 1 - } - param { - lr_mult: 2 - } - convolution_param { - num_output: 20 - kernel_size: 5 - stride: 1 - weight_filler { - type: "xavier" - } - bias_filler { - type: "constant" - } - } -} -layer { - name: "pool1" - type: "Pooling" - bottom: "conv1" - top: "pool1" - pooling_param { - pool: MAX - kernel_size: 2 - stride: 2 - } -} -layer { - name: "conv2" - type: "Convolution" - bottom: "pool1" - top: "conv2" - param { - lr_mult: 1 - } - param { - lr_mult: 2 - } - convolution_param { - num_output: 50 - kernel_size: 5 - stride: 1 - weight_filler { - type: "xavier" - } - bias_filler { - type: "constant" - } - } -} -layer { - name: "pool2" - type: "Pooling" - bottom: "conv2" - top: "pool2" - pooling_param { - pool: MAX - kernel_size: 2 - stride: 2 - } -} -layer { - name: "ip1" - type: "InnerProduct" - bottom: "pool2" - top: "ip1" - param { - lr_mult: 1 - } - param { - lr_mult: 2 - } - inner_product_param { - num_output: 500 - weight_filler { - type: "xavier" - } - bias_filler { - type: "constant" - } - } -} -layer { - name: "relu1" - type: "ReLU" - bottom: "ip1" - top: "ip1" -} -layer { - name: "ip2" - type: "InnerProduct" - bottom: "ip1" - top: "ip2" - param { - lr_mult: 1 - } - param { - lr_mult: 2 - } - inner_product_param { - num_output: 10 - weight_filler { - type: "xavier" - } - bias_filler { - type: "constant" - } - } -} -layer { - name: "accuracy" - type: "Accuracy" - bottom: "ip2" - bottom: "label" - top: "accuracy" - include { - phase: TEST - } -} -layer { - name: "loss" - type: "SoftmaxWithLoss" - bottom: "ip2" - bottom: "label" - top: "loss" -} -name: "LeNet" +layer { + name: "mnist" + type: "Data" + top: "data" + top: "label" + include { + phase: TRAIN + } + transform_param { + scale: 0.00390625 + } + data_param { + source: "examples/mnist/mnist_train_lmdb" + batch_size: 64 + backend: LMDB + } +} +layer { + name: "mnist" + type: "Data" + top: "data" + top: "label" + include { + phase: TEST + } + transform_param { + scale: 0.00390625 + } + data_param { + source: "examples/mnist/mnist_test_lmdb" + batch_size: 100 + backend: LMDB + } +} +layer { + name: "conv1" + type: "Convolution" + 
bottom: "data" + top: "conv1" + param { + lr_mult: 1 + } + param { + lr_mult: 2 + } + convolution_param { + num_output: 20 + kernel_size: 5 + stride: 1 + weight_filler { + type: "xavier" + } + bias_filler { + type: "constant" + } + } +} +layer { + name: "pool1" + type: "Pooling" + bottom: "conv1" + top: "pool1" + pooling_param { + pool: MAX + kernel_size: 2 + stride: 2 + } +} +layer { + name: "conv2" + type: "Convolution" + bottom: "pool1" + top: "conv2" + param { + lr_mult: 1 + } + param { + lr_mult: 2 + } + convolution_param { + num_output: 50 + kernel_size: 5 + stride: 1 + weight_filler { + type: "xavier" + } + bias_filler { + type: "constant" + } + } +} +layer { + name: "pool2" + type: "Pooling" + bottom: "conv2" + top: "pool2" + pooling_param { + pool: MAX + kernel_size: 2 + stride: 2 + } +} +layer { + name: "ip1" + type: "InnerProduct" + bottom: "pool2" + top: "ip1" + param { + lr_mult: 1 + } + param { + lr_mult: 2 + } + inner_product_param { + num_output: 500 + weight_filler { + type: "xavier" + } + bias_filler { + type: "constant" + } + } +} +layer { + name: "relu1" + type: "ReLU" + bottom: "ip1" + top: "ip1" +} +layer { + name: "ip2" + type: "InnerProduct" + bottom: "ip1" + top: "ip2" + param { + lr_mult: 1 + } + param { + lr_mult: 2 + } + inner_product_param { + num_output: 10 + weight_filler { + type: "xavier" + } + bias_filler { + type: "constant" + } + } +} +layer { + name: "accuracy" + type: "Accuracy" + bottom: "ip2" + bottom: "label" + top: "accuracy" + include { + phase: TEST + } +} +layer { + name: "loss" + type: "SoftmaxWithLoss" + bottom: "ip2" + bottom: "label" + top: "loss" +}lenet_solver.prototxt file
-+
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19 -20 -21 -22 -23 -24 -25 -# The train/test net protocol buffer definition -net: "examples/mnist/lenet_train_test.prototxt" -# test_iter specifies how many forward passes the test should carry out. -# In the case of MNIST, we have test batch size 100 and 100 test iterations, -# covering the full 10,000 testing images. -test_iter: 100 -# Carry out testing every 500 training iterations. -test_interval: 500 -# The base learning rate, momentum and the weight decay of the network. -base_lr: 0.01 -momentum: 0.9 -weight_decay: 0.0005 -# The learning rate policy -lr_policy: "inv" -gamma: 0.0001 -power: 0.75 -# Display every 100 iterations -display: 100 -# The maximum number of iterations -max_iter: 1000 -# snapshot intermediate results -snapshot: 5000 -snapshot_prefix: "examples/mnist/lenet" -# solver mode: CPU or GPU -solver_mode: CPU -# The train/test net protocol buffer definition +net: "examples/mnist/lenet_train_test.prototxt" +# test_iter specifies how many forward passes the test should carry out. +# In the case of MNIST, we have test batch size 100 and 100 test iterations, +# covering the full 10,000 testing images. +test_iter: 100 +# Carry out testing every 500 training iterations. +test_interval: 500 +# The base learning rate, momentum and the weight decay of the network. +base_lr: 0.01 +momentum: 0.9 +weight_decay: 0.0005 +# The learning rate policy +lr_policy: "inv" +gamma: 0.0001 +power: 0.75 +# Display every 100 iterations +display: 100 +# The maximum number of iterations +max_iter: 1000 +# snapshot intermediate results +snapshot: 5000 +snapshot_prefix: "examples/mnist/lenet" +# solver mode: CPU or GPU +solver_mode: CPUTrain the model.
-./build/tools/caffe train --solver=examples/mnist/lenet_solver.prototxt+./build/tools/caffe train --solver=examples/mnist/lenet_solver.prototxtThe caffemodel file is generated after model training. Rewrite the lenet_train_test.prototxt file to the lenet_deploy.prototxt file used for deployment by modifying input and output layers.
-+
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 - 10 - 11 - 12 - 13 - 14 - 15 - 16 - 17 - 18 - 19 - 20 - 21 - 22 - 23 - 24 - 25 - 26 - 27 - 28 - 29 - 30 - 31 - 32 - 33 - 34 - 35 - 36 - 37 - 38 - 39 - 40 - 41 - 42 - 43 - 44 - 45 - 46 - 47 - 48 - 49 - 50 - 51 - 52 - 53 - 54 - 55 - 56 - 57 - 58 - 59 - 60 - 61 - 62 - 63 - 64 - 65 - 66 - 67 - 68 - 69 - 70 - 71 - 72 - 73 - 74 - 75 - 76 - 77 - 78 - 79 - 80 - 81 - 82 - 83 - 84 - 85 - 86 - 87 - 88 - 89 - 90 - 91 - 92 - 93 - 94 - 95 - 96 - 97 - 98 - 99 -100 -101 -102 -103 -104 -105 -106 -107 -108 -109 -110 -111 -112 -113 -114 -115 -116 -117 -118 -119 -120 -121 -122 -123 -124 -125 -126 -127 -128 -129 -name: "LeNet" -layer { - name: "data" - type: "Input" - top: "data" - input_param { shape: { dim: 1 dim: 1 dim: 28 dim: 28 } } -} -layer { - name: "conv1" - type: "Convolution" - bottom: "data" - top: "conv1" - param { - lr_mult: 1 - } - param { - lr_mult: 2 - } - convolution_param { - num_output: 20 - kernel_size: 5 - stride: 1 - weight_filler { - type: "xavier" - } - bias_filler { - type: "constant" - } - } -} -layer { - name: "pool1" - type: "Pooling" - bottom: "conv1" - top: "pool1" - pooling_param { - pool: MAX - kernel_size: 2 - stride: 2 - } -} -layer { - name: "conv2" - type: "Convolution" - bottom: "pool1" - top: "conv2" - param { - lr_mult: 1 - } - param { - lr_mult: 2 - } - convolution_param { - num_output: 50 - kernel_size: 5 - stride: 1 - weight_filler { - type: "xavier" - } - bias_filler { - type: "constant" - } - } -} -layer { - name: "pool2" - type: "Pooling" - bottom: "conv2" - top: "pool2" - pooling_param { - pool: MAX - kernel_size: 2 - stride: 2 - } -} -layer { - name: "ip1" - type: "InnerProduct" - bottom: "pool2" - top: "ip1" - param { - lr_mult: 1 - } - param { - lr_mult: 2 - } - inner_product_param { - num_output: 500 - weight_filler { - type: "xavier" - } - bias_filler { - type: "constant" - } - } -} -layer { - name: "relu1" - type: "ReLU" - bottom: "ip1" - top: "ip1" -} -layer { - name: "ip2" - type: "InnerProduct" - bottom: "ip1" - top: "ip2" - param { - lr_mult: 1 - } - param { - lr_mult: 2 - } - inner_product_param { - num_output: 10 - weight_filler { - type: "xavier" - } - bias_filler { - type: "constant" - } - } -} -layer { - name: "prob" - type: "Softmax" - bottom: "ip2" - top: "prob" -} -name: "LeNet" +layer { + name: "data" + type: "Input" + top: "data" + input_param { shape: { dim: 1 dim: 1 dim: 28 dim: 28 } } +} +layer { + name: "conv1" + type: "Convolution" + bottom: "data" + top: "conv1" + param { + lr_mult: 1 + } + param { + lr_mult: 2 + } + convolution_param { + num_output: 20 + kernel_size: 5 + stride: 1 + weight_filler { + type: "xavier" + } + bias_filler { + type: "constant" + } + } +} +layer { + name: "pool1" + type: "Pooling" + bottom: "conv1" + top: "pool1" + pooling_param { + pool: MAX + kernel_size: 2 + stride: 2 + } +} +layer { + name: "conv2" + type: "Convolution" + bottom: "pool1" + top: "conv2" + param { + lr_mult: 1 + } + param { + lr_mult: 2 + } + convolution_param { + num_output: 50 + kernel_size: 5 + stride: 1 + weight_filler { + type: "xavier" + } + bias_filler { + type: "constant" + } + } +} +layer { + name: "pool2" + type: "Pooling" + bottom: "conv2" + top: "pool2" + pooling_param { + pool: MAX + kernel_size: 2 + stride: 2 + } +} +layer { + name: "ip1" + type: "InnerProduct" + bottom: "pool2" + top: "ip1" + param { + lr_mult: 1 + } + param { + lr_mult: 2 + } + inner_product_param { + num_output: 500 + weight_filler { + type: "xavier" + } + bias_filler { + type: "constant" + } + } +} +layer { + name: "relu1" + 
type: "ReLU" + bottom: "ip1" + top: "ip1" +} +layer { + name: "ip2" + type: "InnerProduct" + bottom: "ip1" + top: "ip2" + param { + lr_mult: 1 + } + param { + lr_mult: 2 + } + inner_product_param { + num_output: 10 + weight_filler { + type: "xavier" + } + bias_filler { + type: "constant" + } + } +} +layer { + name: "prob" + type: "Softmax" + bottom: "ip2" + top: "prob" +}Inference Code
+ return predicted
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19 -20 -21 -22 -23 -24 -25 -26 -27 -28 -29 -30 -31 -32 -33 -34 -35 -36 -37 -38 -39 -40 -41 -42 -43 -44 -45 -46 -47 -48 -49 from model_service.caffe_model_service import CaffeBaseService +-Inference Code
from model_service.caffe_model_service import CaffeBaseService -import numpy as np +import numpy as np -import os, json +import os, json -import caffe +import caffe -from PIL import Image +from PIL import Image -class LenetService(CaffeBaseService): +class LenetService(CaffeBaseService): - def __init__(self, model_name, model_path): - # Call the inference method of the parent class. - super(LenetService, self).__init__(model_name, model_path) + def __init__(self, model_name, model_path): + # Call the inference method of the parent class. + super(LenetService, self).__init__(model_name, model_path) - # Configure preprocessing information. - transformer = caffe.io.Transformer({'data': self.net.blobs['data'].data.shape}) - # Transform to NCHW. - transformer.set_transpose('data', (2, 0, 1)) - # Perform normalization. - transformer.set_raw_scale('data', 255.0) + # Configure preprocessing information. + transformer = caffe.io.Transformer({'data': self.net.blobs['data'].data.shape}) + # Transform to NCHW. + transformer.set_transpose('data', (2, 0, 1)) + # Perform normalization. + transformer.set_raw_scale('data', 255.0) - # If the batch size is set to 1, inference is supported for only one image. - self.net.blobs['data'].reshape(1, 1, 28, 28) - self.transformer = transformer + # If the batch size is set to 1, inference is supported for only one image. + self.net.blobs['data'].reshape(1, 1, 28, 28) + self.transformer = transformer - # Define the class labels. - self.label = [0,1,2,3,4,5,6,7,8,9] + # Define the class labels. + self.label = [0,1,2,3,4,5,6,7,8,9] - def _preprocess(self, data): + def _preprocess(self, data): - for k, v in data.items(): - for file_name, file_content in v.items(): - im = caffe.io.load_image(file_content, color=False) - # Pre-process the images. - self.net.blobs['data'].data[...] = self.transformer.preprocess('data', im) + for k, v in data.items(): + for file_name, file_content in v.items(): + im = caffe.io.load_image(file_content, color=False) + # Pre-process the images. + self.net.blobs['data'].data[...] = self.transformer.preprocess('data', im) - return + return - def _postprocess(self, data): + def _postprocess(self, data): - data = data['prob'][0, :] - predicted = np.argmax(data) - predicted = {"predicted" : str(predicted) } + data = data['prob'][0, :] + predicted = np.argmax(data) + predicted = {"predicted" : str(predicted) } - return predicted -diff --git a/docs/modelarts/umn/modelarts_23_0177.html b/docs/modelarts/umn/modelarts_23_0177.html index 5d50c501..ba260f22 100644 --- a/docs/modelarts/umn/modelarts_23_0177.html +++ b/docs/modelarts/umn/modelarts_23_0177.html @@ -1,68 +1,39 @@XGBoost
--Training and Saving a Model
-
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19 -20 -21 -22 -23 -24 -25 -26 -27 -28 -29 import pandas as pd -import xgboost as xgb -from sklearn.model_selection import train_test_split +-Training and Saving a Model
import pandas as pd +import xgboost as xgb +from sklearn.model_selection import train_test_split -# Prepare training data and setting parameters -iris = pd.read_csv('/data/iris.csv') -X = iris.drop(['virginica'],axis=1) -y = iris[['virginica']] -X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1234565) -params = { - 'booster': 'gbtree', - 'objective': 'multi:softmax', - 'num_class': 3, - 'gamma': 0.1, - 'max_depth': 6, - 'lambda': 2, - 'subsample': 0.7, - 'colsample_bytree': 0.7, - 'min_child_weight': 3, - 'silent': 1, - 'eta': 0.1, - 'seed': 1000, - 'nthread': 4, -} -plst = params.items() -dtrain = xgb.DMatrix(X_train, y_train) -num_rounds = 500 -model = xgb.train(plst, dtrain, num_rounds) -model.save_model('/tmp/xgboost.m') -After the model is saved, it must be uploaded to the OBS directory before being published. The config.json and customize_service.py files must be contained during publishing. For details about the definition method, see Model Package Specifications.
+# Prepare training data and setting parameters +iris = pd.read_csv('/home/ma-user/work/iris.csv') +X = iris.drop(['variety'],axis=1) +y = iris[['variety']] +X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1234565) +params = { + 'booster': 'gbtree', + 'objective': 'multi:softmax', + 'num_class': 3, + 'gamma': 0.1, + 'max_depth': 6, + 'lambda': 2, + 'subsample': 0.7, + 'colsample_bytree': 0.7, + 'min_child_weight': 3, + 'silent': 1, + 'eta': 0.1, + 'seed': 1000, + 'nthread': 4, +} +plst = params.items() +dtrain = xgb.DMatrix(X_train, y_train) +num_rounds = 500 +model = xgb.train(plst, dtrain, num_rounds) +model.save_model('/tmp/xgboost.m') +Before training, download the iris.csv dataset, decompress it, and upload it to the /home/ma-user/work/ directory of the notebook instance. Download the iris.csv dataset from https://gist.github.com/netj/8836201.
+After the model is saved, it must be uploaded to the OBS directory before being published. The config.json and customize_service.py files must be contained during publishing. For details about the definition method, see Model Package Specifications.
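This upload can be scripted from the notebook. The following is only a sketch that assumes the MoXing file API is available in the environment; the bucket name, OBS folder, and local file paths are placeholders, not values defined in this guide.
import moxing as mox
# Local artifacts produced earlier: the saved model plus the required config.json and customize_service.py (example paths).
files = ['/tmp/xgboost.m', '/tmp/config.json', '/tmp/customize_service.py']
# Target OBS model folder; replace the bucket and path with your own.
obs_model_dir = 'obs://bucket-name/xgboost-demo/model/'
for name in files:
    mox.file.copy(name, obs_model_dir + name.split('/')[-1])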
Inference Code
# coding:utf-8 +Inference Code
# coding:utf-8 import collections import json import xgboost as xgb diff --git a/docs/modelarts/umn/modelarts_23_0178.html b/docs/modelarts/umn/modelarts_23_0178.html index 7dc8c0d8..a7520977 100644 --- a/docs/modelarts/umn/modelarts_23_0178.html +++ b/docs/modelarts/umn/modelarts_23_0178.html @@ -1,146 +1,76 @@PySpark
--Training and Saving a Model
+# Save the model to a local directory. +# Save model to local path. +model.save("/tmp/spark_model")
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19 -20 -21 -22 -23 from pyspark.ml import Pipeline, PipelineModel -from pyspark.ml.linalg import Vectors -from pyspark.ml.classification import LogisticRegression +-Training and Saving a Model
from pyspark.ml import Pipeline, PipelineModel +from pyspark.ml.linalg import Vectors +from pyspark.ml.classification import LogisticRegression -# Prepare training data using tuples. -# Prepare training data from a list of (label, features) tuples. -training = spark.createDataFrame([ - (1.0, Vectors.dense([0.0, 1.1, 0.1])), - (0.0, Vectors.dense([2.0, 1.0, -1.0])), - (0.0, Vectors.dense([2.0, 1.3, 1.0])), - (1.0, Vectors.dense([0.0, 1.2, -0.5]))], ["label", "features"]) +# Prepare training data using tuples. +# Prepare training data from a list of (label, features) tuples. +training = spark.createDataFrame([ + (1.0, Vectors.dense([0.0, 1.1, 0.1])), + (0.0, Vectors.dense([2.0, 1.0, -1.0])), + (0.0, Vectors.dense([2.0, 1.3, 1.0])), + (1.0, Vectors.dense([0.0, 1.2, -0.5]))], ["label", "features"]) -# Create a training instance. The logistic regression algorithm is used for training. -# Create a LogisticRegression instance. This instance is an Estimator. -lr = LogisticRegression(maxIter=10, regParam=0.01) +# Create a training instance. The logistic regression algorithm is used for training. +# Create a LogisticRegression instance. This instance is an Estimator. +lr = LogisticRegression(maxIter=10, regParam=0.01) -# Train the logistic regression model. -# Learn a LogisticRegression model. This uses the parameters stored in lr. -model = lr.fit(training) +# Train the logistic regression model. +# Learn a LogisticRegression model. This uses the parameters stored in lr. +model = lr.fit(training) -# Save the model to a local directory. -# Save model to local path. -model.save("/tmp/spark_model") -After the model is saved, it must be uploaded to the OBS directory before being published. The config.json configuration and customize_service.py must be contained during publishing. For details about the definition method, see Model Package Specifications.
Inference Code
+ # Post-process data. + def _postprocess(self, pre_data): + logger.info("Get new data to respond...") + predict_str = pre_data.toPandas().to_json(orient='records') + predict_result = json.loads(predict_str) + return predict_result
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19 -20 -21 -22 -23 -24 -25 -26 -27 -28 -29 -30 -31 -32 -33 -34 -35 -36 -37 -38 -39 -40 -41 -42 -43 -44 -45 # coding:utf-8 -import collections -import json -import traceback +-Inference Code
# coding:utf-8 +import collections +import json +import traceback -import model_service.log as log -from model_service.spark_model_service import SparkServingBaseService -from pyspark.ml.classification import LogisticRegression +import model_service.log as log +from model_service.spark_model_service import SparkServingBaseService +from pyspark.ml.classification import LogisticRegression -logger = log.getLogger(__name__) +logger = log.getLogger(__name__) -class user_Service(SparkServingBaseService): - # Pre-process data. - def _preprocess(self, data): - logger.info("Begin to handle data from user data...") - # Read data. - req_json = json.loads(data, object_pairs_hook=collections.OrderedDict) - try: - # Convert data to the spark dataframe format. - predict_spdf = self.spark.createDataFrame(pd.DataFrame(req_json["data"]["req_data"])) - except Exception as e: - logger.error("check your request data does meet the requirements ?") - logger.error(traceback.format_exc()) - raise Exception("check your request data does meet the requirements ?") - return predict_spdf +class user_Service(SparkServingBaseService): + # Pre-process data. + def _preprocess(self, data): + logger.info("Begin to handle data from user data...") + # Read data. + req_json = json.loads(data, object_pairs_hook=collections.OrderedDict) + try: + # Convert data to the spark dataframe format. + predict_spdf = self.spark.createDataFrame(pd.DataFrame(req_json["data"]["req_data"])) + except Exception as e: + logger.error("check your request data does meet the requirements ?") + logger.error(traceback.format_exc()) + raise Exception("check your request data does meet the requirements ?") + return predict_spdf - # Perform model inference. - def _inference(self, data): - try: - # Load a model file. - predict_model = LogisticRegression.load(self.model_path) - # Perform data inference. - prediction_result = predict_model.transform(data) - except Exception as e: - logger.error(traceback.format_exc()) - raise Exception("Unable to load model and do dataframe transformation.") - return prediction_result + # Perform model inference. + def _inference(self, data): + try: + # Load a model file. + predict_model = LogisticRegression.load(self.model_path) + # Perform data inference. + prediction_result = predict_model.transform(data) + except Exception as e: + logger.error(traceback.format_exc()) + raise Exception("Unable to load model and do dataframe transformation.") + return prediction_result - # Post-process data. - def _postprocess(self, pre_data): - logger.info("Get new data to respond...") - predict_str = pre_data.toPandas().to_json(orient='records') - predict_result = json.loads(predict_str) - return predict_result -diff --git a/docs/modelarts/umn/modelarts_23_0179.html b/docs/modelarts/umn/modelarts_23_0179.html index 42fd5caa..620fcfb6 100644 --- a/docs/modelarts/umn/modelarts_23_0179.html +++ b/docs/modelarts/umn/modelarts_23_0179.html @@ -1,102 +1,55 @@Scikit Learn
-Training and Saving a Model
-
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 -10 -11 -12 -13 -14 -import json -import pandas as pd -from sklearn.datasets import load_iris -from sklearn.model_selection import train_test_split -from sklearn.linear_model import LogisticRegression -from sklearn.externals import joblib -iris = pd.read_csv('/data/iris.csv') -X = iris.drop(['virginica'],axis=1) -y = iris[['virginica']] -# Create a LogisticRegression instance and train model -logisticRegression = LogisticRegression(C=1000.0, random_state=0) -logisticRegression.fit(X,y) -# Save model to local path -joblib.dump(logisticRegression, '/tmp/sklearn.m') -After the model is saved, it must be uploaded to the OBS directory before being published. The config.json and customize_service.py files must be contained during publishing. For details about the definition method, see Model Package Specifications.
+-Training and Saving a Model
import json +import pandas as pd +from sklearn.datasets import load_iris +from sklearn.model_selection import train_test_split +from sklearn.linear_model import LogisticRegression +from sklearn.externals import joblib +iris = pd.read_csv('/home/ma-user/work/iris.csv') +X = iris.drop(['variety'],axis=1) +y = iris[['variety']] +# Create a LogisticRegression instance and train model +logisticRegression = LogisticRegression(C=1000.0, random_state=0) +logisticRegression.fit(X,y) +# Save model to local path +joblib.dump(logisticRegression, '/tmp/sklearn.m')+Before training, download the iris.csv dataset, decompress it, and upload it to the /home/ma-user/work/ directory of the notebook instance. Download the iris.csv dataset from https://gist.github.com/netj/8836201.
+After the model is saved, it must be uploaded to the OBS directory before being published. The config.json and customize_service.py files must be contained during publishing. For details about the definition method, see Model Package Specifications.
Inference Code
+ # predict result process + def _postprocess(self,data): + resp_data = [] + for element in data: + resp_data.append({"predictresult": element}) + return resp_data
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19 -20 -21 -22 -23 -24 -25 -26 -27 -28 -29 -30 -31 -32 # coding:utf-8 -import collections -import json -from sklearn.externals import joblib -from model_service.python_model_service import XgSklServingBaseService +-Inference Code
# coding:utf-8 +import collections +import json +from sklearn.externals import joblib +from model_service.python_model_service import XgSklServingBaseService -class user_Service(XgSklServingBaseService): +class user_Service(XgSklServingBaseService): - # request data preprocess - def _preprocess(self, data): - list_data = [] - json_data = json.loads(data, object_pairs_hook=collections.OrderedDict) - for element in json_data["data"]["req_data"]: - array = [] - for each in element: - array.append(element[each]) - list_data.append(array) - return list_data + # request data preprocess + def _preprocess(self, data): + list_data = [] + json_data = json.loads(data, object_pairs_hook=collections.OrderedDict) + for element in json_data["data"]["req_data"]: + array = [] + for each in element: + array.append(element[each]) + list_data.append(array) + return list_data - # predict - def _inference(self, data): - sk_model = joblib.load(self.model_path) - pre_result = sk_model.predict(data) - pre_result = pre_result.tolist() - return pre_result + # predict + def _inference(self, data): + sk_model = joblib.load(self.model_path) + pre_result = sk_model.predict(data) + pre_result = pre_result.tolist() + return pre_result - # predict result process - def _postprocess(self,data): - resp_data = [] - for element in data: - resp_data.append({"predictresult": element}) - return resp_data -diff --git a/docs/modelarts/umn/modelarts_23_0181.html b/docs/modelarts/umn/modelarts_23_0181.html index abc7eafc..80823083 100644 --- a/docs/modelarts/umn/modelarts_23_0181.html +++ b/docs/modelarts/umn/modelarts_23_0181.html @@ -4,7 +4,7 @@Generally, a small data labeling task can be completed by an individual. However, team work is required to label a large dataset. ModelArts provides the team labeling function. A labeling team can be formed to manage labeling for the same dataset.
-![]()
The team labeling function supports only datasets for image classification, object detection, text classification, named entity recognition, text triplet, and speech paragraph labeling.
How to Enable Team Labeling
- When creating a dataset, enable Team Labeling and select a team or task manager.
Figure 1 Enabling during dataset creation+How to Enable Team Labeling
- When creating a dataset, enable Team Labeling and select a team or task manager.
Figure 1 Enabling during dataset creation- If team labeling is not enabled for a dataset that has been created, create a team labeling task to enable team labeling. For details about how to create a team labeling task, see Creating Team Labeling Tasks.
Figure 2 Creating a team labeling task in a dataset listFigure 3 Creating a team labeling taskFigure 4 Creating a team labeling task on the dataset details pagediff --git a/docs/modelarts/umn/modelarts_23_0182.html b/docs/modelarts/umn/modelarts_23_0182.html index 14773ffc..dd059163 100644 --- a/docs/modelarts/umn/modelarts_23_0182.html +++ b/docs/modelarts/umn/modelarts_23_0182.html @@ -4,11 +4,13 @@diff --git a/docs/modelarts/umn/modelarts_23_0187.html b/docs/modelarts/umn/modelarts_23_0187.html index 00cfa25c..0d37311b 100644 --- a/docs/modelarts/umn/modelarts_23_0187.html +++ b/docs/modelarts/umn/modelarts_23_0187.html @@ -71,23 +71,6 @@Team labeling is managed in a unit of teams. To enable team labeling for a dataset, a team must be specified. Multiple members can be added to a team.
-Background
- An account can have a maximum of 10 teams.
- An account must have at least one team to enable team labeling for datasets. If the account has no team, add a team by referring to Adding a Team.
Adding a Team
- In the left navigation pane of the ModelArts management console, choose Data Management > Labeling Teams. The Labeling Teams page is displayed.
- On the Labeling Teams page, click Add Team.
- In the displayed Add Team dialog box, enter a team name and description and click OK. The labeling team is added.
The new team is displayed on the Labeling Teams page. You can view team details in the right pane. There is no member in the new team. Add members to the new team by referring to Adding a Member.
+Adding a Team
- In the left navigation pane of the ModelArts management console, choose Data Management > Labeling Teams. The Labeling Teams page is displayed.
- On the Labeling Teams page, click Add Team.
- In the displayed Add Team dialog box, enter a team name and description and click OK. The labeling team is added.
Figure 1 Adding a team+The new team is displayed on the Labeling Teams page. You can view team details in the right pane. There is no member in the new team. Add members to the new team by referring to Adding a Member.
Deleting a Team
You can delete a team that is no longer used.
On the Labeling Teams page, select the target team and click Delete. In the dialog box that is displayed, click OK.
+Figure 2 Deleting a teamdiff --git a/docs/modelarts/umn/modelarts_23_0183.html b/docs/modelarts/umn/modelarts_23_0183.html index 32c3fcc2..3338cedc 100644 --- a/docs/modelarts/umn/modelarts_23_0183.html +++ b/docs/modelarts/umn/modelarts_23_0183.html @@ -17,6 +17,7 @@Deleting Members
- Deleting a single member
In the Team Details area, select the desired member, and click Delete in the Operation column. In the dialog box that is displayed, click OK.
- Batch Deletion
In the Team Details area, select members to be deleted and click Delete. In the dialog box that is displayed, click OK.
+Figure 3 Batch deletion1 minute
- gpu_mem_usage
-- GPU Memory Usage
-- GPU memory usage of ModelArts
-Unit: %
-- ≥ 0%
-- Measurement object:
-ModelArts models
-Dimension:
-model_id
-- 1 minute
-successfully_called_times
Number of Successful Calls
diff --git a/docs/modelarts/umn/modelarts_23_0207.html b/docs/modelarts/umn/modelarts_23_0207.html index 14f123bf..d38666bb 100644 --- a/docs/modelarts/umn/modelarts_23_0207.html +++ b/docs/modelarts/umn/modelarts_23_0207.html @@ -2,10 +2,87 @@Importing a Meta Model from OBS
In scenarios where frequently-used frameworks are used for model development and training, you can import the model to ModelArts for unified management.
-Prerequisites
+
- The model has been developed and trained, and the type and version of the AI engine it uses is supported by ModelArts. Common engines supported by ModelArts and their runtime ranges are described as follows:
- The imported model, inference code, and configuration file must comply with the requirements of ModelArts. For details, see Model Package Specifications, Specifications for Compiling the Model Configuration File, and Specifications for Compiling Model Inference Code.
- The model package that has completed training, inference code, and configuration file have been uploaded to the OBS directory.
- The OBS directory you use and ModelArts are in the same region.
Prerequisites
- The model has been developed and trained, and the type and version of the AI engine it uses is supported by ModelArts. Common engines supported by ModelArts and their runtime ranges are described as follows: +
-
Table 1 Supported AI engines and their runtime + + + Engine
++ Runtime
++ Precautions
++ + TensorFlow
++ python3.6
+python2.7
+tf1.13-python2.7-gpu
+tf1.13-python2.7-cpu
+tf1.13-python3.6-gpu
+tf1.13-python3.6-cpu
+tf1.13-python3.7-cpu
+tf1.13-python3.7-gpu
+tf2.1-python3.7
++ +
- TensorFlow 1.8.0 is used in python2.7 and python3.6.
- python3.6, python2.7, and tf2.1-python3.7 indicate that the model can run on both CPUs and GPUs. For other runtime values, if the suffix contains cpu or gpu, the model can run only on CPUs or GPUs.
- The default runtime is python2.7.
+ + MXNet
++ python3.7
+python3.6
++ +
- MXNet 1.2.1 is used in python3.6 and python3.7.
- python3.6 and python3.7 indicate that the model can run on both CPUs and GPUs.
- The default runtime is python3.6.
+ + Caffe
++ python3.6
+python3.7
+python3.6-gpu
+python3.7-gpu
+python3.6-cpu
+python3.7-cpu
++ +
- Caffe 1.0.0 is used in python3.6, python3.7, python3.6-gpu, python3.7-gpu, python3.6-cpu, and python3.7-cpu.
- python 3.6 and python3.7 can only be used to run models on CPUs. For other runtime values, if the suffix contains cpu or gpu, the model can run only on CPUs or GPUs. Use the runtime of python3.6-gpu, python3.7-gpu, python3.6-cpu, or python3.7-cpu.
- The default runtime is python3.6.
+ + Spark_MLlib
++ python3.6
++ +
- Spark_MLlib 2.3.2 is used in python3.6.
- python 3.6 can only be used to run models on CPUs.
+ + Scikit_Learn
++ python3.6
++ +
- Scikit_Learn 0.18.1 is used in python3.6.
- python 3.6 can only be used to run models on CPUs.
+ + XGBoost
++ python3.6
++ +
- XGBoost 0.80 is used in python3.6.
- python 3.6 can only be used to run models on CPUs.
+ + + PyTorch
++ python3.6
+python3.7
+pytorch1.4-python3.7
++ +
- PyTorch 1.0 is used in python3.6 and python3.7.
- python3.6, python3.7, and pytorch1.4-python3.7 indicate that the model can run on both CPUs and GPUs.
- The default runtime is python3.6.
Procedure
- Log in to the ModelArts management console, and choose Model Management > Models in the left navigation pane. The Models page is displayed.
- Click Import in the upper left corner. The Import page is displayed.
- On the Import page, set related parameters.
- Set basic information about the model. For details about the parameters, see Table 1. -
Table 1 Parameters of basic model information Parameter
+- The imported model, inference code, and configuration file must comply with the requirements of ModelArts. For details, see Model Package Specifications, Specifications for Compiling the Model Configuration File, and Specifications for Compiling Model Inference Code.
- The model package that has completed training, inference code, and configuration file have been uploaded to the OBS directory.
- The OBS directory you use and ModelArts are in the same region.
+ +Procedure
- Log in to the ModelArts management console, and choose Model Management > Models in the left navigation pane. The Models page is displayed.
- Click Import in the upper left corner. The Import page is displayed.
- On the Import page, set related parameters.
- Set basic information about the model. For details about the parameters, see Table 2. + -
- Select the meta model source and set related parameters. Meta Model Source has four options based on the scenario. Set Meta Model Source to OBS. For details about the parameters, see Table 2.
For the meta model imported from OBS, you need to compile the inference code and configuration file by referring to Model Package Specifications and place the inference code and configuration files in the model folder storing the meta model. If the selected directory does not contain the corresponding inference code and configuration files, the model cannot be imported.
+- Select the meta model source and set related parameters. Meta Model Source has four options based on the scenario. Set Meta Model Source to OBS. For details about the parameters, see Table 3.
For the meta model imported from OBS, you need to compile the inference code and configuration file by referring to Model Package Specifications and place the inference code and configuration files in the model folder storing the meta model. If the selected directory does not contain the corresponding inference code and configuration files, the model cannot be imported.
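For reference, the OBS directory selected for import is usually organized as follows. The model folder name and the config.json and customize_service.py file names are required by the model package specifications; the bucket, parent folder, and model file name below are placeholders.
obs://bucket-name/model-demo/
└── model/
    ├── config.json             # model configuration file
    ├── customize_service.py    # model inference code
    └── xgboost.m               # trained model file (example)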
Figure 1 Setting Meta Model Source to OBS-
Table 2 Parameters of the meta model source Parameter
+
Table 3 Parameters of the meta model source Parameter
@@ -82,7 +159,7 @@ - Description
Follow-Up Procedure
+
- Model Deployment: On the Models page, click the triangle next to a model name to view all versions of the model. Locate the row that contains the target version, click Deploy in the Operation column, and select the deployment type configured when importing the model from the drop-down list. On the Deploy page, set parameters by referring to Introduction to Model Deployment.
Follow-Up Procedure
- Model Deployment: On the Models page, click the triangle next to a model name to view all versions of the model. Locate the row that contains the target version, click Deploy in the Operation column, and select the deployment type configured when importing the model from the drop-down list. On the Deploy page, set parameters by referring to Introduction to Model Deployment .
diff --git a/docs/modelarts/umn/modelarts_23_0210.html b/docs/modelarts/umn/modelarts_23_0210.html index 313afda1..67b15205 100644 --- a/docs/modelarts/umn/modelarts_23_0210.html +++ b/docs/modelarts/umn/modelarts_23_0210.html @@ -15,7 +15,6 @@On the labeling platform, each member can view the images that are not labeled, to be corrected, rejected, to be reviewed, approved, and accepted. Pay attention to the images rejected by the administrator and the images to be corrected.
If the Reviewer role is assigned for a team labeling task, the labeling result needs to be reviewed. After the labeling result is reviewed, it is submitted to the administrator for acceptance.
-Figure 1 Labeling platformTask Acceptance (Administrator)
- Initiating acceptance
After team members complete data labeling, the dataset creator can initiate acceptance to check labeling results. The acceptance can be initiated only when a labeling member has labeled data. Otherwise, the acceptance initiation button is unavailable.
- On the Labeling Progress tab page, click Initiate Acceptance to accept tasks.
- In the displayed dialog box, set Sample Policy to By percentage or By quantity. Click OK to start the acceptance.
By percentage: Sampling is performed based on a percentage for acceptance.
@@ -40,7 +39,7 @@diff --git a/docs/modelarts/umn/modelarts_23_0211.html b/docs/modelarts/umn/modelarts_23_0211.html index 3bba11d0..0db68cc3 100644 --- a/docs/modelarts/umn/modelarts_23_0211.html +++ b/docs/modelarts/umn/modelarts_23_0211.html @@ -12,7 +12,7 @@ - Acceptance Scope
- All: all data that has been labeled by the current team, including Accepted, Pending Acceptance, and Rejected data. It refers to all sample files in the dataset.
- All rejects: rejects all data that has been labeled by the current team. That is, all labeled data is rejected to the labeling personnel. +
- All: all data that has been labeled by the current team, including Accepted, Pending Acceptance, and Rejected data. It refers to all sample files in the dataset.
- All rejects: rejects all data that has been labeled by the current team. That is, all labeled data is rejected to the labeling personnel.
- Accepted and pending acceptance: accepts the data that passes the acceptance or is in the Pending Acceptance state in the sample files and rejects the data that fails the acceptance to the labeling personnel.
- Accepted: accepts the data that has passed the acceptance in the sample files and rejects the data that is in the Pending Acceptance state or fails the acceptance to the labeling personnel.
-Starting Labeling
- Log in to the ModelArts management console. In the left navigation pane, choose Data Management > Datasets. The Datasets page is displayed.
- In the dataset list, select the dataset to be labeled based on the labeling type, and click the dataset name to go to the Dashboard tab page of the dataset.
By default, the Dashboard tab page of the current dataset version is displayed. If you need to label the dataset of another version, click the Versions tab and then click Set to Current Version in the right pane. For details, see Managing Dataset Versions.
- On the Dashboard page of the dataset, click Label in the upper right corner. The dataset details page is displayed. By default, all data of the dataset is displayed on the dataset details page.
Labeling Content
The dataset details page displays the labeled and unlabeled text objects in the dataset. The Unlabeled tab page is displayed by default.
+Labeling Content
The dataset details page displays the labeled and unlabeled text objects in the dataset. The Unlabeled tab page is displayed by default.
diff --git a/docs/modelarts/umn/modelarts_23_0217.html b/docs/modelarts/umn/modelarts_23_0217.html index 72164bca..8e520a03 100644 --- a/docs/modelarts/umn/modelarts_23_0217.html +++ b/docs/modelarts/umn/modelarts_23_0217.html @@ -8,14 +8,14 @@
- On the Unlabeled tab page, the objects to be labeled are listed in the left pane. In the list, click a text object, select the corresponding text content on the right pane, and select an entity name from the displayed entity list to label the content.
Figure 3 Labeling an entity- After labeling multiple entities, click the source entity and target entity in sequence and select a relationship type from the displayed relationship list to label the relationship.
Figure 4 Labeling a relationship- After all objects are labeled, click Save Current Page at the bottom of the page.
Overview of a Basic Image Package
To facilitate code download, training log output, and log file upload to OBS, ModelArts provides basic image packages for creating custom images. The basic images provided by ModelArts have the following features:
- Some necessary tools are available in the basic image. You need to create a custom image based on the basic images provided by ModelArts.
- ModelArts continuously updates the basic image versions. For compatible updates, after the basic images are updated, you can still use the old images. For incompatible updates, the custom images created based on the old version cannot run on ModelArts, but the approved custom images can still be used.
- If a custom image fails to be approved and the audit log contains an error message indicating that the basic image does not match, you need to use a new basic image to create an image.
Run the following command to obtain a ModelArts image:
-docker pull <Address for obtaining a basic image>+docker pull <Address for obtaining a basic image>After customizing an image, upload it to SWR. Make sure that you have created an organization and obtained the password for logging in to SWR. For details, see .
-docker push swr.<region>.xxx.com/<Organization to which the target image belongs>/<Image name>+docker push swr.<region>.xxx.com/<Organization to which the target image belongs>/<Image name>Obtain basic images based on chip requirements:
-CPU-based Basic Images
Address for obtaining a basic image
-swr.<region>.xxx.com/modelarts-job-dev-image/custom-cpu-base:1.3+swr.<region>.xxx.com/modelarts-job-dev-image/custom-cpu-base:1.3
Table 1 Optional parameters @@ -81,9 +81,9 @@ Parameter
GPU-based Basic Images
- Image of the CUDA 10.0, 10.1, or 10.2 version, using Ubuntu 18.04 as the basic image and with MoXing pre-installed by default
swr.<region>.xxx.com/modelarts-job-dev-image/custom-base-<cuda version>-<python version>-<os>-<arch>:<image tag>-- Image of the CUDA 8, 9, or 92 version, with MoXing pre-installed by default
swr.<region>.xxx.com/modelarts-job-dev-image/custom-gpu-<cuda version>-inner-moxing-<python version>:<image tag>-- Image of the CUDA 8, 9, or 92 version
swr.<region>.xxx.com/modelarts-job-dev-image/custom-gpu-<cuda version>-base:<image tag>+GPU-based Basic Images
- Image of the CUDA 10.0, 10.1, or 10.2 version, using Ubuntu 18.04 as the basic image and with MoXing pre-installed by default
swr.<region>.xxx.com/modelarts-job-dev-image/custom-base-<cuda version>-<python version>-<os>-<arch>:<image tag>+- Image of the CUDA 8, 9, or 92 version, with MoXing pre-installed by default
swr.<region>.xxx.com/modelarts-job-dev-image/custom-gpu-<cuda version>-inner-moxing-<python version>:<image tag>+- Image of the CUDA 8, 9, or 92 version
swr.<region>.xxx.com/modelarts-job-dev-image/custom-gpu-<cuda version>-base:<image tag>
Table 4 Optional parameters diff --git a/docs/modelarts/umn/modelarts_23_0219.html b/docs/modelarts/umn/modelarts_23_0219.html index 1ce23f91..333c6a60 100644 --- a/docs/modelarts/umn/modelarts_23_0219.html +++ b/docs/modelarts/umn/modelarts_23_0219.html @@ -3,11 +3,11 @@ Parameter
Specifications for Custom Images Used for Importing Models
When creating an image using locally developed models, ensure that they meet the specifications defined by ModelArts.
Specifications for Custom Images Used for Model Management
- Custom images cannot contain malicious code.
- The size of a custom image cannot exceed 30 GB.
- External port of images
The external service port of the image must be 8080. The inference interface must be consistent with the URL defined by apis in the config.json file. The inference interface can be directly accessed when the image is started. The following is an example of accessing the mnist image. The image contains the model trained with the mnist dataset. The model can identify handwritten digits in images. In this example, listen_ip indicates the IP address of the container.
-
- Sample request: curl -X POST \ http://{listen_ip}:8080/ \ -F images=@seven.jpg
- Sample response
{"mnist_result": 7}+
- Sample request: curl -X POST \ http://{listen_ip}:8080/ \ -F images=@seven.jpg
- Sample response
{"mnist_result": 7}- Health check port
A custom image must provide a health check interface for ModelArts to call. The health check interface is configured in the config.json file. For details, see the model configuration file compilation description. A sample health check interface is as follows:
-
- URI
GET /health-- Sample request: curl -X GET \ http://{listen_ip}:8080/health
- Sample response
{"health": "true"}+
- URI
GET /health+- Sample request: curl -X GET \ http://{listen_ip}:8080/health
- Sample response
{"health": "true"}- Status code
Table 1 Status code @@ -30,9 +30,9 @@ diff --git a/docs/modelarts/umn/modelarts_23_0238.html b/docs/modelarts/umn/modelarts_23_0238.html index 3043ea4a..bb268786 100644 --- a/docs/modelarts/umn/modelarts_23_0238.html +++ b/docs/modelarts/umn/modelarts_23_0238.html @@ -16,9 +16,9 @@ Status Code
- System Version
AI Engine and Version
+- AI Engine and Version
Supported CUDA or Ascend Version
+Supported CUDA Version
- Ubuntu 16.04
TF-1.13.1-python3.6
+- TF-1.13.1-python3.6
CUDA 10.0
+CUDA 10.0
TF-1.8.0-python3.6
@@ -53,9 +53,9 @@- Ubuntu 16.04
Caffe-1.0.0-python2.7
+- Caffe-1.0.0-python2.7
CUDA 8.0
+CUDA 8.0
Spark_MLlib
@@ -66,9 +66,9 @@- Ubuntu 16.04
Spark-2.3.2-python3.6
+- Spark-2.3.2-python3.6
N/A
+N/A
XGBoost-Sklearn
@@ -79,9 +79,9 @@- Ubuntu 16.04
Scikit_Learn-0.18.1-python3.6
+- Scikit_Learn-0.18.1-python3.6
N/A
+N/A
PyTorch
@@ -92,9 +92,9 @@- Ubuntu 16.04
PyTorch-1.3.0-python3.6
+- PyTorch-1.3.0-python3.6
CUDA 10.0
+CUDA 10.0
diff --git a/docs/modelarts/umn/modelarts_23_0239.html b/docs/modelarts/umn/modelarts_23_0239.html index d1554027..4bba106d 100644 --- a/docs/modelarts/umn/modelarts_23_0239.html +++ b/docs/modelarts/umn/modelarts_23_0239.html @@ -2,7 +2,7 @@ PyTorch-1.0.0-python3.6
@@ -110,9 +110,9 @@- Ubuntu16.04
MXNet-1.2.1-python3.6
+- MXNet-1.2.1-python3.6
CUDA 9.0
+CUDA 9.0
Using Custom Images to Train Models
If the framework used for algorithm development is not a frequently-used framework, you can build an algorithm into a custom image and use the custom image to create a training job.
-Prerequisites
+
- Data has been prepared. Specifically, you have created an available dataset in ModelArts, or you have uploaded the dataset used for training to the OBS directory.
- If the algorithm source is Custom, create an image and upload the image to SWR. For details, see .
- The training script has been uploaded to the OBS directory.
- At least one empty folder has been created on OBS for storing the training output.
- The account is not in arrears because resources are consumed when training jobs are running.
- The OBS directory you use and ModelArts are in the same region.
Prerequisites
- Data has been prepared. Specifically, you have created an available dataset in ModelArts, or you have uploaded the dataset used for training to the OBS directory.
- If the algorithm source is Custom, create an image and upload the image to SWR. For details, see Creating and Uploading a Custom Image.
- The training script has been uploaded to the OBS directory.
- At least one empty folder has been created on OBS for storing the training output.
- The account is not in arrears because resources are consumed when training jobs are running.
- The OBS directory you use and ModelArts are in the same region.
Precautions
- In the dataset directory specified for a training job, the names of the files containing training data (such as image, audio, and label files) can contain a maximum of 255 characters. Files whose names exceed 255 characters are ignored by the training job, and only the data in the valid files is used for training. If the names of all files in the dataset directory exceed 255 characters, no data is available for the training job and the job fails.
- In the training script, the Data Source and Training Output Path parameters must be set to the OBS path. Use the to perform read and write operations in the path.
Custom
For details about custom image specifications, see Specifications for Custom Images Used for Training Jobs.
- Image Path: SWR URL after the image is uploaded to SWR. For details about how to upload an image, see Creating and Uploading a Custom Image.
- Code Directory: OBS path for storing the training code file.
- Boot Command: Command to boot the training job after the image is started. Set this parameter based on site requirements. If the custom image is based on a basic ModelArts image, set parameters by referring to Creating a Training Job Using a Custom Image (GPU).
Data Source
diff --git a/docs/modelarts/umn/modelarts_23_0332.html b/docs/modelarts/umn/modelarts_23_0332.html
index aa26609d..659808aa 100644
--- a/docs/modelarts/umn/modelarts_23_0332.html
+++ b/docs/modelarts/umn/modelarts_23_0332.html

Step 1: Uploading Files to OBS
Use the OBS API to upload large files because OBS Console has restrictions on the file size and quantity.
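If you prefer to upload programmatically rather than through the console, the OBS SDK for Python can be used. The following is a minimal sketch under stated assumptions, not taken from this guide: it assumes the esdk-obs-python package is installed, and the credentials, endpoint, bucket name, and paths are placeholders you must replace.

  from obs import ObsClient

  # Placeholder credentials and regional endpoint; replace with your own values.
  obs_client = ObsClient(
      access_key_id='YOUR_AK',
      secret_access_key='YOUR_SK',
      server='https://obs.example-region.myhuaweicloud.com'
  )

  # Upload a local file to the target bucket and object key (both placeholders).
  resp = obs_client.putFile('bucket-name', 'dir1/dataset.zip', file_path='/local/path/dataset.zip')
  if resp.status < 300:
      print('Upload succeeded')
  else:
      print('Upload failed:', resp.errorMessage)

  obs_client.close()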
Step 2: Downloading Files from OBS to Notebook Instances
A notebook instance can use OBS or EVS as its storage location. The operation method varies depending on the instance type.
- Downloading files to notebook instances with EVS attached
  Read an OBS file. For example, if you read the obs://bucket_name/obs_file.txt file, the content is returned as strings.

  import moxing as mox
  file_str = mox.file.read('obs://bucket_name/obs_file.txt')

  You can also open the file object and read data from it. Both methods are equivalent.

  with mox.file.File('obs://bucket_name/obs_file.txt', 'r') as f:
      file_str = f.read()
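  To save the OBS object to the EVS disk attached to the notebook instead of reading it into memory, a MoXing copy call can be used. This is a minimal sketch, not taken from this guide; the local path is a placeholder.

  import moxing as mox

  # Copy a single object from OBS to a local path on the attached EVS disk.
  # Both paths are placeholders; replace them with your own bucket and directory.
  mox.file.copy('obs://bucket_name/obs_file.txt', '/home/ma-user/work/obs_file.txt')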
- Use the OBS API in the ModelArts SDK to download data to notebook instances.
If the size of a single file exceeds 5 GB, the file cannot be uploaded in this mode. Use the MoXing API to upload large files.
Sample code:
  # Download an object from OBS to a local path on the notebook instance.
  from modelarts.session import Session
  session = Session()
  session.download_data(bucket_path="/bucket-name/dir1/sdk.txt", path="/home/user/sdk/obs.txt")

- Downloading files to notebook instances using OBS for data storage
diff --git a/docs/modelarts/umn/modelarts_23_0333.html b/docs/modelarts/umn/modelarts_23_0333.html
index c321f8ff..772fd208 100644
--- a/docs/modelarts/umn/modelarts_23_0333.html
+++ b/docs/modelarts/umn/modelarts_23_0333.html

Upload files to the OBS path specified during notebook instance creation and synchronize the files from OBS to the notebook instances using Sync OBS.
Only files within 100 MB in JupyterLab can be downloaded to a local PC. You can perform operations in different scenarios based on the storage location selected when creating a notebook instance.
Notebook Instances Using OBS Storage
For notebook instances that use OBS storage, you can use OBS or the ModelArts SDK to download files from OBS to a local PC.
- Method 1: Downloading the files using OBS
Use OBS to download the files to the local PC. If you have a large amount of data, use OBS Browser+ to download data or folders.
- Method 2: Downloading the files using the ModelArts SDKs
- Download and install the ModelArts SDKs on your local PC. For details, see .
- Authenticate sessions of the ModelArts SDKs. For details, see .
- Download the files from OBS to the local PC. The sample code is as follows:
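  A minimal sketch of that download, reusing the session.download_data call shown earlier in this section; the bucket path and local path are placeholders.

  from modelarts.session import Session

  # Create an authenticated session (see the authentication step above), then
  # download a single object from OBS to a local path on the PC. Paths are placeholders.
  session = Session()
  session.download_data(bucket_path="/bucket-name/dir1/sdk.txt", path="/home/user/sdk/obs.txt")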