
#Introduction to the Salesforce Einstein Predictive Vision Service (Pilot)#


Artificial Intelligence (AI) is already part of our lives. Whenever you pick up your smartphone, you’re seeing what AI can do for you, from tailored recommendations to relevant search results. With the Predictive Vision Service, developers can harness the power of image recognition to build AI-powered apps fast. All without a data science degree!

The Predictive Vision Service is part of the Einstein suite of technologies, and you can use it to AI-enable your apps. Leverage pre-trained classifiers, or train your own custom classifiers to solve a vast array of specialized image-recognition use cases. Developers can bring the power of image recognition to CRM and third-party applications so that end users across sales, service, and marketing can discover new insights about their customers and predict outcomes that lead to smarter decisions.

> **Note:** We provide the Predictive Vision Service to selected customers through a pilot program. The Predictive Vision Service isn’t generally available unless or until Salesforce announces its general availability in documentation or in press releases or public statements. We can’t guarantee general availability within any particular time frame or at all. Make your purchase decisions only on the basis of generally available products and features. You can provide feedback and suggestions for the Predictive Vision Service on the [IdeaExchange](https://success.salesforce.com/ideaSearch) in the Success Community.

<sub>Rights of ALBERT EINSTEIN are used with permission of The Hebrew University of Jerusalem. Represented exclusively by Greenlight.</sub>

#What is the Predictive Vision Service?#


The Predictive Vision Service API enables you to tap into the power of AI and train deep learning models to recognize and classify images at scale. You can use pre-trained classifiers or train your own custom classifiers to solve unique use cases.

For example, Salesforce Social Studio integrates with this service to expand a marketer’s view beyond just keyword listening. You can “visually listen” to detect attributes about an image, such as your brand logo or that of your competitor in a customer’s photo. You can use these attributes to learn more about your customers’ lifestyles and preferences.

Images contain contextual clues about all aspects of your business, including your customers’ preferences, your inventory levels, and the quality of your products. You can use these clues to enrich what you know about your sales, service, and marketing efforts to gain new insights about your customers and take action. The possibilities are limitless, with applications that include:

- Visual search—Expand the ways that your customers can discover your products and increase sales.
 - Provide customers with visual filters to find products that best match their preferences while browsing online.
 - Allow customers to take photos of your products to discover where they can make purchases online or in-store.
- Brand detection—Monitor your brand across all your channels to increase your marketing reach and preserve brand integrity.
 - Better understand customer preferences and lifestyle through their social media images.
 - Monitor user-generated images through communities and review boards to improve products and quality of service.
 - Evaluate banner advertisement exposure during broadcast events to drive higher ROI.
- Product identification—Increase the ways that you can identify your products to streamline sales processes and customer service.
 - Identify product issues before sending out a field technician to reduce case resolution time.
 - Discover which products are out of stock or misplaced to streamline inventory restocking.
 - Measure retail shelf-share to optimize product mix and represent top-selling products among competitors.

#Deep Learning in a Nutshell#

Deep learning is a branch of machine learning, so let’s first define that term. Machine learning is a type of AI that gives computers the ability to learn without being explicitly programmed. Machine learning algorithms can tell you something interesting about a set of data without your writing custom code specific to the problem. Instead, you feed data to generic algorithms, and these algorithms build their own logic from the patterns within the data.

In deep learning, you create and train a neural network in a specific way. A neural network is a set of algorithms designed to recognize patterns, and in deep learning the network has multiple layers. At the first layer, the network trains on a specific set of features and then sends that information to the next layer. That layer combines the information with other features, passes it to the next layer, and so on.

Deep learning has grown in popularity because it has proven to outperform other machine learning methodologies. With the advancement of distributed compute resources, and with businesses generating an influx of image, text, and voice data, deep learning can deliver insights that weren’t previously possible.
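The layered idea described above can be sketched in a few lines of plain Python. This is an illustrative toy, not the service’s actual implementation: each layer combines its inputs with fixed weights, applies an activation, and hands the result to the next layer. The weights here are made up; real training would learn them from data.

```python
import math

def sigmoid(x):
    # Squash a value into (0, 1); a common neural-network activation.
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    # One layer: each neuron takes a weighted combination of all inputs
    # and applies the activation, producing features for the next layer.
    return [sigmoid(sum(w * i for w, i in zip(neuron, inputs)))
            for neuron in weights]

def forward(pixels, network):
    # Pass the input through every layer in turn; each layer's output
    # becomes the next layer's input.
    features = pixels
    for weights in network:
        features = layer(features, weights)
    return features

# A toy 3-input network: one hidden layer (2 neurons) and one output.
network = [
    [[0.5, -0.2, 0.1], [-0.3, 0.8, 0.4]],  # hidden layer
    [[1.2, -0.7]],                          # output layer
]
score = forward([0.9, 0.1, 0.5], network)[0]
print(round(score, 3))  # a single "confidence" between 0 and 1
```

A real image classifier works the same way in outline, just with millions of learned weights and many more layers.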

#Predictive Vision Service Terminology#


We’re now in the world of AI and deep learning, and this space has lots of new terms to become familiar with. Understanding these terms and how they relate to each other makes it easier to work with the Predictive Vision Service.

- **Dataset**—The training data, which consists of inputs and outputs. Training the dataset creates the model used to make predictions. For an image recognition problem, the image examples you provide train the model on the desired output labels that you want the model to predict. For example, in the Get Started scenario, we create a model named Beach and Mountain Model from a binary training dataset consisting of two labels: beach (images of beach scenes) and mountain (images of mountain scenes). A multi-label dataset contains three or more labels.

- **Label**—A group of similar data inputs in a dataset that your model is trained to recognize. A label references the output name you want your model to predict. For example, for our Beach and Mountain Model, the training data contains images of beaches, and that label is “beach.” Images of mountains have a label of “mountain.” The food classifier, which is trained from a multi-label dataset, contains labels like chocolate cake, pasta, macaroons, and so on.

- **Model**—A machine learning construct used to solve a classification problem. Developers design a classification model by creating a dataset and then defining labels and providing positive examples of inputs that belong to those labels. When you train the dataset, the system determines the commonalities and differences between the various labels to generalize the characteristics that define each label. The model predicts which class a new input falls into based on the predefined classes specified in your training dataset.

- **Training**—The process through which a model is created and learns the classification rules based on a given set of training inputs (the dataset).

- **Prediction**—The results that the model returns as to how closely the input matches data in the dataset.

![Relationship between a dataset, its labels, the trained model, and predictions](https://files.readme.io/e8fe5c5-metamind_ds_updates_df_docs.png)
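One way to see how these terms fit together is a toy classifier in Python. Everything below is deliberately simplistic and hypothetical (a nearest-centroid model over made-up two-number “images”, nothing like the service’s deep learning models); it exists only to show the workflow: build a dataset of labeled examples, train a model from it, then ask the model for a prediction.

```python
import math

# Dataset: labeled example inputs. Each "image" is reduced to two
# made-up features (say, average blueness and average greenness).
dataset = {
    "beach":    [(0.9, 0.2), (0.8, 0.3), (0.95, 0.25)],
    "mountain": [(0.3, 0.8), (0.2, 0.9), (0.35, 0.7)],
}

def train(dataset):
    # "Training" here: compute the average feature vector per label.
    # A real deep-learning model learns far richer representations.
    model = {}
    for label, examples in dataset.items():
        n = len(examples)
        model[label] = tuple(sum(e[i] for e in examples) / n
                             for i in range(2))
    return model

def predict(model, features):
    # Prediction: score each label by closeness to its centroid and
    # normalize so the scores sum to 1 (highest score = best match).
    inv = {label: 1.0 / (1e-9 + math.dist(center, features))
           for label, center in model.items()}
    total = sum(inv.values())
    return {label: score / total for label, score in inv.items()}

model = train(dataset)
probs = predict(model, (0.85, 0.28))   # a new, unlabeled input
print(max(probs, key=probs.get))       # → beach
```

The shape of the result mirrors what a prediction call returns: a probability per label, with the highest-probability label as the model’s answer.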

#What You Need to Call the API#

Before you can access the Predictive Vision Service API, you first create an account and download your key.

##Get a Predictive Services Account##

1. From a browser, navigate to the sign-up page at [https://api.metamind.io/signup](https://api.metamind.io/signup).

2. Click **Sign Up Using Salesforce**.
![Sign up page](https://files.readme.io/f96bf28-sign_up.png)

3. On the Salesforce login page, type your username and password, and click **Log In**. If you’re already logged in to Salesforce, you won’t see this page and you can skip to step 4.
![Salesforce login page](https://files.readme.io/037038d-log_in.png)

4. Click **Allow** so the page can access basic information, such as your email address, and perform requests.
![Allow access page](https://files.readme.io/29b88c3-allow_access.png)

5. On the activation page, click **Download Key** to save the key locally. The key file is named `predictive_services.pem`. Make a note of where you save this file because you’ll need it to authenticate when you call the API.
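The downloaded `predictive_services.pem` private key is typically used to sign a token (a JWT) that authenticates your API calls. The claim names and values below are placeholders, not the service’s actual requirements; the sketch only shows the general shape of a JWT, i.e. base64url-encoded header and claims joined by a dot, which is then signed with the private key.

```python
import base64
import json
import time

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

# Hypothetical claims -- consult the API documentation for the real
# names, values, and token endpoint.
header = {"alg": "RS256", "typ": "JWT"}
claims = {
    "sub": "you@example.com",          # account identity (assumed)
    "aud": "https://api.example.com",  # token audience (placeholder)
    "exp": int(time.time()) + 3600,    # expires in one hour
}

signing_input = (b64url(json.dumps(header).encode()) + "." +
                 b64url(json.dumps(claims).encode()))

# The final step signs `signing_input` with the private key from
# predictive_services.pem (RS256). That needs a crypto library such as
# PyJWT, so it is omitted from this stdlib-only sketch.
print(signing_input)
```

Keep the `.pem` file private: anyone who holds it can mint tokens for your account.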

#Scenario#


To help you get up and running quickly, you’ll step through integrating your Salesforce org with the Predictive Vision Service API. First, you create Apex classes that call the API. Then you create a Visualforce page to tie it all together.

If you need help as you go through these steps, check out the [Predictive Services developer forum](https://developer.salesforce.com/forums?communityId=09aF00000004HMGIA2#!/feedtype=RECENT&dc=Predictive_Services&criteria=ALLQUESTIONS) on Salesforce Developers.

Prerequisites


- **Set up your account**—Follow the steps in [What You Need to Call the API](doc:what-you-need-to-call-api) to set up your Predictive Services account.

- **Install Git**—To get the Visualforce and Apex code, you need Git to clone the repos.

Upload Your Key

You must upload your key to Salesforce Files so that the Apex controller class can access it.

1. Log in to Salesforce.

2. Click **Files**.

3. Click **Upload File**.

4. Navigate to the directory where you saved the `predictive_services.pem` file, select the file, and click **Open**. You should see the key file in the list of files that you own.

![files_key.png](https://files.readme.io/246fb4e-files_key.png)

Get the Code

Now that you’ve got an account set up, get the code from GitHub.

1. Clone the JWT repo by using this command.

   ```
   git clone https://github.com/salesforceidentity/jwt
   ```

2. Clone the Apex code repo by using this command.

   ```
   git clone https://github.com/MetaMind/apex-utils
   ```

Create a Remote Site

Before you can call the Predictive Vision Service API from Apex, you must add the API endpoint as a remote site.

1. Log in to Salesforce.

2. From Setup, enter `Remote Site` in the Quick Find box, then select **Remote Site Settings**.

3. Click **New Remote Site**.

4. Enter a name for the remote site.

5. In the Remote Site URL field, enter `https://api.metamind.io`.

6. Click **Save**.

![remote_site.png](https://files.readme.io/d316e9b-remote_site.png)

Create the Apex Classes

In this step, you create the Apex classes that call the API and do all of the heavy lifting.

1. In Salesforce, from Setup, enter `Apex Classes` in the Quick Find box, then select **Apex Classes**.

2. Click **New**.

3. To create the `JWT` Apex class, copy all the code from `JWT.apex` into the Apex Class tab and click **Save**.

4. To create the `JWTBearerFlow` Apex class, go back to the Apex Classes page, and click **New**.

5. Copy all the code from `JWTBearer.apex` into the Apex Class tab and click **Save**.

6. To create the `HttpFormBuilder` Apex class, go back to the Apex Classes page, and click **New**.

7. Copy all the code from `HttpFormBuilder.apex` into the Apex Class tab and click **Save**.

8. To create the `Vision` Apex class, go back to the Apex Classes page, and click **New**.

9. Copy all the code from `Vision.apex` into the Apex Class tab and click **Save**.

10. To create the `VisionController` Apex class, go back to the Apex Classes page, and click **New**.

11. Copy the VisionController code from `README.md` into the Apex Class tab and click **Save**. This class is all the code from `public class VisionController {` to the closing brace `}`. In this example, the token expiration is one hour (3600 seconds).

12. Replace the `jwt.sub` placeholder value of `yourname@example.com` with your email address. Use the email address associated with the Salesforce org that you logged in to when you created your account.

```
// Get a new token
JWT jwt = new JWT('RS256');
// jwt.cert = 'JWTCert'; // Uncomment this if you used a Salesforce certificate to sign up for a Predictive Services account
jwt.pkcs8 = keyContents; // Comment this if you are using jwt.cert
jwt.iss = 'developer.force.com';
jwt.sub = 'yourname@example.com';
jwt.aud = 'https://api.metamind.io/v1/oauth2/token';
jwt.exp = '3600';
```
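For reference, `jwt.exp = '3600'` above is a validity period in seconds, while the `exp` claim in a standard JWT is an absolute Unix timestamp; the signing code derives one from the other. The following Python sketch (hypothetical, not part of the Apex utilities) shows the equivalent claims set:

```python
import time

# Mirror the claim values from the Apex snippet above.
# In a standard JWT, "exp" is an absolute Unix timestamp, so a
# validity period of 3600 seconds becomes "now + 3600".
VALIDITY_SECONDS = 3600

claims = {
    "iss": "developer.force.com",
    "sub": "yourname@example.com",  # the email address for your account
    "aud": "https://api.metamind.io/v1/oauth2/token",
    "exp": int(time.time()) + VALIDITY_SECONDS,
}
```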

Create the Visualforce Page

Now you create a Visualforce page that calls the classes that you just created to make a prediction.

1. In Salesforce, from Setup, enter `Visualforce` in the Quick Find box, then select **Visualforce Pages**.

2. Click **New**.

3. Enter `Predict` for both the label and the name.

4. From the `README.md` file, copy all of the code from `<apex:page Controller="VisionController">` to `</apex:page>` and paste it into the code editor.

   ![vf_page.png](https://files.readme.io/fbe5801-vf_page.png)

5. Click **Save**.

6. Click **Preview** to test the page.

![prediction.png](https://files.readme.io/f6dab44-prediction.png)

Your page shows the prediction results from the General Image Classifier, and the classifier is pretty sure it’s a picture of a tree frog.

Congratulations! You wrote code that calls the Predictive Vision Service API to make a prediction from an image, all from within your Salesforce org.

Scenario


After you've mastered the basics, it's time to step through creating your own image classifier and testing it out. You use the Predictive Vision Service REST API for all these tasks.

If you need help as you go through these steps, check out the [Predictive Services developer forum](https://developer.salesforce.com/forums?communityId=09aF00000004HMGIA2#!/feedtype=RECENT&dc=Predictive_Services&criteria=ALLQUESTIONS) on Salesforce Developers.

You’re a developer who works for a company that sells outdoor sporting gear. The company has automation that monitors social media channels. When someone posts a photo, the company wants to know whether the photo was taken at the beach or in the mountains. Based on where the photo was taken, the company can make targeted product recommendations to its users.

To perform that kind of analysis manually requires multiple people. In addition, manual analysis is slow, so it’s likely that the company couldn’t respond until well after the photo was posted. You’ve been tasked with implementing automation that can solve this problem.

Your task is straightforward: Create a model that can identify whether an image is of the beach or the mountains. Then test the model with an image of a beach scene.

Prerequisites


- **Set up your account**—Follow the steps in [What You Need to Call the API](doc:what-you-need-to-call-api) to set up your Predictive Services account.

- **Get the data**—Download [http://metamind.io/images/mountainvsbeach.zip](http://metamind.io/images/mountainvsbeach.zip) and unzip it on your file system. This file contains 99 images:

  - 49 beach images to add to the dataset
  - 50 mountain images to add to the dataset

  <sub>If you use the Service, Salesforce may make available certain images to you ("Provided Images"), which are licensed from a third party, as part of the Service. You agree that you will only use the Provided Images in connection with the Service, and you agree that you will not: modify, alter, create derivative works from, sell, sublicense, transfer, assign, or otherwise distribute the Provided Images to any third party.</sub>

- **Install cURL**—We use the cURL command-line tool throughout the following steps. It's installed by default on Linux and macOS. If you don’t already have it installed, download it from [https://curl.haxx.se/download.html](https://curl.haxx.se/download.html).

Set Up Authorization

The Predictive Vision Service API uses OAuth 2.0 JWT bearer token flow for authorization. Follow these steps to use the [https://jwt.io](https://jwt.io) site to generate a token.

1. From your browser, navigate to [https://jwt.io](https://jwt.io).

2. Use the RSA key in the `.pem` file you downloaded when you created your account to get a token.

**Tip:** The token you create when you use this site doesn't automatically refresh. Your application must refresh the token based on the expiration time that you set when you create it.
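Behind the scenes, jwt.io assembles the token from three base64url-encoded parts joined by dots: header, payload (the claims), and signature. A minimal Python sketch of the first two parts follows (illustrative only; the signature requires your RSA private key, which jwt.io applies for you when you paste the key in):

```python
import base64
import json

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding (RFC 7515)
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

header = {"alg": "RS256", "typ": "JWT"}
payload = {
    "iss": "developer.force.com",
    "sub": "yourname@example.com",   # your account email
    "aud": "https://api.metamind.io/v1/oauth2/token",
    "exp": 1475190000,               # absolute Unix timestamp
}

# This is the string that gets signed with your RSA private key.
signing_input = (
    b64url(json.dumps(header).encode())
    + "."
    + b64url(json.dumps(payload).encode())
)

# A complete JWT is: signing_input + "." + b64url(rsa_sha256_signature)
print(signing_input)
```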

Step 1: Create the Dataset


In the following command, replace `<TOKEN>` with your JWT token and run the command. This command creates a dataset called Beach and Mountain that contains a beach label and a mountain label, as defined by the `labels` parameter. You use this dataset to create the model.

```curl
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "name=Beach and Mountain" -F "labels=beach,mountain" https://api.metamind.io/v1/vision/datasets
```

If the command completes successfully, it returns a response that contains the dataset ID and the IDs of the labels. Make a note of these IDs because you need them in the future.

```json
{
  "id": 24,
  "name": "Beach and Mountain",
  "createdAt": "2016-09-09T22:39:22.000+0000",
  "updatedAt": "2016-09-09T22:39:22.000+0000",
  "labelSummary": {
    "labels": [
      {
        "id": 45,
        "datasetId": 24,
        "name": "beach",
        "numExamples": 0
      },
      {
        "id": 44,
        "datasetId": 24,
        "name": "mountain",
        "numExamples": 0
      }
    ]
  },
  "totalExamples": 0,
  "totalLabels": 2,
  "object": "dataset"
}
```

##Tell Me More##
There are other ways to work with datasets using the API. For example, use this command to return a list of all your datasets.

```curl
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets
```

The results look something like this.

```json
{
  "object": "list",
  "data": [
    {
      "id": 24,
      "name": "Beach and Mountain",
      "createdAt": "2016-09-09T22:39:22.000+0000",
      "updatedAt": "2016-09-09T22:39:22.000+0000",
      "labelSummary": {
        "labels": [
          {
            "id": 37,
            "datasetId": 24,
            "name": "beach",
            "numExamples": 49
          },
          {
            "id": 36,
            "datasetId": 24,
            "name": "mountain",
            "numExamples": 50
          }
        ]
      },
      "object": "dataset"
    },
    {
      "id": 25,
      "name": "Brain Scans",
      "createdAt": "2016-08-24T21:35:27.000+0000",
      "updatedAt": "2016-08-24T21:35:27.000+0000",
      "object": "dataset"
    }
  ]
}
```

To delete a dataset, use the DELETE verb and pass in the dataset ID.

```curl
curl -X DELETE -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>
```

Deleting a dataset returns an HTTP status of 204, but no JSON response is returned.
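If you accumulate several datasets, picking the right dataset and label IDs out of that list response by eye gets tedious. As an illustrative sketch (not part of the service; the helper function is our own, and the embedded sample is abbreviated from the list response above), you can parse the JSON and pull out the label IDs for a dataset by name:

```python
import json

# Abbreviated sample response from GET /v1/vision/datasets (see above).
response_text = """
{
  "object": "list",
  "data": [
    {"id": 24, "name": "Beach and Mountain",
     "labelSummary": {"labels": [
       {"id": 37, "datasetId": 24, "name": "beach", "numExamples": 49},
       {"id": 36, "datasetId": 24, "name": "mountain", "numExamples": 50}]},
     "object": "dataset"},
    {"id": 25, "name": "Brain Scans", "object": "dataset"}
  ]
}
"""

def label_ids_by_name(list_response: str, dataset_name: str) -> dict:
    """Return {label name: label id} for the named dataset, or {} if the
    dataset isn't found or has no label summary."""
    data = json.loads(list_response)
    for dataset in data.get("data", []):
        if dataset.get("name") == dataset_name:
            labels = dataset.get("labelSummary", {}).get("labels", [])
            return {lbl["name"]: lbl["id"] for lbl in labels}
    return {}

print(label_ids_by_name(response_text, "Beach and Mountain"))
```

The same idea works with any JSON-aware tool; the point is to capture the IDs once rather than copying them by hand into later commands.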

Step 2: Add Examples to the Dataset


1. Now you’re ready to add examples to the dataset. Because you’re classifying images, you’re adding images. If you don’t have the .zip file of images, see the [Prerequisites](doc:prerequisites) to get it. To upload the images, copy the scripts from the [Scripts to Add Images to the Dataset](doc:scripts-to-add-images-to-the-dataset) page into a file and update these values.
   - `TOKEN`—Replace with your JWT token.
   - `<ID_OF_BEACH_LABEL>` and `<ID_OF_MTN_LABEL>`—Replace with the IDs of the beach label and the mountain label you saved earlier.
   - `<PATH_TO_FILE>`—Replace with the path on your local file system where you unzipped the image files. The file name is contained in the script, so you only need to insert the directory location, including the trailing slash. For example, if you’re on Windows, `data=@<PATH_TO_FILE>661860605.jpg` becomes something like `data=@c:\MountainsvsBeach\Beaches\661860605.jpg`.
   - `<DATASET_ID>`—Replace with the ID of the dataset you created.

2. Open a terminal/command window.

3. Copy the cURL commands from the file and paste them into the terminal/command window to add the examples to the dataset. You’re adding 99 examples, so it can take a while for the commands to complete. If you’ll be loading images frequently, you can create a batch file or script and run it from the command line.

If you’re feeling adventurous, you can also use `slurp.sh`, available from [https://github.com/MetaMind/api-utils](https://github.com/MetaMind/api-utils). This script loops over a directory, creates a dataset, and then adds all the images in that directory to the dataset.

**Tip:** The Predictive Vision Service API supports only PNG, JPG, JPEG, PGM (image/x-portable-graymap), and PPM (image/x-portable-pixmap) image file types. Keep in mind that the maximum image file size is 5 MB; be sure that your image files are within that limit.

##Tell Me More##
After you add a label to a dataset, you can’t delete the label. If you need to remove a label, you must recreate the dataset with the correct labels.

You can add a label to a dataset, add an example to a dataset, and delete an example from a dataset up until you train the dataset. If you try to add a label or add or delete an example after you’ve trained the dataset and created a model, you receive an HTTP status of 400 and an error.

To modify the images in a dataset after you successfully train it, you must recreate the dataset and labels and add the images again. Therefore, it’s a good idea to script all the commands that create the dataset and labels and add the images, so that you can recreate the dataset if necessary. Keep in mind that the new dataset has a different ID, so you must replace instances of the old ID with the new ID.

After you add examples to the dataset, use this call to return all examples in the dataset.

```curl
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples
```

The command returns all the dataset examples and the associated label information for each example.

```json
{
  "object": "list",
  "data": [
    {
      "id": 578,
      "name": "2373848.jpg",
      "createdAt": "2016-09-13T16:16:49.000+0000",
      "label": {
        "id": 45,
        "datasetId": 24,
        "name": "beach"
      },
      "object": "example"
    },
    {
      "id": 579,
      "name": "2373848.jpg",
      "createdAt": "2016-09-13T16:17:52.000+0000",
      "label": {
        "id": 45,
        "datasetId": 24,
        "name": "beach"
      },
      "object": "example"
    }
  ]
}
```

**Warning:** This command returns every example in the dataset, so the results could potentially include a lot of data. Use it only with datasets that have fewer than 1,000 examples.
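If you'd rather generate the upload commands than paste them by hand, a short script can build one cURL command per image. The sketch below is illustrative only, in Python: the endpoint path and the `labelId`/`data` form-field names are assumptions inferred from the placeholders above, and the file names are from this tutorial's dataset. Defer to the Scripts to Add Images to the Dataset page for the exact call shape.

```python
import os

# Assumed endpoint; the real upload calls are defined on the scripts page.
API_URL = "https://api.metamind.io/v1/vision/datasets"

def upload_commands(token, dataset_id, label_id, image_dir, file_names):
    """Build one cURL command per image file.

    In practice file_names would come from os.listdir(image_dir); it is a
    parameter here so the sketch works without a real directory.
    """
    commands = []
    for name in file_names:
        path = os.path.join(image_dir, name)
        commands.append(
            'curl -X POST'
            f' -H "Authorization: Bearer {token}"'
            ' -H "Cache-Control: no-cache"'
            ' -H "Content-Type: multipart/form-data"'
            f' -F "labelId={label_id}"'   # field name is an assumption
            f' -F "data=@{path}"'
            f' {API_URL}/{dataset_id}/examples'  # path is an assumption
        )
    return commands

# Hypothetical values; substitute your own token, IDs, and directory.
cmds = upload_commands("<TOKEN>", 24, 45, "MountainsvsBeach/Beaches",
                       ["661860605.jpg", "2373848.jpg"])
print("\n".join(cmds))
```

Writing the generated commands to a file gives you exactly the kind of rerunnable script recommended above for recreating a dataset.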

Step 3: Train the Dataset


Training the dataset creates the model that delivers the predictions.

1. Now that you’ve added the labeled images to the dataset, it’s time to train the dataset. Replace `<DATASET_ID>` with your dataset ID in this command, and then run it. This command trains the dataset and creates a model with the name specified in the `name` parameter.

```curl
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "name=Beach and Mountain Model" -F "datasetId=<DATASET_ID>" https://api.metamind.io/v1/vision/train
```

The response contains information about the training status and looks like the following. Make a note of the `modelId` because you use this value in the next step.

```json
{
  "datasetId": 24,
  "name": "Beach and Mountain Model",
  "status": "QUEUED",
  "progress": 0,
  "createdAt": "2016-09-11T23:44:16.000+0000",
  "updatedAt": "2016-09-11T23:44:16.000+0000",
  "learningRate": 0.001,
  "epochs": 3,
  "object": "training",
  "queuePosition": 1,
  "modelId": "FCY2FLDSYIKEU4RNSMUDAQXZMA"
}
```

2. Training a dataset can take a while, depending on how many images the dataset contains. To get the training status, replace `<YOUR_MODEL_ID>` in this command with your model ID, and run the command.

```curl
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/train/<YOUR_MODEL_ID>
```

The response returns the status of the training process. If training is in progress, you see a status of `RUNNING`. When the training is complete, it returns a status of `SUCCEEDED` and a progress value of `1`.

```json
{
  "datasetId": 24,
  "name": "Beach and Mountain Model",
  "status": "RUNNING",
  "progress": 0.34,
  "createdAt": "2016-09-06T20:53:01.000+0000",
  "updatedAt": "2016-09-06T20:53:05.000+0000",
  "learningRate": 0.001,
  "epochs": 3,
  "object": "training",
  "modelId": "FCY2FLDSYIKEU4RNSMUDAQXZMA"
}
```

##Tell Me More##
After you create a model, you can retrieve metrics about the model, such as its accuracy, f1 score, and confusion matrix. You can use these values to tune and tweak your model. Use this call to get the model metrics.

```curl
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/models/<MODEL_ID>
```

The command returns a response similar to this one.

```json
{
  "metricsData": {
    "f1": [
      0.9090909090909092,
      0.9411764705882352
    ],
    "labels": [
      "beach",
      "mountain"
    ],
    "testAccuracy": 0.9286,
    "trainingLoss": 0.0104,
    "confusionMatrix": [
      [
        5,
        0
      ],
      [
        1,
        8
      ]
    ],
    "trainingAccuracy": 0.9976
  },
  "createdAt": "2016-09-15T15:32:52.000+0000",
  "id": "FCY2FLDSYIKEU4RNSMUDAQXZMA",
  "object": "metrics"
}
```
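Rather than rerunning the status command by hand, you can poll until training finishes. The following is an illustrative Python sketch (our own helper, not part of the service): the status fetcher is injected as a callable, so the loop below runs against simulated responses instead of the live `/v1/vision/train/<YOUR_MODEL_ID>` endpoint.

```python
import time

def wait_for_training(get_status, poll_seconds=10, max_polls=360):
    """Poll until training reaches a terminal state.

    get_status is a zero-argument callable returning the parsed JSON from
    the training-status call; injecting it keeps this sketch runnable
    without network access.
    """
    for _ in range(max_polls):
        status = get_status()
        if status["status"] in ("SUCCEEDED", "FAILED"):
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("training did not finish within the polling budget")

# Simulated response sequence, mirroring the examples in the text.
fake_responses = iter([
    {"status": "QUEUED", "progress": 0},
    {"status": "RUNNING", "progress": 0.34},
    {"status": "SUCCEEDED", "progress": 1},
])
final = wait_for_training(lambda: next(fake_responses), poll_seconds=0)
print(final["status"])
```

In real use, the injected callable would issue the GET request shown above and return the decoded JSON body.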

Step 4: Classify an Image


Now that the data is uploaded and you've created a model, you're ready to use it to make predictions. You send an image to the model, and the model returns label names and probability values. The probability value is the model's prediction of how well the image matches a label in its dataset; the higher the value, the stronger the match.

You can classify an image in these ways:

- Reference the file by a URL
- Upload the file by its path
- Upload the image as a base64 string

For this example, you reference the picture by its file URL.

(Image: https://files.readme.io/4d870d7-546212389.jpg)

1. In the following command, replace:
   - `<TOKEN>` with your JWT token
   - `<YOUR_MODEL_ID>` with the ID of the model that you created when you trained the dataset

   Then run the command from the command line.

```
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "sampleLocation=http://metamind.io/images/546212389.jpg" -F "modelId=<YOUR_MODEL_ID>" https://api.metamind.io/v1/vision/predict
```

The model returns results similar to the following.

```json
{
  "probabilities": [
    {
      "label": "beach",
      "probability": 0.980938732624054
    },
    {
      "label": "mountain",
      "probability": 0.0190612580627203
    }
  ],
  "object": "predictresponse"
}
```

The model predicts that the image belongs to the beach label, and is therefore a picture of a beach scene. The numeric prediction is contained in the `probability` field, and its value ranges from 0 (not at all likely) to 1 (very likely). In this case, the model is about 98% sure that the image belongs to the beach label. Results are returned in descending order, with the greatest probability first.

If you run a prediction against a model that's still training, the response contains a null `probabilities` object and a status of 404.

Caution: The dataset used for this scenario contains only 99 images, which is considered a small dataset. When you build your own dataset and model, follow the guidance on the [Dataset and Model Best Practices](doc:dataset-and-model-best-practices) page and add a lot of data.

##Tell Me More##

You can also classify a local image by uploading it. Instead of the `sampleLocation` parameter, pass the `sampleContent` parameter, which contains the image file location, like this.

```
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "sampleContent=@C:\Mountains vs Beach\Beaches\546212389.jpg" -F "modelId=<YOUR_MODEL_ID>" https://api.metamind.io/v1/vision/predict
```

Creating the dataset and model is just the beginning. When you create your own model, be sure to test a range of images to verify that it returns the results that you need.

You've done it! You've gone through the complete process of building a dataset, creating a model, and classifying images with the Predictive Vision Service API. You're ready to take what you've learned and bring the power of deep learning to your users.
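In an application, the step after the predict call is usually picking the winning label out of the `probabilities` array. Here is a minimal Python sketch that does that against the sample response shown above; `top_label` is a hypothetical helper name, and the null-`probabilities` branch mirrors the still-training behavior described here:

```python
import json

# Sample response copied from the predict call above.
RESPONSE = json.loads("""
{
  "probabilities": [
    {"label": "beach", "probability": 0.980938732624054},
    {"label": "mountain", "probability": 0.0190612580627203}
  ],
  "object": "predictresponse"
}
""")

def top_label(response):
    """Return (label, probability) for the highest-probability label.

    Returns None when `probabilities` is null or empty, which is what the
    service sends back while the model is still training.
    """
    probs = response.get("probabilities") or []
    if not probs:
        return None
    # The service already sorts descending, but taking the max keeps the
    # helper correct regardless of ordering.
    best = max(probs, key=lambda p: p["probability"])
    return best["label"], best["probability"]

label, probability = top_label(RESPONSE)
print(label, round(probability, 2))  # beach 0.98
```

In a real app you would feed `top_label` the parsed body of the `/v1/vision/predict` response instead of the hard-coded sample, and treat a `None` result as "model not ready yet".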
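If you'd rather build the local-upload call from a script than paste a long command line, the same request can be assembled programmatically. This is a minimal sketch, not part of the service's own tooling: `predict_local` is a hypothetical helper, and the flags simply mirror the documented `sampleContent` curl command.

```python
# Assemble the same predict call as the curl example above, for a local
# image uploaded via the sampleContent parameter.
PREDICT_URL = "https://api.metamind.io/v1/vision/predict"

def predict_local(image_path, model_id, token):
    """Return the curl argv that classifies a local image file."""
    return [
        "curl", "-X", "POST",
        "-H", f"Authorization: Bearer {token}",
        "-H", "Cache-Control: no-cache",
        "-H", "Content-Type: multipart/form-data",
        "-F", f"sampleContent=@{image_path}",  # @ tells curl to upload the file
        "-F", f"modelId={model_id}",
        PREDICT_URL,
    ]

cmd = predict_local("546212389.jpg", "<YOUR_MODEL_ID>", "<TOKEN>")
print(" ".join(cmd))
```

Pass the list to `subprocess.run(cmd, check=True)` to execute it; keeping the arguments as a list avoids shell-quoting problems with paths that contain spaces, such as `C:\Mountains vs Beach\Beaches`.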
{"__v":0,"_id":"57dee8be84019d2000e95af2","api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"body":"These commands add the images that you extracted from the .zip file to the dataset that you specify. To use these commands, replace all the information in the carets (<>) with values specific to your implementation.\n##Beach Label Images##\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=2373848.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>2373848.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=77880132.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>77880132.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=109558771.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>109558771.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=113598932.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>113598932.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=127501310.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" 
-F \\\"data=@<PATH_TO_FILE>127501310.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=136506260.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>136506260.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=138058680.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>138058680.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=155304150.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>155304150.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=157938028.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>157938028.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=158727632.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>158727632.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=163372001.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>163372001.jpg\\\" 
https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=165085087.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>165085087.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=167756373.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>167756373.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=169689764.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>169689764.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=173961317.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>173961317.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=175369218.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>175369218.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=175551857.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>175551857.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer 
<TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=177192429.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>177192429.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=181494079.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>181494079.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=183198687.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>183198687.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=187653290.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>187653290.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=187653343.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>187653343.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=200526159-001.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>200526159-001.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F 
\\\"name=459120189.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>459120189.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=478413621.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>478413621.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=487712573.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>487712573.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=497925190.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>497925190.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=498573275.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>498573275.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=506811131.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>506811131.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=510250437.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F 
\\\"data=@<PATH_TO_FILE>510250437.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=510250447.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>510250447.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=523990156.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>523990156.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=532030367.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>532030367.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=535836075.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>535836075.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=537632953.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>537632953.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=544202064.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>544202064.jpg\\\" 
https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=552692449.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>552692449.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=557920903.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>557920903.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=566675649.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>566675649.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=567867139.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>567867139.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=573017466.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>573017466.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=584805430.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>584805430.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer 
<TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=599858576.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>599858576.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=599859278.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>599859278.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=608166981.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>608166981.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=654711687.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>654711687.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=659803277.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>659803277.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=661860605.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>661860605.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F 
\\\"name=52647494.jpg\\\" -F \\\"labelId=<ID_OF_BEACH_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>52647494.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\n##Mountain Label Images##\n\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=85212077.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>85212077.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=89171826.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>89171826.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=120227629.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>120227629.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=148981106.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>148981106.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=148981110.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>148981110.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: 
multipart/form-data\\\" -F \\\"name=150513237.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>150513237.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=174714919.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>174714919.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=176613589.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>176613589.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=451932127.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>451932127.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=451932135.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>451932135.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=452872720.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>452872720.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=460401152.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F 
\\\"data=@<PATH_TO_FILE>460401152.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=460401174.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>460401174.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=462382650.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>462382650.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=462530961.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>462530961.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=465212420.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>465212420.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=468056198.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>468056198.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=476947398.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>476947398.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST 
-H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=479111308.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>479111308.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=481493664.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>481493664.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=483951488.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>483951488.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=483957176.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>483957176.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=483957414.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>483957414.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=498264868.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>498264868.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" 
-F \\\"name=502694924.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>502694924.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=505543839.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>505543839.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=508065107.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>508065107.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=508066207.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>508066207.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=511584727.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>511584727.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=521811667.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>521811667.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=527961035.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>527961035.jpg\\\" 
https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=543126233.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>543126233.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=549525751.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>549525751.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=551752415.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>551752415.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=564748939.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>564748939.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=565788073.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>565788073.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=578339672.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>578339672.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" 
-H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=578339718.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>578339718.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=578339724.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>578339724.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=583673532.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>583673532.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=583779616.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>583779616.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=587571002.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>587571002.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=587571024.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>587571024.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=593231376.jpg\\\" -F 
\\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>593231376.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=593280120.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>593280120.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=593388816.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>593388816.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=594243060.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>594243060.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=601053832.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>601053832.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=601053842.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>601053842.jpg\\\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\\ncurl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=660548647.jpg\\\" -F \\\"labelId=<ID_OF_MTN_LABEL>\\\" -F \\\"data=@<PATH_TO_FILE>660548647.jpg\\\" 
https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]","category":"57dc74f4ea7c0d1700f1d4d4","createdAt":"2016-09-18T19:19:26.995Z","excerpt":"","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":7,"parentDoc":null,"project":"552d474ea86ee20d00780cd7","slug":"scripts-to-add-images-to-the-dataset","sync_unique":"","title":"Scripts to Add Images to the Dataset","type":"basic","updates":[],"user":"573b5a1f37fcf72000a2e683","version":"57c765bda54f9c0e00cec388","childrenPages":[]}

Scripts to Add Images to the Dataset


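Each image below is uploaded with its own curl call, so the lists are long. The same calls can be generated with a small shell loop. This is a sketch, not part of the official commands: it assumes the extracted images sit in one local directory, that `TOKEN` and `DATASET_ID` are set in your environment, and that you pass the label ID as an argument. The endpoint and form fields match the commands that follow.

```shell
# Sketch: upload every .jpg in a directory as an example for one label.
# Assumptions: TOKEN and DATASET_ID are set in the environment; the label
# ID is the second argument. Set DRY_RUN=1 to print each generated curl
# command instead of calling the API.
upload_images() {
  dir="$1"      # directory holding the images extracted from the .zip file
  label_id="$2" # e.g. the ID of the Beach or Mountain label
  for f in "$dir"/*.jpg; do
    [ -e "$f" ] || continue   # no matches: the glob stays literal, so skip
    ${DRY_RUN:+echo} curl -X POST \
      -H "Authorization: Bearer $TOKEN" \
      -H "Cache-Control: no-cache" \
      -H "Content-Type: multipart/form-data" \
      -F "name=$(basename "$f")" \
      -F "labelId=$label_id" \
      -F "data=@$f" \
      "https://api.metamind.io/v1/vision/datasets/$DATASET_ID/examples"
  done
}

# Example: upload_images ./beach "<ID_OF_BEACH_LABEL>"
```

With `DRY_RUN=1` the function only echoes each generated command, which is a quick way to check that the placeholders are filled in before anything is sent.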
These commands add the images that you extracted from the .zip file to the dataset that you specify. To use these commands, replace all the information in angle brackets (<>) with values specific to your implementation.

##Beach Label Images##
[block:code]
{
  "codes": [
    {
      "code": "curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=2373848.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>2373848.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=77880132.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>77880132.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=109558771.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>109558771.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=113598932.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>113598932.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=127501310.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>127501310.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=136506260.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>136506260.jpg\" 
https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=138058680.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>138058680.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=155304150.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>155304150.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=157938028.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>157938028.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=158727632.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>158727632.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=163372001.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>163372001.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=165085087.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>165085087.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=167756373.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F 
\"data=@<PATH_TO_FILE>167756373.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=169689764.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>169689764.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=173961317.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>173961317.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=175369218.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>175369218.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=175551857.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>175551857.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=177192429.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>177192429.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=181494079.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>181494079.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=183198687.jpg\" -F 
\"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>183198687.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=187653290.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>187653290.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=187653343.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>187653343.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=200526159-001.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>200526159-001.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=459120189.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>459120189.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=478413621.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>478413621.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=487712573.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>487712573.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: 
multipart/form-data\" -F \"name=497925190.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>497925190.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=498573275.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>498573275.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=506811131.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>506811131.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=510250437.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>510250437.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=510250447.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>510250447.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=523990156.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>523990156.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=532030367.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>532030367.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: 
no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=535836075.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>535836075.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=537632953.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>537632953.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=544202064.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>544202064.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=552692449.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>552692449.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=557920903.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>557920903.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=566675649.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>566675649.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=567867139.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>567867139.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer 
<TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=573017466.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>573017466.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=584805430.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>584805430.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=599858576.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>599858576.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=599859278.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>599859278.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=608166981.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>608166981.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=654711687.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>654711687.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=659803277.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>659803277.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X 
POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=661860605.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>661860605.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=52647494.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>52647494.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples", "language": "curl" } ] } [/block] ##Mountain Label Images## [block:code] { "codes": [ { "code": "curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=85212077.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>85212077.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=89171826.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>89171826.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=120227629.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>120227629.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=148981106.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>148981106.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=148981110.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F 
\"data=@<PATH_TO_FILE>148981110.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=150513237.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>150513237.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=174714919.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>174714919.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=176613589.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>176613589.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=451932127.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>451932127.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=451932135.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>451932135.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=452872720.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>452872720.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=460401152.jpg\" -F 
\"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>460401152.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=460401174.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>460401174.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=462382650.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>462382650.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=462530961.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>462530961.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=465212420.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>465212420.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=468056198.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>468056198.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=476947398.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>476947398.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F 
\"name=479111308.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>479111308.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=481493664.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>481493664.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=483951488.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>483951488.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=483957176.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>483957176.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=483957414.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>483957414.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=498264868.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>498264868.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=502694924.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>502694924.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: 
multipart/form-data\" -F \"name=505543839.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>505543839.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=508065107.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>508065107.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=508066207.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>508066207.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=511584727.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>511584727.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=521811667.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>521811667.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=527961035.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>527961035.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=543126233.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>543126233.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H 
\"Content-Type: multipart/form-data\" -F \"name=549525751.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>549525751.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=551752415.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>551752415.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=564748939.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>564748939.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=565788073.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>565788073.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=578339672.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>578339672.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=578339718.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>578339718.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=578339724.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>578339724.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H 
\"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=583673532.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>583673532.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=583779616.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>583779616.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=587571002.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>587571002.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=587571024.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>587571024.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=593231376.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>593231376.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=593280120.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>593280120.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=593388816.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>593388816.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: 
Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=594243060.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>594243060.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=601053832.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>601053832.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=601053842.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>601053842.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=660548647.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>660548647.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples", "language": "curl" } ] } [/block]
These commands add the images that you extracted from the .zip file to the dataset that you specify. To run these commands, replace each placeholder in angle brackets (<>), such as <TOKEN>, <DATASET_ID>, <PATH_TO_FILE>, and the label IDs, with values specific to your implementation. ##Beach Label Images## [block:code] { "codes": [ { "code": "curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=2373848.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>2373848.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=77880132.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>77880132.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=109558771.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>109558771.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=113598932.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>113598932.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=127501310.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>127501310.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=136506260.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>136506260.jpg\" 
https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=138058680.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>138058680.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=155304150.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>155304150.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=157938028.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>157938028.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=158727632.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>158727632.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=163372001.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>163372001.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=165085087.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>165085087.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=167756373.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F 
\"data=@<PATH_TO_FILE>167756373.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=169689764.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>169689764.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=173961317.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>173961317.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=175369218.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>175369218.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=175551857.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>175551857.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=177192429.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>177192429.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=181494079.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>181494079.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=183198687.jpg\" -F 
\"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>183198687.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=187653290.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>187653290.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=187653343.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>187653343.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=200526159-001.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>200526159-001.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=459120189.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>459120189.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=478413621.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>478413621.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=487712573.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>487712573.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: 
multipart/form-data\" -F \"name=497925190.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>497925190.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=498573275.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>498573275.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=506811131.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>506811131.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=510250437.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>510250437.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=510250447.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>510250447.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=523990156.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>523990156.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=532030367.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>532030367.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: 
no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=535836075.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>535836075.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=537632953.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>537632953.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=544202064.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>544202064.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=552692449.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>552692449.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=557920903.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>557920903.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=566675649.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>566675649.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=567867139.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>567867139.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer 
<TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=573017466.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>573017466.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=584805430.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>584805430.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=599858576.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>599858576.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=599859278.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>599859278.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=608166981.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>608166981.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=654711687.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>654711687.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=659803277.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>659803277.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X 
POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=661860605.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>661860605.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=52647494.jpg\" -F \"labelId=<ID_OF_BEACH_LABEL>\" -F \"data=@<PATH_TO_FILE>52647494.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples", "language": "curl" } ] } [/block] ##Mountain Label Images## [block:code] { "codes": [ { "code": "curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=85212077.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>85212077.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=89171826.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>89171826.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=120227629.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>120227629.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=148981106.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>148981106.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=148981110.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F 
\"data=@<PATH_TO_FILE>148981110.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=150513237.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>150513237.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=174714919.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>174714919.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=176613589.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>176613589.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=451932127.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>451932127.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=451932135.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>451932135.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=452872720.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>452872720.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=460401152.jpg\" -F 
\"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>460401152.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=460401174.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>460401174.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=462382650.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>462382650.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=462530961.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>462530961.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=465212420.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>465212420.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=468056198.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>468056198.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=476947398.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>476947398.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F 
\"name=479111308.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>479111308.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=481493664.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>481493664.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=483951488.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>483951488.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=483957176.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>483957176.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=483957414.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>483957414.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=498264868.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>498264868.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=502694924.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>502694924.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: 
multipart/form-data\" -F \"name=505543839.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>505543839.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=508065107.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>508065107.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=508066207.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>508066207.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=511584727.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>511584727.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=521811667.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>521811667.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=527961035.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>527961035.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=543126233.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>543126233.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H 
\"Content-Type: multipart/form-data\" -F \"name=549525751.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>549525751.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=551752415.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>551752415.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=564748939.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>564748939.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=565788073.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>565788073.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=578339672.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>578339672.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=578339718.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>578339718.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=578339724.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>578339724.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H 
\"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=583673532.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>583673532.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=583779616.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>583779616.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=587571002.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>587571002.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=587571024.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>587571024.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=593231376.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>593231376.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=593280120.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>593280120.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=593388816.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>593388816.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: 
Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=594243060.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>594243060.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=601053832.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>601053832.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=601053842.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>601053842.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=660548647.jpg\" -F \"labelId=<ID_OF_MTN_LABEL>\" -F \"data=@<PATH_TO_FILE>660548647.jpg\" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples", "language": "curl" } ] } [/block]
{"__v":1,"_id":"57eec7f1cc36920e00bff4cd","api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"body":"**Food Image Model**—This model is used to classify different foods and contains over 500 labels. You classify an image against this model just as you would a custom model; but instead of using the `modelId` of the custom model, you specify a `modelId` of `FoodImageClassifier`.\n\nThis cURL command makes a prediction against the food model.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"sampleLocation=http://metamind.io/images/foodimage.jpg\\\" -F \\\"modelId=FoodImageClassifier\\\" https://api.metamind.io/v1/vision/predict\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nThe model returns a result similar to the following for the pizza image referenced by `foodimage.jpg`.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"probabilities\\\": [\\n    {\\n      \\\"label\\\": \\\"pizza\\\",\\n      \\\"probability\\\": 0.4895147383213043\\n    },\\n    {\\n      \\\"label\\\": \\\"flatbread\\\",\\n      \\\"probability\\\": 0.30357491970062256\\n    },\\n    {\\n      \\\"label\\\": \\\"focaccia\\\",\\n      \\\"probability\\\": 0.10683325678110123\\n    },\\n    {\\n      \\\"label\\\": \\\"frittata\\\",\\n      \\\"probability\\\": 0.05281512811779976\\n    },\\n    {\\n      \\\"label\\\": \\\"pepperoni\\\",\\n      \\\"probability\\\": 0.029621008783578873\\n    }\\n  ],\\n  \\\"object\\\": \\\"predictresponse\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]\n**General Image Model**—This model is used to classify a variety of images and contains thousands of labels. 
You can classify an image against this model just as you would a custom model; but instead of using the `modelId` of the custom model, you specify a `modelId` of `GeneralImageClassifier`.\n\nThis cURL command makes a prediction against the general image model.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"sampleLocation=http://metamind.io/images/generalimage.jpg\\\" -F \\\"modelId=GeneralImageClassifier\\\" https://api.metamind.io/v1/vision/predict\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nThe model return a result similar to the following for the tree frog image referenced by `generalimage.jpg`.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"probabilities\\\": [\\n    {\\n      \\\"label\\\": \\\"tree frog, tree-frog\\\",\\n      \\\"probability\\\": 0.7963114976882935\\n    },\\n    {\\n      \\\"label\\\": \\\"tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui\\\",\\n      \\\"probability\\\": 0.1978749930858612\\n    },\\n    {\\n      \\\"label\\\": \\\"banded gecko\\\",\\n      \\\"probability\\\": 0.001511271228082478\\n    },\\n    {\\n      \\\"label\\\": \\\"African chameleon, Chamaeleo chamaeleon\\\",\\n      \\\"probability\\\": 0.0013212867779657245\\n    },\\n    {\\n      \\\"label\\\": \\\"bullfrog, Rana catesbeiana\\\",\\n      \\\"probability\\\": 0.0011536618694663048\\n    }\\n  ],\\n  \\\"object\\\": \\\"predictresponse\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]","category":"57eec6257a53690e000abc2a","createdAt":"2016-09-30T20:15:45.871Z","excerpt":"The Predictive Vision Service offers two pre-built models that you can use as long as you have a valid JWT token. 
These models are a good way to get started with the API because you can use them to work with and test the API without having to gather data and create your own model.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","next":{"description":"","pages":[]},"order":0,"parentDoc":null,"project":"552d474ea86ee20d00780cd7","slug":"use-pre-built-models","sync_unique":"","title":"Use the Pre-Built Models","type":"basic","updates":[],"user":"573b5a1f37fcf72000a2e683","version":"57c765bda54f9c0e00cec388","childrenPages":[]}

Use the Pre-Built Models

The Predictive Vision Service offers two pre-built models that you can use as long as you have a valid JWT token. These models are a good way to get started with the API because you can use them to work with and test the API without having to gather data and create your own model.

**Food Image Model**—This model classifies different foods and contains over 500 labels. You classify an image against this model just as you would a custom model, but instead of using the `modelId` of a custom model, you specify a `modelId` of `FoodImageClassifier`.

This cURL command makes a prediction against the food model.

```
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "sampleLocation=http://metamind.io/images/foodimage.jpg" -F "modelId=FoodImageClassifier" https://api.metamind.io/v1/vision/predict
```

The model returns a result similar to the following for the pizza image referenced by `foodimage.jpg`.

```
{
  "probabilities": [
    {
      "label": "pizza",
      "probability": 0.4895147383213043
    },
    {
      "label": "flatbread",
      "probability": 0.30357491970062256
    },
    {
      "label": "focaccia",
      "probability": 0.10683325678110123
    },
    {
      "label": "frittata",
      "probability": 0.05281512811779976
    },
    {
      "label": "pepperoni",
      "probability": 0.029621008783578873
    }
  ],
  "object": "predictresponse"
}
```

**General Image Model**—This model classifies a wide variety of images and contains thousands of labels. You classify an image against this model just as you would a custom model, but instead of using the `modelId` of a custom model, you specify a `modelId` of `GeneralImageClassifier`.

This cURL command makes a prediction against the general image model.

```
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "sampleLocation=http://metamind.io/images/generalimage.jpg" -F "modelId=GeneralImageClassifier" https://api.metamind.io/v1/vision/predict
```

The model returns a result similar to the following for the tree frog image referenced by `generalimage.jpg`.

```
{
  "probabilities": [
    {
      "label": "tree frog, tree-frog",
      "probability": 0.7963114976882935
    },
    {
      "label": "tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui",
      "probability": 0.1978749930858612
    },
    {
      "label": "banded gecko",
      "probability": 0.001511271228082478
    },
    {
      "label": "African chameleon, Chamaeleo chamaeleon",
      "probability": 0.0013212867779657245
    },
    {
      "label": "bullfrog, Rana catesbeiana",
      "probability": 0.0011536618694663048
    }
  ],
  "object": "predictresponse"
}
```
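Because both pre-built classifiers share the same predict endpoint and differ only in the `modelId` value, you can score the same image against both in one pass. A minimal sketch, using the sample food image URL from above and assuming `<TOKEN>` is your valid JWT:

```shell
# Score one image against both pre-built models; only modelId changes per call.
IMAGE_URL="http://metamind.io/images/foodimage.jpg"
for MODEL_ID in FoodImageClassifier GeneralImageClassifier; do
  echo "--- $MODEL_ID ---"
  curl -X POST \
    -H "Authorization: Bearer <TOKEN>" \
    -H "Cache-Control: no-cache" \
    -H "Content-Type: multipart/form-data" \
    -F "sampleLocation=$IMAGE_URL" \
    -F "modelId=$MODEL_ID" \
    https://api.metamind.io/v1/vision/predict
done
```

Comparing the two `probabilities` arrays side by side is a quick way to see which pre-built model fits your images before you invest in a custom dataset.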
{"__v":0,"_id":"584ef98d85373a1b00143e16","api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"body":"## Datasets ##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"[Create a Dataset](doc:create-a-dataset)\",\n    \"0-1\": \"curl -X POST -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=**{DATASET_NAME}**\\\" -F \\\"labels=**{LABEL1}**,**{LABEL2}**\\\" https://api.metamind.io/v1/vision/datasets\",\n    \"1-0\": \"[Get a Dataset](doc:get-a-dataset)\",\n    \"2-0\": \"[Get All Datasets](doc:get-all-datasets)\",\n    \"1-1\": \"curl -X GET -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" https://api.metamind.io/v1/vision/datasets/{DATSET_ID}\",\n    \"2-1\": \"curl -X GET -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" https://api.metamind.io/v1/vision/datasets\",\n    \"h-0\": \"Method\",\n    \"h-1\": \"Call\"\n  },\n  \"cols\": 2,\n  \"rows\": 3\n}\n[/block]\n## Labels ##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Method\",\n    \"h-1\": \"Call\",\n    \"0-0\": \"[Create a Label](doc:create-a-label)\",\n    \"1-0\": \"[Get a Label](doc:get-a-label)\",\n    \"0-1\": \"curl -X POST -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=**{LABEL_NAME}**\\\" https://api.metamind.io/v1/vision/datasets/{DATASET_ID}/labels\",\n    \"1-1\": \"curl -X GET -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" https://api.metamind.io/v1/vision/datasets/{DATSET_ID}/labels/{LABEL_ID}\"\n  },\n  \"cols\": 2,\n  \"rows\": 2\n}\n[/block]\n## Examples ##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"[Create an Example](doc:create-an-example)\",\n    \"0-1\": \"curl 
-X POST -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=**{EXAMPLE_NAME}**\\\" -F \\\"labelId={LABEL_ID}\\\" -F \\\"data=@{DIRECTORY/IMAGE_FILE}\\\" https://api.metamind.io/v1/vision/datasets/{DATASET_ID}/examples\",\n    \"1-0\": \"[Get an Example](doc:get-an-example)\",\n    \"1-1\": \"curl -X GET -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" https://api.metamind.io/v1/vision/datasets/{DATSET_ID}/examples/{EXAMPLE_ID}\",\n    \"2-0\": \"[Get All Examples](doc:get-all-examples)\",\n    \"2-1\": \"curl -X GET -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" https://api.metamind.io/v1/vision/datasets/{DATSET_ID}/examples\",\n    \"3-0\": \"[Delete an Example](doc:delete-an-example)\",\n    \"3-1\": \"curl -X DELETE -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" https://api.metamind.io/v1/vision/datasets/{DATSET_ID}/examples/{EXAMPLE_ID}\",\n    \"h-0\": \"Method\",\n    \"h-1\": \"Call\"\n  },\n  \"cols\": 2,\n  \"rows\": 4\n}\n[/block]\n## Training and Models ##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"[Train a Dataset](doc:train-a-dataset)\",\n    \"1-0\": \"[Get Training Status](doc:get-training-status)\",\n    \"2-0\": \"[Get Model Metrics](doc:get-model-metrics)\",\n    \"3-0\": \"[Get All Models](doc:get-all-models)\",\n    \"0-1\": \"curl -X POST -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=**{MODEL_NAME}**\\\" -F \\\"datasetId=**{DATASET_ID}**\\\" https://api.metamind.io/v1/vision/train\",\n    \"1-1\": \"curl -X GET -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" https://api.metamind.io/v1/vision/train/{MODEL_ID}\",\n    \"2-1\": \"curl -X GET -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" 
https://api.metamind.io/v1/vision/models/{MODEL_ID}\",\n    \"3-1\": \"curl -X GET -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" https://api.metamind.io/v1/vision/datasets/{DATSET_ID}/models\",\n    \"h-0\": \"Method\",\n    \"h-1\": \"Call\"\n  },\n  \"cols\": 2,\n  \"rows\": 4\n}\n[/block]\n## Predictions ##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"[Prediction with Image Base64 String](doc:prediction-with-image-base64-string)\",\n    \"1-0\": \"[Prediction with Image File](doc:prediction-with-image-file)\",\n    \"2-0\": \"[Prediction with Image URL](doc:prediction-with-image-url)\",\n    \"0-1\": \"curl -X POST -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"sampleBase64Content=**{BASE64_STRING}**\\\" -F \\\"modelId=**{MODEL_ID}**\\\" https://api.metamind.io/v1/vision/predict\",\n    \"1-1\": \"curl -X POST -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"sampleContent=**{DIRECTORY/IMAGE_FILE}**\\\" -F \\\"modelId=**{MODEL_ID}**\\\" https://api.metamind.io/v1/vision/predict\",\n    \"2-1\": \"curl -X POST -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"sampleLocation=**{IMAGE_URL}**\\\" -F \\\"modelId=**{MODEL_ID}**\\\" https://api.metamind.io/v1/vision/predict\"\n  },\n  \"cols\": 2,\n  \"rows\": 3\n}\n[/block]","category":"57dee8de84019d2000e95af3","createdAt":"2016-12-12T19:25:01.689Z","excerpt":"A summary of the API calls you can make to programmatically work with datasets, labels, examples, models, and 
predictions.","githubsync":"","hidden":false,"isReference":true,"link_external":false,"link_url":"","next":{"pages":[],"description":""},"order":0,"parentDoc":null,"project":"552d474ea86ee20d00780cd7","slug":"predictive-vision-service-api","sync_unique":"","title":"Predictive Vision Service API","type":"basic","updates":[],"user":"573b5a1f37fcf72000a2e683","version":"57c765bda54f9c0e00cec388","childrenPages":[]}

Predictive Vision Service API

A summary of the API calls you can make to programmatically work with datasets, labels, examples, models, and predictions.

## Datasets ##

| Method | Call |
| --- | --- |
| [Create a Dataset](doc:create-a-dataset) | `curl -X POST -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "name={DATASET_NAME}" -F "labels={LABEL1},{LABEL2}" https://api.metamind.io/v1/vision/datasets` |
| [Get a Dataset](doc:get-a-dataset) | `curl -X GET -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets/{DATASET_ID}` |
| [Get All Datasets](doc:get-all-datasets) | `curl -X GET -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets` |

## Labels ##

| Method | Call |
| --- | --- |
| [Create a Label](doc:create-a-label) | `curl -X POST -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "name={LABEL_NAME}" https://api.metamind.io/v1/vision/datasets/{DATASET_ID}/labels` |
| [Get a Label](doc:get-a-label) | `curl -X GET -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets/{DATASET_ID}/labels/{LABEL_ID}` |

## Examples ##

| Method | Call |
| --- | --- |
| [Create an Example](doc:create-an-example) | `curl -X POST -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "name={EXAMPLE_NAME}" -F "labelId={LABEL_ID}" -F "data=@{DIRECTORY/IMAGE_FILE}" https://api.metamind.io/v1/vision/datasets/{DATASET_ID}/examples` |
| [Get an Example](doc:get-an-example) | `curl -X GET -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets/{DATASET_ID}/examples/{EXAMPLE_ID}` |
| [Get All Examples](doc:get-all-examples) | `curl -X GET -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets/{DATASET_ID}/examples` |
| [Delete an Example](doc:delete-an-example) | `curl -X DELETE -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets/{DATASET_ID}/examples/{EXAMPLE_ID}` |

## Training and Models ##

| Method | Call |
| --- | --- |
| [Train a Dataset](doc:train-a-dataset) | `curl -X POST -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "name={MODEL_NAME}" -F "datasetId={DATASET_ID}" https://api.metamind.io/v1/vision/train` |
| [Get Training Status](doc:get-training-status) | `curl -X GET -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/train/{MODEL_ID}` |
| [Get Model Metrics](doc:get-model-metrics) | `curl -X GET -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/models/{MODEL_ID}` |
| [Get All Models](doc:get-all-models) | `curl -X GET -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets/{DATASET_ID}/models` |

## Predictions ##

| Method | Call |
| --- | --- |
| [Prediction with Image Base64 String](doc:prediction-with-image-base64-string) | `curl -X POST -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "sampleBase64Content={BASE64_STRING}" -F "modelId={MODEL_ID}" https://api.metamind.io/v1/vision/predict` |
| [Prediction with Image File](doc:prediction-with-image-file) | `curl -X POST -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "sampleContent=@{DIRECTORY/IMAGE_FILE}" -F "modelId={MODEL_ID}" https://api.metamind.io/v1/vision/predict` |
| [Prediction with Image URL](doc:prediction-with-image-url) | `curl -X POST -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "sampleLocation={IMAGE_URL}" -F "modelId={MODEL_ID}" https://api.metamind.io/v1/vision/predict` |
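Read top to bottom, the calls summarized above describe a complete lifecycle: create a dataset, add labeled examples, train, and predict. The sketch below chains them into one script; the dataset name comes from the Create a Dataset example elsewhere in this guide, while `beach1.jpg` and the `<...>` IDs are placeholders you fill in by hand from each response before running the next step.

```shell
# End-to-end lifecycle from the call summary: dataset -> example -> train -> predict.
TOKEN="<TOKEN>"
API="https://api.metamind.io/v1/vision"

# 1. Create a dataset with two labels (the response contains the dataset and label IDs).
curl -X POST -H "Authorization: Bearer $TOKEN" -H "Cache-Control: no-cache" \
  -H "Content-Type: multipart/form-data" \
  -F "name=Beach and Mountain" -F "labels=beach,mountain" "$API/datasets"

# 2. Attach a labeled example image (IDs come from step 1's response).
curl -X POST -H "Authorization: Bearer $TOKEN" -H "Cache-Control: no-cache" \
  -H "Content-Type: multipart/form-data" \
  -F "name=beach1.jpg" -F "labelId=<LABEL_ID>" -F "data=@beach1.jpg" \
  "$API/datasets/<DATASET_ID>/examples"

# 3. Train the dataset into a model (the response contains the model ID).
curl -X POST -H "Authorization: Bearer $TOKEN" -H "Cache-Control: no-cache" \
  -H "Content-Type: multipart/form-data" \
  -F "name=Beach and Mountain Model" -F "datasetId=<DATASET_ID>" "$API/train"

# 4. Check training status, then classify a new image against the trained model.
curl -X GET -H "Authorization: Bearer $TOKEN" -H "Cache-Control: no-cache" "$API/train/<MODEL_ID>"
curl -X POST -H "Authorization: Bearer $TOKEN" -H "Cache-Control: no-cache" \
  -H "Content-Type: multipart/form-data" \
  -F "sampleLocation=<IMAGE_URL>" -F "modelId=<MODEL_ID>" "$API/predict"
```

In practice you would repeat step 2 once per training image and poll step 4's status call until training completes before predicting.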
{"__v":7,"_id":"57e031ea80aef10e00899160","api":{"auth":"required","examples":{"codes":[{"language":"curl","code":"curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=Beach and Mountain\" -F \"labels=beach,mountain\" https://api.metamind.io/v1/vision/datasets"}]},"params":[],"results":{"codes":[{"name":"","code":"{\n  \"id\": 57,\n  \"name\": \"Beach and Mountain\",\n  \"createdAt\": \"2016-09-15T16:51:41.000+0000\",\n  \"updatedAt\": \"2016-09-15T16:51:41.000+0000\",\n  \"labelSummary\": {\n    \"labels\": [\n      {\n        \"id\": 611,\n        \"datasetId\": 57,\n        \"name\": \"beach\",\n        \"numExamples\": 0\n      },\n    {\n        \"id\": 612,\n        \"datasetId\": 57,\n        \"name\": \"mountain\",\n        \"numExamples\": 0\n      }\n          ]\n  },\n  \"totalExamples\": 0,\n  \"totalLabels\": 2,\n  \"object\": \"dataset\"\n}\n","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","url":"/datasets"},"body":"##Request Parameters##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"`labels`\",\n    \"1-0\": \"`name`\",\n    \"0-1\": \"string\",\n    \"1-1\": \"string\",\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-2\": \"Optional comma-separated list of labels. If specified, creates the labels in the dataset. Maximum number of labels per dataset is 1,000.\",\n    \"0-3\": \"1.0\",\n    \"1-2\": \"Name of the dataset. Maximum length is 180 characters.\",\n    \"1-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 2\n}\n[/block]\nKeep the following points in mind when creating datasets.\n- If you pass labels in when you create a dataset, the label names can’t contain a comma. If you’re adding a label that contains a comma, use the call to create a single label. 
See [Create a Label](doc:create-a-label).\n\n##Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`createdAt`\",\n    \"0-1\": \"date\",\n    \"0-2\": \"Date and time that the dataset was created.\",\n    \"0-3\": \"1.0\",\n    \"1-0\": \"`id`\",\n    \"1-1\": \"long\",\n    \"1-2\": \"Dataset ID.\",\n    \"1-3\": \"1.0\",\n    \"2-0\": \"`labelSummary`\",\n    \"2-1\": \"object\",\n    \"2-2\": \"Contains the `labels` array that contains all the labels for the dataset.\",\n    \"2-3\": \"1.0\",\n    \"3-0\": \"`name`\",\n    \"3-1\": \"string\",\n    \"3-2\": \"Name of the dataset.\",\n    \"3-3\": \"1.0\",\n    \"4-0\": \"`object`\",\n    \"4-1\": \"string\",\n    \"4-2\": \"Object returned; in this case, `dataset`.\",\n    \"4-3\": \"1.0\",\n    \"5-0\": \"`totalExamples`\",\n    \"5-1\": \"int\",\n    \"5-2\": \"Total number of examples in the dataset.\",\n    \"5-3\": \"1.0\",\n    \"6-0\": \"`totalLabels`\",\n    \"6-1\": \"int\",\n    \"6-2\": \"Total number of labels in the dataset.\",\n    \"6-3\": \"1.0\",\n    \"7-0\": \"`updatedAt`\",\n    \"7-1\": \"date\",\n    \"7-2\": \"Date and time that the dataset was last updated.\",\n    \"7-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 8\n}\n[/block]\n## Labels Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`datasetId`\",\n    \"0-1\": \"long\",\n    \"0-2\": \"ID of the dataset that the label belongs to.\",\n    \"0-3\": \"1.0\",\n    \"1-0\": \"`id`\",\n    \"1-1\": \"long\",\n    \"1-2\": \"ID of the label.\",\n    \"1-3\": \"1.0\",\n    \"2-0\": \"`name`\",\n    \"2-1\": \"string\",\n    \"2-2\": \"Name of the label.\",\n    \"2-3\": \"1.0\",\n    \"3-0\": \"`numExamples`\",\n    \"3-1\": \"int\",\n    \"3-2\": \"Number of examples in 
the label.\",\n    \"3-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 4\n}\n[/block]","category":"57dee8de84019d2000e95af3","createdAt":"2016-09-19T18:43:54.306Z","excerpt":"Creates a new dataset and labels, if they're specified.","githubsync":"","hidden":false,"isReference":true,"link_external":false,"link_url":"","order":1,"parentDoc":null,"project":"552d474ea86ee20d00780cd7","slug":"create-a-dataset","sync_unique":"","title":"Create a Dataset","type":"post","updates":[],"user":"573b5a1f37fcf72000a2e683","version":"57c765bda54f9c0e00cec388","childrenPages":[]}

Create a Dataset

Creates a new dataset and labels, if they're specified.

## Request Parameters ##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `labels` | string | Optional comma-separated list of labels. If specified, creates the labels in the dataset. Maximum number of labels per dataset is 1,000. | 1.0 |
| `name` | string | Name of the dataset. Maximum length is 180 characters. | 1.0 |

Keep the following point in mind when creating datasets.

- If you pass labels in when you create a dataset, the label names can’t contain a comma. To add a label that contains a comma, use the call that creates a single label. See [Create a Label](doc:create-a-label).

## Response Body ##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `createdAt` | date | Date and time that the dataset was created. | 1.0 |
| `id` | long | Dataset ID. | 1.0 |
| `labelSummary` | object | Contains the `labels` array that holds all the labels for the dataset. | 1.0 |
| `name` | string | Name of the dataset. | 1.0 |
| `object` | string | Object returned; in this case, `dataset`. | 1.0 |
| `totalExamples` | int | Total number of examples in the dataset. | 1.0 |
| `totalLabels` | int | Total number of labels in the dataset. | 1.0 |
| `updatedAt` | date | Date and time that the dataset was last updated. | 1.0 |

## Labels Response Body ##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `datasetId` | long | ID of the dataset that the label belongs to. | 1.0 |
| `id` | long | ID of the label. | 1.0 |
| `name` | string | Name of the label. | 1.0 |
| `numExamples` | int | Number of examples that have the label. | 1.0 |

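The request limits above are easy to check client-side before you make the call. This is a sketch only; `validate_dataset_request` is a hypothetical helper, not part of the service API:

```python
# Hypothetical client-side validation of the documented Create a Dataset limits:
# dataset name of at most 180 characters, at most 1,000 labels, and no commas in
# label names (because `labels` is passed as one comma-separated string).

def validate_dataset_request(name, labels=None):
    errors = []
    if not name or len(name) > 180:
        errors.append("name must be 1-180 characters")
    labels = labels or []
    if len(labels) > 1000:
        errors.append("a dataset can have at most 1,000 labels")
    for label in labels:
        if "," in label:
            errors.append(
                f"label {label!r} contains a comma; add it with Create a Label instead"
            )
    return errors


print(validate_dataset_request("Beach and Mountain", ["beach", "mountain"]))
print(validate_dataset_request("x" * 200, ["surf, sand"]))
```

An empty list means the request is within the documented limits; otherwise each entry names a violated rule.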
##Get a Dataset##

Returns a single dataset.

##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `createdAt` | date | Date and time that the dataset was created. | 1.0 |
| `id` | long | Dataset ID. | 1.0 |
| `labelSummary` | object | Contains the `labels` array that contains all the labels for the dataset. | 1.0 |
| `name` | string | Name of the dataset. | 1.0 |
| `object` | string | Object returned; in this case, `dataset`. | 1.0 |
| `totalExamples` | int | Total number of examples in the dataset. | 1.0 |
| `totalLabels` | int | Total number of labels in the dataset. | 1.0 |
| `updatedAt` | date | Date and time that the dataset was last updated. | 1.0 |

##Labels Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `datasetId` | long | ID of the dataset that the label belongs to. | 1.0 |
| `id` | long | ID of the label. | 1.0 |
| `name` | string | Name of the label. | 1.0 |
| `numExamples` | int | Number of examples in the label. | 1.0 |

Definition

`GET https://api.metamind.io/v1/vision/datasets/<DATASET_ID>`

Examples

```curl
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets/57
```

Result Format

```json
{
  "id": 57,
  "name": "Beach and Mountain",
  "createdAt": "2016-09-15T16:51:41.000+0000",
  "updatedAt": "2016-09-15T16:51:41.000+0000",
  "labelSummary": {
    "labels": [
      {
        "id": 612,
        "datasetId": 57,
        "name": "beach",
        "numExamples": 49
      },
      {
        "id": 611,
        "datasetId": 57,
        "name": "mountain",
        "numExamples": 50
      }
    ]
  },
  "totalExamples": 99,
  "totalLabels": 2,
  "object": "dataset"
}
```

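The sample response above carries redundant summary fields, which makes it easy to sanity-check in client code: `totalExamples` should equal the sum of the per-label `numExamples` values, and `totalLabels` the length of the `labels` array. A sketch (the JSON is the documented sample, abridged to the fields used here):

```python
import json

# Documented sample response for GET /datasets/<DATASET_ID>, abridged.
response = json.loads("""
{
  "id": 57,
  "name": "Beach and Mountain",
  "labelSummary": {
    "labels": [
      {"id": 612, "datasetId": 57, "name": "beach", "numExamples": 49},
      {"id": 611, "datasetId": 57, "name": "mountain", "numExamples": 50}
    ]
  },
  "totalExamples": 99,
  "totalLabels": 2,
  "object": "dataset"
}
""")

# Cross-check the summary fields against the labels array.
counts = {l["name"]: l["numExamples"] for l in response["labelSummary"]["labels"]}
assert sum(counts.values()) == response["totalExamples"]
assert len(counts) == response["totalLabels"]
print(counts)
```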
##Get All Datasets##

Returns a list of datasets and their labels that were created using the specified API key. The response is sorted by dataset ID.

##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `data` | array | Array of `dataset` objects. | 1.0 |
| `object` | string | Object returned; in this case, `list`. | 1.0 |

##Dataset Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `createdAt` | date | Date and time that the dataset was created. | 1.0 |
| `id` | long | Dataset ID. | 1.0 |
| `labelSummary` | object | Contains the `labels` array that contains all the labels for the dataset. | 1.0 |
| `name` | string | Name of the dataset. | 1.0 |
| `object` | string | Object returned; in this case, `dataset`. | 1.0 |
| `updatedAt` | date | Date and time that the dataset was last updated. | 1.0 |

##Labels Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `datasetId` | long | ID of the dataset that the label belongs to. | 1.0 |
| `id` | long | ID of the label. | 1.0 |
| `name` | string | Name of the label. | 1.0 |
| `numExamples` | int | Number of examples in the label. | 1.0 |

Definition

`GET https://api.metamind.io/v1/vision/datasets`

Examples

```curl
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets
```

Result Format

```json
{
  "object": "list",
  "data": [
    {
      "id": 57,
      "name": "Beach and Mountain",
      "updatedAt": "2016-09-09T22:39:22.000+0000",
      "createdAt": "2016-09-09T22:39:22.000+0000",
      "labelSummary": {
        "labels": [
          {
            "id": 36,
            "datasetId": 57,
            "name": "beach",
            "numExamples": 49
          },
          {
            "id": 37,
            "datasetId": 57,
            "name": "mountain",
            "numExamples": 50
          }
        ]
      },
      "object": "dataset"
    },
    {
      "id": 58,
      "name": "Brain Scans",
      "updatedAt": "2016-09-24T21:35:27.000+0000",
      "createdAt": "2016-09-24T21:35:27.000+0000",
      "labelSummary": {
        "labels": [
          {
            "id": 122,
            "datasetId": 58,
            "name": "healthy",
            "numExamples": 5064
          },
          {
            "id": 123,
            "datasetId": 58,
            "name": "unhealthy",
            "numExamples": 5080
          }
        ]
      },
      "object": "dataset"
    }
  ]
}
```

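Because the `data` array arrives sorted by dataset ID, client code can index it directly without re-sorting. A parsing sketch using an abridged version of the documented sample response:

```python
import json

# Documented sample list response, abridged: GET /datasets returns an object
# with "object": "list" and a "data" array of dataset objects, sorted by ID.
listing = json.loads("""
{
  "object": "list",
  "data": [
    {"id": 57, "name": "Beach and Mountain",
     "labelSummary": {"labels": [{"name": "beach", "numExamples": 49},
                                 {"name": "mountain", "numExamples": 50}]},
     "object": "dataset"},
    {"id": 58, "name": "Brain Scans",
     "labelSummary": {"labels": [{"name": "healthy", "numExamples": 5064},
                                 {"name": "unhealthy", "numExamples": 5080}]},
     "object": "dataset"}
  ]
}
""")

# Build a quick index of dataset name -> label names.
index = {d["name"]: [l["name"] for l in d["labelSummary"]["labels"]]
         for d in listing["data"]}
print(index)
```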
##Delete a Dataset##

Deletes the specified dataset and associated labels, examples, and models.

This call doesn’t return a response body. Instead, it returns HTTP status code 204.

Definition

`DELETE https://api.metamind.io/v1/vision/datasets/<DATASET_ID>`

Examples

```curl
curl -X DELETE -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets/108
```

Result Format

The response body is empty; success is indicated by HTTP status code 204.

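Because a successful delete returns 204 with no body, client code shouldn’t attempt to parse JSON from the response. A hypothetical status-handling helper (not part of the service API):

```python
# Hypothetical handling for DELETE /datasets/<DATASET_ID>: a 204 response has
# no body, so don't try to decode JSON from a successful delete.

def delete_succeeded(status_code):
    if status_code == 204:
        return True   # deleted; the response body is empty by design
    if status_code == 400:
        return False  # the documented error status
    raise RuntimeError(f"unexpected status code: {status_code}")
```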
##Create a Label##

Creates a label in the specified dataset.

##Request Parameters##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `name` | string | Name of the label. Must be unique in the dataset; otherwise, you receive an HTTP 400 error. Maximum length is 180 characters. | 1.0 |

You can add a label only before the dataset has been successfully trained. If the dataset has an associated model with a status of `QUEUED`, `RUNNING`, or `SUCCEEDED`, adding a label returns an error. If the dataset has only models with a `FAILED` status, you can continue to add labels.

Keep the following points in mind when creating labels.

- You can’t delete a label. To change the labels in a dataset, recreate the dataset with the correct labels.
- The label name must be unique within the dataset. Otherwise, the call returns an HTTP 400 status.
- We recommend a maximum of 1,000 labels per dataset.
- A dataset must have a minimum of two labels to create a model.

##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `datasetId` | long | ID of the dataset that the label belongs to. | 1.0 |
| `id` | long | ID of the label. | 1.0 |
| `name` | string | Name of the label. | 1.0 |
| `object` | string | Object returned; in this case, `label`. | 1.0 |

Definition

`POST https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/labels`

Examples

```curl
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "name=beach" https://api.metamind.io/v1/vision/datasets/57/labels
```

Result Format

```json
{
  "id": 614,
  "datasetId": 57,
  "name": "beach",
  "object": "label"
}
```

If training has already started, the call returns HTTP status 400:

```json
{
  "message": "Adding labels to datasets after training has started is not supported"
}
```

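Since labels can’t be deleted, it’s worth enforcing the rules above before calling the API. A hypothetical pre-flight helper (not part of the service API):

```python
# Hypothetical pre-flight checks for Create a Label, based on the documented
# rules: the name must be unique within the dataset and at most 180 characters,
# and at most 1,000 labels per dataset is recommended.

def check_new_label(name, existing_names):
    problems = []
    if not name or len(name) > 180:
        problems.append("label name must be 1-180 characters")
    if name in existing_names:
        problems.append("label name must be unique within the dataset")
    if len(existing_names) >= 1000:
        problems.append("dataset already has 1,000 labels (recommended maximum)")
    return problems


print(check_new_label("beach", {"mountain"}))
```

An empty result means the label is safe to create; remember that a model also needs at least two labels in the dataset before training.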
##Get a Label##

Returns the label for the specified ID.

##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `datasetId` | long | ID of the dataset that the label belongs to. | 1.0 |
| `id` | long | ID of the label. | 1.0 |
| `name` | string | Name of the label. | 1.0 |
| `object` | string | Object returned; in this case, `label`. | 1.0 |

Definition

`GET https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/labels/<LABEL_ID>`

Examples

```curl
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets/57/labels/614
```

Result Format

```json
{
  "id": 614,
  "datasetId": 57,
  "name": "beach",
  "object": "label"
}
```

##Create an Example##

Adds an example to the specified label in a dataset.

> **Tip:** The supported image file types are PNG, JPG, JPEG, PGM (image/x-portable-graymap), and PPM (image/x-portable-pixmap).

##Request Parameters##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `data` | string | Location of the local image file to upload. | 1.0 |
| `labelId` | long | ID of the label to add the example to. | 1.0 |
| `name` | string | Name of the example. Maximum length is 180 characters. | 1.0 |

You can add or delete an example only before the dataset has been successfully trained. Adding or deleting an example returns an error if the dataset has an associated model with a status of `QUEUED`, `RUNNING`, or `SUCCEEDED`. If the dataset has only models with a `FAILED` status, you can continue to add examples.

Keep the following points in mind when creating examples.

- After you add an example, you can return information about it, but you can’t access the image.
- Add an example to only one label.
- The maximum image file size is 5 MB.
- We recommend a minimum of 100 examples per label.
- The maximum number of examples per dataset is 50,000, and the total size of all images can’t exceed 2.5 GB.

##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `createdAt` | date | Date and time that the example was created. | 1.0 |
| `id` | long | ID of the example. | 1.0 |
| `label` | object | Contains information about the label that the example is associated with. | 1.0 |
| `name` | string | Name of the example. | 1.0 |
| `object` | string | Object returned; in this case, `example`. | 1.0 |

##Label Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `datasetId` | long | ID of the dataset that the example’s label belongs to. | 1.0 |
| `id` | long | ID of the example’s label. | 1.0 |
| `name` | string | Name of the example’s label. | 1.0 |

Definition

`POST https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples`

Examples

```curl
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "name=77880132.jpg" -F "labelId=614" -F "data=@C:\Mountains vs Beach\Beaches\77880132.jpg" https://api.metamind.io/v1/vision/datasets/57/examples
```

Result Format

```json
{
  "id": 43887,
  "name": "77880132.jpg",
  "createdAt": "2016-09-15T23:18:13.000+0000",
  "label": {
    "id": 614,
    "datasetId": 57,
    "name": "beach"
  },
  "object": "example"
}
```

Definition

{{ api_url }}{{ page_api_url }}

Examples


Result Format



[block:callout] { "type": "info", "title": "Tip", "body": "The image file types supported are PNG, JPG, JPEG, PGM (image/x-portable-graymap), and PPM (image/x-portable-pixmap)." } [/block] ##Request Parameters## [block:parameters] { "data": { "0-0": "`data`", "0-1": "string", "h-0": "Name", "h-1": "Type", "h-2": "Description", "h-3": "Available Version", "0-2": "Location of the local image file to upload.", "0-3": "1.0", "2-0": "`name`", "1-0": "`labelId`", "1-1": "long", "2-1": "string", "2-2": "Name of the example. Maximum length is 180 characters.", "1-2": "ID of the label to add the example to.", "1-3": "1.0", "2-3": "1.0" }, "cols": 4, "rows": 3 } [/block] You can add or delete an example only before the dataset has been successfully trained. Adding or deleting an example returns an error if the dataset has an associated model with a status of `QUEUED`, `RUNNING`, or `SUCCEEDED`. If the dataset has only models with a `FAILED` status, you can continue to add examples. Keep the following points in mind when creating examples. - After you add an example, you can return information about it, but you can’t access the image. - Add an example to only one label. - The maximum image file size is 5 MB. - We recommend a minimum of 100 examples per label. - The maximum number of examples per dataset is 50,000 and not to exceed 2.5 GB total for all images. 
##Response Body## [block:parameters] { "data": { "h-0": "Name", "h-1": "Type", "h-2": "Description", "h-3": "Available Version", "1-0": "`id`", "1-1": "long", "1-2": "ID of the example.", "1-3": "1.0", "3-0": "`name`", "3-1": "string", "3-2": "Name of the example.", "3-3": "1.0", "4-0": "`object`", "4-1": "string", "4-2": "Object returned; in this case, `example`.", "4-3": "1.0", "0-0": "`createdAt`", "0-1": "date", "0-2": "Date and time that the example was created.", "0-3": "1.0", "2-0": "`label`", "2-1": "object", "2-2": "Contains information about the label that the example is associated with.", "2-3": "1.0" }, "cols": 4, "rows": 5 } [/block] ##Label Response Body## [block:parameters] { "data": { "h-0": "Name", "h-1": "Type", "h-2": "Description", "h-3": "Available Version", "0-0": "`datasetId`", "0-1": "long", "1-0": "`id`", "1-1": "long", "2-0": "`name`", "2-1": "string", "0-2": "ID of the dataset that the example’s label belongs to.", "1-2": "ID of the example’s label.", "2-2": "Name of the example’s label.", "0-3": "1.0", "1-3": "1.0", "2-3": "1.0" }, "cols": 4, "rows": 3 } [/block]
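The upload itself is a multipart/form-data POST, and the constraints listed above can be checked on the client before the request is sent. A minimal sketch in Python (the helper and its messages are our own illustration, not part of the service):

```python
import os

# Limits taken from the documentation above; the helper itself is illustrative.
MAX_FILE_BYTES = 5 * 1024 * 1024          # maximum image file size: 5 MB
MAX_NAME_LENGTH = 180                     # maximum length of the example name
SUPPORTED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".pgm", ".ppm"}

def validate_example(path, name, size_bytes):
    """Return a list of constraint violations for a prospective example.

    size_bytes is passed in so the check can run without touching disk;
    in real code you would use os.path.getsize(path).
    """
    problems = []
    ext = os.path.splitext(path)[1].lower()
    if ext not in SUPPORTED_EXTENSIONS:
        problems.append(f"unsupported file type: {ext or 'none'}")
    if size_bytes > MAX_FILE_BYTES:
        problems.append(f"file is {size_bytes} bytes; maximum is {MAX_FILE_BYTES}")
    if len(name) > MAX_NAME_LENGTH:
        problems.append(f"name is {len(name)} characters; maximum is {MAX_NAME_LENGTH}")
    return problems
```

For example, `validate_example("beach1.jpg", "beach1.jpg", 120000)` returns an empty list, while an oversized PPM with a 200-character name returns two violations.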
Get an Example

Returns the example for the specified ID.

##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `createdAt` | date | Date and time that the example was created. | 1.0 |
| `id` | long | ID of the example. | 1.0 |
| `label` | object | Contains information about the label that the example is associated with. | 1.0 |
| `name` | string | Name of the example. | 1.0 |
| `object` | string | Object returned; in this case, `example`. | 1.0 |

##Label Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `datasetId` | long | ID of the dataset that the example’s label belongs to. | 1.0 |
| `id` | long | ID of the example’s label. | 1.0 |
| `name` | string | Name of the example’s label. | 1.0 |

Definition

GET https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples/<EXAMPLE_ID>

Examples

```bash
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" "https://api.metamind.io/v1/vision/datasets/57/examples/43887"
```

Result Format

```json
{
  "id": 43887,
  "name": "77880132.jpg",
  "createdAt": "2016-09-15T23:18:13.000+0000",
  "label": {
    "id": 614,
    "datasetId": 57,
    "name": "beach"
  },
  "object": "example"
}
```
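The response maps naturally onto small typed records. An illustrative Python sketch (the classes and parser are ours; the field names come from the response body tables above), run against the sample response:

```python
import json
from dataclasses import dataclass

# Field names mirror the response body tables; the classes are illustrative.
@dataclass
class Label:
    id: int
    datasetId: int
    name: str

@dataclass
class Example:
    id: int
    name: str
    createdAt: str
    label: Label

def parse_example(payload: str) -> Example:
    """Deserialize a Get an Example response into typed objects."""
    raw = json.loads(payload)
    return Example(
        id=raw["id"],
        name=raw["name"],
        createdAt=raw["createdAt"],
        label=Label(**raw["label"]),
    )

# Sample response from the documentation above.
response = """{
  "id": 43887,
  "name": "77880132.jpg",
  "createdAt": "2016-09-15T23:18:13.000+0000",
  "label": {"id": 614, "datasetId": 57, "name": "beach"},
  "object": "example"
}"""
example = parse_example(response)
```

After parsing, `example.label.name` is `"beach"` and `example.id` is `43887`.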
Get All Examples

Returns all the examples for the specified dataset.

##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `data` | array | Array of `example` objects. | 1.0 |
| `object` | string | Object returned; in this case, `list`. | 1.0 |

##Example Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `createdAt` | date | Date and time that the example was created. | 1.0 |
| `id` | long | ID of the example. | 1.0 |
| `label` | object | The label that the example is associated with. | 1.0 |
| `name` | string | Name of the example. | 1.0 |
| `object` | string | Object returned; in this case, `example`. | 1.0 |

##Label Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `datasetId` | long | ID of the dataset that the label belongs to. | 1.0 |
| `id` | long | ID of the label. | 1.0 |
| `name` | string | Name of the label. | 1.0 |

Definition

GET https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples

Examples

```bash
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets/57/examples
```

Result Format

```json
{
  "object": "list",
  "data": [
    {
      "id": 43888,
      "name": "659803277.jpg",
      "createdAt": "2016-09-16T17:14:38.000+0000",
      "label": {
        "id": 618,
        "datasetId": 57,
        "name": "beach"
      },
      "object": "example"
    },
    {
      "id": 43889,
      "name": "661860605.jpg",
      "createdAt": "2016-09-16T17:14:42.000+0000",
      "label": {
        "id": 618,
        "datasetId": 57,
        "name": "beach"
      },
      "object": "example"
    },
    {
      "id": 43890,
      "name": "660548647.jpg",
      "createdAt": "2016-09-16T17:15:25.000+0000",
      "label": {
        "id": 619,
        "datasetId": 57,
        "name": "mountain"
      },
      "object": "example"
    },
    {
      "id": 43891,
      "name": "578339672.jpg",
      "createdAt": "2016-09-16T17:15:29.000+0000",
      "label": {
        "id": 619,
        "datasetId": 57,
        "name": "mountain"
      },
      "object": "example"
    }
  ]
}
```
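Because we recommend a minimum of 100 examples per label, it can be useful to tally a Get All Examples response by label before you train the dataset. An illustrative sketch (the helper name is ours), run against a trimmed copy of the sample response:

```python
from collections import Counter

def examples_per_label(response):
    """Count examples per label name in a Get All Examples response dict."""
    return Counter(ex["label"]["name"] for ex in response["data"])

# Trimmed version of the sample response above.
response = {
    "object": "list",
    "data": [
        {"id": 43888, "label": {"id": 618, "datasetId": 57, "name": "beach"}},
        {"id": 43889, "label": {"id": 618, "datasetId": 57, "name": "beach"}},
        {"id": 43890, "label": {"id": 619, "datasetId": 57, "name": "mountain"}},
        {"id": 43891, "label": {"id": 619, "datasetId": 57, "name": "mountain"}},
    ],
}

counts = examples_per_label(response)
# Labels still below the recommended 100 examples.
underfilled = {name for name, n in counts.items() if n < 100}
```

Here both `beach` and `mountain` have only two examples each, so both would be flagged as underfilled.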
Delete an Example

Deletes the specified example.

This call doesn’t return a response body. Instead, it returns an HTTP status code 204.

Definition

DELETE https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples/<EXAMPLE_ID>

Examples

```bash
curl -X DELETE -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets/108/examples/43555
```

Result Format

A successful call returns HTTP status code 204 with an empty body. If training has already started for the dataset, the call returns HTTP status code 400:

```json
{
  "message": "Deleting examples from datasets after training has started is not supported"
}
```
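A small sketch of how a client might interpret the two documented outcomes (the function is our own illustration; only the status codes and the 400 message shape come from this page):

```python
def interpret_delete(status_code, body):
    """Map a delete response to a short result string.

    204 means the example was deleted; 400 means the request was rejected,
    for instance because training has already started for the dataset.
    """
    if status_code == 204:
        return "deleted"
    if status_code == 400:
        return f"rejected: {body.get('message', 'unknown error')}"
    return f"unexpected status {status_code}"
```

For example, a 400 response with the message shown above yields `"rejected: Deleting examples from datasets after training has started is not supported"`.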
Train a Dataset

Trains a dataset and creates a model.

##Request Parameters##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `datasetId` | long | ID of the dataset to train. | 1.0 |
| `epochs` | int | Optional. Number of training iterations for the neural network. Valid values are 1–100. If not specified, the default is calculated based on the dataset size. The larger the number, the longer the training takes to complete. | 1.0 |
| `learningRate` | float | Optional. Specifies how much the gradient affects the optimization of the model at each time step. Use this parameter to tune your model. Valid values are between 0.0001 and 0.01. If not specified, the default is 0.0001. We recommend keeping this value between 0.0001 and 0.001. | 1.0 |
| `name` | string | Name of the model. Maximum length is 180 characters. | 1.0 |

If you’re unsure which values to set for the `epochs` and `learningRate` parameters, we recommend that you omit them and use the defaults.

##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `createdAt` | date | Date and time that the model was created. | 1.0 |
| `datasetId` | long | ID of the dataset trained to create the model. | 1.0 |
| `epochs` | int | The epochs used during training. | 1.0 |
| `learningRate` | float | The learning rate used during training. | 1.0 |
| `modelId` | string | ID of the model. Contains letters and numbers. | 1.0 |
| `name` | string | Name of the model. | 1.0 |
| `object` | string | Object returned; in this case, `training`. | 1.0 |
| `progress` | int | How far the training job has progressed. Values are between 0 and 1. | 1.0 |
| `queuePosition` | int | Where the training job is in the queue. This field appears in the response only if the status is `QUEUED`. | 1.0 |
| `status` | string | Status of the training job. Valid values are: `QUEUED` (the training job is in the queue), `RUNNING` (the training job is running), `SUCCEEDED` (the training job succeeded, and the model was created), and `FAILED` (the training job failed). | 1.0 |
| `updatedAt` | date | Date and time that the model was last updated. | 1.0 |

Definition

POST https://api.metamind.io/v1/vision/train

Examples

```bash
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "name=Beach Mountain Model" -F "datasetId=57" https://api.metamind.io/v1/vision/train
```

Result Format

```json
{
  "datasetId": 57,
  "name": "Beach and Mountain Model",
  "status": "QUEUED",
  "Progress": 0,
  "createdAt": "2016-09-16T18:03:21.000+0000",
  "updatedAt": "2016-09-16T18:03:21.000+0000",
  "learningRate": 0.001,
  "epochs": 3,
  "queuePosition": 1,
  "object": "training",
  "modelId": "7JXCXTRXTMNLJCEF2DR5CJ46QU"
}
```
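The documented ranges for `epochs` and `learningRate` can be enforced on the client before a training job is submitted. A small sketch (the function is our own illustration; the parameter names and limits come from the request table above):

```python
def validate_training_request(name, dataset_id, epochs=None, learning_rate=None):
    """Check training parameters against the documented ranges.

    Returns a list of problems; an empty list means the request looks valid.
    Omitted optional parameters fall back to server-side defaults, which the
    documentation recommends when you are unsure which values to set.
    """
    problems = []
    if not name or len(name) > 180:
        problems.append("name is required and must be at most 180 characters")
    if dataset_id is None:
        problems.append("datasetId is required")
    if epochs is not None and not 1 <= epochs <= 100:
        problems.append("epochs must be between 1 and 100")
    if learning_rate is not None and not 0.0001 <= learning_rate <= 0.01:
        problems.append("learningRate must be between 0.0001 and 0.01")
    return problems
```

For example, `validate_training_request("Beach Mountain Model", 57)` returns no problems, while `epochs=0` or `learning_rate=0.5` is flagged as out of range.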
{"__v":0,"_id":"57e7199e68bab10e006c5254","api":{"settings":"","results":{"codes":[{"name":"","code":"{\n  \"datasetId\": 57,\n  \"name\": \"Beach and Mountain Model\",\n  \"status\": \"SUCCEEDED\",\n  \"Progress\": 1,\n  \"createdAt\": \"2016-09-16T18:03:21.000+0000\",\n  \"updatedAt\": \"2016-09-16T18:03:21.000+0000\",\n  \"learningRate\": 0.001,\n  \"epochs\": 3,\n  \"object\": \"training\",\n  \"modelId\": \"7JXCXTRXTMNLJCEF2DR5CJ46QU\"\n}","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"examples":{"codes":[{"language":"curl","code":"curl -X GET -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" https://api.metamind.io/v1/vision/train/7JXCXTRXTMNLJCEF2DR5CJ46QU"}]},"auth":"required","params":[],"url":"/train/<MODEL_ID>"},"body":"##Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"1-0\": \"`datasetId`\",\n    \"1-1\": \"long\",\n    \"1-2\": \"ID of the dataset trained to create the model.\",\n    \"1-3\": \"1.0\",\n    \"6-0\": \"`name`\",\n    \"6-1\": \"string\",\n    \"6-2\": \"Name of the model.\",\n    \"6-3\": \"1.0\",\n    \"7-0\": \"`object`\",\n    \"7-1\": \"string\",\n    \"7-2\": \"Object returned; in this case, `training`.\",\n    \"7-3\": \"1.0\",\n    \"0-0\": \"`createdAt`\",\n    \"0-1\": \"date\",\n    \"0-2\": \"Date and time that the model was created.\",\n    \"0-3\": \"1.0\",\n    \"5-0\": \"`modelId`\",\n    \"5-1\": \"string\",\n    \"5-2\": \"ID of the model. 
Contains letters and numbers.\",\n    \"5-3\": \"1.0\",\n    \"2-0\": \"`epochs`\",\n    \"2-1\": \"int\",\n    \"2-2\": \"The epochs used during training.\",\n    \"2-3\": \"1.0\",\n    \"4-0\": \"`learningRate`\",\n    \"4-1\": \"float\",\n    \"4-2\": \"The learning rate used during training.\",\n    \"4-3\": \"1.0\",\n    \"8-0\": \"`progress`\",\n    \"8-1\": \"int\",\n    \"8-2\": \"How far the training job has progressed. Values are between 0–1.\",\n    \"8-3\": \"1.0\",\n    \"9-0\": \"`queuePosition`\",\n    \"9-1\": \"int\",\n    \"9-2\": \"Where the training job is in the queue. This field appears in the response only  if the status is `QUEUED`.\",\n    \"9-3\": \"1.0\",\n    \"10-0\": \"`status`\",\n    \"10-1\": \"string\",\n    \"10-2\": \"Status of the training job. Valid values are:\\n- `QUEUED`—The training job is in the queue.\\n- `RUNNING`—The training job is running.\\n- `SUCCEEDED`—The training job succeeded, and you can use the model.\\n- `FAILED`—The training job failed.\",\n    \"11-0\": \"`updatedAt`\",\n    \"11-1\": \"date\",\n    \"11-2\": \"Date and time that the model was last updated.\",\n    \"10-3\": \"1.0\",\n    \"11-3\": \"1.0\",\n    \"3-0\": \"`failureMsg`\",\n    \"3-1\": \"string\",\n    \"3-2\": \"Reason the dataset training failed. Returned only if the training status is `FAILED`.\",\n    \"3-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 12\n}\n[/block]","category":"57dee8de84019d2000e95af3","createdAt":"2016-09-25T00:26:06.156Z","excerpt":"Returns the status of a training job. Use the progress field to determine how far the training has progressed. 
When training completes successfully, the status is `SUCCEEDED` and the progress is 1.","githubsync":"","hidden":false,"isReference":true,"link_external":false,"link_url":"","order":12,"parentDoc":null,"project":"552d474ea86ee20d00780cd7","slug":"get-training-status","sync_unique":"","title":"Get Training Status","type":"get","updates":[],"user":"573b5a1f37fcf72000a2e683","version":"57c765bda54f9c0e00cec388","childrenPages":[]}

Get Training Status (GET)

Returns the status of a training job. Use the progress field to determine how far the training has progressed. When training completes successfully, the status is `SUCCEEDED` and the progress is 1.

##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `createdAt` | date | Date and time that the model was created. | 1.0 |
| `datasetId` | long | ID of the dataset trained to create the model. | 1.0 |
| `epochs` | int | The number of epochs used during training. | 1.0 |
| `failureMsg` | string | Reason the dataset training failed. Returned only if the training status is `FAILED`. | 1.0 |
| `learningRate` | float | The learning rate used during training. | 1.0 |
| `modelId` | string | ID of the model. Contains letters and numbers. | 1.0 |
| `name` | string | Name of the model. | 1.0 |
| `object` | string | Object returned; in this case, `training`. | 1.0 |
| `progress` | int | How far the training job has progressed. Values are between 0 and 1. | 1.0 |
| `queuePosition` | int | Where the training job is in the queue. This field appears in the response only if the status is `QUEUED`. | 1.0 |
| `status` | string | Status of the training job. Valid values: `QUEUED` (the job is in the queue), `RUNNING` (the job is running), `SUCCEEDED` (the job succeeded, and you can use the model), `FAILED` (the job failed). | 1.0 |
| `updatedAt` | date | Date and time that the model was last updated. | 1.0 |

Definition

GET https://api.metamind.io/v1/vision/train/<MODEL_ID>

Examples

curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/train/7JXCXTRXTMNLJCEF2DR5CJ46QU
Result Format

    {
      "datasetId": 57,
      "name": "Beach and Mountain Model",
      "status": "SUCCEEDED",
      "progress": 1,
      "createdAt": "2016-09-16T18:03:21.000+0000",
      "updatedAt": "2016-09-16T18:03:21.000+0000",
      "learningRate": 0.001,
      "epochs": 3,
      "object": "training",
      "modelId": "7JXCXTRXTMNLJCEF2DR5CJ46QU"
    }
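Because training runs asynchronously, clients typically poll this endpoint until `status` reaches a terminal value. A minimal Python sketch using only the standard library (`training_finished` and `wait_for_training` are illustrative helper names, not part of the API):

```python
import json
import time
import urllib.request

API_ROOT = "https://api.metamind.io/v1/vision"

def training_finished(status_response):
    # SUCCEEDED and FAILED are terminal; QUEUED and RUNNING mean
    # the job is still in flight.
    return status_response.get("status") in ("SUCCEEDED", "FAILED")

def wait_for_training(token, model_id, poll_seconds=10):
    # Poll GET /train/<MODEL_ID> until the job reaches a terminal status.
    headers = {"Authorization": "Bearer " + token, "Cache-Control": "no-cache"}
    while True:
        req = urllib.request.Request(API_ROOT + "/train/" + model_id,
                                     headers=headers)
        with urllib.request.urlopen(req) as resp:
            body = json.load(resp)
        if training_finished(body):
            return body
        time.sleep(poll_seconds)
```

Checking the returned `failureMsg` after a `FAILED` status tells you why the job did not complete.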
{"__v":1,"_id":"57e71b1d5f33650e00763876","api":{"auth":"required","examples":{"codes":[{"code":"curl -X GET -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" https://api.metamind.io/v1/vision/models/7JXCXTRXTMNLJCEF2DR5CJ46QU","language":"curl"}]},"params":[],"results":{"codes":[{"status":200,"language":"json","code":"{\n  \"metricsData\": {\n    \"f1\": [\n      0.9090909090909092,\n      0.9411764705882352\n    ],\n    \"labels\": [\n      \"beach\",\n      \"mountain\"\n    ],\n    \"testAccuracy\": 0.9286,\n    \"trainingLoss\": 0.021,\n    \"confusionMatrix\": [\n      [\n        5,\n        0\n      ],\n      [\n        1,\n        8\n      ]\n    ],\n    \"trainingAccuracy\": 0.9941\n  },\n  \"createdAt\": \"2016-09-16T18:04:59.000+0000\",\n  \"id\": \"7JXCXTRXTMNLJCEF2DR5CJ46QU\",\n  \"object\": \"metrics\"\n}","name":""},{"status":400,"language":"json","code":"{\n  \"message\": \"Train job not yet completed successfully\"\n}","name":""}]},"settings":"","url":"/models/<MODEL_ID>"},"body":"##Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`createdAt`\",\n    \"0-1\": \"date\",\n    \"0-2\": \"Date and time that the model was created.\",\n    \"0-3\": \"1.0\",\n    \"1-0\": \"`id`\",\n    \"1-1\": \"string\",\n    \"1-2\": \"ID of the model. 
Contains letters and numbers.\",\n    \"1-3\": \"1.0\",\n    \"2-0\": \"`metricsData`\",\n    \"2-1\": \"object\",\n    \"2-2\": \"Model metrics values.\",\n    \"2-3\": \"1.0\",\n    \"3-0\": \"`object`\",\n    \"3-1\": \"string\",\n    \"3-2\": \"Object returned; in this case, `metrics`.\",\n    \"3-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 4\n}\n[/block]\n##MetricsData Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`confusionMatrix`\",\n    \"0-1\": \"array\",\n    \"0-2\": \"Array of integers that contains the correct and incorrect classifications for each label in the dataset based on testing done during the training process.\",\n    \"0-3\": \"1.0\",\n    \"1-0\": \"`f1`\",\n    \"1-1\": \"array\",\n    \"1-2\": \"Array of floats that contains the weighted average of precision and recall for each label in the dataset. The corresponding label for each value in this array can be found in the `labels` array. For example, the first f1 score in the `f1` array corresponds to the first label in the `labels` array.\",\n    \"1-3\": \"1.0\",\n    \"3-0\": \"`testAccuracy`\",\n    \"3-1\": \"float\",\n    \"3-2\": \"From your initial dataset 10% of your data is removed and not used to train your classifier. This 10% is then sent to the trained classifier for prediction. How often the correct prediction is made with this 10% is reported as testAccuracy.\",\n    \"3-3\": \"1.0\",\n    \"4-0\": \"`trainingAccuracy`\",\n    \"4-1\": \"float\",\n    \"4-2\": \"From your initial dataset 90% of your data is left after the testAccuracy set is removed. This 90% is then sent to the trained classifier for prediction. 
How often the correct prediction is made with this 90% is reported as trainingAccuracy.\",\n    \"4-3\": \"1.0\",\n    \"5-0\": \"`trainingLoss`\",\n    \"5-1\": \"float\",\n    \"5-2\": \"Summary of the errors made in predictions using the training and validation datasets. The lower the number value, the more accurate the model.\",\n    \"5-3\": \"1.0\",\n    \"2-0\": \"`labels`\",\n    \"2-1\": \"array\",\n    \"2-2\": \"Array of strings that contains the dataset labels. These labels correspond to the values in the `f1` array and the `confusionMatrix` array.\",\n    \"2-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 6\n}\n[/block]\nUse the `labels` array and the `confusionMatrix` array to build the confusion matrix for a model. The labels in the array become the matrix rows and columns. Here's what the confusion matrix for the example results looks like.\n[block:parameters]\n{\n  \"data\": {\n    \"0-1\": \"5\",\n    \"0-2\": \"0\",\n    \"h-1\": \"beach\",\n    \"h-2\": \"mountain\",\n    \"1-0\": \"**mountain**\",\n    \"0-0\": \"**beach**\",\n    \"1-1\": \"1\",\n    \"1-2\": \"8\"\n  },\n  \"cols\": 3,\n  \"rows\": 2\n}\n[/block]","category":"57dee8de84019d2000e95af3","createdAt":"2016-09-25T00:32:29.792Z","excerpt":"Returns the metrics for a model, such as the f1 score, accuracy, and confusion matrix. The combination of these metrics gives you a picture of model accuracy and how well it will perform.","githubsync":"","hidden":false,"isReference":true,"link_external":false,"link_url":"","next":{"description":"","pages":[]},"order":13,"parentDoc":null,"project":"552d474ea86ee20d00780cd7","slug":"get-model-metrics","sync_unique":"","title":"Get Model Metrics","type":"get","updates":[],"user":"573b5a1f37fcf72000a2e683","version":"57c765bda54f9c0e00cec388","childrenPages":[]}

Get Model Metrics (GET)

Returns the metrics for a model, such as the f1 score, accuracy, and confusion matrix. The combination of these metrics gives you a picture of model accuracy and how well it will perform.

##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `createdAt` | date | Date and time that the model was created. | 1.0 |
| `id` | string | ID of the model. Contains letters and numbers. | 1.0 |
| `metricsData` | object | Model metrics values. | 1.0 |
| `object` | string | Object returned; in this case, `metrics`. | 1.0 |

##MetricsData Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `confusionMatrix` | array | Array of integers that contains the correct and incorrect classifications for each label in the dataset, based on testing done during the training process. | 1.0 |
| `f1` | array | Array of floats that contains the weighted average of precision and recall for each label in the dataset. Each value corresponds to the label at the same position in the `labels` array; for example, the first f1 score corresponds to the first label. | 1.0 |
| `labels` | array | Array of strings that contains the dataset labels. These labels correspond to the values in the `f1` array and the `confusionMatrix` array. | 1.0 |
| `testAccuracy` | float | Before training, 10% of your dataset is held out and not used to train the classifier. The held-out images are then sent to the trained classifier for prediction, and the proportion predicted correctly is reported as `testAccuracy`. | 1.0 |
| `trainingAccuracy` | float | The remaining 90% of your dataset (what is left after the test set is removed) is sent to the trained classifier for prediction, and the proportion predicted correctly is reported as `trainingAccuracy`. | 1.0 |
| `trainingLoss` | float | Summary of the errors made in predictions using the training and validation datasets. The lower the value, the more accurate the model. | 1.0 |

Use the `labels` array and the `confusionMatrix` array to build the confusion matrix for a model. The labels in the array become the matrix rows and columns. Here's what the confusion matrix for the example results looks like.

|  | beach | mountain |
| --- | --- | --- |
| **beach** | 5 | 0 |
| **mountain** | 1 | 8 |

Definition

GET https://api.metamind.io/v1/vision/models/<MODEL_ID>

Examples

curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/models/7JXCXTRXTMNLJCEF2DR5CJ46QU
Result Format

    {
      "metricsData": {
        "f1": [
          0.9090909090909092,
          0.9411764705882352
        ],
        "labels": [
          "beach",
          "mountain"
        ],
        "testAccuracy": 0.9286,
        "trainingLoss": 0.021,
        "confusionMatrix": [
          [5, 0],
          [1, 8]
        ],
        "trainingAccuracy": 0.9941
      },
      "createdAt": "2016-09-16T18:04:59.000+0000",
      "id": "7JXCXTRXTMNLJCEF2DR5CJ46QU",
      "object": "metrics"
    }
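The per-label f1 scores and the test accuracy are both derivable from the confusion matrix. A short Python sketch, assuming rows are actual labels and columns are predicted labels (an orientation consistent with the example numbers above):

```python
def per_label_f1(confusion_matrix):
    # F1 for each label; rows are actual labels, columns are predicted.
    n = len(confusion_matrix)
    scores = []
    for i in range(n):
        tp = confusion_matrix[i][i]
        fn = sum(confusion_matrix[i]) - tp                       # missed label i
        fp = sum(confusion_matrix[r][i] for r in range(n)) - tp  # wrongly predicted i
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        denom = precision + recall
        scores.append(2 * precision * recall / denom if denom else 0.0)
    return scores

def accuracy(confusion_matrix):
    # Fraction of images classified correctly (the matrix diagonal).
    correct = sum(row[i] for i, row in enumerate(confusion_matrix))
    total = sum(sum(row) for row in confusion_matrix)
    return correct / total

cm = [[5, 0], [1, 8]]          # beach, mountain (from the example response)
f1_scores = per_label_f1(cm)   # ~[0.9091, 0.9412], matching the f1 array
test_accuracy = accuracy(cm)   # ~0.9286, matching testAccuracy
```

The sketch reproduces the example response: 10/11 for beach, 16/17 for mountain, and 13/14 overall accuracy.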
{"__v":0,"_id":"57e71d00f000280e006eff7c","api":{"auth":"required","examples":{"codes":[{"language":"curl","code":"curl -X GET -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" https://api.metamind.io/v1/vision/datasets/57/models"}]},"params":[],"results":{"codes":[{"name":"","code":"{\n  \"object\": \"list\",\n  \"data\": [\n    {\n      \"datasetId\": 57,\n      \"name\": \"Beach Mountain Model - Test1\",\n      \"status\": \"FAILED\",\n      \"progress\": 0,\n      \"createdAt\": \"2016-09-15T15:31:23.000+0000\",\n      \"updatedAt\": \"2016-09-15T15:32:53.000+0000\",\n      \"failureMsg\": \"Won’t operate without train examples\",\n      \"object\": \"model\",\n      \"modelId\": \"2KXJEOM3N562JBT4P7OX7VID2Q\"\n    },\n    {\n      \"datasetId\": 57,\n      \"name\": \"Beach Mountain Model - Test2\",\n      \"status\": \"SUCCEEDED\",\n      \"progress\": 1,\n      \"createdAt\": \"2016-09-15T16:15:46.000+0000\",\n      \"updatedAt\": \"2016-09-15T16:17:19.000+0000\",\n      \"object\": \"model\",\n      \"modelId\": \"YCQ4ZACEPJFGXZNRA6ERF3GL5E\"\n    }\n  ]\n}","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","url":"/datasets/<DATASET_ID>/models"},"body":"##Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`data`\",\n    \"0-1\": \"array\",\n    \"0-2\": \"Array of `model` objects. 
If the dataset has no models, the array is empty.\",\n    \"0-3\": \"1.0\",\n    \"1-2\": \"Object returned; in this case, `list`.\",\n    \"1-0\": \"`object`\",\n    \"1-1\": \"string\",\n    \"1-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 2\n}\n[/block]\n##Training Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"1-0\": \"`datasetId`\",\n    \"1-1\": \"long\",\n    \"1-2\": \"ID of the dataset trained to create the model.\",\n    \"1-3\": \"1.0\",\n    \"4-0\": \"`name`\",\n    \"4-1\": \"string\",\n    \"4-2\": \"Name of the model.\",\n    \"4-3\": \"1.0\",\n    \"5-0\": \"`object`\",\n    \"5-1\": \"string\",\n    \"5-2\": \"Object returned; in this case, `model`.\",\n    \"5-3\": \"1.0\",\n    \"0-0\": \"`createdAt`\",\n    \"0-1\": \"date\",\n    \"0-2\": \"Date and time that the model was created.\",\n    \"0-3\": \"1.0\",\n    \"3-0\": \"`modelId`\",\n    \"3-1\": \"string\",\n    \"3-2\": \"ID of the model. Contains letters and numbers.\",\n    \"3-3\": \"1.0\",\n    \"7-0\": \"`status`\",\n    \"7-1\": \"string\",\n    \"7-2\": \"Status of the model. Valid values are:\\n- `QUEUED`—The training job is in the queue.\\n- `RUNNING`—The training job is running.\\n- `SUCCEEDED`—The training job succeeded, and you can use the model.\\n- `FAILED`—The training job failed.\",\n    \"8-0\": \"`updatedAt`\",\n    \"8-1\": \"date\",\n    \"8-2\": \"Date and time that the model was last updated.\",\n    \"7-3\": \"1.0\",\n    \"8-3\": \"1.0\",\n    \"2-0\": \"`failureMsg`\",\n    \"2-1\": \"string\",\n    \"2-2\": \"Reason the dataset training failed. Returned only if the training status is `FAILED`.\",\n    \"2-3\": \"1.0\",\n    \"6-0\": \"`progress`\",\n    \"6-1\": \"int\",\n    \"6-2\": \"How far the dataset training has progressed. 
Values are between 0–1.\",\n    \"6-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 9\n}\n[/block]","category":"57dee8de84019d2000e95af3","createdAt":"2016-09-25T00:40:32.207Z","excerpt":"Returns all models for the specified dataset.","githubsync":"","hidden":false,"isReference":true,"link_external":false,"link_url":"","order":14,"parentDoc":null,"project":"552d474ea86ee20d00780cd7","slug":"get-all-models","sync_unique":"","title":"Get All Models","type":"get","updates":[],"user":"573b5a1f37fcf72000a2e683","version":"57c765bda54f9c0e00cec388","childrenPages":[]}

Get All Models (GET)

Returns all models for the specified dataset.

##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `data` | array | Array of `model` objects. If the dataset has no models, the array is empty. | 1.0 |
| `object` | string | Object returned; in this case, `list`. | 1.0 |

##Model Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `createdAt` | date | Date and time that the model was created. | 1.0 |
| `datasetId` | long | ID of the dataset trained to create the model. | 1.0 |
| `failureMsg` | string | Reason the dataset training failed. Returned only if the training status is `FAILED`. | 1.0 |
| `modelId` | string | ID of the model. Contains letters and numbers. | 1.0 |
| `name` | string | Name of the model. | 1.0 |
| `object` | string | Object returned; in this case, `model`. | 1.0 |
| `progress` | int | How far the dataset training has progressed. Values are between 0 and 1. | 1.0 |
| `status` | string | Status of the model. Valid values: `QUEUED` (the training job is in the queue), `RUNNING` (the training job is running), `SUCCEEDED` (the training job succeeded, and you can use the model), `FAILED` (the training job failed). | 1.0 |
| `updatedAt` | date | Date and time that the model was last updated. | 1.0 |

Definition

GET https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/models

Examples

curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets/57/models
Result Format

    {
      "object": "list",
      "data": [
        {
          "datasetId": 57,
          "name": "Beach Mountain Model - Test1",
          "status": "FAILED",
          "progress": 0,
          "createdAt": "2016-09-15T15:31:23.000+0000",
          "updatedAt": "2016-09-15T15:32:53.000+0000",
          "failureMsg": "Won’t operate without train examples",
          "object": "model",
          "modelId": "2KXJEOM3N562JBT4P7OX7VID2Q"
        },
        {
          "datasetId": 57,
          "name": "Beach Mountain Model - Test2",
          "status": "SUCCEEDED",
          "progress": 1,
          "createdAt": "2016-09-15T16:15:46.000+0000",
          "updatedAt": "2016-09-15T16:17:19.000+0000",
          "object": "model",
          "modelId": "YCQ4ZACEPJFGXZNRA6ERF3GL5E"
        }
      ]
    }
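A dataset can accumulate several training runs, but only models whose training succeeded can serve predictions. A small Python helper (an illustrative name, not part of the API) that pulls the usable model IDs out of this endpoint's response:

```python
def usable_model_ids(list_response):
    # Keep only models whose training finished with SUCCEEDED;
    # QUEUED, RUNNING, and FAILED models cannot serve predictions.
    return [m["modelId"]
            for m in list_response.get("data", [])
            if m.get("status") == "SUCCEEDED"]

# Trimmed version of the example response above.
example_list = {
    "object": "list",
    "data": [
        {"modelId": "2KXJEOM3N562JBT4P7OX7VID2Q", "status": "FAILED",
         "object": "model"},
        {"modelId": "YCQ4ZACEPJFGXZNRA6ERF3GL5E", "status": "SUCCEEDED",
         "object": "model"},
    ],
}
ready = usable_model_ids(example_list)  # ["YCQ4ZACEPJFGXZNRA6ERF3GL5E"]
```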
{"__v":1,"_id":"57e71e345f33650e00763877","api":{"auth":"required","examples":{"codes":[{"language":"curl","code":"curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"sampleBase64Content=/9j/4AAQSkZ...\" -F \"modelId=YCQ4ZACEPJFGXZNRA6ERF3GL5E\" https://api.metamind.io/v1/vision/predict"}]},"params":[],"results":{"codes":[{"name":"","code":"{\n  \"probabilities\": [\n    {\n      \"label\": \"beach\",\n      \"probability\": 0.9602110385894775\n    },\n    {\n      \"label\": \"mountain\",\n      \"probability\": 0.039788953959941864\n    }\n  ],\n  \"object\": \"predictresponse\"\n}","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","url":"/predict"},"body":"[block:callout]\n{\n  \"type\": \"info\",\n  \"body\": \"The image file types supported are PNG,  JPG, JPEG, PGM (image/x-portable-graymap), and PPM (image/x-portable-pixmap).\",\n  \"title\": \"Tip\"\n}\n[/block]\n##Request Parameters##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"`modelId`\",\n    \"1-0\": \"`sampleBase64Content`\",\n    \"0-1\": \"string\",\n    \"1-1\": \"string\",\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-2\": \"ID of the model that makes the prediction.\",\n    \"0-3\": \"1.0\",\n    \"1-2\": \"The image contained in a base64 string.\",\n    \"1-3\": \"1.0\",\n    \"2-0\": \"`sampleId`\",\n    \"2-1\": \"string\",\n    \"2-2\": \"Optional. String that you can pass in to tag the prediction. Can be any value, and is returned in the response.\",\n    \"2-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 3\n}\n[/block]\n##Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`message`\",\n    \"0-1\": \"string\",\n    \"0-2\": \"Error message. 
Returned only if the status is something other than successful (200).\",\n    \"0-3\": \"1.0\",\n    \"1-0\": \"`object`\",\n    \"1-1\": \"string\",\n    \"1-2\": \"Object returned; in this case, `predictresponse`.\",\n    \"1-3\": \"1.0\",\n    \"2-0\": \"`probabilities`\",\n    \"2-1\": \"array\",\n    \"2-2\": \"Probabilities for the prediction.\",\n    \"2-3\": \"1.0\",\n    \"3-0\": \"`sampleId`\",\n    \"3-1\": \"string\",\n    \"3-2\": \"Value passed in when the prediction call was made. Returned only if the parameter is provided.\",\n    \"3-3\": \"1.0\",\n    \"4-0\": \"`status`\",\n    \"4-1\": \"string\",\n    \"4-2\": \"Status of the prediction. Status of 200 means the prediction was successful.\",\n    \"4-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 5\n}\n[/block]\n##Probabilities Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`label`\",\n    \"0-1\": \"string\",\n    \"0-2\": \"Probability label for the input.\",\n    \"0-3\": \"1.0\",\n    \"1-0\": \"`probability`\",\n    \"1-1\": \"float\",\n    \"1-2\": \"Probability value for the input. Values are between 0–1.\",\n    \"1-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 2\n}\n[/block]","category":"57dee8de84019d2000e95af3","createdAt":"2016-09-25T00:45:40.004Z","excerpt":"Returns a prediction for the specified image converted into a base64 string.","githubsync":"","hidden":false,"isReference":true,"link_external":false,"link_url":"","next":{"description":"","pages":[]},"order":15,"parentDoc":null,"project":"552d474ea86ee20d00780cd7","slug":"prediction-with-image-base64-string","sync_unique":"","title":"Prediction with Image Base64 String","type":"post","updates":[],"user":"573b5a1f37fcf72000a2e683","version":"57c765bda54f9c0e00cec388","childrenPages":[]}

Prediction with Image Base64 String (POST)

Returns a prediction for the specified image converted into a base64 string.

**Tip:** The image file types supported are PNG, JPG, JPEG, PGM (image/x-portable-graymap), and PPM (image/x-portable-pixmap).

##Request Parameters##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `modelId` | string | ID of the model that makes the prediction. | 1.0 |
| `sampleBase64Content` | string | The image contained in a base64 string. | 1.0 |
| `sampleId` | string | Optional. String that you can pass in to tag the prediction. Can be any value, and is returned in the response. | 1.0 |

##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `message` | string | Error message. Returned only if the status is something other than successful (200). | 1.0 |
| `object` | string | Object returned; in this case, `predictresponse`. | 1.0 |
| `probabilities` | array | Probabilities for the prediction. | 1.0 |
| `sampleId` | string | Value passed in when the prediction call was made. Returned only if the parameter is provided. | 1.0 |
| `status` | string | Status of the prediction. A status of 200 means the prediction was successful. | 1.0 |

##Probabilities Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `label` | string | Probability label for the input. | 1.0 |
| `probability` | float | Probability value for the input. Values are between 0 and 1. | 1.0 |

Definition

POST https://api.metamind.io/v1/vision/predict

Examples

curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "sampleBase64Content=/9j/4AAQSkZ..." -F "modelId=YCQ4ZACEPJFGXZNRA6ERF3GL5E" https://api.metamind.io/v1/vision/predict
Result Format

    {
      "probabilities": [
        {
          "label": "beach",
          "probability": 0.9602110385894775
        },
        {
          "label": "mountain",
          "probability": 0.039788953959941864
        }
      ],
      "object": "predictresponse"
    }
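The base64 variant maps directly onto multipart form fields. A minimal Python sketch of building the payload (`predict_fields` is an illustrative helper, not part of the API; pass the result to any multipart-capable HTTP client):

```python
import base64

def predict_fields(model_id, image_bytes, sample_id=None):
    # Each key/value pair corresponds to one -F flag in the curl example.
    fields = {
        "modelId": model_id,
        "sampleBase64Content": base64.b64encode(image_bytes).decode("ascii"),
    }
    if sample_id is not None:
        fields["sampleId"] = sample_id  # optional tag echoed back in the response
    return fields
```

With the third-party requests library, for example, `requests.post(url, headers=auth_headers, files={k: (None, v) for k, v in fields.items()})` sends the fields as multipart/form-data.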
#Prediction with Image File#

`POST https://api.metamind.io/v1/vision/predict`

Returns a prediction for the specified local image file.

> **Tip:** The image file types supported are PNG, JPG, JPEG, PGM (image/x-portable-graymap), and PPM (image/x-portable-pixmap).

##Request Parameters##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `modelId` | string | ID of the model that makes the prediction. | 1.0 |
| `sampleContent` | string | File system location of the image file to upload. | 1.0 |
| `sampleId` | string | Optional. String that you can pass in to tag the prediction. | 1.0 |

##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `message` | string | Error message. Returned only if the status is something other than successful (200). | 1.0 |
| `object` | string | Object returned; in this case, `predictresponse`. | 1.0 |
| `probabilities` | array | Probabilities for the prediction. | 1.0 |
| `sampleId` | string | Value passed in when the prediction call was made. Returned only if the parameter is provided. | 1.0 |
| `status` | string | Status of the prediction. A status of 200 means the prediction was successful. | 1.0 |

##Probabilities Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `label` | string | Probability label for the input. | 1.0 |
| `probability` | float | Probability value for the input. Values are between 0 and 1. | 1.0 |

##Example Request##

```curl
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "sampleId=Photo Prediction" -F "sampleContent=@/FileToPredict/our_trip_to_the_beach.jpg" -F "modelId=YCQ4ZACEPJFGXZNRA6ERF3GL5E" https://api.metamind.io/v1/vision/predict
```

##Example Response (200)##

```json
{
  "probabilities": [
    {
      "label": "beach",
      "probability": 0.980938732624054
    },
    {
      "label": "mountain",
      "probability": 0.0190612580627203
    }
  ],
  "object": "predictresponse"
}
```
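Most callers only need the `probabilities` array from the response. A small sketch in Python (the function name is illustrative, not part of the service) that pulls the highest-probability label out of a parsed `predictresponse`:

```python
def top_prediction(predict_response):
    """Given a parsed predictresponse dict, return the (label, probability)
    pair with the highest probability, or None if there are no probabilities."""
    probs = predict_response.get("probabilities", [])
    if not probs:
        return None
    best = max(probs, key=lambda p: p["probability"])
    return best["label"], best["probability"]
```

With the example response above, this returns `("beach", 0.980938732624054)`.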
#Prediction with Image URL#

`POST https://api.metamind.io/v1/vision/predict`

Returns a prediction for the image file specified by its URL.

> **Tip:** The image file types supported are PNG, JPG, JPEG, PGM (image/x-portable-graymap), and PPM (image/x-portable-pixmap).

##Request Parameters##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `modelId` | string | ID of the model that makes the prediction. | 1.0 |
| `sampleId` | string | Optional. String that you can pass in to tag the prediction. | 1.0 |
| `sampleLocation` | string | URL of the image file. | 1.0 |

##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `message` | string | Error message. Returned only if the status is something other than successful (200). | 1.0 |
| `object` | string | Object returned; in this case, `predictresponse`. | 1.0 |
| `probabilities` | array | Probabilities for the prediction. | 1.0 |
| `sampleId` | string | Value passed in when the prediction call was made. Returned only if the parameter is provided. | 1.0 |
| `status` | string | Status of the prediction. A status of 200 means the prediction was successful. | 1.0 |

##Probabilities Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `label` | string | Probability label for the input. | 1.0 |
| `probability` | float | Probability value for the input. Values are between 0 and 1. | 1.0 |

##Example Request##

```curl
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "sampleLocation=http://www.mysite.com/our_beach_vacation.jpg" -F "modelId=YCQ4ZACEPJFGXZNRA6ERF3GL5E" https://api.metamind.io/v1/vision/predict
```

##Example Response (200)##

```json
{
  "probabilities": [
    {
      "label": "beach",
      "probability": 0.9997345805168152
    },
    {
      "label": "mountain",
      "probability": 0.0002654256531968713
    }
  ],
  "object": "predictresponse"
}
```
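The prediction calls above are `multipart/form-data` POSTs. If you are not shelling out to cURL, the request body can be assembled with only the Python standard library; this sketch (helper name and boundary handling are illustrative) shows the shape of the body for a URL-based prediction with plain string fields:

```python
import uuid


def build_form_data(fields):
    """Assemble a multipart/form-data body from a dict of plain string
    form fields (e.g. modelId and sampleLocation).

    Returns (body_bytes, content_type_header_value)."""
    boundary = uuid.uuid4().hex
    parts = []
    for name, value in fields.items():
        parts.append(f"--{boundary}")
        parts.append(f'Content-Disposition: form-data; name="{name}"')
        parts.append("")
        parts.append(value)
    parts.append(f"--{boundary}--")
    parts.append("")  # trailing CRLF after the closing boundary
    body = "\r\n".join(parts).encode("utf-8")
    return body, f"multipart/form-data; boundary={boundary}"
```

The body can then be sent with `urllib.request` along with the `Authorization: Bearer <TOKEN>` header. Note that a file-upload field such as `sampleContent` additionally needs a `filename` and a part-level `Content-Type`, which this sketch omits.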
#Generate an OAuth Token#

`POST https://api.metamind.io/v1/oauth2/token`

Returns an OAuth token to access the API. You must pass a valid token in the header of each API call.

You must pass an assertion into this API call, so you first need to create a JWT payload and sign it with your private key to generate an assertion. To generate an assertion:

1. Create the JWT payload. The payload is JSON that contains:

 - `sub`---Your email address. This is the email address contained in the Salesforce org you used to sign up for a Predictive Services account.

 - `aud`---The API endpoint URL for generating a token.

 - `exp`---The expiration time in Unix time. This value is the current Unix time in seconds plus the number of seconds you want the token to be valid.

 The JWT payload JSON looks like this.

```json
{
  "sub": "<EMAIL_ADDRESS>",
  "aud": "https://api.metamind.io/v1/oauth2/token",
  "exp": "<EXPIRATION_SECONDS_IN_UNIX_TIME>"
}
```

2. Sign the JWT payload with your RSA private key to generate an assertion. The private key is contained in the `predictive_services.pem` file you downloaded when you signed up for an account. The code to generate the assertion varies depending on your programming language.

3. Call the API and pass in the assertion. You pass all the necessary data in the `-d` parameter. Replace `<ASSERTION_STRING>` with the assertion you just generated.

```curl
curl -H "Content-type: application/x-www-form-urlencoded" -X POST https://api.metamind.io/v1/oauth2/token -d "grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer&assertion=eyJhbGciOiJSUsEKpMMtu..."
```

##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `access_token` | string | Token value for authorization. | 1.0 |
| `token_type` | string | Type of token returned. Always `Bearer`. | 1.0 |
| `expires_in` | integer | Number of seconds until the token expires, measured from the time it was generated. | 1.0 |

##Example Response (200)##

```json
{
  "access_token": "c3d95b4bf17108680b7495d069912127d7e3cbb9",
  "token_type": "Bearer",
  "expires_in": 9999902
}
```
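The assertion is a standard JWS compact serialization: `base64url(header).base64url(payload).base64url(signature)`. The RSA signature itself requires a crypto library (for example, PyJWT with the RS256 algorithm), but the unsigned signing input can be sketched with the standard library alone. The helper name below is illustrative:

```python
import base64
import json
import time


def jwt_signing_input(email, valid_seconds=300):
    """Build the base64url-encoded header.payload signing input for the
    JWT-bearer assertion. This string still must be signed with the RSA
    private key from predictive_services.pem to become a valid assertion."""
    def b64url(data):
        return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

    header = {"alg": "RS256", "typ": "JWT"}
    payload = {
        "sub": email,
        "aud": "https://api.metamind.io/v1/oauth2/token",
        "exp": int(time.time()) + valid_seconds,
    }
    return b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
```

With a library such as PyJWT, the full signed assertion is produced in one step with `jwt.encode(payload, private_key, algorithm="RS256")`.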
#Dataset and Model Best Practices#

- A dataset should contain at least 1,000 images per label.

- Each dataset label should have about the same number of images. For example, avoid a situation where one label has 1,000 images and another has 400 within the same dataset.

- Each dataset label should have a wide variety of images. If a label contains images of a certain object, include images:
  - In color
  - In black and white
  - Blurred
  - That contain the object with other objects it might typically be seen with
  - With text and without text (if applicable)

- A dataset with a wide variety of images makes the model more accurate. For example, if you have a dataset label called “buildings,” include images of many different styles of buildings: Asian, Medieval, Renaissance, Modern, and so on.

- In a binary dataset, include images in the negative label that look similar to images in the positive label. For example, if your positive label is oranges, be sure to include grapefruits, tangerines, lemons, and other citrus fruits in your negative label.

- As you test your model, add the false positives and false negatives to your training dataset to make the model more accurate.

- If your dataset changes, you must retrain it and create a new model.

- You can’t delete a dataset label, or add labels and images to a dataset, after it has been successfully trained and a model created. So we recommend that you create a script that contains the commands to create your dataset, labels, and examples.
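The size and balance guidelines above are easy to check programmatically before you kick off training. A small sketch (the function name and thresholds are illustrative, not part of the service):

```python
def check_label_balance(label_counts, min_per_label=1000, max_ratio=1.5):
    """Return warnings for labels that are too small or far out of balance.

    label_counts maps label name -> number of example images; thresholds
    follow the best-practice guidance of at least 1,000 images per label
    and roughly equal label sizes."""
    warnings = []
    if not label_counts:
        return warnings
    smallest = min(label_counts.values())
    for label, count in label_counts.items():
        if count < min_per_label:
            warnings.append(f"{label}: only {count} images (want >= {min_per_label})")
        if smallest and count / smallest > max_ratio:
            warnings.append(f"{label}: {count} images vs {smallest} in the smallest label")
    return warnings
```

For example, `{"beach": 1000, "mountain": 400}` produces warnings for both labels, while `{"beach": 1200, "mountain": 1100}` passes cleanly.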
#Code Samples and Learning Resources#

## Code Samples ##

- [Salesforce-einstein-predictive-vision](https://github.com/muenzpraeger/salesforce-einstein-predictive-vision)—This GitHub repo contains code samples that call the Predictive Vision Service API from different programming languages (currently, Java only).

## Trailhead ##

- [AI Basics](https://trailhead.salesforce.com/en/module/ai_basics)—Learn what AI is and how it will transform CRM and the customer experience.

- [Salesforce Einstein Features](https://trailhead.salesforce.com/en/module/get_smart_einstein_feat)—Discover insights and predict outcomes with this powerful set of AI-enhanced features.