Model Plugins

With the Model custom plugin, you can add a quick way for users to run your model on their project assets.

When a user runs your plugin, you receive one of their assets, feed it to your model, and return the model's results as pre-annotations on that asset.

Creating a Model Plugin

Following the steps outlined in this section, create a new plugin from the UI, choosing "Model" as the plugin type.

In the "Class Names" section, enter the names of the classes used by your model.

For example, assume your model returns three classes: "person," "car," and "light." At a later stage, the user will be able to "link" (map) these classes to their own label categories representing "person," "car," and so on.

Then, create and run a Python script using the ModelPlugin class you can find in our ango Python package under ango.plugins.

You will need to add the ango package to your Python environment by running

pip install ango

Here is the class's documentation, and an example:

ModelPlugin

Parameters:

  • id: string

    • The plugin's ID. You may obtain this ID from the plugin's information box in the Development section of the Plugin page.

  • secret: string

    • The plugin's secret. You can think of this as a private key you'll need to be able to connect your script to the plugin. You may obtain this secret from the plugin's information box in the Development section of the Plugin page.

  • callback: Callable[..., dict]

    • The callback function. It is called with the keyword arguments described below whenever a user runs your model through this plugin, and it must return the annotation JSON described in the Returns section.
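
To see how these parameters fit together, here is a minimal wiring sketch; it mirrors the full sample script further down, with the callback body left as a stub and the placeholder strings standing in for the values from the plugin's information box.

from ango.plugins import ModelPlugin, run

def my_callback(**data):
    # Called with the keyword arguments described in the next section;
    # must return the annotation JSON described under "Returns".
    ...

plugin = ModelPlugin(id='<YOUR PLUGIN ID>',
                     secret='<YOUR PLUGIN SECRET>',
                     callback=my_callback)
run(plugin, host='<YOUR HOST>')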

Callback Function

Parameters:

  • **data: dict

    • projectId: string

      • The ID of the project for which the plugin was run.

    • categorySchema: dict

      • The label categories that the user mapped to your model's classes when running the plugin (see the sketch after this parameter list for one way to use this mapping).

      • For example, if you had entered the classes "Vehicle" and "Person" in the "Class Names" section when creating the plugin, and the user mapped both of them to existing labeling tools when running it, you would receive the following as input:

      [{'schemaId': '797ea755f5693c0dc902558', 'modelClass': 'Vehicle'}, {'schemaId': '797ea755f5693c0dc902558', 'modelClass': 'Person'}]
    • dataURL: str

      • The URL of the file to be sent to the model.

      • Example: https://your-bucket.s3.eu-central-1.amazonaws.com/bb26ccc1-3a9f-4e41-bb01-df285b0c5bd5.jpg

    • apiKey: str

      • The API Key of the user running the plugin.

    • orgId: str

      • The Organization ID of the organization where the plugin is run.

    • runBy: str

      • The user ID of the user running the plugin.

    • session: str

    • logger: PluginLogger

      • A logger you can use to emit log messages from your callback (see logger.info in the sample script below).

    • batches: List[str]

    • configJSON: str

      • The config JSON that users can pass to your plugin through the Config JSON text field when running it. Warning: the JSON arrives as a string, so you will have to parse it back into a Python object, for example with json.loads:

    import json

    def sample_callback(**data):
        # The config arrives as a string; parse it back into the original JSON object.
        config_str = data.get('configJSON')
        config = json.loads(config_str)
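
To show how these inputs fit together, here is a hedged sketch of a callback that turns categorySchema into a class-to-schemaId lookup and downloads the asset from dataURL. The requests call and run_my_model are illustrative placeholders for your own download and inference code; they are not part of the ango package.

import requests  # any HTTP client works; shown here only for illustration

def example_callback(**data):
    data_url = data.get('dataURL')
    category_schema = data.get('categorySchema') or []

    # Map each of your model's class names to the schemaId the user linked it to,
    # e.g. {'Vehicle': '797ea755f5693c0dc902558'}
    class_to_schema = {c['modelClass']: c['schemaId'] for c in category_schema}

    # Download the asset so it can be passed to your model.
    asset_bytes = requests.get(data_url).content

    # run_my_model() is a hypothetical stand-in for your own inference code.
    # predictions = run_my_model(asset_bytes, class_to_schema)
    ...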

Returns:

  • annotation_json: dict

    • The annotation JSON to apply to the asset as pre-labels: the asset's data URL and an answer object containing objects, classifications, and relations.

    • Example:

{
  "data": "https://angohub-public-assets.s3.eu-central-1.amazonaws.com/bb26ccc1-3a9f-4e41-bb01-df285b0c5bd5.jpg",
  "answer": {
    "objects": [...],
    "classifications": [...],
    "relations": [...]
  }
}
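
For reference, here is the same structure with one concrete objects entry filled in. The bounding-box format mirrors the sample script below; the exact fields of each object depend on the labeling tools used in your project.

annotation_json = {
    "data": "https://your-bucket.s3.eu-central-1.amazonaws.com/bb26ccc1-3a9f-4e41-bb01-df285b0c5bd5.jpg",
    "answer": {
        "objects": [
            {"schemaId": "797ea755f5693c0dc902558",
             "bounding-box": {"x": 20, "y": 30, "width": 50, "height": 60}}
        ],
        "classifications": [],
        "relations": []
    }
}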

Sample Python Script

In this script, for the sake of brevity, instead of running the asset through an actual model, we add a single bounding box as a pre-label, "simulating" the model's results. The returned annotations are then attached to the user's asset as pre-labels with a "To-Do" status.

import json
from ango.sdk import SDK
from ango.plugins import ModelPlugin, run

HOST = '<YOUR HOST>'
PLUGIN_ID = '<YOUR PLUGIN ID>'
PLUGIN_SECRET = '<YOUR PLUGIN SECRET>'


def run_model(**data):
    # Extract input parameters
    project_id = data.get('projectId')
    data_url = data.get("dataURL")
    category_schema = data.get('categorySchema')
    logger = data.get('logger')
    api_key = data.get('apiKey')
    config_str = data.get('configJSON')
    if config_str is not None:
        # The config arrives as a string; parse it if the user provided one.
        config = json.loads(config_str)

    logger.info("Plugin session started.")

    # Simulate the model's output with a single bounding box. If the user mapped
    # classes when running the plugin, link the box to the first mapped schemaId.
    if category_schema is None:
        bbox_obj = [{"bounding-box": {"x": 20, "y": 30, "width": 50, "height": 60}}]
    else:
        schema_id = category_schema[0]['schemaId']
        bbox_obj = [{"schemaId": schema_id,
                     "bounding-box": {"x": 20, "y": 30, "width": 50, "height": 60}}]

    annotation_json = {"data": data_url,
                       "answer": {"objects": bbox_obj, "classifications": [], "relations": []}}

    logger.info("Plugin session ended.")
    return annotation_json

if __name__ == "__main__":
    plugin = ModelPlugin(id=PLUGIN_ID,
                         secret=PLUGIN_SECRET,
                         callback=run_model)

    run(plugin, host=HOST)

Find the always up-to-date version of this code at the following link:

Running a Model using the Plugin

Your Python script needs to be running for users to be able to run your plugin.

Navigate to the asset where you'd like to run the model. Click on the "detectors" icon, then on the name of your model.

From the dialog that pops up, map your label categories to the model's classes, then click Run. The plugin will pre-label your asset with the model's results.
