Manual Review

Manual review is the most basic method for evaluating the quality of a training dataset. In Suite, a reviewer inspects the quality of each label and either approves or rejects it.

The review can be performed on the entire labeled dataset or on a sample of it. A full inspection can further improve the quality of your dataset, but it is also more time-consuming and costly than reviewing a sample.

Manual Review Process

1. Submit Labels

There are a few ways to create labels: submit a labeling task manually in Label Mode, apply Auto-label, or upload existing label JSON files via the SDK/CLI.
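If you already have label JSON files on disk, a short script can upload them in bulk. The sketch below is illustrative only: `suite_client` and `upload_label` are hypothetical placeholder names, not actual Suite SDK calls, so consult the SDK/CLI reference for the real interface.

```python
# Minimal bulk-upload sketch. `suite_client.upload_label(...)` is a
# hypothetical placeholder, not a real Suite SDK method.
import json
from pathlib import Path

LABEL_DIR = Path("labels")  # one JSON label file per asset, named after its data key

def upload_labels(suite_client) -> None:
    """Upload every local label JSON file to the project."""
    for label_file in sorted(LABEL_DIR.glob("*.json")):
        label = json.loads(label_file.read_text())
        data_key = label_file.stem  # e.g. "image_0001"
        suite_client.upload_label(data_key, label)
        print(f"Uploaded label for {data_key}")
```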

2. Review Labels

  • After inspecting a label in Review Mode, the reviewer can either Approve or Reject it.

  • When a reviewer rejects a label, a reason for the rejection must be entered; the reason is recorded in the label's issue thread so the labeler can quickly identify what needs correction (see the sketch after this list).
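Conceptually, a review decision is a verdict plus, for rejections, a mandatory reason that goes into the issue thread. The sketch below is a hypothetical illustration of that rule, not the Suite data model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewDecision:
    """One reviewer verdict on a label (illustrative only)."""
    label_id: str
    reviewer: str
    approved: bool
    rejection_reason: Optional[str] = None  # required when the label is rejected

    def __post_init__(self) -> None:
        # Mirror the rule described above: a rejection must carry a reason
        # so the labeler knows what to fix.
        if not self.approved and not self.rejection_reason:
            raise ValueError("A rejected label must include a rejection reason.")
```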

3. Revising Rejected Labels

If a label has an assigned labeler, that user is notified as soon as a reviewer rejects the label. The labeler can then revise the label and submit it again.

Manual Review Statistics

In the project Overview, you can monitor the progress of the manual review in real time. Clicking the Review chart displays the list of labels that have been approved or rejected.

Review Status and Reviewer Filter

Within the Labels table, you can filter labels by Review status (approved or rejected) and by Reviewer.
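If you export label metadata (for example via the SDK), the same filtering can be reproduced in a few lines. The field names `review_status` and `reviewer` below are assumptions for illustration, not guaranteed export fields.

```python
def filter_labels(labels, status=None, reviewer=None):
    """Filter label records by review status and/or reviewer.

    `labels` is a list of dicts with assumed keys "review_status"
    ("approved" or "rejected") and "reviewer".
    """
    result = labels
    if status is not None:
        result = [lbl for lbl in result if lbl.get("review_status") == status]
    if reviewer is not None:
        result = [lbl for lbl in result if lbl.get("reviewer") == reviewer]
    return result

# Example: all labels rejected by a specific reviewer
# rejected = filter_labels(all_labels, status="rejected", reviewer="reviewer@example.com")
```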

Review Statistics by Labeler User

In the User Reports tab, you can check the number of approved and rejected labels for each labeler.
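The same per-labeler breakdown can be computed from exported label metadata. As above, the `labeler` and `review_status` field names are assumptions for illustration.

```python
from collections import Counter

def review_counts_by_labeler(labels):
    """Count approved and rejected labels for each labeler.

    Returns a dict like {"alice": Counter({"approved": 40, "rejected": 3})}.
    """
    counts = {}
    for label in labels:
        labeler = label.get("labeler", "unassigned")
        status = label.get("review_status")
        if status in ("approved", "rejected"):
            counts.setdefault(labeler, Counter())[status] += 1
    return counts
```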