Case Study


Background: The team's goal is to build high-quality training, test, and validation datasets to power autonomous vehicles. Generating a high-quality labeled dataset requires a platform through which people can provide the needed annotations, judgments, and labels; labeling standards that define the human annotations; and a trained workforce with supporting processes. Data collected by the car across its various sensors is ingested into the platform as a unit of work called a task. Each task is then annotated by trained and qualified workers, with additional layers of quality control checks driven by humans, data, and machine learning.

The team: Machine Teaching Team (Mowgli ATG)

● 1,000+ operators across 2 vendors and 2 sites label the dataset with machine assistance

● Stakeholders: Product team, Engineering team, Production Support team, upper management, and vendor teams. The majority of stakeholders are in the US, across 2 time zones

○ The Product and Engineering teams build the tools used to generate high-quality training datasets based on label-consumer requirements

○ Production Support is mainly responsible for pre-processing and post-processing of data. Upstream data is what is collected from the cars; downstream data is the labeled data that consumers use in their autonomy models

Problem Statement: The latest labeled-dataset requirements from the label consumers will stretch our capacity. In an internal brainstorming session we discussed several challenges posed by the new set of sensors on the car. The earlier sensor configuration required us to label data from a single Lidar (Light Detection and Ranging); the latest version has 7 sensors. The number of distinct object types we label on a task has doubled. We previously labeled this data in two separate tools (Lidar-based and image-based); the two tools are now unified into one. Together, these changes have significantly increased the cognitive load on a labeler.

The tool changes necessary to address these new requirements have been made, and data suggests that while the previous sensor configuration allowed us to deliver a task in 12 hours at 95% quality, the newer version demands 26 hours of effort to deliver a similar task at the same quality, roughly 2.2× the effort per task.
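To make the capacity impact concrete, here is a minimal back-of-the-envelope sketch in Python. The 12-hour and 26-hour per-task figures are from the case study above; the rest follows directly from them:

```python
# Back-of-the-envelope impact of the new sensor configuration.
# The 12 h and 26 h per-task figures come from the case study.

OLD_HOURS_PER_TASK = 12  # previous single-Lidar configuration
NEW_HOURS_PER_TASK = 26  # new 7-sensor configuration

effort_ratio = NEW_HOURS_PER_TASK / OLD_HOURS_PER_TASK
print(f"Effort per task grew {effort_ratio:.2f}x")  # ~2.17x

# With a fixed workforce, throughput scales inversely with effort per task.
remaining_throughput = OLD_HOURS_PER_TASK / NEW_HOURS_PER_TASK
print(f"Same workforce delivers {remaining_throughput:.0%} of prior task volume")  # ~46%
```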

One of our PgMs came up with a novel idea to split the task. Let’s say a task comprises two main parts, A and B. The status quo workflow is as follows:

A —> B —> (A+B)QA —> Finished product

The newly designed workflow would split A and B into smaller parts:

A1 —> A2 —> A3 (running in parallel with B1 —> B2 —> B3), followed by (A+B)QA —> Finished product
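To see where the gain could come from, the sketch below compares the critical path of the two workflows. The per-stage durations are illustrative assumptions only; the case study does not specify how the 26 hours are split between A, B, and QA, and this models only the parallelism, not handoff overhead or specialization effects:

```python
# Illustrative critical-path comparison of the two workflows.
# Stage durations are assumed for illustration; the case study does not
# break the 26 hours down between A, B, and QA.

HOURS_A = 12.0   # assumed effort for part A
HOURS_B = 10.0   # assumed effort for part B
HOURS_QA = 4.0   # assumed effort for (A+B)QA

# Status quo: A and B are completed sequentially, then QA.
sequential = HOURS_A + HOURS_B + HOURS_QA

# New workflow: A1->A2->A3 runs in parallel with B1->B2->B3,
# so elapsed time before QA is driven by the longer of the two chains.
parallel = max(HOURS_A, HOURS_B) + HOURS_QA

print(f"Status quo elapsed time: {sequential:.1f} h")
print(f"Split workflow elapsed time: {parallel:.1f} h")
print(f"Elapsed-time reduction: {1 - parallel / sequential:.0%}")
```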

The latest version of the labeling tool addresses the new customer requirements, but does not support this new workflow. Experiments run through manual orchestration suggest a 34% efficiency gain with the new workflow. The leadership team has already committed to 20,000 finished tasks by October 2020. The Product and Engineering teams need a sprint plan to better manage the development. We have the onus of creating a project plan that addresses both of these requirements while meeting the 95% quality mark.
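For the scenario analysis deliverable, a simple calculator along these lines can turn the commitment into projected end dates. The 20,000-task target, 26 h/task, and 34% gain are from the case study; the operator count is rounded from the "1,000+" above, and the weekly productive hours and start date are assumptions to be stated explicitly in the plan:

```python
from datetime import date, timedelta

# Scenario calculator for the 20,000-task commitment.
# Task count, 26 h/task, and the 34% gain come from the case study;
# workforce size, weekly hours, and start date are illustrative assumptions.

TASKS = 20_000
HOURS_PER_TASK = 26.0
EFFICIENCY_GAIN = 0.34          # effort reduction if scaling succeeds
OPERATORS = 1_000               # rounded from "1,000+ operators"
HOURS_PER_OP_PER_WEEK = 36.0    # assumed productive hours per operator

def end_date(start: date, hours_per_task: float) -> date:
    """Project the completion date given effort per task."""
    total_hours = TASKS * hours_per_task
    weeks = total_hours / (OPERATORS * HOURS_PER_OP_PER_WEEK)
    return start + timedelta(weeks=weeks)

start = date(2020, 3, 1)  # assumed production start
print("Scaling fails:   ", end_date(start, HOURS_PER_TASK))
print("Scaling succeeds:", end_date(start, HOURS_PER_TASK * (1 - EFFICIENCY_GAIN)))
```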

You have been chosen as the Program Manager to scale this new workflow, but you cannot lose sight of the delivery target. Production on the new data has already been initiated.

Deliverables:

● Project Plan for scaling the new workflow

● Scenario Analysis:

○ Delivery charter and end date if scaling fails

○ Delivery charter and end date if scaling is successful

○ Risk Assessment

Make assumptions as necessary for your solutions, but clearly state them in the answer.

Abbreviations:

QA: Quality Assessment – the process by which operators manually check the labeled images for errors.
