Dataloop secures cash infusion to expand its data annotation tool set


Data annotation, or the process of adding labels to images, text, audio and other forms of sample data, is typically a key step in developing AI systems. The vast majority of systems learn to make predictions by associating labels with specific data samples, such as the label “bear” attached to a photo of a black bear. A system trained on many labeled examples of different kinds of contracts, for example, would eventually learn to distinguish between those contract types and even extrapolate to contracts it hasn’t seen before.
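
To make that concrete, here’s a minimal sketch of supervised learning from labeled samples, written in Python with scikit-learn. The toy contract snippets and labels are invented for illustration and have nothing to do with Dataloop’s own pipeline:

```python
# A minimal sketch of learning from human-supplied labels (toy data,
# not Dataloop's pipeline): each text sample is paired with a label,
# and the model learns to associate features of the text with it.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

samples = [
    "The Employee agrees to a term of employment of two years...",
    "Lessee shall pay rent to Lessor on the first of each month...",
    "The Employee's salary shall be reviewed annually...",
    "Lessor grants Lessee the right to occupy the premises...",
]
labels = ["employment", "lease", "employment", "lease"]  # the annotations

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(samples, labels)

# With enough labeled examples, the model extrapolates to unseen contracts.
print(model.predict(["Tenant shall not sublet the premises..."]))
```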

The trouble is, annotation is a manual and labor-intensive process that’s historically been assigned to gig workers on platforms like Amazon Mechanical Turk. But with the soaring interest in AI — and in the data used to train that AI — an entire industry has sprung up around tools for annotation and labeling.

Dataloop, one of the many startups vying for a foothold in the nascent market, today announced that it raised $33 million in a Series B round led by Nokia Growth Partners (NGP) Capital and Alpha Wave Global. Dataloop develops software and services for automating aspects of data prep, aiming to shave time off the AI system development process.

“I worked at Intel for over 13 years, and that’s where I met Dataloop’s second co-founder and CPO, Avi Yashar,” Dataloop CEO Eran Shlomo told TechCrunch in an email interview. “Together with Avi, I left Intel and founded Dataloop. Nir [Buschi], our CBO, joined us as third co-founder, after he held executive positions [at] technology companies and [led] business and go-to-market at venture-backed startups.”

Dataloop initially focused on data annotation for computer vision and video analytics. But in recent years, the company has added new tools for text, audio, form and document data and allowed customers to integrate custom data applications developed in-house.

One of the more recent additions to the Dataloop platform is a set of data management dashboards for unstructured data. (As opposed to structured data, which is arranged in a standardized format, unstructured data isn’t organized according to a common model or schema.) Each dashboard provides tools for data versioning and metadata search, as well as a query language for filtering datasets and visualizing data samples.
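
As a rough illustration of the kind of metadata query such dashboards support, here’s a toy sketch in Python. The item structure, field names and query helper are all invented for this example; they are not Dataloop’s actual query language:

```python
# Illustrative only: a toy metadata query over unstructured data items.
# The fields below are invented; this is not Dataloop's query language.
from typing import Any

items = [
    {"path": "img_001.jpg", "metadata": {"camera": "front", "labeled": False}},
    {"path": "img_002.jpg", "metadata": {"camera": "rear", "labeled": True}},
    {"path": "clip_003.wav", "metadata": {"speaker": "A", "labeled": False}},
]

def query(items: list, **conditions: Any) -> list:
    """Return the items whose metadata matches every key=value condition."""
    return [
        item for item in items
        if all(item["metadata"].get(k) == v for k, v in conditions.items())
    ]

# e.g. find every unlabeled sample captured by the front camera
print(query(items, camera="front", labeled=False))  # -> img_001.jpg only
```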

Image Credits: Dataloop

“All AI models are learned from humans through the data labeling process. The labeling process is essentially a knowledge encoding process in which a human teaches the machine the rules using positive and negative data examples,” Shlomo said. “Every AI application’s primary goal is to create the ‘data flywheel effect’ using its customer’s data: a better product leads to more users leads to more data and subsequently a better product.”

Dataloop competes against heavyweights in the data annotation and labeling space, including Scale AI, which has raised over $600 million in venture capital. Labelbox is another major rival, having recently nabbed more than $110 million in a financing round led by SoftBank. Beyond the startup realm, tech giants, including Google, Amazon, Snowflake and Microsoft, offer their own data annotation services.

Dataloop must be doing something right. Shlomo claims the company currently has “hundreds” of customers across retail, agriculture, robotics, autonomous vehicles and construction, although he declined to reveal revenue figures.

An open question is whether Dataloop’s platform solves some of the major challenges in data labeling today. Last year, a paper out of MIT found that data labeling tends to be highly inconsistent, potentially harming the accuracy of AI systems. A growing body of academic research suggests that annotators introduce their own biases when labeling data; for example, labeling phrases in African American English (a modern dialect spoken primarily by Black Americans) as more toxic than their general American English equivalents. These biases often manifest in unfortunate ways; think moderation algorithms that are more likely to ban Black users than white users.

Data labelers are also notoriously underpaid. The annotators who contributed labels to ImageNet, one of the better-known open computer vision datasets, reportedly earned a median wage of $2 per hour.

Shlomo says it’s incumbent on the companies using Dataloop’s tools, not necessarily Dataloop itself, to effect change.

“We see the underpayment of annotators as a market failure. Data annotation shares many qualities with software development, one of them being the impact of talent on productivity,” Shlomo said. “[As for bias,] bias in AI starts with the question that the AI developer chooses to ask and the instructions they supply to the labeling companies. We call it the ‘primary bias.’ For example, you could never identify color bias unless you ask for skin color in your labeling recipe. The primary bias issue is something the industry and regulators should address. Technology alone will not solve the issue.”

To date, Dataloop, which has 60 employees, has raised $50 million in venture capital. The company plans to grow its workforce to 80 employees by the end of the year.

