
Work3.0 Featured on crowdsourcing.org!

Tuesday, December 18th, 2012 by Seth Weinstein

Three cheers for recognition! The article I posted last week about AutoMan and Amazon Mechanical Turk was picked up as an editorial feature on crowdsourcing.org, the leading site for news and discussion regarding crowdsourcing, crowdfunding, cloud labor, and distributed knowledge.

Professionals in the field would be wise to check out the content on crowdsourcing.org with some degree of regularity. Their articles are well-sorted, submitted from a variety of sources, and seen by bright minds from a multitude of distinct industries. When I reported for Tiny Work, many of my news leads could be traced back to crowdsourcing.org, and the attention my articles received on the site was instrumental in getting me where I am today.

Thank you, crowdsourcing.org, and I hope we continue to have a mutually beneficial relationship.

Cloud Labor Scuffle: Ziptask, AutoMan, and MTurk’s Flaws

Tuesday, December 11th, 2012 by Seth Weinstein


Researchers at the University of Massachusetts have recently created AutoMan, a new cloud labor algorithm that intends to outsource not the worker, but the boss. New Scientist’s Douglas Heaven reports that AutoMan is a fully automatic system that analyzes and delegates tasks to human workers on Amazon Mechanical Turk. Where Ziptask simplifies task outsourcing via our task management team and a “set it and forget it” setup, AutoMan seeks to tackle the process completely automatically. If AutoMan is successful, it could end up dramatically improving on the original Turk by automating oversight, the one remaining untouched process.

In a report published by the UMass researchers, the grievances against MTurk are laid out quite succinctly on the very first page: Turk doesn’t scale well to complicated tasks, it’s often difficult to determine the appropriate payment or time scale for a job, and there’s no guarantee that the finished work will be of acceptable quality. Ziptask and AutoMan each have their own way of addressing these flaws.

Scale and Complexity

MTurk is great for simple tasks like identifying the subjects of photos, but when it comes to complicated, iterative, or interrelated tasks, its power often falls short. The problem lies in the fact that clients need to separate complex tasks into bite-sized chunks of work, which are better suited to the platform. Ziptask solves this problem with its team of project managers, who can break down and assign tricky tasks to multiple workers, or comb through their database for a worker who is qualified for all aspects of the task. Unfortunately, it does not appear as though AutoMan will have any innate capability to split up or delegate a task in such a way; perhaps this functionality will be addressed in a later update. We’ve discussed the strength of Ziptask’s scalability before, so I hope the UMass researchers have something good up their sleeves.

Payment and Time

Those who wish to assign work via MTurk not only have to format and post their task, but must also decide how long it should take and how much money it’s worth. Since task posters are by definition already short on time, this step becomes an unnecessary speed bump. Ziptask, again with its human team of supervisors, prices jobs automatically based on the difficulty and type of work. Since the labor is compensated per minute, the supervisors also determine a cutoff price to help you avoid going over budget. By contrast, AutoMan turns the process into trial and error based on a series of formulas. Price is calculated from the duration of the work and the federal minimum wage, and task time limits default to 30 seconds. AutoMan automatically adjusts both the task price and the time limit (upwards) if it isn’t getting the results it requires. Clients can override these defaults if the task requires, but the process is otherwise highly standardized.
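For intuition, the escalation loop described above might look something like the sketch below. The minimum-wage pricing and the simple doubling policy are illustrative assumptions on my part, not AutoMan’s actual published formulas:

```python
# Illustrative sketch of AutoMan-style price/timeout escalation.
# The doubling policy and rounding are assumptions for illustration,
# not the researchers' actual algorithm.

FEDERAL_MIN_WAGE = 7.25  # USD per hour (2012 US federal rate)

def initial_price(timeout_seconds):
    """Pay at least minimum wage for the time allotted, in dollars."""
    return round(FEDERAL_MIN_WAGE * timeout_seconds / 3600, 2)

def escalate(timeout_seconds):
    """If results are insufficient, raise both the time limit and,
    with it, the reward."""
    timeout_seconds *= 2
    return initial_price(timeout_seconds), timeout_seconds

price = initial_price(30)        # 30-second default time limit
price, timeout = escalate(30)    # not enough agreement? double and repost
```

The key point is that price and time limit only ever move upward together, so workers are never offered less for more work as the task is reposted.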

Quality Assurance

Any cloud labor platform, regardless of its makeup or the details of its process, will live and die by work quality. Who wants to pay for substandard results? Quality assurance is an absolute necessity, and MTurk has next to none built in. Ziptask once again turns to its supervision team, who personally make sure that every document is up to standard before presenting it to the client. The client provides the final pass/fail check, and no money changes hands until everyone agrees that the work makes the cut. AutoMan, by comparison, automates the process in the simplest possible way: it has multiple workers complete the task and waits to see which result is the most common. The workers are paid once the majority has reached a statistically significant agreement, with no payment going to workers who provided incorrect answers.
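A minimal sketch of that majority-vote check, using a fixed agreement threshold as a stand-in for AutoMan’s actual statistical confidence test:

```python
from collections import Counter

def majority_agreement(answers, threshold=0.75):
    """Return the winning answer once a qualified majority of workers
    agree, or None to signal that more answers must be collected.
    The 0.75 threshold is an illustrative assumption, not AutoMan's
    real criterion, which adapts to the number of answer options."""
    if not answers:
        return None
    answer, count = Counter(answers).most_common(1)[0]
    return answer if count / len(answers) >= threshold else None

# Three of four workers agree, so "cat" wins and those three get paid.
majority_agreement(["cat", "cat", "cat", "dog"])
```

When no answer clears the threshold, the system simply recruits more workers (raising the price and time limit as described above) until one does.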

Will My New Boss Be A Robot?

Rest assured, it’s probably not gonna happen anytime soon. The relative inflexibility of both the AutoMan algorithm and the MTurk interface means that this combination will be very effective, but only for certain kinds of tasks. In a nutshell, this isn’t going to add any muscle to MTurk; it will continue to be bad at intricate or skill-based work, but good at work that’s just above “a monkey could do it” level. The difference is that the AutoMan algorithm could greatly increase Turk’s effectiveness at completing those types of tasks. For all other office work, especially anything you can’t wait around for five or six workers to agree on, Ziptask is going to get you better results, faster, and most likely at a better price.
