Mechanical Turk – promoted in an academic journal

Home page of Amazon's Mechanical Turk

Image by HerrKrueger via Flickr

Amazon’s Mechanical Turk may see a lot of new survey offerings as a result of a new article in Perspectives on Psychological Science: “Amazon’s Mechanical Turk: A New Source for Inexpensive, Yet High-Quality, Data?” While surveys are often some of the best work available on MTurk, the remuneration is so low that one wonders if this is ethical – or meets the standards of human subjects review.

Amazon’s Mechanical Turk is an online marketplace where employers can post micro-tasks (called HITs, for “human intelligence tasks”) and workers can choose which ones to do. I’ve explored it a bit from the workers’ perspective – there is a self-contained social system that ranks the bosses, marks out those to be avoided, shares information about the good ones, and even offers tips on how to do the work more quickly or less onerously.
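
For readers curious about the mechanics of the requester side, here is a minimal sketch of how an employer might post a HIT programmatically. It uses Amazon’s boto3 MTurk client (tooling that postdates this post); the title, reward, and survey URL are hypothetical placeholders, not details from the article.

```python
import boto3

# Sketch only: how a requester might post a HIT with the boto3 MTurk client.
# The endpoint below is the requester sandbox, so no real money changes hands.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# An ExternalQuestion points workers at a survey hosted elsewhere;
# the URL is a hypothetical placeholder.
question_xml = """<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/survey</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>"""

response = mturk.create_hit(
    Title="Short research survey",
    Description="Answer a five-minute questionnaire.",
    Keywords="survey, research",
    Reward="0.10",                    # pay per completed assignment, in dollars
    MaxAssignments=100,               # how many distinct workers may accept it
    AssignmentDurationInSeconds=600,  # time allowed once a worker accepts
    LifetimeInSeconds=86400,          # how long the HIT stays listed
    Question=question_xml,
)
print("Posted HIT:", response["HIT"]["HITId"])
```

Note the Reward field: a dime per assignment is entirely ordinary on the platform, which is exactly the pay scale the post questions.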

It is also rife with social conflict. Many of the HITs involve writing or language. The very low pay scale – usually measured in pennies and dimes, rarely even a quarter – makes the work much more attractive to non-native English speakers in India and elsewhere in Asia. The employers, though, want a high level of language competence. There are technical attempts to block workers from the “wrong” part of the world, and technical attempts to get around them.
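
As a hedged illustration of that geographic blocking: in the current MTurk API (the exact mechanism in use when this was written may have differed), a requester attaches a qualification requirement to the HIT, for instance restricting it to workers in a given country.

```python
# Hedged illustration of the geographic blocking described above.
# MTurk exposes it as a qualification requirement on the HIT;
# "00000000000000000071" is the built-in worker-locale qualification.
locale_requirement = [
    {
        "QualificationTypeId": "00000000000000000071",
        "Comparator": "EqualTo",
        "LocaleValues": [{"Country": "US"}],
        # Hide the HIT entirely from workers who do not qualify.
        "ActionsGuarded": "DiscoverPreviewAndAccept",
    }
]

# Passed alongside the create_hit arguments in the sketch above:
#   mturk.create_hit(..., QualificationRequirements=locale_requirement)
```

The workaround side of the cat-and-mouse is equally technical, which is what keeps the conflict alive.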

HIT listings are full of dire warnings: “Don’t turn in junk or you will BE BANNED!!” When I participated in some HITs that built on the previous work of other Turkers, I could see what they were talking about: people just pasting gibberish into a box and trying to collect a penny or two for it. Turkers put up warnings about scammers who set up a HIT in such a way that you’ve done all the work and then you’re disqualified or never paid. The orgtheory blog reports:

Mechanical Turk has been called a digital sweatshop. Here are two perspectives – an Economics Letters piece, “The condition of the Turking class: Are online employers fair and honest?”, and a piece calling for intervention, “Working the crowd: Employment and labor law in the crowdsourcing industry.”

The Perspectives on Psychological Science article finds the Turkers a more appealing and higher-quality source of data than that most-studied group, college students. The authors examined the trade-off between level of pay and quality of data, and concluded that compensation they term realistic (presumably small enough to be affordable from a research budget) yields data of adequate or better quality. From the abstract:

Findings indicate that (a) MTurk participants are slightly more demographically diverse than are standard Internet samples and are significantly more diverse than typical American college samples; (b) participation is affected by compensation rate and task length, but participants can still be recruited rapidly and inexpensively; (c) realistic compensation rates do not affect data quality; and (d) the data obtained are at least as reliable as those obtained via traditional methods. Overall, MTurk can be used to obtain high-quality data inexpensively and rapidly.

The sociology of the online labor force has not yet been explored; I hope someone is working on it.

