Week 1 reading

Running experiments online, why and how

The plan for week 1

Week 1 is a fairly gentle intro to the course - in the lecture I will cover some admin and give you a preview of the course content, and then in the week 1 practical we'll get you started on jsPsych, including building a little "hello world" demo and putting it on our teaching server. In a normal week this reading page will direct you to some journal articles you have to read before the corresponding lecture, but since the first lecture is at 9am on the Monday of week 1, there is no reading this week other than this page!

A note on accessing readings

I will link to readings that you can access on the Edinburgh University network (e.g. in journals which are open access, or for which the University pays a subscription). Obviously we are not necessarily all on the University network at the moment - I am writing this sitting in the spare room at home in lovely Dunbar - and not all these links will work from your home network. The work-around is to log onto the University's VPN (Virtual Private Network), at which point you are treated as being on the University network and have access to the journal websites. Here are the instructions for getting on the VPN. Don't forget to disconnect from the VPN once you have what you need, otherwise all your web traffic goes through the University, which might slow you down or get you in trouble if you are doing things you wouldn't do on a public computer in the library.

Collecting data online

Various kinds of linguistic and psycholinguistic research need data from humans - you need to know whether a certain construction is grammatical or not, how people from a certain region pronounce a certain word, whether there is variation within a community in a certain linguistic feature, how hard a particular type of sentence is to process, whether a particular type of word meaning is easy or hard to learn, how people deploy their linguistic system in interaction with others, etc etc. Until relatively recently, this kind of data was typically collected in person - you ask colleagues or students for their grammaticality judgments, record people speaking, bring people into your lab to run psycholinguistic experiments on your lab computers, or have pairs or groups of participants interact in person on shared tasks.

However, in recent years there's been an accelerating move to collect at least some of these kinds of data online: rather than going out yourself to track down participants and elicit data, or having them come into the lab and run through an experiment, you have them participate remotely. While this could be done in the old days too - I could write you a letter to solicit your opinion on the grammaticality of some sentences, or phone you up to listen to how you speak - collecting data over the internet massively accelerates this process, and makes large-scale remote data collection feasible. In the canonical case, participants simply take part in their web browser - you point them to a URL and their browser runs code you have written to solicit certain kinds of data.

Collecting data in this way therefore involves a couple of important steps which differ from in-person studies - connecting with participants, and building an experiment that runs in a web browser.

Connecting with participants

If you are doing data collection in person this is usually fairly obvious - you pop next door to your colleague, turn up at your field site with a recorder and some local contacts to chase down, or flyer the campus to recruit participants for your lab study. These methods are also possible with online data collection - if you have built your experiment to run in a web browser you can email the URL to colleagues and friends, turn up at your field site and have someone speak into your laptop, or have participants come into the lab and do the experiment in a browser on your lab machine. But obviously the real power of online data collection comes from connecting with participants remotely - ideally your software will work in any web browser, which means you can in principle reach anyone in the world with an internet-enabled device, an enormous potential participant pool. Of course the trick is knowing how to reach those people.

There are a range of ways to do this. For instance, Hartshorne et al. (2018) set up a grammar quiz that went viral on social media and ended up with over a million responses (and it's still going strong, even after the 2018 paper was published). I wouldn't count on any of your experiments going viral though. The more standard route is to pay people to participate. You could do that manually by putting your study URL on flyers and then reimbursing people using PayPal - we've done that; it's laborious but it works. The more efficient and powerful approach is to recruit through websites which are designed to facilitate crowdsourcing. These sites allow you to set paid tasks for members of the public: they have a pool of people looking for tasks, and provide an infrastructure for paying people. In return they charge you a fee, which is often quite substantial (e.g. Prolific charges an additional 30%-36% of the amount you pay the participants, MTurk charges an additional 20%-40%; see the technical note at the end on pricing if you are wondering why the rate varies). The most widely used site is Amazon Mechanical Turk (MTurk), although Prolific has a growing following in academia (at least in the UK), and less of a Wild West reputation. Until recently I exclusively used MTurk, but I have now switched to Prolific for most things (the data quality is maybe a little better, but mainly the website has a nicer interface, and the data seems to come in just as fast as it does on MTurk).

In week 2 I'll talk a bit more about the pros and cons of crowdsourcing, ethical considerations, and what kind of populations you are dealing with on Prolific and MTurk; then in the final week of the course I'll provide the details of how to interact with these crowdsourcing sites. But to summarise for now: once your data-collection website is all built and tested, you pay for credit on one of these sites, list your experiment as an assignment or series of assignments, and like magic people come along and complete it. Or that's the theory - in practice, people often come along and tell you they can't complete your experiment because it contains a bug you didn't catch, or they complete it in a way you hadn't anticipated; you revise and repeat until you have a working experiment, and then the data comes rolling in.

Building an experiment that runs in a web browser

If you are doing data collection in person you can sometimes get away with a low-tech approach - paper-and-pencil surveys, voice recording on a handheld recorder, etc. For some kinds of online data collection you might also be able to go low-tech - for instance, maybe you can just get people to email you their judgments, or interview people over Zoom/Teams. But for the kinds of methods we are interested in, that won't work and you are going to need to get your hands dirty and build an experiment that runs in a web browser.

Luckily lots of people want to do this, so there are a range of tools available that you can use. Some of these have quite simple point-and-click interfaces, which allow you to build a survey or experiment with relatively little technical knowledge - options here include Qualtrics, Gorilla and PsychoPy. Point-and-click interfaces make it quick and easy to pull something together, but these tools often limit what you can do, and anything other than a very standard-looking paradigm might be hard or impossible to achieve. The other extreme involves writing everything from scratch in html and javascript (a programming language that runs in browsers), which is what I did until recently - coding from scratch means you can do basically anything you can figure out how to code up, but the problem is that you have to figure it out yourself from scratch, or by trawling through Stack Overflow (a website where programming questions are posted and answered, and which seems to contain the answer to all common programming problems).
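To give you a feel for the from-scratch approach, here's a minimal hand-rolled sketch that solicits a single yes/no grammaticality judgment and logs it to the browser console (the sentence and the way the response is recorded are purely illustrative - a real experiment would also need code to save the data to a server):

```html
<!DOCTYPE html>
<html>
  <body>
    <p>Is this sentence acceptable? "The horse raced past the barn fell."</p>
    <button id="yes">Yes</button>
    <button id="no">No</button>
    <script>
      // record the judgment plus a timestamp; a real experiment would
      // send this to a server rather than the browser console
      function record(judgment) {
        console.log({ judgment: judgment, time: Date.now() });
      }
      document.getElementById("yes").addEventListener("click", function () { record("yes"); });
      document.getElementById("no").addEventListener("click", function () { record("no"); });
    </script>
  </body>
</html>
```

And that's a single trial - multiply by randomisation, multiple trial types, timing, and data storage, and you can see why writing everything by hand gets laborious.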

On this course we are going to go for a middle route, which is to build our experiments using jsPsych. jsPsych is a set of tools for javascript, created by Josh de Leeuw. Building an experiment in jsPsych involves doing a bit of html and javascript, but its tools make the kinds of things you'll probably want to do a lot easier than coding from scratch - it's designed by cognitive scientists for cognitive scientists, so lots of the things we will want to do are covered as standard. Even better, our in-house PPLS programming guru Alisdair Tullo, who is also teaching on this course, has written a nice intro course for us to use, which introduces many of the tools we'll need to get started. Hopefully this makes the programming part of the course fairly accessible (and we are on hand to help you in the labs!), while still giving us the flexibility to build quite fancy experiments. I have switched to using jsPsych when coding up my own experiments, since it gives me nearly all the flexibility I need and makes it much faster to build new experiments.
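To give you a preview of what this looks like (the week 1 practical walks you through building exactly this kind of thing), a complete jsPsych "hello world" can be as short as the sketch below. This assumes jsPsych version 7 loaded from a CDN - the version numbers are illustrative, and Alisdair's tutorial may set things up slightly differently:

```html
<!DOCTYPE html>
<html>
  <head>
    <script src="https://unpkg.com/jspsych@7.3.3"></script>
    <script src="https://unpkg.com/@jspsych/plugin-html-keyboard-response@1.1.3"></script>
    <link href="https://unpkg.com/jspsych@7.3.3/css/jspsych.css" rel="stylesheet" />
  </head>
  <body></body>
  <script>
    // create a jsPsych instance
    var jsPsych = initJsPsych();

    // a single trial: display some text, then wait for any keypress
    var hello_trial = {
      type: jsPsychHtmlKeyboardResponse,
      stimulus: "Hello world!"
    };

    // run a timeline containing just that one trial
    jsPsych.run([hello_trial]);
  </script>
</html>
```

Notice that the plugin handles displaying the stimulus, listening for the response and recording the data - exactly the kind of boilerplate you'd otherwise have to write yourself.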

The plan for this course is that you will start by working through bits of Alisdair's tutorial, which will give you enough javascript and jsPsych to read, play with and tweak my jsPsych implementations of interesting psycholinguistic experiments; those implementations are based on the experiments we cover in the readings and lectures. You won't be expected to build experiments completely from scratch on your own, but by the end of the course you'll have some template experiments for inspiration, and enough knowledge to understand how to go about repurposing those templates to cover your data collection needs. The second assessment involves building an experiment, and there's lots of flexibility in how adventurous you are - you can stick close to an experiment we cover on the course, or build something more novel.

Reading tasks for this week

In a normal week, at this point I'll give you an article or articles to read, plus some accompanying notes, and you'll go off and read them before the Monday lecture. But this week all you need to read is this page, so you are done!

A technical note on pricing

Above I say "Prolific charges an additional 30%-36% of the amount you pay the participants, MTurk charges an additional 20%-40%". Just to reiterate, in both cases this is on top of the money you pay to your participants - e.g. if you set up an experiment that pays the participant £10, the total cost to you will be somewhere between £12 and £14, depending on which platform and which rate you end up paying.
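If you want to sanity-check a budget, the arithmetic is easy to script. Here's a hypothetical javascript helper (the function and the rates plugged in below are just for illustration, using the ranges quoted above - check the platforms' own pricing pages for current rates):

```js
// total cost = per-participant payment, plus the platform's fee
// on that payment, times the number of participants
function totalCost(payment, feeRate, nParticipants) {
  return payment * (1 + feeRate) * nParticipants;
}

console.log(totalCost(10, 0.2, 1));  // 20% fee: £12 per participant
console.log(totalCost(10, 0.36, 1)); // 36% fee: £13.60 per participant
console.log(totalCost(10, 0.4, 1));  // 40% fee: £14 per participant
```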

The reason the rate varies differs between the two platforms. For Prolific, the service fee for academic studies is 30%, with VAT charged on top of that fee for UK-based researchers, which takes it to 36%. For MTurk, the base fee is 20%, but it doubles to 40% for batches with 10 or more assignments - which is why MTurk users often post large studies as a series of 9-assignment batches.

Re-use

All aspects of this work are licensed under a Creative Commons Attribution 4.0 International License.


