
Wishlist #1

Open · 3 of 20 tasks
JamesPHoughton opened this issue Nov 11, 2020 · 6 comments

@JamesPHoughton commented Nov 11, 2020

intro

  • log consent time
  • log elapsed time in training
  • log number of training attempts
  • log individuals who quit partway through training

stimulus/response task

  • opportunity for players to say if they want another batch to label
  • survey routing within a particular stimulus, if questions become irrelevant based upon prior answers
  • straightforward templating system to allow you to customize questions to stimuli

exit

  • open feedback on "thanks" page
  • log 'complete' time

general

  • single URL, keep state across intro, task, and exit steps
  • auto-launch HITs and pay workers, check HIT feedback to raise problems flagged by workers
  • proper password-protected admin interface
  • a programmatic method for choosing the next stimulus (possibly an independent module) that could either sample a fixed number of times, sample to some level of confidence, or sample based on an active learning algorithm (see the sketch below this wishlist).
  • evaluate the quality of raters on the fly by comparing against existing data and checking internal consistency, and offer more articles/tasks to high-quality raters.
  • manage server load by scheduling HITs, or use a serverless compute setup.

data

  • csv/json-style UI
  • extract data from a specific batch of the survey
  • version control, to make sure questions are logged properly with answers, and so that if questions get updated we can see the effects of those updates in subsequent answers.
  • ability to have multiple "projects" going on, either at once in the same system or in multiple "apps", and spool up additional projects easily.
  • ability to document the process of data collection for publication in a way that would make the collection replicable. I.e., export the questions and stimuli in such a way that the data collection can be understood by our lab in the future, and replicated by third parties.
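
For the stimulus-selection item above, here's a minimal sketch of what a pluggable chooser module might look like. All of the names here are hypothetical, not existing code:

// Hypothetical interface for the "next stimulus" module: each strategy maps
// the current label counts to the stimulus that should be served next.

// fixed-count: serve whichever stimulus has the fewest labels, up to a quota
function fixedCountChooser(stimuli, quota) {
  const candidates = stimuli.filter(s => s.labels.length < quota);
  if (candidates.length === 0) return null; // null = batch complete
  return candidates.reduce((a, b) => (a.labels.length <= b.labels.length ? a : b));
}

// confidence-based: keep sampling a stimulus until its raters mostly agree
function confidenceChooser(stimuli, threshold) {
  const undecided = stimuli.filter(s => agreement(s.labels) < threshold);
  return undecided.length > 0 ? undecided[0] : null;
}

// fraction of labels matching the modal answer, as a crude agreement score
function agreement(labels) {
  if (labels.length < 2) return 0; // too few labels to measure agreement
  const counts = {};
  labels.forEach(l => { counts[l] = (counts[l] || 0) + 1; });
  return Math.max(...Object.values(counts)) / labels.length;
}

An active-learning chooser could plug into the same interface, ranking stimuli by expected information gain rather than raw label counts.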

Would it be easier to just modify someone else's platform for all of this?

@markwhiting (Member)

I think most of those are pretty doable. I don't know of a comparable platform that's so flexible. I mean, you could do some of these things in Qualtrics, but it would be a nightmare to program the logic.

Are any of these seeming more problematic or annoying than others?

@JamesPHoughton (Author) commented Jan 29, 2021

Things to check and see if they will work for our feature set:

  • Qualtrics programmatic interface
  • SurveyMonkey
  • Google Forms
  • MTurk direct coding
  • Typeform
  • Hemlock (https://dsbowen.github.io/hemlock/)
  • Building the interface in Empirica, with a separate "dispatcher" controlling the content

@markwhiting (Member)

@JamesPHoughton do you have versions of the wishlist features you implemented that could be added to the public version here?

@JamesPHoughton (Author)

I did have some parts of a training module implemented here (https://glitch.com/edit/#!/media-survey), although I don't think my approach was necessarily the best. Basically, you could assign a number of training examples, which needed to be completed correctly before continuing.

The training page would load stimuli and question/correct answers from a spreadsheet:

html
  head
    title= title
    link(rel='stylesheet' href='/style.css')
    meta(name="viewport" content="width=device-width, initial-scale=1")
  
  body.wrapper
    - pnum = parseInt(page) + 1
    h2(style=" color: #FF0000")= "Training Question " + (pnum)
      
    div(class='row')
      div(class='column')
        include article.pug
                
      div(class='column') 
        form(action = "/training", method = "POST")
          - var responded_correctly = {}
          p * answer required for submission.
          ol
            each candidate, i in article.candidates.split('|')
              - candidate = candidate[0].toUpperCase() + candidate.slice(1)
              
              each question in questions
                if question.text.includes("<candidate>")
                  - qid = question.name + candidate
                else
                  - qid = question.name
                - correct = article[question.name]
                - response = article[question.name+"Response"]
                - responded_correctly[qid] = false
                
                // Questions that don't mention a candidate should only be displayed once.
                if question.text.includes("<candidate>") || i == 0
                  li(class='inputBlock')
                    h4= question.text.replace("<candidate>", candidate) + (" *")
                    - answers = question.answer_choices.split(' | ')
                    //- answers.push("N/A")
                    div(class='inputbox' role='radiogroup')
                      each answer in answers
                        - id = (answer+qid).replace(/ /g,'')
                        input(type='radio' role='radio' name=qid 
                              id=id value=answer required=true 
                              data-response=response data-correct=correct 
                              onclick="showResponse(this.id)"
                              )
                        label(for=id)= answer
                        
                    // a (currently) empty paragraph tag where we will add feedback when the
                    // participant makes a selection
                    p(id="response"+qid)
          
          //add an element to the form response indicating which page we are on.
          input(type='hidden' name='page' value=page)
          
          p You must answer all questions correctly in order to proceed.
          input(type= "submit" id="submit" disabled=true data-isCorrect=responded_correctly)
          
    script(src='/client.js')
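
For context, the GET route feeding this template would have been something like the sketch below; loadArticles() and loadQuestions() are hypothetical stand-ins for however the spreadsheet was actually read, not code from the app:

// serve training page N with its article and the shared question list
app.get("/training", (req, res) => {
  const page = parseInt(req.query.page || "0", 10);
  res.render("training", {
    title: "Training",
    page: page,
    article: loadArticles()[page], // one row of the stimuli spreadsheet
    questions: loadQuestions(),    // rows of the questions spreadsheet
  });
});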

There was some client JS to check an answer before letting the user proceed, updating the page as it went:

// for training rounds:
// check the participant's answer and display a message
function showResponse(id) {
  const element = document.getElementById(id);
  const qid = element.name;
  const answer = element.value;
  const correct = element.attributes.getNamedItem("data-correct").value;
  const response = element.attributes.getNamedItem("data-response").value;

  // did they get it right?
  const isCorrect = answer === correct;

  // display the response to the user's answer
  document.getElementById("response" + qid).innerHTML =
    isCorrect ? "Correct!" : "Oops! " + response;

  // update the collection of which questions have correct responses
  // (the Attr node is live, so updating its value updates the element)
  const submit = document.getElementById("submit");
  const attr = submit.attributes.getNamedItem("data-isCorrect");
  const all_answers = JSON.parse(attr.value);
  all_answers[qid] = isCorrect;
  attr.value = JSON.stringify(all_answers);

  // enable the submit button only once every question has a correct answer
  submit.disabled = !Object.values(all_answers).every(val => val === true);
}

Then, when the POST was allowed by the client-side JS, you'd proceed to the next training page or to the actual task.

app.post("/training", (req, res) => {
    let newpage = parseInt(req.body["page"]) + 1;
    if (newpage < process.env.num_training_articles) {
      res.redirect("/training/?page=" + newpage);
    } else {
      let response = req.body;
      response["trainingCompleteAt"] = Date.now()
      db.addUser(req.body);
      res.redirect("/survey");
    }
  });

It's not a particularly robust or scalable architecture, though. It would probably be better to have someone who actually knows web development write it from scratch... =)

@markwhiting (Member)

Thanks!

My guess is that we don't need this for the individual-mapping effort but perhaps we will at a later point or if others want to use it.

I think my intuition would be to mark some rows as training, so that training occurs automatically when such rows are present and is skipped otherwise.
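
A minimal sketch of that idea, assuming a hypothetical "type" column on each spreadsheet row (loadRows, startTraining, and startTask are placeholders, not real functions):

// split rows into training and task items based on a "type" column
const rows = loadRows();
const trainingRows = rows.filter(row => row.type === "training");
const taskRows = rows.filter(row => row.type !== "training");

// run the training step only when training rows are actually present
if (trainingRows.length > 0) {
  startTraining(trainingRows);
} else {
  startTask(taskRows);
}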

@markwhiting (Member) commented Aug 20, 2021

A few minor thoughts based on ideas motivated in #12.

  • Support for off-platform activity URLs, as long as they return valid JSON objects (roughly as sketched below this list)
  • Support for looping
  • Support for direct column requests (e.g., RME, CRT), as opposed to requesting those as surveys.
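
For the off-platform URL item, a rough sketch of the JSON check; fetch is the standard API, but the function and its error handling are a hypothetical shape:

// call an off-platform activity URL and accept the result only if it
// parses as a JSON object
async function fetchActivityResult(url) {
  const res = await fetch(url);
  if (!res.ok) throw new Error("activity URL returned " + res.status);
  const data = await res.json(); // throws if the body is not valid JSON
  if (typeof data !== "object" || data === null) {
    throw new Error("activity URL must return a JSON object");
  }
  return data;
}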
