In my job as a consultant in the game industry I ran some large-scale surveys and designed survey campaigns with all sorts of security/validation concerns, in case you have any technical questions about survey software and deployment that go beyond Google Forms. It seems like you have a plan for that already, though, so feel free to ignore this.
I look forward to Infowars writing another shitty story about whatever unflattering tendencies the survey uncovers.
Did Infowars write about SSC survey results before? They seem like two such distant and unconnected niches.
https://slatestarcodex.com/2020/02/12/welcome-infowars-readers/
The comment section there is fire!
I have an off-topic comment:
Recently I've been rereading some of the SSC 2019 adversarial collaborations, and I was so impressed with how much work the collaborators put into them. I agree that they may not be as entertaining to read as book reviews, but they serve a very important role. I feel that Scott gave up on them too soon. It would be great to see them again once in a while.
Just chipping in to say I agree with your sentiment. ("Make Adversarial Collaborations Again" anyone?)
Make Collaborations Adversarial Again, you mean.
Yes, but you can't pronounce MCAA, whereas MACA could be made into, I don't know, a hat maybe.
I don't know if Scott gave up on them or if he just figured the novelty wore off / the proof of concept was established, but regardless, I would also very much like to see the return of the adversarial collaborations.
I want to express how happy I am to see this post. As a huge enthusiast of SSC user-generated content, I love seeing Scott involve the community in projects like these.
Scott, I know that during the book review contest you missed some submissions, to their authors' anguish. I assume you have learned your lesson and already solved whatever failure mode caused that problem. But if you need a volunteer to help with the administrative work of keeping tabs on all the submitted requests, I could be such a volunteer.
You should try to replicate the colour perception/depression finding, or maybe do it with a tone-deafness test, since readers would be less primed for that, or a sherik grey-scale eye test if you want to stick to a visual perception test.
You could also check if the critical flicker fusion frequency changes for depressed people.
An easy way is just to use the wagon-wheel effect and see at which RPM the wheel appears to start spinning backwards: https://michaelbach.de/ot/mot-wagonWheel/
Though monitor refresh rates would likely interfere.
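To make the aliasing arithmetic concrete, here is a minimal sketch of when the reversal should occur, assuming an illustrative 12-spoke wheel and a 60 Hz display (both parameters are made up for the example, not taken from the demo above):

```python
# Temporal aliasing behind the wagon-wheel effect (illustrative sketch).
# A wheel with `spokes` identical spokes looks the same every 360/spokes
# degrees, so the eye only sees the per-frame rotation folded into that
# period; a fold past the halfway point reads as backwards spin.

def apparent_motion(rpm: float, spokes: int = 12, fps: float = 60.0) -> float:
    """Perceived rotation per frame in degrees (negative = backwards)."""
    period = 360.0 / spokes                # angular period of the spoke pattern
    per_frame = rpm / 60.0 * 360.0 / fps   # true rotation between frames
    folded = per_frame % period            # alias into one pattern period
    return folded - period if folded > period / 2 else folded

if __name__ == "__main__":
    for rpm in (100, 140, 160, 200):
        motion = apparent_motion(rpm)
        print(f"{rpm} rpm: {motion:+.1f} deg/frame "
              f"({'backwards' if motion < 0 else 'forwards'})")
```

With these toy numbers the reversal starts just above 150 rpm; a different monitor refresh rate shifts that threshold, which is exactly the interference mentioned above.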
Is it also okay to suggest studies that are not Google-Docs-based? I would like to test whether suggestibility to a visual illusion is linked to depression and/or anxiety symptoms, but I don't think that will work in Google Docs. Rather, I would do it in an online study builder (e.g., https://lab.js.org/).
Okay, I'm ROFL at this because didn't Neike in the Links thread joke about why aren't we a cult? The Rightful Caliph is now putting us to work for the good of the Family so here we go! 🤣
Second, I hope there will be an "if you're not in the USA" option, because it's frustrating to be asked questions that are specifically USA-centric. On the other hand, if the survey takers want USA answers only, making that clear will avoid a lot of confusion.
Third, the race/gender/whatsit options are going to be pecked to death with "you didn't include this/you put that under the other heading and it should have gone here" but again, this is what you must expect on here.
Fourth, yes dammit, I love taking random surveys online so sign me up!
I would be interested to see how responses vary based on subscriber status; could you include that along with the demographic questions?
Also, will data from the survey be publicly available (conditional on participant consent to be in the publicly available dataset)?
I would propose having users email themselves their user ID, in a way that lets them locate it later, and then having them use it again in future surveys in subsequent years. Panel data can answer many questions that a single survey can't.
However, this might cause issues if someone changes some of the info associated with their user ID—e.g., they move to another country or get a degree.
I think that's a feature. I'm not proposing this as a way to save time in future questionnaires, but precisely to see these changes. You can ask almost all the questions again, but attribute them to the same individual. For example, you'll be able to test what fraction of people who are depressed and don't take any treatment no longer report being depressed a year later.
Ah! My assumptions about your intent were incorrect; that makes sense and is a reasonable idea (although some people might not like to have data collected about them over a period of years and all tied to the same identifier; that starts to get a lot less anonymous over time).
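For what it's worth, a minimal sketch of one way the self-emailed ID could be generated (the function and token length are my assumptions, not anything from the post): a random token carries no demographic content, so the identifier alone reveals nothing, though, as noted above, the linked answers accumulating across years still erode anonymity.

```python
# One possible (hypothetical) panel-ID scheme: a random, unguessable
# token with no personal information baked in. Respondents would email
# it to themselves and paste it into next year's survey.

import secrets

def new_panel_id(n_bytes: int = 8) -> str:
    """Random hex token, e.g. 'a3f1c29e5b7d0846'; nothing to reverse-engineer."""
    return secrets.token_hex(n_bytes)

if __name__ == "__main__":
    print("Your survey ID (email this to yourself):", new_panel_id())
```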
Are you going to post the Daniel Ingram meditation survey? That sounds interesting.
Note that this is too short a timeline for many university-based researchers to get IRB/ethics approval to have you include their questions, which means some who submit questions might later have to withdraw, or be uncomfortable even asking without IRB approval. One solution to consider is extending the deadline.
That was my thought too, considering that the holiday season is approaching. I would propose relaxing the tight schedule.
I sent an email with two suggested surveys, one of which I have finished the form for and one of which I am still working on. Can you confirm that you have received this email?
A suggestion on your offer to link to studies that derive from the survey: set conditions for "good" studies, including reporting negative results, and list what those conditions would include (pre-registration, etc.).
Good idea and strategy for execution. It might be helpful to keep in mind that close to half of early drafts of questions designed by professional measurement specialists fail validation, usually on one of a small number of technical criteria. Anyone who cares about the validity of their findings will want to get a measurement scientist involved. In turn, this will entail validation studies on small samples. Short of that, you are likely to have interesting responses to discuss but little more.
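To illustrate what one of those technical criteria looks like, here is a minimal sketch of an internal-consistency check (Cronbach's alpha) of the kind run on small pilot samples; the responses below are made up for the example.

```python
# Cronbach's alpha: a standard internal-consistency check run on
# small pilot samples during scale validation.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) matrix of scale responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

if __name__ == "__main__":
    pilot = np.array([   # made-up 5-point Likert responses, 5 people x 4 items
        [4, 5, 4, 4],
        [2, 1, 2, 3],
        [5, 5, 4, 5],
        [3, 2, 3, 3],
        [1, 2, 1, 2],
    ])
    # Toy data comes out unrealistically clean (~0.95); drafts that fail
    # validation typically land well below the usual 0.7-0.8 benchmarks.
    print(f"alpha = {cronbach_alpha(pilot):.2f}")
```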
Thanks!