After 30+ years in research, and six years on from first presenting on the role of #AI in qualitative research at the 2018 AMRS Conference, I have some thoughts on how we can make research better, faster, smarter and cheaper.
Clearly AI, and generative AI in particular, has come a long way in the last year or so and may well keep improving. But it remains to be seen how many of the jobs we do as researchers AI can do, or more specifically should do, if we are to maintain methodological rigour.
Here's my view on the state of play. I would love to hear from other agency- and client-side researchers, and academics for that matter, on this far-from-exhaustive list.
Study design: Human
This is too important a task to outsource to AI, even though I'm sure AI could produce something that looks credible. A team of people with experience responding to specific briefs and custom-designing studies will do a better job, because they can draw on real-world experience that goes beyond theoretical knowledge. This is particularly true for qualitative components, where the researcher has direct fieldwork experience of speaking to respondents and has learned what does and doesn't work to get to the insight.
Question design: Human/AI
When it comes to individual questions and how to phrase them, I think AI can be quite good. Our questions at Redge aren't written using AI, but I'd be surprised if researchers and clients aren't routinely asking ChatGPT to write them, or at least to get them started.
Scripting: Automation
Surprisingly, this is still a task done by humans much of the time, although self-service platforms are removing the need for this step. At Redge, our surveys are edited and launched programmatically, along the lines of the sketch below.
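To give a flavour of what "programmatically" means here, a minimal sketch in Python: the endpoint, payload fields and survey content are all invented for illustration, not Redge's actual platform (real platforms such as Qualtrics or Forsta each have their own APIs).

```python
import requests

# Hypothetical survey platform API -- the URL and fields below are
# invented for illustration only.
API = "https://api.example-survey-platform.com/v1/surveys"

survey = {
    "title": "Ad test - Wave 3",
    "questions": [
        {"type": "single_choice",
         "text": "Have you seen this ad before?",
         "options": ["Yes", "No", "Not sure"]},
        {"type": "open_end",
         "text": "What was the main message of the ad?"},
    ],
    "status": "live",  # create and launch in one call
}

resp = requests.post(API, json=survey, timeout=30)
resp.raise_for_status()
print("Launched survey:", resp.json().get("id"))
```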
Quant outputs: Algorithm
This is an area where generative AI has been a major disappointment: results are inconsistent and, too often, inaccurate. At Redge, we write a dedicated algorithm for each quant output, which is accurate and blazingly fast.
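As a toy illustration of the difference, here's a minimal sketch of a deterministic quant calculation (the data and metric are hypothetical, not Redge's actual code): a fixed top-two-box computation returns the same correct number every run, which is exactly what a generative model asked to "analyse" the data cannot guarantee.

```python
import pandas as pd

# Hypothetical 5-point satisfaction ratings from a survey export
responses = pd.DataFrame({"satisfaction": [5, 4, 2, 5, 3, 4, 1, 5]})

# Top-two-box: share of respondents answering 4 or 5.
# A fixed calculation like this is exact and near-instant,
# even across millions of rows.
top_two_box = (responses["satisfaction"] >= 4).mean() * 100
print(f"Top-two-box: {top_two_box:.1f}%")
```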
Qual outputs: AI and Human
Qual outputs, particularly the ability to 'read' and summarise large amounts of unstructured text, are where generative AI really shines. At Redge we've spent countless hours writing prompts to do just this. Can a human do this? Of course! But they will take longer and bring biases of their own.
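For readers who haven't tried this, here's a minimal sketch of the general pattern, assuming the OpenAI Python SDK; the verbatims, model name and one-line prompt are placeholders, and production prompts are far more involved than this.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder open-ended responses; a real study would have hundreds
verbatims = [
    "The ad felt warm but I couldn't tell what brand it was for.",
    "Loved the music, skipped it before the end though.",
]

prompt = (
    "Summarise the main themes in these survey responses, "
    "noting roughly how common each theme is:\n\n" + "\n".join(verbatims)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```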
The other really cool thing we can do with AI is extract numeric insight from unstructured data by identifying sentiment and themes. This makes comparisons between data sets really interesting, because it gives us tangible measures for comparing your ad, product or brand with its past or future results, as well as with other ads, products or brands.
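To make that concrete, here's a hedged sketch of the aggregation step: once each verbatim has been tagged with a sentiment (by an AI step like the one above), a simple net score makes two waves directly comparable. The tags and wave names below are invented for illustration.

```python
import pandas as pd

# Hypothetical per-verbatim sentiment tags from an AI tagging step
tagged = pd.DataFrame({
    "wave": ["pre", "pre", "pre", "post", "post", "post"],
    "sentiment": ["positive", "negative", "neutral",
                  "positive", "positive", "neutral"],
})

# Net sentiment = % positive minus % negative, per wave.
# One number per data set lets you compare an ad, product or brand
# against its own past results, or against others.
def net_sentiment(group: pd.Series) -> float:
    return ((group == "positive").mean() - (group == "negative").mean()) * 100

scores = tagged.groupby("wave")["sentiment"].apply(net_sentiment)
print(scores)  # e.g. post: 66.7, pre: 0.0
```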
Implications: Human
How do you bring all these findings together and draw conclusions about what it all means? For now, we believe this is best left to you, the client, or to your trusted research partner. Understanding the totality of the findings and offering judgements on how they relate to your specific business challenges is not something I would outsource to AI. We have a team of experienced researchers on hand if you'd like some help with this.
We'd love to know what you think. What tasks are you doing manually, and what are you using AI for? Has your view on this changed?