Considerations when selecting a platform:
While there is currently a lot of parity among AI-powered MR platforms, there are also important differences. Capabilities vary across platforms:
Some offer only AI-generated overall themes, while a few go beyond a simple list of major themes to something like a full written report in PowerPoint.
Some generate “reports” only in grid format, others only in text format (think Word doc), and some offer both.
Some allow AI-generated reports to be downloaded, while, frustratingly, some do not (Arrrgh!!).
Prices can vary significantly across platforms, and seemingly in very arbitrary ways: some offer ad hoc pricing, while others only offer subscription-based pricing.
Pricing models vary as well: some charge a flat fee per project, some charge per transcribed minute, some charge a recurring flat fee, etc.
Most offer some kind of coding or tagging, but they vary in ease of use, customization of codes/tags, and manual vs. automatic application; stupefyingly, some don’t have “global” application capabilities (applying a code across all transcripts at once).
Most offer transcription capabilities, but quality can vary significantly across platforms; for example, the moderator is sometimes misidentified as a respondent, and vice versa.
Some allow only video uploads, some allow video or audio, some allow only text/transcript uploads, and some allow all three.
Some are “walled gardens,” meaning the AI uses only the data you upload, whereas others bring in “data” from the internet, which can muddle the research findings.
Some offer text-based queries of the data (e.g., transcripts) beyond just a simple report of major themes; some do not.
Some offer video-based and/or text-based FAQs; some do not.
User interfaces truly vary wildly across platforms.
AI-generated themes, reports, and “analysis” can vary wildly across different platforms.
And most disconcertingly, AI-generated themes, reports, and “analysis” can vary even within a single platform – you can run the same query twice on the same platform, one after the other, and get two markedly different results.
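To see why, consider that most of these platforms sit on top of a large language model that samples its output probabilistically. The sketch below is a minimal illustration assuming an OpenAI-style backend; the client, model name, and prompt are stand-ins, not any particular platform’s internals.

```python
# A minimal sketch, assuming an LLM backend reachable through OpenAI's Python
# client; the model name and prompt are hypothetical stand-ins.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "Summarize the three major themes in this focus group transcript: ..."

def run_query(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # typical sampling setting: output varies run to run
    )
    return response.choices[0].message.content

# The exact same query, run twice, one after the other.
first = run_query(PROMPT)
second = run_query(PROMPT)
print("Identical results?", first == second)  # almost always False
```

Lowering the sampling temperature toward zero reduces this variation but rarely eliminates it entirely, and most MR platforms don’t expose that setting at all.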
Best practices for working with an AI platform for analysis
Choose your AI platform carefully: many factors influence what you need from it. You don’t always need all the “bells-n-whistles”; sometimes you just need a simple, accurate download of major themes.
Don’t commit to a yearly subscription if you don’t have the yearly business to offset the subscription costs – experiment with a few platforms before committing to a costly long-term contract.
Try to get a free trial before committing to any platform, regardless of pricing model (e.g., ad hoc or subscription).
Unless you're innately tech-savvy, ask for a free demo of capabilities before committing.
Try to work with platforms that offer free support via video- or text-based FAQs, live chat with a human (ideally with 18-24 hour coverage), or, least preferred, email support.
Do not expect AI-powered MR platforms to be a time-saver in terms of analysis and reporting, and do not trust those claims from any provider. IT’S A FLAT-OUT LIE!! Using AI adds an extra step, time, and effort to the analysis and report-writing of any MR project. Part of the reason it takes longer is that first-run reports are rarely adequate. Oftentimes you need to refine your prompts/queries, which takes time. And to dive deeper into the “data,” you will need to “ask questions” of the data via chat-based querying, assuming the platform provides this capability (see the sketch after this list).
While some may disagree with this idea, AI-generated reports generally sound and feel stilted, mechanical, sterile, and written by an AI; it is therefore not recommended to simply cut-and-paste AI-generated themes into a research report.
Some platforms are clearly targeting the big research agencies, while others are not; this is evident in the capabilities they offer, their pricing strategies, and the types and formats of the outputs they offer.
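For readers who want a concrete picture of what “asking questions” of the data looks like, here is a minimal sketch. It assumes an LLM backend and uses OpenAI’s Python client purely as a stand-in for whatever a given platform runs internally; the file paths, model name, and question are hypothetical.

```python
# A minimal sketch of chat-based querying of your own transcripts; the OpenAI
# Python client stands in for whatever backend your platform actually uses.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Load the transcripts you uploaded (paths are hypothetical).
transcripts = "\n\n".join(
    p.read_text(encoding="utf-8") for p in sorted(Path("transcripts").glob("*.txt"))
)

def ask(question: str) -> str:
    """Answer a question using ONLY the uploaded transcripts (walled-garden style)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": "Answer using only the transcripts provided. "
                        "If the transcripts don't support an answer, say so."},
            {"role": "user",
             "content": f"Transcripts:\n{transcripts}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

# First-pass prompts are rarely adequate; expect to iterate and refine.
print(ask("What frustrations did respondents voice about pricing?"))
```

Note the system instruction restricting answers to the uploaded transcripts: that restriction is essentially what separates the “walled garden” platforms from those that pull in outside data.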
What they can and can't realistically expect from AI
While there may be differences across platforms, most AI platforms generally do a really good job (meaning mostly accurate) of summarizing major themes. Assuming you stop at that, this is the one and only instance in which AI can save a little time in the analysis and reporting phase.
They cannot expect 100% accuracy: all AI platforms will hallucinate, some more than others.
The problem here is that unless you have an eagle eye and a strong familiarity with the nitty-gritty details of the data (transcripts), you may not even notice when an AI platform is hallucinating and has provided you an inaccurate theme, or a theme based on inaccurate AI-powered analysis.
And to make matters worse, if you don’t identify the inaccuracies and hallucinations in the first pass(es), those inaccuracies proliferate through the data and can impact other parts of the overall analysis and reporting. (One simple automated spot-check is sketched after this list.)
While I don’t have direct experience with this, I’ve been told that some people have used AI to generate interesting slides and/or charts, but that it takes a lot of time-consuming prompting to get a usable and effective result.
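As for spot-checking hallucinations, one simple automated check is to verify that every verbatim quote the AI attributes to a respondent actually appears somewhere in your transcripts. The sketch below uses only the Python standard library; the folder path and example quotes are hypothetical.

```python
# A minimal sketch of one automated spot-check for hallucinations: verify that
# every verbatim quote the AI attributes to a respondent actually appears in
# your transcripts. Paths and the quote list are hypothetical.
from pathlib import Path

# Full text of all uploaded transcripts.
corpus = " ".join(
    p.read_text(encoding="utf-8") for p in Path("transcripts").glob("*.txt")
)

def normalize(text: str) -> str:
    """Collapse whitespace and case so trivial formatting differences don't matter."""
    return " ".join(text.lower().split())

corpus_norm = normalize(corpus)

# Quotes lifted from the AI-generated report (hypothetical examples).
ai_quotes = [
    "the price felt fair for what we got",
    "i would never recommend this to a friend",
]

for quote in ai_quotes:
    if normalize(quote) not in corpus_norm:
        print(f"POSSIBLE HALLUCINATION -- quote not found in transcripts: {quote!r}")
```

A check like this only catches fabricated verbatim quotes; it does nothing for subtler misreadings, so the careful human read remains essential.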
And how much work still falls on humans
Strictly in terms of analysis and reporting, AI-generated reports that summarize major themes are best used in two different ways:
Either as a starting place for human analysis
Or as a double-check of your own human analysis
No AI platform is remotely close to matching a human-written report, with its human-powered synthesis and analysis.
And it’s still the job of the researcher/report-writer to identify hallucinations so that inaccuracies do not taint the overall research results.