Perplexity AI
The onboarding experience is genuinely good. Fast, frictionless, no lengthy setup wizard. I was running queries within two minutes of landing. The interface is clean and the Copilot mode stood out immediately — it asked clarifying questions before diving in, which is behaviour most research tools skip entirely. Initial impressions were high enough that I went into Phase 2 expecting it to hold.
This is where it came apart. The citation UI that looked so trustworthy in Phase 1 turned out to be decorative in too many cases. I ran a structured series of verifiable-fact queries across topics I could cross-check independently. The failure rate was higher than I expected from a tool whose entire value proposition is sourced answers. Phase 2 score of 54 is not a rounding error — it reflects genuine problems with the core promise.
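The review does not spell out how the Phase 2 score was computed, but a minimal sketch of one plausible scoring harness — assuming the score is simply the percentage of checked claims that verified cleanly — might look like this (the function name and input shape are hypothetical):

```python
# Hypothetical Phase 2 scoring sketch: each query's answer is fact-checked
# claim by claim, and the phase score is the percentage of claims that
# verified against an independent source.

def phase_score(results):
    """results: list of (claims_checked, claims_verified) tuples, one per query."""
    checked = sum(c for c, _ in results)
    verified = sum(v for _, v in results)
    return round(100 * verified / checked) if checked else 0

# e.g. two query batches: 27 of 50 claims verified -> a score of 54
print(phase_score([(20, 11), (30, 16)]))  # -> 54
```

The point of tallying at the claim level rather than the answer level is that a single answer can mix solid citations with fabricated ones, which is exactly the failure mode described above.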
I ran Perplexity inside actual work for the second half of the test — daily research briefings, competitor analysis, background prep for calls. The results were nuanced. For orientation tasks it genuinely saved time. For anything where accuracy at the fact level mattered, I was spending that saved time doing verification anyway. The habit-formation risk is real: the tool makes it easy to feel done when you are not.
Perplexity earns a place in a research workflow if the scope is orientation, not conclusion. It is fast and the conversational thread handling is genuinely good. But the citation layer is partially theatre — it signals rigour without consistently delivering it. Use it to find the right questions. Do not use it to find the right answers.
What worked:

- Fast orientation on unfamiliar topics
- Copilot mode asks clarifying questions
- Thread-based follow-up questioning is strong
- Clean interface with low-friction entry
- Daily briefing use case: net time positive

What didn't:

- Cites paywalled sources it has not accessed
- Hallucinated statistics attached to real documents
- No consistency checking between conversation turns
- Confident output on under-specified input
- Outdated content presented without a recency caveat
- Multi-step reasoning is retrieval in disguise