Imagine AI Crash Test
This will eventually become a real-world AI test; for now it exists only for review purposes, to preview how the website looks when populated.
The final score ([SCORE]/100) is calculated using weighted averages — not all metrics are equally important.
Example: Core Use Case (15% weight) matters more than Onboarding (10% weight). A tool that's hard to set up but solves the problem brilliantly scores higher than one that's easy to start but mediocre at its job.
The math: Each metric (1-10) is converted to a percentage, multiplied by its weight, and the weighted values are summed. So a 9/10 on Output Quality (15% weight) contributes 9/10 × 100 × 0.15 = 13.5 points to the final score.
Why it matters: This prevents inflated scores from tools that nail the basics but fail at what actually matters — solving your problem reliably.
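The weighted-average math above can be sketched in a few lines of Python. Only Core Use Case (15%), Output Quality (15%), and Onboarding (10%) weights come from the text; the remaining metrics and weights are hypothetical placeholders chosen so the weights sum to 100%.

```python
# Sketch of the weighted-average scoring described above.
# Core Use Case, Output Quality, and Onboarding weights are from the text;
# the rest are hypothetical placeholders so weights total 100%.
METRIC_WEIGHTS = {
    "Core Use Case": 0.15,
    "Output Quality": 0.15,
    "Onboarding": 0.10,
    "Metric D": 0.20,   # hypothetical
    "Metric E": 0.20,   # hypothetical
    "Metric F": 0.20,   # hypothetical
}

def final_score(ratings: dict) -> float:
    """Convert each 1-10 rating to a percentage, weight it, and sum."""
    assert abs(sum(METRIC_WEIGHTS.values()) - 1.0) < 1e-9
    return sum((ratings[m] / 10) * 100 * w for m, w in METRIC_WEIGHTS.items())

# A single metric's contribution: 9/10 on Output Quality (15% weight)
contribution = (9 / 10) * 100 * 0.15
print(contribution)  # 13.5
```

A tool that scores 10/10 on every metric reaches exactly 100; this is why a high weight on Core Use Case outweighs an equally high rating on Onboarding.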
Initial Impact
First Impression: [YOUR OBSERVATIONS]
Stress Test
[YOUR DETAILED STRESS TEST OBSERVATIONS]
Evidence Log: [SUMMARY OF KEY EVIDENCE]
My Stack for This Test
| Category | Tool | Score* |
| --- | --- | --- |
| Coding | Claude Pro | 87/100 |
| Testing | Cursor | 72/100 |
| [CATEGORY] | [TOOL NAME] | [RATING] |
* Affiliate links are used for tools that pass my tests. I earn a commission at no extra cost to you.
Operator Evaluation
Strengths:
- [STRENGTH 1]
- [STRENGTH 2]
- [STRENGTH 3]
Weaknesses:
- [WEAKNESS 1]
- [WEAKNESS 2]
- [WEAKNESS 3]
Evidence Log
[X] Entries
Operator Friction Analysis
How much resistance you'll encounter at each phase
Final Verdict
[MAIN VERDICT STATEMENT]
[DETAILED VERDICT PARAGRAPH 1]
Worth it if:
- [CONDITION 1]
- [CONDITION 2]
- [CONDITION 3]
- [CONDITION 4]
Skip it if: [SKIP CONDITIONS]