
Frequently Asked Questions

Find answers to common questions about the DualMind Arena platform.

General

What is DualMind Arena?
DualMind Arena is a crowdsourced AI evaluation platform. It allows you to compare two AI models side by side using the same prompt. By voting on the better response without knowing the model names beforehand (“blind testing”), the community creates an unbiased leaderboard of model quality.

Is DualMind Arena free to use?
Yes! The platform is free for public use to help democratize AI evaluation.

Why are the comparisons blind?
Knowing a model’s name (e.g., “GPT-4” or “Claude 3”) creates subconscious bias. Blind comparison forces you to evaluate the content of the response, not the brand of the model, leading to more accurate quality data.

Comparisons & Voting

What is the difference between the comparison modes?
In Random Mode, the system selects two different models from the active pool at random. In Topper Mode, the current highest-ranked model is paired against a random challenger. In Manual Mode, you explicitly choose which two models to compare.
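The three modes could be sketched roughly as follows. This is a minimal illustration only: the model names and ratings in the pool are invented, not the platform's actual data.

```python
import random

# Hypothetical pool of model -> rating entries, invented for illustration.
POOL = {"model-a": 1210, "model-b": 1180, "model-c": 1305, "model-d": 1150}

def random_mode():
    """Random Mode: pick two different models uniformly at random."""
    return tuple(random.sample(list(POOL), 2))

def topper_mode():
    """Topper Mode: pair the top-ranked model against a random challenger."""
    top = max(POOL, key=POOL.get)
    challenger = random.choice([m for m in POOL if m != top])
    return top, challenger

def manual_mode(a, b):
    """Manual Mode: the user explicitly chooses both models."""
    assert a in POOL and b in POOL and a != b
    return a, b

print(topper_mode()[0])  # prints "model-c", the highest-rated model in POOL
```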

How does the leaderboard ranking work?
We use an Elo rating system similar to chess. When Model A beats Model B, Model A gains points and Model B loses points. The size of the change depends on the rating difference: beating a higher-rated model awards more points than beating a lower-rated one.

Can I change my vote after submitting it?
Votes are final once submitted to ensure the integrity of the leaderboard and prevent manipulation.

API & Integration

How do I get an API token?
You can obtain an API token by logging into the web interface. This token allows you to submit prompts, retrieve comparisons, and vote programmatically. See the API Reference for details.
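A token-authenticated request might be built like this. The base URL, endpoint path, and bearer-token header are assumptions for illustration; consult the API Reference for the real routes and authentication scheme.

```python
import json
import urllib.request

# Hypothetical values -- replace with the real endpoint and your own token.
BASE_URL = "https://dualmind.example/api/v1"
API_TOKEN = "YOUR_TOKEN_HERE"  # obtained from the web interface

def build_request(path, payload=None):
    """Construct an authenticated request object (not sent here)."""
    data = json.dumps(payload).encode() if payload is not None else None
    return urllib.request.Request(
        f"{BASE_URL}{path}",
        data=data,
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST" if data else "GET",
    )

req = build_request("/comparisons", {"prompt": "Explain Elo in one sentence."})
print(req.get_method())  # -> POST
# urllib.request.urlopen(req)  # would actually send the request
```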

Are there rate limits on the API?
Yes. To ensure fair usage for everyone, the API enforces rate limits; authenticated users have higher limits than anonymous users.

Can I use the comparison data commercially?
Please contact us for enterprise licenses or commercial usage of the comparison data and API.

Still have questions?

API Reference

Technical documentation for developers

Live Platform

Experience the arena yourself