Everyone in AI is talking about Manus. We put it to the test.
Summary
China-based startup Butterfly Effect has launched its general AI agent, Manus, to great interest and hype from some quarters, although few people have access to it.
Drawing comparisons to the AI model DeepSeek, Manus is designed to carry out a wide array of tasks rather than just hold conversations, and its “Manus’s Computer” window shows users what it is doing in real time.
MIT Technology Review tested the AI with three tasks, including finding journalists covering China tech, searching for a New York two-bedroom property under $900k, and nominating innovators for MIT’s Under 35 list.
On the first task, Manus initially provided a short list of five journalists, which it expanded to 30 when pressed for a more complete answer.
For the property search, Manus initially excluded some qualifying listings but adapted well once the criteria were clarified.
On the third task, Manus performed poorly, returning only three candidates when the broad criteria called for 50 and the narrower criteria called for 30.
Manus is intuitive and could suit many users, but its instability, high failure rate, and propensity to crash mean it is not yet ready for widespread adoption.