The tool integration problem that’s holding back enterprise AI (and how CoTools solves it)
1 min read
Summary
Researchers at Soochow University in China have created a new framework called Chain-of-Tools (CoTools) that improves how large language models (LLMs) use external tools.
While current LLMs are very good at text generation and understanding, they must rely on external resources and applications to perform many tasks.
This typically requires them to be trained on, or fine-tuned to use, those specific tools, which limits their flexibility and can degrade their core abilities.
The CoTools system combines aspects of fine-tuning and in-context learning while keeping the core LLM frozen, meaning its original weights and reasoning capabilities are left untouched.
Three key components are used: a Tool Judge, a Tool Retriever and a Tool Caller.
The Tool Judge decides whether a tool is needed at a given point in generation, the Tool Retriever selects the most relevant tool from the available pool, and the Tool Caller uses in-context learning (ICL) prompts to fill in the tool's parameters, which lets the system adopt new or previously unseen tools efficiently.
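To make that division of labor concrete, here is a minimal Python sketch of a Judge → Retriever → Caller flow. The linear scorer, the hash-based embedding, the tool names, and the hidden-state size are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a CoTools-style pipeline; names and scoring
# functions are illustrative assumptions, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)
HIDDEN_DIM = 16  # assumed hidden-state size for the frozen LLM

# Small trainable heads that read the frozen LLM's hidden states.
judge_weights = rng.normal(size=HIDDEN_DIM)                 # Tool Judge: "call a tool now?"
tool_projection = rng.normal(size=(HIDDEN_DIM, HIDDEN_DIM)) # Tool Retriever: project states into tool space

# A pool of tools described only in text (never seen during fine-tuning).
TOOLS = {
    "weather_lookup": "Return the current weather for a named city.",
    "calculator": "Evaluate a basic arithmetic expression.",
}

def embed_text(text: str) -> np.ndarray:
    """Stand-in embedding: hash tokens into a fixed-size vector."""
    vec = np.zeros(HIDDEN_DIM)
    for tok in text.lower().split():
        vec[hash(tok) % HIDDEN_DIM] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-8)

def tool_judge(hidden_state: np.ndarray, threshold: float = 0.0) -> bool:
    """Decide whether a tool call is needed at this generation step."""
    return float(judge_weights @ hidden_state) > threshold

def tool_retriever(hidden_state: np.ndarray) -> str:
    """Pick the tool whose description best matches the current state."""
    query = tool_projection @ hidden_state
    scores = {name: float(embed_text(desc) @ query) for name, desc in TOOLS.items()}
    return max(scores, key=scores.get)

def tool_caller(tool_name: str, context: str) -> str:
    """Build an in-context-learning prompt asking the frozen LLM to fill in parameters."""
    return (
        f"Tool: {tool_name}\n"
        f"Description: {TOOLS[tool_name]}\n"
        f"Context so far: {context}\n"
        "Fill in the tool's arguments as JSON:"
    )

# One generation step: a hidden state from the frozen LLM (random stand-in here).
state = rng.normal(size=HIDDEN_DIM)
if tool_judge(state):
    chosen = tool_retriever(state)
    print(tool_caller(chosen, "What's the weather in Paris?"))
else:
    print("No tool needed; keep generating text.")
```

Because only the small Judge and Retriever heads would be trained while the LLM itself stays frozen, new tools could in principle be added just by writing a description, which is the flexibility the authors emphasize.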
The team believes CoTools can drive the development of more flexible and capable LLM-powered agents that adapt more easily to new tools and APIs.