Summary

  • Carnegie Mellon University researchers Isaac Liao and Albert Gu argue that compressing information can, on its own, enable an AI system to carry out complex reasoning tasks without any pre-training, challenging conventional machine-learning methodology (a rough sketch of the idea follows the summary bullets).
  • To demonstrate this, they developed CompressARC, a system that attempts to solve Abstraction and Reasoning Corpus (ARC) puzzles.
  • These puzzles test abilities thought crucial to general human-like reasoning, and the average human solves 76.2% of them, with experts at 98.5%.
  • Without any pre-training or external training data, CompressARC solved 34.75% of the ARC training-set puzzles and 20% of the evaluation-set puzzles, learning each puzzle from scratch at inference time.
  • Although CompressARC only tackled one specific task, Liao and Gu believe their work shows “intelligence and behavior can be compressed,” and further research may prove their findings extend to many other tasks.
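
To make the "reasoning by compression" idea concrete, below is a minimal, hypothetical sketch of an inference-time compression loop in PyTorch. It is not the authors' CompressARC system: the toy grids, the TinyRule network, and the weight on the parameter-cost penalty are all illustrative assumptions. The only point is that a single puzzle is "solved" by fitting the shortest description (lowest combined code length) that reproduces its example pairs.

```python
# Conceptual sketch of "solve by compressing" at inference time.
# NOT the authors' CompressARC architecture; the puzzle data, network,
# and penalty weight below are hypothetical, chosen only for illustration.
import torch
import torch.nn as nn

# A toy "puzzle": three input->output grid pairs plus one test input (colours 0-9).
train_pairs = [(torch.randint(0, 10, (5, 5)), torch.randint(0, 10, (5, 5))) for _ in range(3)]
test_input = torch.randint(0, 10, (5, 5))

class TinyRule(nn.Module):
    """Maps an input grid to per-cell colour logits (hypothetical architecture)."""
    def __init__(self, colours=10, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(colours, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, colours, 3, padding=1),
        )

    def forward(self, grid):
        onehot = torch.nn.functional.one_hot(grid, 10).permute(2, 0, 1).float().unsqueeze(0)
        return self.net(onehot)  # (1, colours, H, W) logits

model = TinyRule()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Inference-time fitting on this single puzzle only: minimise the bits needed
# to reproduce the example outputs (cross-entropy ~ code length) plus a crude
# penalty standing in for the cost of describing the model itself.
for step in range(300):
    opt.zero_grad()
    code_length = sum(
        nn.functional.cross_entropy(model(x), y.unsqueeze(0)) for x, y in train_pairs
    )
    param_cost = 1e-4 * sum(p.pow(2).sum() for p in model.parameters())
    (code_length + param_cost).backward()
    opt.step()

# The predicted answer is whatever the compressed "rule" decodes for the test input.
prediction = model(test_input).argmax(dim=1).squeeze(0)
print(prediction)
```

In this framing, the cross-entropy term plays the role of the bits needed to encode the outputs given the model, and the parameter penalty crudely stands in for the bits needed to encode the model itself; CompressARC's actual objective and architecture differ.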

By Benj Edwards

Original Article