The open-source AI debate: Why selective transparency poses a serious risk
1 min read
Summary
Artificial intelligence (AI) developers must share all components of their systems for the technology to be considered open source, according to a VentureBeat report.
Meta, for example, launched Llama 3.1 405B as an “open-source AI model”, but released only its pre-trained parameters and some supporting software, which limited collaboration and prevented users from fully building on the model.
Complete transparency would allow the community to understand, analyse and extend AI systems, while independent scrutiny would support ethical oversight and easier debugging.
Greater collaboration could also boost innovation, allowing different industries and domains to create tailored applications without relying on proprietary models.