The Download: stereotypes in AI models, and the new age of coding
Summary
SHADES is a data set designed to help developers spot the harmful stereotypes that surface in AI chatbot responses, in a bid to combat bias in AI.
Existing stereotype-detection tools work only with models trained in English and rely on machine translations, which are often inaccurate, to identify stereotypes in other languages.
To counter this, SHADES was built using 16 languages from 37 geopolitical regions.
Startups are building models that they hope will produce better software and ultimately achieve artificial general intelligence (AGI).
The Gates Foundation is under threat due to the Trump administration’s massive cuts to foreign aid.
Interpolation, rather than generation, could be used to improve AI and produce more accurate results that are not fabricated.
A new era of deepfake fraud is emerging in which fraudsters manipulate video calls in real time.