Summary

  • A group of AI researchers, philosophers and technologists met at a $30m mansion in the US to discuss the end of humanity and the transition to posthumanism.
  • The event, organised by AI entrepreneur Daniel Faggella, tackled a contentious topic: critics argue that the people developing AI are doing so without fully considering its consequences for humanity.
  • Faggella responded that talking about the potential downsides of AI now runs against the interests of the big tech firms competing to develop the technology.
  • The event featured talks on the future of intelligence and how human values might be impossible to translate to AI.
  • One speaker suggested the goal of AI development should not be to hard-code human preferences into future systems, but to build “AI that can seek out deeper, more universal values we haven’t yet discovered”, an approach he called “cosmic alignment”.
  • Another speaker argued that if consciousness is “the home of value”, then building AI without fully understanding consciousness is a dangerous gamble, and that teaching both humans and machines to pursue “the good” was a better goal than forcing AI to obey human commands forever.

By Kylie Robison
