Robust, reliable uncertainties in AI development and applications make the pursuit of global dominance risky, creating pressures toward cooperative stability

Eric Drexler at AI Prospects:

Even experts disagree about current and near-term AI capabilities. Research proceeds along multiple lines, sometimes in secrecy. New algorithmic approaches are reducing or bypassing previously anticipated compute requirements, undermining predictions based on hardware constraints.[1] Specialized models are pushing frontiers in unpredictable directions,[2] the use of external tools by models is proliferating,[3] inference-time reasoning[4] is still in its infancy, extensions to latent-space reasoning[5] may prove transformative, prospective latent-space knowledge models[6] promise to break the link between model size and knowledge scope, and both large concept models[7] and nonautoregressive reasoning models[8] mark departures from sequential token generation architectures. In every application area, patterns of success and failure — even in applying established technologies — have been surprising.[9] No degree of intelligence or investment can eliminate these uncertainties.

More here.
