Licensing is neither feasible nor effective for addressing AI risks

Non-proliferation of AI, modeled on the regimes for nuclear weapons or human cloning, has been suggested as a way to make AI safer by restricting development to a small number of licensed companies and organizations. However, this approach would be difficult to enforce, and open-source AI models are already becoming cheaper and more accessible. Non-proliferation could also concentrate power in a handful of big tech companies, further harming competition and increasing risks such as monoculture and regulatory capture. Instead, a diverse group of academics, companies, and NGOs should develop and evaluate state-of-the-art models to uncover and address AI risks, with the necessary guardrails in place.

https://aisnakeoil.substack.com/p/licensing-is-neither-feasible-nor