On the 4th of July, Nathan Lambert launched “The American DeepSeek Project,” a plan to counter the open-weight large language models (LLMs) from China’s DeepSeek by supporting an American “fully open source model at the scale and performance of current (publicly available) frontier models, within two years.”
It’s an issue dear to his heart. Lambert is a former Hugging Face research scientist (who has also worked at Google DeepMind and Facebook AI Research) and is now a post-training lead at the nonprofit Allen Institute for AI.