Meta has released Llama 3 (70B), its latest open-weight model aimed squarely at competing with GPT-4 and Gemini Pro. This move signals Meta’s belief that the divide between open and closed AI is closing fast.
This is more than an update. By pitching Llama 3 as a match for the best proprietary models, Meta is trying to turn high-end AI into a public resource—if you can afford the H100 clusters needed to run it.
The "open-source" label is still debated: Meta releases the model weights under its own community license but withholds the training data and full training recipe. For most developers, though, the distinction is minor. If Llama 3 performs on par with closed models on benchmarks like MMLU and HumanEval, companies may rethink paying for closed APIs.
Under the hood, Llama 3 marks a big jump in training scale. Meta trained it on roughly 15 trillion tokens, betting that data volume and quality can push transformer models further. The company claims gains in reasoning, coding, and nuance, areas where open models have lagged.
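One way to see why 15 trillion tokens is notable: it far exceeds the compute-optimal budget suggested by the widely cited "Chinchilla" heuristic of roughly 20 training tokens per model parameter. A rough back-of-envelope sketch, using only the public figures above (these are approximations, not Meta's official accounting):

```python
# Back-of-envelope: Llama 3 70B's reported training data vs. the
# "Chinchilla-optimal" heuristic of ~20 tokens per parameter.
# All figures are rough public numbers, not official ones.

params = 70e9            # 70B parameters
tokens_trained = 15e12   # Meta's reported ~15T training tokens

chinchilla_optimal = 20 * params          # ~1.4T tokens
ratio = tokens_trained / chinchilla_optimal

print(f"Chinchilla-optimal tokens: {chinchilla_optimal / 1e12:.1f}T")
print(f"Actual / optimal ratio:    {ratio:.0f}x")
```

By this crude measure Llama 3 was trained on roughly an order of magnitude more data than the compute-optimal point, a deliberate overtraining trade-off that buys better quality at a fixed parameter count, at the cost of more training compute.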
That said, the talk of an "AGI blueprint" needs skepticism. Llama 3 is a refined version of existing transformer tech, not a breakthrough toward human-level intelligence. The hype feels more like marketing than a technical milestone.
The true test will be community use. If Llama 3 handles messy, real-world tasks without a proprietary API’s safety net, Meta could gain a real edge. For now, it’s a solid advance, though the AGI finish line remains distant.