
Why Elon Musk’s AI company ‘open-sourcing’ Grok matters — and why it doesn’t


Elon Musk’s xAI released its Grok large language model as “open source” over the weekend. The billionaire clearly hopes to set his company at odds with rival OpenAI, which despite its name is not particularly open. But does releasing the code for something like Grok actually contribute to the AI development community? Yes and no.

Grok is a chatbot trained by xAI to fill the same vaguely defined role as something like ChatGPT or Claude: you ask it, it answers. This LLM, however, was given a sassy tone and extra access to Twitter data as a way of differentiating it from the rest.

As always, these systems are nearly impossible to evaluate, but the general consensus seems to be that it’s competitive with last-generation medium-size models like GPT-3.5. (Whether you decide this is impressive given the short development time frame or disappointing given the budget and bombast surrounding xAI is entirely up to you.)

At any rate, Grok is a modern and functional LLM of significant size and capability, and the more access the dev community has to the guts of such things, the better. The problem is in defining “open” in a way that does more than let a company (or billionaire) claim the moral high ground.

This isn’t the first time the terms “open” and “open source” have been questioned or abused in the AI world. And we aren’t just talking about a technical quibble, such as picking a usage license that’s not as open as another (Grok is Apache 2.0, if you’re wondering).

To begin with, AI models are unlike other software when it comes to making them “open source.”

If you’re making, say, a word processor, it’s relatively simple to make it open source: you publish all your code publicly and let the community propose improvements or make their own versions. Part of what makes open source valuable as a concept is that every aspect of the application is original or credited to its original creator; this transparency and adherence to correct attribution is not just a byproduct but core to the very concept of openness.

With AI, this is arguably not possible at all, because machine learning models are created by a largely unknowable process in which a tremendous amount of training data is distilled into a complex statistical representation whose structure no human really directed, or even understands. That process cannot be inspected, audited, and improved the way traditional code can, so while such a model still has immense value in one sense, it can never really be open. (The standards community hasn’t even settled on what “open” will mean in this context, but it is actively discussing the question.)

That hasn’t stopped AI developers and companies from branding their models as “open,” a term that has lost much of its meaning in this context. Some call their model “open” if there is a public-facing interface or API. Some call it “open” if they release a paper describing the development process.

Arguably the closest to “open source” an AI model can be is when its developers release its weights: the exact numerical parameters of the countless nodes of its neural networks, which perform vector math operations in a precise order to complete the pattern started by a user’s input. But even “open-weights” models like Llama 2 exclude other important data, such as the training dataset and training process, which would be needed to recreate the model from scratch. (Some projects do go further, of course.)
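To make the idea concrete, here is a toy sketch of what “weights” are. The numbers below are entirely made up for illustration; a real LLM has billions of such parameters, but the principle is the same: releasing the weights means releasing the learned numbers that the fixed vector math runs over.

```python
import math

# Toy illustration: an "open-weights" release is just the learned numbers.
# These small lists stand in for the billions of parameters in a real LLM
# (all values here are hypothetical, chosen only for the example).
W = [[0.2, -0.5],
     [0.7, 0.1]]      # a 2x2 layer weight matrix
b = [0.05, -0.02]     # the layer's bias vector

def layer(x):
    # The model applies vector math in a fixed order: multiply by the
    # weights, add the bias, then squash with a nonlinearity.
    return [math.tanh(sum(xi * W[i][j] for i, xi in enumerate(x)) + b[j])
            for j in range(len(b))]

print(layer([1.0, 2.0]))
```

With the weights in hand, anyone can run, fine-tune, or distill the model; what they still can’t see is how those numbers came to be.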

All this is before even mentioning the fact that it takes millions of dollars in computing and engineering resources to create or replicate these models, effectively restricting who can create and replicate them to companies with considerable resources.

So where does xAI’s Grok release fall on this spectrum?

As an open-weights model, it’s ready for anyone to download, use, modify, fine-tune, or distill. That’s good! It appears to be among the largest models anyone can access freely this way, in terms of parameters (314 billion), which gives curious engineers a lot to work with if they want to test how it performs after various modifications.

The size of the model comes with serious drawbacks, though: you’ll need hundreds of gigabytes of high-speed RAM to use it in this raw form. If you’re not already in possession of, say, a dozen Nvidia H100s in a six-figure AI inference rig, don’t bother clicking that download link.
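A quick back-of-the-envelope calculation shows why. Assuming the commonly used 16-bit (2 bytes per parameter) format for the weights, just holding 314 billion parameters in memory requires on the order of:

```python
# Rough memory estimate for a 314B-parameter model's weights alone.
# Assumes 2 bytes per parameter (16-bit floats); real-world usage is
# higher once activations and framework overhead are included.
params = 314e9
bytes_per_param = 2
gib = params * bytes_per_param / 1024**3
print(f"~{gib:.0f} GiB just to hold the weights")
```

That works out to roughly 585 GiB before you run a single token of inference, which is why the download is only practical on serious multi-GPU hardware.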

And although Grok is arguably competitive with some other modern models, it’s also far, far larger than they are, meaning it requires more resources to accomplish the same thing. There’s always a trade-off among size, efficiency, and other metrics; the release is still valuable, but this is more raw material than final product. It’s also not clear whether this is the latest and best version of Grok, like the clearly tuned version some have access to via X.

Overall, it’s a good thing to release this data, but it’s not a game-changer the way some hoped it might be.

It’s also hard not to wonder why Musk is doing this. Is his nascent AI company really dedicated to open source development? Or is this just mud in the eye of OpenAI, with which Musk is currently pursuing a billionaire-level beef?

If they are really dedicated to open source development, this will be the first of many releases, and they will hopefully take the feedback of the community into account, release other crucial information, characterize the training data process, and further explain their approach. If they aren’t, and this is only done so Musk can point to it in online arguments, it’s still valuable — just not something anyone in the AI world will rely on or pay much attention to after the next few months as they play with the model.


