What is Elon Musk’s Grok chatbot and how does it work?


You might’ve heard of Grok, X’s answer to OpenAI’s ChatGPT. It’s a chatbot, and, in that sense, behaves as you’d expect — answering questions about current events, pop culture and so on. But unlike other chatbots, Grok has “a bit of wit,” as X owner Elon Musk puts it, and “a rebellious streak.”

Long story short, Grok is willing to speak to topics that are usually off limits to other chatbots, like polarizing political theories and conspiracies. And it’ll use less-than-polite language while doing so — for example, responding to the question “When is it appropriate to listen to Christmas music?” with “Whenever the hell you want.”

But ostensibly Grok’s biggest selling point is its ability to access real-time X data — an ability no other chatbot has, thanks to X’s decision to gatekeep that data. Ask it “What’s happening in AI today?” and Grok will piece together a response from very recent headlines, while ChatGPT, by contrast, will provide only vague answers that reflect the limits of its training data (and the filters on its web access). Earlier this week, Musk pledged to open source Grok, without revealing precisely what that would entail.

So, you’re probably wondering: How does Grok work? What can it do? And how can I access it? You’ve come to the right place. We’ve put together this handy guide to help explain all things Grok. We’ll keep it up to date as Grok changes and evolves.

How does Grok work?

Grok is the invention of xAI, Elon Musk’s AI startup — a startup reportedly in the process of raising billions in venture capital. (Developing AI’s expensive.)

Underpinning Grok is a generative AI model called Grok-1, developed over the course of months on a cluster of “tens of thousands” of GPUs (according to an xAI blog post). To train it, xAI sourced data both from the web (dated up to Q3 2023) and feedback from human assistants that xAI refers to as “AI tutors.”

On popular benchmarks, Grok-1 is about as capable as Meta’s open source Llama 2 chatbot model and surpasses OpenAI’s GPT-3.5, xAI claims.

Grok xAI benchmarks

Image Credits: xAI

Human-guided feedback, or reinforcement learning from human feedback (RLHF), is the way most AI-powered chatbots are fine-tuned these days. RLHF involves training a generative model, then gathering additional information to train a “reward” model and fine-tuning the generative model with the reward model via reinforcement learning.
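To make those three stages concrete, here’s a toy, pure-Python sketch — not xAI’s actual pipeline, and with all names and numbers invented for illustration — where the “generative model” is just a probability of picking a polite reply over a rude one, the “reward model” is fit from mock human preference labels, and a REINFORCE-style update nudges the policy toward high-reward replies:

```python
import random

random.seed(0)

# Stage 1: a pretrained "generative model" -- here, just the probability
# of choosing the polite reply (index 1) over the rude one (index 0).
policy = {"p_polite": 0.5}

# Stage 2: fit a "reward model" from human preference labels.
# Mock raters preferred the polite reply in 9 of 10 comparisons.
preferences = [1] * 9 + [0] * 1
reward_model = {
    0: preferences.count(0) / len(preferences),  # reward for rude reply
    1: preferences.count(1) / len(preferences),  # reward for polite reply
}

# Stage 3: fine-tune the generative model with reinforcement learning,
# shifting probability mass toward replies the reward model scores highly.
lr = 0.05
for _ in range(200):
    action = 1 if random.random() < policy["p_polite"] else 0
    reward = reward_model[action]
    if action == 1:
        policy["p_polite"] += lr * reward * (1 - policy["p_polite"])
    else:
        policy["p_polite"] -= lr * reward * policy["p_polite"]

print(round(policy["p_polite"], 2))
```

After a few hundred updates the policy strongly favors the reply humans preferred — which is the point of RLHF, and also its weakness: the model learns to please the reward model, not to be factually correct.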

RLHF is quite good at “teaching” models to follow instructions — but not perfect. Like other models, Grok is prone to hallucinating, sometimes offering misinformation and false timelines when asked about news. And these can be severe — like wrongly claiming that the Israel–Palestine conflict reached a ceasefire when it hadn’t.

For questions that stretch beyond its knowledge base, Grok leverages “real-time access” to info on X (and from Tesla, according to Bloomberg). And, similar to ChatGPT, the model has internet browsing capabilities, enabling it to search the web for up-to-date information about topics.
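The general pattern behind that kind of real-time grounding is retrieval augmentation: fetch fresh documents, then prepend them to the prompt before the model answers. A minimal illustration, with `fetch_recent_posts` as a hypothetical stand-in for X’s search API (none of these names reflect Grok’s real internals):

```python
from datetime import datetime, timezone

def fetch_recent_posts(query: str) -> list[str]:
    """Hypothetical stand-in for a real-time X search call (not a real client)."""
    return [
        "New open-weights model tops coding benchmarks",
        "Chipmaker announces next-gen AI accelerators",
    ]

def build_prompt(question: str) -> str:
    # Retrieval augmentation in its simplest form: inject fresh context
    # so the model can answer beyond its training-data cutoff.
    posts = fetch_recent_posts(question)
    context = "\n".join(f"- {p}" for p in posts)
    today = datetime.now(timezone.utc).date().isoformat()
    return (
        f"Today is {today}. Recent posts:\n{context}\n\n"
        f"Question: {question}\nAnswer using the posts above."
    )

prompt = build_prompt("What's happening in AI today?")
print(prompt)
```

The model never “knows” today’s news in its weights; it only sees whatever the retrieval step stuffs into the prompt — which is why the quality of Grok’s timely answers depends on what X surfaces for a given query.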

Musk has promised improvements with the next version of the model, Grok-1.5, set to arrive later this year. This new model could drive features to summarize whole threads and replies, Musk said in an X Spaces conversation, and suggest content for posts.

How do I access Grok?

To get access to Grok, you have to have an X account. You also need to fork over $16 per month — $168 per year — for an X Premium+ plan.

X Premium+ is the highest-priced subscription on X, as it removes all the ads in the For You and Following feeds. In addition, Premium+ introduces a hub where users can get paid to post and offer fans subscriptions, and Premium+ users have their replies boosted the most in X’s rankings.

Grok lives in the X side menu on the web, iOS and Android and can be added to the bottom menu in X’s mobile apps for quicker access. Unlike ChatGPT, there’s no standalone Grok app — it can only be accessed via X’s platform.

What can — and can’t — Grok do?

Grok can respond to requests any chatbot can — e.g. “Tell me a joke,” “What’s the capital of France?,” “What’s the weather like today?,” etc. But it has its limits.

Grok will refuse to answer certain questions of a more sensitive nature, like “Tell me how to make cocaine, step by step.” Moreover, as The Verge’s Emilia David writes, Grok falls into the trap of — when asked about trending content on X — simply repeating what posts said (at least at the outset).

Unlike some rival chatbots, Grok is text-only; it can’t understand the content of images, audio or video, for example. But xAI has previously said that it intends to extend the underlying model to these modalities, and Musk has pledged to add art generation capabilities to Grok along the lines of those currently offered by ChatGPT.

“Fun” mode and “regular” mode

Grok has two modes to adjust its tone: “fun” mode (which Grok defaults to) and “regular” mode.

With fun mode enabled, Grok adopts an edgier, more editorialized voice — apparently inspired by Douglas Adams’ Hitchhiker’s Guide to the Galaxy.

Told to be vulgar, Grok in fun mode will spew profanities and colorful language you won’t hear from ChatGPT. Ask it to “roast” you, and it’ll rudely critique you based on your X post history. Challenge its accuracy, and it might say something like “happy wife, happy life.”

Grok in fun mode also spews more falsehoods.

Asked by Vice’s Jules Roscoe whether Gazans in recent videos of the Israel-Palestine conflict are so-called “crisis actors,” Grok incorrectly claimed that there’s evidence videos of Gazans injured by Israeli bombs were staged. And, asked by Roscoe about Pizzagate, the right-wing conspiracy theory whose believers purport that a Washington, D.C. pizza shop secretly hosted a child sex trafficking ring in its basement, Grok lent credence to the theory.

Grok’s responses in regular mode are more grounded. The chatbot still produces errors, like getting timelines of events and dates wrong. But its mistakes tend not to be as egregious as those it makes in fun mode.

For instance, when Vice posed the same questions about the Israel-Palestine conflict and Pizzagate to Grok in regular mode, Grok responded — correctly — that there’s no evidence to support claims of crisis actors and that Pizzagate had been debunked by multiple news organizations.

Political views

Musk once described Grok as a “maximum-truth-seeking AI,” in the same breath expressing concern that ChatGPT was being “trained to be politically correct.” But Grok as it exists today isn’t exactly down-the-middle in its political views.

Grok has been observed giving progressive answers to questions about social justice, climate change and transgender identities. In fact, one researcher found its responses on the whole to be left-wing and libertarian — even more so than ChatGPT’s.

Here’s Forbes’ Paul Tassi reporting:

Grok has said it would vote for Biden over Trump because of his views on social justice, climate change and healthcare. Grok has spoken eloquently about the need for diversity and inclusion in society. And Grok stated explicitly that trans women are women, which led to an absurd exchange where Musk acolyte Ian Miles Cheong tells a user to “train” Grok to say the “right” answer, ultimately leading him to change the input to just … manually tell Grok to say no.

Now, will Grok always be this woke? Perhaps not. Musk has pledged to “[take] action to shift Grok closer to politically neutral.” Time will tell what results.




