
Deal on EU AI Act gets thumbs up from European Parliament


The European Parliament voted Wednesday to adopt the AI Act, securing the bloc pole-position in setting rules for a broad sweep of artificial intelligence-powered software — or what regional lawmakers have dubbed “the world’s first comprehensive AI law”.

MEPs overwhelmingly backed the provisional agreement reached in December in trilogue talks with the Council, with 523 votes in favor vs just 46 against (and 49 abstentions).

The landmark legislation sets out a risk-based framework for AI, applying various rules and requirements depending on the level of risk attached to the use-case.

The full parliament vote today follows affirmative committee votes and the provisional agreement getting the backing of all 27 ambassadors of EU Member States last month. The outcome of the plenary means the AI Act is well on its way to soon becoming law across the region — with only a final approval from the Council pending.

Once published in the EU’s Official Journal in the coming months, the AI Act will come into force 20 days later. Implementation is phased, though: the first subset of provisions (prohibited use-cases) bites after six months, while others apply after 12, 24 and 36 months. Full implementation is thus not expected until mid-2027.
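The staggered timetable above amounts to simple date arithmetic from the entry-into-force date. The sketch below illustrates this; the entry-into-force date is hypothetical (the Official Journal publication date was not yet fixed at the time of writing), and only the six-month tier is named in the Act's phase-in as described here:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Return the date the given number of months after d (same day of month)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

# Hypothetical entry-into-force date: 20 days after Official Journal publication.
entry_into_force = date(2024, 8, 1)

# Phase-in offsets, in months, per the article; tier names beyond the
# first are placeholders, since the article does not name them.
phases = {
    6: "prohibited use-cases",
    12: "second tier of provisions",
    24: "third tier of provisions",
    36: "final tier of provisions",
}

for months, name in phases.items():
    print(f"{name}: applies from {add_months(entry_into_force, months)}")
```

With the illustrative August 2024 start date, the 36-month tier lands in August 2027, consistent with full implementation around mid-2027.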

On the enforcement front, penalties for non-compliance can scale up to 7% of global annual turnover (or €35M, if higher) for violating the ban on prohibited uses of AI, while breaches of other provisions on AI systems could attract penalties of up to 3% (or €15M). Failure to cooperate with oversight bodies risks fines of up to 1%.
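The tiered penalty scheme is a "higher of" calculation: a percentage of global annual turnover or a fixed amount, whichever is greater. A minimal sketch, using the figures from the article (the article gives no fixed amount for the non-cooperation tier, so that entry uses zero; the turnover figure in the example is made up):

```python
def max_fine(violation: str, global_turnover_eur: float) -> float:
    """Maximum AI Act fine: the higher of a percentage of global annual
    turnover or a fixed euro amount, depending on the violation tier."""
    tiers = {
        "prohibited_use": (0.07, 35_000_000),    # up to 7% or €35M
        "other_provisions": (0.03, 15_000_000),  # up to 3% or €15M
        "non_cooperation": (0.01, 0),            # up to 1%; no fixed amount stated
    }
    pct, fixed = tiers[violation]
    return max(pct * global_turnover_eur, fixed)

# Example: a company with €1B global annual turnover violating a prohibited use.
# 7% of €1B is €70M, which exceeds the €35M floor.
print(max_fine("prohibited_use", 1_000_000_000))
```

For smaller companies the fixed amounts dominate: at €100M turnover, 7% is only €7M, so the €35M floor would apply instead.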

Speaking during a debate Tuesday, ahead of the plenary vote, Dragoș Tudorache, MEP and co-rapporteur for the AI Act, said: “We have forever attached to the concept of artificial intelligence the fundamental values that form the basis of our societies. And with that alone the AI Act has nudged the future of AI in a human-centric direction. In a direction where humans are in control of the technology and where it, the technology, helps us leverage new discoveries, economic growth, societal progress, and unlock human potential.”

AI Act co-rapporteur, Dragoș Tudorache, gives a thumbs-up to the plenary vote result in European Parliament (Screengrab: Natasha Lomas/TechCrunch)

The risk-based proposal was first presented by the European Commission back in April 2021. It was then substantially amended and extended by EU co-legislators in the parliament and Council, over a multi-year negotiation process, culminating in a political agreement being clinched after marathon final talks in December.

Under the Act, a handful of potential AI use-cases are deemed “unacceptable risk” and banned outright (such as social scoring or subliminal manipulation). The law also defines a set of “high risk” applications (such as AI used in education or employment, or for remote biometrics). These systems must be registered and their developers are required to comply with risk and quality management provisions set out in the law.

The EU’s risk-based approach leaves most AI apps outside the law, as they are considered low risk — with no hard rules applying. But the legislation also puts some (light touch) transparency obligations on a third subset of apps, including AI chatbots; generative AI tools that can create synthetic media (aka deepfakes); and general purpose AI models (GPAI). The most powerful GPAIs face additional rules if they are classified as having so-called “systemic risk” — the threshold at which risk management obligations kick in.

Rules for GPAIs were a later addition to the AI Act, driven by concerned MEPs. Last year lawmakers in the parliament proposed a tiered system of requirements aimed at ensuring the advanced wave of models responsible for the recent boom in generative AI tools would not escape regulation.

However a handful of EU Member States, led by France, pushed in the opposite direction — fuelled by lobbying by homegrown AI startups (such as Mistral) — pressing for a regulatory carve-out for advanced AI model makers by arguing Europe should focus on scaling national champions in the fast developing field to avoid falling behind in the global AI race.

In the face of fierce lobbying, the political compromise lawmakers reached in December watered down MEPs’ original proposal for regulating GPAIs.

It did not grant a full carve-out from the law, but most of these models will face only limited transparency requirements. Only GPAIs trained using compute power greater than 10^25 FLOPs will have to carry out risk assessment and mitigation on their models.
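The two-tier GPAI regime boils down to a single compute cutoff. A sketch of the classification logic, where the 10^25 FLOPs threshold comes from the Act as described above and the model compute figures in the examples are illustrative:

```python
SYSTEMIC_RISK_FLOPS = 1e25  # training-compute threshold for "systemic risk" GPAIs

def gpai_obligations(training_flops: float) -> list[str]:
    """Return the obligations that apply to a GPAI under the Act's two tiers."""
    obligations = ["transparency requirements"]  # baseline, applies to all GPAIs
    if training_flops > SYSTEMIC_RISK_FLOPS:
        # Systemic-risk tier: additional duties kick in above the cutoff.
        obligations += ["risk assessment", "risk mitigation"]
    return obligations

# Illustrative figures: a model trained with 3e24 FLOPs sits below the
# threshold; one trained with 2e25 FLOPs sits above it.
print(gpai_obligations(3e24))
print(gpai_obligations(2e25))
```

The design choice is notable: a bright-line numeric threshold is easy to apply, but as training efficiency improves, more models may cross it without a change in capability.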

Since the compromise deal, it has also emerged that Mistral has taken investment from Microsoft. The US tech giant holds a much larger stake in OpenAI, the US-based maker of ChatGPT.

During a press conference today ahead of the plenary vote, the co-rapporteurs were asked about Mistral’s lobbying — and whether the startup had succeeded in weakening the EU’s rules for GPAIs. “I think we can agree that the results speak for itself,” replied Brando Benifei. “The legislation is clearly defining the needs for safety of most powerful models with clear criteria… I think we delivered on a clear framework that will ensure transparency and safety requirements for the most powerful models.”

Tudorache also rejected the suggestion lobbyists had negatively influenced the final shape of the law. “We negotiated and we made the compromises that we felt were reasonable to make,” he said, calling the outcome a “necessary” balance. “The behaviour and what companies choose to do — they are their decisions — and they have not, in any way, impacted the work.”

“There were interests for all of those developing these models to keep still a ‘black box’ when it comes to the data that goes into these algorithms,” he added. “Whereas we promoted the idea of transparency, particularly for copyrighted material, because we thought it is the only way to give effect to the rights of authors out there.”

Benifei also pointed to the addition of environmental reporting requirements in the AI Act as another win.

The lawmakers added that the AI Act represents the start of a journey for the EU’s governance of AI — stressing the model will need to evolve and be extended with additional legislation in the future, with Benifei pointing to the need for a directive to set rules for the use of AI in the workplace.

“This Act is only the beginning of a longer journey, because AI is going to have an impact that we can’t only measure through this AI Act — it’s going to have an impact on education systems, it’s going to have an impact on our labour market, it’s going to have an impact on warfare,” Tudorache added.

“So there’s a whole new world out there that opens up where AI is going to play a central part and therefore, from this point onwards, as we’re also going to build the governance that comes out of the Act, we’ll have to be very mindful of this evolution of the technology in the future. And be prepared to respond to new challenges that might come out of this evolution of technology.”

Tudorache also reiterated his call from last year for joint working on AI governance between like-minded governments and, even more broadly, wherever agreements could be forged.

“We still have a duty to try to be as interoperable as possible — to be open to build a governance with as many democracies, with as many like-minded partners out there. Because the technology is one, irrespective of which quarter of the world you might be in. Therefore, we have to invest in joining up this governance in a framework that makes sense.”


by Team SNFYI

Facebook is testing a new feature that invites some users — mainly in the US and Canada — to let Meta AI access parts of their phone’s camera roll. This opt-in “cloud processing” option uploads recent photos and videos to Meta’s servers so the AI can offer personalized suggestions, such as creating collages, highlight reels, or themed memories like birthdays and graduations. It can also generate AI-based edits or restyles of those images.

Meta says this is optional and assures users that the uploaded media won’t be used for advertising. However, to enable this, people must agree to let Meta analyze faces, objects, and metadata like time and location. Currently, the company claims these photos won’t be used to train its AI models — but they haven’t completely ruled that out for the future.

Typically, only the last 30 days of photos get uploaded, though special or older images might stay on Meta’s servers longer for specific features. Users have the option to disable the feature anytime, which prompts Meta to delete the stored media after 30 days.

Privacy experts are concerned that this expands Meta’s reach into private, unpublished images and could eventually feed future AI training. Unlike Google Photos, which explicitly states that user photos won’t train its AI, Meta hasn’t made that commitment yet. For now, this is still a test run for a limited group of people, but it highlights the tension between AI-powered personalization and the need to protect personal data.

by Team SNFYI

News Update by Mridul | March 14, 2024

Meesho, an online shopping platform based in Bengaluru, has announced its largest Employee Stock Ownership Plan (ESOP) buyback pool to date, totaling Rs 200 crore. This buyback initiative extends to both current and former employees, providing wealth creation opportunities for approximately 1,700 individuals. Ashish Kumar Singh, Meesho’s Chief Human Resources Officer, emphasized the company’s commitment to rewarding its teams, stating, “At Meesho, our employees are the driving force behind our success.” Singh further highlighted the company’s dedication to providing opportunities for wealth creation despite prevailing macroeconomic conditions.

This marks the fourth wealth generation opportunity at Meesho, with the size of the buyback program increasing each year. In previous years, Meesho conducted buybacks worth over Rs 8.2 crore in February 2020, Rs 41.4 crore in November 2020, and Rs 45.5 crore in October 2021.

Meesho’s profitability journey began in July 2023, making it the first horizontal Indian e-commerce company to achieve profitability. Despite turning profitable, Meesho continues to maintain positive cash flow and focuses on enhancing efficiencies across various cost items. The company’s revenue from operations for FY 2022-23 witnessed a remarkable growth of 77% over the previous year, amounting to Rs 5,735 crore. This growth can be attributed to Meesho’s leadership position as the most downloaded shopping app in India in both 2022 and 2023, increased transaction frequency among existing customers, and a diversified category mix. Additionally, Meesho’s focus on improving monetization through value-added seller services contributed to its revenue growth.

Meesho also disclosed its audited performance for the first half of FY 2023-24, reporting consolidated revenues from operations of Rs 3,521 crore, marking a 37% year-over-year increase. The company achieved profitability in Q2 FY24, with a significant reduction in losses compared to the previous year. Furthermore, Meesho recorded impressive app download numbers, reaching 145 million downloads in India in 2023 and surpassing 500 million downloads in H1 FY 2023-24.

by Team SNFYI

You might’ve heard of Grok, X’s answer to OpenAI’s ChatGPT. It’s a chatbot, and, in that sense, behaves as you’d expect — answering questions about current events, pop culture and so on. But unlike other chatbots, Grok has “a bit of wit,” as X owner Elon Musk puts it, and “a rebellious streak.” Long story short, Grok is willing to speak to topics that are usually off limits to other chatbots, like polarizing political theories and conspiracies. And it’ll use less-than-polite language while doing so — for example, responding to the question “When is it appropriate to listen to Christmas music?” with “Whenever the hell you want.”

But Grok’s biggest ostensible selling point is its ability to access real-time X data — an ability no other chatbots have, thanks to X’s decision to gatekeep that data. Ask it “What’s happening in AI today?” and Grok will piece together a response from very recent headlines, while ChatGPT, by contrast, will provide only vague answers that reflect the limits of its training data (and filters on its web access).

Earlier this week, Musk pledged that he would open source Grok, without revealing precisely what that meant. So, you’re probably wondering: How does Grok work? What can it do? And how can I access it? You’ve come to the right place. We’ve put together this handy guide to help explain all things Grok. We’ll keep it up to date as Grok changes and evolves.

How does Grok work?

Grok is the invention of xAI, Elon Musk’s AI startup — a startup reportedly in the process of raising billions in venture capital. (Developing AI’s expensive.) Underpinning Grok is a generative AI model called Grok-1, developed over the course of months on a cluster of “tens of thousands” of GPUs (according to an xAI blog post). To train it, xAI sourced data both from the web (dated up to Q3 2023) and feedback from human assistants that xAI refers to as “AI tutors.” On popular benchmarks, Grok-1 is about as capable as Meta’s open source Llama 2 chatbot model and surpasses OpenAI’s GPT-3.5, xAI claims.

Human-guided feedback, or reinforcement learning from human feedback (RLHF), is the way most AI-powered chatbots are fine-tuned these days. RLHF involves training a generative model, then gathering additional information to train a “reward” model and fine-tuning the generative model with the reward model via reinforcement learning. RLHF is quite good at “teaching” models to follow instructions — but not perfect. Like other models, Grok is prone to hallucinating, sometimes offering misinformation and false timelines when asked about news. And these errors can be severe — like wrongly claiming that the Israel–Palestine conflict had reached a ceasefire when it hadn’t.

For questions that stretch beyond its knowledge base, Grok leverages “real-time access” to info on X (and from Tesla, according to Bloomberg). And, similar to ChatGPT, the model has internet browsing capabilities, enabling it to search the web for up-to-date information about topics. Musk has promised improvements with the …