
UK government urged to adopt more positive outlook for LLMs to avoid missing ‘AI goldrush’


The U.K. government is taking too “narrow” a view of AI safety and risks falling behind in the AI gold rush, according to a report released today.

The report, published by the parliamentary House of Lords’ Communications and Digital Committee, follows a months-long evidence-gathering effort involving input from a wide gamut of stakeholders, including big tech companies, academia, venture capitalists, media and government.

Among the key findings from the report was that the government should refocus its efforts on more near-term security and societal risks posed by large language models (LLMs) such as copyright infringement and misinformation, rather than becoming too concerned about apocalyptic scenarios and hypothetical existential threats, which it says are “exaggerated.”

“The rapid development of AI large language models is likely to have a profound effect on society, comparable to the introduction of the internet — that makes it vital for the Government to get its approach right and not miss out on opportunities, particularly not if this is out of caution for far-off and improbable risks,” the Communications and Digital Committee’s chairman Baroness Stowell said in a statement. “We need to address risks in order to be able to take advantage of the opportunities — but we need to be proportionate and practical. We must avoid the U.K. missing out on a potential AI goldrush.”

The findings come as much of the world grapples with a burgeoning AI onslaught that looks set to reshape industry and society, with OpenAI’s ChatGPT serving as the poster child of a movement that catapulted LLMs into the public consciousness over the past year. This hype has created excitement and fear in equal doses, and sparked all manner of debates around AI governance — President Biden recently issued an executive order with a view toward setting standards for AI safety and security, while the U.K. is striving to position itself at the forefront of AI governance through initiatives such as the AI Safety Summit, which gathered some of the world’s political and corporate leaders into the same room at Bletchley Park back in November.

At the same time, a divide is emerging over the extent to which this new technology should be regulated.

Regulatory capture

Meta’s chief AI scientist Yann LeCun recently joined dozens of signatories in an open letter calling for more openness in AI development, an effort designed to counter a growing push by tech firms such as OpenAI and Google to secure “regulatory capture of the AI industry” by lobbying against open AI R&D.

“History shows us that quickly rushing towards the wrong kind of regulation can lead to concentrations of power in ways that hurt competition and innovation,” the letter read. “Open models can inform an open debate and improve policy making. If our objectives are safety, security and accountability, then openness and transparency are essential ingredients to get us there.”

And it’s this tension that serves as a core driving force behind the House of Lords’ “Large language models and generative AI” report, which calls for the government to make market competition an “explicit AI policy objective” to guard against regulatory capture from some of the current incumbents such as OpenAI and Google.

Indeed, the issue of “closed” versus “open” rears its head across several pages in the report, with the conclusion that “competition dynamics” will not only be pivotal to who ends up leading the AI/LLM market, but also to what kind of regulatory oversight ultimately works. The report notes:

At its heart, this involves a contest between those who operate ‘closed’ ecosystems, and those who make more of the underlying technology openly accessible. 

In its findings, the committee said it examined whether the government should adopt an explicit position on this matter, vis-à-vis favouring an open or closed approach, concluding that “a nuanced and iterative approach will be essential.” But the evidence it gathered was somewhat coloured by the stakeholders’ respective interests, it said.

For instance, while Microsoft and Google noted they were generally supportive of “open access” technologies, they believed that the security risks associated with openly available LLMs were too significant and thus required more guardrails. In Microsoft’s written evidence, for example, the company said that “not all actors are well-intentioned or well-equipped to address the challenges that highly capable [large language] models present”.

The company noted:

Some actors will use AI as a weapon, not a tool, and others will underestimate the safety challenges that lie ahead. Important work is needed now to use AI to protect democracy and fundamental rights, provide broad access to the AI skills that will promote inclusive growth, and use the power of AI to advance the planet’s sustainability needs.

Regulatory frameworks will need to guard against the intentional misuse of capable models to inflict harm, for example by attempting to identify and exploit cyber vulnerabilities at scale, or develop biohazardous materials, as well as the risks of harm by accident, for example if AI is used to manage large scale critical infrastructure without appropriate guardrails.

But on the flip side, open LLMs are more accessible and serve as a “virtuous circle” that allows more people to tinker with things and inspect what’s going on under the hood. Irene Solaiman, global policy director at AI platform Hugging Face, said in her evidence session that opening access to things like training data and publishing technical papers is a vital part of the risk-assessing process.

What is really important in openness is disclosure. We have been working hard at Hugging Face on levels of transparency […] to allow researchers, consumers and regulators in a very consumable fashion to understand the different components that are being released with this system. One of the difficult things about release is that processes are not often published, so deployers have almost full control over the release method along that gradient of options, and we do not have insight into the pre-deployment considerations.

Ian Hogarth, chair of the U.K. government’s recently launched AI Safety Institute, also noted that we’re in a position today where the frontier of LLMs and generative AI is being defined by private companies that are effectively “marking their own homework” as it pertains to assessing risk. Hogarth said:

That presents a couple of quite structural problems. The first is that, when it comes to assessing the safety of these systems, we do not want to be in a position where we are relying on companies marking their own homework. As an example, when [OpenAI’s LLM] GPT-4 was released, the team behind it made a really earnest effort to assess the safety of their system and released something called the GPT-4 system card. Essentially, this was a document that summarised the safety testing that they had done and why they felt it was appropriate to release it to the public. When DeepMind released AlphaFold, its protein-folding model, it did a similar piece of work, where it tried to assess the potential dual use applications of this technology and where the risk was.

You have had this slightly strange dynamic where the frontier has been driven by private sector organisations, and the leaders of these organisations are making an earnest attempt to mark their own homework, but that is not a tenable situation moving forward, given the power of this technology and how consequential it could be.

The question of regulatory capture, whether avoiding it or striving to attain it, lies at the heart of many of these issues. The very same companies that are building leading LLM tools and technologies are also calling for regulation, which many argue is really about locking out those seeking to play catch-up. Thus, the report acknowledges concerns around industry lobbying for regulations, and around government officials becoming too reliant on the technical know-how of a “narrow pool of private sector expertise” for informing policy and standards.

As such, the committee recommends “enhanced governance measures in DSIT [Department for Science, Innovation and Technology] and regulators to mitigate the risks of inadvertent regulatory capture and groupthink.”

This, according to the report, should:

…apply to internal policy work, industry engagements and decisions to commission external advice. Options include metrics to evaluate the impact of new policies and standards on competition; embedding red teaming, systematic challenge and external critique in policy processes; more training for officials to improve technical know‐how; and ensuring proposals for technical standards or benchmarks are published for consultation.

Narrow focus

However, this all leads to one of the main recurring thrusts of the report’s recommendations: that the AI safety debate has become too dominated by a narrowly focused narrative centered on catastrophic risk, particularly a narrative advanced by “those who developed such models in the first place.”

Indeed, on the one hand the report calls for mandatory safety tests for “high-risk, high-impact models” — tests that go beyond voluntary commitments from a few companies. But at the same time, it says that concerns about existential risk are exaggerated and this hyperbole merely serves to distract from more pressing issues that LLMs are enabling today.

“It is almost certain existential risks will not manifest within three years, and highly likely not within the next decade,” the report concluded. “As our understanding of this technology grows and responsible development increases, we hope concerns about existential risk will decline. The Government retains a duty to monitor all eventualities — but this must not distract it from capitalising on opportunities and addressing more limited immediate risks.”

Capturing these “opportunities,” the report acknowledges, will require addressing some more immediate risks. This includes the ease with which mis- and dis-information can now be created and spread — through text-based mediums and with audio and visual “deepfakes” that “even experts find increasingly difficult to identify,” the report found. This is particularly pertinent as the U.K. approaches a general election.

“The National Cyber Security Centre assesses that large language models will ‘almost certainly be used to generate fabricated content; that hyper‐realistic bots will make the spread of disinformation easier; and that deepfake campaigns are likely to become more advanced in the run up to the next nationwide vote, scheduled to take place by January 2025’,” it said.

Moreover, the committee was unequivocal in its position on using copyrighted material to train LLMs — something that OpenAI and other big tech companies have been doing while arguing that training AI is a fair-use scenario. This is why artists and media companies such as The New York Times are pursuing legal cases against AI companies that use web content to train LLMs.

“One area of AI disruption that can and should be tackled promptly is the use of copyrighted material to train LLMs,” the report notes. “LLMs rely on ingesting massive datasets to work properly, but that does not mean they should be able to use any material they can find without permission or paying rightsholders for the privilege. This is an issue the Government can get a grip of quickly, and it should do so.”

It is worth stressing that the Lords’ Communications and Digital Committee doesn’t completely rule out doomsday scenarios. In fact, the report recommends that the government’s AI Safety Institute should carry out and publish an “assessment of engineering pathways to catastrophic risk and warning indicators as an immediate priority.”

Moreover, the report notes that there is a “credible security risk” from the snowballing availability of powerful AI models, which can easily be abused or malfunction. But despite these acknowledgements, the committee reckons an outright ban on such models is not the answer, both because the worst-case scenarios are, on balance, unlikely to come to fruition and because a ban would be exceedingly difficult to enforce. This is where it sees the government’s AI Safety Institute coming into play, with recommendations that it develop “new ways” to identify and track models once deployed in real-world scenarios.

“Banning them entirely would be disproportionate and likely ineffective,” the report noted. “But a concerted effort is needed to monitor and mitigate the cumulative impacts.”

For the most part, then, the report doesn’t deny that LLMs and the broader AI movement come with real risks. But it says the government needs to “rebalance” its strategy, with less focus on “sci-fi end-of-world scenarios” and more focus on the benefits the technology might bring.

“The Government’s focus has skewed too far towards a narrow view of AI safety,” the report says. “It must rebalance, or else it will fail to take advantage of the opportunities from LLMs, fall behind international competitors and become strategically dependent on overseas tech firms for a critical technology.”






by Vivek Kumar

Atlassian, a leading provider of team collaboration and productivity software, has launched its latest research report, the AI Collaboration Index 2025. The report highlights that 77% of Indian knowledge workers now use generative AI daily, a significant rise from 46% in 2024. This outpaces counterparts in regions including the US (59%), Germany (54%), France (47%) and Australia (45%).

The report, commissioned by Atlassian’s Teamwork Lab, surveyed more than 12,000 knowledge workers worldwide, including over 2,000 respondents in India. It explores how individuals and teams are adapting to the surge in AI adoption, highlighting both major productivity gains and persistent challenges in collaboration.

Even during the early stages of AI adoption, India’s workforce is seeing significant individual productivity benefits. The report found Indian professionals are saving an average of 1.3 hours a day using AI, compared to a global average of just under one hour. How Indian business leaders model AI use has also had an immense impact on their teams. The Index found workers whose managers model AI use are four times more likely to integrate it throughout their daily workflows and three times more likely to become ‘strategic AI collaborators’, meaning they use AI as a team of expert advisors who can enhance decision making.

Molly Sands, Head of the Teamwork Lab at Atlassian, said, “India has become one of the fastest-growing regions for everyday AI use in the workplace. But our research shows that ramping up individual productivity with AI isn’t necessarily translating into real business impact. The next wave of value comes from using AI to connect knowledge, coordinate work, and align teams – bridging silos and driving action on shared goals – we must see knowledge workers shift to become strategic AI collaborators. Organisations that move beyond isolated efficiency gains, of simple AI users, and focus on AI-powered collaboration will unlock the full potential of their people and resources.”

Additional Key Findings from India:

While much research focuses on AI adoption, the Atlassian AI Collaboration Index 2025 goes further, exploring how people perceive AI’s role in the workplace and its broader impact on how we work. The report emphasizes the need for a mindset shift to unlock AI’s full potential, moving from AI as a tool for individual efficiency to AI as a collaborative teammate capable of transforming teamwork. This shift is crucial for Indian organizations to fully capitalize on AI’s opportunities. The AI Collaboration Index 2025 also warns that overemphasis on personal productivity could cost the Fortune 500 an estimated $98 billion annually in lost returns on AI investments. Instead, Atlassian advocates for a shift towards AI-powered teamwork practices, including:

by INC42

Whatever we gather for cooking isn’t consumed entirely. In fact, a large part of the food meant for human consumption remains unused every day. That’s where food can be converted to feed. It’s green, it’s clean, and it guarantees zero landfill, claims Wastelink. “The science behind our business lies in the food that was destined for humans but could not reach humans due to supply chain issues. It can be best utilised to feed animals,” said Saket Dave, whose Wastelink is trying to address two issues with one solution. After…

by Vivek Kumar

Two Brothers Organic Farms (TBOF), the farmer-owned regenerative food company, launched a new brand film this August in celebration of India’s Independence Day. Premiered on the company’s official YouTube channel, the film delivers a bold and timely message: true freedom lies in the power to choose food that is pure, honest, and deeply rooted in intention — food that nourishes both people and the planet.

The film traces the journey of TBOF from a single farm in Maharashtra started by two brothers to a nationwide movement empowering 20,000+ farmers and reaching 700,000+ customers in over 65 countries. “We industrialised, globalised, standardised. But we broke our food system, our soil, our health,” says Ajinkya Hange, co-founder and farmer of Two Brothers Organic Farms. The narrative is based on the real-life journey of co-founders Satyajit and Ajinkya Hange, who left their corporate careers to return to their ancestral village in Maharashtra and rebuild a system that puts farmers and the soil first.

Two Brothers Organic Farms seeks to inspire a new generation not just to eat better, but to think differently about food, sustainability, and agriculture. Launched during India’s Independence Day week, the brand film is a call to action: to seek freedom from artificial ingredients and factory-processed foods. Through this campaign, the brand aims to spark a deeper conversation around food and freedom, urging consumers to question what’s on their plate, make conscious choices, and take pride in India’s rich agricultural heritage. It’s a reminder that true independence lies in everyday decisions that shape our health, our farmers’ livelihoods, and the future of the planet.

The film also captures TBOF’s scale and integrity, showing its mega kitchens in Bhodani, its real-time traceability technology, and its upcoming warehouse expansions. Every aspect of the brand’s supply chain is backed by third-party testing and global certification standards, while farmer training programs continue to expand across India. The film reinforces that TBOF’s greatest strength lies in its people: farmers, cooks, villagers, and the communities who are redefining what it means to grow and consume food.

“We are not just a brand; we are a movement to fix the broken food system,” says Satyajit Hange, co-founder and farmer of Two Brothers Organic Farms. “Through this campaign, we want to highlight that food sovereignty is a form of freedom too — the freedom to grow, eat, and live with dignity. We are redefining what it means to build a brand rooted in purpose and powered by people.”

The film reflects TBOF’s mission to build a future where food is nourishing for the soil, the farmer, and the consumer, and redefines progress by returning to roots.