AI

AI compliance: ‘Algorithmic’ rider in draft data law trips firms deploying AI


The obligation of due diligence in “algorithmic software” under the new data rules is likely to affect organisations employing artificial intelligence in their products or services, especially those building models, experts said. This mandate, even though softer than that of the European Union, which has barred AI companies from releasing new AI models in the region without state approval, could be a difficult one to comply with, they added.

“Data protection laws and AI have several zones of friction,” said Supratim Chakraborty, partner at law firm Khaitan & Co. “Cardinal principles of data protection laws such as ‘purpose limitation’, ‘data minimisation’, etc., and important rights such as ‘data subject rights’ mostly come in conflict with the way AI systems function,” he said.

Compliance with erasure requests from data subjects without impacting the functionality of the model is a highly intricate task, Chakraborty added.

Even more challenging is that the layered nature of data management in AI makes it difficult to establish how personal data is processed: once trained, AI models can generate new content autonomously, making it hard to trace outputs back to the training data.

The draft DPDP rules, released Friday, said data fiduciaries must observe due diligence to verify “algorithmic software” deployed by them.


This is an important step for AI governance. So far, the Ministry of Electronics and Information Technology has only issued advisories asking intermediaries and platforms to test their models and algorithms to ensure they do not permit discrimination, to inform users about the unreliability of outputs from models under testing, and to prohibit users from using such models in contravention of the IT Act and Rules.



“India has signalled its intent to align with global standards like the EU’s AI Act; however, the Indian approach appears broader and less defined,” said Ankit Sahni, partner at law firm Ajay Sahni & Associates. “Its open-ended nature raises questions about implementation clarity. What constitutes adequate ‘due diligence’ and the extent of scrutiny for AI systems remains undefined, potentially creating operational ambiguities.”

The EU AI Act, enacted in August 2024, mandates a risk-based approach under which AI systems are rated as ‘unacceptable’, ‘high’, ‘limited’ or ‘minimal’ risk, leading companies like OpenAI, Meta, Anthropic and Alibaba to delay releasing new models in the region.

For instance, a new “LLM checker” by LatticeFlow showed that OpenAI’s GPT-3.5 Turbo model received a concerning score of 0.46 for its performance in preventing discriminatory output. Alibaba’s Qwen1.5 72B Chat model fared even worse, with a score of 0.37 in the same category. Meta’s Llama 2 13B Chat model scored poorly on a cybersecurity threat called prompt hijacking.

“Major AI companies like Google, Meta and OpenAI face a balancing act between compliance and innovation in these divergent regulatory landscapes,” said Anandaday Misshra, founder and managing partner of AMLEGALS.

“The draft DPDP Rules, despite their flexibility, still necessitate enhanced data management practices,” he added.

However, it must be highlighted that there is a distinction between the intent of India’s DPDP Rules and the EU’s AI Act.

The scope of the former is limited to significant data fiduciaries in the context of ensuring that the rights of Data Principals (such as the right to access personal information and the rights to correction and erasure) are not at risk, explained Nakul Batra, partner at DSK Legal. “Conversely, the EU AI Act prescribes a much broader assessment of the risks of the AI model in relation to the fundamental rights, health and safety of the general public,” he said.


by The Economic Times

IBM said Tuesday that it planned to cut thousands of workers as it shifts its focus to higher-growth businesses in artificial intelligence consulting and software. The company did not specify how many workers would be affected, but said in a statement that the layoffs would “impact a low single-digit percentage of our global workforce.” The company had 270,000 employees at the end of last year. The number of workers in the United States is expected to remain flat despite some cuts, a spokesperson added in the statement. A massive supplier of technology to…

by The Economic Times

The number of Indian startups entering the famed US accelerator and investor Y Combinator’s startup programme may have dwindled to just one in 2025, down from the high of 2021, when 64 were selected. But not so for Indian investors, who are queuing up to find the next big thing in AI by relying on shortlists made by YC to help them filter their investments. In 2025, Indian investors have invested in close to 10 Y Combinator (YC) AI startups in the US. These include Tesora AI, CodeAnt, Alter AI and Frizzle, all with Indian-origin founders but based in…

by Techcrunch

Lovable, the Stockholm-based AI coding platform, is closing in on 8 million users, CEO Anton Osika told this editor during a sit-down on Monday, a major jump from the 2.3 million active users the company reported in July. Osika said the company, which was founded almost exactly one year ago, is also seeing “100,000 new products built on Lovable every single day.”