
Rubrik’s IPO filing reveals an AI governance committee. Get used to it.


Tucked into Rubrik’s IPO filing this week — between the parts about employee count and cost statements — was a nugget that reveals how the data management company is thinking about generative AI and the risks that accompany the new tech: Rubrik has quietly set up a governance committee to oversee how artificial intelligence is implemented in its business.

According to the Form S-1, the new AI governance committee includes managers from Rubrik’s engineering, product, legal and information security teams. Together, the teams will evaluate the potential legal, security and business risks of using generative AI tools and ponder “steps that can be taken to mitigate any such risks,” the filing reads.

To be clear, Rubrik is not an AI business at its core — its sole AI product, a chatbot called Ruby that it launched in November 2023, is built on Microsoft and OpenAI APIs. But like many others, Rubrik (and its current and future investors) is considering a future in which AI will play a growing role in its business. Here’s why having AI governance could become the new normal.

Growing regulatory scrutiny

Some companies are adopting AI best practices to take the initiative, but others will be pushed to do so by regulations such as the EU AI Act.

Dubbed “the world’s first comprehensive AI law,” the landmark legislation — expected to become law across the bloc later this year — bans some AI use cases that are deemed to bring “unacceptable risk,” and defines other “high risk” applications. The bill also lays out governance rules aimed at reducing risks that might scale harms like bias and discrimination. This risk-rating approach is likely to be broadly adopted by companies looking for a reasoned way forward for adopting AI.

Privacy and data protection lawyer Eduardo Ustaran, a partner at Hogan Lovells International LLP, expects the EU AI Act and its myriad of obligations to amplify the need for AI governance, which will in turn require committees. “Aside from its strategic role to devise and oversee an AI governance program, from an operational perspective, AI governance committees are a key tool in addressing and minimizing risks,” he said. “This is because collectively, a properly established and resourced committee should be able to anticipate all areas of risk and work with the business to deal with them before they materialize. In a sense, an AI governance committee will serve as a basis for all other governance efforts and provide much-needed reassurance to avoid compliance gaps.”

In a recent policy paper on the EU AI Act’s implications for corporate governance, ESG and compliance consultant Katharina Miller concurred, recommending that companies establish AI governance committees as a compliance measure.

Legal scrutiny

Compliance isn’t only meant to please regulators. The EU AI Act has teeth, and “the penalties for non-compliance with the AI Act are significant,” British-American law firm Norton Rose Fulbright noted.

Its scope also goes beyond Europe. “Companies operating outside the EU territory may be subject to the provisions of the AI Act if they carry out AI-related activities involving EU users or data,” the law firm warned. If it is anything like GDPR, the legislation will have an international impact, especially amid increased EU-U.S. cooperation on AI.

AI tools can land a company in trouble beyond AI legislation. Rubrik declined to share comments with TechCrunch, likely because of its IPO quiet period, but the company’s filing mentions that its AI governance committee evaluates a wide range of risks.

The selection criteria and analysis include consideration of how use of generative AI tools could raise issues relating to confidential information, personal data and privacy, customer data and contractual obligations, open source software, copyright and other intellectual property rights, transparency, output accuracy and reliability, and security.

Keep in mind that Rubrik's desire to cover its legal bases could stem from a variety of other motives. It could, for example, serve to show that the company is responsibly anticipating issues, which matters given that Rubrik has previously dealt with not only a data leak and hack, but also intellectual property litigation.

A matter of optics

It goes without saying companies won’t solely look at AI through the lens of risk prevention. There will be opportunities they don’t want to miss, and neither do their clients. That’s one reason why generative AI tools are being implemented despite having obvious flaws like “hallucination” (i.e., a tendency to fabricate information).

It will be a fine balance for companies to strike. On one hand, boasting about their use of AI could boost their valuations, no matter how real said use is or what difference it makes to their bottom line. On the other hand, they will have to put minds at rest about potential risks.

“We’re at this key point of AI evolution where the future of AI highly depends on whether the public will trust AI systems and companies that use them,” the privacy counsel of privacy and security software provider OneTrust, Adomas Siudika, wrote in a blog post on the topic.

Establishing an AI governance committee will likely be at least one way to help build that trust.



