
The Model Works. But There Is Chaos All Around

In fintech, everything starts with analyzing customer behavior: a model at the input, accurate recommendations at the output. In the first few weeks, the model achieves 92.4% accuracy, the team is satisfied, and the business is thriving.

But after a month and a half, accuracy drops to 78%. Why? Because the system is not ready for spikes in production loads. While the model was working in a controlled environment, everything was fine, but when the heat came, no one turned on the air conditioner.
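Degradation like this is cheap to catch if live accuracy is continuously compared against the launch baseline. A minimal sketch, assuming a weekly accuracy feed; the 92.4% baseline comes from the article, while the tolerance, window, and history values are illustrative:

```python
# Minimal sketch: flag accuracy degradation against a launch baseline.
# The tolerance and the history values are illustrative, not from any real system.

def accuracy_alert(weekly_accuracy, baseline=0.924, tolerance=0.05):
    """Return the first week index where accuracy falls more than
    `tolerance` below the launch baseline, or None if it never does."""
    for week, acc in enumerate(weekly_accuracy):
        if baseline - acc > tolerance:
            return week
    return None

# Six weeks of measured accuracy: fine at launch, degraded later.
history = [0.924, 0.921, 0.918, 0.905, 0.862, 0.780]
print(accuracy_alert(history))  # index of the first week that breaches the tolerance
```

A check this small is not MLOps, but wiring it to an alert channel is the difference between noticing the drop in week five and hearing about it from management.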

“Where is MLOps?” you may ask. “In the form of a document in Notion,” answer those who treat AI development services as a one-time delivery rather than an ecosystem.

The Illusion of Control: Why Can’t We See When AI Goes Haywire?

One of the most insidious problems is silence. Even though the algorithm starts to make mistakes, the monitoring panels are still green, the tests pass, and CI/CD works. Nothing is displayed in the logs. There are no alerts. The reality only hits when someone from management asks, “Why did our revenue drop?”

However, signs of the problem appear long before the economy takes a hit:

  • Users complain, “Is this déjà vu, or why are recommendations for home appliances following me around the sports department?”
  • Forecasts are either overestimated or underestimated, although the system stubbornly accepts them as the norm.
  • The QA team says, “Everything went well.” But reality says “No,” hitting the cash register.
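Such silent failures surface earlier when you monitor the distribution of the model's outputs rather than infrastructure metrics alone. A hedged sketch using the Population Stability Index (PSI), a standard drift measure; the category shares below are invented for illustration:

```python
import math

def psi(expected, actual):
    """Population Stability Index between two binned distributions
    (lists of proportions that each sum to 1). Common rule of thumb:
    PSI > 0.2 signals significant drift."""
    eps = 1e-6  # avoid log(0) on empty bins
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

# Share of recommendations per category at launch vs. today.
launch = [0.50, 0.30, 0.20]   # sports, apparel, appliances
today  = [0.20, 0.25, 0.55]   # appliances suddenly dominate

print(psi(launch, today) > 0.2)  # True: drift, even though CI/CD is green
```

The dashboards stay green because nothing crashed; a drift metric stays honest because it watches what the model actually says.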

How One Friday Broke E-Commerce

An N-iX client, a company that had recently adopted AI and built an entire system around it, launches a recommendation model for an online store. Everything is fine until TikTok traffic doubles: S3 freezes, Airflow cannot restart the task, and users are recommended coffee makers instead of T-shirts.

The model? It’s not the model’s fault. It’s just that the world has changed, and the system didn’t notice.

Inside, everything is classic:

  1. The script is in the final_final_v2_fixed folder.
  2. Someone configured the logs… sometime.
  3. They promised to do monitoring “after the holidays.”
  4. DVC broke when the tags were mixed up.
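The source does not show the client's pipeline code, but the "Airflow cannot restart the task" failure is typically mitigated with retries and backoff around flaky I/O. A pure-Python sketch of the idea; `flaky_extract`, the retry counts, and the delays are all hypothetical:

```python
import time

def run_with_retries(task, retries=3, base_delay=1.0, sleep=time.sleep):
    """Re-run a flaky task with exponential backoff instead of letting
    one S3 hiccup kill the whole pipeline. `task` is any zero-arg callable."""
    for attempt in range(retries + 1):
        try:
            return task()
        except Exception:
            if attempt == retries:
                raise  # out of retries: fail loudly, not silently
            sleep(base_delay * 2 ** attempt)

# A hypothetical task that fails twice (e.g. S3 timeouts) and then succeeds.
calls = {"n": 0}
def flaky_extract():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("S3 read timed out")
    return "batch-ok"

print(run_with_retries(flaky_extract, sleep=lambda s: None))  # batch-ok
```

In a real Airflow deployment the same behavior lives in the task configuration rather than hand-rolled code, but the principle is identical: a load spike should cost you a retry, not a weekend.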

The problem is not with AI. The problem is that everything around it is a mess.

“Mimicry Of Correctness”: When Everything Seems To Be Working — But It Isn’t

On the outside, there is artificial intelligence, green control panels, and optimistic reports. Meanwhile, under the hood, there is a final_final_For_Now script that never made it into Git, data from 2020, and an error that no one will know about until the owner of a Nissan Leaf takes out insurance on a Porsche.

No updates, no version control, and CI/CD goes straight from the notebook to production. Anomalies? Only on the slides. The model seems to be “working,” but in reality, it has decided that today is Black Friday, not Tuesday.

This is where things get interesting: the indicators are fine, there are no alerts, but the business is confused and looks at you like a magician who has forgotten his spell. Don’t look for logic. Look for the moment when the AI was left without context.

How To Implement AI Correctly

Experts at N-iX, a company known for developing industrial-grade AI solutions, share their experience. They don’t fix production at the last minute. They immediately create the infrastructure so that the model can be retrained on schedule and does not become obsolete even when user behavior changes.

All updates are logged. This approach allows you to return to any experiment and understand what happened during the process. If something goes wrong, the system will notice it itself. Anomalies are detected before business losses occur.
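The article does not specify how anomalies are detected; a rolling z-score check is one minimal way a system can "notice it itself" before the business does. Everything here, including the window, threshold, and conversion figures, is invented for illustration:

```python
import statistics

def detect_anomalies(series, window=5, z_threshold=3.0):
    """Flag indices whose z-score against the trailing window exceeds
    the threshold. A cheap stand-in for a real anomaly detector."""
    flagged = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = statistics.mean(hist), statistics.pstdev(hist)
        if sigma and abs(series[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

# Daily conversion rate: stable for a week, then a silent collapse on day 7.
daily = [0.031, 0.030, 0.032, 0.031, 0.030, 0.031, 0.030, 0.012]
print(detect_anomalies(daily))  # [7]
```

Production systems use more robust detectors, but even this much turns "why did our revenue drop?" into an alert that fires the same day.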

A/B tests do not drag on for weeks; a couple of hours at most. Data schema changing? Every schema version is tracked in Git, and any change triggers an alert so that the model doesn’t fall apart due to unexpected shifts: a deleted column, a renamed field, or a sudden “null-as-string” format that no one warned about.
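A Git-tracked schema can also be enforced in code at ingestion time, so a deleted column or a "null"-as-string slips no further than the first batch. A minimal sketch; `EXPECTED_SCHEMA` and the field names are assumptions for illustration, not the client's real schema:

```python
# Hypothetical expected schema: column name -> required Python type.
EXPECTED_SCHEMA = {"user_id": int, "amount": float, "category": str}

def validate_row(row, schema=EXPECTED_SCHEMA):
    """Return a list of schema violations for one record: missing
    columns, unexpected columns, and wrong types (e.g. the string
    'null' where a float is expected)."""
    errors = []
    for col, typ in schema.items():
        if col not in row:
            errors.append(f"missing column: {col}")
        elif not isinstance(row[col], typ):
            errors.append(f"bad type for {col}: {type(row[col]).__name__}")
    for col in row:
        if col not in schema:
            errors.append(f"unexpected column: {col}")
    return errors

good = {"user_id": 7, "amount": 19.99, "category": "sports"}
bad  = {"user_id": 7, "amount": "null", "cat": "sports"}  # renamed field + null-as-string

print(validate_row(good))  # []
print(validate_row(bad))
```

In practice this job usually goes to a schema-validation library or a data contract in the pipeline, but the payoff is the same: the break is reported at the boundary, not discovered in the recommendations.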

Before And After AI Infrastructure

Even the smartest model won’t work if it’s surrounded by a vacuum; it’s part of a complex system made up of data, pipelines, monitoring, tests, people, and that invisible logic that ties it all together. And as long as there are holes in the system, any breakthrough will eventually turn into a breakdown.

This is how things used to be — and how they start to work once the model becomes part of the architecture.

| It was | It became (after 4 weeks with the N-iX experts) |
| --- | --- |
| Retrain “by shouting” | Every 3 days + automatic monitoring |
| Error logs only | Full tracing + anomaly alerts |
| Manual A/B testing | Automation, delay no more than 2 hours |
| “Backend said it’s normal” | Schema control via Git + alerts |

Conclusion

If the metrics do not correspond to reality, do not rush to abandon the model. Take a step back. Most often, the problem is not in the numbers, but in the fact that the model lacks a habitat.

In AI/ML engineering, this means that the model (algorithm, basic solution) has been created, but the entire supporting infrastructure is either missing, fragile, or completely disconnected from the environment.

So, for your AI model to work, be prepared to take responsibility for the architecture. However, remember: even the best AI model does not work on its own. Its quality, stability, and value depend not only on the algorithm, but also on the data architecture, pipelines, monitoring, testing, and continuous retraining.

And if you are tired of guessing whether your AI is working, it’s time to call in those who know how to put things in order. Don’t just hope that everything will work out on its own. After a proper AI development service, coffee makers no longer appear in the “Sports” section.
