Rethinking Optimisation: How Satish Saka Is Bringing Structure to Decision-Making in Unstable Data Environments

In fast-moving digital environments, speed is often treated as a competitive advantage. Metrics update in real time, dashboards constantly shift, and teams are expected to respond just as quickly.

But in practice, faster decisions don’t always lead to better outcomes.

For Satish Saka, who has spent years working closely with performance systems, this became clear not through theory, but through repeated patterns in live environments.

“Decisions were happening quickly,” he says, “but not always on data that was ready.”

From Execution to Observation

Satish’s work began in execution-heavy roles: managing campaigns, analysing performance metrics, and working with businesses focused on scaling efficiently.

At first, the process seemed straightforward. Monitor performance, identify changes, and act accordingly.

But over time, something didn’t quite add up.

Metrics like cost per acquisition, return on ad spend, and conversion rates were constantly fluctuating. Some changes reflected genuine performance shifts. Many didn’t.

“In most setups, even a small drop would trigger immediate action,” he explains. “Budgets would be adjusted, campaigns paused, or strategies changed, often within hours.”

This approach was common across teams. It felt logical. But the outcomes were inconsistent.

In several cases, early interventions introduced more instability rather than fixing the issue.

In practice, delaying intervention until data stabilised led to more consistent performance patterns than frequent reactive adjustments did.

When Acting Early Creates More Problems

One particular pattern stood out.

In one campaign environment, a short-term spike in acquisition cost led to an immediate reduction in budget. Within the next couple of days, performance began to stabilise on its own. But by then, the earlier intervention had already disrupted the campaign’s learning phase, delaying recovery and extending volatility.

Situations like this weren’t rare.

They highlighted something deeper: the problem wasn’t always the decision itself, but the timing of it.

“You can see movement in numbers,” Satish says, “but that doesn’t always mean there’s a clear signal. Acting too early can sometimes amplify the problem.”

Identifying the Missing Layer

This led to a shift in how he approached optimisation.

Instead of focusing only on what to do next, the more important question became:
“Is the data ready for a decision at all?”

Most optimisation workflows follow a simple loop:

Observe → Act

What was missing, Satish realised, was a layer in between: a way to evaluate whether the observed data was reliable enough to justify action.

Without that, even well-intended decisions could be based on incomplete or unstable information.

Building the MDU Engine

The MDU Engine emerged from this gap, not initially as a product idea, but as a structured way to reduce repeated mistakes.

“It wasn’t something I set out to build as a product,” Satish reflects. “It started as a way to make sense of patterns I kept seeing across different campaigns.”

At its core, the system focuses on decision readiness.

Instead of reacting directly to performance changes, it evaluates the underlying quality of the data using factors such as:

  • Consistency over time
  • Sufficiency of data volume
  • Directional clarity of signals
  • Potential downside risk of acting early

Based on this, outcomes are not limited to action. They include:

  • Scale
  • Hold
  • Reduce
  • Or, importantly, no action

That last outcome, doing nothing, is often the hardest to justify in fast-paced environments, but in many cases it is the most appropriate one.

“The goal isn’t to slow things down unnecessarily,” he says. “It’s to make sure we’re not acting on noise.”

The approach has been applied across multiple campaign environments where high variability in data required a more structured decision process.
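To make the idea concrete, here is a minimal sketch of what a decision-readiness gate along these lines could look like. This is not the MDU Engine itself, whose internals are not described here; the thresholds, the coefficient-of-variation stability check, and the function and class names are all illustrative assumptions.

```python
from enum import Enum
from statistics import mean, pstdev


class Decision(Enum):
    SCALE = "scale"
    HOLD = "hold"
    REDUCE = "reduce"
    NO_ACTION = "no action"


def readiness_decision(daily_cpa, min_days=7, max_cv=0.25):
    """Hypothetical readiness gate: act only when the cost-per-acquisition
    series is long enough (sufficiency) and stable enough (consistency)
    to carry a clear directional signal."""
    # Sufficiency: too few observations means no decision at all.
    if len(daily_cpa) < min_days:
        return Decision.NO_ACTION

    avg = mean(daily_cpa)
    cv = pstdev(daily_cpa) / avg  # coefficient of variation
    # Consistency: a noisy series is treated as noise, not signal.
    if cv > max_cv:
        return Decision.NO_ACTION

    # Directional clarity: compare the recent window with the baseline,
    # and only act on clear movement beyond a 10% band.
    recent = mean(daily_cpa[-3:])
    if recent < 0.9 * avg:
        return Decision.SCALE   # CPA clearly falling
    if recent > 1.1 * avg:
        return Decision.REDUCE  # CPA clearly rising
    return Decision.HOLD
```

The point of the sketch is the shape of the output, not the arithmetic: the function can return "no action" on two separate grounds before any performance comparison is made, which mirrors the article's argument that readiness is evaluated before direction.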

Moving Beyond Reactive Optimisation

As digital systems become more automated, the volume of available data continues to grow. But more data doesn’t automatically lead to better decisions.

In fact, it often creates the opposite problem: too many signals, not enough clarity.

“What looks like control is sometimes just a reaction,” Satish notes. “And reaction, without context, can be costly.”

This becomes especially relevant when multiple campaigns, channels, and variables are running simultaneously. Without a structured approach, even experienced teams can fall into reactive cycles.

A Broader Perspective on Decision Systems

While the MDU Engine is rooted in performance marketing, the underlying idea extends beyond a single domain.

Any system working with fluctuating data, whether in product analytics, financial modelling, or forecasting, runs into the same challenge: distinguishing between meaningful trends and temporary variation.

The concept itself isn’t new. But in practice, it’s often overlooked.

Time pressure, performance expectations, and constant visibility push teams toward immediate action. Structured evaluation tends to take a back seat.

Satish’s work focuses on bringing that missing discipline into real-world environments.

The Human Side of Data

One of the more important insights behind this approach is not technical but behavioural.

Data doesn’t create urgency. People do.

In uncertain situations, the instinct to act is natural. Waiting can feel like losing control. But that instinct, when unchecked, often leads to premature decisions.

“Even experienced teams can overreact to short-term changes,” he says. “Sometimes, introducing a pause is what actually improves outcomes.”

Systems like the MDU Engine are not just analytical tools. They act as guardrails, helping teams step back before stepping in.

Looking Ahead

As organisations continue to rely more heavily on data-driven processes, the challenge is no longer access to data.

It’s interpretation.

And more importantly, timing.

Satish believes that decision-support systems will become increasingly relevant in environments where variability is high and decisions are frequent.

The shift, he suggests, is already happening: from optimisation itself to optimisation readiness.

A Shift in Thinking

Rather than positioning his work as a replacement for existing systems, Satish sees it as an additional layer, one that brings structure to how decisions are validated.

In environments driven by constant data movement, clarity often matters more than speed.

And sometimes, better decisions don’t come from acting faster, but from recognising when not to act at all.
