
Reclaiming Control in the Age of Generative AI

Author: Jeroen Appel
Last Update: September 1, 2025
Published: September 1, 2025

It's safe to say we live in interesting times.

While acknowledging the broader global challenges humanity is facing, this blog zooms in on the potential of GenAI in today’s enterprise technology landscape. It offers a grounded perspective on how to move beyond the hype and leverage AI to meaningfully support your company’s goals.

From ERP to Innovation: The Example of a Cookie Company’s Journey

Imagine: it's 2019, and your cookie-selling company has just 'survived' a migration from SAP Business Suite to SAP S/4HANA. Selling the best cookies in Europe, the company has grown into a large organization with well-established processes and ways of working. A solid ERP implementation certainly helped you focus on what you know best: producing and selling cookies.

Nevertheless, the regional differences and 'customer-delighting procedures' that brought you part of your success were strongly tied to your SAP core implementation. Your team of ABAP and Java developers spent over one and a half years migrating the custom business logic from the previous SAP version to the new one. Most new business requests were put on hold, as they would have interfered with the timeline.

Or, did things go differently?

Saved from Slowdown

Luckily, Gartner is usually there to help us set the right course, and so it was in 2016, when the firm advocated concepts like "keep your core clean." This idea was embraced by large enterprise software vendors like SAP, which began offering extensibility options outside their core systems and has even partnered with Mendix as its preferred low-code platform since 2017.

Your company didn't just follow this trend. It embraced it. A partner was found to move most of the ERP customizations into separate, yet connected, 'systems of differentiation and innovation.' Former ABAP developers transitioned into Mendix consultants. And thanks to your deep Microsoft ecosystem, citizen developers quickly emerged throughout the organization. With Power Apps available by default to all employees, the inevitable rise of modern shadow IT wasn't far behind (a topic that deserves a book of its own).

Despite these growing pains, the benefits became clear. ERP updates were no longer blocked by custom code entanglements. Departments began noticing each other’s digital initiatives and the speed at which they were being delivered.

Your CIO was pleased, too. A well-defined governance model ensured that the apps being built were not only helpful and intuitive, but also secure. You began laying the foundation for a modern data architecture, including a shared catalog of primary data. And as more of your differentiating processes shifted into low-code apps, you could gradually move their data sourcing away from ERP and into a centralized data platform.

The result? Low-code teams could now access data from various systems in consistent, simplified ways, without needing deep knowledge of those back-end platforms. They could spend more time solving real problems and less time navigating technical complexity.

This, in many ways, is what a composable enterprise looks like. Thanks again, Gartner!

Wait... Wasn’t This About AI?

Before we get to that, let’s highlight what your cookie company now enjoys: small teams and clarity in a complex world.

As a Mendix consultant, I have had the privilege of working with many customers of all sizes and ages. One thing that has stood out in my experience so far is that projects integrating directly with large systems like ERP or CRM, compared to those where an integration or data platform was already in place, often required:

  • Larger teams
  • Longer delivery cycles
  • Broader knowledge of the inner workings of those systems and procedures

Why? Because responsibilities couldn't be sufficiently isolated, which meant more people had to understand the broader business and technical context. And in an increasingly complex world, that is your real challenge.

AI Is Modular by Default

I am very pleased to see that most of the 'GenAI magic' we see in production is, or can be, modular by default. Especially with Agentic AI, which has proven to be a real game-changer, you can break it down into the following building blocks (see the sketch after this list):

  • A central agentic building block that ‘understands’ the tools it can use and the sources it can consult
  • An LLM (Large Language Model) to support the decision-making, reasoning, and orchestration
  • A memory, in the shape of a mechanism and a datastore for previous questions and answers
  • Well-defined, pre-configured, and parameterized prompts to provide clear instruction pathways
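
To make that list concrete, here is a minimal sketch of such an agentic loop in Python. Everything in it is illustrative rather than any specific product's API: call_llm() stands in for whichever LLM you choose, the two stub tools stand in for your real business functions, and the JSON tool-calling convention is just one simple way to let the model request a tool.

```python
import json

# Stub tools: stand-ins for your real business functions.
def get_order_status(order_id: str) -> str:
    return f"Order {order_id} is in transit."

def lookup_customer(name: str) -> str:
    return f"Customer {name}: premium tier, EU region."

TOOLS = {"get_order_status": get_order_status, "lookup_customer": lookup_customer}

SYSTEM_PROMPT = (
    "You are an assistant for a cookie company. "
    'If you need data, reply with JSON: {"tool": "<name>", "argument": "<value>"}. '
    "Otherwise, answer the user directly."
)

def call_llm(messages: list[dict]) -> str:
    """Placeholder for whichever LLM you choose (self-hosted or commercial)."""
    raise NotImplementedError

def run_agent(question: str, memory: list[dict]) -> str:
    # Memory: earlier Q&A pairs are simply included in the conversation.
    messages = [{"role": "system", "content": SYSTEM_PROMPT}, *memory,
                {"role": "user", "content": question}]
    for _ in range(5):  # cap the number of reasoning/tool steps
        reply = call_llm(messages)
        try:
            call = json.loads(reply)                       # model asked for a tool
            result = TOOLS[call["tool"]](call["argument"])
            messages.append({"role": "tool", "content": result})
        except (json.JSONDecodeError, TypeError, KeyError):
            memory += [{"role": "user", "content": question},
                       {"role": "assistant", "content": reply}]
            return reply                                   # plain text = final answer
    return "I could not complete this request."
```

The point is not the code itself, but that the four building blocks map onto a small, inspectable amount of orchestration, which is exactly what keeps you in control.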

To clear up a misunderstanding: tools that claim their AI is trained for you personally stretch the truth somewhat. The more honest explanation is that these tools are good at tracking historical questions and answers and at feeding those past interactions into your 'new' prompt in a smart way, whenever that is needed.
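
To make that more tangible, here is a minimal sketch of that 'smart' history handling: embed past questions, retrieve the most similar ones, and prepend them to the new prompt. The embed() helper and the in-memory store are hypothetical stand-ins for whatever embedding model and datastore you actually use.

```python
import math

def embed(text: str) -> list[float]:
    """Placeholder for whichever embedding model you use."""
    raise NotImplementedError

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

history: list[dict] = []  # each entry: {"question", "answer", "vector"}

def remember(question: str, answer: str) -> None:
    history.append({"question": question, "answer": answer, "vector": embed(question)})

def build_prompt(question: str, top_k: int = 3) -> str:
    # Retrieve the most similar past interactions and prepend them as context.
    query = embed(question)
    relevant = sorted(history, key=lambda h: cosine(h["vector"], query), reverse=True)[:top_k]
    context = "\n".join(f"Q: {h['question']}\nA: {h['answer']}" for h in relevant)
    return f"Earlier interactions:\n{context}\n\nNew question: {question}"
```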

The good news is that for most use cases, it's very doable to build this yourself, with the benefit of being in complete control. And with Mendix, you already have the foundational layer to get you started.

What “Being in Control” Actually Means

Control isn't binary; it's contextual. What's essential to your business can differ from use case to use case, and from what matters to other companies. Here are the key control areas to consider:

1. LLM Choice

Most off-the-shelf tools won't let you pick a preferred LLM. That is not necessarily a problem (simplicity is the trade-off), but it becomes one when you want control over which data you share with which party. Additionally, such tools might not pick the very best LLM on the market for your use case, and which model that is can change from week to week in the current climate.
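
As an illustration of what that control can look like when you build, the sketch below keeps the LLM behind a single interface, so switching providers (and therefore data destinations) becomes a configuration change. The provider classes are hypothetical stubs, not real vendor SDK calls.

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """One interface, so the model (and where your data goes) is a config choice."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class EUHostedProvider(LLMProvider):
    def __init__(self, endpoint: str, api_key: str):
        self.endpoint, self.api_key = endpoint, api_key
    def complete(self, prompt: str) -> str:
        raise NotImplementedError  # call your EU-hosted commercial model here

class SelfHostedProvider(LLMProvider):
    def __init__(self, endpoint: str):
        self.endpoint = endpoint
    def complete(self, prompt: str) -> str:
        raise NotImplementedError  # call your own open-weights model here

def get_provider(config: dict) -> LLMProvider:
    # Switching vendors, or regions, becomes a configuration change.
    if config["provider"] == "self-hosted":
        return SelfHostedProvider(config["endpoint"])
    return EUHostedProvider(config["endpoint"], config["api_key"])
```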

2. Data Location

Closely related to the LLM choice: many LLMs and tools are hosted only in the US or other parts of the world. Although more vendors are starting to offer pure EU deployments, you might need complete control over where your data is stored, and over whether your LLM is managed by a commercial party or by a company with which far-reaching agreements can be made.

3. Token Distribution

Most AI tools offer specific functionality for a monthly fee per user. This can add up to a significant monthly investment, especially in large companies, while not all employees will use the tools to their full potential. Here, it helps when you can allocate LLM tokens for the entire company and distribute their usage across departments. This lets you track the financial impact more effectively and evaluate the anticipated business case at a granular level.
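
A minimal sketch of what such company-wide allocation could look like; the departments, budgets, and numbers are purely illustrative.

```python
from collections import defaultdict

# Illustrative monthly allocations from one company-wide token pool.
MONTHLY_BUDGETS = {"sales": 2_000_000, "logistics": 1_000_000, "hr": 500_000}
usage: dict[str, int] = defaultdict(int)

def record_usage(department: str, prompt_tokens: int, completion_tokens: int) -> None:
    usage[department] += prompt_tokens + completion_tokens

def budget_report() -> dict[str, float]:
    # Fraction of the monthly allocation each department has consumed.
    return {dept: usage[dept] / MONTHLY_BUDGETS[dept] for dept in MONTHLY_BUDGETS}

record_usage("sales", prompt_tokens=1_200, completion_tokens=300)
print(budget_report())  # {'sales': 0.00075, 'logistics': 0.0, 'hr': 0.0}
```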

4. LLM Monitoring

Everyone who has ever used one of the chat interfaces of the big LLM companies knows that models can hallucinate and start drifting. When you're in control of the separate modules that form your solution, you're able to monitor the effectiveness of your solution in a detailed way. For example, Datadog allows you to monitor all prompts and answers and analyze the quality, token usage, and anomalies in an automated way.
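
As a sketch of the kind of telemetry that makes this possible, the wrapper below logs structured data about every call (token usage, latency, a crude anomaly flag) that an observability tool such as Datadog could ingest. The complete() call is a placeholder for your actual provider, and the threshold is an example to tune.

```python
import json, logging, time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("llm")

def complete(prompt: str) -> tuple[str, int]:
    """Placeholder returning (answer, tokens_used) from your actual provider."""
    raise NotImplementedError

def monitored_complete(prompt: str, use_case: str) -> str:
    start = time.monotonic()
    answer, tokens = complete(prompt)
    log.info(json.dumps({
        "use_case": use_case,
        "prompt_chars": len(prompt),   # log sizes rather than raw content if needed
        "tokens": tokens,
        "latency_ms": round((time.monotonic() - start) * 1000),
        "flagged": tokens > 4_000,     # crude anomaly flag; tune per use case
    }))
    return answer
```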

5. Knowledge

Every once in a while, you hear someone say that 'every company will be an IT company'. While that can certainly be debated, a similar saying will emerge for using AI in your business. All the knowledge you gain about the inner workings will help you make informed decisions moving forward, with fewer dependencies on an ever-growing number of external parties.

The Buy-vs-Build Dilemma

If you’re starting to think about building something yourself, let’s be realistic: it won’t be production-ready tomorrow. But it could be, say, next month. And in today’s economy of speed, we need to acknowledge the buy-versus-build dilemma.

Buying AI tools can be tempting. They promise a quick start, vendor support, data privacy (if you're lucky), and sometimes even regional hosting. For roughly €20 per user, you're off to the races, and for an additional 'enterprise fee' you get SSO support too. The tool will likely use the most affordable or efficient LLM behind the scenes and assure you that your data won't be used to train its models.

Sounds good, right?

But here’s where the fine print kicks in. These tools often come with limited visibility, little room for customization, and very few levers to control the actual underlying technology. When token prices drop, or a cheaper LLM becomes available, the benefit usually goes to the vendor—not you. And just as you're starting to rely on it, they might raise prices or quietly sunset the product altogether.

Unicorns need to start somewhere, but many of these tools have only been around for a few months and are run by very small teams. At the very least, this is worth including in the equation.

So, what’s the alternative?

The Pragmatic Roadmap

To wrap this up, let’s translate all of this into a practical, realistic roadmap, one that works for enterprises of any size. The key message is simple: start small, learn quickly, and continuously evaluate the best course of action, always with a long-term strategy in mind. Just as we do for our customers using Mendix.

Step 1: Experiment and Stay Informed

Allow your employees to experiment with the tools they find. New tools appear every day, so it's impossible for a centralized department to keep track of them all; besides, your employees will likely use them anyway. Educate people on safe usage (experimenting without sharing sensitive data) and enable enthusiasts to advocate for their tools within the company. This is also a perfect opportunity to polish your security policy, and your security education in general.

Step 2: Establish a Buy-vs-Build Decision Process

Set up a proper internal process that allows employees to submit use cases or tools and, if your company's size requires it, lets other employees vote on them. Blend this into the general software procurement process, but avoid bottlenecks that can be automated or resolved up front. Iterate on a decision tree for choosing between buying and building each AI solution; a starting point is sketched below.
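
As a purely illustrative starting point, such a decision tree might begin like this; every question and threshold is an assumption to iterate on with your own review board.

```python
def buy_or_build(use_case: dict) -> str:
    if use_case["handles_sensitive_data"] and not use_case["vendor_offers_eu_hosting"]:
        return "build"
    if use_case["differentiating_for_business"]:
        return "build"  # keep differentiating logic under your own control
    if use_case["expected_users"] < 25 and use_case["vendor_price_per_user"] <= 20:
        return "buy"
    return "evaluate further"  # escalate to the internal review board

print(buy_or_build({
    "handles_sensitive_data": True,
    "vendor_offers_eu_hosting": False,
    "differentiating_for_business": False,
    "expected_users": 10,
    "vendor_price_per_user": 20,
}))  # -> build
```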

Step 3: Embrace Platform Thinking

As described in this article, a modular approach to (agentic) AI is available and doable, and chances are you already have most of the components in place. Use the very first use case (alright, maybe the second) to start iterating on your approach to 'embedded AI' in your company's software landscape, with clear ownership, separation of concerns, and solid governance, just like the other technologies that support the business today. You can do it, and if you need a hand, a partner is happy to help.

Agentic AI is here. Go Make It.

The potential of Agentic AI is undeniable, and what excites me most is how much control organizations can exercise without adding unnecessary complexity. We are not starting from scratch; we’re building on decades of technological progress, including the rise of composable business architectures and the vital distinction between systems of record and systems of differentiation and innovation.

Seen through this broader lens, AI doesn’t have to become a fragile dependency. Instead, it can be a powerful enabler, if you choose the right tools and the right partners to guide you.

Agentic AI is already here, and asking meaningful questions about your own data is no longer rocket science.

With Mendix, you can unlock this potential in a way that’s sustainable, controlled, and fast—while keeping the freedom to adapt and choose what works best for your business.

Not just for now, but for the decade ahead.

Originally published here.
