
Interview: Is AI Set to Revolutionise Microsoft Dynamics 365 F&O?

I. AI in Daily Life: From Search Engine to Intelligent Assistant

Tomasz Mordel: Let’s start with the basics. Beyond your professional life, how is AI impacting your daily routine? Where do you encounter these tools, and how do you use them—excluding, of course, the ubiquitous social media “fake videos”?

Alexey Khottchenkov: AI is already an inseparable part of our reality, and it’s not going away. Interestingly, at this stage, I find AI more helpful in my everyday life than in the highly specific technical complexities of F&O.

I use it as a “Super Google.” When I face a topic where I’m not proficient—be it medicine, law, travel, or sightseeing—AI is my first point of contact. Previously, Google gave us a list of links that we had to painstakingly sift through, analysing articles and wasting time selecting information. AI provides visibility into a topic in seconds. While the question of trust remains—especially with medical data—as a summary tool for building a knowledge base, AI is unrivalled.

II. Delivering Tangible Value in ERP Systems

Tomasz Mordel: Moving to your area of expertise—in which sections of Finance & Operations does AI deliver the most tangible value today, and who is actually benefiting?

Alexey Khottchenkov: It’s a complex issue. Regarding Dynamics 365 F&O, Microsoft hasn’t yet “heavily dived” into AI to the extent we might hope for. We see the early stages, but it’s not yet at a point where we can be fully satisfied.

Currently, the most significant benefit is in knowledge base exploration. AI is brilliant at investigating documentation and providing a user or consultant with a specific step-by-step guide to achieve a task in the system. However, there is a paradox: while Copilot is thriving in the Office environment (Word, Excel), that success hasn’t fully translated to the F&O user interface or development environment yet. F&O is a narrow, specialised domain with a smaller community, so it hasn’t been the primary focus for Microsoft’s broadest AI updates.

III. Data Security: The Main Brake on Innovation

Tomasz Mordel: So we are at the beginning of the road. In your opinion, what is the main bottleneck preventing faster AI growth in business systems?

Alexey Khottchenkov: The number one blocker is data security. In the enterprise world, this is the top consideration. AI models are essentially public, and no business wants its private financial data, strategies, or intellectual property “squeezed” into a model where it could leak.

This has led to a situation where even Microsoft, despite owning Copilot, does not give it full access to the internal logic of F&O. Copilot doesn’t know the full structure of AOT objects or the specific file formats we use; it only reads what is in public articles. Consequently, when I ask it to generate X++ code, the result is often irrelevant to the existing platform. Without secure access to data and metadata, AI cannot analyse processes or give accurate recommendations.

IV. The Future: Automation, Integrations, and AI Agents

Tomasz Mordel: Let’s look ahead. Where can AI help customers the most once these barriers are removed?

Alexey Khottchenkov: I see several key areas:

  • Routine Automation: Creating a sales order requires a user to fill in numerous fields. Currently, companies spend a lot on manual integrations to automate this. AI could do this dynamically, mapping data from external systems directly into F&O.
  • Data Mining and Analytics: F&O is a massive database. Using AI-enhanced BI or OLAP systems would allow for instant insights without the need to manually build complex exports to Data Lakes.
  • Agents in Microsoft Teams: This is the future. AI Agents could act like virtual colleagues. Instead of opening the ERP, you chat in Teams: “What’s the credit limit for this customer?” or “Can I sign a new multi-million contract?” The AI, acting as a bridge, would query the system API, extract parameters (dates, account IDs), and return the answer as text or an Excel file.
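The "agent as a bridge" idea above can be sketched in a few lines of plain Java. This is a hypothetical stand-in, not Microsoft's agent framework: the natural-language parsing is a simple regex, and the ERP lookup is stubbed with a Map where a real agent would call the F&O OData/REST API. The class name, account format and data are all invented for illustration.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: extract a parameter from a natural-language
// question, query a backing service (stubbed with a Map; a real agent
// would call the F&O API), and return a plain-text answer for Teams.
public class TeamsAgentSketch {

    // Stand-in for the ERP: customer account -> credit limit.
    private static final Map<String, Integer> CREDIT_LIMITS =
            Map.of("C-1001", 50000, "C-2002", 120000);

    // Invented account-number format for the sketch.
    private static final Pattern CUSTOMER_ID = Pattern.compile("C-\\d{4}");

    public static String answer(String question) {
        Matcher m = CUSTOMER_ID.matcher(question);
        if (!m.find()) {
            return "I couldn't find a customer account in your question.";
        }
        String account = m.group();
        Integer limit = CREDIT_LIMITS.get(account);
        if (limit == null) {
            return "No data for customer " + account + ".";
        }
        return "Credit limit for " + account + " is " + limit + ".";
    }

    public static void main(String[] args) {
        System.out.println(answer("What's the credit limit for customer C-1001?"));
    }
}
```

The interesting part in a real system is the middle step: the LLM turns free text into structured parameters (dates, account IDs), and deterministic code does the actual querying, so the model never needs direct access to the data.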

V. The Developer’s Role: Will AI Replace Us?

Tomasz Mordel: You mentioned code generation. Is AI capable of replacing an F&O developer?

Alexey Khottchenkov: At this moment—absolutely not. I’ve tested Copilot for simple tasks, like creating a command-line tool to convert text file formats. AI knows general languages (like C# or Java) brilliantly, but even then, I had to manually fix the code at the end.

With F&O, it’s harder because it’s a proprietary system. AI cannot replace even a junior developer today because it lacks an understanding of the deep architecture that Microsoft hasn’t shared with it. However, it can significantly shrink the time spent on routine code, which will eventually lower implementation costs for clients.

VI. Advice for the CFO: How to Start?

Tomasz Mordel: Finally, what would you advise a sceptical CFO who is wary of AI but feels the pressure to innovate?

Alexey Khottchenkov: I’m a sceptic myself, so persuading another sceptic is hard! But as an optimist, I’d say: AI will conquer the market one way or another. The sooner a business turns toward it, the more beneficial it will be.

For a CFO, the strongest argument is operational cost optimisation. Microsoft is heavily boosting AI agents that can, for example, monitor Azure infrastructure. An agent in the admin cockpit can advise: “You’re paying for a plan that’s too high here,” or “You have unused tools here.” With millions of parameters in the cloud, a human can’t track everything—AI does it instantly. My advice: start with safe tools that support administration and technology, and gradually move toward business processes.

Key Takeaways:

  1. AI as a “Super Google”: Excellent for research and learning, but requires expert verification.
  2. The Security Barrier: Enterprise adoption will remain limited until fully isolated, trusted models are the norm.
  3. The Rise of Agents: The future lies in natural language interaction (voice/text) with F&O via Microsoft Teams.
  4. Support, Not Replacement: AI helps developers and admins reduce costs and double-check work, but it does not replace expert knowledge.

Tomasz Mordel: Alexey, thank you very much for sharing your insights and spending this time with me. Even as a self-proclaimed “optimistic sceptic,” your perspective on the practical trajectory of F&O is incredibly valuable.

Alexey Khottchenkov: Thank you as well! It was a pleasure to discuss these developments. I look forward to seeing how the ecosystem evolves.



Does Generative AI truly help in Software Engineering? AI support using LLMs

Author

Gustaw Szwed – Senior Developer

Created

September 16th, 2025

This piece is a laid-back column, consisting mostly of my personal opinions. But first of all, I recommend reading our two previous articles – one from Krzysztof Dukszta-Kwiatkowski: Frontend developer perspective on using AI code assistants and LLMs, and one in the form of an interview with Łukasz Ciechanowski by Rafał Polański: AI and DevOps: Tools, Challenges, and the Road Ahead – Expert Insights. Now it's time for another case study – a backend development perspective.

My environment and a bit about me

I’ve been working as a software engineer across multiple languages (and dozens of frameworks). These days I stick with Reactive Java and Spring – I wouldn’t call it my first choice, but so far it’s the least painful option and lets me and my team deliver high-quality projects on time. And then ChatGPT appeared with a big bang. As mentioned in the linked articles, it opened a new world for software engineers. It also opened a Pandora’s box of doubts about intellectual property rights – such as generative image AI effectively stealing artwork from artists.

How it all started

At the beginning, software engineers relied mostly on AI chatbots, pasting in questions, code snippets and so on. Nowadays, these assistants are often integrated into our IDEs – I use IntelliJ IDEA Ultimate with the JetBrains AI Assistant plug-in as my daily tools. I have also tried other plugins, like the ones provided straight by OpenAI – I guess everyone recognises their ChatGPT.

As you may see, it’s an integrated chatbot. You can also switch the LLM to a different one – not only those from OpenAI (like GPT-4o), but also from Google (Gemini) or Anthropic (Claude). It also lets you generate code straight in your editor – but with varying results:

As you may see, it generated a sample straight from Jakarta EE (the renamed Java EE), importing jakarta.annotation.PostConstruct instead of the javax.annotation.PostConstruct that my Spring version actually uses. On the other hand, when I have doubts while writing sophisticated Reactive chains, I can always ask the AI assistant for advice – and not only do I get hints, it also explains the steps and the changes it made:

It allows engineers to refactor Reactive code with proper fallbacks (which is not that intuitive for beginners), configure loggers (not only Logback, but also Logbook), and when you want to generate Java records based on a sample JSON (or vice versa), it does the job really well. The same goes for Jackson or Swagger annotations, so it genuinely accelerates development. It also has great knowledge of many external APIs – Keycloak user, role and group management? No problem. OAuth2 support, Kafka brokers, everything needed daily – it shows its potential. But with everything there’s a fly in the ointment: when I asked the AI assistant for a simple CSV file transformation, providing it with a table, it couldn’t work out how to handle some of the strings and simply made up the results. And if developers don’t know what they’re doing and rely only on AI assistants, they will quickly find themselves in a dark alley, holding AI-provided solutions that make no sense at all.
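To make the fallback point concrete: the pattern the assistant helps with in Reactor (onErrorResume / onErrorReturn) boils down to swapping an error signal for a safe default instead of letting it propagate. A minimal sketch of the same idea, written with the JDK's CompletableFuture rather than Reactor so it stays dependency-free – fetchPrice and its failure mode are invented stand-ins for a real remote call:

```java
import java.util.concurrent.CompletableFuture;

// Fallback pattern sketch: recover from an async failure with a default
// value, analogous in spirit to Reactor's onErrorReturn.
public class FallbackSketch {

    // Hypothetical remote call; the boolean just forces success or failure.
    static CompletableFuture<Double> fetchPrice(boolean fail) {
        return fail
                ? CompletableFuture.failedFuture(new IllegalStateException("upstream down"))
                : CompletableFuture.completedFuture(42.0);
    }

    static CompletableFuture<Double> priceWithFallback(boolean fail) {
        // exceptionally() replaces the error with a safe default,
        // so downstream code never sees the exception.
        return fetchPrice(fail).exceptionally(ex -> 0.0);
    }

    public static void main(String[] args) {
        System.out.println(priceWithFallback(false).join()); // 42.0
        System.out.println(priceWithFallback(true).join());  // 0.0
    }
}
```

In a real Reactive chain the non-intuitive part is deciding where in the pipeline the recovery operator belongs – which is exactly the kind of question where an assistant's step-by-step explanation earns its keep, provided you verify it.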

Damage is done

Alongside all the controversy over the questionably legal materials fetched to train LLMs (large language models) and image generation models (like Stable Diffusion), this technology has also allowed engineers like me to greatly increase productivity – but at a cost. LLMs are trained on our existing content: repositories, Stack Overflow questions and answers, news, documentation and articles. Whether you like it or not, you’re part of it now, even if you opt out (and can you truly opt out?). You don’t have to use AI assistants either, but as you saw above, they accelerate coding, so there’s a chance you’ll fall behind. And even as part of this trend, there’s still one big challenge ahead of us: the degradation of training material for LLMs. Stack Overflow and similar portals have seen dramatically decreased traffic, and that’s a really bad situation – no one rates replies, reviews them, gives feedback or promotes a given advisor to a higher rank.

We lost control

Those are bold words, but I can’t find better ones: we control neither the input nor the output of LLMs. No one can guarantee that the results aren’t made up, that they don’t come from unreliable (or worse, shady) sources, or that they won’t violate someone’s intellectual property.

But can you truly trust it? Of course, we use IntelliJ IDEA, WebStorm, Xcode and other IDEs, but those run locally, and we can (at least theoretically) inspect the traffic going in and out of our Mac or PC. With AI assistants, we have no guarantee that our queries don’t flow around the globe.

Corporate policy

Some companies are afraid of that; therefore, they simply don’t allow engineers to use any AI assistant at all, which may end up hurting them. Meanwhile, their colleagues at other companies are improving how they work – helping the DevOps team write Ingress rules, for instance (because they can learn them with an AI assistant in no time), and understanding their code better. Every smart business on the market now provides its own servers with LLMs. It’s not a perfect solution: it’s quite often not up to date with the newest versions of frameworks and libraries, and it usually doesn’t let you pick other LLMs the way the JetBrains AI plug-in does. Maintaining those servers doesn’t come cheap either – not only because of the DevOps work, but also at the resource level, since they require a lot of disk space and computing power. But at least the company stays in control, and that’s a trade-off many prefer. There is also the possibility of running an LLM on a local machine (see Ollama, LM Studio or Hugging Face), but that requires a powerful workstation, and for many engineers who just use a company laptop it may be too much in terms of battery life and thermals – for more on this, I suggest the article by Krzysztof Dukszta-Kwiatkowski mentioned at the beginning of this column.
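For a taste of what "local" means in practice: a self-hosted Ollama instance exposes a plain HTTP endpoint on localhost, so any JDK HttpClient code can talk to it without third-party libraries. The sketch below only builds the request (it never sends it, so it runs without a server); the model name and prompt are placeholders, and the /api/generate endpoint is Ollama's documented generation API.

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Sketch of addressing a locally hosted model via Ollama's HTTP API.
// The request is built but not sent, keeping the example self-contained.
public class LocalLlmRequest {

    public static HttpRequest build(String model, String prompt) {
        // Minimal JSON body for Ollama's /api/generate endpoint;
        // "stream": false asks for a single complete response.
        String body = "{\"model\":\"" + model + "\",\"prompt\":\""
                + prompt + "\",\"stream\":false}";
        return HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:11434/api/generate"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = build("llama3", "Explain Kubernetes Ingress rules briefly");
        System.out.println(req.method() + " " + req.uri());
    }
}
```

The appeal of this setup is exactly the point of the paragraph above: every byte of the query stays on hardware you control, at the price of maintaining the model yourself.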

They took our jobs!!!

That’s the biggest fear among engineers, and one I completely disagree with. With or without the AI trend, the average level of engineering skill has been declining for years. As a technical recruiter, I’ve noticed that most engineers just come “to work”, whereas software engineering is much more than that. They burn out quickly and stop learning new things or following new trends and technologies on the market, including new frameworks. I would rather work with two great engineers than with ten average ones, because knowledge sharing is really time-consuming and doesn’t move the project forward. In recent years companies have been over-recruiting, so now only good engineers can find a great project to be part of – and for them, an AI assistant is a great tool for pushing those projects along at a higher pace.

Conclusions

In summary, while AI assistants like ChatGPT have undeniably changed the way we approach backend development – speeding things up, filling knowledge gaps, and even acting like a second pair of eyes – they’re still just tools. Powerful, yes, but imperfect. As engineers, we have to stay sharp, question the output, and keep learning the fundamentals. Otherwise, we risk becoming over-reliant on something we don’t fully understand or control. The landscape is shifting fast, and it’s up to us to adapt thoughtfully, not blindly follow the trend.