Last Christmas (I gave AI my heart)
Last Christmas, I wrote about how I personally use AI in my daily work. While I anticipated rapid advancements in the field, I still managed to underestimate just how quickly things would evolve. So, let’s get down to it:
From Personal Curiosity to Critical Company-Level Domain Functions
At Pareto, we focus on creating domain functions: implementations that combine LLM pipelines with executable code, delivered through a managed API layer. For systems and companies, this approach aligns seamlessly with traditional data architecture and flow models while unlocking the potential of generative AI engines and the latest advancements.
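To make the idea concrete, here is a minimal sketch of what such a domain function might look like: an LLM step wrapped together with deterministic code behind one callable interface. All names are hypothetical, and the LLM call is stubbed out; the post itself doesn't publish Pareto's actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DomainFunction:
    """Illustrative unit pairing an LLM step with executable post-processing."""
    name: str
    llm_step: Callable[[str], str]       # prompt -> model output (stubbed below)
    post_process: Callable[[str], dict]  # deterministic code around the model

    def __call__(self, payload: str) -> dict:
        raw = self.llm_step(payload)
        return self.post_process(raw)

# Stub standing in for a real model call, so the sketch runs without a provider.
def fake_llm(prompt: str) -> str:
    return f"SUMMARY: {prompt[:40]}"

def to_record(raw: str) -> dict:
    # Turn the model's free-form text into a structured record.
    kind, _, body = raw.partition(": ")
    return {"type": kind.lower(), "body": body}

summarize = DomainFunction("summarize-ticket", fake_llm, to_record)
result = summarize("Customer reports login failure on mobile app")
# result["type"] == "summary"
```

Because the LLM sits behind an ordinary function boundary, the surrounding system can treat it like any other data-transformation step, which is what makes it fit existing data architectures.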
This marks a profound shift from LLMs being isolated, “fun-to-play-with” novelties to becoming fully integrated, enterprise-grade infrastructure solutions that fit perfectly into existing operational models.
Azure and SAP: The Unwavering Duo
There’s an old Finnish mustard commercial about “kahta en vaihda” (two things I won’t change). These days, that phrase aptly applies to Azure infrastructure and SAP. Sure, other ecosystems exist, but when working with companies adopting AI solutions, we encounter these two systems over 80% of the time as core data sources for AI integration.
Many companies also value Azure’s added benefit of providing a sandbox for development and experimentation. Honestly, investing in Microsoft stock doesn’t sound like the worst idea right now.
Expert Productivity With AI vs. Without AI: No Contest
By 2024, it’s become abundantly clear that AI isn’t about replacing knowledge workers; instead, it acts as a powerful amplifier of their abilities. Looking ahead to 2025, experts leveraging AI will simply be in a different league compared to those who don’t. At this stage, enumerating AI-supported use cases feels almost redundant—it can assist with virtually any information-related task.
What’s true for individuals will increasingly hold true for organizations. The gap in efficiency between AI-enabled and non-AI-enabled companies will only continue to widen.
No Single AI to Rule Them All
GPT, Gemini, Llama, Claude, Mistral, and an ever-growing universe of local LLMs. Today, we’re already building solutions where multiple LLMs interact within pipeline architectures, leveraging pre- and post-processing with custom configurations.
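A toy sketch of what such a pipeline can look like: pre-processing, a heuristic router choosing between two engines, and post-processing. The routing rule and both "models" are illustrative stand-ins, not any specific product's architecture; in practice the two functions would call different providers.

```python
def preprocess(text: str) -> str:
    # Normalize whitespace and case before the input reaches any model.
    return " ".join(text.split()).lower()

# Stand-in "engines"; real use would call e.g. a cheap hosted model
# and a stronger (or local) one.
def fast_model(prompt: str) -> str:
    return f"[fast] {prompt}"

def strong_model(prompt: str) -> str:
    return f"[strong] {prompt}"

def route(prompt: str) -> str:
    # Cheap heuristic: longer prompts go to the stronger model.
    return strong_model(prompt) if len(prompt) > 50 else fast_model(prompt)

def postprocess(output: str) -> str:
    # Strip the engine tag before returning to the caller.
    return output.split("] ", 1)[1]

def pipeline(text: str) -> str:
    return postprocess(route(preprocess(text)))

print(pipeline("  Translate THIS   sentence  "))  # translate this sentence
```

The point is that each stage is a plain, swappable function, which is exactly what visual tools then let you compose and explain without code.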
Tools like Flowise and Langflow are making it easier than ever to visually plan and explain these processing chains. This is lowering the barrier to creating tailored AI solutions that perfectly fit specific needs, whether it’s a particular configuration, version, or application.
At this stage, the competition is no longer about “which engine is best,” but rather about which toolchain offers the optimal balance of quality, speed, and deployability for building robust AI-powered data architectures. Interestingly, usability—a concept that once felt overshadowed in AI development—is suddenly taking center stage again, and that’s a fantastic development.
Humans and Organizations Struggling to Keep Pace
This spring, I was preparing a Tuesday-morning presentation on GPT-4’s capabilities—and its limitations. Late Monday night, at 10 PM, OpenAI released GPT-4o (Omni), which rendered a third of my planned examples obsolete, as many of the previously noted issues were significantly resolved.
The truth is, humans struggle to grasp the implications of exponential growth, and we’re witnessing it in real-time with AI advancements. In 2024, LLM capabilities—including engines and toolchains—have taken quantum leaps every three months.
Even software companies that monitor these developments around the clock find it challenging to keep up. For customers who spend the vast majority of their time focused on daily operations, it’s almost impossible to stay current.
Is the End Result All That Matters?
In 2024, we’ve witnessed AI triumph in artistic competitions, creating works that outperformed human competitors in blind reviews. The resulting outrage has been palpable, raising questions about what truly stirs this discomfort. Is it the realization that the collective body of public art has been absorbed to fuel AI’s capabilities, or is it the unsettling truth that, when objectively assessed, we may be losing to AI’s creative power?
This raises a deeper, utilitarian question: does it matter if the end result delivers the desired value? If a piece of art moves us, inspires us, or fulfills its purpose, should it matter whether it was created by a human or an algorithm?
In my view, time and again, we’ve seen that value ultimately prevails over ideology—and I see no reason to believe this situation will be any different.
Predictions on What’s Next
Many organizations and individuals will face challenges hopping aboard the AI hype train. Based on my experience creating and deploying AI solutions, the key is to lower the barriers to entry. Encouraging experimentation and allowing people to explore the boundaries of what AI can do is essential for adoption.
Personal Agents
By the end of 2025, I envision the rise of personal AI agents—dedicated instances running continuously to act as our representatives in the connected data network. These agents could manage cybersecurity, execute tasks, and handle the mundane: navigating customer portals, making selections from drop-down menus, and filling out endless forms. Given current AI capabilities, there’s no reason we should still be doing these chores manually much longer.
What’s especially exciting is the potential for locally running AI agents that work in tandem with cloud services. Imagine having a personal AI instance on your phone, seamlessly syncing with cloud infrastructure and accessible across devices. It’s reminiscent of the old CPU MHz race, followed by the GPU competition. I foresee a similar evolution—a personal “AI enhancer version xx.xx” race shaping up soon.
Let’s see if I’ll be writing about this in my end-of-2025 AI post. 🙂
If you made it this far, I hope you enjoyed the article!
#generativeAI #Pareto #focusonwhatmatters