Four parts, one platform.
Observation
In construction and infrastructure, AI looks different depending on where in the organization you look.
At larger companies, two AI tracks often live side by side. One is top-down — someone in leadership has procured ChatGPT Enterprise or Microsoft Copilot, access for everyone, brief training. The other is bottom-up — a team has built a RAG chatbot on SharePoint that answers questions about contracts or technical specifications. Both work. Both live their own lives.
At small and mid-sized companies, it looks different. There, AI is often an individual initiative. Someone discovered ChatGPT, learned to prompt reasonably well, and started using it daily. A colleague picks up the tip, tries it their own way. Usage is scattered, fragmented, uneven — and growing.
In both patterns, AI is being used. In neither is AI a part of how the organization works.
That's the difference the four parts are about.
Overview
Four distinct levels of AI for organizations today. The journey toward the more advanced levels is also a maturity journey for the whole organization, and the value delivered by the technology grows with each step.
1. Basic AI — the individual gets help with a task. Chat, prompt, generation.
2. Operational AI — AI finds things in your organization's own documents. Sourced answers.
3. Advanced AI — AI performs multi-step work. Agents, automations, background runs.
4. Transformative AI — systems that have learned how you actually work. Custom-built capabilities, organizational memory, AI that produces without being asked.
Few have all four. Those that do have built them over time, with the people and the way of working following along.
Where are you today? And what's the next step?
Part 1
ChatGPT, Claude, Gemini, Copilot. You open a chat window, type a prompt, get an answer.
AI that helps the individual, right now. An estimator writes an email. A project manager summarizes a long meeting note. A bid manager translates a technical text.
This is where most start. ChatGPT Enterprise or Microsoft Copilot procured at the group level, access for everyone, brief training. Or, at smaller companies, a handful of individuals who bought their own licenses and use them daily. Usage is uneven — some build real habits, others forget they have access. The individual user's skill is the entire value.
The 2025 MIT NANDA report puts it this way:
General-purpose tools like ChatGPT excel for individuals because they're flexible, but they stall in enterprise use because they don't learn from or adapt to workflows.
MIT NANDA, 2025
That's not criticism. It's what this part of AI is for. General models, general tasks, individual productivity.
Part 2
RAG — Retrieval-Augmented Generation. AI that searches in your own documents and answers based on what it finds, with citations to page and line.
That's where AI starts talking with the organization. A design manager asks about a technical specification and gets an answer with a page citation. A site manager looks up a contract without opening it. A bid manager finds previous bids on similar projects in seconds.
The most common implementation is a RAG chatbot on SharePoint, OneDrive, or Google Drive. SharePoint Copilot is Microsoft's variant. Glean, Notion AI, and similar tools belong to the same category. Technically, it's document embeddings in a vector database, semantic search, and an LLM call that answers based on the matching excerpts.
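As a rough sketch, that pipeline can be shown in a few lines of Python. Everything here is illustrative: a toy bag-of-characters vector stands in for a real embedding model, a list stands in for a vector database, and the prompt string stands in for the actual LLM call.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def embed(text):
    # Stand-in for a real embedding model: a crude letter-frequency
    # vector. Production systems call an embedding API instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def retrieve(query, documents, k=2):
    # Semantic search: rank stored excerpts by similarity to the query.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d["text"])), reverse=True)
    return ranked[:k]

def build_prompt(query, excerpts):
    # The LLM answers only from the retrieved excerpts, which is
    # what makes page-level citations possible.
    context = "\n".join(f'[{e["source"]}] {e["text"]}' for e in excerpts)
    return f"Answer using only these excerpts, cite sources:\n{context}\n\nQuestion: {query}"

docs = [
    {"source": "contract.pdf p.12", "text": "Payment terms are thirty days net"},
    {"source": "spec.pdf p.4", "text": "Ventilation ducts shall be insulated"},
]
top = retrieve("what are the payment terms", docs, k=1)
prompt = build_prompt("what are the payment terms", top)
```

The structure is the point, not the toy embedding: retrieval narrows the model's input to your own documents, so the answer can carry a source reference.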
The value is in the speed. What took twenty minutes takes twenty seconds. Whoever discovered this first in an organization has probably become its AI champion.
Variants within Operational AI handle different precision requirements. A general RAG chatbot on SharePoint finds things. A construction-specific platform also understands AMA codes and expands searches automatically — a search for "ventilation" also finds VVS, V-drawings, air handling, AMA VVS & Kyl 22 / QJB.12. What's built around the model determines the difference.
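That kind of automatic expansion can be sketched as a domain synonym map applied before retrieval. The map below is a minimal illustration built from the terms mentioned above; a real platform would maintain a far larger, curated vocabulary.

```python
# Illustrative domain synonym map: a query term is expanded with
# related Swedish construction terms and AMA codes before retrieval.
EXPANSIONS = {
    "ventilation": ["VVS", "V-drawings", "air handling", "AMA VVS & Kyl 22", "QJB.12"],
}

def expand_query(query):
    # Return the original query plus any domain synonyms it triggers.
    terms = [query]
    for key, synonyms in EXPANSIONS.items():
        if key in query.lower():
            terms.extend(synonyms)
    return terms
```

The mechanism is simple; the value sits in the domain knowledge encoded in the map.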
Part 3
Agentic AI. AI that takes initiative. An orchestrator breaks down a task, calls specialized agents and your systems, runs the parts in parallel or in sequence, and returns results. Without anyone clicking between the steps.
The difference between asking and delegating.
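The orchestrator pattern above can be sketched with the standard library. The three "agents" here are stand-ins; in a real system each would call an LLM or an external system (a document index, an ERP, a weather API), and the task name is invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in specialist agents. Each returns a plain string here;
# real agents would return structured results from LLM or API calls.
def review_contract(task):
    return f"contract review of {task}: 2 deviations flagged"

def check_compliance(task):
    return f"compliance check of {task}: referenced sections verified"

def summarize(task):
    return f"summary of {task}: compliance table produced"

def orchestrate(task):
    # The orchestrator breaks the task into independent subtasks,
    # runs the specialist agents in parallel, and collects results.
    agents = [review_contract, check_compliance, summarize]
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        futures = [pool.submit(agent, task) for agent in agents]
        return [f.result() for f in futures]

results = orchestrate("tender review")
```

The parallelism is what makes "ten tasks in the background" possible: no one sits and clicks between the steps.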
The industry calls it agentic AI and it's the theme of 2025–2026. All the big platforms are building it. Anthropic published the Model Context Protocol (MCP) as a standard for how agents should talk to external systems. OpenAI launched its Agent Builder. The MIT NANDA report called it “the next phase of enterprise AI”.
In construction it looks like this: a tender review that takes in 240 pages and returns a compliance table in five minutes. Work preparations compiled every Monday morning, based on weather forecast, contract terms, and previous projects. Change-order (ÄTA) identification that continuously reviews incoming materials against the contract. AMA and BBR reconciliation that runs on every new delivery without anyone asking for it.
Ten tasks in parallel in the background. The teams work on the rest.
Part 4
AI that has learned the specific organization.
Custom-built capabilities — often called Skills — mirror exactly how you do things. A specific review, a recurring calculation, a workflow someone builds once that then runs hundreds of times across different projects. Scalable across the organization, not just within the team.
Organizational memory is how AI keeps track of your projects over time. Every new project is indexed automatically — file by file, contract by contract, parties and phases. When an estimator opens a new tender request, the relevant history is already activated in context. Not as data to search through, but as knowledge that flows into the estimate.
And here's where it gets interesting: AI doesn't need to wait for someone to ask for something. It can be triggered by schedule (every Monday morning), by events (when a new version is uploaded), or by continuous monitoring (flag deviations in incoming materials). When you sit down at the screen, half the work is often already done — the analysis is there, ready to be reviewed.
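The three trigger types above can be sketched as a small dispatcher. All names and event shapes here are illustrative, not a real API; the point is that work starts from a schedule tick, a file event, or a monitoring signal rather than from a prompt.

```python
# A minimal trigger registry covering the three kinds from the text:
# schedule, event, and continuous monitoring.
triggers = []

def on(kind, condition, action):
    triggers.append({"kind": kind, "condition": condition, "action": action})

def dispatch(event):
    # Called by a scheduler tick, a file notification, or a monitor.
    # Runs every trigger whose condition matches the event.
    return [t["action"](event) for t in triggers if t["condition"](event)]

# Every Monday morning: compile work preparations.
on("schedule", lambda e: e.get("cron") == "monday_07_00",
   lambda e: "work preparations compiled")

# When a new drawing version is uploaded: queue a fresh review.
on("event", lambda e: e.get("type") == "file_uploaded",
   lambda e: f"review queued for {e['file']}")

# Continuous monitoring: flag deviations in incoming materials.
on("monitor", lambda e: e.get("deviation", False),
   lambda e: "deviation flagged for review")

fired = dispatch({"type": "file_uploaded", "file": "drawing rev C.pdf"})
```

Which trigger fires depends only on the event, which is why the analysis can be waiting on screen before anyone asks for it.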
In practice, this means an estimator gets decades of previous projects activated in every new tender request. A QHSE manager has the same standard running across twelve companies, with Skills someone built once that then run hundreds of times.
AI is no longer a service you use. It's how the work is done.
Synthesis
The four parts do different things. Together they make up an organization's AI maturity.
In practice, the parts often live in different tools, from different vendors, with different security models and contracts. ChatGPT Enterprise or Copilot for Basic. SharePoint Copilot or Glean for Operational. n8n or Zapier with LLM nodes for Advanced. For Transformative, there's usually no standard tool — it's built specifically.
You can stack them. But that's not the same as having a platform.
The MIT report makes a specific observation: 95 percent of enterprise AI pilots deliver no measurable P&L impact. But within the 5 percent that succeed, a clear pattern emerges: organizations that buy a specialized AI platform succeed about twice as often as those that build their own, 67 percent compared to just over 30 percent. That's not an argument against in-house development. It's an argument for finding a partner who has done it before, and who has a platform where all four parts live together.
Yesper
All four parts in the same platform. General AI, RAG against your documents, agentic workflows, Skills and organizational memory. Built-in understanding of Swedish construction standards. All under one security model, one contract, one company memory that grows the longer you use it.
And a partner for the journey there. That's what we call Applied AI.
Twenty minutes on a call — you describe where you are, we share a first read. No commitment, no proposal until we know there's something worth pursuing.
Book a call