AI Trends 2026: Leveraging AI/ML For Large Infrastructure Projects
- Ken Twist

- Jan 27
- 5 min read

Post Series: AI Trends For 2026
The WhyData Perspective: Written by Ken Twist
Patterns Are Not Always What They Seem

For anyone with experience in construction, whether delivering telecom infrastructure or running large civil engineering programs, the promise of AI probably sounds straightforward. Apply AI to streamline operations, tighten the supply chain, and deliver end-to-end project visibility. Pull data from ERPs, procurement systems, schedules, telematics, and daily field reports. Train models to forecast demand, predict delays, surface safety risk, and optimize inventory. Layer on dashboards and alerts, and the solution appears complete. Except this is where most systems quietly fail and are eventually decommissioned.
Not because the models are wrong or insufficient, but because the assumptions underneath them were never fully vetted against actual operations. We’ve experienced this with a large North American Service Provider and a large North American construction management firm.
Large multi-year construction projects can generate enormous volumes of data. Multiple industry surveys show that 20-50 percent of that data can be inaccurate, inconsistent, or arrive too late to influence decisions (which we can corroborate). In one North American study, one-third of contractors reported that poor data quality led to misinformed decisions more than half the time. AI systems built on top of this reality do not fix the mess. They expose it faster.
The real gap is not technical capability. It’s changing how teams function, operate, and manage.
Too many AI deployments start with assumptions based on genericized patterns instead of constraints. Feature stores (a centralized system that standardizes and serves model features consistently across training and production), real-time pipelines, complex model ensembles, and LLM copilots are often deployed before cross-functional teams have clearly defined KPIs or answered basic questions. For example: which decision does this improve, how fast before this gate impacts the next build phase, what is the cost of being wrong, and where does reliable feedback come from? It’s one thing to have an answer, or even several possible answers, to these questions. It’s entirely different to understand the “why” behind them and how that understanding feeds back into the system.
Large complex projects and systems make these questions unavoidable. What actually happened can arrive weeks or months after a decision window has passed. Material costs (OPEX & CAPEX) can be documented late, incorrectly, or insufficiently. Safety incidents can go underreported. Maintenance windows shift. Productivity can vary and is shaped as much by material and equipment shortages, supervision quality, union vs. non-union crews, crew stability, incentives, and fatigue as by schedules or quantities. Academic studies consistently show that stable crews with experienced foremen outperform higher-churn crews even when scope and tools are identical. Yet most AI systems treat labor as interchangeable input data.
This Is Where Pattern-Driven Design Breaks Down

An FMI–Autodesk industry survey shows that nearly half of construction rework is driven by poor communication and inconsistent project information. We see the same pattern when we’re brought in to instrument and stabilize struggling projects. The root cause isn’t a lack of forecasting—it’s fragmented systems, disaggregated data, unclear ownership, and delayed visibility into emerging risk.
When AI systems are dropped into environments without accounting for context and variability (e.g., the natural, structural, and human-driven ways the environment changes over time), they add complexity without improving outcomes. Sophisticated models are often trained on labels or data dictionaries that are subjective or influenced by reporting incentives, particularly when performance metrics affect job security. Feature stores are built before feature definitions have stabilized. In postmortems, teams often discover that a few simple deterministic rules or shared event timelines would have delivered most of the value with a fraction of the risk.
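To make the "simple deterministic rules" point concrete, here is a minimal sketch of one such rule. The field names (need_by, promised_delivery) and the buffer value are hypothetical, not from any real project system:

```python
from datetime import date, timedelta

def flag_delivery_risk(need_by: date, promised_delivery: date,
                       buffer_days: int = 5) -> bool:
    """Flag a purchase order when the promised delivery date eats into
    the schedule buffer before the material is needed on site."""
    return promised_delivery > need_by - timedelta(days=buffer_days)

# Material needed June 20, promised June 18, 5-day buffer: flagged.
flag_delivery_risk(date(2026, 6, 20), date(2026, 6, 18))
```

A rule like this is auditable in one line and surfaces risk the day the promise date slips, without waiting for a model to retrain.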
Every Component Has An Envelope Where It Helps
In WhyData vernacular, an envelope defines the range of conditions in which a component meaningfully improves decisions by reducing risk or cost, or by outperforming simpler alternatives. Outside that range, the same component may no longer improve outcomes and can introduce new failure modes, increase operational friction, and create organizational drag.
A common miscalculation, by AI systems and practitioners alike, is assuming a component is always beneficial rather than conditionally so.
Caching is most effective when access patterns cluster. Streaming helps when decisions are time-sensitive. Machine learning works well when data is stable, feedback is available, and the decision value justifies the uncertainty (with plenty of caveats, obviously).
Industry practitioners increasingly acknowledge that many AI initiatives stall because they are heavy on technology and light on decision alignment.
The Systems That Succeed Look Very Different

They start with a small set of high-impact operational decisions, such as whether to expedite a shipment, re-sequence work due to weather or other field conditions, accelerate permitting, intervene with a subcontractor, or address emerging safety risk. Each decision has a defined owner, decision window, and cost of error. These are not out-of-the-box features of an AI system, but learned, contextualized, and modeled within a dynamic operating environment.
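One way to force that discipline is to write the decision down before any model exists. This is a hypothetical sketch; the decision names, roles, and dollar figures are illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OperationalDecision:
    name: str
    owner: str              # the role accountable for the call
    window_hours: int       # time before the decision loses its value
    cost_of_error: float    # rough dollar impact of deciding wrongly

expedite = OperationalDecision(
    name="expedite_shipment",
    owner="procurement_lead",
    window_hours=48,
    cost_of_error=120_000.0,
)
```

If a team cannot fill in these four fields, the decision is not yet ready to be modeled.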
They build and understand a canonical event timeline that unifies procurement, delivery, staging, installation, field progress, inspections, and rework before developing predictive models. We find that this often surfaces risk earlier than any out-of-the-box algorithm.
They instantiate transparent heuristics and deterministic scoring that cross-functional teams can understand and trust. Machine learning is introduced only when it demonstrably outperforms simpler baselines and improves a measurable outcome.
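Transparent deterministic scoring can be as simple as a weighted sum of signals the team already tracks. The signal names and weights below are illustrative assumptions, chosen so the arithmetic can be checked by hand:

```python
# Every signal and weight is visible, so any team member can audit a
# score with a calculator. This is the baseline ML must beat.
WEIGHTS = {
    "days_late_on_critical_path": 3.0,
    "open_rfis": 1.5,
    "crew_turnover_pct": 2.0,
}

def risk_score(signals: dict) -> float:
    """Weighted sum of interpretable signals; missing signals count as 0."""
    return sum(WEIGHTS[k] * float(signals.get(k, 0.0)) for k in WEIGHTS)

score = risk_score({"days_late_on_critical_path": 2,
                    "open_rfis": 4,
                    "crew_turnover_pct": 0.3})
# 3.0*2 + 1.5*4 + 2.0*0.3 = 12.6
```

Because the score is deterministic and decomposable, a disagreement about a ranking becomes a conversation about a specific weight, not a debate about a black box.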
Human factors are treated as primary signals, not noise. Crew and shift stability, labor patterns, operational changes, communication load, and reporting behavior are explicitly considered. Research shows that incentive structures and leadership quality materially influence productivity and safety outcomes, all of which can change during a project. AI systems often ignore this context and produce insights that appear precise but miss the incentives, such as compensation, that actually drive behavior.
Most importantly, some AI-enabled decision-support systems used to manage construction operations are designed to express uncertainty. We believe it’s important to call this out because expressing uncertainty is the difference between AI that gets trusted and AI that gets ignored, misused, or abused. Construction projects, like other complex environments, are adaptive systems. Once teams react to an issue, the data-generating process changes. In these scenarios, AI systems are participants, not observers.
If a model presents its output as certain but turns out to be wrong in a visible way, people stop trusting it. After that, they either ignore it entirely or treat it as more noise from supervisors or project managers.
In contrast, if a model says, “This looks risky, but confidence is low,” people can apply judgment. Even if it turns out to be wrong, it doesn’t lose credibility because it never claimed certainty.
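The contrast above can be sketched in a few lines: the output carries its own confidence instead of a bare prediction. The thresholds and the sample-size proxy for confidence are illustrative assumptions:

```python
def advise(p_delay: float, sample_size: int) -> str:
    """Return decision-support text that states both the risk estimate
    and how much evidence backs it, so users know when to apply judgment."""
    confidence = "high" if sample_size >= 200 else "low"
    if p_delay >= 0.6:
        return f"Likely delay (p={p_delay:.2f}, confidence {confidence})"
    if p_delay >= 0.3:
        return (f"Possible delay (p={p_delay:.2f}, confidence {confidence})"
                " - apply judgment")
    return f"On track (p={p_delay:.2f}, confidence {confidence})"
```

A message like "Possible delay (p=0.45, confidence low) - apply judgment" invites scrutiny rather than blind compliance, which is exactly what keeps the tool credible after its first visible miss.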
The goal is not perfect prediction. It is to help people make better decisions in uncertain and dynamic environments.
The WhyData Perspective
Beware of generic architectures, workflows, and “out-of-the-box” solutions. Every workflow and component must exist because it supports a real constraint. Every model earns its place by improving a specific decision. Every new component introduces a failure mode that must be understood by humans and monitored.
If a system cannot clearly explain which decision it improves, why it is necessary, and how its failure would be detected, it is not delivering intelligence. It is repeating patterns developed and applied out of context.
Don’t become infatuated with cool new tools. Stop starting with architecture. Start with decisions, constraints, and failure modes. Let the math, the data reality, and the human system force the design. Start simple.
That’s how AI delivers real value in complex operating environments. If you’re exploring how to apply these principles in your organization, WhyData can help you get started or strengthen the AI initiatives already underway.
Written by Ken Twist, Chief Innovation Officer, WhyData. For more information, visit www.whydata.com/contact
Category: AI Trends 2026


