
AI in Construction Project Management – A Supercharged Assistant or a Silent Saboteur?

Jonah Balmford


A Brave New (Digital) World

The construction industry has long been seen as a sector slow to embrace transformation of any kind, let alone digital transformation. But in recent years, the tide has turned. From drones mapping out topography and supporting asset management, to Building Information Modelling (BIM) reshaping design coordination, technology is no longer a bolt-on – it’s becoming the backbone of what we do. Now, Artificial Intelligence (AI) is stepping into the spotlight, promising to revolutionise how we plan, procure, deliver, optimise and decarbonise projects.

For multi-disciplinary construction consultancies like TSA Riley, particularly those that the RICS regulates, AI presents both an exciting opportunity and a complex challenge. On one hand, it offers the potential to streamline workflows, enhance decision-making, and unlock new levels of efficiency. On the other hand, it raises serious questions about data integrity, professional accountability, and the risk of over-reliance on tools that are still, in many ways, untested.

It’s worth remembering how we all felt the first time we used a self-checkout at a supermarket (or grocery store, for our international readers). Slightly bewildered, maybe a little sceptical. Fast forward a few years, and they’re everywhere. We’ve adapted. But even now, when the screen flashes up ‘Unexpected Item in Bagging Area’, we instinctively glance around for a human to help. AI in construction may follow a similar path: rapid adoption, growing trust, but always with a need for human oversight when things get tricky. Besides, a self-checkout machine can’t (yet) verify your age (required for certain purchases) – so how far can AI really go in replacing the skills, judgement, and instinct of our talented workforce?

So, is AI the ultimate assistant – a digital partner that empowers consultants to do more with less – or is it undermining the very principles of trust, rigour, and human judgement that underpin our profession?

With the value of AI in the global construction market forecast to grow from USD$3.93 billion in 2024 to over $22 billion by 2032 (Fortune Business Insights, 2025), the question is no longer if AI will reshape our industry – but how we choose to shape its role in everything we do.

The Promise of AI in Construction Consultancy

AI is already reshaping how construction consultants operate – not by replacing professionals, but by enhancing their capabilities. In a sector where time, cost, quality, and carbon are constantly in tension, AI offers a powerful toolkit to help project managers, cost consultants and other industry professionals stay ahead of the curve. But how?

  • Workflow automation: Repetitive tasks like progress reporting, meeting minutes, and document control can be streamlined using AI-powered tools. This frees up consultants to focus on higher-value activities like stakeholder engagement and strategic planning, while also ensuring greater consistency in the delivery of repetitive tasks.
  • Data-driven decision-making: AI can analyse vast datasets – from historical cost plans to live programme data – to identify trends, flag risks, and suggest optimisations. For example, predictive models can forecast delays or cost overruns before they materialise, allowing for proactive intervention.
  • Visual intelligence: AI can interpret drone footage, site photos, and BIM models to detect safety hazards, track progress, or verify compliance – all in near real-time. This adds a new layer of insight to traditional site visits and audits, while helping to reduce the inherent safety risk associated with visiting construction sites.
  • Client engagement: Faster turnaround times, clearer insights, and more accurate forecasting all contribute to a better client experience. AI can help consultants deliver more value, more consistently.

The Pitfalls and Risks

But with great power comes great responsibility. The rapid adoption of AI in consultancy settings also brings a host of challenges – some technical, others ethical, and many still emerging.

  • Data governance and confidentiality: AI systems are only as good as the data they’re trained on – and in a consultancy setting, that data is often sensitive. Without clear protocols, there’s a risk of breaching client confidentiality or falling foul of the General Data Protection Regulation (GDPR).
  • Over-reliance and deskilling: As AI takes on more tasks, there’s a danger that junior team members may miss out on foundational learning experiences. If we’re not careful, we risk creating a generation of consultants who can prompt a chatbot but lack the experience to read and understand contracts.
  • Accountability and liability: If an AI tool makes a recommendation that leads to a costly error, who’s responsible? The consultant? The software provider? The client? These questions are still largely unanswered in legal and regulatory frameworks.
  • Regulatory uncertainty: RICS and other professional bodies are still developing their positions on AI. Until clearer guidance emerges, we need to tread carefully, ensuring that any AI use aligns with core principles of integrity, competence, and transparency.

In essence, AI is not a plug-and-play solution. It requires governance, oversight, and a healthy dose of professional scepticism. Without governance and planning, it could undermine trust, quality – and the construction industry’s reputation.

The RICS Perspective: Guardrails for Responsible AI Use

In early 2025, the RICS published a Version 3 consultation paper on its upcoming Professional Standard on the Responsible Use of AI, with final publication expected mid-year. By drafting this as a Professional Standard, the RICS is signalling that it will include both mandatory requirements and recommended best practices, designed to protect clients, professionals, and the wider public. As such, it will apply to both RICS Members and Regulated Firms, making familiarity with its contents essential for all industry professionals.

The benefits of AI are clear: it can streamline work and reduce human error. However, the risks are equally present – from bias in outputs and data confidentiality concerns to the ever-relevant ‘garbage in, garbage out’ principle. The draft Professional Standard outlines these risks and offers mitigation techniques, but ultimately, the responsibility for compliance rests squarely with the professionals and businesses using the tools.

One of the key stipulations is that ‘Members who use AI systems in their surveying practice must develop and maintain sufficient and appropriate knowledge to support the responsible use of AI.’ In principle, this is sound – but how is it achieved in practice? According to the RICS, this knowledge includes:

  • Familiarity with different AI typologies;
  • Understanding the risk of erroneous outputs;
  • Recognising the inherent risk of bias in AI systems.

The RICS also rightly highlights that AI introduces additional privacy and confidentiality concerns beyond those typically encountered in surveying. AI systems rely on user-provided data – often uploaded to platforms outside an organisation’s internal infrastructure – increasing the risk of data breaches. While one could argue this is no different from any data leaving an organisation, the fact that over 79% of UK professionals across all industries now use AI daily (Forbes, 2025) makes it clear why safeguarding sensitive information is more important than ever.

Yes, redacting or omitting confidential data before uploading it to an AI system is one mitigation strategy, but it raises a practical dilemma. If you have to strip out key information, rewrite the prompt, and then reinsert the original data manually, it’s a bit like scanning your shopping at a self-checkout, only to have to re-bag everything at the end because the machine couldn’t recognise your pasta sauce. Efficient? Not exactly.

The draft Professional Standard is unequivocal that Members must use the ‘most appropriate’ tools and systems for the task at hand. While this is a sensible requirement, it hinges on the interpretation of ‘most appropriate’. To support this, the RICS mandates that firms develop a written policy explaining the rationale behind AI use – providing an auditable trail of decision-making.

That policy must also define how human control and judgement will interact with AI systems, such as through regular monitoring or dip-sampling of outputs. But what does ‘regular’ mean in practice? This doesn’t mean every AI output needs a second pair of eyes – but it does mean the person responsible for uploading and using the data must be confident in the result.

The draft Professional Standard also requires that RICS Members and Regulated Firms declare, in writing, the reliability of AI outputs. This doesn’t mean a full report every time AI is used – that would defeat the purpose – but it does mean dip-sampling high-volume tasks and providing written declarations for outputs that meet defined threshold criteria.

Above all, transparency is key. Clients and stakeholders must be informed when AI has been used. While this may feel like an uncomfortable conversation – removing the human touch from certain workstreams – it’s essential for maintaining trust in the profession and laying the groundwork for confidence in AI-assisted work.

While AI may promise faster results, it also requires greater oversight and governance than many skilled professionals may anticipate. Any perceived savings in time or fees may be offset by the need to ensure compliance with both internal policies and external regulations.

As for the implications of AI on professional indemnity insurance and the drafting of contracts? That’s a complicated issue in its own right – and one best saved for a future article!

A Glimpse into the Future

Picture this: it’s 2035, and your project management consultancy runs like a well-oiled machine – powered by people, assisted by AI. Your morning starts not with emails, but with a dashboard that’s already summarised them, flagged the urgent ones, and drafted responses for your review. Your programme update? Done overnight by an AI assistant that’s analysed site data, contractor inputs, and weather forecasts to suggest a revised critical path.

Cost plans are no longer static spreadsheets – they’re dynamic models that adjust in real time based on market trends, supplier availability, and historical performance. AI doesn’t just crunch numbers; it interprets them, offering insights that would take a human many hours to uncover. And when a client asks for a risk register update, your system has already run a predictive scan across similar projects and flagged emerging risks before they hit the radar.

But the consultant’s role hasn’t diminished – it’s evolved. You’re no longer just managing tasks; you’re curating intelligence. You decide which insights matter, which outputs are trustworthy, and when to override the machine. You’re the human in the loop, ensuring that the advice your firm gives is not just fast, but sound.

Of course, the future isn’t frictionless. There are still moments when the AI gets it wrong – when a clause is misinterpreted, or a cost anomaly slips through. That’s when your expertise kicks in. While AI might be the engine, you’re still the driver.

A Tool Worth Wielding – With Care

AI is no longer a distant concept on the horizon – it’s here, it’s evolving fast, and it’s already reshaping how construction consultants work. From automating routine tasks to unlocking new insights through data, AI offers a compelling opportunity to enhance the value we deliver to clients. But as with any powerful tool, its impact depends entirely on how it’s used.

This article has sought to explore both sides of the coin: the promise of AI as a force for good, and the pitfalls that arise when it’s deployed without oversight, context, or care. The risks – from data breaches to deskilling – are real. But so too is the potential to elevate our profession, provided we remain grounded in the principles that define it: integrity, competence, and accountability.

The RICS’s forthcoming Professional Standard on the Responsible Use of AI will play a critical role in shaping how the industry moves forward. It sets out clear expectations for knowledge, transparency, and governance – and rightly places the responsibility for compliance on the shoulders of those using the tools. AI may be able to process information quickly, but it cannot replace a qualified professional’s judgement, experience, and ethical compass.

Looking ahead, the future of AI in construction consultancy is not about replacement – it’s about augmentation. The consultants of tomorrow won’t be replaced by machines; they’ll be the ones who know how to work alongside them. Those who understand when to trust the output, when to challenge it, and when to step in with the human touch that no algorithm can replicate.

So, is AI a supercharged assistant or simply too risky? As always, the answer lies in how we choose to use it.
