Are you ready for AI?

Artificial intelligence (AI) is no longer a future consideration for the surveying profession. It is already shaping how services are delivered, how decisions are made and how professional judgement is applied.  Used well, AI can improve efficiency, consistency and insight.  Used badly, it can introduce unmanaged risk, undermine client confidence and expose firms to regulatory challenge. 

RICS has recognised this dual reality through the publication of its new global professional standard, Responsible use of artificial intelligence in surveying practice (1st edition, September 2025), effective from 9 March 2026.  The standard applies to all RICS members and regulated firms and sets a clear expectation: AI is acceptable, but only where it is used responsibly, transparently and under robust governance. 

For many firms, this standard represents a shift in mindset. AI is no longer simply an IT issue or a productivity tool; it is now firmly a practice management and risk management issue.

When AI becomes a regulatory issue

The standard is concerned with AI use that has a material impact on the delivery of surveying services. Importantly, this does not mean that AI must produce a final report or valuation.  It is sufficient that AI output influences how professional advice is formed, tested or relied upon. 

In practice, this includes common and growing use cases such as supporting procurement exercises by analysing tender submissions, identifying risk or non-compliance, summarising large volumes of documentation, or assisting in the preparation of expert witness reports.  Where AI shapes professional judgement, RICS expects firms to recognise that impact and manage it appropriately. 

This requires firms to make conscious, recorded decisions about where AI is used, why it is appropriate and what controls are in place. 

Professional judgement remains non-negotiable

A central theme of the RICS standard is that AI does not replace the surveyor.  Accountability for advice remains with the individual professional and the regulated firm.  AI outputs must be approached with professional scepticism and tested using experience, knowledge and judgement. 

Members using AI are expected to understand, at a proportionate level, how the systems they rely upon work, their limitations, the risk of bias and the potential for erroneous outputs.  AI may assist analysis or drafting, but it cannot be treated as authoritative in its own right. 

From a governance perspective, this reinforces the importance of human oversight, quality assurance and clear responsibility for decision-making. 

From ad-hoc use to structured governance

One of the most significant practical implications of the standard is the requirement for formal governance arrangements. RICS is clear that casual or unmanaged use of AI tools is no longer acceptable within regulated practice. 

Firms must have a written policy covering the procurement and use of AI.  This policy should explain where AI is used within the business, who is responsible for approving and overseeing its use, how outputs are reviewed, and how staff are trained.  For many firms, this will sit alongside existing IT, data protection or information security policies, but it must explicitly address AI-specific risks and responsibilities. 

This approach reflects a wider trend we see across professional services: regulators increasingly expect emerging technologies to be embedded within existing management systems, not bolted on as standalone tools. 

Paid tools, enterprise systems and data protection

A particularly important issue for regulated firms is the use of widely available AI platforms such as ChatGPT or Copilot.  While these tools are powerful and accessible, they also raise serious data protection and confidentiality concerns if used incorrectly. 

Free or consumer versions of AI tools may retain or reuse uploaded data for model training.  For surveying firms handling confidential client information, this presents an unacceptable level of risk.  The RICS standard reinforces the need for firms to take reasonable steps to ensure that private and confidential data is protected, and that it is not uploaded into AI systems without appropriate safeguards or consent. 

In practice, this points firms towards paid-for or enterprise subscriptions, where contractual assurances are available, and towards “closed systems” where data remains within the organisation’s control.  From a governance perspective, enterprise-level AI solutions offer clearer auditability, stronger controls and far greater confidence when dealing with sensitive or regulated information. 

Making AI risk visible

Alongside written policies, RICS requires regulated firms to maintain a dedicated AI risk register where AI has a material impact on services. This is not a theoretical exercise, but a practical management tool that must be reviewed at least quarterly. 

The risk register should capture and actively manage issues such as bias, erroneous outputs, limitations in training data and risks associated with data retention or reuse.  Each risk must be assessed, mitigated and aligned with the firm’s risk appetite. This mirrors established good practice in other areas of professional risk management and ensures that AI is treated with appropriate seriousness. 

Transparency with clients

Finally, the standard reinforces the importance of transparency. Clients must be informed, in advance and in writing, where AI is used in the delivery of a service and for what purpose.  Clear communication helps manage expectations, preserves trust and supports informed decision-making. 

For firms already operating strong governance frameworks, this requirement should feel like a natural extension of existing client care principles. 

Are you ready for AI?

The direction of travel is clear. AI offers significant benefits to surveying practices, but only where it is implemented thoughtfully and governed effectively.  Written policies, risk registers, paid and closed systems, and clear accountability are no longer optional extras; they are becoming baseline expectations for RICS-regulated firms. 

The question for organisations is no longer whether AI will feature in their business, but whether it is being introduced in a way that protects clients, professionals and reputation.

Find out more

Are you ready for AI in your business?  We work with enterprise AI developers and regulated organisations to help design, implement and govern AI solutions that align with regulatory expectations and good practice.  We would be pleased to discuss how we could work with your organisation to implement AI solutions that are effective, compliant and fit for purpose. 

Neil Thody is a Fellow of the RICS, leading procurement specialist, CEDR/RICS Mediator, RICS Adjudicator and Independent Adviser, working with clients across multiple sectors. 
