‘Be sceptical of AI’s benefits, be wary of its risks,’ warns nima’s thinktank
Be sceptical of claims about AI’s benefits and don’t fall into the trap of ‘magical thinking’. That’s the stark warning from the GIIG in its new position paper on AI.

The paper from nima’s thinktank GIIG (formerly the Government & Industry Interoperability Group) aims to provide clear, relevant, objective guidance on the use of AI, and to encourage appropriate habits and behaviours.
It sets out 15 recommendations for information managers, asset managers, technologists and product specifiers in the built environment when using AI. Among those recommendations are expected themes, such as establishing a business case for AI use, the importance of data quality, and attention to security risks. However, throughout the guide the authors do not pull their punches, deploying direct language. For example: “Technology providers and AI evangelists often exaggerate benefits that may or may not be proven, resulting in AI for AI’s sake.”
The 15th point of the guide states: “Be sceptical! Many of those developing, selling and implementing AI tools have unrealistically positive views of what their technology can achieve. In some cases, they could be accused of ‘magical thinking’. Before implementing AI tools, be sceptical about the claims being made about them, introduce suitable controls around their use and role in decision-making and only relax these controls based on actual performance of these tools. Think critically about the data, analysis, outputs and decisions being made to reduce the risk of mistakes.”
DC+ spoke to Julian Schwarzenbach, who led the authoring of the document.
DC+: Can you give us the background on GIIG deciding to produce the position paper?
Julian Schwarzenbach: You can’t consume any article, any news, without AI being mentioned multiple times. We had a lot of discussions about what AI is and how it can be used. And we felt there was a gap: on the one hand, you’ve got a lot of AI evangelists, who are saying how wonderful everything’s going to be and AI will solve every problem known to man; on the other hand, there are quite a few people who have the view of “never touch AI”.
So, there are two extremes, but there is also the very different context of the built environment, because we’re dealing with assets that have lives of 20, 50-plus years. There are assets whose failure could lead to physical harm to people or cause societal harm if things go wrong. If an AI tool is transcribing a meeting and gets it wrong, it’s no big deal. But if an AI tool is being used in decision-making in the built environment, and it gets it wrong, and a structure or building or service fails, then the consequences are significant, potentially costly and far-reaching.
So we wanted to produce the paper to try to get people to adopt the right sort of habits when thinking about AI in that particular context. There is excessive exuberance, excessive boosterism [about AI], and that has to be tempered, particularly if you are a custodian of the built environment, or you’re managing assets – you’ve got to think very, very carefully.
DC+: Accountability and AI’s role in decision-making are key themes throughout the paper…
JS: It should be common sense to most people, but it’s there as a reminder: if you are the asset owner, you are still accountable. Whichever mechanism was used in the decision-making, whether it was pencil and paper, an Excel spreadsheet, some vendor proprietary software or an AI tool, if you are the accountable organisation, you are still the accountable organisation. The inappropriate use of an AI tool could then cause a problem, but you are still on the hook as the organisation.
It’s very important for people to think about the nature of the decisions being made. If it’s something critical, something novel, something that is irreversible, or something that has safety or security implications, then that is a strong driver to say [that AI should only be used as] decision support. So, the AI tool suggests an answer, but a human validates it: somebody with appropriate knowledge.
Only if all those factors I mentioned are very low level could you think about automated decision-making.
DC+: On the subject of human validation, the paper also flags concerns that AI use will erode current and future skills…
JS: [To provide the human validation], you will need people who’ve got sufficient domain knowledge, who can understand the context, who know the right questions to ask, who can implicitly understand if an answer that comes out is nonsensical – they can just get it immediately and say, ‘we need to rework that’.
But by doing that, we then run the risk that our experts and their skills start to become blunted, because as they start to use the [AI] tools, they do less thinking. So that’s one skills challenge.
The other skills challenge is that we need to think about the pipeline of knowledge in organisations. Those people who are the subject matter experts may be later in their careers, or may be approaching retirement. Therefore, if you get to the point where they have retired, and you don’t have [succession planning processes and successors] to take over with that knowledge, you’re going to struggle to support any decision-making [in the future].
DC+: The paper also emphasises commercial considerations, doesn’t it?
JS: Many of these tools are either low-cost or free at the moment. The level of money being invested in these tools cannot continue – [the developers] will want their money back at some point. What is currently free will be expensive in the future.
If you have got rid of a lot of people by automating a process using an AI tool that is currently low-cost, and suddenly it becomes high-cost, you’re a hostage. But even worse, if one of those AI vendors goes bust and that tool is no longer available, you can lose what might be mission-critical decision-making support overnight.
If a tool either ceases to work or you decide that another tool might be better, how can you move the thinking, the logic, into a new AI tool?
The paper is there to get people to be objective, to be realistic, to think about the long term.