My previous Housing Digital article, “A Day in the Life of AI in a Housing Association”, offered a glimpse of how AI is already shaping housing life. That piece was about now; this one is about what happens next.
If the sector keeps adopting AI faster than it governs it, the consequences will go beyond technology. More than 90 per cent of providers are experimenting, but fewer than one in five have a policy or named lead. Adoption races ahead; assurance stands still. AI is spreading through housing like water through old pipework — fast, useful, but not always contained.
This article looks at what that means for trust, risk, and regulation, and asks a simple question: if we keep going as we are, who will be accountable when the algorithm gets it wrong?
The Comfort Trap: How Adoption Outran Assurance
Across the sector, AI is no longer a curiosity; it’s a convenience. Housing teams are drafting letters with Copilot, triaging repairs with chatbots, and feeding data into dashboards that promise instant insight. Efficiency has become the headline, but governance sits in the footnotes. It’s like fitting new windows while the foundations quietly crack — the surface looks modern, but the structure hasn’t caught up. The DASH AI Readiness in Social Housing (2025) report found that while boards are open to AI, only half feel even partially prepared for the risks. HAILIE’s figures echo this: over 90 per cent of organisations are experimenting, yet fewer than 20 per cent have a policy, strategy, or named lead.
This is the comfort trap — progress without preparedness. As the recent “If Homes Could Talk” (IHCT 2025) report warned, “you can’t install your way out of a 1990s mindset.” The same pattern now defines AI: innovation without alignment, data without ownership, risk without oversight.
For our CEO, Sally, that comfort looks like dashboards glowing green while unresolved issues hide beneath the surface. She knows her teams save time, but can’t tell if they’re building trust. For our board member Bob, comfort means assurance by assumption, believing someone, somewhere, has checked the ethical box.
Staff and tenants are adapting faster than their organisations. The tools work, but each ungoverned use adds another thread to a tangled web of accountability. Unless leaders turn comfort into control, the sector will keep innovating into uncertainty, one convenient click at a time.
The Consequences of Staying Comfortable
If the previous section was about drift, this one is about destination. The longer the sector stays comfortable, the more likely control will be taken out of its hands.
We’ve seen it before. When connected-home pilots stalled, IHCT warned that inaction would invite intervention, and it did. The Decent Homes Standard now includes continuous digital monitoring. The same logic will soon apply to AI: if housing providers cannot evidence safe, ethical use, the Regulator will demand it. The Housing Ombudsman will not be far behind, questioning whether automated letters or chatbots meet fairness and tone requirements under the Complaint Handling Code. If housing doesn’t build its own fence, the Regulator will eventually put one up for it.
For Sally, this shifts AI from a technical issue to a leadership test. The next inspection could probe algorithmic bias or audit trails as rigorously as gas-safety checks. For Bob, the risk is reputational. One AI-generated complaint response written without oversight could become the sector’s next headline, not through malice, but through missing assurance.
The DASH AI Readiness in Social Housing report identifies governance as the weakest link; most organisations lack an AI-risk framework. The NHF AI Survey (2025) found that only six per cent of organisations provide security or ethics training.
If nothing changes, the outcome will echo the IoT experience: fragmented systems, fragile trust, and regulation by default. The sector can avoid that, but only by acting first, before silence becomes surrender.
Signals of Progress: What Responsible Adoption Looks Like
Despite the governance gaps, there are encouraging signs that housing is learning from its past. The Aspirations and Applications of AI in Social Housing (2025) study found nearly 40 per cent of providers are now building or testing AI strategies — up from just 4 per cent a year earlier. Predictive repairs, complaint-trend analysis, and arrears forecasting are moving from pilot to practice. More importantly, several landlords are linking these projects to board-level oversight, echoing IHCT’s reminder that “technology must be treated as infrastructure, not experiment.”
The NHF AI Survey reports that 33 per cent of organisations have an AI policy in progress, and 20 per cent already have one in place. DASH’s AI Readiness in Social Housing report shows 69 per cent of boards are open or very open to AI integration. Governance remains fragile, but awareness is catching up with enthusiasm.
For Sally, progress means structure replacing spontaneity. She now tries to tie every AI pilot to a key issue — repairs, stock data, complaints, or value for money (VFM) — and to safe, explainable AI. She’s publishing plain-English “AI transparency notes” for tenants, reflecting IHCT’s principle that “consent is not a formality; it’s a relationship.” For Bob, progress means evidence — requesting audit logs and bias-testing results, as the HAILIE Survey urged: “turn experimentation into structured learning.”
Small but systemic changes are taking root: literacy training, cleaner data, vendor bias checks. They may not make headlines, but they show housing is beginning to balance innovation with assurance, and that’s how trust begins to scale. Progress may be uneven, but at least the lights are coming on in more rooms.
Where Leadership Must Act: Turning Progress into Proof
Progress is visible, but still fragile. The next phase of AI in housing will be shaped not by vendors or data officers but by leaders who can turn policy into proof.
Every major housing AI study reaches the same conclusion. IHCT found that executive ownership and accountability were the strongest predictors of success. The Aspirations and Applications research showed most staff use AI without confidence in governance, a gap only senior leadership can close. The NHF AI Survey and DASH AI Readiness in Social Housing confirm that boards are open but under-prepared. Enthusiasm is not a strategy.
For Sally, leadership means embedding AI into the organisation’s bloodstream: risk registers, assurance plans, and staff objectives. She sees governance not as red tape but as proof of fairness, safety, and value. For Bob, leadership means scrutiny with purpose: he no longer asks “Are we using AI?” but “How do we know it’s working as intended for everyone?” This is the moment to move from driving by instinct to flying by instruments: decisions guided by evidence, not hope.
Other sectors have shown the way. The NHS AI Lab’s Impact Assessments, the Law Society’s Ethical Charters, and techUK’s Assurance Tools prove that innovation and accountability can grow together.
Housing can do the same: adopt a concise AI Ethics Charter, require impact reviews for high-risk tools, and train boards to interpret AI risk as confidently as financial risk. Act now, and leaders define the standard. Wait, and the standard will define them.
Five Things Boards and Executives Can Do Now to Get Out of the AI Comfort Zone
The comfort zone feels safe: pilots are running, dashboards are glowing, and nobody’s been fined. But every dataset, chatbot, and “quick win” deployed without structure adds risk. Here are five moves to shift from comfort to control.
1. Name It and Own It
AI needs ownership. Fewer than one in five providers have a named lead or policy. Assign clear accountability for sign-off, audits, and reporting. No owner, no assurance.
2. Build the Evidence Loop
Progress needs proof. IHCT advised “no strategy, no sensors.” Apply the same test to AI: baseline the data, run ethics checks, and set measurable outcomes.
3. Train for Literacy, Not Loyalty
Only 20 per cent of staff rate their AI knowledge as good, yet 90 per cent use these tools. Provide short, scenario-based learning so boards can ask the right questions.
4. Publish a Simple AI Use Statement
Transparency builds trust. Publish a one-page “AI Use & Ethics” statement explaining what’s used, why, and who’s accountable.
5. Prepare for Oversight Before It Arrives
Regulation is coming. Map where AI touches regulated processes — complaints, safety, allocations — and embed checks now.
“Governance is the seatbelt of innovation. You hope you never need it, but you can’t drive without it.”
Takeaway: AI in housing doesn’t need to move faster, just smarter. Boards that act now will turn innovation into evidence and rebuild trust where it matters most.
Closing Reflection
My previous article, “A Day in the Life of AI in a Housing Association”, showed how AI already shadows our working day, from Pauline’s damp alert to Bob’s board pack. This piece has asked the harder question: what happens if we keep going as we are?
The research, from NHF to IHCT, is consistent: the sector’s ambition outpaces its assurance. AI in housing won’t fail because of bad intentions; it will fail because of good intentions left ungoverned. We’ve seen this before with sensors and data dashboards that promised transformation but delivered fragmentation.
This time, the consequences won’t just be technical; they’ll be ethical, reputational, and regulatory. The choice for leaders is simple: define what responsible AI looks like before someone else defines it for us, because the future isn’t waiting for permission; it’s already in the inbox. The future of housing AI won’t arrive with a bang; it will seep quietly through our systems, one unchecked line of code at a time.
To help boards and executives navigate this next phase, the DASH team has published a new AI Adoption Briefing, drawing on insights from NHF, HAILIE, DASH, and Service insights research. It explores where the sector stands, what good governance looks like, and how to move from pilots to assured, tenant-centred impact.
This article was featured in Housing Digital.