

By the end of 2025, AI had become part of the everyday fabric of many housing associations, not through grand transformation programmes, but via small, pragmatic decisions made under pressure.
Staff were using generative AI to draft letters and summarise cases; teams were testing predictive tools for repairs, damp and mould; boards were beginning to see AI-shaped insights appear in papers and dashboards.
Sector evidence suggests that while the vast majority of landlords experimented with AI in some form during the year, only a small minority embedded it at scale or put formal governance around its use (NHF AI Survey, 2025).
This gap matters. As MIT Technology Review noted in its recent “hype correction”, organisations are discovering that value from AI does not come from adoption alone, but from discipline, evidence and trust.
As I have explored in previous Housing Digital articles, 2025 was not a breakthrough year for housing AI; it was a revealing one.
What genuinely changed in 2025 was not the ambition around AI, but its visibility. Tools that had previously sat at the margins became part of day-to-day working life across the sector.
Sector survey evidence from 2025 shows that while well over nine in 10 housing organisations had experimented with some form of AI, fewer than one in five had embedded its use or put formal governance and training in place (NHF AI Survey, 2025; HAILIE; DASH).
Typical uses were modest but practical: drafting correspondence, summarising policies, analysing text-heavy material, and supporting early triage in areas such as repairs, complaints and arrears. These activities rarely sat within formal programmes; more often, they emerged organically as teams looked for ways to cope with pressure and capacity constraints.
Alongside this, a smaller cohort of organisations began testing more advanced applications, particularly predictive and prioritisation tools. However, adoption evidence consistently shows these initiatives remained exploratory rather than embedded, constrained by uneven data quality, skills gaps and uncertainty about long-term ownership and assurance.
What did not change at the same pace was the underlying operating model. Clear accountability, board-level oversight and consistent staff guidance lagged behind day-to-day use. Many colleagues were left using powerful tools without confidence or clarity, while boards were increasingly asked to take assurance on systems they did not yet fully understand.
In short, 2025 normalised AI activity across housing, but it did not yet normalise confidence, assurance or proof.
Taken together, the evidence from 2025 reveals a clear and repeated pattern: AI adoption moved faster than the structures needed to stand behind it.
Across housing surveys, peer leadership insight and wider organisational research, experimentation consistently outpaced formal governance, assurance and skills development. Informal use spread quickly, while ownership, risk frameworks, and board literacy remained uneven.
This created a form of institutional comfort. AI activity was visible enough to signal progress, yet informal enough to defer difficult questions about accountability, bias, data quality and escalation.
In my last article in Housing Digital, I described this dynamic as a “comfort trap”: progress that looks reassuring on the surface, but remains fragile because it cannot yet be evidenced, audited or confidently explained to tenants, regulators or boards.
One way to think about it is scaffolding. In many organisations, AI supported work and helped teams move faster, but it was not yet part of the permanent structure. Leaders could lean on it cautiously, but not rely on it when challenged. Where data is incomplete or inconsistent, AI doesn't just become less effective; it becomes harder to defend.
This pattern is not unique to housing. As MIT Technology Review noted in its recent “hype correction”, many sectors reached a similar plateau in 2025. For housing, however, the implications are sharper: without proof, assurance and transparency, even well-intentioned AI use becomes a leadership risk rather than a strategic asset.
For tenants, AI was largely invisible, until it shaped the tone, speed or outcome of an interaction. Automated updates, prioritisation tools and templated responses increasingly sat behind services, often without explanation.
Housing Ombudsman insight and complaints data suggest trust was most fragile where automation felt opaque or impersonal, particularly when residents could not easily understand decisions or reach a human when something went wrong.
For frontline staff, AI was experienced less as a formal programme and more as a quiet productivity aid. Sector surveys and leadership discussions indicate widespread informal use of generative tools to draft, summarise and sense-check work, often in the absence of clear organisational guidance. The result was a mix of relief and unease: time saved on routine tasks, but uncertainty about data, permission and accountability.
For executives, 2025 sharpened a different question. Early pilots and dashboards hinted at efficiency and foresight, yet evidence was uneven and hard to generalise. Confidence lagged capability. The issue was no longer whether AI could help, but whether leaders could consistently stand behind it.
For boards, AI crystallised as a governance issue. While supportive of innovation, non-executives increasingly focused on assurance – ownership, bias, escalation and transparency – often before organisations were ready to provide clear answers.
Across roles, AI felt less like autopilot and more like a satnav: useful most of the time, but still requiring human judgement when it confidently pointed the wrong way.

This outlook for 2026 reflects not a single viewpoint but a growing consensus from discussions among DASH's experts – housing executives, board members, data leaders and governance specialists who spent 2025 grappling with AI in practice.
The message is consistent: the conditions that allowed informal experimentation to feel acceptable are changing. Three pressures are now converging: tightening regulatory expectations, closer tenant and Ombudsman scrutiny of automated decisions, and boards demanding assurance they can stand behind.
Taken together, these pressures make drift risky. In 2025, AI could sit alongside services. In 2026, it will increasingly sit within them. That shift changes the leadership task. AI must move from something teams try to something organisations can explain, evidence and defend.
The year ahead is less about accelerating adoption and more about deciding what the organisation is prepared to stand behind in public, before regulators and on ethical grounds.
If 2026 is to be different from 2025, leadership behaviour has to change, not in ambition, but in discipline. If leaders cannot evidence how AI is used, they cannot credibly assure it. The consistent lesson from sector evidence, regulatory signals and peer discussion is that AI can no longer sit outside normal governance.
First, name ownership. AI cannot live everywhere and nowhere. Boards should be clear who is accountable for AI use across the organisation, how decisions are approved, and where responsibility sits when something goes wrong. Without this, assurance is impossible.
Second, demand evidence. Pilots and proofs of concept should close the loop or stop. Leaders should expect clear answers to simple questions: what problem does this solve, how do we know it works, and what risks does it introduce? This reflects the wider lesson highlighted in MIT Technology Review's "hype correction": value comes from discipline, not deployment. Utilise the DASH 90-Day AI Pilot Logbook.
Third, raise AI literacy, particularly at board level. NHF and peer leadership insight consistently show that confidence lags capability. Boards do not need technical mastery, but they do need enough understanding to ask the right questions about bias, data quality, escalation and transparency. Utilise the DASH AI/Digital Fluency Pathway.
Fourth, be explicit with tenants and staff. Housing Ombudsman guidance is clear that automation does not dilute duties around fairness, clarity or access to human support. Explaining where AI is used – and where it is not – builds trust.
Finally, embed AI into existing governance, rather than treating it as a special case. Risk, audit, complaints and equality frameworks already exist. AI should sit within them, not alongside them. Utilise the DASH AI Governance Toolkit.
In 2026, leadership is less about moving faster and more about standing behind what moves.
By the end of 2026, AI in social housing will be judged less by how widely it is used and more by how confidently it is governed.
The experience of 2025 showed that experimentation alone does not build trust. What matters now is leadership: the ability to explain decisions, evidence outcomes, and take responsibility when systems fall short.
AI is no longer just a digital capability. It is a test of organisational maturity, and of whether housing is prepared to stand behind the choices it makes.
For a more detailed briefing, read DASH’s AI Leadership in Social Housing at the 2026 Inflection Point.
This article featured in Housing Digital, January 2026.
This resource offers general information only and is not legal, financial or professional advice. See the full Terms of Use & Disclaimer. © DASH – Demystifying AI for Social Housing.



