
AI in Housing: Clear Answers and Plain-English Terms

Your combined FAQ and Glossary: straightforward definitions and board- and exec-ready explanations to help leaders use AI responsibly, confidently, and in line with housing values.
This page brings together DASH's expert-curated FAQ and AI Glossary in one place, linking each answer and term to governance, assurance, and tenant outcomes, so that housing leaders can act with clarity, not jargon.

Question: What can AI actually do for a housing provider?
Answer: AI can help reduce arrears, predict repairs, and streamline customer service, freeing staff for higher-impact work. It is already improving KPIs such as Tenant Satisfaction Measures (TSMs), complaint handling, and service quality. Small, well-scoped pilots show the clearest gains.

Question: How do we get our board and executive team aligned on AI?
Answer: Start with a shared briefing that links AI to regulatory duties, tenant outcomes, and corporate priorities. Use DASH's Board Readiness Checklist and Executive Readiness Assessment together to create a joined-up view. Treat AI like any transformation: a clear roadmap, a shared risk appetite, and regular reviews.

Question: Where should we begin?
Answer: Begin with one or two operational areas where the data is strong, teams are open, and the value is clear (e.g. complaints triage or rent collection). Use that as a proof point to build momentum, and roll out AI literacy training in parallel so the technology is not seen as a threat.

Question: How do we build a credible business case for AI?
Answer: Focus on one use case with measurable benefits (e.g. time saved, complaints reduced, risk flagged earlier). DASH's Business Case Toolkit includes housing-specific templates. Include risk mitigations, cost projections, and how success will be measured.

Question: Should we buy Microsoft Copilot for everyone?
Answer: Copilot is powerful for summarisation, admin, and search, but it comes at a high licence cost and does not fit every user. Consider a mixed approach: Copilot for power users, with secure access to other models (e.g. via Azure OpenAI or private GPT deployments) for everyone else.

Question: How do we keep AI use accountable and explainable?
Answer: Every AI use should leave a trail: who designed it, what data it used, and how its decisions are checked. DASH's Governance Checklist sets out the controls boards should see: fairness reviews, audit logs, and named accountability. If it can't be explained, it shouldn't be approved.
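To make "leaving a trail" concrete, here is a minimal sketch (in Python) of what one audit record for an AI-assisted decision might capture. The field names and example values are illustrative assumptions, not a DASH specification; each field answers one of the board's questions: who designed it, what data it used, and how the decision was checked.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    """One auditable record per AI-assisted decision (illustrative fields only)."""
    system_name: str                 # which AI tool produced the output
    accountable_officer: str         # named person responsible for the system
    data_sources: list[str]          # datasets the model drew on
    input_summary: str               # what case was assessed
    output_summary: str              # what the system recommended
    human_reviewer: str | None       # who checked the decision, if anyone
    fairness_review_ref: str | None  # link to the latest fairness/bias review
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example (invented): logging a single arrears-risk flag for later scrutiny
record = AIDecisionRecord(
    system_name="arrears-risk-model-v1",
    accountable_officer="Head of Income Management",
    data_sources=["rent_accounts_2024", "contact_history"],
    input_summary="Tenancy 10234 assessed for arrears risk",
    output_summary="Flagged as high risk; early-support call recommended",
    human_reviewer="Income Officer (J. Smith)",
    fairness_review_ref="FR-2025-Q1",
)
```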

Question: What data risks should boards look out for?
Answer: AI can magnify problems in poor data, especially bias, consent issues, or missing information. As a board, look for evidence of data audits and safeguarding practices before any AI tool is deployed. Red flag: no data governance owner = don't proceed.

Question: How do we know whether AI is delivering value?
Answer: Insist on pre- and post-implementation metrics: cost savings, risk reduction, or improved tenant outcomes. Value should be benchmarked against business-as-usual, not against vendors' AI promises. Ask for quarterly reports with trends, lessons, and any unintended impacts.
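As a simple illustration of benchmarking against business-as-usual, the sketch below compares a pilot's first-quarter figures with the pre-implementation baseline. All metric names and numbers are invented for illustration.

```python
# Illustrative only: compare pilot metrics against a business-as-usual baseline.
baseline = {"avg_days_to_resolve_complaint": 18.0, "arrears_rate_pct": 4.2}
pilot_q1 = {"avg_days_to_resolve_complaint": 14.5, "arrears_rate_pct": 3.9}

for metric, before in baseline.items():
    after = pilot_q1[metric]
    change_pct = (after - before) / before * 100
    print(f"{metric}: {before} -> {after} ({change_pct:+.1f}% vs baseline)")
```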

Question: What does a good AI pilot look like?
Answer: Good pilots have a clear use case, a risk review, data ownership, ethical safeguards, named accountable officers, and an exit strategy. Red flag: if you don't know who's responsible or what it's trying to solve, pause.

Question: Do board members need to become AI experts?
Answer: You don't need to code, but you should understand what AI is (and isn't), how it links to your statutory duties, and how to scrutinise it like any other business risk. Consider DASH's AI Boardroom Briefing or How to Use Gen AI guide, and ask for annual training as standard, as you do for cyber or safeguarding.

These definitions are taken from the UK Parliamentary Office of Science and Technology (POST) and The Alan Turing Institute.

Algorithm
A sequence of rules that a computer uses to complete a task. An algorithm takes an input (e.g. a dataset) and generates an output (e.g. a pattern that it has found in the data). Algorithms underpin the technology that makes our lives tick, from smartphones and social media to sat nav and online dating, and they are increasingly being used to make predictions and support decisions in areas as diverse as healthcare, employment, insurance and law. (The Alan Turing Institute, 2024)
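As a minimal illustration of "input in, rules applied, output out", the sketch below is a hand-written algorithm (no machine learning involved) that takes a small dataset of rent balances and outputs the accounts in arrears. The data and the sign convention (negative balance means arrears) are invented.

```python
# A simple algorithm: fixed rules, applied to an input, producing an output.
def find_accounts_in_arrears(balances: dict[str, float],
                             threshold: float = 0.0) -> list[str]:
    """Return tenancy references whose rent balance is below the threshold."""
    return [ref for ref, balance in balances.items() if balance < threshold]

# Input: a small (invented) dataset of rent balances
balances = {"T-1001": 125.00, "T-1002": -310.50, "T-1003": -42.75}

# Output: the pattern the algorithm was asked to find
print(find_accounts_in_arrears(balances))  # ['T-1002', 'T-1003']
```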

Artificial intelligence (AI)
The UK Government's 2023 policy paper 'A pro-innovation approach to AI regulation' defined AI, AI systems or AI technologies as "products and services that are 'adaptable' and 'autonomous'." Adaptability refers to the way AI systems, once trained, often develop new ways of finding patterns and connections in data that were not directly envisioned by their human programmers. Autonomy refers to the ability of some AI systems to make decisions without the intent or ongoing control of a human. (UK Parliament, 2024)

Algorithmic bias
Unfairness can arise from problems with an algorithm's process or the way the algorithm is implemented, resulting in the algorithm inappropriately privileging or disadvantaging one group of users over another group. Algorithmic biases often result from biases in the data that has been used to train the algorithm, which can lead to the reinforcement of systemic prejudices around race, gender, sexuality, disability or ethnicity. (The Alan Turing Institute, 2024)
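To make this tangible, here is a minimal sketch of one common first-pass fairness check: comparing how often a hypothetical model flags cases in two groups. The data is invented, and real fairness reviews go far beyond this single comparison.

```python
# Illustrative fairness check: compare flag rates across two groups.
predictions = [
    {"group": "A", "flagged": True},  {"group": "A", "flagged": False},
    {"group": "A", "flagged": False}, {"group": "B", "flagged": True},
    {"group": "B", "flagged": True},  {"group": "B", "flagged": True},
]

for group in ("A", "B"):
    cases = [p for p in predictions if p["group"] == group]
    rate = sum(p["flagged"] for p in cases) / len(cases)
    print(f"Group {group}: {rate:.0%} flagged")
# A large gap between groups is a prompt for investigation, not proof of bias.
```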

Data
Any information that has been collected for analysis or reference. Data can take the form of numbers and statistics, text, symbols, or multimedia such as images, videos, sounds and maps. Data that has been collected but not yet processed, cleaned or analysed is known as 'raw' or 'primary' data. (The Alan Turing Institute, 2024)

Deepfakes
Pictures and videos that are deliberately altered to generate misinformation and disinformation. Advances in generative AI have lowered the barrier to the production of deepfakes. (UK Parliament, 2024)

Deep learning
A subset of machine learning that uses artificial neural networks to recognise patterns in data and provide a suitable output, for example, a prediction. Deep learning is suitable for complex learning tasks and has improved AI capabilities in tasks such as voice and image recognition, object detection and autonomous driving. (UK Parliament, 2024)

Generative AI
An AI model that generates text, images, audio, video or other media in response to user prompts. It uses machine learning techniques to create new data that has similar characteristics to the data it was trained on. Generative AI applications include chatbots, photo and video filters, and virtual assistants. (UK Parliament, 2024)

Human-in-the-loop
A system comprising a human and an artificial intelligence component, in which the human can intervene in some significant way, e.g. by training, tuning or testing the system's algorithm so that it produces more useful results. It is a way of combining human and machine intelligence, helping to make up for the shortcomings of both. (The Alan Turing Institute, 2024)
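Here is a minimal sketch of the routing logic behind a human-in-the-loop setup, with an invented confidence threshold and case data: the system acts automatically only on confident cases and sends the rest to a person, whose decisions can later be used to improve the model.

```python
# Human-in-the-loop: the machine handles confident cases, a person reviews the rest.
CONFIDENCE_THRESHOLD = 0.85  # invented cut-off; set by policy, not by the model

def triage(case_id: str, predicted_label: str, confidence: float) -> str:
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"{case_id}: auto-routed as '{predicted_label}'"
    # Below threshold, a human makes the call; their decision can be fed
    # back to retrain or tune the model.
    return f"{case_id}: sent to human reviewer (model suggested '{predicted_label}')"

print(triage("C-501", "damp-and-mould", 0.93))
print(triage("C-502", "general-repair", 0.61))
```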

Large language model (LLM)
A type of foundation model that is trained on vast amounts of text to carry out natural language processing tasks. During training phases, large language models learn parameters from factors such as the model size and training datasets. Parameters are then used by large language models to infer new content. (UK Parliament, 2024)

Machine learning
A type of AI that allows a system to learn and improve from examples without all its instructions being explicitly programmed. Machine learning systems learn by finding patterns in training datasets. They then create a model (with algorithms) encompassing their findings. This model is then typically applied to new data to make predictions or provide other useful outputs, such as translating text. (UK Parliament, 2024)
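Here is a minimal sketch of the learn-from-examples pattern described above, using scikit-learn with invented training data. In practice a provider would train on far richer historical records, with the governance checks described elsewhere on this page.

```python
# Machine learning: learn a model from examples, then apply it to new data.
from sklearn.linear_model import LogisticRegression

# Training examples (invented): [weeks_in_arrears, missed_payments] -> 1 if
# the tenancy later needed an income-support intervention, else 0.
X_train = [[0, 0], [1, 1], [6, 4], [8, 5], [2, 1], [7, 6]]
y_train = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)  # the system "finds patterns" in the examples

# Apply the learned model to a new, unseen case
new_case = [[5, 3]]
print(model.predict(new_case))        # predicted class, e.g. [1]
print(model.predict_proba(new_case))  # how confident the model is
```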

Responsible AI
Often refers to the practice of designing, developing, and deploying AI with certain values, such as being trustworthy, ethical, transparent, explainable, fair, robust and upholding privacy rights. (UK Parliament, 2024)


Ready to put this into practice?

See how other housing leaders are applying safe, explainable AI in their organisations.
Board-ready AI. Tenant-first outcomes.

© 2025 DASH – Demystifying AI for Social Housing | Guidance only. Not professional advice. | For internal use only. | View full Terms of Use & Disclaimer
