
How AI Can Advance or Undermine Impact: Insights from the 3 Space Investor Breakfast

  • Impact VC
  • Dec 18, 2025
  • 10 min read

The 3 Space Investor Breakfast served as a curated space bringing together LPs, GPs, Foundations, Family Offices, and capital allocators who are actively shaping the future of impact investing in venture capital.


Co-hosted by the 3-Space organisers, ImpactVC, Rothschild & Co Wealth Management UK, UpLink - World Economic Forum Open Innovation, and Better Society Capital, and supported by Morgan Lewis, this year's breakfast focused on the urgent question: where does AI help, and where might it hinder, impact outcomes?




We heard from keynote speaker Philip Colligan, CEO of the Raspberry Pi Foundation, and then dug deeper into the question in smaller breakout groups across three core themes - Health, Economic Empowerment, and Climate & Just Transition.


We’ve captured below some key headlines and takeaways. 


We began by exploring where AI could have the greatest positive impact, guided by insights from Philip - whose experience spans helping lead Raspberry Pi’s successful IPO, serving as Deputy Chief Executive at Nesta, acting as a NED at the Nudge Unit, and earlier roles in local government and the civil service. His breadth of perspective gave us a grounded view of both the promise and the practical realities of tech innovation.



🌟Opportunities 🌟

  • AI-enabled scientific breakthroughs: AI’s role in scientific discovery (e.g., DeepMind’s AlphaFold) demonstrates its potential to help solve complex global challenges.

  • Embedding AI across education and skills: While technological innovation offers huge potential to improve lives, it can also widen inequality - particularly for young people in resource-stretched schools that struggle to keep pace with rapid change. Despite this, there is a significant opportunity to embed AI into every stage of learning:

    • In schools, teaching young people how to use data, build with AI, and create new innovations.

    • In universities, ensuring every programme - from biology to political sciences - includes at least one AI-focused module.

    • In workplace learning, integrating AI literacy to ensure AI augments work rather than replacing it.


  • Positive models: Mozilla was highlighted as an example of an organisation that has consistently nudged the market towards better, more ethical practices - even if progress has been incremental.



⚠️ Challenges ⚠️

  • Cautionary lessons from the early internet: In some ways, education systems missed the opportunity of the early internet. In the 1990s, curricula shifted from teaching computing fundamentals to simply teaching how to use tools (e.g., Word, PowerPoint). We must avoid repeating this with AI, for both young people and adults. It’s not enough to teach prompting - people need to understand data, build with AI, and create new breakthroughs.

  • Historic lack of ethics and diversity in tech still prevails: Ethics and diversity have long been overlooked within computing and tech disciplines. There is a real risk of repeating these failures in the AI era. Very few technologists employ or seriously engage with ethicists, often believing this is solely the government’s responsibility. Philip stressed an urgent need for CTOs and technical leaders who are both highly literate in AI and grounded in ethics.


Philip’s final reflection was to urge investors to challenge organisations building AI on:

  • How they think about ethics

  • How they assess societal implications

  • How they plan to mitigate harms


These questions are the foundations of the Responsible AI Due Diligence Tool we’ve been building with Reframe Venture, Project Liberty Institute, and supported by Zendesk.


Below are some key takeaways from the breakout groups.


Economic opportunity (work, skills, and access to opportunities) 

🌟 Opportunities 🌟

Economic Inclusion & Financial Access

  • AI can enable fairer lending, risk assessment, and financial products, for example Urban Jungle 

  • Widening access to finance for underserved communities

  • Platforms that match people with good work that is right for them, improving job matching in a deeply imperfect recruitment system, for example Jack & Jill 

  • AI can support small business owners by lowering technical barriers



Workplace Empowerment & Organisational Transformation

  • There’s a real need and opportunity to build communities, strengthen worker connections, and support workplace organising (for example, see Organise).

  • Increasing access to justice by making legal advice and tribunal processes more accessible, for example Valla

  • Opportunities for companies to apply high-discretion augmentation, giving workers control over how AI supports their work. High-discretion augmentation is where workers can decide which tasks to hand to AI and which to keep, and can override or ignore AI suggestions without penalty. This preserves agency and keeps humans “in command” rather than merely “in the loop”. It means AI is designed to complement human skills (e.g., analysis, judgement, creativity, relationship-building) instead of stripping tasks down to routines that can be easily monitored and controlled.


Democratising Access to Education & Learning

  • AI can level the academic playing field through personalised tutoring, behavioural insights, and scalable support

  • Can democratise access to quality learning for populations historically excluded.

  • Opportunity to train models on higher-quality, smaller datasets capturing real student behaviour



⚠️ Challenges ⚠️

Rights, power, and worker agency in an AI-driven workplace 

  • The core question here was how workers can gain real power and control over how AI affects their jobs, including things like workers having autonomy in deciding how AI tools are employed in their day-to-day work. Research from the Institute for the Future of Work distinguishes between high-discretion augmentation, where workers choose how AI supports them, and low-discretion augmentation, where AI dictates tasks and reduces autonomy. 

  • The goal is to move toward high-discretion augmentation. Historical lessons - such as those described in the book Power and Progress and past automation in the automotive industry - show that outcomes depend on worker power structures. In regions with strong unions, like parts of Germany, automation led to upskilling and supervisory roles rather than job stripping. 


Bias, Inequality & Reinforcement of Existing Power

  • AI trained on biased or low-quality data can deepen inequality and distort truth. A key opportunity may lie in backing companies that create their own high-quality datasets and train models on smaller, behaviour-driven data - for example, tutoring tools such as Solvey.ai that personalise support based on how students actually learn, not just the answers they produce.

  • Recruitment risks: AI-written CVs are being screened by AI tools, pushing recruiters back to old approaches of using their existing networks and social capital → deepening inequality.


Job Displacement & Workforce Disruption

  • Automation replacing not just ‘low-skilled’ roles but increasingly high-skilled roles (e.g., in the medical field - radiology, nursing decision-support, cataract post-op).

    • Need to explore how workers can be trained or upskilled into roles managing AI, rather than simply being replaced by it - what does this mean for upskilling? 

  • Over-automation in human-critical sectors 

    • Need to invest in what we value in humans - the soft skills and emotional support needed in certain settings, e.g. healthcare 

    • Risk of replacing human interaction in care and health settings despite patient preference for humans.

  • Fear of loss of autonomy at work; admin-heavy roles worsened by tech rather than improved.


Louise Marston, Resolution Foundation, also wrote a thorough write up of this session on LinkedIn, which you can access here.


Climate and Just Transition


🌟 Opportunities 🌟


AI can significantly accelerate climate action and decarbonisation

  • Improves detection and monitoring (thermal imaging for buildings, methane leaks, climate-risk mapping).

  • Delivers major efficiency gains across industry, renewables, minerals, heating/cooling, and agriculture.

  • Enhances modelling for underused resources like geothermal, enabling cleaner baseload energy.


AI can strengthen the just transition through new governance and institutional models

  • Organisational innovations (cooperatives, data-unions, public-interest AI) can re-align value with the public good, for example the DAIR Institute, Co-Op Cloud

  • Investors can embed ethics, governance clauses, and support audit/safety tools to shape responsible ecosystems.

  • Cross-sector collaboration helps build data governance models and talent pipelines for climate-oriented work.


AI can increase speed, reduce error, and unlock scalability for climate and social solutions

  • Expands capacity by reducing human error and enabling faster, more precise operations.

  • Supports policymakers and businesses in designing more efficient, higher-impact systems.

  • Creates an opportunity to use all available technological tools to confront converging planetary crises.


DAIR (Distributed Artificial Intelligence Research Institute) is an independent, community-rooted AI research institute focused on preventing AI harms, centering diverse perspectives, and advancing inclusive, ethical AI research and frameworks.


⚠️ Challenges ⚠️


AI’s energy footprint and geopolitical concentration threaten climate and security goals

  • Data-centre energy and cooling demands undermine sustainability and burden households through grid upgrade costs.

  • Europe risks dependency on imported AI capacity, mirroring past energy vulnerabilities.

  • Infrastructure clustered in a few nations (US, China, Gulf states) creates strategic exposure.


AI risks worsening inequality and labour-market disruption

  • Displacement of lower-skilled jobs and automation of entry-level tasks deepen shadow unemployment and loss of purpose.

  • Unequal access to compute, data, and education may create a two-tier cognitive economy (can happen on local and global scales)

  • Benefits accrue disproportionately to those with assets and computational power.


Bias, safety, and accountability gaps undermine trust and democratic resilience

  • Algorithmic and synthetic-data bias affects justice, lending, and image recognition systems.

  • Robotics + AI make behaviour harder to explain, complicating accountability, see Planet A’s piece on mythbusting physical AI here

  • Corporate power, information asymmetries, and regulatory constraints threaten democracy and public trust.


Co-authored by Sam Baker (Planet A), together with Jan Erik Solem (Staer — building intelligence for mobile robot fleets), and Søren Halskov Nissen (Yaak — building the data platform for spatial intelligence).

Health and Well-being


🌟 Opportunities 🌟

Improving Precision, Consistency, and Quality of Care

  • Enhances diagnostics: MRIs, skin analytics, stroke detection, ambulant care.

  • Supports clinical staff shortages and enables more consistent decision-making.

  • Focuses on improving quality of care, not just cost savings - reducing deaths caused by low-quality care.


Addressing Equity Gaps

  • AI can mitigate disparities, e.g., gender bias in clinical trials, limited access to care or data.

  • Potential to democratise access to better diagnostics and personalised treatment.

  • Can be used to collect and analyse long-term outcomes that were previously unavailable.

Supporting Implementation and Adoption

  • Implementation science is critical: user experience, workflow integration, and healthcare professional dynamics all affect adoption (see more here).

  • Early-stage founders need clarity on what counts as a meaningful outcome (e.g., real health impact vs. cost savings - not all health tech equals impact).

  • Opportunity to combine strong R&D infrastructure (e.g., UK) with commercial focus in larger markets (e.g., US).


⚠️ Challenges ⚠️

Data Limitations and Equity Concerns

  • Lack of rich, high-quality datasets limits training and evaluation.

  • Systems often have insufficient capacity to collect, store, and use data effectively.

  • Existing equity gaps in health care may be reinforced if not addressed.


Regulatory and Implementation Barriers

  • High regulatory standards, while essential for ensuring safety in highly regulated sectors, can sometimes slow deployment and increase complexity.

  • Thoughtful regulation strikes a balance - protecting users and enabling long-term commercial success through sustained trust and adoption, rather than acting as an indiscriminate brake on innovation.

  • Adoption depends on workflow integration, professional structures, and incentives, not just effectiveness.


Misalignment of Incentives

  • Demand for AI tools can be unclear due to weak incentives in healthcare systems.

  • Founders may prioritise cost savings over actual health outcomes.



Conclusion

Across every discussion, one theme was clear: AI systems are human-built tools - trained on human data and shaped by human decisions. They encode patterns, not understanding; they generate probabilities, not truth. Recognising this is essential to using AI effectively, governing it well, and building systems worthy of trust.


The opportunities across health, climate, economic empowerment, and education are real and significant, but so are the risks - inequality, bias, misaligned incentives, energy demands, and the potential erosion of worker and democratic power.


For investors at the frontier, the responsibility is twofold: (1) back teams using AI intentionally for impact outcomes, and (2) rigorously challenge organisations on ethics, governance, data quality, societal implications, and long-term accountability.


With thoughtful capital, strong governance, and cross-sector collaboration, this community can ensure AI becomes a force that strengthens, rather than undermines, people, planet, and shared prosperity.



Tools and resources for responsible AI

This framework is a v1 due-diligence tool developed with international input from GPs, LPs, operators, and domain experts. It is intentionally agile and iterative: future updates will refine the questions as technologies mature, new risks and opportunities emerge, and our collective understanding of AI’s impact on markets and societies deepens.




Your perspective on AI matters! As part of the next phase of the AI resource building, we’re launching a short anonymous survey with Reframe Venture, the Project Liberty Institute, and Zendesk to understand how venture investors view AI development. It takes 5–10 minutes and will inform key industry research. Aggregated findings will appear in a public white paper in January 2026—please share with relevant investors.


Resources and further reading mentioned

Economic opportunities

Analysing the distribution of capabilities in the UK workforce amidst technological change UCL’s Institute for the Future of Work explores how technological exposure and capability distribution shape inequality and resilience in the UK labour market.

Reframing Automation: a new model for anticipating risks and impacts A framework from IFOW that moves beyond job loss metrics to understand the broader social and wellbeing impacts of automation.

Power and Progress: Our 1000-Year Struggle Over Technology and Prosperity Economists Daron Acemoglu and Simon Johnson trace history’s tech revolutions, exposing how progress and power imbalances shape who truly benefits.

Is This Working? The Jobs We Do Told By The People Who Do Them Charlie Colenutt’s oral history captures the voices behind Britain’s everyday work—an authentic look at meaning, dignity, and change in modern labour.

Health

Unlocking AI’s Impact Potential in Healthcare: Bridging the Implementation Gap Notes from an ImpactVC event examining why AI’s promise in health systems remains under-realised and how investors and innovators can close the delivery-innovation divide.

When Does NOT Using AI in LMIC Healthcare Become Unethical? A new piece from ICT Works argues AI’s life-saving potential makes deployment in resource-poor settings an ethical imperative - if guided by contextual wisdom.

Joe Stringer | L1 Impact | Is HealthTech/LifeTech worth investing in? Joe dives into the pros and cons of Health Tech and Life Tech, and unpicks the high-impact, defensible opportunities that generalist investors often overlook.


Climate, energy, and the just transition

The AI Energy Boom: A Climate Tech Crossroads | Project Frame March 2025 Community Meeting A Project Frame community session exploring how surging AI energy demand collides with climate goals, and what it means for investors trying to back genuinely net-positive climate solutions.

The Rebound Effect: AI’s Silent Backfire | Planet A An exploration of how AI efficiency gains can paradoxically increase overall energy use, highlighting rebound effects as a hidden climate risk.

The Electric Slide: AI, Energy, and Geopolitics | Packy McCormick (Not Boring) A deep dive into how AI’s energy demands are reshaping global power dynamics, linking technological ambition with competition over energy systems and geopolitical influence.


General AI 

Robots in the Real World: Mythbusting Physical AI Planet A unpacks the realities of robotic adoption—challenging hype to reveal what physical AI can (and can’t) do in practical environments.


Flash Poll: Trust and Artificial Intelligence at a Crossroads | Edelman Trust Barometer 2025

A global Edelman Trust Barometer flash poll revealing stark geographic divides in AI enthusiasm - Brazil and China embrace rapid adoption through experimentation, while US, UK, and German markets show skepticism and tempered advance - plus how trust, knowledge, and experience drive adoption.


