This personal reflection was written by Yağız Boran, a volunteer note-taker at the Knowledge4Change Tkaronto Hub’s Innovation Lab, “Centering Community Participatory Knowledge in the Age of AI,” held on January 30-31, 2026, at the Regent Park Community Centre. To learn more about the Innovation Lab, check out our event recap here!

The panel on ethics, labour, and organizational strategy at the K4C Innovation Lab foregrounded a central tension in contemporary AI adoption: while AI is often framed as a neutral productivity tool, its deployment is deeply political, infrastructural, and shaped by power relations. One of the most striking points raised was that “so much of what we do today is embedded in technologies we no longer think about — where they come from, whose ideas are inside them, and what kinds of labour make them possible.” This widespread framing of AI as an inevitable efficiency tool conflicts directly with mission-driven, community-rooted work that depends on trust, relational accountability, and ethical restraint. Across sectors—conservation, community development, and climate/energy governance—speakers emphasized that technology is not neutral, and that AI adoption must be governed by values rather than momentum.
“Technology is not a goal in itself. It must serve the mission.” – Bella Lam
Bella Lam, CEO of the Jane Goodall Institute of Canada, articulated this most clearly by framing technology as subordinate to organizational purpose. For JGI, this mission is grounded in a “North Star” vision of a world where “animals, people, and the environment can thrive together,” with conservation framed as humans acting within ecosystems rather than standing outside them. In practice, this has translated into bounded, research-oriented uses of AI—such as digitizing decades of multilingual field notes and piloting sound-based species detection—while drawing firm red lines around community-facing deployment. As Lam emphasized, “privacy and security are huge, especially when working with communities,” and the organization has deliberately avoided introducing AI into Indigenous-led conservation contexts without explicit consent and community control over data.
This values-first framing also extended to organizational risk. Lam noted that staff raised the environmental cost of AI as a top concern in an internal survey, highlighting the contradiction between climate-oriented missions and energy-intensive data infrastructures. She also described reputational risks arising from misinformation and authenticity failures: AI-generated false quotes attributed to Jane Goodall circulated online, requiring staff to spend time correcting and removing them.
“Sometimes you don’t need AI, you need more people.” – Anna Jasinska
Anna Jasinska, Associate Manager of Communications, Development and Operations at the Toronto Centre for Community Learning & Development, brought a frontline community perspective to these concerns. Within a small, community-rooted organization, AI adoption has been fragmented and individualized, with some staff eager to experiment and others resistant. Rather than imposing top-down mandates, Jasinska described a cautious, consultative approach: “sometimes you don’t need AI, you need more people.” This statement captured a recurring tension in the panel: efficiency narratives often obscure structural under-resourcing. The temptation to use chatbots or automation for community-facing services may appear pragmatic, but it risks eroding trust and relational accountability, particularly when community needs are complex and context-specific. As Jasinska noted, even seemingly benign tools—such as AI-generated homework problems or VR-based interview practice—require human oversight to address bias, misalignment, and emotional nuance.
Rajaa Berry, an OCIC Youth Policy-Maker alumna, extended this analysis into the domain of climate and energy governance, framing AI as a decision-support tool rather than a decision-maker. She suggested that in energy transition contexts, AI may be able to support forecasting of renewable variability and modeling of grid flows, but only under conditions of transparency, equity-centered governance, and strong data stewardship. Berry cautioned that uneven data availability across regions can skew investment toward already data-rich communities, reproducing structural inequities.
The panel repeatedly emphasized that AI systems are built on extractive labour practices, data centres, and invisible human work, from data labeling to content moderation. When organizations adopt AI under narratives of efficiency and inevitability, they often obscure these material and human costs. One speaker noted that “these systems make complicated power relations even harder to trace, because their sources and origins become invisible,” which complicates accountability within organizations.
“Policy can function precisely by forcing organizations to slow down, to pause and reflect, rather than reproducing ‘Wild West’ adoption driven by hype and funding pressures.” – Bella Lam
A recurring theme in the panel was the distinction between “invited” and “uninvited” AI. While speakers described carefully scoped pilots as legitimate—such as research transcription or sound-based species detection—they also highlighted the growing presence of AI features automatically embedded into email platforms, meeting tools, CRMs, and productivity suites. These “uninvited” systems blur consent through default settings and opaque terms of service. As one speaker put it, “some tools are invited; others are uninvited, baked into the platforms by default.” This dynamic complicates organizational governance, particularly when meeting transcripts or automated summaries may expose sensitive community discussions to surveillance and data extraction.
Audience questions further emphasized governance as a form of ethical friction. Rather than viewing policy as bureaucratic delay, speakers framed governance frameworks as necessary mechanisms to slow down impulsive adoption and create decision points. As Bella Lam noted, policy can function precisely by forcing organizations to pause and reflect, rather than reproducing “Wild West” adoption driven by hype and funding pressures. Rajaa Berry similarly argued that AI is too consequential to be left to individual discretion; it requires operational-level oversight structures grounded in equity, transparency, and accountability.
“The real problem is not whether machines think but whether men do.”
– B. F. Skinner
These organizational reflections remind me of B. F. Skinner’s quote above. The ethical risk is not AI autonomy, but human abdication of responsibility under narratives of inevitability. When institutions frame AI as unavoidable, they displace accountability into infrastructure and market logic. Against this, the panel offered a coherent alternative: mission alignment as a “North Star,” community trust as non-negotiable, governance as productive friction, and narrow, bounded uses of AI as preferable to generalized automation. As Descartes’ “Cogito, ergo sum” reminds us, agency remains human. AI may shape organizational workflows, but responsibility for harm, equity, and care cannot be delegated to systems designed under extractive logics.
In synthesizing these discussions, the panel clearly challenged techno-solutionist approaches to organizational strategy. AI cannot ethically substitute for labour protections, democratic governance, or institutional reflexivity. The question is not whether organizations will use AI, but under what conditions, with whose consent, and for whose benefit. When organizations treat AI as an inevitable authority rather than a helpful tool, they risk outsourcing not only decision-making, but ethical responsibility itself.

Yağız Boran is a community worker, disability advocate, and adaptive athlete working at the intersection of lived experience, systems change, and public engagement. With a background in psychology, he focuses on stakeholder engagement, capacity promotion, and program coordination, applying a people-centred and anti-colonial approach to community, education, and access to services.
Alongside his professional work, his journey as an adaptive athlete serves as a platform for storytelling, fundraising, and awareness-building. Grounded in the understanding that the personal is political, he challenges structural barriers across disability, gender equity, and equitable access to health and social services, with a focus on justice-oriented practice, decolonization, and building more accountable systems.