How we think about AI.
Transparently, responsibly, and with human accountability at every step.
Five principles that govern our AI use.
We actively use AI — we advise on AI strategy and use AI in our own operations and client work. Here's exactly how we think about it.
01
Humans Are Always Responsible
AI may assist, accelerate, and augment — but humans are ultimately responsible for every outcome. Whether it's a prompt, a recommendation, or a decision, the human in the loop owns the result.
02
AI Should Create Good
AI should be deployed to solve problems, reduce suffering, improve efficiency, and create value. We evaluate every AI application by what it produces — and what it doesn't.
03
Full Autonomy Is Not the Goal
We do not pursue or advocate for fully autonomous AI systems. AI should operate under human direction, with appropriate oversight at every stage.
04
Shared Responsibility Across the Chain
Everyone involved in an AI project bears responsibility — executives who fund it, engineers who build it, managers who oversee it, and users who deploy it. No one gets to say 'the AI did it.'
05
AI Has Limitations — And So Do Humans
AI systems are biased; so are humans. Anything an AI generates must be critically evaluated before use, because neither machine nor human judgment should be trusted blindly.
AI in our internal operations.
As a small and agile startup, we rely on a variety of AI models to support internal work that benefits only us.
This includes developing and managing our website, prototype development, copywriting, and other internal tasks. This frees up our already limited resources to better serve our clients and community. We never use AI blindly, and we remain fully responsible and accountable for what we put out into the world.
How we use AI in client work.
As a technology company, we continually explore, evaluate, and adopt new and emerging technologies. We see AI as a valuable tool and enabler, capable of increasing our productivity, quality, and speed of delivery — similar to the role that interns or junior staff play in supporting experienced professionals.
We are transparent about our use of AI in certain aspects of our work with clients; however, we always remain fully responsible and accountable for all deliverables. We carefully brief, guide, and review everything AI produces. Just as no company relies solely on junior staff, we do not rely solely on AI agents; our team's experience and expertise remain central to our work.
When you hire us, you receive our skills, knowledge, and direct involvement, while AI serves as a contributor that helps us scale our efforts and deliver faster. Consider AI's contributions as added value — a complement rather than a replacement for what we provide.
Examples of how we use AI in client work
- To rewrite a first draft or a collection of ideas authored by us
- As a creative partner to critique, expand on, and contribute to our internal ideation processes
- For mass editing, transformation, or analysis of data
- During prototype development and other coding projects — either as a junior or mid-level developer supporting our (human) senior developers, as a quality assurance or security tester, or in other supportive capacities
- For contracting and other administrative tasks where we lack the resources or know-how to do them ourselves, and do not have the budget or contractual freedom to outsource the work
Never
- Rely solely on AI for any client deliverable
- Present AI-generated work as exclusively human-produced
- Use client data to train AI models
- Deploy AI without understanding what it's doing and why
How we govern AI use.
These principles apply to our own work and when advising clients on AI adoption.
Work with us
Tell us what you need.
Direct and honest. We'll tell you upfront if we're the right fit — and if we're not, we'll tell you that too.