The Context House

Make
Values
run AI

AI excels at data—but is dangerous without values. We make your culture readable for both humans and machines.

Our
Method:
ValuesFirst™

Every organisation holds years of experience, judgement and cultural codes that employees navigate intuitively—but that neither new hires nor AI can see. We capture the unspoken and make it readable.

We don’t write a book that gathers dust. We build a living knowledge resource that grows with your organisation—updated by your employees’ experiences, readable for your AI systems.

The Context House proceeds from a different assumption: AI should not merely be governed—it should be raised. This is not a metaphor. It is a method.

Our ValuesFirst™ methodology produces a book—a comprehensive text of hundreds of pages, written by AI under human guidance using a proprietary prompt methodology that achieves high linguistic density.

But the book is not an end product. It is living infrastructure that is continuously updated based on feedback from your employees. The longer you use it, the smarter it becomes.

Products

ValuesFirst Book™ + API

Your values as AI infrastructure

We extract how your organisation actually operates, then deploy it as an API any AI system can query.

Delivery: Workshops → Documentation → API → Dashboards
Example: Bot asks “Angry customer?” → API returns your de-escalation principles.
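
The query pattern in the example above can be sketched as follows. This is a minimal illustration only: the topic keys, the returned principles and the lookup function are invented for this sketch, not the actual ValuesFirst API or its schema.

```python
# Illustrative sketch of a values-API lookup. The situation keys and the
# principles below are invented placeholders; the real ValuesFirst API,
# its endpoint and its response format are not shown here.

VALUES_CONTEXT = {
    "angry_customer": [
        "Acknowledge the frustration before solving the problem",
        "Never blame the customer or a colleague",
        "Escalate to a human if the customer asks twice",
    ],
}

def query_values(situation: str) -> list[str]:
    """Return the organisation's principles for a situation, or an
    explicit escalation marker when no guidance exists."""
    return VALUES_CONTEXT.get(situation, ["ESCALATE: no guidance found"])

principles = query_values("angry_customer")
```

The key design point is the fallback: when the context holds no guidance, the API answers with an explicit escalation rather than letting the bot guess.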

Raise-AI Program™ + Companion

Trained people + daily AI assistant

Eight weeks of training that teaches staff to raise AI like a colleague, plus an assistant everyone uses, grounded in your values.

Delivery: Training → Certification → Assistant rollout
Example: Staff spot AI drift and correct it. Assistant answers “How should I handle this?” based on your culture.

Voice Agent Starter Kit

Voice AI filtered through your values

Speech recognition (<150 ms latency), synthesis, and a gateway that checks responses against your values before speaking.

Delivery: Speech setup → Gateway → Privacy config → EU hosting
Example: Customer calls. AI transcribes instantly. Values filter response. Zero-logging available.
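
The "values filter response" step can be sketched as a gateway that inspects a candidate reply before it reaches speech synthesis. The rules below are invented placeholders; a real deployment would check against the organisation's own principles, not a hard-coded phrase list.

```python
# Minimal sketch of a values gateway: a candidate response is checked
# before being spoken. Disallowed responses are replaced with an
# escalation message. The blocked phrases are illustrative assumptions.

BLOCKED_PHRASES = {"that's not my problem", "read the contract"}

def gateway_check(response: str) -> tuple[bool, str]:
    """Return (approved, text): approved responses pass through unchanged;
    rejected ones are replaced with a hand-off to a human."""
    lowered = response.lower()
    for phrase in BLOCKED_PHRASES:
        if phrase in lowered:
            return False, "Let me connect you with a colleague who can help."
    return True, response

approved, text = gateway_check("That's not my problem, read the contract.")
```

Because the check sits between the model and the voice output, nothing is spoken that has not passed the filter, which is the property the Starter Kit description relies on.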

FAQ / Comparisons

Implementation Timeline

Total duration: ~6 months across five phases


Phase 1: Foundation

6 weeks

  • Values discovery sessions with executive team
  • Leadership interviews and video documentation
  • Extract and refine initial Values Charter
  • Deploy MCP server for executive testing (Values Foundation v1.1)
  • Client decision point

Phase 2: Policy Integration

6 weeks

  • Refine Values Foundation v1.1 → v1.4 with executive team
  • Context-tune existing policy documents for AI readability
  • Integrate relevant regulatory frameworks
  • Executive team testing in real scenarios
  • Client decision point

Phase 3: The Book

10 weeks

  • Deep interviews with key personnel—bearers of tacit knowledge
  • Write your custom ValuesFirst Book (Master Context)
  • Document Legendary Cases and narrative knowledge
  • Deploy and validate ValuesFirst v1.1
  • Client decision point

Phase 4: Deployment

4 weeks

  • Deploy MCP Gateways for organisational integration
  • Hands-on training for employees
  • Client decision point

Phase 5: Continuous Improvement

1 week setup + ongoing

  • Deploy feedback loop and data capture system
  • Weekly executive briefs with value alignment analysis
  • Quarterly updates to ValuesFirst Book (v1.2, v1.3, etc.)

Terms

Payment: 50% at start, 40% on delivery, 10% net 30

No lock-in period—each phase independently contracted

Platform-agnostic—we connect where you already are

ValuesFirst is not a proprietary closed platform. It is an interface layer that connects AI to the organisation’s values, responsibilities and context—on top of the systems and models you already use.

Through MCP (Model Context Protocol), ValuesFirst can be integrated with various AI platforms and switched or expanded over time, without lock-in.

MCP Gateways and servers—governance outside the model

MCP servers publish versioned context. MCP gateways control how it is used.

This means:

  • Right context to the right role
  • Clear boundaries for autonomy
  • Full traceability across versions and usage

Governance resides in the protocol, not in free-form prompts—providing better security and control.
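
The gateway pattern described above can be sketched in plain Python: versioned context, role-based access, and a usage log for traceability. This is a conceptual illustration, not the actual MCP SDK; the class and field names are assumptions made for this sketch.

```python
# Conceptual sketch of an MCP-style gateway: the right context goes to
# the right role, every access is versioned, and usage is logged so it
# can be traced later. Names are illustrative, not a real MCP API.

from dataclasses import dataclass, field

@dataclass
class ContextGateway:
    # role -> {version -> context document}
    contexts: dict[str, dict[str, str]]
    usage_log: list[tuple[str, str]] = field(default_factory=list)

    def fetch(self, role: str, version: str) -> str:
        """Serve the context published for a role at a given version,
        recording the access for full traceability."""
        doc = self.contexts[role][version]
        self.usage_log.append((role, version))
        return doc

gateway = ContextGateway(contexts={
    "support": {"v1.1": "De-escalation principles for customer contact"},
})
doc = gateway.fetch("support", "v1.1")
```

Keeping the log inside the gateway, rather than in each client, is what makes "full traceability across versions and usage" a structural property instead of a convention.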

Three information formats—the right form for the right content

Text tokens: Used for values, principles, policies and structured examples. Stable, easy to review and simple to update.

High-resolution visual tokens: Used for processes, relationships and complex wholes where structure is more important than exact wording.

Optically compressed OCR images (cutting edge): Sensitive or extensive material can be stored as optically compressed images with OCR. This provides high information density, better integrity and efficient handling of large context volumes, and sits at the forefront of how multimodal AI systems handle long and complex contexts.

Why this matters

With this architecture you get:

  • Freedom to choose and switch AI platforms
  • Stronger governance and lower risk
  • Efficient handling of complex and sensitive context

You’re not investing in yet another AI platform—but in a governance and context layer that makes AI genuinely useful.

Standard AI: General AI tools like ChatGPT or Copilot are trained on the internet—not on your organisation. They provide generic responses based on statistical patterns and lack understanding of your specific context, culture and values.

TCH AI: An AI that has read the book about your culture—hundreds of pages capturing how you actually reason, not just what your policies say. Every response is filtered through your principles and escalated to human judgement when the situation demands it.

Result: An AI that acts like an informed and experienced colleague—not a stranger with access to Google.

Standard AI:

  • Optimises for efficiency and speed
  • Follows instructions literally
  • Lacks a moral compass
  • Can give “correct” answers that are still wrong

ValuesFirst AI:

  • Optimises for acting according to your values
  • Interprets instructions in light of your culture
  • Has built-in ethical guardrails
  • Gives answers that are both correct and culturally appropriate

The difference: A ValuesFirst AI doesn’t just ask “Can I?” but also “Should I?”

Training AI (traditional):

  • Feeds the model with data
  • Focus on what it should know
  • Result: An AI that knows much but lacks judgement
  • Like giving someone an encyclopaedia and expecting wisdom

Raising AI (the TCH method):

  • Extracts your organisation’s values
  • Gives the model context through a book we write together
  • Focus on how it should behave
  • Result: An AI with both knowledge and judgement
  • Like mentoring a new employee into your culture

Training gives AI facts. Raising gives AI judgement. The difference is a book—your book.

Context: The specific environment in which your AI operates: your industry, your customers, your language, your internal processes, your history.

Guiding Principles: The four principles that govern how AI behaves in your context:

  • Humans first—Human values take precedence over optimisation
  • Direction before speed—Better right than fast
  • Judgement before rules—Escalate when rules fall short
  • Responsibility is shared—AI is a tool, humans are accountable

Context + Principles = An AI that both understands your reality and acts according to your values.

Traditional policies:

  • PDFs on the intranet
  • Written once, rarely updated
  • Read during onboarding, then forgotten
  • Inaccessible to AI
  • Static documents

TCH approach:

  • Living documents in active use
  • Continuous updating through employee feedback
  • Integrated into daily work
  • Readable for both humans and AI
  • Dynamic infrastructure that grows with you

We make your policies readable—for real.

Learning organisation (Peter Senge, 1990):

  • People learn from experience
  • Knowledge is shared between colleagues
  • The organisation improves over time
  • Requires conscious effort and culture
  • Limited by human capacity

Self-learning organisation (TCH, 2025):

  • AI and humans learn together
  • Every AI interaction improves the system
  • Feedback loops between practice and policy
  • Automatic identification of gaps
  • Scalable knowledge accumulation

The next step in organisational development: Not AI that replaces learning, but AI that accelerates it.

What happens if you don’t act?

  • An AI agent that sends the wrong email to 10,000 customers
  • An automated decision that breaches GDPR
  • A chatbot that provides advice it shouldn’t
  • EU AI Act fines of up to €35 million or 7% of global annual turnover, whichever is higher

ROI perspective: A Context Tuning investment that prevents a single PR disaster or compliance breach has paid for itself.

The question is not whether you can afford to do this. The question is whether you can afford not to.

What the big firms offer (Deloitte, McKinsey, Accenture, BCG):

  • Responsible AI frameworks
  • Checklists and governance structures
  • Risk assessments and compliance audits
  • Procedure-oriented solutions
  • Answer to the question: “How do we ensure AI doesn’t breach rules?”

What they miss:

  • How AI learns what you mean by good judgement
  • How tacit knowledge is transferred to systems
  • How values become operational—not just declarative
  • How culture is reproduced when decisions move from human to machine

What The Context House does differently:

  • Treats AI as a colleague to be raised—not a system to be constrained
  • Captures your tacit knowledge in narrative, not checklists
  • Makes values readable for AI—not just for auditors
  • Builds judgement over time through the Pre–On–Off–Re loop
  • Answers the question: “How does AI learn how we do things here?”

The short version:

The consultancies build fences around AI. We teach AI why the fence exists.

Traditional consultancy deliverables:

  • Report delivered → project ends
  • Document becomes outdated within months
  • “Call us in two years for an update”
  • Value diminishes over time
  • One-off investment

The ValuesFirst™ book:

  • The book lives and grows with your organisation
  • Continuously updated based on employee feedback
  • New Legendary Cases added as reality changes
  • Value increases over time—the book gets smarter
  • Ongoing partnership

How it works:

  1. An employee encounters a grey zone—a situation where the book doesn’t provide sufficient guidance
  2. The feedback is captured—either directly or through AI escalations
  3. The book is updated—new insights, new cases, refined reasoning
  4. The AI learns—next time, the situation is handled better
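
The four steps above can be sketched as a minimal loop: an unhandled situation is captured as a grey zone, added to the book, and found the next time it occurs. The case text and matching logic are invented for illustration; in practice a captured case would pass human review before entering the book.

```python
# Sketch of the feedback loop (steps 1-4 above). The seed case and the
# substring matching are illustrative assumptions, not the real method.

book_cases: list[str] = ["Refund outside warranty: favour long-term trust"]

def handle(situation: str) -> str:
    # 1. Check the book for existing guidance
    for case in book_cases:
        if situation.lower() in case.lower():
            return case
    # 2. No guidance: capture the grey zone as feedback
    captured = f"GREY ZONE: {situation}"
    # 3. The book is updated (after human review, in a real deployment)
    book_cases.append(captured)
    return captured

first = handle("late delivery")   # unknown: captured as a grey zone
second = handle("late delivery")  # 4. next time, the case is found
```

The point of the sketch is the asymmetry: the first encounter produces a capture, every later encounter produces an answer, which is why the book gets more valuable with use.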

The result: A knowledge resource that never becomes obsolete. The longer you use it, the more valuable it becomes.

We don’t deliver a product. We build infrastructure that grows with you.

Standard prompt

Problem: No context. No culture. No values. The AI guesses.

TCH prompt (simplified example)

The difference: Years of cultural capital—condensed into instructions AI actually understands. And updated every time your employees teach you something new.

There is a reason The Context House was founded in Stockholm, not Silicon Valley.

Silicon Valley optimises. Scalability, efficiency, disruption. The human is either user or obstacle.

The Nordic tradition—rooted in consensus culture, the labour movement and collaborative decision-making—views relationships differently. The human is not a problem to be solved but a party to work with.

The Context House calls this perspective relational AI. Not AI that replaces the human. Not AI as a tool for the human. AI as colleague—with all that entails of mutual learning, negotiation and responsibility.

It is a different narrative from the American one. But perhaps one the world needs.

Yesterday (→2024)

AI was a tool. Chatbots answered questions. Humans made all decisions. Risk was limited—AI could give poor advice, but never acted independently.

Today & Tomorrow (2025→)

AI is becoming an agent. It books meetings, sends emails, makes decisions, handles customers. Work becomes more efficient—but risks increase. A miscalibrated AI agent can damage your brand, breach policies or alienate customers in seconds.

The Context House prepares you for the agentic era—before it’s too late.

The Starting Point

Many organisations live with a gap between what they say (policies), what they decide (strategies) and what they actually do (practice). Experienced employees navigate this gap intuitively through tacit knowledge.
AI cannot.

Our Solution

The Context House bridges the gap between policy and practice. We make the unreadable readable. The result is a living book—your organisation’s culture in a format both humans and AI can understand, evolving in step with your business.

The method is called Context Tuning™: the process of transforming your values, tacit knowledge and real decisions into living, machine-readable instructions that AI can actually follow—and that are continuously updated.

How the
AI Book
Works

The ValuesFirst™ book is not an ordinary policy handbook. It is a living knowledge resource that makes your organisation’s culture readable—written with a methodology that captures tacit knowledge without simplifying away the complexity.

The Process

Extraction — We interview key personnel and analyse real decisions

Formulation — AI writes under human guidance using our proprietary prompt methodology

Validation — The organisation reviews and adjusts

Integration — The book becomes readable context for your AI systems

Continuous updating — Employee feedback makes the book smarter over time

The result: A book of hundreds of pages with high linguistic density—each chapter making your culture more readable for the systems that will represent you. And that continues to grow long after the initial delivery.

This is not a product. It is infrastructure that grows with your organisation.

What
is at
stake


If we continue on the current path, we will have AI that is:

  • Technically impressive but socially illiterate
  • Rule-governed but devoid of judgement
  • Efficient but untrustworthy

Organisations will implement agentic AI because they must—and in time discover that it undermines culture and trust from within.
The Context House offers an alternative where values and tacit knowledge form the core. Our distinctive methodology uses your organisation’s values and tacit knowledge to raise AI into a valuable colleague—one that keeps learning, just like a human colleague would.

About us

The Context House was founded on the insight that the AI revolution is missing a critical component: human values.

We combine:

  • Academic depth in relational competence and tacit knowledge
  • Technical expertise in AI systems and contextualisation
  • Practical experience from organisations across industries

We build the soft infrastructure for the agentic era.

Our team has published four books on the methodology, including Raising AI and ValuesFirst.

The Context House | Org.Nr: 556732-2812 | Stockholm Sweden

Contact

Contact Form