Freudian Spaceship — Design Note 1

Original note: early 2026. Updated and expanded: March 2026.


This is a design document for a metamorphic research interface where the content itself becomes part of the structural design system, and the entire mode of interaction can shift. The platform is conceived as a space for experiencing research through different phenomenological lenses — each "mode" is almost like a different cognitive approach to the same corpus of knowledge.


Current State (March 2026)

The Linear/Academic Mode is functionally complete and in review with project colleagues.

What exists:

  • Nuxt 4 / Vue 3 static site at preview.freudianspaceship.com
  • Blog, Podcast (Conversations About Concepts + Interviews), Research, Contact pages
  • 7 podcast episodes + 4 interview pages with integrated audio player
  • 10 transcript blog posts
  • Tag-based filtering, CI/CD pipeline (push → live in ~20 seconds)
  • Content schema established in YAML frontmatter
  • Substantial text corpus: ~140,000 words across transcripts alone

What was reconsidered from the original note:

  • p5.js: Not used. The homepage animation is pure CSS — a 3D planetary system using radial gradients and keyframe transforms. No JS overhead, fully accessible via prefers-reduced-motion. The CSS approach proved more than sufficient and keeps the site fast. P5 remains an option for future interactive modes but is no longer a baseline requirement.
  • ML/NLP pipeline: Not yet implemented. The transcript corpus is rich but not yet fragmented into addressable sub-units (timestamps, paragraph blocks, concept markers).
  • Spatial / Ambient / Serendipitous / Dialogic modes: Not yet built. These remain the next horizon and the substance of this document.

The Core Conceptual Problem

You're not just building a website with alternative themes — you're building a platform where each mode is a distinct way of encountering the same corpus of knowledge.

This is not a purely technical problem. It is a problem about how knowledge is encountered, which is precisely one of the questions the project itself is asking. The interface is not neutral — it enacts a theory of access. The linear mode enacts an academic theory of access (read, listen, follow citations). The other modes would enact different theories — spatial, haptic, ambient, oracular.

This makes the interface design itself part of the research practice, not just a vehicle for it.


The Content as a Resource

Something worth naming clearly: the project has accumulated a substantial corpus that is currently only navigable in one way (linearly, by type). This corpus includes:

  • ~140,000 words of transcript across 11 conversations and interviews
  • Research papers and PDFs
  • Blog writing and theoretical notes
  • A tag network already embedded across all content (anxiety, body, psychoanalysis, decolonial, creativity, diagnosis, ecotherapy, love, AI, etc.)

The existing tag system is already a latent concept graph — we just don't have a way to visualise or traverse it non-linearly yet. Every additional mode is essentially a different way of navigating this same corpus, not a different corpus.


Modes as Epistemological Frameworks

Each mode is not a skin but a way of knowing — a different cognitive stance toward the same material.

Mode                     | Status    | Core idea
Linear / Academic        | ✅ Built  | Read, listen, follow — traditional navigation
Spatial / Concept Map    | 🔜 Next   | See the whole corpus as a navigable field of concepts
Listening Room           | 🔜 Next   | Audio-first, minimal chrome, ambient presence
Temporal                 | 🔜 Future | Content arranged by time, revealing how ideas develop
Serendipitous / Aleatory | 🔜 Future | Random encounters, oracle-style, surprise
Dialogic                 | 🔜 Future | Conversational interface into the corpus
Void / Search            | 🔜 Future | No navigation at all — just a prompt

Mode Ideas — Expanded

Spatial / Concept Map Mode

The tag data already exists on every piece of content. The concept map makes visible what is currently invisible: the network of ideas running through the whole corpus.

What it would look like:

  • A 2D force-directed graph where nodes are concepts/tags and edges connect nodes that appear together in the same content
  • Node size could reflect frequency (how many pieces reference this concept)
  • Edges could have weight (how often two concepts appear together)
  • Clicking a node surfaces all content touching that concept
  • The layout would shift and breathe — not static, slightly alive

What it would feel like:

  • Like looking at the project from above, as a whole
  • Like a map drawn by the research itself rather than by a navigator
  • The connections you discover might surprise even the people who made the content — "anxiety" and "ecology" are connected through the ecotherapy episode in ways that the linear interface doesn't surface

Technical approach:

  • D3.js force simulation or Cytoscape.js; both integrate well with Vue
  • Build-time: generate a concept-graph.json from the content frontmatter
  • No database needed — static JSON is more than sufficient at this scale
  • Embed as a Nuxt Island so it's fully isolated and lazy-loaded

A thought on naming: "Concept map" is academic. We might want a better name — something that reflects the project's own vocabulary. The Diagram? The Field? The Map of Correspondences?


Listening Room Mode

The current site treats audio as content on a page. The Listening Room would invert this: audio becomes the environment and everything else becomes peripheral.

What it would look like:

  • Minimal UI — almost nothing
  • A full-screen or near-full-screen visual environment (could be the CSS space animation, slowed and darkened)
  • The episode title and participants visible but small
  • A waveform or subtle visualiser
  • No navigation visible unless you move the mouse

What it would feel like:

  • Like putting on headphones and settling in
  • Like the website disappears and only the conversation remains
  • Not a podcast player (you wouldn't use this to discover episodes) — you'd use it once you'd already decided what to listen to

Temporal Mode

The corpus spans several years. Ideas introduced in early episodes appear transformed in later ones. The temporal mode would make this development visible.

What it would look like:

  • A timeline running left to right (or top to bottom on mobile)
  • Content pieces positioned by date, possibly clustered by concept
  • You could filter by tag and watch a concept develop over time
  • Episodes and blog posts and interviews all on the same timeline — cross-type navigation
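The core of the layout above is just a mapping from publication dates to positions along an axis. A minimal sketch, assuming each content piece carries an ISO date string in its frontmatter (the field name `date` is an assumption):

```javascript
// Map each dated piece to a normalised 0..1 position on the timeline axis.
// The rendering layer (horizontal on desktop, vertical on mobile) then
// scales x to pixels however it likes.
function timelinePositions(items) {
  const times = items.map((i) => new Date(i.date).getTime());
  const min = Math.min(...times);
  const span = Math.max(...times) - min || 1; // avoid divide-by-zero for a single item
  return items.map((item, idx) => ({ ...item, x: (times[idx] - min) / span }));
}
```

Because the positions are normalised, episodes, blog posts, and interviews can share one axis regardless of how the timeline is eventually drawn.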

Serendipitous / Aleatory Mode

This mode does not try to be useful. It tries to be surprising.

What it would look like:

  • A single button, or perhaps just a key press
  • Takes you to a random piece of content, or a random passage within a transcript
  • Could also juxtapose two random things — a paragraph from one transcript alongside a paragraph from another, with no explicit connection, inviting the reader to find one
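The juxtaposition idea is small enough to sketch. This assumes content has already been split into paragraphs (which Step 8's enrichment would provide); the randomness source is injectable so the behaviour can be tested, and all field names here are hypothetical:

```javascript
// Pick one paragraph each from two different randomly chosen pieces,
// with no explicit connection between them. In the browser, `rand`
// defaults to Math.random; pieces must contain at least two entries.
function juxtapose(pieces, rand = Math.random) {
  const pick = (arr) => arr[Math.floor(rand() * arr.length)];
  const first = pick(pieces);
  const others = pieces.filter((p) => p !== first);
  const second = pick(others);
  return [
    { from: first.slug, text: pick(first.paragraphs) },
    { from: second.slug, text: pick(second.paragraphs) },
  ];
}
```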

Note on the project's own vocabulary: Schizoanalysis has a critique of the well-organised, clearly-mapped territory of meaning. The aleatory mode would embody something like that critique formally — refusing the organised navigation in favour of drift, encounter, the unexpected connection. This is not decoration; it's an argument about how knowledge forms.


Dialogic Mode

The most speculative mode. A conversational interface into the corpus.

What it would look like:

  • A simple text prompt
  • You ask a question — "what does the project say about the body?" — and the interface surfaces relevant passages from transcripts, blog posts, research
  • Not a chatbot generating new text — more like a semantic search over the existing corpus
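To make the "surface, don't generate" distinction concrete, here is the retrieval step in its most naive possible form: ranking existing passages by term overlap with the question and returning them verbatim. A real implementation would likely use embeddings or a proper search library; this only illustrates the shape, and every name in it is hypothetical:

```javascript
// Rank existing corpus passages against a question by simple word overlap
// and return the top matches unchanged. Nothing is generated: the output
// is always verbatim source material. (Naive on purpose: no stopword
// handling, no stemming.)
function surfacePassages(question, passages, limit = 3) {
  const terms = new Set(question.toLowerCase().match(/[a-z]+/g) ?? []);
  return passages
    .map((p) => {
      const words = p.text.toLowerCase().match(/[a-z]+/g) ?? [];
      const score = words.filter((w) => terms.has(w)).length;
      return { ...p, score };
    })
    .filter((p) => p.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, limit);
}
```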

A caution: There is a political and intellectual question here about AI-mediated access to this corpus. The project is explicitly critical of certain tendencies in AI discourse. Any dialogic mode should be designed carefully — foregrounding the source material, not summarising or synthesising it. The interface should surface the corpus, not replace it. This needs a collective decision from the project before being built.


Practical Next Steps

Step 1 — ✅ Build the linear mode (complete)

Step 2 — ✅ Create the content schema (complete)

Step 3 — Mode switching infrastructure (low effort, high leverage)

  • useModeState composable storing currentMode in state and localStorage
  • A minimal mode switcher in the nav
  • URL parameter encoding (?mode=spatial)
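A framework-agnostic sketch of the state logic behind useModeState, assuming the priority order is URL parameter, then persisted choice, then default. In the actual composable, `mode` would be a Vue ref and the storage would be window.localStorage; storage is injected here so the logic runs anywhere, and the mode names are placeholders:

```javascript
// Known mode identifiers (placeholder names, matching the ?mode= param).
const MODES = ['linear', 'spatial', 'listening', 'temporal', 'aleatory', 'dialogic', 'void'];

function createModeState(storage, initialUrlParam = null) {
  // Priority: explicit ?mode= param, then persisted choice, then default.
  let mode =
    (MODES.includes(initialUrlParam) && initialUrlParam) ||
    (MODES.includes(storage.getItem('mode')) && storage.getItem('mode')) ||
    'linear';

  return {
    get current() { return mode; },
    switchTo(next) {
      if (!MODES.includes(next)) return false; // ignore unknown modes
      mode = next;
      storage.setItem('mode', next);           // persist across visits
      return true;
    },
  };
}
```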

Step 4 — Build concept-graph generation script (medium effort)

  • A Node.js script that reads all markdown frontmatter and produces public/concept-graph.json
  • Structure: { nodes: [{id, label, count}], edges: [{source, target, weight}] }
  • The data layer that both Spatial Mode and Dialogic Mode would build on
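The heart of that script is a tag co-occurrence pass. A sketch, assuming the frontmatter has already been parsed into objects with a `slug` and a `tags` array (the real script would read these from the markdown files with a frontmatter parser before calling this, then write the result to public/concept-graph.json):

```javascript
// Build { nodes, edges } from parsed frontmatter entries.
// Node count = number of pieces carrying the tag; edge weight = number
// of pieces where the two tags co-occur.
function buildConceptGraph(entries) {
  const counts = new Map();   // tag -> number of pieces using it
  const cooccur = new Map();  // "tagA|tagB" (sorted) -> shared-piece count

  for (const { tags } of entries) {
    const unique = [...new Set(tags)];
    for (const tag of unique) {
      counts.set(tag, (counts.get(tag) ?? 0) + 1);
    }
    // Every unordered pair of tags on the same piece strengthens an edge.
    for (let i = 0; i < unique.length; i++) {
      for (let j = i + 1; j < unique.length; j++) {
        const key = [unique[i], unique[j]].sort().join('|');
        cooccur.set(key, (cooccur.get(key) ?? 0) + 1);
      }
    }
  }

  return {
    nodes: [...counts].map(([id, count]) => ({ id, label: id, count })),
    edges: [...cooccur].map(([key, weight]) => {
      const [source, target] = key.split('|');
      return { source, target, weight };
    }),
  };
}
```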

Step 5 — Prototype the Spatial / Concept Map mode (medium effort)

  • D3 or Cytoscape as a Nuxt Island on a new /explore route
  • Click node → navigate to /blog/tag/[tag] (already exists)
  • The first non-linear mode — proves the whole mode-switching concept

Step 6 — Listening Room (low-medium effort)

  • New layout fullscreening the existing AudioPlayer
  • Uses usePlayer (already exists)
  • Transition: entering the Listening Room should feel like a threshold crossing

Step 7 — Search / Void mode (low-medium effort)

  • Build-time search index (Fuse.js or FlexSearch)
  • Index: title, description, tags, first 500 words of body content
  • New route: /search
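The index records themselves are simple to produce at build time. A sketch of the extraction rule described above; the field names are assumptions about the content schema, and Fuse.js or FlexSearch would consume the resulting records at runtime:

```javascript
// Reduce one parsed content piece to a search-index record: title,
// description, tags, and the first 500 words of the body.
function toSearchRecord({ slug, title, description = '', tags = [], body = '' }) {
  const first500 = body.split(/\s+/).filter(Boolean).slice(0, 500).join(' ');
  return { slug, title, description, tags, excerpt: first500 };
}
```

The real build step would map this over every piece of content and JSON.stringify the array into a static file the /search route can fetch.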

Step 8 — Content enrichment (ongoing)

  • Timestamp markers in transcripts
  • Paragraph-level IDs for key passages
  • Light manual concept annotation
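Once timestamp markers exist, splitting a transcript into addressable sub-units is mechanical. A sketch, assuming inline markers of the form [hh:mm:ss] — the actual marker convention is still to be decided:

```javascript
// Split a transcript on [hh:mm:ss] markers into addressable sub-units.
function splitByTimestamps(transcript) {
  // With a capturing group, split() alternates:
  // [preamble, time1, text1, time2, text2, ...]
  const parts = transcript.split(/\[(\d{2}:\d{2}:\d{2})\]/);
  const units = [];
  for (let i = 1; i < parts.length; i += 2) {
    units.push({ at: parts[i], text: parts[i + 1].trim() });
  }
  return units;
}
```

Each unit's `at` value doubles as a stable anchor, which is what the Spatial and Dialogic modes would link into.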

Step 9 — Temporal mode (future)

Step 10 — Dialogic mode (future, requires collective discussion)


Open Questions for the Project

  • Naming: Should modes have names that reflect the project's own vocabulary rather than technical descriptions? The Map, The Room, The Oracle, The Archive?
  • Discoverability: Should modes be discoverable by exploration (hidden until you find them), or explicitly offered?
  • Blending: Can modes layer? (A temporal filter applied within the spatial map?)
  • Collaboration: Could the site eventually accept contributions from collaborators without requiring git access?
  • Audience: The non-linear modes probably have different audiences. Worth discussing before building.

A Note on the Project's Own Thinking

The modes are not decorative. They are arguments.

The linear mode argues that knowledge is accessed through reading and listening, following a structured path. The spatial mode argues that knowledge is also relational. The serendipitous mode argues that encounter and drift are also valid modes of knowing. The temporal mode argues that ideas have histories.

Each mode enacts something the project is also thinking about theoretically — the relationship between different registers of knowledge, the critique of purely linear/progressive accounts of thought, the value of ambient and embodied forms of encounter.

The interface can be part of the argument.


This platform sits between a research repository, an art piece, and a philosophical tool. The linear mode is the foundation — stable, useful, shareable. The non-linear modes are not improvements on it but different ways of being with the same material.