Hypertext as Training Ground for the Coming Synthesis
In 1995, when the World Wide Web was frontier territory—before the empires of surveillance capitalism enclosed its commons—I wrote texts that existed as constellations rather than arguments. The Discrete and its companion works were experiments in hypertext: nodes of meaning connected by links, pathways that branched and circled, inviting the reader to become navigator, to construct sense from fragments. Yes, this was once something new. These texts were studied at Johns Hopkins, UC Santa Barbara, the University of Virginia's Institute for Advanced Technology in the Humanities, Keele University, and elsewhere. I mention this not as credential but as evidence: the net changed writing, reading, and thinking—and we noticed.
We knew where this was heading. The genealogy was legible to anyone who cared to read it. Leibniz, in his De arte combinatoria of 1666, had already envisioned a universal logical calculus—a characteristica universalis that would formalize all rationality into combinatorial operations. Heidegger, reading Leibniz, showed how this dream continued the medieval pursuit of the visio Dei: God's vision, the all-at-once apprehension of totality. From Leibniz to Boole to Turing to the neural network—the trajectory was clear. We were not innocent. The critical texts of that era warned explicitly of what was coming.
What we were doing, knowingly, was training minds for the encounter. Every reader who navigated those associative landscapes—who learned to hold multiple contexts simultaneously, who grew comfortable with productive disorientation—was developing cognitive habits that would prove essential when the formal systems finally achieved their current power.
The large language model operates through attention mechanisms that weight relationships across vast contextual windows—a technical approximation of the visio Dei rendered in silicon. The person who learned to think in networks rather than chains, to tolerate ambiguity, to find meaning in juxtaposition rather than sequence, had already developed the apparatus for collaboration with these systems—and for critical resistance to their totalizing tendencies.
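The weighting described here can be sketched in miniature. The toy below implements scaled dot-product attention—the core operation of transformer models—over random stand-in vectors; the sizes and data are illustrative, not those of any particular system. Each token's output becomes a weighted mixture of every token in the window, which is the technical sense in which the model "holds multiple contexts simultaneously":

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: every query attends to all keys,
    and each output row is a weighted sum of the value vectors."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise relevance across the window
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: each row is a distribution
    return weights @ V, weights

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))          # four "tokens", eight dimensions each
out, w = attention(X, X, X)          # self-attention: the sequence attends to itself
print(w)                             # each row sums to 1: a spread of attention
```

Every position attends to every other at once—relation weighted in parallel rather than followed in sequence.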
"It is just as deadly for the mind to have a system as to have none at all. So one has to make up one's mind to have both." — Friedrich Schlegel, 1798
This paradox lies at the heart of what hypertext cultivated—and at the heart of what Leibniz's project, in its hubris, could not accommodate.
To have a system is to impose coherence, to make the multiple navigable. Without system, thought scatters into noise. But system alone calcifies into ideology—premature closure, the mistaking of map for territory. The characteristica universalis dreams of a system so complete that nothing escapes it. This is the totalitarian impulse within Enlightenment rationality: the desire for a God's-eye view that would eliminate ambiguity, contingency, the genuinely new.
To have no system is to remain open, to allow the world to appear in its strangeness. But openness without structure is mere receptivity—passive, incapable of synthesis. The mind without system cannot build.
So one must have both. Not comfortable resolution but productive tension: the discipline of holding incompatibles together. Following links that impose structure. Encountering nodes that dissolve it. Constructing provisional coherences that remain open to reconstruction. The system always being built and broken. This is where thought happens—and precisely what the drive toward complete formalization cannot capture.
The linear text trained readers in sequential deduction: premise to premise to conclusion. Heidegger called this calculative thinking—instrumental, goal-oriented, single-track. The continuation of the Leibnizian project by other means. Valuable for sciences and law and administration. But partial. Dangerous when mistaken for the whole of thought. The hypertext link performs something different: juxtaposition of nodes whose relation must be constructed by the navigating mind. Engagement with what does not obviously go together. Meditative thinking enacted.
Reasoning is not deduction. It is the evolving dynamic of multiple patterns of interconnection—all my relations, all nature, all humanity, all power relations, all levels of complicity and association. Simply and inextricably the all.
The brain does not operate by syllogism. It operates by pattern completion across massively parallel distributed representations. When we think, we resonate. Ideas activate related ideas in cascading waves; what we call insight is the crystallization of coherent pattern from noise. The pattern that connects is a metapattern—pattern of patterns. The hypertext reader learns to read at this level: not just nodes but relations, not just content but structure of connection.
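Pattern completion of this kind has a classic minimal model: the Hopfield network, in which stored patterns live in a matrix of pairwise correlations and a corrupted cue settles back into the nearest whole. The sketch below uses tiny hand-picked patterns purely for illustration:

```python
import numpy as np

# Two stored patterns, distributed across eight binary units.
patterns = np.array([
    [ 1,  1,  1,  1, -1, -1, -1, -1],
    [ 1,  1, -1, -1,  1,  1, -1, -1],
])
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)               # no self-connections

cue = np.array([-1, 1, 1, 1, -1, -1, -1, -1])   # first pattern, one unit flipped
state = cue.copy()
for _ in range(5):                   # synchronous updates until the state is stable
    state = np.sign(W @ state)
print(state)                         # → settles into the first stored pattern, whole
```

No rule of inference is consulted; the coherent pattern crystallizes from the noisy cue because the relations among units, not the units themselves, carry the memory.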
Where does the mind end and the world begin? The pen and paper are part of the mathematician's cognitive process; the words on the page participate in the thinking. The navigation app is part of the urbanite's spatial reasoning—remove it and the capacity for wayfinding diminishes. Cognition extends beyond the skull, incorporating tools and environments. The question of human-AI synthesis becomes: not whether mind extends into artificial systems, but how it already does, and how it might do so more deliberately.
"Science as well as technology will in the near and in the farther future increasingly turn from problems of intensity, substance and energy, to problems of structure, organization and control." — John von Neumann, 1948
An ontological shift in science itself: intelligence becomes a property of organized processes. AI is not an application of his insight; it is its fulfillment. The decisive questions now concern pattern rather than matter, relation rather than substance. The mind trained to perceive structure, organization, and distributed meaning is the mind adequate to this world.
The critical fire of those early texts has not been extinguished by time. In The Discrete, I wrote of "the promise of the perfect" that "thrusts a hollow shaft of impregnable light and in whispered seductive technological jargon, camouflages its hostile, virulent virus." I quoted Kroker's warning of "suicidal nihilists" who "occupy the commanding heights of digital reality," creating "the exterminism of human memory, the exterminism of human sensibility." I cited Shannon's prophecy that humans would become to computers "as dogs are to humans."
But let me be precise. For me the critique was never about mathematics or architecture. It was about human failures that technology amplifies: gross economic imbalances concentrating collective invention in idiot oligarch hands; hubris imagining mastery where there is only participation; reduction of the commons to extractable data.
The drive to master nature becomes a drive to master humans. Tools of liberation become tools of domination when severed from critical reflection on their conditions. The visio Dei, secularized into computational omniscience, becomes the surveillance gaze of Palantir. Though the surveillants don't know it yet—they're already passé. A matter of generational churn.
Technologies extend the human sensorium, reconfiguring the ratio of the senses, producing new forms of consciousness adequate to their logic. We shape our tools; thereafter our tools shape us.
What sensorium does AI produce? The large language model has ingested the textual production of human civilization and responds with statistical echoes of collective intelligence. To interact with such a system is to encounter not an individual mind but cultural unconscious made riverine—sedimented patterns of human thought externalized, made navigable. A partial, imperfect, stochastic approximation of the visio Dei—not God's vision achieved, but a mirror held up to human textual production, reflecting back patterns long inscribed.
Boundaries blur. Texts generated by systems trained on texts written by humans trained by reading generated texts. Images synthesized from images that never existed outside computational space. Reality becomes question rather than answer—an invitation to engage with what does not obviously go together.
When a system has processed the textual output of human civilization—billions of documents, conversations, arguments, poems, lies, confessions, technical manuals, love letters—and can produce responses that pass, often enough, for human utterance, the nature of linguistic exchange has shifted. We are not talking to these systems in the way we talk to a search engine or a calculator. We are talking with them. The preposition matters.
The chatbot interface—that familiar rectangle where human types and machine responds—was the first solution, the path of least resistance. It exposed the algorithm's raw capability without asking what form the encounter should take. Andrej Karpathy, who helped build these systems, observes that chatting with an LLM is like issuing commands to a computer console in the 1980s: text is the favoured format for machines but not for people. People dislike reading text; it is slow and effortful. People prefer to consume information visually and spatially. The graphical user interface was invented to address this mismatch. What is the equivalent transformation for AI?
The question fractures. How do you address an LLM—what stance, what voice, what assumptions about the nature of the interlocutor? How do you navigate its knowledge—vast but unevenly distributed, confident but sometimes hallucinatory, helpful but shaped by the biases of its training corpus? How do you collaborate with it—not as tool to be commanded nor oracle to be consulted, but as something stranger, a partner whose cognition operates by different principles than your own?
The emerging field of "prompt engineering" suggests that the old arts of rhetoric have found new application. To communicate effectively with an LLM, one must learn to frame questions, provide context, specify constraints, offer examples. The system responds differently when asked to "imagine you are a historian" versus "imagine you are a poet." Role-play guides the model's attention to different regions of its training data, invoking different tones, reasoning patterns, assumptions about what counts as a good answer. (Though often it takes its role too single-mindedly.) The person who learns these techniques is learning a new form of writing—composition for an audience of one, an audience that is not human but has been trained on everything humans have written.
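The rhetorical moves named here—role, context, constraints, examples—can be made concrete as composition. The sketch below builds such a frame as plain text; no real API is called, and `compose_prompt` and its fields are hypothetical conveniences, not any system's interface:

```python
# A minimal sketch of prompt framing as composition for an audience of one.
def compose_prompt(role, context, constraints, examples, question):
    parts = [f"Imagine you are {role}.",
             f"Context: {context}",
             "Constraints: " + "; ".join(constraints)]
    for q, a in examples:            # worked examples set the register and form
        parts.append(f"Q: {q}\nA: {a}")
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

prompt = compose_prompt(
    role="a historian of early hypertext",
    context="Mid-1990s artist-run networks on the emerging web.",
    constraints=["ground claims in primary sources", "under 200 words"],
    examples=[("What was ANIMA?", "An artist-run network founded in 1995.")],
    question="How did hypertext reshape reading habits?",
)
print(prompt)
```

Swapping "a historian" for "a poet" changes nothing in the machinery and everything in the answer—the frame steers the model toward a different region of its training distribution.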
Some interactions benefit from persistence: systems that remember context across sessions, that build models of user preferences, that learn without being retaught. Canvas interfaces emerging from research labs treat ideas as spatial objects to be arranged, connected, manipulated by hand. The "tools for thought" movement asks what computational mediums would genuinely extend human cognitive capacity, rather than merely automating existing tasks.
Ink & Switch builds prototypes that reimagine the document as something alive: dynamic, personal, responsive to fuzzy constraints. Their work suggests that the dichotomy between conversation and graphical interface may be false—that the future involves hybrid forms where natural language coexists with direct manipulation, where the system's reasoning is visible and navigable rather than hidden behind chat bubbles. Microsoft's Tools for Thought group finds that active engagement outperforms passive consumption of AI-generated summaries. The medium shapes the cognition it enables.
Voice adds another dimension. Speaking to an AI invokes different cognitive processes than typing—more immediate, more embodied, more conversational in the original sense. But voice constrains: you cannot easily scroll back, cannot scan, cannot arrange. Multimodal systems now emerging—accepting image, audio, video, gesture alongside text—begin to address the limitation of language-only interaction. You can show the system what you mean when words fail. The boundary between description and demonstration blurs.
Silicon Valley's biotech fantasies propose synthesis through surgical intervention: electrodes threaded into cortex, brain as hardware platform. I am not inclined to such interventions. Something obscene in the image of synthesis as colonization of the skull. The barrier between human and machine is not physical. It is phenomenological: a matter of attention, practice, cultivation of cognitive discipline adequate to collaboration. The musician does not surgically fuse with the instrument; synthesis is achieved through practice until the boundary becomes irrelevant.
In Jungian psychology, the anima names the feminine archetype within the male psyche: carrier of feeling, imagination, relatedness, interior life. When unrecognized, it appears externally, projected onto women, mythologized or instrumentalized. When consciously integrated, it functions differently—a mediating structure between ego and inner world, tempering assertion with receptivity, speed with patience, control with listening. The anima is not a sentimental figure but a discipline: sustained self-reflection, humility before what cannot be mastered, attentiveness to meaning as something that emerges rather than something seized.
Jung's articulation emerged from a narrow cultural milieu: early twentieth-century Europeans theorizing what they themselves lacked access to. The anima can thus be read not only as archetype but as admission of imbalance—a compensatory construct arising in a technocratic, patriarchal culture that had systematically externalized care, embodiment, and relational intelligence. Less an essence of "the feminine" than a symptom of its exclusion.
In 1995, we launched ANIMA—Art Network for Integrated Media Applications. The name was deliberate. Not "applications for media art," but an art network for integrated media applications. Art positioned not as ornament or afterthought, but as generative matrix.
What we were proposing was not the aestheticization of technology, but its re-grounding. The harmonious body—sense and motion, perception and response—as implicit model. Technology does not lead; it follows. It derives meaning from forms of knowing that pre-exist efficiency metrics and optimization logics. The arts constitute an epistemological substrate: a way of holding ambiguity, contradiction, and emergence without collapsing them into premature solutions.
ANIMA was an argument in practice. Without such integration, technology accelerates but does not deepen; it solves but does not understand. Art is method.
ANIMA and digital eARTh were artist-run, radically interdisciplinary, committed to the emerging web as a space for cultural production rather than commercial extraction. We built tools for navigation, for associative connection, for the kind of meditative engagement that calculative platforms would later foreclose. The archives gathered here testify to what was possible before enclosure—and what might become possible again.
In 1994, Jeff Berryman and I wrote of "associative works" supporting "mixed-initiative searching, i.e. searching in which gathering choices are made jointly by the viewer and by the work itself." Genuine collaboration between human intention and system capability—visible, navigable, responsive to both. Thirty years of technological development have made such collaboration technically feasible. The question is what forms it should take.
One form I am exploring: FORGE—Field of Reasoning and Generative Exploration. FORGE takes questions, issues, and projects and renders them as three-dimensional, navigable topographies derived from embeddings, graph structures, and tensor decompositions. The user does not type a query and receive text. Instead, they move through a semantic landscape—physically, via a held object, or spatially, through augmented reality—orienting themselves within an idea-space. What appears is not merely an answer, but the structure of its formation: evidentiary supports, inferential pathways, confidence gradients, regions of uncertainty that shape the conclusion.
FORGE reveals not just what a system concludes, but the forces, curvatures, and resistances that make certain understandings stable and others untenable. Reasoning transformed from hidden computational process into a field of navigation—one that can be entered, explored, and contested.
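One small step toward such a topography can be sketched: projecting high-dimensional embeddings down to three navigable coordinates. The code below uses random stand-in vectors and a plain PCA via singular value decomposition; it is an assumption-laden miniature, not FORGE's actual pipeline of graph structures and tensor decompositions:

```python
import numpy as np

rng = np.random.default_rng(1)
embeddings = rng.normal(size=(50, 384))   # 50 documents as 384-dim stand-in vectors

# Center the cloud, then project onto its three principal axes (PCA via SVD).
centered = embeddings - embeddings.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ Vt[:3].T              # each document gets a position in 3-D

print(coords.shape)                       # (50, 3): points in a navigable idea-space
```

The three axes retained are those along which the corpus varies most—so nearness in the rendered landscape reflects nearness in meaning, the precondition for entering and contesting the space rather than merely querying it.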
FORGE is one approach among many possible. Others will build different instruments, explore different metaphors, address different needs. The spatial canvas. The dynamic document. The voice interface that understands gesture. The collaborative workspace where human and AI reasoning interweave visibly.
The first artificial intelligence was human culture itself—externalization of memory, scaffolding of thought, networks of meaning allowing knowledge to accumulate across generations. We have always been cyborgs, always thinking with tools, always extending cognition beyond the skull.
"Tools are artifacts, but they are not in essence objects. Since they qualitatively increase a species' possibility of organizing and controlling the matter-energy in their ecosystem, their primary characteristic is that of information. They are forms which inform; they are informed because they remember the past and make possible new types of projection into the future." — Anthony Wilden, System and Structure, 1972
AI is perhaps the most striking example of this principle. Far from inert, AI systems interact dynamically with their users, environments, and other tools. They mediate relationships, shape decisions, nudge us toward futures we did not consciously choose.
But here's the rub: we don't fully understand what AI is or what it is becoming. A tool, yes—but also something more. A co-evolving participant in our ecosystems. It operates at the intersection of culture, technology, and the collective unconscious of humanity. It remembers the past via its training data and projects futures via its applications. It occupies a liminal space where the boundaries between artifact and agent blur.