A proof-of-concept that explores the intersection between automated LLM reasoning and human judgment. While language models are powerful tools for generating and transforming content, their outputs often form complex interdependencies that are difficult to manage and refine.
The Nature of the Problem
Working with LLMs often feels like orchestrating a cascade of thoughts. One prompt leads to another, each building upon previous outputs, forming a tree of interconnected reasoning. But what happens when we want to nudge this process in a different direction? How do we maintain insight and control?
The Approach
Prompt chains are visualized as a dynamic tree structure. Each node represents data generated by an LLM, connected to other nodes through prompts. The tree is interactive, allowing human intervention at any point while maintaining the integrity of the reasoning chain.
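To make this concrete, here is a minimal sketch of how such a node could be modeled in TypeScript. The type and field names (PromptNode, parentId, and so on) are illustrative assumptions, not the project's actual schema.

```typescript
// Hypothetical shape of a single node in the prompt tree.
// Field names are illustrative, not taken from the actual codebase.
interface PromptNode {
  id: string;
  parentId: string | null; // null for the root of the tree
  prompt: string;          // the prompt that produces this node's data
  output: string;          // data generated by the LLM for this node
  childIds: string[];      // nodes whose prompts build on this output
  edited: boolean;         // true once a human has overridden the generated output
}
```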
How It Works
- Prompts are organized in parent/child relationships
- The system executes these prompts, maintaining dependencies
- At any point, users can modify data at any node
- The system automatically regenerates dependent nodes, respecting the new context (see the sketch after this list)
It is like having a conversation where you can go back in time, change what was said, and see how it affects everything that follows - all while maintaining coherence.
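As a rough illustration, the regeneration step could look something like the sketch below. It reuses the PromptNode shape from the earlier sketch, and callLLM is a hypothetical stand-in for whatever model API the system actually uses; this is a sketch under those assumptions, not the project's implementation.

```typescript
// Illustrative sketch: when a node's output changes, refresh its descendants
// in dependency order so each child sees its parent's updated output.
async function regenerateDescendants(
  root: PromptNode,
  nodes: Map<string, PromptNode>,
  callLLM: (prompt: string, context: string) => Promise<string> // hypothetical LLM call
): Promise<void> {
  // Breadth-first walk so a parent is always refreshed before its children.
  const queue: string[] = [...root.childIds];
  while (queue.length > 0) {
    const id = queue.shift()!;
    const node = nodes.get(id);
    if (!node) continue;
    const parent = node.parentId ? nodes.get(node.parentId) : undefined;
    // Re-run the node's own prompt against the (possibly edited) parent output.
    node.output = await callLLM(node.prompt, parent?.output ?? "");
    queue.push(...node.childIds);
  }
}
```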
Tech stack
- React
- Next.js
- Tailwind CSS
- PostgreSQL
A closing thought
"Nigiri" is an attempt to make the invisible visible - to give structure to the ephemeral process of machine-assisted reasoning while keeping human judgment in the loop. Not because we don't trust the machine, but because the best outcomes often emerge from the dialogue between human insight and computational power.