Conduit. The generative AI revolution brought to an interface.
...an LLM-based text editor, with an intuitive and powerful graphical user interface. Employ an AI assistant to help you write without typing any prompts!
Edit documents as normal:
Run a pre-engineered prompt on a text node, without typing anything:
Combine, edit, rework, and improve the results of your prompts:
View your documents without the clutter of the editor (and avoid accidentally making changes when reviewing):
And that's just the beginning! Check out everything that's to come:
- We're open-source, which means no proprietary formats! (Conduit is built on markdown for storing and editing text.)
- A completely customisable and modular system to configure generative AI agents for different media.
- Run pre-engineered prompts (on a per-document, per-node, or per-selection basis) in rich text files, such as:
- Summarise
- Expand
- Rewrite (in style of...)
- Create examples
- Create practice questions/answers
- Write code
- Translate
- Create todos (steps for accomplishing something)
- Implement received feedback
- Create/customise your own!
- Interaction modes:
- Edit: Add, update, and remove content with LLM-based text completion.
  - Review: Interact with your content and ask questions about it (to strengthen your understanding or test your knowledge), alongside a spelling and grammar checker.
- Present: View an uneditable version of the content you have produced, without an interface getting in the way.
- An intuitive interface that provides a smooth user experience:
- Movable, collapsible generated material.
- Drag and drop to reorder and organise content.
- File upload:
- Run prompts and ask questions about any (part of a) file.
- Consolidate or summarise multiple files.
- Automatically generate notes from, or summarise, a conversation with an LLM.
- Collaborate with other users, for example:
- Edit the same content.
- Use someone else's document as a base for your new document.
- Create automations and 'work forces' in which LLMs interact with each other to produce content, such as:
- Create different agents for different tasks, for example:
- A GPT 'researcher'
- A Claude 'content writer'
- An in-context-trained LLaMA as a 'reviewer'
- Run pre-engineered prompts on multimedia, such as:
- Summarise or transcribe audio.
- Describe what happens in a video.
- Create a diagram of what is written in text.
- Create a picture.
  - Do maths with LaTeX.
- Run code and call APIs.
- Multimedia functionality will grow alongside research and industry.
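Since Conduit stores documents as plain markdown, a pre-engineered prompt such as "Summarise" can in principle be filled from a single node of the document with no typing from the user. Below is a minimal illustrative sketch of that idea, assuming Conduit's actual prompt format and API are not shown here: the `PROMPTS` templates, `split_nodes`, and `build_prompt` are hypothetical names invented for this example.

```python
# Illustrative sketch only: Conduit's real prompt format is not published here.
# It shows how a pre-engineered prompt might be applied to one markdown node.

# Hypothetical pre-engineered prompt templates, keyed by feature name.
PROMPTS = {
    "summarise": "Summarise the following text:\n\n{text}",
    "expand": "Expand on the following text with more detail:\n\n{text}",
    "translate": "Translate the following text into {language}:\n\n{text}",
}

def split_nodes(markdown: str) -> list[str]:
    """Treat blank-line-separated markdown blocks as editable nodes."""
    return [block.strip() for block in markdown.split("\n\n") if block.strip()]

def build_prompt(name: str, node: str, **params) -> str:
    """Fill a pre-engineered template with a node's text; no typing required."""
    return PROMPTS[name].format(text=node, **params)

doc = (
    "# Notes\n\n"
    "Conduit stores documents as plain markdown.\n\n"
    "No proprietary formats."
)
nodes = split_nodes(doc)
prompt = build_prompt("summarise", nodes[1])
# `prompt` would then be sent to whichever LLM backend the agent is
# configured with, and the response inserted as a new, movable node.
```

The same per-node targeting would let a user run "Translate" on one paragraph while leaving the rest of the document untouched.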