Three Brains, One Workflow: Claude, ChatGPT, and Gemini on Stensyl

Most platforms make you commit to one LLM. Stensyl gives you Claude, ChatGPT, and Gemini in the same workflow, with one credit pool, not five tabs.
The Model Question Most Designers Don't Want to Answer Alone
Ask ten designers which large language model they use for client work, and you will hear ten different answers, often delivered with a shrug. Claude, ChatGPT, Gemini. Each has a passionate following. None has a clear lead. The honest truth is that the best model depends on the task, the tone, and sometimes the day. The dishonest version is what most platforms tell you: pick one, lock in.
Stensyl takes a different position. The platform gives you direct access to all three families inside the workflow you are already using, with the same credit pool and the same login. No extra subscription. No browser tab juggling. The picker scales with the plan, so the workflow scales with the work.
This is not a chat product bolted onto a creative tool. It is a deliberate design choice that runs through three surfaces: the Write studio, the Canvas LLM Chat node, and the Ray creative assistant. Each one solves a different problem in the design process, and together they cover the realistic spread of writing tasks a working designer faces in a week.
The Three Surfaces, and What Each Is Actually For
Write: Document Drafting With Tiered Model Access
The Write tab in the Desk section is where long-form drafting lives. Open it, choose a document type (planning statement, project brief, design rationale, client proposal, and more), and pick the model that fits the tone. The full picker covers six models:
- Claude Sonnet 4.6: fast, precise, strong on tone control. Pro tier and above.
- Claude Opus 4.7: the richest detail, ideal for high-stakes long-form. Pro tier and above.
- Gemini Pro: balanced speed and depth, good for analytical writing. Starter tier and above.
- GPT-5.5: strong reasoning, best for structured argument. Starter tier and above.
- Gemini Flash: quickest drafts, lowest cost per output. All plans.
- GPT-5.4 mini: fast general writing, lean on credits. All plans.
Write supports image and PDF attachments natively for Claude and Gemini, which means a design brief PDF can go straight into the model as context, not as something you copy-paste in fragments. If a draft does not land on the first attempt, the "Switch and re-draft" control swaps models without losing the document setup. You can read the same brief through three different model voices in under a minute.
Canvas LLM Chat Node: Chat as a Workflow Component
The Canvas, found in the Create section, is Stensyl's node-based workflow editor. Drop an LLM Chat node onto the canvas and you get the same model picker as Write, with the same tier access rules, but in an entirely different posture. This is open-ended chat with a key difference: the response can be piped directly into other nodes.
That sounds technical until you see it run. Connect a Project Brief node into the LLM Chat node's input. Ask the model to expand the brief into a detailed image-generation prompt. Pipe that response into an Image Generate node. The visual output is now traceable back to the brief, with the prompt-engineering step on the canvas where you can tweak it, fork it, or compare model outputs side by side.
The same applies to video. A scene description refined in an LLM Chat node can pipe into a Video node, with the prompt logic visible to whoever is reviewing the work. The chat is not a separate conversation. It is a workflow component.
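The piping described above can be sketched as a tiny dependency graph. Everything here is illustrative: the `Node` class, the lambdas, and the node names are hypothetical stand-ins, not Stensyl's actual API. The point is the shape: each node's output is computed from its upstream inputs, so the prompt that produced an image stays recoverable from the graph.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Node:
    """Minimal stand-in for a canvas node: a name, a transform, and upstream inputs.
    Hypothetical sketch only -- not the real Stensyl node interface."""
    name: str
    run: Callable
    inputs: List["Node"] = field(default_factory=list)

    def output(self) -> str:
        # Pull results from upstream nodes, then apply this node's transform.
        upstream = [n.output() for n in self.inputs]
        return self.run(*upstream)

# Hypothetical pipeline: Project Brief -> LLM Chat (prompt expansion) -> Image Generate.
brief = Node("Project Brief", lambda: "warm timber interior with soft afternoon light")
chat = Node(
    "LLM Chat",
    lambda b: f"Photorealistic render of a {b}, 35mm lens, golden hour, shallow depth of field",
    inputs=[brief],
)
image = Node("Image Generate", lambda p: f"<image generated from prompt: {p}>", inputs=[chat])

# Traceability: the final output, and every intermediate prompt, is reachable from the graph.
print(image.output())
print(image.inputs[0].output())  # the exact prompt the image was generated from
```

Because the edges are explicit, "which prompt produced which image" is a graph traversal, not an archaeology dig through browser history.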
Ray: The Decision-Making Layer
Ray sits in the Desk section and serves a different role: helping you decide which generation model to use for a given task. Ask Ray "what should I use to render this exterior elevation in late afternoon light?" and it will recommend a model from the platform's image library, often with a reasoning trace explaining the choice. Ray is built for speed of decision, so under the hood it runs on Anthropic's Haiku tier rather than Sonnet or Opus, optimised for fast, focused responses.
Ray is not where you write your client proposal. Ray is where you stop spending five minutes wondering whether Nano Banana Pro or Flux 2 Pro is the right call, and just ask.
Three surfaces, three jobs. Write for documents. Canvas LLM Chat for prompt engineering and chained workflows. Ray for picking the right tool for the next move.
When to Reach for Which Model
The temptation, looking at six options, is to default to the one you trust and never switch. That is leaving most of the value on the table. Here is how the models break down across the writing tasks designers actually do.
Claude (Sonnet or Opus)
Claude is your tone-led model. When the writing has to feel considered, when it has to read like it came from a person with aesthetic sensibility, Claude is the strongest of the three. Brand narratives, spatial concept statements, design rationale documents, exhibition proposals, anything where the prose carries the work as much as the content does. Sonnet is fine for most of this. Reach for Opus when the document is going to a planning committee, a competition jury, or a client whose budget rounds up.
GPT (5.4 mini or 5.5)
GPT is your structure-led model. Specification sheets, client briefs broken into sections, feature lists, phased project schedules, numbered acceptance criteria. GPT-5.5 in particular handles structured reasoning with less prompting than the other two: tell it the shape you want, and it will hold the shape. The 5.4 mini variant is the right call for cheaper, repeatable work like procurement language or contractor-facing scopes.
Gemini (Flash or Pro)
Gemini is your analysis-led model. When the task requires synthesising multiple sources, comparing material options, cross-referencing competitor analysis, or building a comparison table from scattered notes, Gemini's comparative reasoning has a real edge. Flash for low-stakes synthesis where speed matters. Pro when the analysis informs a real decision.
| Task type | Model recommendation | Why |
|---|---|---|
| Concept statement, brand narrative, design rationale | Claude Opus or Sonnet | Tone control, considered prose |
| Client brief, spec sheet, structured project doc | GPT-5.5 or 5.4 mini | Holds structural shape with less prompting |
| Material comparison, trend synthesis, competitor analysis | Gemini Pro or Flash | Comparative reasoning, multi-source handling |
| Image or video prompt engineering | Claude Sonnet via LLM Chat node | Translates plain language into model-ready prompts |
| Picking a generation model for a task | Ray | Fast, decision-focused, knows the model library |
The Workflow That Browser Tabs Cannot Match
The argument for keeping LLM access inside the design platform is not just convenience, although the convenience is real. The argument is about traceability.
Consider how prompt engineering works without Stensyl. You open Claude in one tab to refine a visual idea into a generation prompt. You copy the prompt across to Midjourney or Runway in another tab. The image generates. You decide it is not quite right. Now the question is: was it the prompt, the model, or the seed? Going back means switching tabs, locating the original conversation, editing the prompt, and pasting it forward again. Repeat that round trip a few times per session and the creative loop fragments.
On Stensyl, the LLM Chat node and the Image Generate node sit on the same canvas, connected by a visible edge. Edit the prompt in chat, regenerate, and the image node updates downstream. The decision history stays in the canvas. A reviewer can see exactly which model produced which prompt, and which prompt produced which image. When a client asks "why did you go with this look?" three weeks later, the answer is sitting in the workflow, not in a closed browser tab.
The same logic applies to written documents that inform visual work. A design rationale drafted in Write can sit alongside the moodboard and concept renders it describes. The narrative and the imagery share the same project context, the same credit account, and the same review thread.
Multi-model access is the headline. Workflow integration is what makes it stick. The two together are the actual differentiator.
Three Workflows To Try Today
The fastest way to develop a feel for the three models is to put them on the same task and compare. These three short workflows each take under ten minutes and produce something usable at the end.
Workflow 1: Three Voices on the Same Brief
Open the Write tab. Pick any document type that fits a current project. Draft the document with Claude Sonnet first. Use the "Switch and re-draft" control to regenerate with GPT-5.5, then again with Gemini Pro. Read the three side by side. The differences in tone, structure, and emphasis tell you more about each model in five minutes than any benchmark will.
Workflow 2: Chat-to-Image on the Canvas
Open Canvas. Drop an LLM Chat node and an Image Generate node onto the workspace. Connect them. In the chat node, describe a visual concept in two sentences of plain language. Ask the model to expand it into a detailed image prompt with lighting, camera, material, and composition specified. Run the chat. The output flows into the Image Generate node. Run that. The image appears.
Now the experiment: change the chat model from Claude Sonnet to GPT-5.5 and rerun. The prompt will differ, often substantially. Generate the image again. You now have two prompts and two images, side by side, with the model choice as the only variable.
Workflow 3: Ray as the Starting Point
If you are unsure which generation model fits a task, open Ray and describe the task in plain language. "I am rendering an interior with a south-facing window in late afternoon. Material focus is soft fabric and warm timber." Ray will recommend a specific model from the Stensyl library, often with a sentence on why. Take the recommendation into Generate or Canvas and start there. The decision time goes from five minutes of indecision to thirty seconds of guidance.
What This Costs, and Where to Start
Every LLM call on Stensyl draws from the same credit pool that powers image, video, and 3D generation. There is no separate subscription, and switching models mid-session does not introduce new billing complexity. The credit cost varies by model: faster models like Gemini Flash and GPT-5.4 mini are the cheapest per call, while Opus-tier and Pro-tier models cost more in line with their compute.
The platform tiers each unlock progressively more of the LLM picker:
- Lite (£10 a month): 1,000 credits. Writing models: Gemini Flash and GPT-5.4 mini. Suits a designer trying the platform on personal work.
- Starter (£22 a month): 2,500 credits. Adds Gemini Pro and GPT-5.5 to the writing picker. The right entry point for active client work.
- Pro (£42 a month): 6,000 credits. Adds Claude Sonnet and Claude Opus, the strongest of the six for tone-led writing.
- Studio (£84 a month): 12,500 credits, the full picker, plus team-seat support for studios with shared workflows.
For a designer evaluating LLM-led writing as part of a workflow, Starter is the practical starting point: enough credit headroom to test on real work, with four of the six models available. Pro becomes the right move once Claude's tone control is non-negotiable for client-facing prose.
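To make the headroom comparison concrete, here is a budget sketch. The tier credit totals come from the plans listed above, but the per-draft credit costs are assumed placeholders for illustration only; the source states that faster models cost less per call without publishing exact figures, so treat these numbers as stand-ins.

```python
# Tier credit totals, from the plans listed above.
TIER_CREDITS = {"Lite": 1_000, "Starter": 2_500, "Pro": 6_000, "Studio": 12_500}

# ASSUMED per-draft credit costs -- illustrative placeholders, not published
# Stensyl pricing. Only the relative ordering (fast models cheapest) is
# stated in the article.
ASSUMED_COST_PER_DRAFT = {"Gemini Flash": 5, "GPT-5.5": 20, "Claude Opus": 60}

def drafts_per_month(tier: str, model: str) -> int:
    """How many drafts a tier's credits would cover, under the assumed costs."""
    return TIER_CREDITS[tier] // ASSUMED_COST_PER_DRAFT[model]

print(drafts_per_month("Starter", "GPT-5.5"))    # 125 drafts under these assumed costs
print(drafts_per_month("Lite", "Gemini Flash"))  # 200 drafts under these assumed costs
```

Even with placeholder costs, the shape of the decision is visible: the question is not which tier is cheapest, but which model mix your month of client work actually needs.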
The Real Headline
The point of putting Claude, GPT, and Gemini in the same platform is not to flatter any one model, and not to stage a benchmark race. It is to remove the choice as a barrier. You are not committing to one provider's view of how design writing should sound. You are picking the right voice for the task in front of you, switching when the task changes, and keeping the entire decision trail visible inside the workflow that produced the work.
The model question still matters. It just stops being a contract you sign once and have to live with. On Stensyl it is something you answer fresh each time, in the same window where the rest of the work is happening.
Try Stensyl for yourself
Image, video, 3D, chat, and document drafting. Every AI model, one studio. Plans from £10/month.
