
Design Layers

ID: SS-RP-2026-008 · Product: Visual Designer · Capability: 3D Rendering · Status: Intake · Created: 2026-03-25


Intended Outcome

A landscaping professional uploads a property photo and builds their design in three natural stages — cleanup, structure, planting. Each stage is an independent, toggleable layer. The professional can:

  • Iterate on any stage without losing the others. Change the plants without touching the patio. Change the patio without replanting. Go back to the cleaned photo and try a completely different direction.
  • Present the transformation as a story. Walk a client through "here's your yard today → here's what we remove → here's the new structure → here's the planted design" by toggling layers on and off.
  • Compare options instantly. Swap between three different planting schemes for the same base design. Toggle hardscape on/off to discuss structure separately from planting.
  • Keep full control. Override any AI-generated content with manual placements. Mix automated generation with hand-picked elements from the plant and material libraries.

The measure of success is not which technology powers it — it's whether a professional can produce and present a layered visual proposal in minutes, adjust it live during a client meeting, and feel confident that changing one thing won't break everything else.

The Three Stages

These represent the professional's mental model of landscape transformation, regardless of how they're implemented:

| Stage | What Changes | Example |
|-------|--------------|---------|
| 1. Cleanup | Remove unwanted elements from the property | Remove old shrubs, dead lawn patches, cracked concrete |
| 2. Base Design | Add structural elements — surfaces, borders, hardscape | Planter bed outlines along house, flagstone patio, gravel path |
| 3. Planting | Fill beds and areas with plants | Lavender border, ornamental grasses, shade trees |

Each stage builds on the previous one. Each is independently toggleable in the final design.

Layer Toggle — The Core Interaction

| What's Visible | What the Client Sees | When You'd Use It |
|----------------|----------------------|-------------------|
| All layers | Complete planted design | Final presentation |
| Stages 1 + 2 only | Hardscape without plants | Discuss structure, surfaces, layout |
| Stage 1 only | Cleaned property | Show what gets removed, discuss scope |
| Original only | Untouched photo | "Here's where we're starting" |
| Toggle Stage 3 | Flip planted ↔ unplanted | Help client decide on plant package |
| Swap Stage 3 variant | Different planting scheme, same base | A/B comparison of plant options |
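The toggle semantics above can be sketched as a tiny data model. This is illustrative only: `Layer`, `Design`, and the stage numbering are assumptions for the sketch, not the shipped schema.

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    name: str
    stage: int          # 1 = cleanup, 2 = base design, 3 = planting
    visible: bool = True

@dataclass
class Design:
    original: str                          # identifier for the untouched photo
    layers: list = field(default_factory=list)

    def toggle_stage(self, stage, on):
        """Show or hide every layer belonging to one stage."""
        for layer in self.layers:
            if layer.stage == stage:
                layer.visible = on

    def render_order(self):
        """Original photo renders first; visible stages stack on top in order."""
        names = [self.original]
        for layer in sorted(self.layers, key=lambda l: l.stage):
            if layer.visible:
                names.append(layer.name)
        return names
```

For example, the "hardscape without plants" view from the table is simply `toggle_stage(3, False)` before rendering; the original photo alone is all three stages toggled off.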


Success Metrics

The right solution will be measured against these criteria. No approach will score perfectly on all — the goal is to find the best trade-off.

| # | Metric | What It Measures | Why It Matters |
|---|--------|------------------|----------------|
| 1 | Output Quality | Visual realism and coherence of the final composite | Professionals sell projects with these images — they must look convincing |
| 2 | Iteration Speed | Time to modify one stage without affecting others | Live client meetings demand fast changes |
| 3 | User Control | Precision of control over individual elements | Professionals have specific material and plant preferences |
| 4 | Layer Independence | How cleanly stages separate when toggled | Toggling must feel instant and artifact-free |
| 5 | Predictability | Consistency of results across regenerations | Regenerating a stage shouldn't produce a wildly different result |
| 6 | Cost per Design | AI credits and compute per complete 3-stage design | Must be economical enough for daily use |
| 7 | Learning Curve | Time for a new user to produce their first layered design | Adoption depends on approachability |
| 8 | Presentation Value | How effective layer toggling is for client storytelling | This is the primary selling scenario |
| 9 | Technical Risk | Implementation complexity, API dependencies, failure modes | Lower risk means faster time to market |
| 10 | Build Effort | Engineering time to ship a usable MVP | Determines when professionals get this in their hands |


Solution Approaches

Eight distinct approaches to achieving the outcome. Each represents a different trade-off between automation, control, quality, and cost.


Approach A: Full-Scene AI Pipeline

How it works: Each stage generates a complete full-frame image via AI inpainting. Stage 1 takes the original photo and removes marked objects. Stage 2 takes Stage 1's output and adds hardscape. Stage 3 takes Stage 2's output and adds plants. Each output is stored as an independent image layer.
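The sequential dependency described above can be sketched in a few lines, with `inpaint` standing in for whatever AI generation call is used (the brief does not specify one):

```python
def run_pipeline(original_photo, stage_prompts, inpaint):
    """Run the stages sequentially: each stage's full-frame output becomes
    the next stage's input. `inpaint` is a placeholder for the AI call."""
    outputs = []
    current = original_photo
    for prompt in stage_prompts:
        current = inpaint(current, prompt)   # full-frame result
        outputs.append(current)              # stored as an independent image layer
    return outputs  # e.g. [cleaned, base_design, planted]
```

Note that `outputs[2]` is generated from the specific pixels of `outputs[1]`, which is exactly why regenerating Stage 2 invalidates Stage 3 in this approach.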

Strengths: Highest visual quality — AI generates photorealistic scenes where elements blend naturally with the photo's lighting, perspective, and context. Simple mental model for the user: "describe what you want, AI does the work."

Weaknesses: Each layer is a complete image, so they're large and memory-intensive. Regenerating Stage 2 can invalidate Stage 3 (since Stage 3 was built from Stage 2's specific output). Limited user control — you describe intent, AI decides placement. Most expensive per iteration.


Approach B: AI Delta Extraction

How it works: Same as Approach A for generation — AI produces full scenes at each stage. But then the system computes the pixel difference between consecutive stages, extracting only what changed as a transparent overlay. Layer 1 = diff(original, cleaned). Layer 2 = diff(cleaned, base design). Layer 3 = diff(base design, planted).
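The extraction step can be sketched as follows. This is a deliberately naive pure-Python version operating on rows of `(r, g, b)` tuples; the `threshold` parameter is an assumption, and a real implementation would use NumPy, per-channel thresholds, and edge feathering to reduce the halo artifacts noted below.

```python
def extract_delta(before, after, threshold=8):
    """Return a transparent RGBA overlay containing only the pixels that
    changed between two same-size RGB frames (lists of rows of (r, g, b))."""
    overlay = []
    for row_b, row_a in zip(before, after):
        out_row = []
        for (rb, gb, bb), (ra, ga, ba) in zip(row_b, row_a):
            changed = abs(ra - rb) + abs(ga - gb) + abs(ba - bb) > threshold
            # Keep the new pixel where it changed; fully transparent elsewhere.
            out_row.append((ra, ga, ba, 255) if changed else (0, 0, 0, 0))
        overlay.append(out_row)
    return overlay
```

Compositing `before` with this overlay reproduces `after` by construction, which is why toggling is "mathematically precise" in this approach.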

Strengths: True layer independence after extraction — each overlay contains only its stage's contribution. Toggling is mathematically precise. Best possible presentation value since layers composite perfectly by definition.

Weaknesses: Extraction quality depends on how cleanly changes isolate — edge artifacts, halos, and blending anomalies are common with pixel differencing. Costs more than A (same generation cost plus extraction processing). Regenerating one layer requires re-extracting all downstream deltas.


Approach C: Object-Based AI Generation

How it works: AI generates individual design elements as isolated cutout images — a planter bed, a group of shrubs, a patio surface. These are placed as discrete features (similar to existing Plants3d/DesignObjects3d layers), organized into stage groups. User can move, scale, rotate, and delete individual elements.

Strengths: Maximum flexibility and user control. Elements are independently editable. True layer independence — toggling a stage shows/hides its feature group. Closest to how professionals already work with drag-and-drop in the current Visual Designer.

Weaknesses: Individual elements may not blend as seamlessly with the photo as full-scene generation. More complex UX — managing many discrete objects vs. a single cohesive image. AI must produce convincing cutouts with proper perspective and transparency. Hardscape surfaces (patios, paths) are harder to represent as discrete objects than plants.


Approach D: Manual Layer Groups (No New AI)

How it works: No new AI features. Reorganize existing Visual Designer tools into named layer groups. "Cleanup" uses existing background removal. "Base Design" groups shapes and materials (existing Shapes3d plus materials). "Planting" groups plant placements (existing Plants3d). Toggle entire groups on/off.

Strengths: Builds entirely on shipped, proven features. Zero AI cost. Maximum predictability — what you place is what you get. Fastest to build (UI reorganization, no new backend). No API dependencies or failure modes.

Weaknesses: Lowest visual quality — existing drag-and-drop overlays aren't photorealistic full-scene composites. Cleanup stage is limited to removing the background image, not intelligently filling removed areas. More manual work per design. Doesn't leverage AI for the hardest part: making things look real in context.


Approach E: AI-Assisted Manual Workflow

How it works: AI generates suggestions and starting content — recommended plant layouts, surface treatments, removal previews. The user places, arranges, and refines everything manually on organized layer groups. AI is an advisor and generator of raw materials, not the final author.

Strengths: Best balance of quality and control. Professionals retain full authorship while AI accelerates the tedious parts. Predictable — user sees exactly what they're placing. Lower AI cost than full-scene generation. Leverages and extends existing manual tools.

Weaknesses: Slower than full AI approaches — requires more user interaction per design. Output quality depends on the professional's design eye. May feel like "more work" compared to approaches where AI handles composition.


Approach F: Guided Inpainting with Snapshots

How it works: Each stage uses targeted AI inpainting (extending the existing hardscape feature). The system automatically captures the design state before and after each stage as a snapshot. Snapshots become the toggleable layers. Lightweight extension of what's already shipped.

Strengths: Minimal new architecture — extends existing inpainting workflow with automatic state capture. Familiar to current users. Fast to build. Good output quality from proven Gemini inpainting.

Weaknesses: Snapshots are flat images — you can't edit within a snapshot, only regenerate the entire stage. Going back to Stage 2 and changing something makes Stage 3's snapshot stale. Less true layer independence — essentially "undo/redo with named checkpoints" rather than composable layers.


Approach G: Semantic Region Pipeline

How it works: AI segments the property photo into semantic regions (lawn, house, driveway, sky, planting beds, etc.). User assigns treatments per region. Each region's treatment is generated independently and composited. Stages correspond to region types: cleanup regions, hardscape regions, planting regions.
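A minimal sketch of the per-region compositing step, using a flat pixel list for brevity; `generate_fill` is a hypothetical stand-in for the per-region AI generation call:

```python
def composite_regions(photo, segmentation, treatments, generate_fill):
    """Apply one treatment per semantic region and composite the results.
    `segmentation` maps each pixel index to a region label; pixels in
    regions with no treatment keep the original photo."""
    rendered = {label: generate_fill(photo, label, prompt)
                for label, prompt in treatments.items()}
    return [rendered[label][i] if label in rendered else photo[i]
            for i, label in enumerate(segmentation)]
```

Because each region is generated independently, regenerating one zone leaves the others untouched; the flip side, as noted above, is that adjacent regions can show boundary artifacts where their fills meet.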

Strengths: Region-level independence — regenerate one zone without affecting others. Natural mapping to landscape zones. AI segmentation identifies boundaries automatically.

Weaknesses: Segmentation quality varies across property types and photo angles. Regions don't always map cleanly to the three stages (a lawn area might involve both cleanup and base design). Boundary artifacts between adjacent regions. Most complex to implement well. User may spend significant time correcting AI segmentation before doing any actual design work.


Approach H: Composite Hybrid

How it works: Use the best method per stage. Stage 1 (Cleanup) uses full-scene AI inpainting — best for seamless object removal. Stage 2 (Base Design) mixes AI-generated surface fills with manual shape/material tools for borders and edges. Stage 3 (Planting) uses object-based generation — AI produces plant groupings as editable cutouts, individually movable and swappable.

Strengths: Optimizes per stage. Photorealistic removal where seamlessness matters most (cleanup). Manual control where it matters most (planting — the thing clients change most often). Stage 3 plants are individually repositionable and replaceable.

Weaknesses: Three different interaction patterns across stages could confuse users. Most complex to build and maintain. Hardest to explain. Broadest technical surface area (inpainting + object generation + manual tools all required).


Scorecard

Scale: 1 (worst) to 5 (best). Higher is better for all metrics.

| Metric | A | B | C | D | E | F | G | H |
|--------|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Output Quality | 5 | 4 | 3 | 2 | 3 | 4 | 4 | 4 |
| Iteration Speed | 3 | 3 | 5 | 4 | 3 | 2 | 4 | 4 |
| User Control | 2 | 2 | 5 | 5 | 5 | 2 | 3 | 4 |
| Layer Independence | 3 | 5 | 5 | 5 | 5 | 2 | 4 | 4 |
| Predictability | 3 | 2 | 4 | 5 | 4 | 3 | 3 | 3 |
| Cost per Design | 2 | 1 | 3 | 5 | 4 | 3 | 2 | 3 |
| Learning Curve | 4 | 3 | 3 | 3 | 3 | 4 | 2 | 2 |
| Presentation Value | 4 | 5 | 4 | 3 | 4 | 3 | 3 | 4 |
| Technical Risk | 3 | 2 | 3 | 5 | 4 | 4 | 2 | 2 |
| Build Effort | 3 | 2 | 3 | 5 | 4 | 4 | 2 | 2 |
| Total | 32 | 29 | 38 | 42 | 39 | 31 | 29 | 32 |

Reading the Scores

D (Manual Layer Groups) scores highest overall — it's safe, cheap, and fast to ship. But it scores lowest on Output Quality (2/5), which is arguably the single most important metric for a tool that helps professionals sell projects to homeowners.

C (Object AI) and E (AI-Assisted Manual) occupy the balanced middle ground — good quality, strong control, moderate cost. These are the approaches most likely to become the long-term architecture.

A (Full-Scene AI) delivers the best output quality but sacrifices control and cost efficiency. Risk of "impressive demo, frustrating daily use."

H (Hybrid) is the most architecturally ambitious — optimizing per stage is appealing in theory but means three different interaction paradigms to learn, build, and maintain.

B and G are technically interesting but carry the most implementation risk for the least proven upside.

F (Snapshots) is the easiest step from where we are today, but its "layers" are really just save points — limited independence.

Trade-Off Clusters

| Strategy | Approaches | Best For |
|----------|------------|----------|
| Ship fast, learn first | D, F | Prove the layer workflow resonates before investing in AI. Low risk, fast to build. Quality ceiling is low but you validate the concept. |
| Balanced quality + control | C, E | Best overall trade-off. AI-quality output with hands-on control. Moderate build effort. Most likely long-term architecture. |
| Maximum wow factor | A, H | Highest output quality, best for demos and first impressions. Expensive, less controllable, harder to build. |
| Technically ambitious | B, G | Could leapfrog competition if executed perfectly. Highest implementation risk. |


Design Decisions (Resolved)

  1. Layer variants — Yes. Multiple versions of any stage (e.g., three planting schemes for Stage 3, two patio materials for Stage 2). Swap variants without affecting other layers.

  2. Cascading regeneration — Yes, with prompt. When an upstream stage changes, downstream stages auto-regenerate using stored inputs. User is notified and can cancel.

  3. Hybrid mode — Yes. Mix AI-generated content with manual drag-and-drop at any stage. Manual placements overlay AI-generated content.

  4. Export — Configurable via toggles. Export dialog includes layer visibility controls: composite (default), all states as separate pages, specific combinations, include/exclude manual elements.
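Decision 2 (cascading regeneration) can be sketched as a small stage chain that records what each stage was built from; `StageChain`, `rebuild`, and `confirm` are illustrative names, not the shipped data model:

```python
class StageChain:
    """Each stage stores the output it was generated into, keyed by stage
    name in pipeline order. Editing an upstream stage marks everything
    downstream stale, since each stage was generated from its
    predecessor's specific output."""
    def __init__(self, stages):
        self.stages = stages            # stage name -> current output id
        self.order = list(stages)       # pipeline order (dicts keep insertion order)

    def edit(self, name, new_output):
        """Apply an edit to one stage; return the downstream stages now stale."""
        self.stages[name] = new_output
        idx = self.order.index(name)
        return self.order[idx + 1:]

    def regenerate(self, stale, rebuild, confirm):
        """Auto-regenerate stale stages from stored inputs, unless the
        user cancels via `confirm` (the notification hook)."""
        if not confirm(stale):
            return False
        for name in stale:
            prev = self.order[self.order.index(name) - 1]
            self.stages[name] = rebuild(name, self.stages[prev])
        return True
```

The `confirm` callback is where the "user is notified and can cancel" behavior lives; canceling leaves the downstream stages stale rather than silently regenerating them.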

Related Ideas

  • Background Element Extraction (SS-RP-2026-009): Solves the z-ordering limitation that makes Design Layers more valuable. AI-generated content in background layers can be extracted into discrete, z-orderable design objects. Shares research on image differencing and AI cleanup. Higher priority — extraction unblocks the full value of existing AI generation features, not just future layer workflows.

Open Questions

  1. Which approach (or combination) to pursue? — Pending decision after reviewing the scorecard.
  2. Layer naming — User-editable names, or fixed "Cleanup / Base Design / Planting" labels?
  3. Maximum variants — Cap on variants per layer? (Storage and UX clutter)
  4. Collaboration — When sharing, does the viewer see all layers and variants or a curated subset?