[WallFit] Grids, Spacing, and Resolution - 2
— Designing for Predictable Output
The most time-consuming part of building WallFit wasn’t UI polish.
It was math.
Grids, spacing, aspect ratios — these look like minor configuration options.
In reality, they define the entire outcome.
A Grid Is Not Just Rows and Columns
A grid in WallFit isn’t just rows × columns.
Each cell must account for:
- Outer padding
- Inner spacing
- Canvas ratio
- Device resolution
- Pixel alignment
The goal was simple:
What you see on screen must match what gets exported.
Here’s a simplified version of the grid calculation logic:
import CoreGraphics

struct GridLayout {
    let rows: Int
    let columns: Int
    let outerPadding: CGFloat
    let innerSpacing: CGFloat
}

func calculateCellFrames(
    canvasSize: CGSize,
    layout: GridLayout
) -> [CGRect] {
    // Subtract total inner spacing and outer padding to find the area cells can occupy.
    let totalInnerSpacingWidth = CGFloat(layout.columns - 1) * layout.innerSpacing
    let totalInnerSpacingHeight = CGFloat(layout.rows - 1) * layout.innerSpacing
    let availableWidth = canvasSize.width - (layout.outerPadding * 2) - totalInnerSpacingWidth
    let availableHeight = canvasSize.height - (layout.outerPadding * 2) - totalInnerSpacingHeight

    // Floor to whole values so cell edges land on pixel boundaries.
    let cellWidth = floor(availableWidth / CGFloat(layout.columns))
    let cellHeight = floor(availableHeight / CGFloat(layout.rows))

    var frames: [CGRect] = []
    for row in 0..<layout.rows {
        for col in 0..<layout.columns {
            let x = layout.outerPadding + CGFloat(col) * (cellWidth + layout.innerSpacing)
            let y = layout.outerPadding + CGFloat(row) * (cellHeight + layout.innerSpacing)
            frames.append(CGRect(x: x, y: y, width: cellWidth, height: cellHeight))
        }
    }
    return frames
}
The complexity isn’t in the code itself.
It’s in the edge cases.
- What happens when spacing is zero?
- What happens when division produces fractional pixels?
- What if outer padding is disproportionately large?
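The fractional-pixel case is worth making concrete. Here is a minimal sketch (my illustration, not WallFit's actual code; `flooredCellSize` is a hypothetical helper) showing how flooring the cell size leaves leftover pixels that, if ignored, show up as an uneven trailing margin:

```swift
import Foundation

// Hypothetical helper: floor the cell size for one axis and report
// the leftover pixels that flooring discards.
func flooredCellSize(available: CGFloat, cells: Int, spacing: CGFloat) -> (cell: CGFloat, leftover: CGFloat) {
    let usable = available - CGFloat(cells - 1) * spacing
    let cell = (usable / CGFloat(cells)).rounded(.down)
    let leftover = usable - cell * CGFloat(cells)
    return (cell, leftover)
}

// 1080 px across 7 columns with 8 px spacing: 1032 usable px / 7
// is fractional, so flooring to 147 px leaves 3 px unaccounted for.
let (cell, leftover) = flooredCellSize(available: 1080, cells: 7, spacing: 8)
// cell == 147, leftover == 3
```

One way to handle the remainder is to fold it back into the outer padding (split across both sides) so the grid stays centered instead of hugging one edge.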
This is where AI helped.
Working with Claude and GPT
I didn’t use AI to decide what to build.
I used it to stress-test what I had already decided.
Claude was useful for reviewing structural logic.
GPT was helpful in identifying edge cases quickly.
For example:
- Verifying whether floor was safer than round
- Checking how fractional coordinates affect rendering sharpness
- Generating resolution-specific test scenarios
AI acted more like a reviewer than a generator.
Matching Preview and Export
SwiftUI layouts are calculated in points.
Exported images are rendered in pixels.
If scale handling is unclear, preview and export will drift apart.
To avoid that, I explicitly controlled rendering scale:
import SwiftUI
import UIKit

func renderImage(from view: some View, size: CGSize, scale: CGFloat) -> UIImage {
    let hosting = UIHostingController(rootView: view)
    hosting.view.bounds = CGRect(origin: .zero, size: size)
    // Force a layout pass so the hierarchy is fully resolved before drawing.
    hosting.view.layoutIfNeeded()

    // A fixed scale keeps the exported pixel size independent of the device.
    let rendererFormat = UIGraphicsImageRendererFormat()
    rendererFormat.scale = scale

    let renderer = UIGraphicsImageRenderer(size: size, format: rendererFormat)
    return renderer.image { _ in
        hosting.view.drawHierarchy(in: hosting.view.bounds, afterScreenUpdates: true)
    }
}
The key wasn’t the API call.
It was understanding:
- When to use device scale vs fixed scale
- How pixel snapping affects clarity
- Why fractional layout values create blur
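The blur point can be shown with a few lines of arithmetic. This is a sketch of pixel snapping in general (my illustration; `pixelAligned` is a hypothetical helper, not WallFit's code): SwiftUI works in points, and at scale 3 one point is three pixels, so a point value is snapped by rounding in pixel space and converting back.

```swift
import Foundation

// Snap a point value to the device's pixel grid. A value that falls
// between pixels is rendered with antialiasing, which reads as blur.
func pixelAligned(_ value: CGFloat, scale: CGFloat) -> CGFloat {
    (value * scale).rounded() / scale
}

let snapped = pixelAligned(10.4, scale: 3)
// 10.4 pt * 3 = 31.2 px -> rounds to 31 px -> 31 / 3 ≈ 10.333 pt
```

Any layout value that survives to rendering as a fraction of a pixel gets antialiased across two pixel rows, which is exactly the softness that made preview and export look different.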
AI didn’t solve this.
But it helped clarify the rendering model faster than digging through documentation alone.
Design and Icon Decisions
AI was also useful when thinking through design direction.
Not for aesthetics — but for structure.
Questions like:
- Should this feel like a creative app or a utility tool?
- Should the icon communicate emotion or function?
- What happens if the UI density increases slightly?
It helped organize trade-offs.
Final decisions were manual.
WallFit isn’t designed to impress.
It’s designed to stay out of the way.
In the final part, I’ll talk about running WallFit as a free app —
ads, analytics, and the decisions behind what not to collect.
#wallfit #app #ai #llm #background #image #maker

