Apr 17, 2026

Our Local AI Cloud: How We Brought ComfyUI to the Render Farm

How we turned AI workflows into infrastructure — using our existing GPU farm, fully on-premise

At our studio, we work on projects where every frame matters — and increasingly, so does every minute of an artist’s time.

Over the past year, generative AI tools have moved from novelty to real production utility. Chief among them is ComfyUI: a powerful, node-based framework for workflows ranging from image generation and video animation to depth estimation and rotoscoping.

But using AI tools is not the real shift.

The shift is this: AI workflows are no longer tools — they are infrastructure.

The Problem with AI on the Desktop

ComfyUI was designed to run on a single machine. When an artist launches a workflow, their GPU is fully occupied until the job completes.

For quick tests, that’s fine. For high-resolution video processing and production-scale workloads, it becomes a bottleneck.

Workstations get locked. Artists wait. Productivity drops.

The obvious answer is the cloud — and for many studios, it’s the wrong one.

Cloud-based solutions introduce two major issues:

  • Costs don’t scale predictably

  • Client data leaves your infrastructure

For a VFX studio, the second point isn’t optional. It’s a contractual and ethical boundary.

So the cloud wasn’t an option.

The Insight: AI Jobs Are Render Jobs

We already had a system designed to handle distributed compute.

Our render farm.

It queues jobs, distributes them across GPU nodes, monitors execution, and scales across machines. We use CGRU/Afanasy, a system proven in production over many years.

The insight was simple: A ComfyUI workflow is just a compute task.

It takes inputs, runs on a GPU, and produces outputs. That’s exactly what a render farm does.
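To make the mapping concrete, here is a minimal sketch of wrapping one ComfyUI run as an ordinary Afanasy job. It assumes CGRU's Python API (`af`) is available on the submitting machine; the runner script `comfy_run.py` and its flags are our own placeholders, not part of ComfyUI or CGRU.

```python
# Sketch: a ComfyUI workflow run, submitted as a one-task Afanasy job.
import shlex


def build_task_command(workflow_path: str, inputs_dir: str, outputs_dir: str) -> str:
    """Build the shell command a farm node will execute for one job.

    comfy_run.py is a hypothetical wrapper script that loads the workflow,
    injects inputs, and drives ComfyUI; quote paths so spaces are safe.
    """
    return (
        "python comfy_run.py"
        f" --workflow {shlex.quote(workflow_path)}"
        f" --inputs {shlex.quote(inputs_dir)}"
        f" --outputs {shlex.quote(outputs_dir)}"
    )


def submit_to_afanasy(job_name: str, command: str, service: str = "comfyui") -> None:
    """Create a single-block Afanasy job and send it to the farm."""
    import af  # CGRU's Python API; only present where CGRU is installed

    job = af.Job(job_name)
    block = af.Block("comfy", service)  # the service name routes to a GPU pool
    block.setCommand(command)
    job.blocks.append(block)
    job.send()
```

From the farm's point of view there is nothing special here: the job is queued, dispatched, and monitored exactly like a render.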

From Render Farm to Local AI Cloud

Instead of adapting our workflows to the cloud, we adapted our infrastructure to AI.

We turned our render farm into what we call a Local AI Cloud.

A system where:

  • AI workflows are submitted like render jobs

  • GPUs across machines execute them in parallel

  • Data never leaves the network

  • Artists don’t wait on local hardware

This isn’t a workaround. It’s a shift in how AI is deployed in production.

How It Works

The system is built as a layered pipeline — simple on the surface, powerful underneath.

Artist Interface

Artists access a browser-based panel listing available AI workflows.

They select a workflow, provide inputs (footage, prompts, parameters), and submit.

No JSON. No command line. No friction.

Job Submission & Isolation

Each submission creates an isolated job environment with its own inputs and outputs.

This ensures:

  • No file conflicts

  • Clean tracking of all jobs

  • Full auditability

Multiple jobs can run concurrently across the team without interference.
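The isolation step can be sketched in a few lines. Directory names and the manifest layout below are illustrative, not our exact production schema:

```python
# Sketch of per-job isolation: each submission gets its own directory tree
# under a shared job root, plus a manifest for tracking and auditability.
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path


def create_job_env(job_root: str, workflow_name: str, params: dict) -> Path:
    """Create an isolated job directory with inputs/, outputs/ and a manifest."""
    job_id = f"{workflow_name}-{uuid.uuid4().hex[:8]}"  # unique, so no file conflicts
    job_dir = Path(job_root) / job_id
    (job_dir / "inputs").mkdir(parents=True)
    (job_dir / "outputs").mkdir()

    # The manifest records who ran what, with which parameters, and when.
    manifest = {
        "job_id": job_id,
        "workflow": workflow_name,
        "params": params,
        "submitted_at": datetime.now(timezone.utc).isoformat(),
    }
    (job_dir / "manifest.json").write_text(json.dumps(manifest, indent=2))
    return job_dir
```

Because every job owns its directory, concurrent submissions never touch each other's files, and the manifest gives a complete audit trail.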

Render Farm Execution

The job is picked up by CGRU/Afanasy and assigned to an available GPU node.

Each node:

  • Loads a parametrized ComfyUI workflow

  • Injects the artist’s inputs

  • Executes via the ComfyUI API

We maintain separate GPU pools for different workload types, optimizing VRAM usage and throughput.
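The injection-and-execution step might look like the sketch below. ComfyUI really does expose a `POST /prompt` endpoint that accepts the workflow graph in its API (node-graph JSON) format; the `$placeholder$` convention for parameterized workflows, however, is our own illustration, not a ComfyUI feature.

```python
# Sketch: parameterize a ComfyUI workflow template and queue it via the HTTP API.
import json
import urllib.request


def inject_inputs(workflow_template: dict, values: dict) -> dict:
    """Replace $key$ placeholders in node inputs with the artist's values."""
    graph = json.loads(json.dumps(workflow_template))  # deep copy, keep template intact
    for node in graph.values():
        for name, value in node.get("inputs", {}).items():
            if isinstance(value, str) and value.startswith("$") and value.endswith("$"):
                key = value.strip("$")
                if key in values:
                    node["inputs"][name] = values[key]
    return graph


def queue_prompt(graph: dict, host: str = "127.0.0.1:8188") -> str:
    """Queue the graph on a ComfyUI instance; returns the prompt_id."""
    payload = json.dumps({"prompt": graph}).encode("utf-8")
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["prompt_id"]
```

The template stays read-only; each job gets its own resolved copy of the graph before it is queued on the node.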

Monitoring & Output

Artists see live progress through the web interface.

Once complete:

  • Outputs are written to shared storage

  • Results are instantly previewable in-browser

Submit → move on → review when ready.
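Under the hood, completion can be detected by polling ComfyUI's `GET /history/{prompt_id}` endpoint, which lists a prompt only after it has finished executing. The host and polling interval below are illustrative defaults:

```python
# Sketch of job monitoring: poll /history until the prompt appears there,
# which signals that execution has completed and outputs were written.
import json
import time
import urllib.request


def history_url(prompt_id: str, host: str = "127.0.0.1:8188") -> str:
    """URL of the per-prompt history record on a ComfyUI instance."""
    return f"http://{host}/history/{prompt_id}"


def wait_for_outputs(prompt_id: str, host: str = "127.0.0.1:8188",
                     poll_seconds: float = 2.0, timeout: float = 3600.0) -> dict:
    """Block until the prompt shows up in /history, then return its record."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with urllib.request.urlopen(history_url(prompt_id, host)) as resp:
            history = json.loads(resp.read())
        if prompt_id in history:  # present only once execution has finished
            return history[prompt_id]
        time.sleep(poll_seconds)
    raise TimeoutError(f"prompt {prompt_id} did not finish within {timeout}s")
```

A small poller like this is what lets the web panel show live status without the artist's workstation doing any of the work.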

What This Unlocks

This system changes how AI fits into production.

  • Workstations stay free — artists no longer wait on local GPUs

  • Data never leaves — everything runs fully on-premise

  • No per-job cost — we use existing hardware capacity

  • Parallel execution at scale — multiple jobs run simultaneously

  • Shared workflows — one artist’s solution becomes everyone’s tool

A Living Workflow System

This isn’t a static toolset. It’s a growing system.

When an artist builds a strong workflow — tests it, refines it, proves it in production — it can be promoted into the shared library.

The framework is the infrastructure.
The artists are the ones expanding it.

This turns AI from individual experimentation into collective capability.

Why This Matters

AI in creative industries is at an inflection point.

Most teams are still treating AI as:

  • standalone tools

  • isolated experiments

  • desktop-bound processes

But that model doesn’t scale.

What scales is infrastructure:

  • shared compute

  • centralized workflows

  • distributed execution

At the same time:

  • GPU demand is increasing

  • Cloud costs are harder to control

  • Data privacy requirements are tightening

The default assumption is that AI belongs in the cloud.

We don’t think that’s true.

Our Position

We believe:

  • AI should run where your data already lives

  • Infrastructure should adapt to workflows — not the other way around

  • Studios should own their compute, not rent it per request

This is why we built our Local AI Cloud.

Not as an experiment — but as a foundation.

What’s Next

The system is already in active use, with workflows across:

  • Image and video generation with ControlNets

  • Style and performance transfer

  • Depth and normal estimation

  • Rotoscoping and matting

  • Upscaling and denoising

  • Pipeline utilities (including EXR processing with full color science support)

More workflows are being added. The system improves as it’s used.

Closing

AI workflows won’t live on desktops for long. They’ll live in infrastructure.

And for teams that care about control, cost, and confidentiality — that infrastructure doesn’t have to be the cloud.
It can be your own.

If you're building something similar or exploring this direction, we’re always open to sharing what we’ve learned.

Contact Us: info@imajfuturelab.com

© 2026 Imaj Future Lab. All rights reserved.
