The question on every creative’s mind: As AI gets better at generating creative content, what role do creators play in the process?
As generative AI matures, the world is being flooded with AI-generated content. Most of it is average by definition, because that’s what models produce by default. But the content that stands out, the top 10% that captures the overwhelming majority of audience attention, is made by creators with artistry and control, using tools that let them apply both at speed. The best creative tools don’t replace human judgment; they amplify it. The future belongs to platforms that put creators in the driver’s seat, not ones that reduce creativity to a prompt and a “generate” button.
That’s why we led Comfy’s $30 million financing: Comfy is building the operating system for creative AI.
The evolution of the creative suite
The AI image and video generation market is projected to reach $60 billion by 2030. The market is growing rapidly, but what matters more is the structural shift driving that growth: creative workflows are becoming programmable.
What does that mean? For decades, creative production meant opening Photoshop or After Effects, working manually through a pipeline, and exporting a finished asset. That paradigm is breaking down. Today, teams need to chain together models, customize outputs, orchestrate multi-step pipelines across image, video, 3D, and audio — and do it all with precision and control. The one-click AI tools that dominated early headlines are useful for casual exploration, but are inadequate for serious production work.
This need for deeper control has given rise to node-based workflows — visual interfaces where each step in a creative pipeline is a building block (a “node”) that can be connected, rearranged, and reused. One node generates an image from a text prompt, the next adjusts the lighting using a different model, another swaps the background, and a final node turns the result into video. Strung together, these nodes form a complete workflow — and when you run hundreds or thousands of assets through that workflow, it becomes a production pipeline. The future of creative production is modular, composable, and programmable, and Comfy arrived at this conclusion back in 2022. Since then, the rest of the industry has validated the thesis: Adobe launched Project Graph, Figma acquired Weavy, and Runway launched Workflows.
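The node-chaining idea above can be sketched in a few lines of plain Python. This is a toy illustration of the pattern, not Comfy’s actual API — the node names and operations here are hypothetical stand-ins for real model calls:

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Node:
    """One step in a pipeline: a named operation plus its upstream nodes."""
    name: str
    op: Callable[..., Any]
    inputs: list["Node"] = field(default_factory=list)

    def run(self) -> Any:
        # Evaluate upstream nodes first, then apply this node's operation.
        return self.op(*(n.run() for n in self.inputs))

# Hypothetical stand-ins for real model calls.
def generate_image() -> str: return "image(prompt='neon city')"
def relight(img: str) -> str: return f"relit({img})"
def swap_background(img: str) -> str: return f"bg_swapped({img})"
def to_video(img: str) -> str: return f"video({img})"

# Wire up the four-step workflow described above: generate → relight →
# swap background → turn into video.
gen = Node("generate", generate_image)
lit = Node("relight", relight, [gen])
bg = Node("background", swap_background, [lit])
vid = Node("to_video", to_video, [bg])

print(vid.run())  # → video(bg_swapped(relit(image(prompt='neon city'))))
```

Because each node is an independent, reusable unit, swapping one model for another means rewiring a single node rather than rebuilding the pipeline — which is what makes the approach composable.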
The category leader and the industry standard
ComfyUI started as an open-source project born out of one developer’s obsession with getting the most out of Stable Diffusion. Today, Comfy has become the de facto standard for how production-grade generative AI workflows are built, shared, and run.
Comfy has over 4 million users, more than 60,000 community-built nodes, and roughly 50,000 daily downloads. Its organic traffic dwarfs competitors — nearly 5 million monthly visits with 99% organic reach. Comfy’s workflow files have become the format standard for generative AI workflows, analogous to what Photoshop’s PSD became for image editing. Other platforms, including fal and Replicate, import Comfy workflows directly. When your file format powers an ecosystem, you’ve built something competitors can’t simply replicate with a better UI.
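A workflow file of this kind is essentially a serialized node graph that any runtime can load and execute. As a simplified, hypothetical sketch (illustrative only — not Comfy’s real schema), such a format might map node IDs to an operation type, parameters, and upstream references, which the runtime then topologically sorts before execution:

```python
import json

# Toy, hypothetical workflow file: each node id maps to an operation type,
# its parameters, and the ids of its upstream nodes. (Illustrative only --
# this is not the actual Comfy workflow schema.)
workflow_json = """
{
  "1": {"type": "TextToImage",  "params": {"prompt": "neon city"}, "inputs": []},
  "2": {"type": "Relight",      "params": {"direction": "left"},   "inputs": ["1"]},
  "3": {"type": "ImageToVideo", "params": {"frames": 24},          "inputs": ["2"]}
}
"""

graph = json.loads(workflow_json)

def execution_order(graph: dict) -> list[str]:
    """Topologically sort node ids so every node runs after its inputs."""
    order: list[str] = []
    seen: set[str] = set()

    def visit(nid: str) -> None:
        if nid in seen:
            return
        seen.add(nid)
        for dep in graph[nid]["inputs"]:
            visit(dep)  # resolve upstream dependencies first
        order.append(nid)

    for nid in graph:
        visit(nid)
    return order

print([graph[n]["type"] for n in execution_order(graph)])
# → ['TextToImage', 'Relight', 'ImageToVideo']
```

This is why a shared file format compounds: any tool that can parse the graph and respect its dependency order can run, import, or remix workflows built elsewhere.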
Open source as a compounding advantage
We’ve seen with Supabase how a passionate developer community created compounding network effects that alternatives couldn’t match. Comfy’s open-source ecosystem follows the same playbook. The community has built tens of thousands of custom nodes, extensions, models, LoRAs, and workflow templates. New integrations launch daily. This is a self-reinforcing flywheel: more creators build more nodes, which enable more capabilities, which attract more creators.
Open source also means Comfy runs locally — on your own hardware, with your own models, connected to your own internal tooling. This is intentional. For studios working with confidential assets and creators who need speed and full flexibility, local execution is a core part of the product’s identity. Comfy is natively both a local and a cloud platform, and that dual footing is a strategic advantage that pure-cloud offerings can’t replicate.
Far more than a UI — a full execution engine
Comfy is not just a visual interface for chaining models together. It offers a level of control that other tools can’t touch. It’s a full execution engine — teams fine-tune models, customize every parameter, export to code, and run production pipelines at scale. Enterprise customers told us they simply cannot build the complex workflows required for production-grade AI without it. Studios are building reusable creative pipelines on Comfy. Agencies are using it to power major brand campaigns — including the first primarily AI-generated Super Bowl commercial.
What makes Comfy truly durable is that it’s the source of truth for these workflows. Comfy is more analogous to GitHub than it is to a design tool — it’s the programmable infrastructure layer where generative media workflows are built, versioned, shared, and operationalized. Models will keep advancing. Interfaces will keep changing. But the orchestration layer underneath persists. That’s what Comfy owns.
Looking ahead
In a world increasingly flooded with AI slop, Comfy is building for the creators who refuse to settle for average. With incredible user and revenue growth, a rock-solid open-source foundation, and a cloud product that’s live and scaling, Comfy is at an inflection point. The longer-term vision is even more compelling: as agentic systems become a bigger part of creative production, Comfy’s reusable workflows and nodes become composable media building blocks that both humans and AI agents build on — less a single application and more the programmable infrastructure layer for how generative content gets made.
We’re thrilled to partner with Comfy as they build the operating system for generative media.