diffusion.garden is an AI canvas for image generation that I built during my residency at the Recurse Center. You start with a prompt, then branch it into connected text or image blocks, each of which can run tools like expand, twist, and image variations. A block can take input from multiple blocks and direct its output to many others. The edges between blocks preserve the history of an idea, making it easy to branch and explore in multiple directions.
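Under the hood, this makes the canvas a directed graph of blocks. Here is a minimal sketch of that data model in TypeScript; the type and field names are hypothetical, not the app's actual schema:

```typescript
// Hypothetical data model for the canvas graph; the real types in
// diffusion.garden may differ.
type BlockKind = "text" | "image";

interface Block {
  id: string;
  kind: BlockKind;
  content: string; // prompt text, or an image URL for image blocks
}

// An edge records which block's output feeds which block's input,
// so the lineage of an idea lives in the graph itself.
interface Edge {
  id: string;
  source: string; // id of the upstream block
  target: string; // id of the downstream block
}

// A block can have many incoming and many outgoing edges.
function inputsOf(blockId: string, edges: Edge[]): string[] {
  return edges.filter((e) => e.target === blockId).map((e) => e.source);
}
```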
I decided to build this as an exercise to get comfortable integrating text and image generation into a real product. I also wanted to run an experiment in building software tightly tailored to my own needs. All the tools I've built into this product are the result of trying to sand down the rough edges of my image-generation workflow.
You bet! Check out the demo below:
The main canvas runs inside a React + TypeScript frontend and is built on ReactFlow, an excellent library that provides scaffolding for node-based UIs. I decided to try out Zustand for state management because it seemed like a lighter, simpler alternative to Redux, and it has delivered on that expectation.
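A common way to pair the two, which ReactFlow's own documentation demonstrates, is to keep the nodes and edges in a Zustand store and fold ReactFlow's change events back into it. A minimal sketch along those lines, with an illustrative store shape rather than the app's actual code:

```typescript
import { create } from "zustand";
import {
  applyNodeChanges,
  applyEdgeChanges,
  type Node,
  type Edge,
  type NodeChange,
  type EdgeChange,
} from "reactflow";

interface CanvasState {
  nodes: Node[];
  edges: Edge[];
  onNodesChange: (changes: NodeChange[]) => void;
  onEdgesChange: (changes: EdgeChange[]) => void;
}

// Components subscribe only to the slices they render, so a drag on
// one node doesn't force a re-render of the whole canvas.
export const useCanvasStore = create<CanvasState>((set) => ({
  nodes: [],
  edges: [],
  // ReactFlow reports drags, selections, and deletions as change
  // events; applyNodeChanges/applyEdgeChanges fold them into fresh arrays.
  onNodesChange: (changes) =>
    set((state) => ({ nodes: applyNodeChanges(changes, state.nodes) })),
  onEdgesChange: (changes) =>
    set((state) => ({ edges: applyEdgeChanges(changes, state.edges) })),
}));
```

The `<ReactFlow>` component then takes `nodes`, `edges`, and the two handlers as props, and the store stays the single source of truth.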
It would have been feasible to run this entire application in the frontend, since I planned to run inference through the OpenAI and Gemini APIs. However, I had two key requirements for this project that steered me towards implementing a backend:
To meet these requirements, I decided to implement a backend in Python, running a FastAPI web server with a Postgres database. The backend has several responsibilities:
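To give a flavor of how the frontend talks to this backend, here is a hypothetical sketch of requesting a generation step; the route, payload, and response shape are invented for illustration:

```typescript
// Hypothetical client call; diffusion.garden's real routes and
// payload shapes are not shown here.
interface GenerateRequest {
  blockId: string; // the block whose inputs form the prompt context
  prompt: string;
}

async function runGeneration(req: GenerateRequest): Promise<string> {
  // The FastAPI backend can forward the request to OpenAI or Gemini
  // and persist the result; routing through a backend also keeps
  // model API keys out of the browser.
  const res = await fetch("/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`Generation failed: ${res.status}`);
  const { output } = (await res.json()) as { output: string };
  return output;
}
```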
Both the frontend and backend are hosted on Railway as separate projects. Railway takes care of vertically auto-scaling resources for my web server and database.
The code for this project is open source and available on GitHub.