Prometheus

Distributed computing for the masses.


Prometheus, formerly known as HANC, the Heating Application with Network Computing (I'm so good at naming), started as an individual project in 2021. After a long night of stress testing computer systems, I realized all that extra heat was simply lost to the environment, turning electricity into carbon emissions. Prometheus was created to repurpose that computational waste heat as a source of residential and industrial heating.

The electric heater was invented in 1883.

More than 140 years later, modern electric heaters are still little hotboxes powered by an electric coil, despite massive improvements elsewhere in technology. And since computers already convert every watt they draw into heat, why not put that heat somewhere we actually want it?

Even crazier: on average, over 43% of the energy data centers consume goes to cooling alone!

Through a distributed computing network composed of independently operating nodes, Prometheus was an attempt at an infinitely scalable, environmentally friendly alternative to traditional data centers, a new paradigm where the cooling problem no longer existed because heat was the intended outcome.

The goal was to bring environmentally conscious heating to consumers while making reliable computing power accessible to researchers, especially those working in medicine. To create a world where heating didn't just provide warmth, but carried us a step into the future, the way the Greek god Prometheus's fire did for humanity.







I learned so much from this project, and I'm grateful for all the wonderful people who helped me along the way.
~ky

I initially partnered with BOINC and Folding@home to use open source infrastructure to distribute workloads.

Rather than ejecting waste heat into the environment, Prometheus dynamically generated heat only where it was needed, in both residential and industrial heating applications (e.g. greenhouses). It was also meant to revive old hardware: over 81% of the energy a computer consumes over its lifetime is spent on its manufacturing, and giving aged machines a second life meant keeping millions of tons of e-waste out of landfills.

For the consumer, instead of paying a utility to heat your home, your computer would pay you by selling its compute on the open market, the way Salad and Orca are attempting today. The heat itself would be free, a byproduct of the work. My vision was to bring unlimited residential heating to everyone in America, funded entirely by the compute the network produced.

Prometheus was shuttered in 2025. The two things that broke it were the ones the current crop of companies still struggle with: bulk data transmission to and from residential nodes never worked at the scale I needed, and there was no good answer for verifying that work returned from an untrusted home machine was correct. My best attempt after years of research was a fuzzy verification scheme that traded off accuracy for throughput, and it still wasn't good enough.

Qarnot, my closest contemporary, took a similar approach, targeting industrial water pre-heating with custom hardware rather than the broader consumer market where the majority of heating demand lives.

The workload has only gotten harder since. ML inference, the compute work that pays best today, is exactly the wrong shape for residential distributed compute: model weights are enormous, every cold start drags gigabytes across a home connection, latency expectations are tight, and verifying inference results at scale remains an open problem. Not to mention the security and privacy risks of running AI models on untrusted hardware.


Timeline

2021

Inception

2022

Worked with Elmor to produce v1 temperature control kits
Automated work splitting via AutoHotkey scripts and Unigine Heaven

2023

Developed Elmor v2 kits
Self-taught Python, 3D printing, and hardware interfacing
Successfully demoed first-of-a-kind dynamic heating control on a PC

2024

Deployed across 3 homes
Improved thermistor feedback loop and hysteresis
Replicated scientific research outputs at scale on ALiEn

2025

Implemented fuzzy verification and file recombination
Incubated via Berkeley StEP
Made the difficult decision to sunset the project


How it worked

Each node was an ordinary PC fitted with a thermistor control unit. Nodes coordinated over the internet to pull jobs from a distributed queue (e.g. protein folding) and the heat generated by the work was released into the room, with a thermistor monitoring temperature to regulate heat output. As each node operated independently, the network had natural redundancy and was, in principle, infinitely scalable.
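The regulation step the paragraph describes amounts to a thermostat with hysteresis: throttle the workload up when the room is too cold, pause it when warm enough, and do nothing inside a dead band so the node doesn't rapidly cycle. The sketch below is a simplified illustration under assumed names and setpoints, not the original control code.

```python
SETPOINT_C = 21.0   # assumed target room temperature
HYSTERESIS_C = 0.5  # dead band to avoid rapid on/off cycling

def control_step(temp_c: float, heating: bool) -> bool:
    """One iteration of the control loop.

    `temp_c` is the latest thermistor reading; `heating` is whether the
    node is currently running jobs. Returns the new heating state.
    """
    if heating and temp_c >= SETPOINT_C + HYSTERESIS_C:
        return False   # warm enough: pause pulling jobs
    if not heating and temp_c <= SETPOINT_C - HYSTERESIS_C:
        return True    # too cold: pull jobs and generate heat
    return heating     # inside the dead band: keep current state
```

In the real system this decision would gate how aggressively the node pulls work from the distributed queue, with the CPU/GPU load itself acting as the heating element.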

Network topology — how nodes coordinated work
Architecture placeholder — to be added
Thermistor control loop — how a single node regulated heat
Diagram placeholder — to be added

Photos

A handful of physical test rigs were hand built to validate the thermistor control unit and to measure how a single node behaved as a heater.


Legacy

The idea wasn't wrong, just early.
The same premise of distributed compute that pays for itself is becoming one of the newest hype categories of 2026.

The grid argument has only sharpened in the years since. The AI datacenter buildout is straining utilities hard enough that several have delayed coal retirements just to feed warehouse parks, exactly the sort of infrastructure failure Prometheus was built to address. The residential grid is already sized for heating loads on cold nights. Running compute through that same capacity, instead of destroying communities for new power lines, costs the grid effectively nothing.

Salad built the market half of what Prometheus was reaching for: a distributed GPU cloud with 60,000+ active consumer GPUs serving inference, image and video generation, and batch jobs at advertised prices "up to 90% off" hyperscaler rates. Customers ship Docker containers to a globally distributed pool of home machines, and operators get paid for the cycles. Salad ran into the exact same limits I did: "longer cold start times than usual", "subject to interruption", "workloads requiring extremely low latency are not a fit".
And the 24 GB VRAM ceiling on their network puts most modern LLMs out of reach.

Orca sits earlier in the same arc, as a peer-to-peer compute marketplace where users keep an app running, earn points, and cash out. They don't specify GPUs or any use cases beyond a stated focus on "social engagement" (because they have no users and no solution to the same scaling problems I hit), and like Salad, they don't talk about heat at all.

Qarnot started with an environmental focus and is currently the leader in Europe for residential distributed compute.

But what mattered most to me is still missing: closing the loop so your heating bill becomes zero.
To bring a sustainable future to our world, not just to earn a few dollars while the fans spin.
Nobody is shipping that yet.

For what I'm working on now, see kaelanyim.com.

Special thanks to Matthew Cai, Jennifer Tian, Yuexiang Wu, Elmor Labs, and the Falling Walls team.