Heat recovery and sustainability have been our focus since I founded Cloud&Heat in 2011. We have 19 locations with heat-recovery servers, most of them heating adjacent buildings. The original idea was that this makes particular sense for decentralized
edge data centers in urban areas.
We found that the key is to achieve water temperatures of 60 °C (140 °F) and beyond to make the transition from waste heat to a valuable resource. This requires quite careful heat exchanger design and flow rate control.
However, we also found that the optimum temperature for chips and CPUs varies largely with their internal structure and semiconductor properties, such as leakage current versus low-voltage operation.
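As a back-of-the-envelope illustration of the flow rate control involved (a minimal sketch assuming a simple single-pass water loop and the standard relation Q = m_dot * c_p * dT; the numbers are illustrative, not from our sites):

    # Estimate the water flow needed to carry a given heat load
    # while holding a target outlet temperature (Q = m_dot * c_p * dT).
    def required_flow_lpm(heat_load_w, t_in_c, t_out_c, c_p=4186.0):
        """Approximate water flow in liters per minute (1 L of water ~ 1 kg)."""
        delta_t = t_out_c - t_in_c
        m_dot = heat_load_w / (c_p * delta_t)  # mass flow in kg/s
        return m_dot * 60.0

    # Example: a 10 kW rack heating water from 45 C to 60 C
    print(required_flow_lpm(10_000, 45.0, 60.0))  # ~9.6 L/min

Raising the outlet setpoint shrinks the required flow, but only if the chips tolerate the higher coolant temperature.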
One of our anchor projects is a data center in the heart of Frankfurt's financial district. We refurbished the former European Central Bank data center into a completely water-cooled data center providing warm water to the skyscraper's
heating infrastructure. If you are interested in more technical information and energy savings, you might check out our white paper:
Well, I asked for data, and now I need someone to translate it for me.
Steve, I look forward to your analysis.
Hello Steve,
Great to be connecting with you.
I'd like to introduce you to Ed Trice. He is the brains behind this technology. Ed Trice, CEO of Nuclear Lightning Enterprises
https://nuclearlightning.com/server_06.shtml
builds the fastest servers and workstations on the planet. He can back up this claim against any computer out there.
To test the speed yourself, go to www.teamviewer.com and download the free program. This is what we use to run the comparative speed and power testing.
To see pictures, from the CAD drawings to the final result, go to this build log:
https://www.overclockers.com/forums/showthread.php/796606-PROJECT-LOG-3-stage-Thermosyphon-vastly-outperforms-quadruple-radiators
I am "Doctor Emmett Brown" in this discussion board.
We can custom build liquid-cooled computers from the ground up, for hyperscale or edge deployments. This is the most advanced liquid cooling technology: the 8-core Intel i9-9900KS chip is overclocked to 5.2 GHz. After running a remote test, I saw that the other liquid
cooling tech could not compete with his thermosyphon technology.
I don't just provide capital for companies. I started Cloud9 Capital Consulting so that I can fund clients' orders for the liquid-cooled edge and hyperscale computers
we build. It's a marriage of technology and funding.
The pumpless thermosyphon technology is capable of cooling entire 72U server racks running at overclocked speeds 24x7. We can run chips faster and cooler than
quad-radiator liquid-cooled solutions because we offer 3-stage (air, water, refrigerant vapor), dual-phase (liquid refrigerant that vaporizes) cooling.
If you need power and speed more than anything else, this is definitely the place to be. The engineer who developed
this is one of the brightest stars in the technology universe. If you have any funding needs, feel free to reach out to me. I can fund any company that has good revenue and EBITDA.
Best regards,
Noel Wideberg
CEO/President
Cloud9 Capital Consulting
www.Cloud9CapitalConsulting.com
772-913-1456

The NREL data center is very efficient. However, when looking at heat recovery, you have to balance the extra power required at higher CPU temperatures against the cost of efficient heat pumps (a one-to-one comparison of watts of heat to watts
of computer power is not appropriate). A hotter-running processor may require 5% more power, while a typical heat pump has a COP of 4.
This means that if you run your 100 watt processor hotter so it uses 105 watts to produce usable heat, and the heat you are replacing requires 25 watts of power to deliver 100 watts of heat, the net 20 watt power savings per 100 watts of heat
may not justify the expense of hooking the computer to the heating system.
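To spell out the arithmetic (a quick sketch of the trade-off above; the 5% overhead and COP of 4 are the example figures, not measurements):

    # Net electrical savings from reusing processor heat vs. a heat pump.
    cpu_power_w = 100.0      # baseline processor power
    hot_overhead = 0.05      # extra power when run hotter for usable heat
    heat_pump_cop = 4.0      # 1 W of electricity yields 4 W of heat

    usable_heat_w = 100.0    # heat delivered to the building
    extra_cpu_w = cpu_power_w * hot_overhead     # 5 W penalty
    heat_pump_w = usable_heat_w / heat_pump_cop  # 25 W avoided
    print(heat_pump_w - extra_cpu_w)             # 20 W saved per 100 W of heat

Whether 20 W per 100 W of heat pays for the plumbing and controls depends entirely on local energy prices and utilization.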
Also, once you hook up a heat source that you can't control to a heating system, you need to have a backup source of heat.
I don't think it makes sense for San Diego, but it might for a place where heat is expensive and the computers are running at full load all of the time.
On 5/19/2020 12:10 PM, Kevin McCoin via groups.io wrote:
Andrew,
NREL has done some really awesome stuff in this space and is currently operating at a 1.025 PUE. The contact for this facility is Otto VanGeet:
https://www.linkedin.com/in/otto-vangeet-pe-828b428/
Here’s a live dashboard that includes a depiction of their recovery approach:
https://www.nrel.gov/hpc/cool.html
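For reference, PUE is simply total facility power divided by IT equipment power (a quick sketch with illustrative numbers, not NREL's actual meter readings):

    # PUE = total facility power / IT equipment power.
    # At PUE 1.025, only 2.5% of facility power goes to non-IT overhead.
    def pue(total_facility_kw, it_kw):
        return total_facility_kw / it_kw

    print(pue(1025.0, 1000.0))  # 1.025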
Thanks!
Kevin L. McCoin
Distinguished Architect, Systems Engineering
Walmart Technology
Enterprise Critical Infrastructure Services
Phone: 479-277-2135 fax: 479-277-4244
kevin.mccoin@...

Wal-Mart Inc.
805 Moberly Lane
Bentonville, AR, 72712-9280-0560
Save Money. Live Better
Thank you Rolf,
This is very much appreciated.
Cheers,
Andrew Witteck | Specialty Markets Manager
Armstrong International
o (772) 213-9541 ext 4025
c (772) 473-3055
armstronginternational.com
Hi Andrew,
Yes, this is currently a topic in the ACS Harmonization project. Best bet is to get in touch with the DCF group. ACS and DCF are working together on topics like this. I will put you in touch directly as well.
Cheers,
Rolf Brink
ACS Immersion
Hi Hesham,
I am not aware of cellular in data centers. I am sorry.
I am interested to find out whether there is any work in OCP projects related to using the heat expelled from air-cooled or liquid-cooled data center halls for thermal utilities in surrounding buildings. Also, is any direct/indirect
evaporative media work being done as well?
Thanks,
Andrew
Hi
I am interested to find out if there is any work in OCP projects related to using cellular in the data center.
Thanks
Hesham