Not as sophisticated as your solution, but it works with all hardware for the "backup use case": use a mechanical timer set to switch power on 10 minutes before the backup starts (and off again 30 minutes after the backup normally ends).
It's a more rigid solution that doesn't let you ssh in (except during backup time ;-), but it saves electricity and can be implemented in 10 minutes (5 for an Amazon order and 5 to plug in the timer and set it to your backup hours). It's also a more robust solution - little can go wrong (the only thing is that you need to rebalance backup time against electricity savings as the backup size grows).
Ha ha, I tried something similar when I had to go to Thailand for my wife's treatment. And failed spectacularly. Fortunately, my laptop had all the files. I didn't have Tailscale at that time.
My desktop: WoL enabled, tested to be working.
Android phone: always on.
Both have Syncthing installed; the mobile also has Tasker.
So, the idea was to have Tasker monitor a folder inside Syncthing. When I need my computer, I put a file inside that folder; when Tasker finds the file, it sends WoL to my desktop and deletes the file, and the computer wakes up. When I see the file deleted, I know the beast has awakened...
When I actually did try that from Thailand, the file did not get deleted, nor did the beast wake up.
What happened? It turns out my mobile restarts automatically after some period of inactivity, which locks Tasker out, and the whole process fails.
Since you're already using Tasker, you could use it to launch Tailscale on the phone at boot - I've got mine set up to do this so my phone reconnects to Tailscale after a reboot.
Now I have another mobile with more control, so restarts are not a problem anymore, and I can SSH into it using Termux. This basically handles all of my needs.
Now I just need to build up enough momentum to set up the process of waking my desktop as needed, and to set up a Radicale CalDAV server on the mobile.
Some motherboards disable power to their ethernet port upon sleep and so WoL will not work.
This is particularly common if the NIC is a power hungry 10GbE port.
However, in the particular case I found, the motherboard also disables power to any USB GigE adapter attached.
The solution I found was to attach a USB hub with (empty) SD slots and an integrated GigE port. As the SD card reader requires power to remain mounted, the motherboard did not shut down power to this adapter, and WoL worked.
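On Linux you can check whether the NIC actually has WoL armed; this is a quick sketch, assuming the interface is called eth0 (yours will differ):

    # Check WoL support and current setting ("g" = wake on magic packet,
    # "d" = disabled). "eth0" is a placeholder for your interface name.
    sudo ethtool eth0 | grep -i wake-on
    #   Supports Wake-on: pumbg
    #   Wake-on: d

    # Arm magic-packet wake; many distros reset this on reboot, so
    # persist it via your network manager or a small systemd unit.
    sudo ethtool -s eth0 wol g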
20 years ago, I used to have a Linux server running Slackware at home that would wake up the two PCs we had to back up their data if they were turned off.
If they were already turned on, they would send a WOL packet to the Linux server to turn it on in case it was off, and then start the backup routine. And the last one would tell the Linux server to turn itself off. It used to work really well, good times.
Thank you for the post, very informative.
I do this half manually. I have a CGI script on the always-on, very power-efficient SBC server that wakes up the bigger server if someone needs it. The big server powers itself down, when no backup is running, at a time when everybody using the server usually sleeps. I thought about improving this by measuring server usage and solar power generation to decide on the shutdown, maybe with an additional warning email, for example: "The server shuts down in 5 minutes due to no demand and no solar power; if you want to prevent this, click this link: http://server.lan/cgi-bin/keepalive"
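For anyone curious, a minimal sketch of that keepalive mechanism, assuming a shell CGI plus a root cron job (all paths, timings, and the rsync check are made up for illustration):

    #!/bin/sh
    # /cgi-bin/keepalive (hypothetical): clicking the link in the warning
    # email refreshes a timestamp file that the shutdown job checks.
    touch /tmp/keepalive-stamp
    printf 'Content-Type: text/plain\r\n\r\nServer will stay up.\n'

    #!/bin/sh
    # Shutdown check, run every 5 minutes from root's cron: power down
    # only if nobody refreshed the stamp in the last hour and no backup
    # (here approximated by a running rsync) is in progress.
    if [ -z "$(find /tmp/keepalive-stamp -mmin -60 2>/dev/null)" ] \
       && ! pgrep -x rsync >/dev/null; then
        shutdown -h now
    fi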
I recall this being quite simple last time I tried: just enable WoL in the BIOS and run etherwake from my router (or from some other machine on the network).
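For reference, the sending side is a one-liner (the interface and MAC below are placeholders):

    # From the router or any always-on box on the same L2 segment:
    etherwake -i br-lan aa:bb:cc:dd:ee:ff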
But this is about waking up on non-magic requests, just any request?
I did this recently as I was struggling to get WoL to work with my consumer PC. It seems like this ultra low-level stuff is a total crapshoot so if you can dodge it by just wiring up the power button, that's a good option.
In the end I just went the whole hog and set up a PiKVM, so now if I mess up the machine's networking (or even completely break the OS) I can still recover it remotely, even though it doesn't have a proper BMC or anything like that.
In general this approach seems ugly in principle, but I really like it in practice. It lets you retrofit solid remote-management capabilities onto consumer hardware, and that way you have a much broader market to buy from.
Note: If you're going to use an SBC _only_ for wake up signals, you might want to look into alternatives for the RPi such as the Radxa RockPi S [1]. My home server, for example, runs continuously at 7W, which beats many RPi models. Of course, a Pi to wake things doesn't need that much power and could be an older model, but even then, you'd still be burning "empty Watts".
Of course, the RockPi doesn't give you any KVM like functionality, though.
I considered doing something like that, but eventually I went for the simpler solution of plugging my little servers into smart plugs. I shut them down, then power off the plug over wifi; I start them by powering on the plugs. The plugs draw very little power. The servers are ARM SoCs and draw 1 to 4 W. One of them has an HDD that draws about 10 W, but I can unmount it and power it off when I don't need that disk but still need the server on (it's also got an SSD).
That's what I was thinking too. My home server consumes like 15 W and is silent.
If you get a rack mounted server made for data centers and stick it in a closet so you can't hear it then yes, i guess this approach makes sense.
How do you accurately measure how much current a PC is drawing at any given time? Do you have some kind of measurement device inline with the power cord?
You could set a calendar schedule for waking itself up and backing up the clients, and at night the server would go into standby only if no clients had been running for X minutes.
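A minimal sketch of that check, assuming Linux, rtcwake, and that "clients running" can be approximated by pinging their addresses (IPs and times are placeholders):

    #!/bin/sh
    # Run from root's cron every 15 minutes at night: suspend only if no
    # client answers a ping, and program the RTC to wake for the backups.
    for client in 192.168.1.10 192.168.1.11; do
        ping -c1 -W1 "$client" >/dev/null && exit 0   # someone is up, stay on
    done
    # Sleep until 03:00 tomorrow (rtcwake takes an epoch timestamp)
    rtcwake -m mem -t "$(date -d 'tomorrow 03:00' +%s)"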
I would be tempted to try using the Pi as a router & firewall with the server on another subnet, having it wake the server using traditional WOL as needed. That feels simpler and more controlled to me. But my overall feeling is that not much power is saved here compared to a well-set-up server. Good project though, quite educational.
Mine idles at around 130 W at the wall. I think it's mostly the hard drives, maybe the SAS controller. I've migrated a few services to a mini PC and started turning it off too.
The main issue for me is the heat. I've got it next to me and 130W of heat adds up in the summer.
Do you usually run it full tilt? What's it typically using on average? I've forgotten how much mine averages, I only remember being surprised how little it sips because it's mostly idle.
It's surprising because reddit (and HN) would make you think you're throwing away tons of money unless you go with some tiny ARM board and that's not true.
Are you talking about the processor's C-states? My old 6th-gen i3 spends most of its time idling around C8, averaging 5 W - really impressive. I suppose newer gens will be even more efficient.
Although Intel processors are efficient, modern AMD processors have much higher idle power usage, due to their chiplet design. They typically use at least 20W more power.
C8 is a good state if you can get it. Intel is really good at this. They don't even bother energizing the L3 caches immediately when exiting deep package C-states. But there are lots of conditions that will inhibit C8, notably an Ethernet link on a NIC capable of PTP. This is why wireless is better.
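If you want to see what your package actually reaches, turbostat (shipped with the kernel's linux-tools) reports per-package C-state residency; column names vary a bit between versions:

    # Sample for 10 seconds and look at the Pkg%pc8 column; something
    # inhibiting deep sleep (e.g. an active NIC) shows up as residency
    # stuck in PC2/PC3 instead.
    sudo turbostat --quiet sleep 10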
If one can afford a GPU with an MSRP of $1999 that was scalped for $2999 during the initial craze, you are probably not struggling to pay your electric bill.
You can use nvidia-smi to set a target maximum power draw and performance mode to bring idle power levels down. Also make sure your computer is using the server/headless mode driver to keep idle power consumption down.
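Concretely, something like this; the 150 W cap is just an example, and the valid range depends on the card:

    # Persistence mode keeps the driver loaded so the GPU can settle
    # into low-power states between jobs.
    sudo nvidia-smi -pm 1

    # Check the allowed range, then cap board power.
    nvidia-smi -q -d POWER | grep -i 'power limit'
    sudo nvidia-smi -pl 150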
1 W is a unicorn, as just plugging in a power supply with no PC parts hooked up will register 1 W. <10 W is more realistic. Select a PC that can run off a laptop charger. Check manufacturer spec sheets for the idle power consumption. Don't install any PCIe cards or hard drives. Use powertop --auto-tune.
Only put the RAM you need in the box, use peripherals with working ASPM, attach them to the chipset's PCIe ports instead of the CPU's root ports, use wireless instead of wired networking, and don't attach a display.
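To check whether ASPM is actually in effect (the output depends on your hardware):

    # Current kernel ASPM policy, e.g. [default] performance powersave powersupersave
    cat /sys/module/pcie_aspm/parameters/policy

    # Per device: LnkCap shows what the link supports, LnkCtl what is enabled
    sudo lspci -vv | grep -i aspm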
Chiplet based Ryzen CPUs inherently have higher idle power draw. Monolithic chips like 5600G have lower idle power draw. The motherboard, power supply, and internal peripherals all need to be carefully selected to get a really low idle figure.
Whatever software you are using is totally, utterly broken. Not sure what else I can tell you. Even a completely decked out Ryzen AI Max Pro 395 idles at 5W in Windows S0 (see: https://h20195.www2.hp.com/v2/getpdf.aspx/c09133726.pdf)
All this complexity to save a few bucks per year on your electricity bill? This is ridiculous, the Pi costs far more than what you can be expected to save.
I think it turned out a lot more complicated than the author expected, but the solution they kindly wrote up will be pragmatic for someone.
(For example, imagine a big home GPU server that is needed only intermittently, and you want it to spin up automatically on network traffic from family's various devices that you can't modify.)
Of course, if you have simpler needs and you're willing to send a WOL magic packet from the devices using it, you can do it in a few lines of shell script: a 1-line ssh-to-something-that-can-etherwake-on-that-vlan, then wait in a loop for the service you need to appear, then a 1-line ssh-to-server-to-shutdown when you're done.
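Roughly like this, as a sketch; every hostname, the MAC, and the port used as a readiness check are placeholders:

    #!/bin/sh
    # Wake the server via a box that can reach its VLAN, wait until the
    # service answers, then power the server down when done.
    ssh router "etherwake aa:bb:cc:dd:ee:ff"
    until nc -z bigserver 445 2>/dev/null; do sleep 2; done   # e.g. wait for Samba
    # ... use the server ...
    ssh bigserver "sudo poweroff"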
In many European countries electricity is quite expensive. In the U.K. for example, running 20 watts nonstop for a year will cost you around $65 on a typical tariff. If you have more than one home server the savings can quickly add up.
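Rough math, assuming a unit rate of about £0.28/kWh: 20 W × 8,760 h ≈ 175 kWh per year, which is about £49, or roughly $65.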
Also, 20 W is fairly low for idle draw, right? I think you can get proper machines down that low if you know what to look for, but most stuff bigger than a mini PC is going to draw 40+ W. I might be slightly miscalibrated though.
Anyway, it's not about the money for me, it's about the aesthetics. Burning power for nothing is yucky.
Edit: just been Googling around. OP is running one of those HP mini PCs. They are pretty efficient! Some go well below 10 W. So yeah, for this specific use case it's unlikely to matter very much. But it's still a useful thing to be able to do in general.
You should try running powertop on it. It will scrape sysfs and look for things that seem misconfigured, and suggest changes to fix them. On one of my machines it enabled some peripheral power saving mode that made a pretty dramatic saving!
(I've also heard that it sometimes suggests power-saving modes that are usually switched off for a good reason - apparently you really don't want some USB controllers going into certain sleep modes, as they take seconds to come back.)
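If --auto-tune bites you, you can apply the tunables selectively instead; powertop's "Tunables" tab is just writing sysfs values like these (the device paths here are placeholders):

    # Enable runtime autosuspend for a well-behaved USB device...
    echo auto | sudo tee /sys/bus/usb/devices/1-2/power/control
    # ...but pin a problematic controller so it stays awake.
    echo on | sudo tee /sys/bus/pci/devices/0000:00:14.0/power/control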
I don't know any sysadmins who would do this. Task scheduler for Windows or rtcwake for Linux. We try to reduce complexity with existing battle tested tools, not create whatever this is. This is definitely not the easy way.
I definitely am. But this is a very solved problem. It just adds brittleness to the system. Maintenance for this is going to suck. It's going to break in less than a year because you forgot to set a static IP address, or a Python library changes, or the SD card in the Pi gets corrupted, or the jack on it fries, or its cheap PSU dies and fries the board, etc. Then you're going to have to try to remember how the damn thing works, then figure out what other tiny change somewhere broke something. You should never add more failure points to infra, ever.
I do agree that it would be nice if no external device would be needed. It does make me wonder if the Pi is truly needed. I'll be looking into this myself as well, I also have power hungry servers that are mostly idling and would benefit from WoL. In some EU countries, energy prices are truly a scam.
> Then you're going to have to try to remember how the damn thing works
Fortunately, the author wrote a detailed essay explaining all of this…
> You should never add more failure points to infra, ever.
Every time you add a new system or a new feature you necessarily “add more failure points”, there's no way around that.
One should avoid introducing more failure points than needed for the functionality you want, that's it.
You say it's a “solved problem”, but you only give solutions to a different problem (starting the server at a scheduled time, when the author wants to start the server on demand).
The complexity in TFA is due only to the author's desire not to use magic packets for waking the server, thus making the state of the server transparent to users.
If you are willing to send magic packets to wake up the server, before using it, you can save money from the electricity bill with negligible complexity.
Nothing to stop you setting that up on the secondary device to trigger the listen/wake scripts, but if someone malicious is on your local network and has permission to trigger WOL, chances are you have bigger issues.
Indeed. Frankly, that would be a nice standardized solution: have your machine register itself with a Bonjour Sleep Proxy. With a used Apple TV off of eBay, it can even be cheaper than a Pi.
While not Linux: I have my Windows 11 rackmount gaming server sleep after 30 minutes but wake every morning at 8am for backups, using WakeupOnStandBy, which works great. I tried the built-in Windows Task Scheduler but it never worked correctly.
Seems to me that if you want to waste time and money engineering your setup to be more net-efficient, just buy a few solar panels and LiFePO4 batteries to buffer. You can run other stuff off them, too.
I always choose “make more money” over “pinch pennies”.