I'm puzzled by Espressif's naming here. We had the ESP32-S3, so "S31" sounds like "S3, variant 1," but this part doesn't really look like a simple S3 variant. And then there's an ESP32-E22, but no E21 or even a plain E2 anywhere.
That was because IA-64 was a completely different, unrelated architecture that, until AMD succeeded with the K8, was "the plan" both for Intel's 64-bit roadmap and for killing off the compatible vendors (AMD, VIA).
They claim that the chip has an "MMU". Unfortunately, this doesn't seem to be a true RISC-V MMU (per the Sv32 specification) integrated into the CPU core itself, but just a peripheral for memory-mapped SPI flash and PSRAM. So, as far as I understand, there is no true process isolation with page faults and demand paging.
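To illustrate: what the datasheet calls the "MMU" is essentially the cache/mapping unit sitting behind ESP-IDF APIs like the one below, which windows external flash into the address space. A minimal sketch, assuming ESP-IDF 5.x and a hypothetical "storage" partition label:

    #include "esp_err.h"
    #include "esp_partition.h"

    /* Map a flash partition into the data address space through the cache.
     * The "storage" label is hypothetical. Note this is static windowing,
     * not Sv32-style paging: no page faults, no per-process page tables. */
    static const void *map_storage(void)
    {
        const esp_partition_t *part = esp_partition_find_first(
            ESP_PARTITION_TYPE_DATA, ESP_PARTITION_SUBTYPE_ANY, "storage");
        if (part == NULL) {
            return NULL;
        }
        const void *mapped = NULL;
        esp_partition_mmap_handle_t handle;
        ESP_ERROR_CHECK(esp_partition_mmap(part, 0, part->size,
                                           ESP_PARTITION_MMAP_DATA,
                                           &mapped, &handle));
        return mapped; /* reads now go through the flash cache */
    }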
Maybe Espressif will notice that there are no RV32 chips with an MMU so far (at least to my knowledge); we only have 32-bit MCUs on one end and 64-bit application CPUs on the other. A RISC-V equivalent of something like the Cortex-A7 is missing.
Interesting that they made a new chip with BLE + BR/EDR again; all the chips after the original ESP32 were BLE-only.
Hope this chip has good low power options so we can use it in Bluetooth audio workloads.
Nah, ESP32s have had Ethernet capability for a while, and ESP-IDF supports it well. I've been using one I built for 5+ years now. Unfortunately, the RMII (Ethernet PHY) interface takes up a lot of the GPIO pins. This part looks like it'll remedy that issue.
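For anyone curious, bringing up the internal EMAC in ESP-IDF looks roughly like the sketch below (IDF 5.x names; the LAN87xx-class PHY, its address, and the MDC/MDIO pins are assumptions that vary per board, and the SMI config field names have moved around between IDF releases). Only the management pins are choosable; the RMII data/clock pins are fixed in silicon on the classic ESP32, which is exactly why it eats so many GPIOs:

    #include "esp_err.h"
    #include "esp_event.h"
    #include "esp_netif.h"
    #include "esp_eth.h"
    #include "esp_eth_netif_glue.h"

    static void eth_start(void)
    {
        ESP_ERROR_CHECK(esp_netif_init());
        ESP_ERROR_CHECK(esp_event_loop_create_default());
        esp_netif_config_t netif_cfg = ESP_NETIF_DEFAULT_ETH();
        esp_netif_t *eth_netif = esp_netif_new(&netif_cfg);

        /* MAC: the internal EMAC, plus the SMI (management) pins. */
        eth_mac_config_t mac_cfg = ETH_MAC_DEFAULT_CONFIG();
        eth_esp32_emac_config_t emac_cfg = ETH_ESP32_EMAC_DEFAULT_CONFIG();
        emac_cfg.smi_mdc_gpio_num = 23;  /* board specific */
        emac_cfg.smi_mdio_gpio_num = 18; /* board specific */
        esp_eth_mac_t *mac = esp_eth_mac_new_esp32(&emac_cfg, &mac_cfg);

        /* PHY: LAN87xx assumed; the address depends on strapping. */
        eth_phy_config_t phy_cfg = ETH_PHY_DEFAULT_CONFIG();
        phy_cfg.phy_addr = 1;
        esp_eth_phy_t *phy = esp_eth_phy_new_lan87xx(&phy_cfg);

        esp_eth_config_t eth_cfg = ETH_DEFAULT_CONFIG(mac, phy);
        esp_eth_handle_t eth = NULL;
        ESP_ERROR_CHECK(esp_eth_driver_install(&eth_cfg, &eth));
        ESP_ERROR_CHECK(esp_netif_attach(eth_netif, esp_eth_new_netif_glue(eth)));
        ESP_ERROR_CHECK(esp_eth_start(eth));
    }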
There are two ESP32 boards that have been around for a while with PoE:
> I'm more hopeful for single-pair ethernet to gain momentum though!
I keep looking for a reasonably priced 10BASE-T to 10BASE-T1L bridge... everything commercial seems too expensive (for me), and the two hobby designs [1] [2] I've seen are not orderable :(
But I'm seeing more commercial options lately, so that's hopeful.
Multidrop SPE isn't going to outperform newer CAN versions, though. Somewhere below 100 Mb/s (say, the 10-20 Mb/s range) is the practical maximum speed of a multidrop bus at useful lengths, and that applies essentially equally to CAN and SPE. The only way to get meaningfully faster in a "multidrop-like" sense is with logically loop-like systems such as EtherCAT and Fibre Channel, where each network segment is point-to-point and the nodes are responsible for forwarding.
On that note, why does PoE capability often add such a big proportion of the price of various items? Is the technology really costly for some reason, or is it just that demand is fairly low and people are still willing to pay?
PoE is not obvious to implement (take it from someone who has done it, with a fair share of mistakes), uses more expensive components than normal Ethernet, takes up more space on the board, makes passing emissions certification more complex, and is more prone to mistakes that ruin boards down the line, causing support/warranty issues. In other words, a can of worms: not impossible to handle, but something you would rather avoid if possible.
I wouldn't call it "better", but the least-effort path among hobbyists and low-end gear is often 12 V or 24 V sent over a pair (power and ground), with a forgiving voltage regulator on the other end.
A full-module add-on in this power class is about $7 at 1,000-unit scale [0]. With your own custom PCB design, the added BoM would be around $3 at scale. That's power only; add another dollar or two for a 10/100 PHY.
The trick, as others have said, is what adding it to your design does in terms of complicating compliance.
PoE power supplies need to be isolated (with rare exceptions) and must handle much higher voltages than common USB-C or wall-wart power supplies.
They have to use a transformer and a more complex control strategy, not a simple buck regulator with an inductor. PoE inputs need to tolerate voltages several times higher than the highest USB-C voltages, so more expensive parts are used everywhere.
Any Ethernet port (well, any RJ45 you'd expect in a home/office) has to have at least 1500 V of isolation between the RJ45 wiring and anything metal that can be touched or any connector on the device.
A PoE-only device with no electrical connectors besides the RJ45 can just use a very cheap RJ45 jack with integrated magnetics and a PoE allowance (slightly bigger wires and an exposed center pin, less than 50 ct more than the cheapest RJ45 with integrated magnetics) and a cheap buck converter from 40-80 V down to e.g. 5 V.
Oh, and a cheap bridge rectifier and some signature resistors to take care of input polarity and to signal to the source that we do in fact want the roughly 50 V that could hurt a device not made for it.
It sounds like the PoE spec was designed before the arrival of "IoT"-type things like the ESP32, Raspberry Pis, etc.
How much of the complexity is a “fundamental electrical engineering problem” and how much of it is just a spec written to solve a different set of problems?
Whenever you combine two things into one, the complexity and cost go up considerably. A regular coffee machine is pretty cheap. Add high pressure so it can make espresso and it gets considerably more expensive. Add milk so it can make cappuccino, again more complex and expensive. The same holds for electronics. Isolating power when it's alone is fairly straightforward. It gets considerably more tricky and hence more expensive the moment you want to place any kind of a meaningful data signal in its vicinity.
> You don't need long cables, just a local power source
Which means batteries that have to be replaced and maintained, or cables... So Ethernet with PoE, or better yet SPE (single-pair Ethernet) with PoDL (power over data lines, i.e. PoE for SPE), is the best option from my point of view.
I mean, just look at my house: there is one Ethernet outlet but many power sockets. If I want to connect devices all over the house, the best way is Wi-Fi plus USB power adapters, not Ethernet.
Both solutions require one cable per device, but the first needs only short, thin cables, while the second needs very long cables that I don't even know how to run properly without milling channels into my walls.
Yep. Mains electricity is ubiquitous, highly interoperable, very reliable, offers very high power per drop, can be outdoor-capable, follows common standards, is understood by users, requires no active components, and has plenty of on-call experts available to come fix problems or extend/alter the wiring. Mains wall plates with built-in USB power outlets are even available at quite low cost, if the look of the bigger plug and wiring isn't appealing.
PoE is far fewer of those things. It's difficult to recommend these days, with Wi-Fi being fast, reliable, and so widely used. Certainly not for the average residential user.
That's half the equation. The other half is the reliability and security of Wi-Fi, which is lower than Ethernet's, at least against anyone without physical access to the innards of my walls.
On the other hand, _all_ the WiFi devices that I had at some point fell off the network, at least once. Including doorbells and cameras. While PoE devices just work.
Another point is that mains power in my area can go down periodically. My PoE switch is powered by a Li-Ion UPS and can provide power for about a day.
Can't you run a 5V supply from where your router is all the way to every god damn device in your house, and then pretend the wifi is also going through it? If you just want it to be inconvenient, there's no reason to let a lack of PoE stop you!
I don't understand what possesses these folks to continue making 2.4 GHz devices. I understand there are use cases for low bandwidth and high range, but surely we've passed the point where that's more desirable to most people than lower latency and higher throughput, right?
Is what you described true for all IoT devices? If I have line of sight to my AP, why do I need 2.4 GHz? Even so, what SNR do you truly need for such a low-bandwidth application? Where is the engineering here?
I'm in the unique position of having a data set covering over 8,000 APs with 40k unique devices. If you design properly, there is no need for 2.4 GHz, ever. 2.4 GHz congestion (with almost no actual 802.11 traffic) is very high, to the point where the IoT folks are struggling.
My 2.4 GHz is basically all IoT these days; things that matter are on 5 or 6 GHz. I'm busy firewalling the whole 2.4 GHz side off entirely, given how clean the separation is.
Yes. And 2.4 lives and dies by that sword. What downsides might there be in areas where dozens of APs hear each other and 100s of clients hear each other?
It's an IoT device, not a laptop. It does not really need 5 GHz to fulfill its purpose as an embedded CPU, and adding 5 GHz would likely require making room for it by removing other functionality.
Yes, and in some use cases it works against you. 2.4 GHz is incredibly crowded even without adding 802.11 to the mix. My IoT admins would have fewer complaints if they could take advantage of my small-cell 5 GHz spectrum. This isn't 2005, with widely deployed asymmetrical wireless networks.
Can't you just turn down the transmit power on a 2.4 GHz radio if you need networks that don't bleed into each other as badly? Unless it's an issue because of the tiny antennas that usually come on microcontrollers.
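On Espressif parts you can at least cap it in software; a one-liner sketch using ESP-IDF's esp_wifi_set_max_tx_power (units are 0.25 dBm, valid range roughly [8, 84], and it must be called after Wi-Fi has started):

    #include "esp_err.h"
    #include "esp_wifi.h"

    /* Shrink the cell: 32 * 0.25 dBm = 8 dBm, well below the ~20 dBm
     * default. Call this after esp_wifi_start(). */
    ESP_ERROR_CHECK(esp_wifi_set_max_tx_power(32));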
I assume their chips don't really exist until they're actually supported by ESP-IDF. The ESP32-C5 was announced in June 2022, received initial ESP-IDF support in August 2025, and got more complete support in December. It seems to have only recently started getting third-party dev boards.
It would be good if this chip had idle current comparable to other MCUs. I have used the ESP32-S3, and its idle current with the radio enabled but not transmitting is quite terrible.
My application needed both CAN bus and Bluetooth (though no Wi-Fi), so the S3 was one of the only options available. I suspect the high current draw is because Wi-Fi and BLE share the same radio?
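For what it's worth, on the Wi-Fi side the big idle-current knob in ESP-IDF is modem sleep, which keeps the association but gates the RF between DTIM beacons; a minimal sketch (it won't help a BLE-only workload):

    #include "esp_err.h"
    #include "esp_wifi.h"

    /* WIFI_PS_MAX_MODEM trades wake-up latency for the lowest
     * associated-idle draw; WIFI_PS_MIN_MODEM is the gentler default. */
    ESP_ERROR_CHECK(esp_wifi_set_ps(WIFI_PS_MAX_MODEM));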
Realistically, 2.4 GHz is far from "greatest backward compatibility", since there is a real benefit to running 5 GHz- and 6 GHz-only networks.
2.4 GHz makes sense because this tiny device does not need a high-speed Wi-Fi connection, and deployment scenarios benefit more from 2.4 GHz's penetration.
Without being hands-on, it's difficult to make a direct comparison. There are two processors according to CNX [0], and the HP core's instruction set might be roughly comparable to the M55's.
Don't know the specifics of the Espressif RISC-V cores, but in general they can't really compete with ARM on those fronts.
ARM is a much more mature platform, and the licensing scheme helps keep the physical implementations of the cores really good, since some advances get "distributed" through ARM itself.
Compute capability and power efficiency are tied closely to the physical implementation, which for the most part happens behind closed doors.
You can run Linux on RISC-V without an MMU. There is mainline support for the Kendryte K210 chip, so it should be possible to port it to this chip, provided you have enough PSRAM.
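For reference, mainline already carries a no-MMU defconfig for the K210, so the build side is in place; a rough sketch (option names as in recent mainline trees; a port to an Espressif part would still need its own SoC and device-tree support):

    # Build a no-MMU RISC-V kernel using the in-tree K210 config:
    make ARCH=riscv CROSS_COMPILE=riscv64-linux-gnu- nommu_k210_defconfig
    make ARCH=riscv CROSS_COMPILE=riscv64-linux-gnu- -j"$(nproc)"
    # Key options behind that defconfig:
    #   # CONFIG_MMU is not set   -> kernel runs without paging
    #   CONFIG_BINFMT_FLAT=y      -> flat-binary userspace (no demand-paged ELF)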
Although, I'd like to see some head-to-head reviews from non-sponsored bloggers, benchmarking instruction-cycle efficiency per watt across comparable Arm, ESP32 Xtensa LX6*, and RISC-V parts.
* Metric crap tons of WROOM parts are still available and ancient ESP8266 probably too.
The native SDK and the VS Code plugin are very professional. There is a bit of a learning curve to get into it, but once you do, it's very functional, and the developers are super supportive; they have fixed bugs for me within days.
Still requires using Rust compiled against their LLVM fork. 'espup' makes it easy if you're okay with using it.
Other than that, it works pretty well. That's if you use ESP-IDF; with bare-metal Rust it's either the best thing ever or meh. The Rust community seems to favor STM32s and Picos more.
It's not like creating a chip gives you unfettered access to it. You _can_ add 0-day flaws and backdoors, but these can be discovered, leaked, etc. Has there been any case of such a backdoor built into consumer chips like these? I'm not talking about CIA ops like Snowden described; that's supply-chain interception. I mean, has anybody ever found such a backdoor?
Well, that depends on what you count as a backdoor, but Espressif has had some questionable flaws:
- Early (ESP8266) MCUs had weak security, implementation flaws, and a host of issues that meant an attacker could hijack and maintain control of devices via OTA updates.
- Their chosen way of implementing these systems makes them more vulnerable: they explicitly reduce hardware footprint by moving functionality from hardware to software.
- More recently there was some controversy about hidden commands in the BT stack, which were claimed to be debug functionality. Even if you take them at their word, that speaks volumes about their practices and procedures.
That's the main problem with these kinds of backdoors: you can never really prove they exist, because there are reasonable alternative explanations; bugs do happen.
What I can tell you is that every single company I've worked at that took security seriously (medical implants, safety-critical industries) not only banned their use in our designs, they banned the presence of ESP32-based devices on our networks.
Except if you penetrate the market with modules that cost 5% of comparable US-made solutions, you start to win mindshare. At least some of those hobbyists go on to make products, and sometimes whether a product is "safety critical" isn't agreed upon until after it has failed catastrophically.