We propose to focus the scope of our program on intentional radiators that generate and emit RF energy by radiation or induction. Such devices – if exploited by a vulnerability – could be manipulated to generate and emit RF energy to cause harmful interference. While we observe that any IoT device may emit RF energy (whether intentionally, incidentally, or unintentionally), in the case of incidental and unintentional radiators, the RF energy emitted because of exploitation may not be enough to be likely to cause harmful interference to radio transmissions.
I guess it is the FCC so this makes sense from their point of view. From my perspective, I'd like to see marks indicating:
* If the devices can be pointed to an alternate API provider if the company stops supporting
* If firmware has been escrowed / will be made available if the company stops supporting
* If device data is stored by the company
* If that data is certified as end to end encrypted
* Some marks for who / how the data is used
* data stored/transmitted is secured by some kind of means
* the device supports software updates
* the device requires users to authenticate
* the device has documentation
* you can report security vulnerabilities to the developer
And even these are things that many devices fail to do, today. We gotta get the basics fixed first.
But for now, you can presume the Netflix button on your TV remote can't be configured to point to an alternative API if Netflix goes away. :)
'Cause they need somewhere to load in those exploits!
A hypothetical device which is all read-only (except perhaps for a very carefully crafted, limited set of configurable parameters) might in some cases be more secure than the bulk of what's on the shelves today. After all, how many widespread hacks do you read about on old, single-purpose fixed analog or digital devices (which in a sense are similarly 'read-only')?
The more popular the device, the more knowable upside to an exploit. If the device can be updated, then usually the exploitable timeframe is limited and it's unknown if the attempt is even worthwhile.
> After all, how many widespread hacks do you read about on old, single-purpose fixed analog or digital devices (which in a sense are similarly 'read-only')?
Well basically because any device of consequence is trivially hacked by now. Think about game consoles or anything that would have DRM today.
If your threat model for consumer IoT devices does not include manufacturers in 2025, you are completely confused about computer security. Having a standard to encourage devices to talk to manufacturers is completely backwards. We should have certifications that devices create no outbound TCP/UDP flows.
https://news.ycombinator.com/item?id=42624930
I do not know for sure whether these devices use UPnP or similar, but considering that they are not intended to be accessible from the Internet, probably not. The blame probably lies with (in this case) all the random government agencies deploying the devices in an insecure way. But assigning blame won’t fix the problem. Something needs to change, and it’s probably going to be the devices.
Consumer devices are different. On one hand, they’re less likely to be exposed to the public Internet… at least proportionally. I think. But on the other hand, consumers expect to be able to access their devices from anywhere, and right now in practice that means going through a manufacturer-controlled proxy server. I would love if someone would come up with a standardized mechanism to make home devices securely remotely accessible, without a manufacturer-controlled proxy, but just as easy to use as the status quo. Until that happens, don’t expect anything to change.
That's a valid answer for an audience familiar with computer networking concepts. It's a silly suggestion for consumer IoT customers, who do not understand those concepts. They don't know what is or is not 'on the open internet'; they buy a product at the store and plug it in.
> We should have certifications that devices create no outbound TCP/UDP flows.
This is the "bury your head in the sand" method of solving the problem. If you design your requirement so that zero consumer accessible devices are capable of meeting them, then what's the point? As long as people (1) want to watch their camera away from home, and (2) don't have the networking expertise to configure a remote access VPN tunnel, the devices are going to have to reach outbound to traverse home router firewalls.
It would be technically quite easy for either a dedicated home-access box or just the router-AP combo box to have some auto-config WireGuard setup (e.g. scan a QR code or install an app that looks for the box on the local network or through Bluetooth). This would be far more secure than the current setup, which is for devices to constantly connect to generally malicious C&C servers. If regulations pushed for actual security (no-cloud), this would be the obvious solution to guide the market toward. Then you only have to trust your gateway device, which also would have no reason to ever create outgoing Internet connections, though it would need outgoing/forwarded LAN connections.
With SLAAC to generate a random initial IPv6 address that it never rotates, combined with UDP so there's no indication that you talked to anything if your WireGuard keys are wrong, there's basically no way to find such a box if you didn't have the correct config.
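A minimal sketch of what that auto-config handshake could look like, assuming the `wg` CLI and the third-party `qrcode` Python package are available; the gateway public key, endpoint, and addresses below are illustrative placeholders rather than anything proposed above:

```python
# Sketch: a gateway box minting a one-shot WireGuard client config and
# rendering it as a QR code for a phone app to scan. Illustrative only.
import subprocess
import qrcode  # third-party package (pip install qrcode[pil])

def wg_keypair() -> tuple[str, str]:
    """Generate a WireGuard private/public keypair via the wg CLI."""
    priv = subprocess.run(["wg", "genkey"], capture_output=True, text=True,
                          check=True).stdout.strip()
    pub = subprocess.run(["wg", "pubkey"], input=priv, capture_output=True,
                         text=True, check=True).stdout.strip()
    return priv, pub

def client_config(client_priv: str, gateway_pub: str,
                  endpoint: str, client_addr: str) -> str:
    """Build the [Interface]/[Peer] config text the phone imports."""
    return "\n".join([
        "[Interface]",
        f"PrivateKey = {client_priv}",
        f"Address = {client_addr}",
        "",
        "[Peer]",
        f"PublicKey = {gateway_pub}",
        f"Endpoint = {endpoint}",        # the box's stable (never-rotated) IPv6 address
        "AllowedIPs = 192.168.1.0/24",   # route only the home LAN through the tunnel
        "PersistentKeepalive = 25",
    ])

if __name__ == "__main__":
    priv, pub = wg_keypair()
    # Gateway public key and endpoint would come from the box itself (placeholders here).
    cfg = client_config(priv, "GATEWAY_PUBLIC_KEY_BASE64",
                        "[2001:db8::1234]:51820", "10.99.0.2/32")
    qrcode.make(cfg).save("wg-client.png")  # shown on the box's local setup page
    print(pub)  # the gateway registers this key as an allowed peer
```

The phone scans the QR code, imports the config, and from then on remote access is a direct WireGuard tunnel to the home gateway, with no manufacturer-controlled proxy in the path.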
Because IoT devices have historically been known as secure? Definitely not. Devices that presume someone else has already configured a firewall correctly often presume wrong. Consumers are not networking professionals.
> It would be technically quite easy for either a dedicated home-access box or just the router-AP combo box to have some auto-config wireguard setup
Well yeah, if everything about home networks was different, then the situation would be different. The problem is, that isn't the reality in which IoT devices are manufactured.
> If regulations pushed for actual security (no-cloud), this would be the obvious solution to guide the market toward.
If they pushed for this, the only solutions with the sticker would be ones that are commercial failures because they won't work out of the box with the router people actually have at their house. You may be 'right' but your labelling program will have failed. A labelling program has to be realistically achievable within the current reality to have any effect, otherwise it'll just be ignored by manufacturers.
Incremental improvements, such as this, are not bad, even if not perfect. People are going to buy doorbell cameras that connect outbound to the internet, because the technology works out of the box.
> After all, how many widespread hacks do you read about on old, single-purpose fixed analog or digital devices (which in a sense are similarly 'read-only')?
Quite a lot -- these are some of the easiest devices to hack. The only saving grace is that most of them are not connected to the internet so they are only vulnerable to local attacks. But garage doors, cordless phones, keyless entry, smart locks, smart home protocols, etc are notoriously vulnerable.
The reason you don't hear about new vulnerabilities each week is precisely because they aren't updatable. Not receiving updates that might introduce new vulnerabilities is no advantage when the old vulnerabilities are there permanently.
Disagree, it is extremely common for e.g. TVs and smart phones to ship with malware included. In fact it is almost impossible to buy some classes of devices that aren't intentionally compromised.
Having the thing never connect to the Internet at all and never receive updates is a far better security posture, and is the common recommendation among knowledgeable people for e.g. TVs.
In practice, your neighbors are almost certainly quite a bit less malicious than whatever a smart device might talk to on the Internet. Your neighbor isn't going to hack your cordless phone. Your TV manufacturer is definitely going to drop malware onto it, disable functionality (i.e. damage it), etc.
One thing the government can or perhaps should mandate, and is easily verifiable, is kill switches - devices should be physically incapable of connecting to any wireless network when a kill switch is engaged. If the FTC or a trade regulator wants to regulate at another level, maybe they could also say certain classes of devices must continue to function when disconnected (I might want the Apple TV plugged in to the TV connected to the internet, but the TV itself totally disconnected). Because some devices now will surreptitiously search for open WiFi networks and try to go online even if a user does not connect via WiFi, this seems reasonable.
If we go down the route of a government agency creating and mandating security,
#1 Government back doors will persist and perhaps even be required (relevant to the whole TikTok thing this week).
#2 They will be unable to quickly respond to new threat surfaces while still representing that whatever is present is secure (it isn't).
>it is extremely common for e.g. TVs and smart phones to ship with malware included
These aren't mutually exclusive. With that being said, I'm probably with you when it comes to the overall debate. For something to get exploited it needs a vulnerability and a means to exploit it. The most insecure device in the world can't be exploited if I'm the only one who has the means to do it. Unfortunately we live in a world where Zawinski's Law has itself expanded such that everyone wants to be able to access everything from everywhere, which rules out airgapping a lot of devices. It's a consumer economy - we have to build what people want and then secure it. We don't have the luxury of building secure things and then convincing people to want them.
Both are common! But there are many hundreds of thousands of known software vulnerabilities.
> Having the thing never connect to the Internet at all and never receive updates is a far better security posture
It might be, depending on the particular situation. But it doesn’t really matter for IoT devices, because they all, by definition, connect to the internet. “Don’t connect to the internet” is a nonsensical suggestion for IoT devices.
I actually just spent time last week getting rid of tftpd, telnetd, netcat, etc. on some IP cameras.
You only need a few KB of RAM to run a bot, especially since it is almost the rule that embedded systems run everything as root.
If you have the ability to do firmware extraction, look at just how bad the industry is now.
Case in point: the time 700,000 Netgear routers pinged the University of Wisconsin–Madison NTP server (hardcoded IP address) every second.
https://en.wikipedia.org/wiki/NTP_server_misuse_and_abuse#Ne...
Tons and tons? I don't understand this viewpoint at all. As the saying goes "There is no 'Internet of Things', just an Internet of unpatched Linux devices." That is, the primary vector for malware is devices that aren't (or can't) be patched after vulnerabilities are discovered.
No Bluetooth, no Wi-Fi, no protocol sophisticated enough to distribute code. Just locally transmitted instructions.
People thought old analog 900 MHz cordless phones were fine until others realized you could just tune a radio to that frequency and listen to your neighbors.
The problem with saying you need auth and crypto is now you just added a bunch of complexity you have to maintain and update and hence now you've introduced vulnerabilities.
But the 'I' in IoT is for internet. "Don't build IoT devices" is not a helpful proposal to increase the security of IoT devices, which is the scope of this initiative.
I would argue the crisis of IoT security is caused largely by poor IoT design.
Yeah, but most people are going to do that. Most people aren't security-conscious professionals, and they do like cheap things.
In a hypothetical reality where home networks were better designed to accommodate remote access, we wouldn't have this problem. And for those of us who can configure networks to be securely accessed remotely, there are definitely better ways to do things.
But that isn't the reality of the landscape of consumer IoT -- which is that people expect to buy a cheap device, connect it to the wifi network of any consumer wifi router, and have it work out of the box. They are already buying these devices regardless of whether they are secure, and will continue to do so. This initiative is about encouraging reasonable incremental changes to the existing reality.
If the requirements for this label were drastic enough that they required people to secure the devices behind a firewall, store data locally, and provide remote access only with an inbound VPN or something like that, it would simply be ignored by manufacturers and would have zero impact. Because vanishingly few people are going to replace their Comcast modem/router just to install some IoT device. To most people, they "get wifi" from their ISP. The concept of "reconfiguring a home network" is a nonstarter. Whatever the ISP provides is what normal people use, by default.
It's fine to have a computer or "gateway" device that calls outbound to a server for outside access, no firewall rules or VPN required. The point is there should only be one point of contact with the outside world, and that's a tech device with enough power to update, secure, etc. As Wi-Fi standards and such change, people should expect to replace it.
Hardware in and on your walls should be dumb, cheap, and long-lasting. Insteon's technology hasn't substantially changed in twenty years, most of my smarthome hardware is over ten years old, and is no less secure or current than when I installed it. And of course, all of it works just like a "dumb" counterpart when the Internet is down or there's no smarthome controller involved. This should be cheaper at scale than "wi-fi smart outlets", if they aren't selling your data to offset the cost.
I'm glad you brought up Insteon. They're a great example of this: they failed commercially.
https://www.pcmag.com/news/smart-home-company-insteon-shuts-...
I understand the benefits to the architecture you're advocating for. Personally, I started with X10 in 1995. But that simply isn't what people buy anymore. People are buying individual smart-home products, not integrated systems.
The requirements of this program are to address the reality of the types of devices people are actually buying, and the security concerns that affect them.
I disagree with your assumption that you understand consumers: many prefer to buy all their products from unified systems, and complaints about how disconnected and disjointed a collection of odds and ends is have led to Matter, which is struggling to solve the problem.
The major players have all sold home automation hubs, and a lot of solutions still use them, but a lot of the hardware is still overengineered, has a short usable life, and creates security risks.
It is a fact that all of the top selling devices in this market overwhelmingly connect to WiFi directly.
e.g.: https://www.amazon.com/Best-Sellers-Smart-Home/zgbs/smart-ho...
Is that all it's capable of or all you use it for?
You could bring a custom-programmed RF transceiver and plant it on my property for the job, or you could use a $5 wrench to shut off my natural gas and pull the power shutoff on my AC condenser.
I think the latter is infinitely more likely.
> But for now, you can presume the Netflix button on your TV remote can't be configured to point to an alternative API if Netflix goes away. :)
It is HackerNews, so your statement is true UNLESS you're willing to hack your TV. (But this shouldn't be a thing people _have_ to do...) Warning: I haven't personally tried this.
> the device requires users to authenticate
Why in the world would anyone want unauthenticated access to these devices?
Regardless, the commonly expected use case for IoT devices is for people to be able to access them from their mobile device, on the internet (thus the 'I' in IoT). IoT devices, as their name implies, are on the internet, and need authentication because of this.
The problem is that this use case is real, and people are buying these devices, and so, how do we make them better? "Just don't do that" doesn't address the problem, it's a dismissal of it.
IoT is a marketing term. Networked devices don't necessarily need to use the Internet, and indeed most of the time there isn't even a use-case. You're going to open your garage while you're at the store?
Honestly I don't see the use-case for almost any IoT thing though. It mostly seems like gimmicks (color changing/dimming lights) or adding unnecessary complications that make it less secure and more failure prone so that someone can sell you a service (a cloud app for your garage instead of a remote/keypad, an app for your door instead of a key).
> You're going to open your garage while you're at the store?
One would most likely want to close it then :)
But really, I have an IoT garage door opener so that I can check to make sure it is closed if I forget. This is a common use case. Also, opening it for others when you are away.
> Honestly I don't see the use-case for almost any IoT thing though.
Okay, here's a few: Cameras are useful to see when people are at your house, when packages arrive, etc. Security sensors (or the aforementioned cameras) are also useful to check on the security of your home while you are away and respond if something unordinary happens. People like automatic pet feeders and automatic vacuums to perform tasks while they are away. People like thermostats that can be changed to more economical settings while away, and returned to comfortable settings when (or shortly before) they arrive. AirBnB hosts like being able to change the door code on their properties between visitors, and to monitor and secure their property while not physically there.
If you want some more, just read the reviews of these devices on a site like Amazon, and you'll see what people use them for.
People buy them because they find them useful. That doesn't mean you have to like them or want them. But an initiative to encourage manufacturers to implement basic best-practices is a good idea for other people regardless of whether you personally want them or not.
The AirBnB use-case seems fair enough. I'll still maintain that having devices connect to hostile C&C servers (which is the current status quo) is not a basic best-practice or an incremental improvement. The recent story about inverters being bricked by a distributor demonstrates why. Literally damaging electrical infrastructure (which is what happened) is one of the boogeymen used to push for better security, and can put people's lives at risk. That attack should have never been possible.
The ones that are on the internet also allow you to trigger them remotely, monitor status, trigger automatically based on geo-location, etc.
Are you trying to say that people could do things another way? Of course they could. But they aren't. They are buying IoT devices, because they like the features.
Personally, I like to take my thermostat off of the eco temperatures before I go home, which is not at the same time every day. I don't think this is a strange use case.
Also, I like to trigger my vacuum remotely when I'm already out of the house. I don't leave the house at the same time every day, so I don't have it set on a timer. I could turn it on before I leave, but I don't really want it trying to roll out the door or bump into me while I'm putting my coat and shoes on.
Again, your critique here seems to be your own personal dismissal of these features, which really isn't relevant. Other people buy and use them.
It's not random hackers in Russia that rendered people's solar setups inoperable. It's the inverter company using their backdoor. Ubiquitous backdoors and pre-bundled malware are the top security problems in the industry.
I don't necessarily agree with that. The most widespread issues to hit people in the US in recent years are not malicious cloud providers, but credential stuffing attacks against otherwise legitimate and reputable services.
> In practice, any home network in the last 20 years is behind a firewall that blocks incoming connections unless you go out of your way to open the firewall/forward ports, and will be secure from other attackers by default.
Right, the instructions with those legacy IP cameras instructed users to open ports to access them remotely.
> It's not random hackers in Russia that rendered people's solar setups inoperable. It's the inverter company using their backdoor.
Yeah, that's a problem. But probably not addressable through this program. There's nothing a voluntary labelling program can do to protect you from a vendor that wants to fuck you over. One would presume a bad actor would simply: not volunteer to give up their ability to be bad.
Protecting customers from vendor abuses is not just a cybersecurity problem, it's a warranty and contract problem (or criminal fraud problem), and is probably better handled that way.
At least for Android TV devices, Button Mapper works for some.
https://play.google.com/store/apps/details?id=flar2.homebutt...
This needs better and more detailed clarification. I've reverse engineered a camera-equipped pet feeder, and videos sent to a cloud (or my emulating server in my case) were partially encrypted - I-frames were, P-frames were NOT. Someone ticked a checkbox "videos are encrypted", and still left the thing glaring open.
Then, of course, it's also a matter of ciphers and modes, authentication, key generation, transmission and storage, etc etc.
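To make that concrete, here is a rough heuristic sketch (my own illustration, not the tooling used above) for spotting that kind of half-encrypted stream: split an H.264 Annex B capture into NAL units and compare byte entropy by slice type. Properly encrypted payloads look near-random (close to 8 bits/byte); plaintext P-frames usually don't. The capture filename and the 7.5 bits/byte threshold are arbitrary choices.

```python
# Heuristic audit of an H.264 Annex B capture: flag NAL types whose payloads
# look like plaintext (low byte entropy), e.g. unencrypted P-frame slices.
import math
from collections import Counter, defaultdict

def split_nals(data: bytes):
    """Yield NAL unit payloads from an Annex B byte stream (00 00 01 start codes)."""
    i = data.find(b"\x00\x00\x01")
    while i != -1:
        j = data.find(b"\x00\x00\x01", i + 3)
        yield data[i + 3 : j if j != -1 else len(data)]
        i = j

def entropy_bits_per_byte(buf: bytes) -> float:
    counts = Counter(buf)
    n = len(buf)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def audit(path: str) -> None:
    by_type = defaultdict(bytearray)
    with open(path, "rb") as f:
        for nal in split_nals(f.read()):
            if nal:
                by_type[nal[0] & 0x1F].extend(nal[1:])  # H.264 nal_unit_type = low 5 bits
    for ntype, payload in sorted(by_type.items()):
        if not payload:
            continue
        h = entropy_bits_per_byte(bytes(payload))
        label = {1: "non-IDR (P) slice", 5: "IDR (I) slice"}.get(ntype, "other")
        flag = "looks like plaintext!" if h < 7.5 else "high entropy (plausibly encrypted)"
        print(f"type {ntype:2d}  {label:18s}  {len(payload):9d} B  {h:5.2f} bits/byte  {flag}")

# audit("petfeeder_capture.h264")  # hypothetical capture file
```

Something this crude already catches the "checkbox ticked, P-frames glaring open" case described above, which is exactly the kind of check a meaningful label would need behind it.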
Feels like encrypted storage and transmission features alone require a full label of their own, like the FCC's broadband facts label or the FDA's nutrition facts label, which outlines what data exists in the system, where the data is stored, how it's encrypted, how it's authenticated, and so on.
Which is probably not happening until cryptography 101 becomes part of the general school curriculum and laypeople start to understand the basics. Without people asking real questions and refusing to purchase products from sloppy engineering companies (aka voting with their wallets*), companies will always wave it away with the tried-and-proven "military-grade security" bullshit.
___
*) That is, if there's even a competition. When no one does things right (because consumers don't know and thus don't ask for it), there's nothing to pick from.
I don't do hardware at all so this may be infeasible or misunderstood but I imagine a scheme whereby one needs the encryption key in order to properly change the key that the hardware attestation firmware is expecting. The attestation key is encrypted with a separate private key and is decrypted by the firmware with the corresponding public key.
Presuming that's feasible, it would only really work until that private key is leaked and our hostile trade partners pinky promise not to use it. Perhaps some licensing could be used to make the people who own the device responsible for repairing it at an approved repair shop, but that still has to be enforced.
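For what it's worth, here is a minimal sketch of one plausible reading of that scheme, using Ed25519 signatures (in practice, "encrypt with a private key, decrypt with the public key" is what a signature amounts to) via Python's `cryptography` package; the key names and update flow are hypothetical, not anything from the proposal:

```python
# Sketch: firmware ships with the vendor's public key baked in and accepts a
# replacement attestation key only if the update is signed by the vendor.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)

# Vendor side (offline): sign the replacement attestation key.
vendor_priv = Ed25519PrivateKey.generate()
VENDOR_PUB_BAKED_INTO_FIRMWARE = vendor_priv.public_key()

new_attestation_key = b"replacement attestation public key bytes"
signature = vendor_priv.sign(new_attestation_key)

# Device side: firmware installs the key only if the signature checks out.
def install_attestation_key(candidate: bytes, sig: bytes,
                            vendor_pub: Ed25519PublicKey) -> bool:
    try:
        vendor_pub.verify(sig, candidate)  # raises on tampered or unsigned keys
    except InvalidSignature:
        return False
    # ...write `candidate` into the attestation firmware's key slot here...
    return True

assert install_attestation_key(new_attestation_key, signature,
                               VENDOR_PUB_BAKED_INTO_FIRMWARE)
```

As the comment above notes, the whole thing only holds until the vendor's private key leaks or the vendor itself turns hostile.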
Seems reasonable from the FCC's perspective, but I'm not sure how I'd feel about it.
This is the best strategy, but let's be clear... consumers who make a purchase have a reasonable expectation of owning a durable product that does not increase the threat surface of consumers' lives.
This means that the product requirements should be clear and the supply chain must be secure.
Until a "trust label" can guarantee these principles, the proposal is just another prop in a grand security theatre.
I'd still put my faith in other indicators like a company's track record, third party audits, robustness of open source library choices where applicable, my own analysis of their stack and engineering choices based on signs I can observe about their product / interface / etc (there are usually several present), my own testing and so forth.
I'd argue the generally accepted pace of consumer product development these days is reckless, and not sustainable if you want truly robust results.
I would have been glad to see this step in the right direction if I weren't convinced all it will likely amount to in practice is security theatre. Here's hoping my skepticism is unwarranted.
The scary bit is that this label is going to be found to be ineffective, and then consumers may lose trust in government-issued safety stamps.
Suffice it to say, the keywords are a Google dork for finding easy-to-hack pentesting victims.
Now the BSI (the German institute for cybersecurity, similar to CISA) has also started to push out certifications for the BSI Grundschutz, which is an absolutely meaningless certificate and literally tests the absolute bare minimum.
The problem here is that there is no market; this cybersecurity crisis cannot be solved economically, because customers want a certificate without having to do further work. So they'll get it from whatever auditor accepts their money.
This is how it's done, even for ISO 27001 and SOC 2 certifications. Nobody gives a damn if a single working student has 20+ role descriptions lying on their table. Findings are always ignored and never corrected.
Cyber security policies and their effects over time need to be measurable first before there can be certification processes.
Additionally, there needs to be legislation that leaves no room for interpretation. Things like "reasonably modern" cannot be used in legal text because they don't mean anything; instead, standardized practices have to be made mandatory requirements. Preferably by a committee that is not self-controlling, maybe even something like the EFF, FSF, OWASP or the Linux Foundation.
Well, there's SELinux and Tor
EDIT:
Found the answer for you since you can't be bothered (I previously saw the date in some other doc while reading about this):
https://www.fcc.gov/CyberTrustMark
> When was the U.S. Cyber Trust Mark program created?
>> In August 2023, the FCC sought public comment on how to create the Cyber Trust Mark program. In March 2024, based on public input, we adopted rules establishing the framework for the program.
1) What are the requirements for the mark? E.g. no passwords stored in plaintext on servers, no blank/default passwords on devices for SSH or anything else, a process for security updates, etc.?
2) Who is inspecting the code, both server-side and device-side?
3) What are the processes for inspecting the code? How do we know it's actually being done and not just being rubber-stamped? After all, discovering that there's an accidental open port with a default password isn't easy.
Yep, pretty basic stuff, like 'require authentication', 'support software updates', etc
> 2) Who is inspecting the code, both server-side and device-side?
UL is administering the program and they're going to come up with the requirements
> UL Solutions will work with stakeholders to make recommendations to the FCC on a number of important program details, like applicable technical standards and testing procedures, post-market surveillance requirements, the product registry, and a consumer education campaign.
So now, any interested party (any human or entity, even a "group of hackers") could ask the responsible parties, or could talk with their deputies, as their contacts should appear soon.
1) Don’t be select Chinese products
2) Be select American products
It’s not reaaaally 3D chess, but a relatively crude misnomer for the “Made in America” stamp or “It’s American and definitely not Chinese”.
The security practices are probably the same across products; it’s just the wrong time and the wrong presidency for China.
What the government probably _should_ do is begin establishing a record of manufacturers/vendors which indicates how secure their products have been over a long period of time with an indication of how secure and consumer-friendly their products should be considered in the future. This would take the form of something like the existing travel advisories Homeland Security provides.
Should you go to the Bahamas? Well, there's a level 2 travel advisory stating that jet ski operators there get kinda rapey sometimes.
Should you buy Cisco products? Well, they have a track record of deciding to EOL stuff instead of fixing it when it's expensive or inconvenient to do the right thing.
Should you buy Lenovo products? Well, they're built in a country that regularly tries and succeeds in hacking our infrastructure and has a history of including rootkits in their laptops.
But this is IoT stuff we're talking about here, not Lenovo/Cisco... but ReoLink/PETLIBRO/eufy/roborock/FOSCAM/Ring/iRobot/etc. Security (or the lack of it) in the IoT world is a whole different ball game. It isn't uncommon for IoT devices to be EOL on release date, or just lack authentication or encryption entirely.
They've provided thorough definitions and a label that implies they've all been understood by the manufacturer. It doesn't mean that this solves any real world problem.
> Security (or the lack of it) in the IoT world is a whole different ball game.
Those can be described as IoT devices, but they're more appropriately categorized as "consumer electronics" and often have a firmware update right out of the box. That's what makes this badging program an absurd idea with no meaningful outcome. This segment is not going to care.
This isn't "Energy Star" where the purchased product does not have additional functionality which can be exposed or exploited through software and no third party testing can be exhaustive enough to prevent the obvious exploit from occurring.
Even to the extent that it can, it then enforces a product design which cannot be upgraded or modified by the user under any circumstances. Worse, the design frustrates the user's ability to do their own verification of the device's security.
It's a good idea applied to the wrong category of products and users.
IoT devices are a subset of a much broader 'consumer electronics' category.
> and often have a firmware update right out of the box.
From major, established, mature companies, yes. Many device manufacturers in this category never issue firmware updates. Which is precisely why this is one of the requirements.
> This segment is not going to care.
Some may, some may not. The federal government will care, because they will be forced by law to comply.
> no third party testing can be exhaustive enough to prevent the obvious exploit from occurring.
Of course, no cybersecurity compliance plan can prevent exploits from occurring. If you try to address cybersecurity in that way, you will fail anyway. The point is to put controls in place which are achievable, measurable, and help to mitigate risk.
> Even to the extent they can it then enforces a product design which cannot be upgraded or modified by the user under any circumstances.
NIST's requirements require the opposite of this.
Which means the program will have zero value outside of federal purchasing offices. They will not evaluate the criteria or care about the reality of the offering, they'll see the sticker, and know it's "default approved."
Is this a good outcome?
> mitigate risk
A sticker cannot do this.
I can’t guarantee much but I can guarantee a non zero number of non federal purchasers will consider the sticker.
>> mitigate risk
> A sticker cannot do this.
Correct. The sticker itself doesn’t mitigate the risk. The adherence to the requirements necessary to qualify for the sticker do.
What you’ve described is maybe more possible if provided by a Consumer Reports-style org that consumers could subscribe to.
- Must the Cyber Truck (Musk) bear the Cyber Trust Mark?
It's adding a standards and governance layer to tech, which creates a capacity for compliance management in regulated industries. Annoying for sure, but the US has lost its unipolar superpower role because its critical infrastructure systems were made of garbage code and its population is effectively defenseless.
It doesn't solve the problem, but it improves the dynamic.
https://www.ul.com/news/ul-solutions-named-lead-administrato...
> UL Solutions will also work with the FCC and program stakeholders to develop a national registry of certified products that consumers can access via QR code on the label. The registry will have more detailed information about each product. Additionally, UL Solutions will serve as liaison between the FCC and other CLAs, as well as other key stakeholders. [emphasis added]
and here:
https://www.fcc.gov/CyberTrustMark
> The logo will be accompanied by a QR code that consumers can scan, linking to a registry of information with easy-to-understand details about the security of the product, such as the support period for the product and whether software patches and security updates are automatic.
This doesn't block full-blown counterfeit products (recreating certified devices including the label), but does address non-compliant devices trying to pose as compliant.
I've seen Energy Star logos for 30 years and never knew there was a public database, never thought to verify, and I don't think anyone else has either. The only thing Energy Star has been useful for is extracting rebates from utility companies and buying shitty dishwashers which were certain to be worse than what they were replacing.
Verification is useless if no one knows about it, or if the data isn't actionable. I have verified UL mark numbers for questionable products, but they often resolve to some Chinese ODM you've never heard of like 'Xionshang Industrial Electric Company' whose name certainly doesn't match the product label. Do you know the components haven't been swapped out since certification was achieved? Was the product actually sourced from there or counterfeit? You have no way to verify any of that.
UL issues holographic stickers, but I've seen those maybe 10% of the time, and they're probably just as easily faked.
And I'm not saying this will be that useful, just that it's not going to be a sticker and nothing else. That would be truly useless and pretty much just make money for sticker makers.
So, the product search works like a shopping cart site, and has no historical products, only new ones, and helpfully lists the prices.
Who is this meant to benefit?
It looks like part of the label [1] will include a QR or link to a public registry, so in theory you can easily confirm the device has actually been certified.
[1] https://docs.fcc.gov/public/attachments/FCC-23-65A1.pdf point 42
Not that it matters, posters on this very site who claim to care will continue buying stuff off AliExpress, proud they got it for pennies on the dollar.
Look ma, a mini PC for $22! And they didn't even charge for the preinstalled malware!
Has anyone ever considered this junk is sold at a loss as a price of doing business, to expand a PRC-controlled botnet?
I stored their number in my notebook and, while out shopping, called them from a bus stop; they answered me with some nonsense.
I did my shopping, took a walk, then opened the platform again, and those shops had already disappeared. Less than an hour.
In another case I managed to place an order and paid by card, and that shop disappeared too. A week later I received an SMS from my bank: "your payment has been returned to your account".
This is a federal offense, like document falsification.
So if somebody is caught doing it, they could go to jail.
How would they know that an item is fraudulently marked if they never look?
1. The organizations listed in the subject (NIST, FCC, deputies from tech companies) constantly create or even invent methods to check product quality and enforce penalties on offenders, and propose regulations for the parliament to approve.
2. The parliament produces the legal documents and approves budgets for 1 (and for 3 and 4 when needed).
3. Customs limits the penetration of foreign actors into the internal market.
4. Police and courts deal with internal offenders, or with foreign offenders who have managed to infiltrate the internal market through customs.
In real life, the local shop becomes responsible when it sells products from abroad, and regulations limit foreigners' ability to create local shops.
Unfortunately, life constantly changes and technology constantly grows, so old regulations eventually become obsolete, and all of these things work in an endless loop.
I just looked at the closest mains powered device I have here (a fancy humidifier/fan), and only saw an Inmetro mark, there's no UL mark at all.
(My point is: plenty of people are not from the USA. I happen to have already heard that the UL is sort of the USA equivalent of our Inmetro, though like many things in the USA it's a private entity instead of a government entity, but the parent poster probably hadn't heard of that.)
https://abcnews.go.com/International/us-diplomats-cuba-suffe...
No matter how good everyone in this trust mark program is, you're only one confused deputy[1] away from disaster.
"Examples of eligible products include internet-connected home security cameras, voice-activated shopping devices, smart appliances, fitness trackers, garage door openers, and baby monitors."
Ok, nothing I use then. I hope this comes to home and SMB network gear.
User upgradability if the company folds or sunsets the product. When that happens, the user will need to buy a new device or live with a compromised device. Most will live with the compromised device.
So, IMO, the product should be fully open source and easily upgraded in order to get the Cyber Trust Mark.
This isn't something which a company can meaningfully guarantee to consumers. Even if it's technically possible for users to install their own software on a device - for that matter, even if the company goes out of their way to support it by releasing documentation and source code - there simply isn't interest from developers to build and maintain custom software for those devices. And the same goes for devices which depend on online services - those services cost money to run, and the number of users capable and willing to run their own is miniscule.
There is no regulation in tech. They own the fed.
Verdict: nope.
This is something that an independent, international cybersecurity nonprofit should be in-charge of, not a standards org that shills for what we think may have been the NSA (BULLRUN).
[1]: https://nvlpubs.nist.gov/nistpubs/ir/2022/NIST.IR.8425.pdf
I wonder how strict the regulations on Chinese software components will be. For components originating in the EU/US/Australia/Korea, they should be less strict if the source can be proven.
We need a blue ribbon commission on transparency, honesty, and good governance desperately. Let's reduce any federal agencies that make any sort of direct-to-citizen recommendations by 100% and instead spend that on rooting out bad incentives, misinformation, etc.
Cybersecurity best practices are a point-in-time snapshot, and the label will reflect the state of things at purchase time. How will that help people who bought second-hand, or cases where items already on shelves suddenly have a vulnerability discovered? You really think they are going to go through the cost of sending those back?
All software bugs can potentially be security bugs. This follows classic shock doctrine.