Don’t know the specifics of the Espressif RISC-V cores, but in general they can’t really compete on those aspects with ARM.
ARM is a much more mature platform, and the licensing scheme helps somewhat to keep really good physical implementations of the cores, since some advances get “distributed” through ARM itself.
Compute capabilities and power efficiency are very tied to physical implementations, which for the most part happen behind closed doors.
Well, that depends on what you count as a backdoor, but Espressif has had some questionable flaws:
- Early (ESP8266) MCUs had weak security, implementation flaws, and a host of issues that meant an attacker could hijack and maintain control of devices via OTA updates.
- Their chosen way to implement these systems makes them more vulnerable. They explicitly reduce hardware footprint by moving functionality from hardware to software.
- More recently there was some controversy about hidden commands in the Bluetooth stack, which were claimed to be debug functionality. Even if you take them at their word, that speaks volumes about their practices and procedures.
That’s the main problem with these kinds of backdoors: you can never really prove they exist, because there are reasonable alternative explanations, since bugs do happen.
What I can tell you is that every single company I’ve worked for that took security seriously (medical implants, safety-critical industry) not only banned their use in our designs, it banned the presence of ESP32-based devices on our networks.
Except if you penetrate the market with modules that cost 5% of similar US made solutions, you start to win mindshare. At least some of those hobbyists start making a product, and sometimes the determination of whether a product is "safety critical" isn't agreed upon until after it's failed catastrophically.
Except that strategy gets you killed by a thousand paper cuts.
What would you have done when the Bitcoin fork split opinion 50/50? Would you have gone into ICOs? Which ones? Etc…
There are simply too many “new things”; by trying to get exposure to all of them, you’ll end up massively in the red.
Let’s say you get into 1000 “new things” and you strike it lucky and hit BTC. You’d have had to buy BTC in early 2013, hold it over the whole period, and sell at the historical maximum just to break even.
If, instead of buying 1000 “new things”, you’d put your money into the S&P, you’d be at +250% over the same period.
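The break-even arithmetic behind this can be sketched in a few lines. The specific numbers (1 unit per bet, 999 bets going to zero, the one winner returning 1000x, the S&P at +250%) are illustrative assumptions matching the scenario above, not historical data:

```python
# Sketch of the "1000 new things" portfolio argument.
# Assumptions (illustrative): each bet costs 1 unit, 999 of the
# 1000 bets go to zero, and the single winner returns 1000x.
n_bets = 1000
stake_per_bet = 1
invested = n_bets * stake_per_bet

losers_value = 0 * (n_bets - 1)   # 999 "new things" go to zero
winner_multiple = 1000            # assumed return of the one hit
portfolio_value = losers_value + stake_per_bet * winner_multiple

print(portfolio_value / invested)  # 1.0 -> exactly break even

# Same 1000 units in the S&P at an assumed +250% over the window:
sp500_return = 2.5
print(invested * (1 + sp500_return))  # 3500.0 units vs. 1000 at break even
```

The point the sketch makes concrete: with 1000 equal bets, the single winner has to return roughly 1000x just to cancel out the losers, before you've beaten even a boring index.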
As a freelancer I do a bit of everything, and I’ve seen places where an LLM breezes through and gets me what I want quickly, and times where using one was a complete waste of time.
For sure. The more specialized or obscure the things you have to do, the less LLMs help you.
Building a simple marketing website? Don’t waste your time - an LLM will probably be faster.
Designing a new SLAM algorithm? LLMs will probably spin around in circles helplessly. That said, that was my experience several years ago… maybe the state of the art has changed in the computer vision space.
> The more specialized or obscure the things you have to do, the less LLMs help you.
I've been impressed by how this isn't quite true. A lot of my coding life is spent in the popular languages, which the LLMs obviously excel at.
But a random robotics language dating to the ’80s (Karel)? I unfortunately have to use it sometimes, and Claude ingested a several-hundred-page PDF manual for the language and is now better at it than I am. It doesn’t even have a compiler to test against, and still it rarely makes mistakes.
I think the trick with a lot of these LLMs is just figuring out the best techniques for using them. Fortunately a lot of people are working all the time to figure this out.
Agreed. The sentiment you are replying to is a common one and is just people self-aggrandizing. No, almost nobody is working on code novel enough to be difficult for an LLM. All code projects build on things LLMs understand very well.
Even if your architectural idea is completely unique... a never before seen magnum opus, the building blocks are still legos.
Specialized is probably not the word I'd use, because LLMs are generally useful for understanding more specialized/obscure topics. For example, I've never randomly heard people talking about the DICOM standard, yet LLMs have no trouble with it.
I think there is a sweet spot for the training(?) on these LLMs where there is basically only "professional" level documentation and chatter, without the layman stuff being picked up from reddit and github/etc.
I was trying to remember/figure out an obscure hardware communication protocol in order to enumerate a hardware bus on some servers. Feeding Codex a few RFC URLs and other such information, plus telling it to search the internet, resulted in extremely rapid progress versus having to wade through 500 pages of technical jargon and specification documents.
I'm sure if I was extending the spec to a 3.0 version in hardware or something it would not be useful, but for someone who just needs to understand the basics to get some quick tooling stood up it was close to magic.
The standard for obscurity is different for LLMs, something can be very widespread and public without the average person knowing about it. DICOM is used at practically every hospital in the world, there's whole websites dedicated to browsing the documentation, companies employ people solely for DICOM work, there's popular maintained libraries for several different languages, etc, so the LLM has an enormous amount of it in its training data.
The question relevant for LLMs would be "how many high quality results would I get if I googled something related to this", and for DICOM the answer is "many". As long as that is the case, LLMs will not have trouble answering questions about it either.
One tendency I've noticed is that LLMs struggle with creativity. If you give them a language with extremely powerful and expressive features, they'll often fail to use them to simplify other problems the way a good programmer does. Wolfram is a language essentially designed around that.
I wasn't able to replicate that in my own testing, though. Do you know if it also fails for Mathematica code? There's much more text online about that.
> Building a simple marketing website? Don’t waste your time - an LLM will probably be faster.
This is actually where I would be most reluctant to use an LLM. Your website represents your product, and you probably don’t want to give it the scent of homogenized AI slop. People can tell.
They can tell if you let it use whatever CSS it wants (Claude will nearly always make a purple or blue website with gross rainbow gradients). They can also tell if you let it write your marketing copy.
If you decide on your own brand colors and wording, there’s very little left about the code that can’t be done instantly by an LLM (at least on a marketing website).
Some subscriptions offer "unlimited tokens" for certain models; e.g., GitHub Copilot can be unlimited for GPT-4o and GPT-4.1 (and, actually, GPT-5 mini!). So I spent some time with those models to see what level of scaffolding and breaking things down (hand-holding) was required to get them to complete a task.
Why would I do that? Well, I wanted to understand more deeply how differences in my prompting might impact the outcomes of the model. I also wanted to get generally better at writing prompts. And of course, improving at controlling context and seeing how models can go off the rails. Just by being better at understanding these patterns, I feel more confident in general at when and how to use LLMs in my daily work.
I think, in general, understanding not only that earlier models are weaker, but also _how_ they are weaker, is useful in its own right. It gives you an extra tool to use.
I will say, the biggest weaknesses I've found come from the training data. If you're keeping your libraries up to date, and you're using newer methods or functionality from those libraries, AI will consistently fail to pick up those new things. For example, Zod v4 came out recently and the older models absolutely fail to understand that it uses some different syntax and methods under the hood. Jest now supports `using` syntax for its spyOn method, and models just can't figure it out. Even with system prompts and telling them directly, the existing training data is just too overpowering.
I would say they are not changing but evolving, and you evolve with them.
For example: Gemini became a lot better at a lot more tasks. How do I know? Because I keep very basic benchmarks - or let's say "things which haven't worked" are my benchmark.
Honestly, I think this is the primary explanation for why there is so much disagreement on whether LLMs are useful or not, if you leave out the more motivated arguments.
> Most of the harsher regulations only come into effect when the company hits a specific size.
That’s very market- and country-specific. Spain makes more than 1k tweaks to its food regulations each year, which would kill lots of restaurants if they were to stay in full compliance.
The result is that everyone tries to make as much money as they can and build an “inspection fund”, because you’re guaranteed to get a fine if inspected.
I’m honestly very tired of this argument, everything about it is bad.
Features aren’t rights; if you want a phone that lets you run whatever you want, buy one or make it yourself.
What you’re trying to do is use the force of the state to make mandatory a feature that not only will 99% of users never use, but that vastly increases the attack surface for most of them, especially the most vulnerable.
If anyone were trying to create a word that gives a “deviant” feel, they wouldn’t use “sideload”, and most people haven’t even heard the term. There’s a world of difference between words like “pirate”, “crack”, “hack” and “sideload”.
If anything I’d say it’s too nice a term, since it easily hides from normies the fact that what you’re doing is loading untrusted code, and that it’s your responsibility to audit its origin and contents (something even lots of devs don’t do).
If you want to reverse engineer your devices, all the power to you, but you don’t get to decide how other people’s devices work.
It's a proper argument on its surface, complete with claim, warrant, and impact.
"Features aren't rights"
> see: Consumer Rights.
"Force of the state making sideloading mandatory is bad"
> ...Except we have antitrust laws? The Play Store becomes the only source of apps, all transactions are routed through Google Billing? Not a problem for you?
"99% users won't use"
> Except for when Google demands that transactions happen exclusively through Google Billing, which resulted in the release of the Epic Games Launcher for the world's highest grossing games by download.
"Sideloading is too nice"
> Listen, either it's the case that "sideloading" is a threat to normies or it's not. Are normies your 1% or 99% of users? I thought according to you 99% of users won't sideload.
"You don't get to decide"
> That language ties in pretty well with your fear of the use of the 'force of the state'; that tells me that you support freedom. Great-- you're right, why not let corporations be corporations and do anti-consumer things, they'll be very good to us (while they lobby the state).
Consumer rights aren’t features, and they’re very intentionally written to not be.
> "Force of the state making sideloading mandatory is bad" > ...Except we have antitrust laws?
Then sue them over those.
> Listen, either it's the case that "sideloading" is a threat to normies or it's not. Are normies your 1% or 99% of users? I thought according to you 99% of users won't sideload.
I meant that 99% of users aren’t afraid of the term “sideloading”. That you’re not using something doesn’t mean you’re afraid of it; it just means you don’t want it.
> you're right, why not let corporations be corporations and do anti-consumer things, they'll be very good to us (while they lobby the state).
Because corporations tend to die when they do anti-consumer things, but governments keep doing anti-citizen things without much trouble.
"Consumer rights aren’t features"
> Any attempt to weasel out of a marketed feature set is generally and colloquially known as "false advertising"; consumers have a right to the features of a product they purchase under the original conditions of the purchase agreement.
"Then sue them"
> My point was that the force of the state is a necessary evil to ensure fair competition. Yours implied that the force of the state is overreach, but if you warrant that, then you wouldn't enjoy the protections against corporations afforded to us by antitrust law.
"That you're not using something..."
> For you to claim that sideloading presents additional threat surface to the normie consumer, you need to also claim that normie users are sideloading. This means that if 99 percent of users are not sideloading, there is no threat surface.
"Because corporations tend to die when they do anti-consumer things, but governments keep doing anti-citizen things without much trouble."
> Absolutely not. The paradigm has changed from the time when you could vote with your dollar. You and I are economically and legally irrelevant (where is Congress, anyway?), and corporations like the Big G are too big to fail. They are -already- colluding with government to do both anti-consumer and anti-citizen things.
Nominally, this is why both the government AND Google do not want you to sideload software outside of their control.
> You don’t get to decide how others people’s devices work.
Perfectly reasonable. It's important that people can decide how their devices work for themselves. No one else should decide for them.
But I'm genuinely curious how you see this principle working in practice when there's effectively a duopoly. What's the path for someone who wants to still have any choices for their device? I'm not seeing an obvious answer, but maybe I'm missing something.
It's not possible to build your own phone in most markets anymore. Without iOS or Google Play Integrity you won't be able to install or run essential apps required for banking, taxes, healthcare, public transport, etc. This makes it impossible to compete, because anyone who buys your phone is required to also buy a secondary Google-approved Android or iPhone to lug around in order to function in society.