This was never confirmed with any evidence, and Bloomberg only cites anonymous sources. Given Bloomberg's bad track record on infosec stories, I have my doubts.
Of course I don't know whether or not the NSA knew about Heartbleed. But nothing that would even remotely qualify as evidence has ever been presented.
As someone who has designed shipping HDL, including more SPI masters than I can count, and now works as a security researcher (among a couple other hats I wear at my current job), I really think Bloomberg got dragged through the mud.
Everything they said about the BMC was plausible. It was bizarre hearing how such a scheme would literally break the laws of physics, when by accident (read: shitty, undebugged HDL) I've caused exactly what they were claiming.
Like others have said, the pushback came from Bloomberg asserting it was happening, not from the claim that it was plausible, as well as from their refusal to retract the story or clarify the details.
I also swear I had read articles months/years before that companies like Apple literally photographed motherboards before shipping and compared them after arriving to look out for hardware tampering in transit. That, to me, shows they are not only aware of issues like that, but taking meaningful steps to detect it.
The hack supposedly happened in 2015. Apple being super concerned about server motherboard supply chain management in 2016 for unspecified reasons lines up with the timeline very well.
Federal buyers use companies like Harris Corp to sample and analyze devices and components for tampering or counterfeiting. Even then, bad stuff gets through.
Apple cares about supply chain integrity, but it’s not enough to stop this kind of threat.
I'm good with Bloomberg being expected to do a better job covering this activity than they have. And, without further evidence that it's actually happening, I can't quite take the step into believing that it's probably happening, because that way lies tin foil hats.
But for all the security researchers that straight up claim that what Bloomberg reported was impossible, I wonder what their opinion would have been about reports that the NSA was bugging routing and server hardware in transit before 2014?
I wonder why we're so collectively afraid of being labeled 'conspiracy theorists'. What is so wrong with supposing that bad things are being done intentionally?
It's not a matter of supposing that certain things may be happening, or even that they probably are. It's a matter of believing with certainty that they are, without concrete evidence and sometimes even when there's evidence to the contrary.
It's a pernicious bug in certain kinds of psychology that makes it quite hard for someone to tell the difference between nightmarish fantasy and reality. I don't want it.
> It's a pernicious bug in certain kinds of psychology that makes it quite hard for someone to tell the difference between nightmarish fantasy and reality. I don't want it.
When reality has repeatedly put nightmarish fantasy to shame - mostly for lack of imagination on fantasy's part - it's not unreasonable to question the line between optimism-laced skepticism and naivety.
People used to have dreams they aspired to. Now we have nightmares we want to see happen. I don't want it either, but apparently society at large does.
Well said. Of interest to me is why these beliefs? Without evidence, one is capable of believing anything to be true. So why believe conspiracy at all? What psychological and/or aesthetic need are these specific beliefs satisfying?
Because there is already endless concrete evidence that the elites and establishment are out to fleece the common person, and when someone has their bubble burst on this fact, they start looking everywhere for where they might be screwed.
> I can't quite take the step into believing that it's probably happening, because that way lies tin foil hats.
I think it was Wired that broke the story about AT&T's secret fiber-splitting rooms several years before they were later confirmed by Snowden's leaks. Given the entities and sheer amount of resources in play (or available to be used for that sort of thing), it's not nearly as tinfoil-hattish as, say, HAARP.
Bloomberg was rightfully dragged through the mud (IMHO), and like the parent I am immediately distrustful of any technical stories they put out. The issue was not that the BMC hack was implausible, but rather Bloomberg's refusal to supply solid evidence backing up their claims in the face of strong denials and perceived issues with the reporting.
A subset of the perceived issues with the reporting:
- How do the exploited servers phone home to China, when they were not connected to the open Internet? Not impossible, but it's asking for a lot of trust without more information. [0]
- One of the only named sources, Joe FitzPatrick, saying the details in their Big Hack article are identical to an example he constructed for the journalists to show that type of attack is plausible. The entire podcast is a great listen, but here is a direct quote: "In September when he asked me like, 'Okay, hey, we think it looks like a signal amplifier or a coupler. What’s a coupler? What does it look like?' […] I sent him a link to Mouser, a catalog where you can buy a 0.006 x 0.003 inch coupler. Turns out that’s the exact coupler in all the images in the story." [1]
- An accusation that the journalists who authored the Big Hack had a previous story making a similarly big claim, backed by many anonymous sources, whose veracity was nonetheless seriously doubted by people in the know. [2]
- Bloomberg reportedly sent another reporter, entirely separate from the Big Hack team, to retrace the story and discreetly talk to sources and involved parties to figure out the truth. [3]
The problem with what Bloomberg reported was not that it was implausible, but rather that it was unsubstantiated as to whether the attack actually occurred. If Bloomberg had narrated the article as "this could happen", attempting to explain a possible attack vector, that would have been fantastic.
I do agree there was a lot of "but SPI requires 6 wires and the slave can only respond when talked to" (treating SPI with the assumptions of a design engineer rather than an attacker), but that was ultimately just noise.
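For readers who haven't worked with SPI: the "slave only speaks when spoken to" objection ignores the interposer scenario. Here is a minimal toy model (all names and the patch payload are my own, purely illustrative) of how a part sitting on the MISO line between a master and its SPI flash could rewrite bytes in flight:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy model, not real hardware: an interposer on the MISO line of a SPI
 * flash. It tracks the address being streamed out and, when the master
 * reads past a target offset, substitutes its own bytes. The master never
 * "asked" the interposer anything; it just sees different data. */
typedef struct {
    uint32_t addr;        /* current flash address being streamed */
    uint32_t patch_addr;  /* offset at which to inject            */
    const uint8_t *patch; /* bytes to inject (hypothetical)       */
    size_t patch_len;
} interposer_t;

/* Called once per data byte on MISO: returns what the master actually sees. */
static uint8_t interpose_miso(interposer_t *ip, uint8_t real_byte) {
    uint8_t out = real_byte;
    if (ip->addr >= ip->patch_addr &&
        ip->addr < ip->patch_addr + ip->patch_len) {
        out = ip->patch[ip->addr - ip->patch_addr]; /* rewrite in flight */
    }
    ip->addr++;
    return out;
}
```

Whether a sub-millimeter part can do this at real bus speeds is an engineering question, but nothing about the SPI protocol itself rules it out.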
Same problem. EVERYTHING that Dragos Ruiu claimed is plausible, and it could be a great cyberpunk plot written by Neal Stephenson. But there is ZERO evidence that the malware actually exists.
And finding an actual incident in real life is much more important than theoretical possibilities. For example, it has been widely understood since the 1980s that semiconductor vendors could include a silicon-level backdoor, but finding an actual Intel/AMD chip with such a backdoor (not the ME; something like a secret instruction) is another matter.
ME has a debug mode that might be possible to enable with a signal sent through the 3.5mm jack on some laptops[1]. I'd be pretty concerned about ME bugs and backdoors disguised as ME bugs.
I meant that finding a backdoor in its full form on the main system would be a much more significant find, and its impact and newsworthiness would be greater than any hypothetical or baseless speculation, such as Bloomberg's BMC affair.
The impact of the BMC affair, if true, would have been real evidence and a real demonstration that such an attack has happened and has been used in the wild, rather than merely showing that the attack is possible (which we all already know). Unfortunately, bad journalism at work.
P.S.: I'm not saying that the ME subsystem, or buggy speculation (pun not intended), isn't a threat; just making a point.
> I'd be pretty concerned about ME bugs and backdoors disguised as ME bugs.
Same consequences. I'd say they're effectively the same thing.
Well, a year later, we still haven't seen the backdoor chip in question taken to a lab or DEF CON... Even the photograph was fake, just a stock photo...
I was excited to read the news story, and it was a huge disappointment.
To be fair to Bloomberg, some of the companies involved were also complicit in the NSA's PRISM program, and in the initial reporting of that story they all denied giving the government a backdoor.
But you're right, it's been a year now and no further evidence has surfaced which seems odd.
I thought PRISM was an endpoint where companies upload data in response to NSLs. Whether that data is being pulled by a human or by a computer is irrelevant, that data is getting pulled either way.
Regarding the "spy chip" story: Other sources. Or other news agencies confirming the sources used. Or any evidence of the chips being planted on any hardware at all.
To be clear: I'm not saying it didn't happen, just that I'm skeptical about the validity and details of their story unless they have something to back it up with.
And the story was that the targeted boards were either destroyed or handed off to the government. Are you asking for someone to have risked probable jail time by holding onto one of them?
Just because it's hard to provide evidence in a way that ensures the safety of the sources doesn't mean the story earns any trustworthiness by default.
In national security matters like this it's okay to be skeptical about reporting and sources, because journalists have gotten it wrong before, probably because of how difficult it is to investigate without endangering the sources.
...no, I'm not. They handed over the boards as part of an investigation. Withholding evidence in an investigation involving national security, along with all the false statements you'd need to make to pull that off, is a great way to not see your family for a few years. I'm not the first to suggest this.
I mean, relying on multiple anonymous sources saying "this happened and this is how it happened" along with third parties validating the plausibility is a common acceptable standard for journalism.
I don't know if I'd thought about it that way. There is so much ambiguity and embellishment that goes into journalism in general, I don't view the output with any sort of authority. Applying the standard of security research, they didn't detail any evidence demonstrating their claim that it was actually being exploited [0]. It seems that difference in perspective is how Bloomberg thought their story was reasonable.
IIRC it was also heavily focused on Supermicro, without any distinction whether Supermicro was specifically targeted or just happened to supply the boards that were bugged and caught.
[0] e.g. showing a chip, or ideally the whole motherboard system. Does it actually rewrite instructions going by on MISO, or was something else more practical? Parasitic energy harvesting? Inquiring minds want to know!
Oh no, it isn't. Look, I get that you've seen enough to either know the shit is happening or believe it is based on what you've seen happen. I'm in the same boat. I'm just saying you're hurting your already-good credibility saying things like that.
No, it's not acceptable for a highly controversial claim in an industry or topic that normally comes with proof of exploitation. They should have gotten that proof, even if it meant a widely trusted independent party vetting the evidence without revealing details that would compromise an investigation. They could get money and/or publicity for doing the review. Otherwise, present it as information coming from anonymous, unvetted sources who could be full of shit.
Literally some of the most influential journalism of all time has been sourced from anonymous informants who weren't vetted by third parties. We didn't find out who Deep Throat was until over thirty years later, after he was in a nursing home with dementia.
Remember that anonymous doesn't mean unvetted.
And that leak concerned literally millions of lives, so it's not as if an intelligence arm of a nation state doing its job in peacetime is so much more serious that the standards should be higher.
Didn't Deep Throat's testimony trigger a government response indicating it was probably true? Or did everyone involved act like he was full of it? Honestly, I can't remember what I read, and I wasn't around at the time either.
Anonymous certainly doesn't mean unvetted. We should have something come out of the stories if something big is going on, though. If we don't, we have no reason to believe them if the source has other screwups on their record.
I don’t know which is more likely: China leaning on Apple et al. to hush up its spying attempts, or the USG feeding misinformation to Bloomberg to gin up the trade war. But it’s probably one or the other, or both.
I'm sorry for initially claiming that you DMed me a harassing ancient aliens meme in response to my comments regarding the Bloomberg hardware implant story.
> It's odd how dismissive some supposedly-serious security researchers are of hardware implant capabilities.
And it really doesn’t take much effort to sneak even a Bash Bunny into an internal USB header, especially in the last mile of the supply chain.
Get a temp job as a UPS delivery driver in an area that services the datacenter of your target; whenever you deliver a server box, open it up, add your implant, and re-seal it all in the privacy of the back of your delivery truck, and that’s it.
While I am skeptical of Bloomberg, I'm even more skeptical of the NSA, as they've burned all their goodwill in my eyes at this point. So I'm in the "probably true, and if not, then probably something nearly identical happened with a different serious vulnerability" camp.
Oh, they requisition budget for the IAD mission, and they use it on IAD things. In reality, the most important thing NSA does is get budget allocated to itself! But does anyone believe that in a conflict between IAD and CNO/SIGINT, IAD has ever won?
One of those goals benefits the people with power who are above the NSA. The other provides a benefit to the public at large that few will notice. Which goal do you think is likely to be top priority?
Well, in the case of Heartbleed, where the NSA allegedly found it first, an independent researcher found it later, and the DoD uses Linux and OpenSSL all over the place, you'd think the information assurance side would be better represented. Who knows how many adversaries were also using it before it was public (hence the whole point of responsible disclosure).
Edit: Like, stuff like cryptanalysis of SM4 is for sure on the table. I can even see their neat Diffie-Hellman hack that costs $100m per nonce. But a trivially remotely exploitable memory safety bug in software that runs large sections of the military? Like, come on.
Sure, if indeed the InfoSec arm is just for show and not the thrust of the organization, then they were chartered in such a way as to be incapable of cultivating goodwill, and incapable of existing in a just and free society.
And as such, the NSA (along with the CIA and perhaps, looking forward, the ONI, MIC, etc) are subject to deprecation.
In order for peace to come to earth in the information age, we must mature beyond a perceived need to have state agencies keeping secrets on the public dime and fomenting reasonable paranoia among the populace.
Well, yes, but the problem is that (unlike Dual EC-DRBG) other people can also exploit these things while they remain open. For instance: would the USA be better off if Project Zero shipped all their findings to the NSA and both kept quiet, or better off if these things got fixed?
The point is to gain differential advantage. When you're the rich guy you don't want everyone's doors to be unlockable. When you're the poor guy you do. The USA is the rich guy.
The NSA has an interesting exhibit that talks about Heartbleed at their museum in Maryland (worth a visit if you're in the area—they have some Enigma machines which are a lot of fun). Of course I don't think the exhibit was there before the bug was publicly known.
I haven't been to the Cryptologic museum but I did get to see one of these Enigmas as it happened to be on loan to the American Computer Museum in Bozeman, MT when I visited there. If you like museums, don't miss this one, it was really awesome. Oh, and free.
I consider the likelihood that the NSA performed a security audit of OpenSSL very high. And given the horror stories we heard from people trying to fix it, Heartbleed is probably not the only thing they found.
On the balance of probabilities, it is probably true.
I would expect NSA to make use of any vulnerabilities they find, because their job is to hack others, not to keep us safe. Unfortunately.
NSA is responsible for so-called SIGINT and SIGSEC, acronyms for signals intelligence, to which you refer, and signals security, which IS about keeping our communications safe.
It seems of course that SIGINT is what's "popular" in news.
I work on a web platform team, and I've seen many vulnerability reports over the years (well over 100). I've never seen a report from the NSA or US government. Actually, the only government I've seen reports from are the UK, so credit to them for actually doing something to keep people secure. But most reports I see are from project zero or Chinese companies.
Either the US government doesn't care at all about browser security or they are keeping vulnerabilities for themselves.
No, the US government has taken the position that it’s always best to have a few tricks up your sleeve when the chips are down. It is most certainly intentional stockpiling of zero-days for strategic advantage.
I would argue that such a clear conflict in these two priorities should necessitate having a bespoke separate governmental organization for SIGSEC, so that the NSA can freely focus on SIGINT.
The SIGINT arm of the NSA has an incentive to take any exploitable vulnerabilities in existing software and keep them secret, so they can use them against their enemies, rather than disclosing them so they can be fixed.
That's how CNO exploitation works. They generally can't report; their adversaries are recording their own networks, and will retrospectively detect intrusions.
>Given Bloomberg's bad track record on infosec stories I say I have my doubts.
This is all speculation:
I've noticed a weird amount of ex-CIA find their way to that publication. I sometimes wonder if the China story was some kind of plant. So then the question becomes, do we think this is truthful propaganda or just propaganda?
Control is a strong word. I think from my reading, they have journalists willing to listen who don't have the skills (or inclination) to sniff bs. The article seems to not differentiate between SSL and TLS for example, mentioning people breaking the former.
Right, hold the most clandestine organizations in the world to the same standards as petty theft. Do we really expect the NSA to be trailing HN in knowledge on zero-day vulnerabilities?
This is 2019 and the security community has yet to deliver a proper solution to prevent the existence of such bugs. The mismatch between programmer intent and code behavior is appalling. Sure, super smart coders can avoid the bugs, much like super safe drivers can avoid the shoulder, but rumble strips are there for a reason. The bug would not have arisen if the language supported dependent typing. See Agda, for example. One day...
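For reference, the bug class behind Heartbleed is easy to sketch. This is not OpenSSL's actual code; it's an illustrative function (names are mine) showing the shape of the bug: the peer claims a payload length, and the vulnerable code trusted that claim instead of checking it against the bytes actually received.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Sketch of the Heartbleed bug class, not OpenSSL's actual code.
 * A heartbeat request carries a claimed payload length; the vulnerable
 * responder copied claimed_len bytes regardless of how many bytes it
 * actually received, echoing back adjacent heap memory. */
static int build_heartbeat_response(const uint8_t *payload, size_t received_len,
                                    uint16_t claimed_len,
                                    uint8_t *out, size_t out_cap) {
    /* The fix, missing in the vulnerable version: reject mismatched
     * lengths (the post-fix behavior is to silently discard). */
    if (claimed_len > received_len || claimed_len > out_cap)
        return -1;
    memcpy(out, payload, claimed_len); /* now bounded by the check above */
    return (int)claimed_len;
}
```

The programmer's intent ("echo back what you sent me") and the code's behavior ("echo back as much as you claim you sent me") differ by one missing comparison, which is exactly the mismatch the parent is complaining about.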
> security community has yet to deliver a proper solution
What do you want them to do? I think their solution is "use memory-safe languages like Golang or Rust (or even JavaScript/TypeScript) for new projects, not C or C++" and "use extensive fuzzing on legacy C code that hasn't been replaced yet".
Fuzzing was capable of finding Heartbleed, and it's advanced massively (and been set up at scale to continuously test open source projects) since then.
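As a toy illustration of why fuzzing finds this bug class: a real fuzzer like AFL or libFuzzer is coverage-guided and far more sophisticated, but even a naive random-input loop (hypothetical names, deliberately buggy target) trips a length-trusting parser quickly.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Toy sketch of random fuzzing. The target mimics a length-prefixed
 * parser: byte 0 claims the payload length. A correct parser must never
 * read past the bytes it was actually given. */
static size_t bytes_read; /* instrumentation: how much the parser "read" */

static void parse_record(const uint8_t *buf, size_t len) {
    if (len == 0) return;
    size_t claimed = buf[0];
    /* BUG (deliberate): trusts the claimed length, like Heartbleed did.
     * Should be bounded by the actual remaining input (len - 1). */
    bytes_read = claimed;
}

/* Throw random inputs at the parser; return 1 if an over-read is exposed. */
static int fuzz(unsigned iterations) {
    uint8_t buf[8];
    srand(12345); /* fixed seed for reproducibility */
    for (unsigned i = 0; i < iterations; i++) {
        size_t len = (size_t)(rand() % (sizeof buf + 1));
        for (size_t j = 0; j < len; j++) buf[j] = (uint8_t)rand();
        parse_record(buf, len);
        if (len > 0 && bytes_read > len - 1)
            return 1; /* parser "read" past its input */
    }
    return 0;
}
```

In practice the over-read check is done for you by ASan or valgrind rather than by a hand-rolled counter, but the principle is the same: generate inputs, watch for memory violations.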
Rust is pretty useful against memory-safety bugs, and it is prudent for new projects to take advantage of it. But this in no way eliminates the headache of security, because (1) almost all of the existing infrastructure code depends on C, and (2) new classes of bugs will eventually emerge, some partly unfixable by software alone.
>New classes of bugs will eventually emerge, some partly unfixable by software alone.
Sure, Rust, Ada, and such don't remove all classes of bugs, but they can reduce the attack surface considerably, giving you more time to focus on the remaining security bugs.
And maybe people will invent software solutions that reduce the attack surface even more.
Assembly is a fast car with no safety features. C is a sportscar with a seatbelt. Rust is a sportscar with a seatbelt, airbags, ABS, ESC, and emergency braking.
Of course you can still crash and die, it's just that you're less likely to do so.
I know what you mean, however I think C did bring down the amount of bugs compared to assembly, just by making code easier to read and higher level abstractions available.
I'm sure the security community would love for these things to get fixed, but it's a constant fight trying to get developers to learn the correct way to do something. I've had developers tell my manager that having to fix the security issues I report is putting them behind schedule, so they wanted me to only be able to report security issues when they weren't working on anything else. Luckily my manager has my back and told them security is a top priority, even if it means features come out late.
If the languages supported dependent types and everything that happens in cutting edge PL research, wouldn't programming become super hard anyway for average programmers who have pressure to deliver?
Perhaps. But for critical systems where security or reliability is a requirement, it's more important to get it right than to get it done quickly. Trying to rush these things is just asking for trouble.
This is more of a computer science issue, with languages insisting on being Turing-complete. I suppose governments could incentivize language designers to produce and support languages that eliminate ever larger classes of vulnerabilities by only buying software written in those kinds of languages.
> The NSA has issued a statement denying the report. In an email to Ars, NSA spokesperson Vanee Vines provided this official statement: “NSA was not aware of the recently identified vulnerability in OpenSSL, the so-called Heartbleed vulnerability, until it was made public in a private-sector cybersecurity report. Reports that say otherwise are wrong.”
They're at least as valuable as Bloomberg's anonymous sources. You can't choose which sources you trust more just because they agree with your pre-existing beliefs.
Secret services are in the business of doing illegal things on behalf of the state. I think there is pretty much nothing in the daily business of the CIA or the NSA that wouldn’t land an ordinary citizen in jail. Whether it is convincing foreign officials to leak information, or bribing them, or plotting a coup, or assassinating enemies of the state, or intercepting communications, etc. This is no different from any other secret service in the world.
The NSA is no more guilty than a fox in the henhouse is guilty of being a fox.
The answer to the question probably depends on unfettered access to all of the Executive Orders (and Decisions and Memos) and White House Counsel interpretations of existing law. We mere mortals will likely never know.
So far the state of the computer industry is pretty simple: if you're using American products, you're under American surveillance. Governments will always seize the monopoly on security; that's how civilization works.
And even if you're using open source, I'm sure the NSA has written tools that can scan source code to find vulnerabilities, and maybe generate exploits if they've sprinkled some ML on them.
To be honest I'd rather have a government body have the monopoly of security than witness a cybersecurity chaos, which would quickly destroy the internet. The problem is that only the US does it well.
The main difference from other countries is that we all know about the NSA thanks to people like Snowden; otherwise it would be pure speculation, no different from other countries. And why would the US do it better than countries like China, Russia, or France?
Because the US has Silicon Valley; historically it invented the internet and modern computers, and it is home to most of the core tech companies (Intel, AMD, Microsoft, Google).
The US just has much more expertise and many more engineers, which is essential if you want the NSA to recruit well and be the best at what it does. It has many aspects; I guess cerebral and technical capital are important notions.
Even if other countries can compete with the US on cybersecurity, the US is holding most of the data but also is writing most of the software and designing everything around computers, so it makes it trivial for them to turn those products against other countries who buy them.
Except Linux, I really don't see any computer product that doesn't have critical parts or systems made in the US. And as I said, I'm certain the NSA can exploit open source very easily, since the eyeballs argument cuts both ways. Torvalds said "given enough eyeballs, all bugs are shallow." That is true, but if the NSA is supplying eyeballs to find vulnerabilities and use them to its own advantage, Linux becomes an asset for the US.
I don't fully trust Bloomberg, but there's a funny thing: nobody has hit them hard in court. So, logically, it suggests part of their material is true and evidence exists.
I sincerely, sincerely doubt it. Just look at the history of the actual bug (not hard at all to believe it could have happened by accident), and the fact of how undermanned OpenSSL was, and I'm just surprised it didn't happen sooner.
That's a serious and damning accusation to level against a volunteer contributor to an open source project, and in very bad taste if indeed you have no evidence (or even a reasoned narrative of the hows and whys).
I would also argue that processes and tools should compensate for developer mistakes, negligence, or maliciousness. Unit tests, static analysis, fuzzing, integration tests, security audits, code reviews, the principle of least privilege, etc. all have a part to play, and yet this lapse in validation still made it into production and infected all of the downstream libraries and applications.
I would argue that even if you could pin an accusation of negligence on the developer (I've not seen any evidence that could substantiate this accusation), it doesn't rest only with that one developer. The project itself lacked redundant checks. The downstream applications that import OpenSSL similarly failed to audit it.
I think in the whole scheme of things, the open source movement had a lot of momentum by the time that code was written, but the corporations that relied on the benefits of open source largely didn't contribute to paying for highly secure coding practices. Heartbleed was one of the incidents that made the internet infrastructure/platform companies (among others) start paying for humans, tools, and reviews to help make these common libraries more secure. Google's Project Zero was started in July 2014, soon after Heartbleed was announced.
What's funny about what you linked is a government demand led to the vulnerability at a point when owners or main people were thinking of walking away. A conspiracy around that would be more believable than most. Let's ignore that, though.
The thing is, they add vulnerabilities in a number of ways. They can do it directly with code. They can do it indirectly with standards hard to code correctly w/out vulnerabilities or side channels. There's lots of options. Whatever they do will usually look like a helpful contribution or useful requirement that went wrong in a way that leads to an attack. The better ones are those that look like common or inevitable errors. That's because obvious backdoors make folks run away from a project or supplier maybe forever on top of question who put it there. So, it's usually these flaws that look like obvious errors that still get the job done with everyone around defending the person that put them there.
And I'm not saying it was an NSA job. I have no idea. They've been doing too good a job on most things for me to know. Could've been an accident. Even probability supports its being an accident, just as it did all the times it turned out to be subversion. At a $200+ million a year budget for backdoors/hacking, you can bet there were a lot of "accidents" that, in the non-TS version, had nothing to do with the NSA. ;)
Edit: There's lots of questionable things in this article. My favorite is this:
"And this group are the best of the best of the best."
The OpenBSD team doing LibreSSL had all kinds of summaries, live updates, and even presentations of what they found. It was about as far from that quote as you could imagine. Although my memory sucks, I think at one point they said there was even code that checked to see if endianness changed while it was operating. They at least had that covered. There were so many oddities about that codebase.
Exactly. I don't have any evidence to back up my suspicion. If the evidence exists, it exists in a locked safe somewhere at Ft Meade (or other place) and in the brains of a very few people.
I can feel what is known as code smell. So, let's develop a new feature in the most widely used security library. The very first thing that must be done is to sanitize network input. This is the first thing I would expect to be done by a seasoned developer. The lack of this check is suspicious. It could be an honest mistake, of course - we all make mistakes, and I am sure I've made my share of idiotic changes. But this isn't something I would expect about OpenSSL. I agree with @nickpsecurity, "many oddities".
It sounds like you probably haven't worked on a lot of C code. OpenSSL is a giant security vulnerability pile. It is not at all hard to believe that someone has added a vulnerability by mistake: it would be more surprising to me if they hadn't.