Anything public is dead, which is what you want to see.
> Because, you see: this has happened before. On iOS 12, SockPuppet was one of the big exploits used by jailbreaks. It was found and reported to Apple by Ned Williamson from Project Zero, patched by Apple in iOS 12.3, and subsequently unrestricted on the Project Zero bug tracker. But against all odds, it then resurfaced on iOS 12.4, as if it had never been patched. I can only speculate that this was because Apple likely forked XNU to a separate branch for that version and had failed to apply the patch there, but this made it evident that they had no regression tests for this kind of stuff. A gap that was both easy and potentially very rewarding to fill. And indeed, after implementing regression tests for just a few known 1days, Pwn got a hit.
And now I wonder how many other projects are doing this. Is anyone maintaining a CI farm that runs historical vulnerabilities against new versions of Linux/FreeBSD/OpenWRT/OpenSSH/...? It would require writing up each vulnerability in automated form (a low bar, I think), having the CI resources to throw at it (a higher bar, though you could save by running a random selection on each new version), caring (hopefully easy), and thinking of it (surprisingly hard).
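The "random selection on each new version" idea is simple enough to sketch. Below is a minimal, hypothetical harness, assuming a directory of self-contained PoC scripts named by CVE, each following the convention of exiting 0 only when it successfully exploits the target; the directory name, sample size, and target address are all made up for illustration.

```python
#!/usr/bin/env python3
"""Sketch of a 1day regression harness: run a random sample of
known-vulnerability PoCs against a new build and flag any that still land."""
import random
import subprocess
from pathlib import Path

POC_DIR = Path("pocs")   # hypothetical layout: one PoC script per CVE
SAMPLE_SIZE = 20         # trade CI cost for coverage by sampling
TIMEOUT = 300            # seconds allowed per PoC

def run_sample(target: str) -> list[str]:
    """Return the CVE IDs whose PoC still succeeds against `target`."""
    pocs = sorted(POC_DIR.glob("CVE-*.sh"))
    sample = random.sample(pocs, min(SAMPLE_SIZE, len(pocs)))
    regressions = []
    for poc in sample:
        # Convention assumed here: a PoC exits 0 only if the exploit worked.
        result = subprocess.run(
            [str(poc), target], timeout=TIMEOUT, capture_output=True,
        )
        if result.returncode == 0:
            regressions.append(poc.stem)
    return regressions

if __name__ == "__main__":
    # Hypothetical address of the freshly built test VM.
    for cve in run_sample("192.0.2.10"):
        print(f"REGRESSION: {cve} still exploitable")
```

The sampling keeps per-release cost roughly constant as the PoC corpus grows; over many releases every PoC still gets exercised with high probability.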
I’m 100% positive from experience doing VR in several non-iOS spaces that increased exploit value leads to fewer published public exploits, but! This is not a sign that there are fewer available exploits or that the platform is more difficult to exploit, just a sign that multiple competing factions (sometimes large numbers of them) are hoarding exploits privately that might otherwise be released and subsequently fixed.
As a complementary axiom, I believe that exploit value follows target value more closely than it does exploit difficulty, because the supply of competent vulnerability researchers is more constrained than the number of available targets. That is to say, someone will buy a simple exploit that pops a high-value target (hello, shitty Android phones) for much more money than a complex exploit that pops a low-value target. There are plenty of devices with high exploit value and low exploit publication rate that also have garbage security.
With that said, Apple specifically is a special (and perhaps the only) case where they are “winning” and people are genuinely giving up on research because the results aren’t worth the effort. I just don’t think this follows across the industry.
If by 'projects' you mean intelligence agencies, then I would say it's safe to assume at least the G10 intelligence agencies are doing this along with Russia, China, NK - and likely a huge number of private groups.
X: Hi AppLE I haz jailb8?
Or is it via one of the intermediaries?
Or is there an email or some such that is published? (One that will not go straight to 1st-level support and be forgotten about)
That boundary was broken in 2015, about a decade ago: https://www.dailymail.co.uk/sciencetech/article-3301691/New-...
[1] - https://www.urbandictionary.com/define.php?term=jailbait
Sucks if you're part of a public jailbreaking community, but, of course, good if you're a user.
It's basically Conway's law applied to the security/feature development split.
So even if they have a build/release procedure with a mature regression test suite, it probably wouldn't have "security" issues like this in it, just as a matter of internal organization.
Bugs are a fact of life, but burning time and money to fix them only to have them return is the worst-case scenario. Organizations that care about quality are definitely investing in regression testing. Unfortunately, a whole lot of orgs give QA zero respect and offshore it to the lowest bidder, if they do it at all. It's absolutely insane to me that Apple wouldn't have regression tests for jailbreaks, some of the most high-profile bugs in history.
You can fairly criticize Mozilla for a number of things these days. But they had a very robust QA and CI/CD setup in the early 2000s with tools like Tinderbox and Bugzilla. When DevOps came around and popularized it I was like wait, people weren't already doing this stuff??? Turned out I had been living in a bubble and that was not the norm at all.
No other OS can restrict at this level, and it means that not only do you need an exploit for, say, the JavaScript engine, you also need exploits for something like 10 other pathways. The reason is that since the kernel is immutable and checked out the wazoo, you get "jailbreaks" by modifying different services and system processes and getting a capability from those apps, which is where an exploit is required for them or an approved peer. But Apple also has telemetry for how apps interact with each other.
This is like when you’re speaking in a foreign language with a friend and getting along fine, but in the next sentence they begin describing brain surgery or nuclear physics, and your understanding falls off a cliff.
Or that time I tried to interpret a conversation about blast furnace renovations.
As far as jailbreaks go, I’m sad it’s not a thing anymore; I don’t think I ever did anything useful with my jailbroken iPad, but it was fun. Today I’d install a tethering app and UTM + a JIT solution (1).
1: SideStore looked promising, but my account was once a paid Apple Developer account and I have 10 app IDs that won’t expire, so I can’t install any apps like the aforementioned UTM, unless I make a new account or pay again.
Every test starting with T and a number is an example created from a corresponding issue in their tracker. And there are, well, a lot of them.
There were multiple major components. There was the back-end server system that ran on Linux. There was the content-creation system that ran on macOS. There were the end-user clients that ran on iOS and iPadOS. And there was an extensive array of QA processes that they ran.
I ended up making minor changes to the code base for each of those components, so that they could build on the Jenkins server that was running underneath my desk (on an old Mac Pro server that had been lying around).
And I can tell you that they had extensive regression tests — as of the time I was there, over five thousand of them. Those took a really long time to run, which is why they needed the Jenkins server instead of doing this stuff on their laptops.
Now, I can’t speak for developers anywhere else at Apple, but I believe that they are well acquainted with the concept of regression testing.
My only point was that "anything public is dead is what you want to see" is not a particularly useful rubric in general. I get nervous when I see statements that suggest an absence of public exploit material or high "bid" price for grey market exploits as evidence that a platform is less vulnerable. My experience suggests this isn't really how the market works in general. There are way too many additional factors that affect both pricing and publication to use "public exploit availability" or "grey-market bid price" as a signal about a platform's security posture overall.
Anyway, reading back, I realize that you specifically weren't trying to draw that conclusion, but sibling comments are now - and it seems to be a really easy trap to fall into. See: every "security journalism" outlet every time a broker posts an Android bid that's higher than their standing iOS bid, or vendors and OEMs claiming their devices are secure because no public exploits exist.