> The way he managed to beat a trillion dollar corporation was through the kind of simple but tedious and boring work that Apple sucks at: regression testing.
> Because, you see: this has happened before. On iOS 12, SockPuppet was one of the big exploits used by jailbreaks. It was found and reported to Apple by Ned Williamson from Project Zero, patched by Apple in iOS 12.3, and subsequently unrestricted on the Project Zero bug tracker. But against all odds, it then resurfaced on iOS 12.4, as if it had never been patched. I can only speculate that this was because Apple likely forked XNU to a separate branch for that version and had failed to apply the patch there, but this made it evident that they had no regression tests for this kind of stuff. A gap that was both easy and potentially very rewarding to fill. And indeed, after implementing regression tests for just a few known 1days, Pwn got a hit.
And now I wonder how many other projects are doing this. Is anyone running a CI farm that runs historical vulnerabilities against new versions of Linux/FreeBSD/OpenWRT/OpenSSH/...? It would require that someone write up each vulnerability in automated form (a low bar, I think), have the CI resources to throw at it (a higher bar, though you could save by running a random selection on each new version), care (hopefully easy), and think of it (surprisingly hard).
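A minimal sketch of what such a CI job could look like, assuming each historical vulnerability is wrapped as a standalone PoC script that exits 0 when it still reproduces on the system under test. The directory layout, file names, and exit-code convention here are all hypothetical, not taken from any real project:

```python
#!/usr/bin/env python3
"""Toy vulnerability-regression harness (hypothetical layout).

Assumes one PoC script per historical bug, e.g. pocs/CVE-2019-8605.sh,
where exit code 0 means "still reproduces" and nonzero means "fix holds".
"""
import random
import subprocess
import sys
from pathlib import Path

POC_DIR = Path("pocs")   # hypothetical directory of PoC scripts
SAMPLE_SIZE = 25         # run a random subset per build to save CI resources
TIMEOUT_S = 300          # give each PoC a few minutes, then give up

def still_reproduces(poc: Path) -> bool:
    """Return True if the PoC reports that the old bug has come back."""
    try:
        return subprocess.run([str(poc)], timeout=TIMEOUT_S).returncode == 0
    except subprocess.TimeoutExpired:
        # A hang deserves a human look, but don't count it as a regression here.
        return False

def main() -> int:
    pocs = sorted(POC_DIR.glob("*.sh"))
    if len(pocs) > SAMPLE_SIZE:
        pocs = random.sample(pocs, SAMPLE_SIZE)

    regressions = [p.name for p in pocs if still_reproduces(p)]
    for name in regressions:
        print(f"REGRESSION: {name} reproduces again")

    # Fail the CI job if any previously fixed vulnerability is back.
    return 1 if regressions else 0

if __name__ == "__main__":
    sys.exit(main())
```

The random sampling is just the cost-saving trick mentioned above; a better-funded setup would run the full suite on every release candidate.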
Yes, regression testing--making sure bugs you've fixed don't return--is a standard part of QA. I did volunteer QA for Mozilla in college a good 20 years ago (god that number is horrifying) and they had an ever-growing suite of regression tests. Mostly for rendering/layout or JavaScript engine bugs, since part of reproducing and proving you'd fixed those was creating a minimal test case. Which you could then easily throw into the build pipeline.
Bugs are a fact of life, but burning time and money to fix them only to have them return is the worst-case scenario. Organizations that care about quality are definitely investing in regression testing. Unfortunately a whole lot of orgs give QA zero respect and offshore it to the lowest bidder, if they do it at all. It's absolutely insane to me that Apple wouldn't have regression tests for jailbreaks, some of the most high-profile bugs in history.
You can fairly criticize Mozilla for a number of things these days. But they had a very robust QA and CI/CD setup in the early 2000s with tools like Tinderbox and Bugzilla. When DevOps came around and popularized it I was like wait, people weren't already doing this stuff??? Turned out I had been living in a bubble and that was not the norm at all.
Many years ago, I did a six-month contract for Apple Retail Software Engineering, to deliver a Jenkins CI/CD system for the code that was used to communicate with the employees in the stores, to allow those employees to communicate with each other, to deliver training to them, etc.
There were multiple major components. There was the back-end server system that ran on Linux. There was the content creation system that ran on macOS. There were the end-user clients that ran on iOS and iPadOS. And there was an extensive array of QA processes that they ran.
I ended up making minor changes to the code base for each of those components, so that they could build on the Jenkins server that was running underneath my desk (on an old Mac Pro server that had been lying around).
And I can tell you that they had extensive regression tests — as of the time I was there, over five thousand of them. Those took a really long time to run, which is why they needed the Jenkins server instead of doing this stuff on their laptops.
Now, I can’t speak for developers anywhere else at Apple, but I believe that they are well acquainted with the concept of regression testing.
I think they are referring to secretly regression testing other people's code (to check if patched exploits become exploitable again).
There is a FOSS project I've seen but cannot remember the name of right now (beer), but I do recall their test case directory: one test for each issue of merit. Thousands of them, easily. Might have been SQLite. Something to look up to. I guess if you're not backporting fixes you'd likely not backport the tests either.
The Glasgow Haskell Compiler project does this: https://gitlab.haskell.org/ghc/ghc/-/tree/master/testsuite/t...
Every test starting with T and a number is an example created from a corresponding issue in their tracker. And there are, well, a lot of them.
I think the underlying problem is that lots of orgs have siloed out security stuff into its own workflow and its own class of bugs.
It's basically Conway's law applied to the security/feature development split.
So even if they have a build/release procedure with a mature regression test suite, it probably wouldn't include "security" issues like this, just as a matter of internal organization.
> And now I wonder how many other projects are doing this.
If by 'projects' you mean intelligence agencies, then I would say it's safe to assume at least the G10 intelligence agencies are doing this along with Russia, China, NK - and likely a huge number of private groups.
> forget everything you know about kheap separation, forget all the task port mitigations, forget SSV and SPTM
This is like when you’re speaking in a foreign language with a friend and getting along fine, but in the next sentence they begin describing brain surgery or nuclear physics, and your understanding falls off a cliff.
Or that time I tried to interpret a conversation about blast furnace renovations.
As far as jailbreaks go, I’m sad it’s not a thing anymore; I don’t think I ever did anything useful with my jailbroken iPad, but it was fun. Today I’d install a tethering app and UTM + a JIT solution (1).
1: SideStore looked promising, but my account was once a paid Apple Developer account and I have 10 app IDs that won’t expire, so I can’t install any apps like the aforementioned UTM, unless I make a new account or pay again.
I had my old iPhone 4 jailbroken and it was literally the only way I would use an iPhone as my main device. Having lost that, I switched back to Android which had caught up in many basic features by then.
I'm no security researcher, but this hits close to home for me personally.
I've heard Apple pays a million for jailbreaks now. That's the lower bound for the price on the free market.
> now
That boundary was broken in 2015, about a decade ago: https://www.dailymail.co.uk/sciencetech/article-3301691/New-...
That's cool, Apple's bug bounty didn't exist ten years ago. Apple's bug bounty does max out at $1 million (although you can get bonus multipliers up to $2mil). Just read the content before throwing down the gotcha.
That $1M was not paid by Apple. It was paid by Zerodium, a company that sold/sells vulnerabilities to attackers (e.g. the NSA).
Is there a way to contact Apple to apply for millions of dollars if one has a jailbreak?
X: Hi AppLE I haz jailb8?
Or is it via one of the intermediaries?
Or is there an email or some such that is published? (One that will not go straight to first-level support and be forgotten about.)
https://security.apple.com/bounty/
That's the market rate. https://cyberscoop.com/zerodium-android-zero-days-bounty/
Well TIL that there are zero-day market makers...
Bear in mind: different buyers and different price structures. You can get more selling a vulnerability to CNE shops (say: every intelligence organization in Germany), but you'll be accepting more risk --- the payments are effectively tranched (or, equivalently, back-loaded on "maintenance" fees), and if the vulnerability dies you're S.O.L. Apple also won't make you build all the reliable exploitation enablement tooling a CNE buyer will. So: they pay less.
My favorite line from the whole post: "I’d also like to thank whoever unpatched the bug in iOS 13.0. That was a very cool move too."
> I can’t possibly imagine where we’ll be in 5 years from now.
I can. iMessage still allows device, account, and data takeovers.
If this is the case, Apple employed an amazing strategy. By locking down every way to root their devices, they get to patch vulnerabilities discovered for free by jailbreak devs.
But they haven't: the article says the "private" community still has exploits and Apple patches them. The public community, like this dev, for some reason doesn't anymore.
They're exclusive to private communities because they're very expensive, and getting more expensive over time; in other words, Apple's strategy has driven the cost of exploiting iOS up.
Anything public is dead, which is what you want to see.
I’m not sure I agree with the premise here, although I agree with the conclusion w.r.t Apple specifically.
I’m 100% positive from experience doing VR in several non-iOS spaces that increased exploit value leads to fewer published public exploits, but! This is not a sign that there are fewer available exploits or that the platform is more difficult to exploit, just a sign that multiple (sometimes large numbers of) competing factions are hoarding exploits privately that might otherwise be released and subsequently fixed.
As a complementary axiom, I believe that exploit value follows target value more closely than it does exploit difficulty, because the supply of competent vulnerability researchers is more constrained than the number of available targets. That is to say, someone will buy a simple exploit that pops a high value target (hello, shitty Android phones) for much more money than a complex exploit that pops a low value target. There are plenty of devices with high exploit value and low exploit publication rate that also have garbage security.
With that said, Apple specifically are a special (and perhaps the only) case where they are “winning” and people are genuinely giving up on research because the results aren’t worth the value. I just don’t think this follows across the industry.
iOS requires so many exploits in the chain because they effectively sign system calls and check each app's capabilities at two steps. So you may be able to interact with another process, but only whitelisted processes. The kernel is also immutable, so persistence is impossible. They do a level of boundary checking that only Apple can do, and they also have special telemetry flags on critical processes that suggest they're looking to end-of-life a pathway.
No other OS can restrict at this level, and it means that not only do you need an exploit for, say, the JavaScript engine, you also need exploits for something like 10 other pathways. The reason is that, since the kernel is immutable and checked up the wazoo, you get "jailbreaks" by modifying different services and system processes and getting a capability from those apps, which is where an exploit is required for them or an approved peer. But Apple also has telemetry for what each app is doing with the others.
I don't think I reach the deeper questions here, and pretty much just get back to "if it was cheap, Apple would have killed it already"; in that set of circumstances there can't be viable public exploits (or broad workable bug classes to fish from) to work with.
Sucks if you're part of a public jailbreaking community, but, of course, good if you're a user.
I agree with this. I also agree that there's no preferable situation. Apple have done a great job building mitigations and it shows in how difficult, expensive, and rare it is to fully exploit their platforms. I certainly wasn't intending to form a counter-argument that public exploits existing would be a positive signal, or that there's a preferable alternative situation.
My only point was that "anything public is dead is what you want to see" is not a particularly useful rubric in general. I get nervous when I see statements that suggest an absence of public exploit material or high "bid" price for grey market exploits as evidence that a platform is less vulnerable. My experience suggests this isn't really how the market works in general. There are way too many additional factors that affect both pricing and publication to use "public exploit availability" or "grey-market bid price" as a signal about a platform's security posture overall.
Anyway, reading back, I realize that you specifically weren't trying to draw that conclusion, but sibling comments are now - and it seems to be a really easy trap to fall into. See: every "security journalism" outlet every time a broker posts an Android bid that's higher than their standing iOS bid, or vendors and OEMs claiming their devices are secure because no public exploits exist.
But it's still more obfuscation than anything else. You're effectively reducing the pool of researchers to those most likely to turn to the dark market. There's an entire zero-day industry privately developing exploits, and the public sees none of it. Sure, low-resource attackers can probably forget about exploiting iOS, but stuff like Pegasus still happens regularly.
Literally the alternative is more viable vulnerabilities. It's hard to understand a coherent argument that favors that over what we have now. We're in this situation because Apple has gotten good at killing whole bug classes. That's exactly what users want.
Jailbreaks need an itch to scratch. There isn't one for Ubuntu Desktop.
Is this actually true? Jailbreaks are more or less the same exploits used by things like Pegasus; the exploits are probably worth more to the individuals who discover them than the ability to give their friends access to sideloaded apps.
That's the rub of relative integrity. It's variably easier for some to rationalize taking the cash, even if that giant pile of coin is likely to lead to the imprisonment, death, and/or torture of others, for better or for worse.
My question wasn’t about ethics and I’d rather keep it that way.
Can you really not tell from context what they mean, even though that slang has a different meaning?
It may be that they believed they had created a new word, trying to sound like a l33t hax0r, and did not realize it was already in use. [1]
[1] - https://www.urbandictionary.com/define.php?term=jailbait