kfreds 3 days ago

I've been working on technology like this for the past six years.

The benefits of transparent systems are likely considerable. The combination of reproducible builds, remote attestation and transparency logging allows trivial detection of a range of supply chain attacks. It can allow users to retroactively audit the source code of remotely running systems. Yes, there are attacks that the threat model doesn't protect against. That doesn't mean it isn't immensely useful.

  • dboreham 2 days ago

    I've also worked in this field but it feels like a foundation built on quicksand. You depend on so many turtle layers and only one of them has to be adversarial and game over.

    • kfreds 2 days ago

      > it feels like a foundation built on quicksand. You depend on so many turtle layers and only one of them has to be adversarial and game over

      Interesting. Please elaborate.

      Here's how I see it.

      Reproducible builds: I think we'll eventually see Linux distributions like Debian make reproducible builds mandatory by enforcing it in apt-get's trust policy. The trust policy could be expressed as "I will only trust .deb packages whose build hash and source hash are signed by three different build pipelines I trust".
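
      A rough sketch of what such a trust-policy check could look like inside a package manager (purely hypothetical field and function names; apt has no such policy today):

        def trusted(pkg, signatures, trusted_pipelines, threshold=3):
            # Accept the package only if at least `threshold` distinct trusted
            # build pipelines signed the same (source hash, build hash) pair.
            attesting_pipelines = {
                sig.pipeline_id
                for sig in signatures
                if sig.pipeline_id in trusted_pipelines
                and sig.source_hash == pkg.source_hash
                and sig.build_hash == pkg.build_hash
                and sig.signature_valid  # cryptographic check done elsewhere
            }
            return len(attesting_pipelines) >= threshold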

      Remote attestation: If you ensure that the server's CPU SoC and the TPM have different supply chains, you could construct a protocol where the supply chain attacker would have to own both supply chains in order to impersonate the server.
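
      A minimal sketch of that dual-root idea (hypothetical protocol; "quote" here just means a signed attestation statement over a nonce and a set of measurements):

        def verify_dual_attestation(nonce, soc_quote, tpm_quote,
                                    verify_soc_sig, verify_tpm_sig):
            # Require the same measurements, bound to the same nonce, signed by
            # BOTH roots of trust. Forging this means owning the SoC vendor's
            # AND the TPM vendor's supply chain, not just one of them.
            return (verify_soc_sig(soc_quote) and verify_tpm_sig(tpm_quote)
                    and soc_quote.nonce == nonce == tpm_quote.nonce
                    and soc_quote.measurements == tpm_quote.measurements)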

      Transparency logging: One of the projects I've been working on for the past four years is Sigsum (sigsum.org). It is a transparency log with distributed trust assumptions. Our goal was to figure out the essence of transparency logging technology, identify the most significant design parameters, and for each parameter minimise the attack surface. You'll find the threat model on our website.

      Here's a recent presentation by my colleague Rasmus on the subject: https://www.youtube.com/watch?v=Mp23yQxYm2c

      Here's a recent presentation by me on the subject of system transparency / runtime transparency / the technology underlying Apple PCC: https://www.youtube.com/watch?v=Lo0gxBWwwQE

      • 1oooqooq 2 days ago

        sadly linking to youtube this week is like linking to xitter earlier. i cannot see any of the content as google now requires me to create an account.

        ironic, considering the topic is losing control of cloud compute.

    • liuliu 2 days ago

      I think the only shaky part is the Secure Enclave, which provides the root of the guarantees. From there, everything is attested, so if one layer is adversarial, other layers can notice.

    • bitexploder 2 days ago

      Each layer needs more than one safeguard then. If breaking one layer breaks the system, then that layer needs better safeguards.

  • bitexploder 2 days ago

    The xz backdoor would have been a yawn instead of the all-hands fire drill it was at most big orgs. It was scary.

dewey 3 days ago

Looks like they are really writing everything in Swift on the server side.

Repo: https://github.com/apple/security-pcc

  • tessela 3 days ago

    I hope this helps people consider Swift 6 as a viable option for server-side development. It offers many of the modern safety features of Rust, with memory management through ARC that is simpler than Rust's ownership system and more predictable than Go's garbage collector.

    • willtemperley 2 days ago

      I'd love to use Swift on Cloudflare Workers, but SwiftWASM doesn't seem production ready whereas Rust just works (mostly) on workers. Swift on AWS Lambda looks promising though.

  • miki123211 2 days ago

    It's worth keeping in mind that these AI machines run an environment very similar to macOS, XNU kernel and all, and are powered by Apple Silicon. Using Swift in that context makes sense.

    At least according to what we publicly know, no other backend Apple services follow this model.

    • valleyjo 2 days ago

      What do we know about Apple's other backend services? I've worked in compute infra in big tech for 8 years and I don't know anything about Apple's backend.

      • withzombies 2 days ago

        Apple has a lot of Java WebObjects running on old Unix servers

  • danielhep 3 days ago

    Is using something other than Xcode viable? I'd love to do more with Swift but I hate that IDE.

mmastrac 3 days ago

I feel like this is all smoke and mirrors to redirect attention from the likelihood of intentional silicon backdoors that are effectively undetectable. Without open silicon, there's no way to detect that -- say -- when registers r0-rN are set to values [A, ..., N] and a jump to address 0xCONSTANT occurs, additional access is granted to a monitor process.

Of course, this limits the potential attackers to 1) exactly one government (or N number of eyes) or 2) one company, but there's really no way that you can trust remote hardware.

This _does_ increase the trust that the VMs are safe from other attackers, but I guess this depends on your threat model.

  • kfreds 3 days ago

    > I feel like this is all smoke and mirrors to redirect from the likelihood intentional silicon backdoors that are effectively undetectable.

    The technologies Apple PCC is using have real benefits and are most certainly not "all smoke and mirrors". Reproducible builds, remote attestation and transparency logging are individually useful, and the combination of them even more so.

    As for the likelihood of Apple launching Apple PCC to redirect attention from backdoors in their silicon, that seems extremely unlikely. We can debate how unlikely, but there are many far more likely explanations. One is that Apple PCC is simply good business. It'll likely reduce security costs for Apple, and strengthen the perception that Apple respects users' privacy.

    > when registers r0-rN are set to values [A, ..., N] and a jump to address 0xCONSTANT occurs

    I would recommend something more deniable, or at the very least something that can't easily be replayed. Put a challenge-response in there, or attack the TRNG. It is trivial to make a stream of bytes appear random while actually being deterministic. Such an attack would be more deniable, while also allowing a passive network attacker to read all user data. No need to get code execution on the machines.
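
    As a toy illustration of that last point (entirely hypothetical, not a claim about any real chip): a "TRNG" whose output is really a keyed HMAC stream passes black-box statistical randomness tests, yet is fully reproducible by whoever holds the seed.

      import hmac, hashlib

      IMPLANT_SEED = b"known only to the attacker"  # hypothetical implant key

      def backdoored_trng(counter: int, nbytes: int = 32) -> bytes:
          # Output looks uniformly random to any statistical test, but the
          # attacker can regenerate every byte from the seed and counter.
          return hmac.new(IMPLANT_SEED, counter.to_bytes(8, "big"),
                          hashlib.sha256).digest()[:nbytes]

      # Every "random" session key the device ever produced can later be
      # re-derived passively, with no code execution on the machine.
      session_key = backdoored_trng(counter=1)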

    • formerly_proven 3 days ago

      Apple forgot to disable some cache debugging registers a while back, which in effect was similar to what GP described, although exploitation required root privileges and would allow circumventing their in-kernel protections; protections most other systems do not have. (And the attackers still didn't manage to achieve persistence, despite having beyond-root privileges.)

      • kfreds 3 days ago

        > Apple forgot to disable some cache debugging registers a while back which in effect was similar to something GP described

        Thank you for bringing that up. Yes, it is an excellent example that proves the existence of silicon vulnerabilities that allow privilege escalation. Who knows whether it was left there intentionally or not, and if so by whom.

        I was primarily arguing that (1) the technologies of Apple PCC are useful and (2) it is _very_ unlikely that Apple PCC is a ploy by Apple to direct attention away from backdoors in the silicon.

  • stouset 3 days ago

    If you take as a fundamental assumption that all your hardware is backdoored by Mossad who has unlimited resources and capacity to intercept and process all your traffic, the game is already lost and there’s no point in doing anything.

    If instead you assume your attackers have limited resources, things like this increase the costs attackers have to spend to compromise targets, reducing the number of viable targets and/or the depth to which they can penetrate them.

    One of these threat models is actually useful.

    • gigel82 2 days ago

      Some of us just assume Apple itself is a bad actor planning to use and sell customer data for profit; makes all of this smoke and mirrors like GP said.

      There is absolutely no technical solution where Apple can prove our data isn't exfiltrated as long as this is their software that runs on their hardware.

      • stouset 2 days ago

        You have actually set up a completely impossible-to-win scenario.

        I can advertise a service running on open hardware with open software. Unless you personally come inspect my datacenters to verify my claims, you’ll never be happy. Even then you need to confirm that I’m not just sending your traffic here only when you’re looking, and sending it to my evil backdoored hardware when you aren’t.

        At some point you have to trust that an operator is acting in good faith. You need to trust that your own hardware wasn't backdoored by the manufacturer. You need to trust that the software you're running is faithfully compiled from the source code you haven't personally inspected.

        If you don’t trust Apple’s motives, that’s certainly your prerogative. But don’t act like this ridiculous set of objections would suddenly end if they only just used RISC-V and open source. I would bet my life’s savings that you happily use services from other providers you don’t hold to this same standard.

        • gigel82 17 hours ago

          I'm looking for a middle ground. I need to use and trust hardware from vendors like Apple but I use as few of their services as possible (and verify that with firewalls and traffic inspection).

          My concern here is with Apple Intelligence and this wishy-washy hybrid approach where some of the time your data is sent to the "private cloud" and some of the time it's processed locally on device. I absolutely hate that and need a big switch in Settings that completely turns off all cloud processing of data; but given how much they're spending on advertising how "private" their cloud is I suspect they plan to not make that optional at all (not just opt-out by default). At that point, all the photos you take might be sent to their cloud for "beautification" or whatever and there's no way to know whether they're also analyzed for other things or sent out to the CCP to make sure you're not participating in a protest against Xi.

        • Jerrrrrrry a day ago

          nope.

          FUD.

          Diffie Hellman, Homomorphic encryption, zero knowledge proofs, decentralized triple (sender, messenger, receiver) layered protocol.

          Faraday caged, deafened power supply, Heartbeat/pulse sensors, proximity sensors, voltage sensors, EM/radio triggers, dead-man switch, all reporting with constantly rotating codes.

          Multiple servers in different locations, depending on threat model because of jurisdiction, with XOR'd secrets, with random access to memory to obfuscate the real address if it needs zeroed/oned/zeroed, Da Vinci codex style.

          Make access directly tied to reputation/staked interest/invitation, with subtle canaries and watermarks.

          Even if it got super-chilled and no noticeable voltage disruption, and someone didn't set off any other alarm bells, they still have to get the other machine within the timeout.

          And you could just do 3/5 multi-sig, and apply CAP theorem.

          But if the best people in the world can't do it ("for more than a year"), then there is something less than ideal in the above hypothetical.

          • stouset a day ago

            I am (or was, at this point) a cryptographer.

            Throwing random cryptography buzzwords at a problem does not magically create a secure solution.

            Even if you had a genie to give you all of this, you’d absolutely want to include and build upon the work Apple is doing here.

            • Jerrrrrrry a day ago

                >Throwing random cryptography buzzwords at a problem does not magically create a secure solution.
              
              
              no but any more words between those naughty ones and im pushing my quota

                Diffie Hellman, 
              
              TIFU, handshake, auth between two+ parties

                Homomorphic encryption, zero knowledge proofs,
              
              
              Doesn't publicly exist in a useful manner; any implementations likely will be ITAR'd or NIST-moled. Allows verifiable but anonymous computation, trust-less computing.

                decentralized triple (sender, messenger, receiver) layered protocol.
              
              
              like tor/i2p/other "pony express" decentralized trustless transport protocols - pass on what you cannot decrypt.

                Faraday caged, deafened power supply, Heartbeat/pulse sensors, proximity sensors, voltage sensors, EM/radio triggers, dead-man switch, all reporting with constantly rotating codes.
              
              ^forgot to add bluetooth / orientation / volume kill-switches for physical security.

                >Even if you had a genie to give you all of this,
              
              
              lets not joke the culture, they are the best at math

                >you’d absolutely want to include and build upon the work Apple is doing here.
              
              
              I've described a modern da Vinci codex and you think it'd be better in a safe to be reverse-engineered.

      • rnts08 2 days ago

        Anyone assuming otherwise is just foolish. No mega-corp is protecting the individual's privacy when developing products.

    • Jerrrrrrry 2 days ago

      Soviets used typewriters.

      American Lawyers of the highest pedigree (HNWI) don't even use email.

      Your hardware is back-doored, as Intel is named "Intel" for a (nearly too poignant) reason.

      • BenFranklin100 2 days ago

        Don’t forget the stealth black helicopters with zoom lenses hovering overhead.

        • Jerrrrrrry 2 days ago

          Remember lurkers:

            they glow in the dark, you can see em in your driving.
          
            run em over, thats what ya do
          
          
          
            The court entered this ruling despite testimony from an attorney who stated, “[b]ecause of the [FISA Amendments Act], we now have to assume that every one of our international communications may be monitored by the government.”  Id., 133 S.Ct. at 1148.
          
          https://www.mcslaw.com/firm-news/expectation-surveillance-ca...

            The gold standard for WAPS is the Gorgon Stare system which is deployed aboard the Reaper UAS. The current version of Gorgon Stare uses five electro-optical and four infrared cameras to generate imagery from 12 different angles. Gorgon Stare can provide a continuous city-sized overall picture, multiple sub-views of the overall field and what are high resolution “chipouts” of individual views, each of which can be streamed in real time to multiple viewers. A single Gorgon Stare pod can generate two terabytes of data a day.
          
          
          https://lexingtoninstitute.org/wide-area-persistent-surveill...

            "After scandals with the distribution of secret documents by WikiLeaks, the exposes by Edward Snowden, reports about Dmitry Medvedev being bugged during his visit to the G20 London summit (in 2009), it has been decided to expand the practice of creating paper documents," the source said.
          
          
          https://www.bbc.com/news/world-europe-23282308

            Since 2008, most of Intel’s chipsets have contained a tiny homunculus computer called the “Management Engine” (ME). The ME is a largely undocumented master controller for your CPU: it works with system firmware during boot and has direct access to system memory, the screen, keyboard, and network. All of the code inside the ME is secret, signed, and tightly controlled by Intel. Last week, vulnerabilities in the Active Management (AMT) module in some Management Engines have caused lots of machines with Intel CPUs to be disastrously vulnerable to remote and local attackers.
          
          https://www.eff.org/deeplinks/2017/05/intels-management-engi...
        • Jerrrrrrry 2 days ago

          They are sky blue, usually drones, and "zoom lenses" is a lil bit of an understatement.

  • kmeisthax 3 days ago

    The economics of silicon manufacturing and Apple's own security goals (including the security of their business model) restrict the kinds of backdoors you can embed in their servers at that level.

    Let's assume Apple has been compromised in some way and releases new chips with a backdoor. It's expensive to insert extra logic into just one particular spin of a chip; that involves extra tooling costs that would show up as noticeable line items and surface in discovery were Apple to be sued over false claims. So it needs to be on all the chips, not just a specific "defeat PCC" spin of their silicon. So they'd be shipping iPads and iPhones with hardware backdoors.

    What happens when those backdoors inevitably leak? Well, now you have a trivial jailbreak vector that Apple can't patch. Apple's security model could be roughly boiled down to "our DRM is your security"; while they also have lots of actual security, they pride themselves on the fact that they have an economic incentive to lock the system down to keep both bad actors and competing app stores out. So if this backdoor was inserted without the knowledge of Apple management, there are going to be heads rolling. And if it was, then they're going to be sued up the ass once people realize the implications of such a thing, because Tim Cook went up on stage and promised everyone they were building servers that would refuse to let them read your Siri queries.

    • mike_hearn 2 days ago

      Unfortunately that's not the case.

      All remote attestation technology is rooted in a PKI (the DCA certificate authority in this case). There's some data somewhere that simply asserts that a particular key was generated inside a CPU, and everything is chained off that. There's currently no good way to prove this step, so you just have to take it on faith. Forge such an assertion and you can sign statements that device X is actually a Y, and it's game over; it's not detectable remotely.

      Therefore, you must take on faith the organization providing the root of trust i.e. the CPU. No way around it. Apple does the best it can within this constraint by trying to have numerous employees be involved, and there's this third party auditor they hired, but that auditor is ultimately engaging in a process controlled by Apple. It's a good start but the whole thing assumes either that Apple employees will become whistleblowers if given a sufficiently powerful order, or that the third party auditor will be willing and able to shut down Apple Intelligence if they aren't satisfied with the audit. Given Apple's legal resources and famously leak-proof operation, is this a convincing proposition?

      Conventional confidential computing conceptually works, because the people designing and selling the CPUs are different to the people deploying them to run confidential workloads. The deployers can't forge an attestation (assuming absence of bugs) because they don't have access to the root signing keys. The CPU makers could, theoretically, but they have no reason to because they aren't running any confidential workloads so there's no data to steal. And they are in practice constrained by basic problems like not knowing what CPU the deployers actually have, not being able to force changes to other people's hardware, not being able to intercept the network connections and so on.

      So you need a higher authority that can force them to conspire which in practice means only the US government.

      In this case, Apple is doing everything right except that the root of trust for everything is Apple itself. They can publish in their log an entry that claims to be an Apple CPU but for which the key was generated outside of the manufacturing process, and that's all it takes to dismantle the entire architecture. Apple know this and are doing the best they can within the "don't team up with competitors" constraint they obviously are placed under. But trust is ultimately a human thing and the purpose of corporations is to let us abstract and to some extent anthropomorphize large groups. So I'm not totally sure this works, socially.

      • kfreds 2 days ago

        Hi Mike! Long time no see.

        > simply asserts that a particular key was generated inside a CPU ... There's currently no good way to prove this step

        Yes, but there are better and worse ways to do it. Here's how I think about it. I know you know some of this but I'll write it out for other HN readers as well.

        Let's start with the supply chain for an SoC's master key. A master key that only uses entropy from an on-die PUF is vulnerable to mistakes and attacks on the chip design as well as the process technology. An on-die master key memory that is provisioned by the fab, during packaging, or by the eventual buyer of the SoC is vulnerable to mistakes and attacks during that provisioning step.

        I think state-of-the-art would be something like:

        - an on-die key memory, where the storage is in the vias, using antifuse technology that prevents readout of the bits using x-ray,

        - provisioned using multiple entropy sources controlled by different supply chains, such as (1) an on-die PUF, (2) an on-die TRNG, (3) an off-die TRNG controlled by the eventual buyer,

        - provisioned by the eventual buyer and not earlier
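
        A rough sketch of that provisioning step (hypothetical names; a real design would do this in silicon and burn the result into the antifuse storage, not run Python):

          import hashlib, secrets

          def provision_master_key(puf_bits: bytes, on_die_trng: bytes,
                                   buyer_trng: bytes) -> bytes:
              # Derive the master key from entropy sources controlled by
              # different supply chains; compromising any single source is
              # not enough to predict the key.
              return hashlib.sha256(b"master-key-v1" + puf_bits +
                                    on_die_trng + buyer_trng).digest()

          # The eventual buyer contributes their own entropy at provisioning time.
          key = provision_master_key(puf_bits=secrets.token_bytes(32),
                                     on_die_trng=secrets.token_bytes(32),
                                     buyer_trng=secrets.token_bytes(32))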

        As for the cryptographic remote attestation claim itself, such as a TPM Quote: it doesn't have to carry only one signature.

        As for detectability, discoverability and deterrence: transparency logs make targeted attacks discoverable. If all relevant cryptographic claims are tlogged, including claims related to inventory and provisioning of master keys, an attacker has to circumvent quite a lot of safeguards to remain undetected.

        Finally, if we assume that the attacker is actually at Apple - management, a team, a disgruntled employee, saboteurs employed by competitors - what this type of architecture does is it forces the attacker to make explicit claims that are more easily falsifiable than without such an architecture. And multiple people need to conspire in order for an attack to succeed.

        • mike_hearn 2 days ago

          Hello! I'm afraid I don't recognize the username but glad to know we've met :) Feel free to email me if you'd like to greet under another name.

          Let's agree that Apple are doing state-of-the-art work in terms of internal manufacturing controls and making those auditable. I think actually the more interesting and tricky part is how to manage software evolution. This is something I've brought up with [potential] customers in the past when working with them on SGX-related projects: for this to make sense, socially, there has to be a third-party audit of not only the software in the abstract but each version of the software. And that really needs to be enforced by the client, which means every change to the software needs to be audited. This is usually a non-starter for most companies because they're afraid it'd kill velocity, so for my own experiments I looked at in-process sandboxing and the like to try and restrict the TCB even within the remotely attested address space.

          In this case Apple may have an advantage because the software is "just" doing inferencing, I guess, which isn't likely to be advantageous to keep secret, and inferencing logic is fairly stable, small and inherently sandboxable. It should be easy to get it to be audited. For more general application of confidential/private computing though it's definitely an issue.

          The issue of multiple Apple devs conspiring isn't so unlikely in my view. Bear in mind that end-to-end encryption made similar sorts of promises that tech firm employees can't read your messages, but the moment WhatsApp decided that combating "rumors" was the progressive thing to do they added a forwarding counter to messages so they could stop forwarding chains. Cryptography 101: your adversary should not be able to detect that you're repeating yourself; failed, just like that. The more plausible failure mode here is therefore not the case of spies or saboteurs but rather a deliberate weakening of the software boundary to leak data to Apple because executives decide they have a moral duty to do so. This doesn't even necessarily have to be kept secret. WhatsApp's E2E forwarding policy is documented on their website, they announced it in a blog post. My experience is that 99% of even tech workers believe that it does give you normal cryptographic guarantees and is un-censorable as a consequence, which just isn't the case.

          Still, all this does lay the foundations for much stronger and more trustworthy systems, even if not every problem is addressed right away.

    • Jerrrrrrry 2 days ago

        >backdoors inevitably leak? Well, now you have a trivial jailbreak vector
      
      the discoverability of an exploit vector has little to do with its triviality, especially when considering the context (nation-state APTs)

      For years you could hold the Enter key down for 40 seconds to log in to certain Linux server distros. No one knew; ez to do.

      You can have a chip inside your chip that only accepts encrypted and signed microcode and has control over the superior chip. Everyone knows - nothing you can do.

      Nation-state actors, however, can facilitate either; APTs can forge fake digital forensics that imply another motive/state/false flag.

  • yalogin 3 days ago

    This is an interesting idea. However what does open hardware mean? How can you prove that the design or architecture that was “opened” is actually what was built? What does the attestation even mean in this scenario?

    • kfreds 3 days ago

      > what does open hardware mean?

      Great question. Most hardware projects I've seen that market themselves as open source hardware provide the schematic and PCB design, but still use ICs that are proprietary. One of my companies, Tillitis, uses an FPGA as the main IC, and we provide the hardware design configured on the FPGA. Still, the FPGA itself is proprietary.

      Another aspect to consider is whether you can audit and modify the design artefacts with open source tooling. If the schematics and PCB design are stored in a proprietary format I'd say that's slightly less open source hardware than if the format was KiCad EDA, which is open source. Similarly, in order to configure the HDL onto the FPGA, do you need to use 50 GB of proprietary Xilinx tooling, or can you use open tools for synthesis, place-and-route, and configuration? That also affects the level of openness in my opinion.

      We can ask similar questions of open source software. People who run a Linux distribution typically don't compile packages themselves. If those packages are not reproducible from source, in what sense is the binary open source? It seems we consider it to be open source software because someone we trust claimed it was built from open source code.

      • threeseed 3 days ago

        And what attestation do you have that the FPGA isn't compromised?

        We can play this game all the way down.

        • kfreds 3 days ago

          You're right. It is very hard, if not impossible, to get absolute guarantees. Having said that, FPGAs can make supply chain attacks harder. See my other comments in this thread.

      • yalogin 2 days ago

        No, you trust the HW, and so, starting with secure boot, you can get measurements cryptographically vouched for. Those you can prove and verify.
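
        As a rough sketch of what "measurements cryptographically vouched for" means (simplified PCR-extend-style hash chaining, not any vendor's exact format):

          import hashlib

          def extend(measurement: bytes, component: bytes) -> bytes:
              # Each boot stage hashes the next component into the running
              # measurement before handing over control, so the final value
              # commits to the whole boot chain, in order.
              return hashlib.sha256(measurement +
                                    hashlib.sha256(component).digest()).digest()

          pcr = bytes(32)  # reset value
          for stage in [b"boot ROM", b"firmware", b"kernel", b"workload"]:
              pcr = extend(pcr, stage)

          # The secure element signs `pcr` in its attestation quote; a verifier
          # recomputes the same chain from published artifacts and compares.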

        So at some point you have no option but to trust something/someone.

    • dented42 3 days ago

      This is my thought exactly. I really love the idea of open hardware, but I don't see how it would protect against covert surveillance. What's stopping a company/government/etc from adding surveillance to an open design? How would you determine that the hardware being used is identical to the open hardware design? You still ultimately have to trust that the organisations involved in manufacturing/assembling/installing/operating the hardware in question haven't done something nefarious. And that brings us back to square one.

      • mdhb 3 days ago

        This website in particular tends to get very upset and is all too happy to point out irrelevant counterexamples every time I point this out, but the actual ground truth of the matter here is that you aren't going to find yourself on a US intel targeting list by accident, and unless you are doing something incredibly stupid you can use Apple / Google cloud services without a second thought.

      • kfreds 3 days ago

        > How would you determine that the hardware being used is identical to the open hardware design?

        FPGAs can help with this. They allow you to inspect the HDL, synthesize it and configure it onto the FPGA chip yourself. The FPGA chip is still proprietary, but by using an FPGA you are making certain supply chain attacks harder.

        • warkdarrior 3 days ago

          How do you know the proprietary part of the FPGA chip performs as expected and does not covertly gather data from the configured gates?

          • kfreds 3 days ago

            > How do you know the proprietary part of the FPGA chip performs as expected and does not covertly gather data from the configured gates?

            We don't, but using an FPGA can make supply chain attacks harder.

            Let's assume you have a chip design for a microcontroller and you do a tapeout, i.e. you have chips made. An attacker in your supply chain might attack your chip design before it makes it to the fab, the attacker might be at the fab, or they might swap out the chips after you've placed them on your PCB.

            If you use an FPGA, your customer could stress test the chip by configuring a variety of designs onto the FPGA. These designs should stress test timing, compute and memory at the very least. This requires the attacker's chip to perform at least as well as the FPGA you're using, while still having the same footprint. An attacker might stack the real FPGA die on top of the attacker's die, but such an attack is much easier to detect than a few malicious gates on a die. As for covertly gathering or manipulating data, on an FPGA you can choose where to place your cores. That makes it harder for the attacker to predict where on the FPGA substrate they should place probes, or which gates to attack in order to attack your TRNG, or your master key memory. Those are just some examples.

            If you're curious about this type of technology or line of thinking you can check out the website of one of my companies: tillitis.se

  • greenthrow 3 days ago

    If this is your position then you might as well stop using any computing devices of any kind. Which includes any kind of smart devices. Since you obviously aren't doing that, then you're trying to hold Apple to a standard you won't even follow yourself.

    On top of which, your comment is a complete non-sequitur to the topic at hand. You could reply with this take to literally any security/privacy related thread.

  • anonymousDan 2 days ago

    No one should consider this any protection against nation state actors who are in collaboration against Apple. That doesn't mean it's pointless. Removing most of the cloud software stack from the TCB and also protecting against malicious or compromised system administrators is still very valuable for people who are going to move to the cloud anyway.

  • astrange 2 days ago

    Transparency through things like attestation is capable of proving that nothing unexpected is running; for instance, you can provide power/CPU-time numbers or hashes of arbitrary memory, and this can make it arbitrarily hard to run extra code, since doing so would take more time.

    And the secure routing does make most of these attacks infeasible.

  • rvnx 3 days ago

    Concrete example of such backdoors: https://www.bloomberg.com/news/features/2018-10-04/the-big-h...

    The system is protecting you against Apple employees, but not against law enforcement.

    No matter how many layers of technology you put in place, at the end of the day, US companies have to respect US law.

    The requests can be routed to specific investigation / debugging / beta nodes.

    All it takes is turning on a flag for specific users.

    It's not like ultimate privacy, but at least it will prevent Apple engineers from snooping into private chatlogs.

    (like some pervert at Gmail was stalking a little girl https://www.gawkerarchives.com/5637234/gcreep-google-enginee... , or Zuckerberg himself reading chatlogs https://www.vanityfair.com/news/2010/03/mark-zuckerberg-alle... )

    • bri3d 3 days ago

      The Bloomberg SuperMicro implant is an exceptionally poor example here: it's been widely criticized and never corroborated, and Apple's Private Cloud Compute architecture has extensive mitigations against every type of purported attack in the various forms the SuperMicro story has taken. UEFI/BIOS backdoors, implanted chips affecting the BMC firmware, and malicious/tampered storage device firmware are all accounted for in the Private Cloud Compute trust model.

    • astrange 2 days ago

      That article is literally completely made up and didn't happen.

      > The requests can be routed to specific investigation / debugging / beta nodes.

      No, this is not possible with the design of PCC; they can't control how your requests are routed and there cannot be nodes with extra debugging.

    • dboreham 2 days ago

      The threat is real but that article is disinformation.

    • SheinhardtWigCo 3 days ago

      > Concrete example

      This has not been corroborated and Bloomberg has not produced any supporting evidence.

    • 0xCMP 3 days ago

      iirc, no real proof was ever provided for that bloomberg article (despite it also never being retracted). many looked for the chips and from everything I heard there was never a concrete situation where this was discovered.

      Doesn't make the possible threat less real (see recent news in Lebanon), but that story in particular seems to have not stood up to closer inquiry.

  • SheinhardtWigCo 3 days ago

    Yeah, but, considering the sheer complexity of modern CPUs and SoCs, this is still the case even if you have the silicon in front of you. That ship sailed some time ago.

    • kfreds 2 days ago

      It depends on what you want to do. If all you're trying to do is produce an Ed25519 signature you could use something like the Tillitis TKey. It's a product developed by one of my companies. As I've mentioned elsewhere in this thread it is open source hardware in the sense that the schematic, PCB design _and_ hardware design (FPGA configuration) are all open source. Not only that, the FPGA only has about 5000 logic cells. This makes it feasible for an individual to audit the software and the hardware it is running on to a much greater extent than any other system available for purchase. At least I'm not aware of a more open and auditable system than ours.

      • SheinhardtWigCo 2 days ago

        How is that relevant to Private Cloud Compute, though?

        • kfreds 2 days ago

          You're right that it isn't. I assumed that your "..sheer complexity of modern CPUs.." statement was in response to "Without open silicon, there's no way to detect..". That's what prompted my response.

          I realise now that you were probably responding to "This _does_ increase the trust that the VMs are safe from other attackers".

  • threeseed 3 days ago

    You have to be serious here.

    The level of conspiracy needed to keep something like this a secret would be unprecedented.

    And if Apple were able to do that, why wouldn't they just backdoor iOS/OSX instead of baking it into the hardware?

    • azinman2 2 days ago

      Or just not make any hardware promises, and do it like iCloud or OpenAI or Gemini or any other cloud product.

  • layer8 3 days ago

    With virtualized hardware the backdoor doesn’t even strictly need to be in silicon.

    • astrange 2 days ago

      That's detectable through timing measurements, for the same reason you can't have data-dependent operations in cryptography.

      • saagarjha 2 days ago

        Ok, where are the timing measurements?

        • astrange 2 days ago

          In the hardware secure boot chain ;)

          You do have to trust the SEP/TPM here, it sounds like. That is verified by having a third party auditor watch them get installed, and by the anonymous proxy routing thingy making it so they can't fake only some of them but would have to fake all of them to be reliable.

          If they were okay with it being unreliable, then clients could tell via timing, because some of the nodes would perform differently, or they'd perform differently depending on which client or what prompt it was processing. It's surprisingly difficult to hide timing differences; see, e.g., all those Spectre cache-reading attacks on browsers.
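
          A toy version of the kind of check a client could run (hypothetical; real detection would need careful statistics and a trusted baseline):

            import statistics, time

            def time_request(send_request, trials: int = 50) -> list[float]:
                # Collect response latencies for identical requests.
                samples = []
                for _ in range(trials):
                    start = time.perf_counter()
                    send_request()
                    samples.append(time.perf_counter() - start)
                return samples

            def looks_anomalous(samples: list[float], baseline_median: float,
                                tolerance: float = 0.2) -> bool:
                # Flag a node whose median latency deviates noticeably from the
                # fleet baseline; hidden extra work tends to show up as skew.
                deviation = abs(statistics.median(samples) - baseline_median)
                return deviation > tolerance * baseline_median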

          It does look like there's room to add more verification (like the client asking the server to do more intensive proofs, or homomorphic encryption). Could always go ask for it.

magicloop 2 days ago

I was studying the code they posted on GitHub. One line of attack is to study the bugs/workarounds in the code.

For example, https://github.com/search?q=repo%3Aapple%2Fsecurity-pcc%20rd..., lists out all references to `rdar` which is a link schema for Apple's bug management system.

Also, it is clear that the code is cross platform (it references iOS and macOS). So the code here gives clues as to the security operation of iOS as well in case you wanted to do iOS security research.

It is lovely to see the middleware here written in Swift. It is quite chunky. Reading all that XPC code gives me the shivers (as I've personal experience with how tricky that can get).

Overall it is a very interesting offering. I wish I had two weeks to burn through the details... [I am the author of The Road to Zero, and iOS Crash Dump Analysis].

NightlyDev 2 days ago

This marketing is dumb. If Apple really believes that not even they themselves can get access to the information processed on the platform, they could put their money where their mouth is: increase the max bounty reward from $50,000 to $50,000,000,000, with no rule other than that if you can get access to users' request data without having the phone it's sent from, you get the money and Apple will not legally pursue you.

If it is as secure as they say, it doesn't matter if the reward is all the money Apple has, because nobody can claim it. A max bounty of $50,000 for "Accidental or unexpected data disclosure due to deployment or configuration issue" is silly low.

aabhay 3 days ago

A lot of people seem to be focusing on how this program isn’t sufficient as a guarantee, but those people are missing the point.

The real value of this system is that Apple is making legally enforceable claims about their system. Shareholders can, and do, sue companies that make inaccurate claims about their infrastructure.

I’m 100% sure that Apple’s massive legal team would never let this kind of program exist if _they_ weren’t also confident in these claims. And a legal team at Apple certainly has both internal and external obligations to verify these claims.

America’s legal system is in my opinion what allows the US to dominate economically, creating virtuous cycles like this.

  • layer8 3 days ago

    Unfortunately that doesn’t help anyone outside the US, not because of differences in the legal systems, but because as an American company Apple will always have to defer to the US agencies first.

    • aabhay 2 days ago

      I'm pretty sure a foreign shareholder can sue in a US court of law. While I agree that "shareholder" in this case means an extra-massive moneyed entity, I firmly believe that even this provides a deterrent effect. At the very least, for the scale of operations in the US, there's an extremely high-trust environment. That level of trust doesn't exist even for orders of magnitude smaller issues in most other countries.

      • saagarjha 2 days ago

        Yes, this is why we don't see bad behavior and willful abuse of the legal system by companies in the US.

kfreds 3 days ago

Wow! This is great!

I hope you'll consider adding witness cosignatures on your transparency log though. :)

1oooqooq 2 days ago

smoke and mirrors.

there is no private other-people's-computer, by definition.

what those attempts try to do is change the definition.

nerdjon 2 days ago

Maybe it is too soon, but I would love it if they included a "security research discussion" section or something: a place linking to people who talk about the system (good and bad) and understand this far better than I do.

I have been looking for that, but I guess it will likely be a while longer before it exists.

KaiserPro 2 days ago

The thing that is interesting about PCC is what appears to be a microkernel that runs the virtualisation.

That to me is the innovation here; everything else is just standard bits.

I haven't had time to dig through the splunklogger, but I had hoped that logging was mostly disabled and you could only emit metrics. That was my reading of the PCC manifesto.

ram_rattle 2 days ago

Something similar was published by Samsung, but it's sad that they are not as agile as Apple in this area:

https://research.samsung.com/blog/The-Next-New-Normal-in-Com...

  • axoltl 2 days ago

    This doesn't look to be the same. Apple's talking about performing computation in their cloud in a secure, privacy-preserving fashion. Samsung's paper seems to be just on local enclaves (which Apple's also been doing since iPhone 5S in the form of the Secure Enclave Processor (SEP)).

doulouUS 2 days ago

Will there be SDKs to enable any developer to build things leveraging PCC? Like building a performant RAG system on personal/sensitive data.

gigel82 3 days ago

No amount of remote attestation and "transparency logs" and other bombastic statements like this would make up for the fact that they are fully in control of the servers and the software. There is absolutely no way for a customer to verify their claims that the data is not saved or transferred elsewhere.

So unless they offer a way for us to run the "cloud services" on our own hardware where we can strictly monitor and firewall all network activity, they are almost guaranteed to be misusing that data, especially given Apple's proven track record of giving in to governments' demands for data access (see China).

  • kfreds 3 days ago

    > No amount of remote attestation and "transparency logs" and other bombastic statements like this would make up for the fact that they are fully in control of the servers and the software. There is absolutely no way for a customer to verify their claims that the data is not saved or transferred elsewhere.

    You are right. Apple is fully in control of the servers and the software, and there is no way for a customer to verify Apple's claims. Nevertheless, system transparency is a useful concept. It can effectively reduce the number of things you have to blindly trust to a short and explicit list. Conversely, it forces the operator, in this case Apple, to explicitly lie. As others have pointed out, that is quite a business risk.

    As for transparency logging, it is an amazing technology which I highly recommend you take a look at in case you don't know what it is or how it works. Check out transparency.dev or the project I'm involved in, sigsum.org.

    > they are almost guaranteed to be misusing that data

    That is very unlikely because of the liability, as others have pointed out. They are making claims which the Apple PCC architecture helps make falsifiable.

  • astrange 2 days ago

    > There is absolutely no way for a customer to verify their claims that the data is not saved or transferred elsewhere.

    Transparency logs are capable of verifying that; it's more or less the whole point of them. (Strictly speaking, you can make it arbitrarily expensive to fake it.)

    Also, if they were "transferring your data elsewhere" it would be a GDPR violation. Ironically wrt your China claim, it would also be illegal in China, which does in fact have privacy laws.

    • ls612 2 days ago

      Are transparency logs akin to Certificate Transparency but for signed code? I’ve read through the section a couple times and still don’t fully understand it.

      • astrange 2 days ago

        Yeah, it's a log of all the software that runs on the server. If you trust the secure boot process then you trust the log describes its contents.

        If you don't trust the boot process/code signing system then you'd want to do something else, like ask the server to show you parts of its memory on demand in case you catch it lying to you. (Not sure if that's doable here because the server has other people's data on it, which is the whole point.)

        • kfreds 2 days ago

          One approach would be a chip design where a remote attestation request triggers a hardware interrupt, and the hardware then hashes the contents of memory, more specifically the memory containing the code.
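
          A simplified sketch of the verifier's side of such a scheme (hypothetical helper names; assumes the hardware returns a hash of its code region and that builds are reproducible):

            import hashlib

            def expected_measurement(published_binary: bytes) -> str:
                # Recompute the code measurement from the binary published in
                # the transparency log (requires reproducible builds).
                return hashlib.sha256(published_binary).hexdigest()

            def verify_runtime_attestation(reported_code_hash: str,
                                           published_binary: bytes) -> bool:
                # The chip hashed its in-memory code region on request; accept
                # only if it matches the transparency-logged binary.
                return reported_code_hash == expected_measurement(published_binary)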

          • astrange 2 days ago

            That's not quite enough but yes.

            (You need to prove that the system is showing you the server your data is present on, and not just showing you an innocuous one and actually processing your data on a different evil one.)

    • gigel82 2 days ago

      That makes no sense at all. They control the servers and services entirely; they can choose to emit whatever logs they want into the "transparent logs" and then emit whatever else they don't want into non-transparent logs.

      Even if they were running open source software with cryptographically verified / reproducible builds, it's still running on their hardware (any component or the OS / kernel or even hardware can be hooked into to exfiltrate unencrypted data).

      Companies like Apple don't give a crap about GDPR violations (you can look at their "DMA compliance" BS games to see the lengths they're willing to go to in order to skirt regulations in the name of profit).

      • davidczech 2 days ago

        > they can choose to emit whatever logs they want into the "transparent logs" and then emit whatever else they don't want into non-transparent logs.

        The log is publicly accessible and append-only, so such an event would not go unnoticed. Not sure what a non-transparent log is.

        • gigel82 2 days ago

          Ok, but they write and fully control the closed-source software that appends to the log. How can anyone verify that all the code paths append to the log? I'm pretty sure they can just not append to the log from their ExfiltrateDataForAdvertisment() and ExfiltrateDataForGovernments() functions.

          Maybe I'm not being clear; transparent logs solve the problem of supply chain attacks (that is, Apple can use the logs to some degree to ensure some 3rd party isn't modifying their code), but I'm trying to say Apple themselves ARE the bad actor; they will exfiltrate customer data for their own profit (to personalize ads, or continue building user profiles, or sell to governments and so on).

          • kfreds 2 days ago

            > How can anyone verify that all the code paths append to the log?

            davidczech has already explained it quite well, but I'll try explaining it a different way.

            Consider the verification of a signed software update. The verifier is e.g. apt-get, rpm, macOS Update, Microsoft Update or whatever your OS uses. They all have some trust policy that contains a public key, and the verifier only trusts software signed by that key.

            Now imagine a verifier with a trust policy that mandates that all signed software must also be discoverable in a transparency log. Such a trust policy would need to include:

            - a pubkey trusted to make the claim "I am your trusted software publisher and this software is authentic", i.e. it is from Debian / Apple / Microsoft or whomever is the software publisher.

            - a pubkey trusted to make the claim "I am your trusted transparency log and this software, or rather the publisher's signature, has been included in my log and is therefore discoverable"

            The verifier would therefore require the following in order to trust a software update:

            - the software (and its hash)

            - a signature over the software's hash, done by the software publisher's key

            - an inclusion proof from the transparency log

            There is another layer that could be added called witness cosigning, which reduces the amount of trust you need to place in the transparency log. For more on that see my other comments in this thread.
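
            A rough sketch of that trust policy as code (simplified: the signature checks are stubbed out as booleans, and real logs hash leaves with domain-separation prefixes as in RFC 6962):

              import hashlib

              def verify_inclusion(leaf_hash: bytes, proof, signed_root: bytes) -> bool:
                  # Merkle inclusion proof: hash the leaf up the tree using the
                  # supplied sibling hashes and compare against the log's root.
                  h = leaf_hash
                  for side, sibling in proof:
                      h = hashlib.sha256(sibling + h if side == "L" else h + sibling).digest()
                  return h == signed_root

              def trust_update(software: bytes, publisher_sig_ok: bool,
                               log_sig_ok: bool, proof, signed_root: bytes) -> bool:
                  # Policy: the publisher signed the artifact AND that signed
                  # statement is discoverable in the transparency log, whose root
                  # is signed by the log (and ideally cosigned by witnesses).
                  leaf = hashlib.sha256(software).digest()
                  return (publisher_sig_ok and log_sig_ok
                          and verify_inclusion(leaf, proof, signed_root))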

            • gigel82 2 days ago

              Got it, that all makes sense. My concern is not someone maliciously attempting to infect the software / hardware.

              My concern is that Apple themselves will include code in their officially signed builds that extracts customer data. None of these security measures can protect against that, because Apple is a "trusted software publisher" in the chain.

              All of this is great stuff: Apple makes sure someone else doesn't get the customer data, and they remain the only ones to monetize it.

              • davidczech 2 days ago

                > cannot protect against that because Apple is a "trusted software publisher" in the chain.

                That's the whole point of the transparency log. Anything published, and thus to be trusted by client devices, is publicly inspectable.

                • kfreds 2 days ago

                  No, gigel82 is right. Transparency logging provides discoverability. That does not mean the transparency logged software is auditable in practice. As gigel82 correctly points out, the build hash is not sufficient, nor is the source hash sufficient. The remote attestation quote contains measurements of the boot chain, i.e. hashes of compiled artifacts. Those hashes need to be linked to source hashes by reproducible builds.

                • gigel82 2 days ago

                  Publicly inspectable how? Are you saying their entire server stack will be open source and have reproducible builds?

                  • kfreds 2 days ago

                    My understanding is that Apple PCC will not open source the entire server stack. I might be wrong. So far I haven't seen them mention reproducible builds anywhere, but I haven't read much of what they just published.

                    One of the projects I'm working on, however, intends to enable just that. See system-transparency.org for more. There's also glasklarteknik.se.

                  • davidczech 2 days ago

                    No, but the binaries executed will be available for download.

                    • gigel82 17 hours ago

                      Then shouldn't they allow us to self-host the entire stack? That would surely put me at ease; if I can self-host my own "Apple private cloud" on my own hardware and firewall the heck out of it (inspect all its traffic), that's the only way any privacy claims have merit.

          • davidczech 2 days ago

            > How can anyone verify that all the code paths append to the log? I'm pretty sure they can just not append to the log from their ExfiltrateDataForAdvertisment() and ExfiltrateDataForGovernments() functions.

            I think we have different understandings of what the transparency log is utilized for.

            The log is used effectively as an append-only hash set of trusted software hashes a PCC node is allowed to run, accomplished using Merkle Trees. The client device (iPhone) uses the log to determine if the software measurements from an attestation should be rejected or not.

            https://security.apple.com/documentation/private-cloud-compu...
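
            Roughly, the client-side policy amounts to something like this (a sketch, not Apple's actual code; assumes the device already has a verified view of the log's contents):

              def accept_attestation(attested_measurements: list[str],
                                     logged_measurements: set[str]) -> bool:
                  # Reject the node unless every software measurement in its
                  # attestation appears in the public, append-only log.
                  return all(m in logged_measurements
                             for m in attested_measurements)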

            • gigel82 2 days ago

              One of the replies in this thread sent me to transparency.dev which describes transparency logs as something different. But reading Apple's description doesn't change my opinion on this. It is a supply-chain / MITM protection measure and does absolutely nothing to assuage my privacy concerns.

              Bottom line, I just hope that there will be a big checkbox in the iPhone's settings that completely turns off all "cloud compute" for AI scenarios (checked by default) and I hope it gets respected everywhere. But they're making such a big deal of how "private" this data exfiltration service is that I fear they plan to just make it default on (or not even provide an opt-out at all).

              • davidczech 2 days ago

                > It is a supply-chain / MITM protection measure

                It is so much more than that, but you are entitled to your own opinion.

      • astrange 2 days ago

        > They control the servers and services entirely

        There's a key signing ceremony with a third-party auditor watching; it seems to rely on trusting them together with the secure boot process. But there are other things you can add to this, basically along the lines of making the machine continually prove that it behaves like the system described in the log.

        They don't control all of the service though; part of the system is that the server can't identify the user because everything goes through third party proxies owned by several different companies.

        > Companies like Apple don't give a crap about GDPR violations

        GDPR fines can be up to 4% of the company's yearly global revenue. If you're a cold, logical profit maximizer, you're going to care about that a lot!

        Beyond that, they've published a document saying all this stuff, which means you can sue them for securities fraud if it turns out to be a lie. It's illegal for US companies to lie to their shareholders.

        • gigel82 2 days ago

          "ceremony" is a good choice of word; it's all ceremonial and nonsense; as long as they control the hardware and the software there is absolutely no way for someone to verify this claim.

          Apple has lied to shareholders before; remember those "what happens on your iPhone, stays on your iPhone" billboards they used back in the day to fool everyone into thinking Apple cares about privacy? A couple of years later, they were proudly announcing how everyone's iPhone would scan their files and literally send them to law enforcement if they matched some opaque government-controlled database of hashes (yes, they backed out of that plan eventually, but not before massive public outcry and a few "you're holding it wrong" explanations).

          • astrange 2 days ago

            > Apple has lied to shareholders before

            So sue them.

            > how everyone's iPhone will scan their files and literally send them to law enforcement

            That was a solution for if you opted into a cloud service, was a strict privacy improvement because it came alongside end-to-end encryption in the cloud, and I think was mandated by upcoming EU regulations (although I think they changed the regulations so it was dropped.)

            Note in the US service providers are required to report CSAM to NCMEC if they see it; it's literally the only thing they're required to do. But NCMEC is not "law enforcement" or "government", it's a private organization specially named in the law. Very important distinction because if anyone does give your private information to law enforcement you'd lose your 4th Amendment rights over it, since the government can share it with itself.

            (I think it may actually be illegal to proactively send PII to law enforcement without them getting a subpoena first, but don't remember. There's an exception for emergency situations, and those self service portals that large corporations have are definitely questionable here.)

ngneer 3 days ago

How is this different than a bug bounty?

  • alemanek 3 days ago

    Well, they are providing a dedicated environment from which to attack their infrastructure. They also have a section called "Apple Security Bounty for Private Cloud Compute" in the linked article, so this is a bug bounty + additional goodies to help you test their security.

  • floam 3 days ago

    There is a bug bounty too, but the ability to run the same infrastructure, OS, and models locally is big.

  • davidczech 3 days ago

    Similar, but a lot of documentation is provided, along with source code for cross-reference and a VM-based research environment, instead of having to apply for a physical security research device.