In preparation for launching its new Private Cloud Compute service, Apple has announced a significant reward for security researchers who find vulnerabilities in its private AI cloud infrastructure. As outlined in a recent post on Apple’s security blog, the company will pay up to $1 million for reports of exploits that enable remote execution of malicious code on its Private Cloud Compute servers. The program is intended to harden Apple’s AI cloud by giving outside security experts an incentive to identify and report potential weaknesses.
The $1 million maximum is reserved for the most severe vulnerabilities, but the program also pays up to $250,000 for exploits that can expose sensitive user data or the confidential prompts that customers submit to the AI cloud, and up to $150,000 for exploits that can access sensitive information from a privileged network position. According to Apple, any security issue with a significant impact may qualify for a reward even if it falls outside the predefined categories.
Apple clarified in its post that the top payouts are reserved for vulnerabilities that compromise user data or inference-request data outside the secure boundary of Private Cloud Compute. The bounty extends Apple’s existing bug bounty program and further underscores its push to strengthen device and data security. In recent years, Apple has taken other proactive steps, including building a special iPhone configured for security research, to support vulnerability testing and harden its defenses against spyware.
Alongside the announcement, Apple published more detail about the security of its Private Cloud Compute service, together with technical documentation and source code. The service extends Apple Intelligence, the company’s on-device AI system, into the cloud, handling AI tasks too demanding to run on the device itself while preserving the privacy protections of on-device processing.