United States Securities and Exchange Commission
Washington, D.C. 20549
NOTICE OF EXEMPT SOLICITATION
Pursuant to Rule 14a-103
Name of the Registrant: Apple Inc.
Name of persons relying on exemption: National Legal and Policy Center
Address of persons relying on exemption: 107 Park Washington Court, Falls Church, VA 22046
Written materials are submitted pursuant to Rule 14a-6(g)(1) promulgated under the Securities Exchange Act of 1934. The filer of this notice does not beneficially own more than $5 million of securities in the Registrant company. Submission is not required of this filer under the terms of the Rule but is made voluntarily in the interest of public disclosure and consideration of these important issues.
PROXY MEMORANDUM
TO: Shareholders of Apple Inc.
RE: The case for voting FOR Proposal No. 4 on the 2025 Proxy Ballot (“Report on Ethical AI Data Acquisition and Usage”)
This is not a solicitation of authority to vote your proxy. Please DO NOT send us your proxy card; National Legal and Policy Center is not able to vote your proxies, nor does this communication contemplate such an event. NLPC urges shareholders to vote for Proposal No. 4 following the instructions provided on management's proxy mailing.
The following information should not be construed as investment advice.
Photo credits follow at the end of the report.
National Legal and Policy Center (“NLPC”) urges shareholders to vote FOR Proposal No. 4, which it sponsors, on the 2025 proxy ballot of Apple Inc. (“Apple” or the “Company”). The “Resolved” clause of the proposal states:
Shareholders request the Company to prepare a report, at reasonable cost, omitting proprietary or legally privileged information, to be published within one year following the 2025 Annual
Meeting and updated annually thereafter. This report should assess the risks to the Company's operations and finances, and to the greater public health, safety and welfare, presented by Apple’s unethical or improper usage of external data in the development and training of its artificial intelligence projects and implementation; what steps the Company takes to mitigate those risks; and how it measures the effectiveness of such efforts.
Introduction
Artificial intelligence (AI) is one of the most transformative innovations in modern history – reshaping industries, revolutionizing business practices, and influencing how individuals and governments engage with technology. AI’s potential to improve everything from healthcare to financial services is undeniable, yet it comes with great risks. Apple, with its substantial AI investments, stands at a pivotal juncture where adopting strong privacy-centered policies could set it apart as a trusted leader.
Data is the lifeblood of AI. Machine-learning models require massive datasets to learn, adapt,
and improve their performance over time. This insatiable hunger for data drives developers to seek out large quantities of information via the Internet and other digital sources, some of which may be obtained unethically or illegally. AI models may incorporate data on human behavior, speech, images, and other sensitive content, making their development and deployment a privacy concern.
As AI matures, so does public awareness of AI data ethics. Consumers, regulators, and governments increasingly ask tough questions about where AI developers obtain the data used to train their models. Data scraping, unauthorized collection, and the use of proprietary or copyrighted content without permission have become focal points in the debate over AI ethics. Without proper internal checks and balances, Apple’s AI development may violate data privacy laws, infringe on intellectual property rights, or utilize personal information without consent.
The report requested in the Proposal would increase shareholder value by improving disclosure of Apple’s strategy for ethical usage of user data in AI development. This report seeks to encourage Apple to adopt a more ambitious pro-privacy stance, which may provide the Company a strong competitive advantage.
Privacy and Ethical Challenges Facing Apple in AI Development
Apple is a leading player in the AI space, thanks largely to its partnership with OpenAI. The Company’s position provides a platform to define expectations for responsible AI development.
Apple’s organizational size, scope, and influence – the Company is one of the largest in the world by market capitalization,1 revenue,2 and headcount3 – invite distrust. Public scrutiny is further amplified by Apple’s relationships with other power players in the industry, as well as the federal government.
For example, why were Apple and Microsoft both offered observer seats on OpenAI’s board? Are they not competitors? The two rivals dropped their seats only after antitrust concerns were raised.4
Apple’s partnership with OpenAI raises other issues. OpenAI has faced multiple allegations of unethical data collection practices, including data scraping without consent. Reports indicate that OpenAI has incorporated vast amounts of personal, copyrighted, and proprietary information into its AI models without notifying data owners or obtaining their permission. Such practices have led to legal action, including a high-profile lawsuit by the New York Times over alleged copyright infringement. Finally, Paul Nakasone, the former director of the National Security Agency, now sits on OpenAI’s board of directors. During his tenure at the NSA, he pushed to renew the expanded surveillance powers5 awarded to the agency after 9/11, which have since been abused to spy on political opponents of the national security apparatus.6
The data-gathering practices that underpin Apple’s AI models raise ethical concerns. As mentioned in the Proposal, these include the Company’s partnership with OpenAI,7 8 and a proposed partnership with Meta.9 10 Apple argues in its 2025 Proxy Statement that the “proposal does not focus on any issues with Apple Intelligence, instead focusing its criticism on OpenAI, the developer of ChatGPT, an independent service that Apple users may choose to access.” Instead, the Company asserts that it “has a strong track record on protecting user privacy and a robust approach to integrating ethical considerations into our technology.”11
1 https://companiesmarketcap.com/
2 https://companiesmarketcap.com/largest-companies-by-revenue/
3 https://companiesmarketcap.com/largest-companies-by-number-of-employees/
4 https://www.theguardian.com/technology/article/2024/jul/10/microsoft-drops-observer-seat-on-openai-board-amid-regulator-scrutiny
5 https://www.cbsnews.com/news/nsa-director-us-surveillance-power-paul-nakasone/
6 https://apnews.com/article/election-2020-b9b3c7ef398d00d5dfee9170d66cefec
7 https://openai.com/index/openai-and-apple-announce-partnership/
8 https://www.businessinsider.com/openai-chatgpt-generative-ai-stole-personal-data-lawsuit-children-medical-2023-6
9 https://www.wsj.com/tech/ai/apple-meta-have-discussed-an-ai-partnership-cc57437e
10 https://www.nytimes.com/2023/05/22/business/meta-facebook-eu-privacy-fine.html
11 https://d18rn0p25nwr6d.cloudfront.net/CIK-0000320193/d5ac8341-3708-4b1d-89f5-6a0dcec45aa0.pdf
This has always been Apple’s approach to privacy. The Company presents itself as privacy-friendly12 – to great success13 – but the monetization potential of its massive userbase is too high to pass up, so Apple outsources its unethical practices to other parties.
For example, the Company has a longstanding partnership with Alphabet – one of Apple’s major competitors – to make Google the default search engine on Apple products. The deal is estimated to be worth $25 billion to Apple, roughly 20 percent of its pretax profit. Not only does the partnership generate antitrust scrutiny, but it also provides Alphabet the opportunity to collect massive amounts of data on Apple users. Alphabet is known for a wide variety of ethical and privacy violations.14 15 16 17 18 19 In effect, Apple has outsourced its unethical activities to Alphabet while collecting substantial sums in the process.
This is the playbook that Apple is attempting to run with AI. In addition to its partnership with OpenAI, Apple has explored a partnership with Meta,20 another serial privacy violator.21 22 23 24 25 The Company’s userbase represents one of the single largest potential sources of AI training data in the world, and will be valued accordingly. Apple’s various AI privacy disclosures – outlined in Apple’s response to our Proposal in its 2025 Proxy Statement26 – focus entirely on the Company’s own AI development efforts and do not cover its attempts to outsource unethical activity to its competitors. The need for holistic disclosure should be obvious.
In addition, Apple’s algorithms are secret. As these systems become more integrated into daily life, shaping decisions from loan approvals to hiring, the lack of transparency around their inner workings poses significant ethical risks.
At the heart of AI development are machine-learning algorithms that rely on massive datasets to make predictions, detect patterns, and recommend actions. How these algorithms weigh certain variables, prioritize specific outcomes, and arrive at decisions often remains hidden in a “black
12 https://www.cnbc.com/2021/11/13/apples-privacy-changes-show-the-power-it-holds-over-other-industries.html
13 https://appleinsider.com/articles/24/04/24/apples-generative-ai-may-be-the-only-one-that-was-trained-legally-ethically
14 https://time.com/6962521/google-incognito-lawsuit-data-settlement/
15 https://www.tradingview.com/symbols/NASDAQ-GOOG/financials-revenue/
16 https://techcrunch.com/2023/01/17/privacy-sandbox-topics-api-criticism/
17 https://newrepublic.com/article/161629/sad-implosion-googles-ethical-ai?utm_source=chatgpt.com
18 https://www.reuters.com/world/europe/france-imposes-fines-facebook-ireland-google-2022-01-06/
19 https://www.businessinsider.com/google-plus-class-action-settlement-how-to-claim-cash-payment-2020-8
20 https://www.wsj.com/tech/ai/apple-meta-have-discussed-an-ai-partnership-cc57437e
21 https://katefitzgeraldconsulting.com/2024/05/31/your-privacy-rights-understanding-the-recent-changes-to-metas-privacy-policy/
22 https://www.cnn.com/2024/02/29/tech/meta-data-processing-europe-gdpr/index.html
23 https://www.sec.gov/Archives/edgar/data/1326801/000109690624001115/nlpc_px14a6g.htm
24 https://techcrunch.com/2024/12/17/meta-fined-263m-over-2018-security-breach-that-affected-3m-eu-users/?guccounter=1&guce_referrer=aHR0cHM6Ly9zZWFyY2guYnJhdmUuY29tLw&guce_referrer_sig=AQAAAD4HheTbEJQ2uv9gp9B3KxM9_N4994xQTrLYALsYK0BQFs2RFvMrMoIiPE7xio2aSdPY6M_97vxAsDBddazTKQk259_DpSY0MQTrnvvM_pz9eJVxD4l09oXkkRxINuC3yjod7RwAx1QHcfCX5_IcvmV5Qg8qcYx7vLv3e29uEu8S
25 https://www.reuters.com/legal/facebook-parent-meta-pay-725-mln-settle-lawsuit-relating-cambridge-analytica-2022-12-23/
26 https://d18rn0p25nwr6d.cloudfront.net/CIK-0000320193/d5ac8341-3708-4b1d-89f5-6a0dcec45aa0.pdf
box.” This lack of transparency is more than a technical issue; it raises fundamental questions about accountability, trust, and fairness. If these algorithms are used in critical areas, such as healthcare diagnostics27 or criminal justice risk assessments,28 the consequences could include unfair outcomes or even life-altering mistakes. For Apple, this opacity may protect proprietary information and intellectual property, but it ultimately raises questions about whether the Company values profit and competitive advantage over transparency and accountability.
In response, citizens and consumers demand increased protections for data privacy. At its core, the debate centers around who truly “owns” the data generated by users—be it personal information, behavioral patterns, or digital content—and what rights individuals have over how their data is used, stored, or shared.29 These evolving expectations have created challenges for companies like Apple and its partners, especially as they collect vast amounts of data to train and refine artificial intelligence (AI) systems.
The European Union has emerged as a global leader in the push for stronger data privacy rights through the General Data Protection Regulation (GDPR), which was enacted in 2018.30 GDPR represents one of the most comprehensive data privacy laws on the planet, fundamentally changing how companies collect, process, and store personal data for EU citizens. It grants individuals greater control over their data, including the right to access, correct, or delete their information, and the right to be informed about how their data is used. GDPR enforces strict penalties for non-compliance, with fines reaching up to four percent of a company’s global annual revenue, creating a powerful incentive for companies to adhere to the principles of transparency, accountability, and user control. For companies like Apple, which operate on a global scale, GDPR has raised the stakes of data ethics.
In the United States, data privacy laws have traditionally been less stringent than those in the EU, with no comprehensive federal data privacy law akin to GDPR. However, this landscape is changing. States have begun to adopt their own data protection regulations, reflecting a growing recognition of the need for privacy protections. California, for example, enacted the California Consumer Privacy Act (CCPA) in 2020,31 giving residents similar rights to those under GDPR, such as the right to know what personal information is being collected, the right to delete that information, and the right to opt out of its sale.
27 https://www.forbes.com/sites/saibala/2023/01/23/microsoft-is-aggressively-investing-in-healthcare-ai/
28 https://counciloncj.org/the-implications-of-ai-for-criminal-justice/
29 https://www2.deloitte.com/us/en/insights/topics/digital-transformation/data-ownership-protection-privacy-issues.html
30 https://gdpr-info.eu/
31 https://oag.ca.gov/privacy/ccpa
The movement for data privacy is gaining momentum in other states as well, creating a patchwork of state-level regulations that corporations like Apple must navigate. These new expectations around data privacy indicate a shift in public attitudes toward data ownership, with Americans increasingly demanding the right to control their digital information.
The aforementioned lawsuit filed by the New York Times against OpenAI serves as a high-profile example of how changing expectations around data ownership intersect with legal challenges. The Times has accused OpenAI of scraping its copyrighted content to train AI models without permission or compensation, thereby infringing on intellectual property rights.32 If the Times lawsuit succeeds, it could set a precedent that imposes greater restrictions on data scraping, especially when it involves proprietary or copyrighted content. This would create additional hurdles for OpenAI, forcing it either to secure permission from data sources or to reconsider its datasets.
By continuing its current practices, Apple risks becoming entangled in more lawsuits and regulatory actions that could erode shareholder value and harm its reputation. Additionally, as consumers become more privacy-conscious, they may choose to support companies that demonstrate a genuine commitment to respecting data rights.
Increasing Shareholder Value and Building Competitive Advantage Through Privacy Leadership
Consumers have consistently expressed concern with the lack of control they have over their personal data.33 McKinsey & Company has argued that companies that prioritize data privacy will build a competitive advantage over their competitors that do not:34
As consumers become more careful about sharing data, and regulators step up privacy requirements, leading companies are learning that data protection and privacy can create a business advantage.
Given the low overall levels of trust, it is not surprising that consumers often want to restrict the types of data that they share with businesses. Consumers have greater control over their personal information as a result of the many privacy tools now available, including web browsers with built-in cookie blockers, ad-blocking software (used on more than 600 million devices around the world), and incognito browsers (used by more than 40 percent of internet users globally). However, if a product or service offering—for example, healthcare or money management—is critically important to consumers, many are willing to set aside their privacy concerns.
32 https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html
33 https://www.pewresearch.org/internet/2019/11/15/americans-and-privacy-concerned-confused-and-feeling-lack-of-control-over-their-personal-information/
34 https://www.mckinsey.com/capabilities/risk-and-resilience/our-insights/the-consumer-data-opportunity-and-the-privacy-imperative
Consumers are not willing to share data for transactions they view as less important. They may even “vote with their feet” and walk away from doing business with companies whose data-privacy practices they don’t trust, don’t agree with, or don’t understand.
The authors add:
Our research revealed that our sample of consumers simply do not trust companies to handle their data and protect their privacy. Companies can therefore differentiate themselves by taking deliberate, positive measures in this domain. In our experience, consumers respond to companies that treat their personal data as carefully as they do themselves.
The report drives home the point that, as data privacy concerns grow, consumers increasingly favor companies that prioritize ethical data handling and transparency. Companies with transparent, privacy-focused practices thus hold a strategic advantage in a market where trust is paramount.
The shift in expectations around data ownership represents an opportunity for Apple to position itself as a leader in ethical AI by adopting transparent and consent-driven data practices. Such a shift would not only help Apple avoid legal challenges but would also build consumer trust, aligning the company with global standards that prioritize the individual’s right to control their own data.
For Apple, this means that transparent and privacy-respecting AI practices can foster customer loyalty and reduce churn. The financial benefits of customer retention are well-documented, as retaining an existing customer is often significantly less expensive than acquiring a new one.
Moreover, a privacy-centric approach aligns with the growing “techno-optimism” movement, which advocates for technology that empowers individuals rather than exploits them. Champions of this movement, such as venture capitalist Marc Andreessen,35 argue that technology should decentralize power, enhance transparency, and empower users. By supporting these values, Apple can attract a growing demographic of users who view technology as a tool for personal empowerment rather than corporate control. This alignment would not only attract consumers but also influence public perception, positioning Apple as a leader in ethical AI.
Finally, the emphasis on privacy and transparency could reduce Apple’s vulnerability to regulatory backlash and legal issues. With stricter data privacy regulations emerging globally and cases like the New York Times lawsuit against OpenAI highlighting the risks of unethical data practices, Apple can preemptively mitigate risks by setting a high standard for transparency.
35 https://a16z.com/the-techno-optimist-manifesto/
Apple’s competitors – including Microsoft, Meta, Alphabet, and Anthropic – vary significantly in their approaches to privacy, reflecting their values, business models, and strategic goals. Understanding how each company handles privacy provides insight into the broader landscape of AI ethics, transparency, and consumer trust.
Microsoft
Microsoft is a major player in AI, largely due to its partnership with OpenAI. However, this relationship has sparked concerns over unethical data practices, including allegations of data scraping and the use of personal and proprietary information without consent. These issues have led to lawsuits, such as one filed by the New York Times, and raised questions about the ethical foundations of Microsoft’s AI development.
Additionally, Microsoft’s extensive government contracts, often involving sensitive technologies, have drawn criticism for potentially aligning the company’s AI initiatives with state interests.36 37 38 39 40 This has fueled skepticism about its commitment to privacy and independent oversight.
While Microsoft emphasizes responsible AI, its algorithms often operate as “black boxes,” offering little transparency about data use or decision-making processes. Strengthening privacy measures and aligning with global standards like GDPR could help rebuild trust, but current practices highlight a need for greater accountability.
Meta
Meta has faced scrutiny over data privacy issues, particularly regarding how user data informs targeted advertising algorithms.41 42 In recent years, however, Meta has made strides toward increasing transparency in its AI research. Its release of the open-source Llama AI tool stands as a testament to its new direction, signaling a willingness to contribute to transparent and accessible AI development. Open-source AI models, like Llama, allow researchers and developers to examine and modify the code, increasing transparency.
Despite this open-source shift, privacy concerns persist due to Meta’s reliance on user data for advertising revenue. Meta’s AI algorithms extensively leverage personal data to generate targeted ads,43 which raises concerns about whether the open-source commitment will extend to the company's most valuable and sensitive data-driven algorithms. The public scrutiny Meta has
36 https://prospect.org/power/2024-06-11-defense-department-microsofts-profit-taking/
37 https://prospect.org/power/2024-06-11-defense-department-microsofts-profit-taking/
38 https://ccianet.org/news/2021/09/new-study-shows-microsoft-holds-85-market-share-in-u-s-public-sector-productivity-software/
39 https://theintercept.com/2024/10/25/africom-microsoft-openai-military/
40 https://wwps.microsoft.com/blog/ai-public-sector
41 https://www.theguardian.com/technology/2023/jul/11/threads-app-privacy-user-data-meta-policy
42 https://www.nytimes.com/2023/05/22/business/meta-facebook-eu-privacy-fine.html
43 https://www.reuters.com/technology/meta-gets-11-eu-complaints-over-use-personal-data-train-ai-models-2024-06-06/
faced in recent years, including the Cambridge Analytica scandal,44 has also impacted trust. Although open-sourcing Llama may signal greater transparency, questions remain about whether Meta’s privacy improvements go far enough.
Alphabet
Alphabet, via subsidiary Google, commands a powerful position in AI, utilizing vast data resources to fuel services like search engines and voice assistants.45 However, its data practices, deeply tied to advertising revenue, have drawn consistent criticism for prioritizing user data collection over privacy.46 Alphabet’s extensive data tracking for targeted ads has repeatedly sparked privacy concerns, leading to significant fines, particularly under the EU’s GDPR, for lack of transparency in data use. Despite implementing features like “auto-delete” options and experimenting with federated learning, where data is stored on devices instead of centralized databases, these measures are limited in scope and appear reactive rather than foundational.
Alphabet’s reputation suffers further from incidents like tracking user location data even when location services are off,47 highlighting inconsistencies between its public privacy commitments and real-world practices. Critics argue Alphabet treats user privacy as secondary to its ad-driven business, contrasting sharply with companies like Apple, which prioritize data minimization. As privacy expectations grow, Alphabet’s reliance on extensive data collection may increasingly conflict with consumer demands for transparency and data sovereignty, ultimately challenging the sustainability of its approach.
Anthropic
Anthropic, an AI research lab founded by former OpenAI employees, has positioned itself as a company dedicated to “alignment” and AI safety. Its primary mission is to develop AI systems that are aligned with human interests, prioritizing safety and ethics over rapid deployment.48 Although Anthropic is smaller than Microsoft, Meta, or Alphabet, its focus on long-term AI safety makes it a relevant player in the privacy conversation.49
Anthropic emphasizes transparency in AI behavior and is cautious about deploying its models in commercial applications without rigorous testing. While Anthropic’s approach does not specifically prioritize privacy in the same way as Microsoft or Meta, its emphasis on safety, alignment, and ethical concerns indirectly supports a privacy-conscious framework. By promoting transparency and caution in deployment, Anthropic positions itself as an organization willing to sacrifice rapid growth for responsible, user-centered AI practices.
44 https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html
45 https://www.thestreet.com/investing/stocks/analyst-update-alphabet-stock-price-target-after-ai-event
46 https://www.bloomberg.com/news/articles/2022-02-28/all-the-ways-google-is-coming-under-fire-over-privacy-quicktake
47 https://time.com/6209991/apps-collecting-personal-data/
48 https://www.anthropic.com/
49 https://etc.cuit.columbia.edu/news/ai-community-practice-hosts-anthropic-explore-claude-ai-enterprise
Given that Anthropic is still relatively new, it has yet to encounter significant regulatory or public scrutiny. Yet its foundational principles suggest a commitment to ethical practices, which could offer a competitive advantage as privacy expectations evolve.
Apple, Meta, Alphabet, Anthropic, and Microsoft each approach privacy differently, reflecting their distinct business models and consumer expectations. While Apple promotes privacy as a competitive advantage, Alphabet and Meta face challenges due to their reliance on advertising revenue. Anthropic’s focus on long-term safety and ethical alignment positions it as a distinct player in the privacy conversation, especially as AI continues to evolve. As consumer demand for privacy grows, these varying approaches will shape the public’s perception of each company’s commitment to responsible AI.
For one reason or another, each of Apple’s competitors has barriers preventing it from staking out a dominant position in the AI industry as a leader in both quality and privacy. Taking a strong, privacy-centered stance could set Apple apart from competitors and align it with modern values, thereby strengthening both consumer trust and shareholder value.
The economic benefits to Apple could be tremendous. Penalties for violating GDPR or the CCPA alone can run into the billions of dollars. More important, the generative AI market could reach $1.3 trillion by 2032,50 and small percentage changes in market share will be worth tens of billions of dollars. Apple’s competition is too strong, and the potential reward too big, to not take data ethics and privacy seriously.
Conclusion
By prioritizing privacy-oriented and ethical AI, Apple can distinguish itself in an industry where consumer trust is critical. As regulatory pressures grow and public expectations shift towards data transparency and control, Apple’s commitment to responsible AI would not only safeguard its reputation but also enhance shareholder value. Embracing a privacy-first approach positions Apple as a leader in ethical technology, aligning it with both consumer and societal values. This strategic shift can help Apple gain a sustainable competitive advantage, fostering long-term growth and making a positive impact on the industry as a whole. For these reasons, we urge shareholders to support Proposal No. 4.51
50 https://www.bloomberg.com/company/press/generative-ai-to-become-a-1-3-trillion-market-by-2032-research-finds/
51 https://d18rn0p25nwr6d.cloudfront.net/CIK-0000320193/d5ac8341-3708-4b1d-89f5-6a0dcec45aa0.pdf
Photo credits:
Page 2 – IMAGE: MikeMacMarketing, Creative Commons
Page 3 – IMAGE: focal5, Creative Commons
Page 5 – IMAGE: Visual Content, Creative Commons
Page 7 – IMAGE: Antonio Marin Segovia, Creative Commons
Page 10 – Apple CEO Tim Cook/ tuaulamac, Creative Commons
THE FOREGOING INFORMATION MAY BE DISSEMINATED TO SHAREHOLDERS VIA TELEPHONE, U.S. MAIL, E-MAIL, CERTAIN WEBSITES AND CERTAIN SOCIAL MEDIA VENUES, AND SHOULD NOT BE CONSTRUED AS INVESTMENT ADVICE OR AS A SOLICITATION OF AUTHORITY TO VOTE YOUR PROXY.
THE COST OF DISSEMINATING THE FOREGOING INFORMATION TO SHAREHOLDERS IS BEING BORNE ENTIRELY BY THE FILERS.
THE INFORMATION CONTAINED HEREIN HAS BEEN PREPARED FROM SOURCES BELIEVED RELIABLE BUT IS NOT GUARANTEED BY US AS TO ITS TIMELINESS OR ACCURACY, AND IS NOT A COMPLETE SUMMARY OR STATEMENT OF ALL AVAILABLE DATA. THIS PIECE IS FOR INFORMATIONAL PURPOSES AND SHOULD NOT BE CONSTRUED AS A RESEARCH REPORT.
PROXY CARDS WILL NOT BE ACCEPTED BY US. PLEASE DO NOT SEND YOUR PROXY TO US. TO VOTE YOUR PROXY, PLEASE FOLLOW THE INSTRUCTIONS ON YOUR PROXY CARD.
For questions regarding Apple Inc. Proposal No. 4 – requesting the Board of Directors to produce a “Report on AI Data Sourcing Accountability,” submitted by National Legal and Policy Center – please contact Luke Perlot, associate director of NLPC’s Corporate Integrity Project, via email at lperlot@nlpc.org.