Apple stays still as Google, Microsoft commit to control AI and combat cancer

A new AI-focused association has been formed by OpenAI, Google, Microsoft and Anthropic (an AI safety and research company based in California). They're calling it the Frontier Model Forum. Notice a missing tech giant among these four? Yup, Apple is not joining that party, at least not for now.

Before covering what this new consortium will do (via AppleInsider), it's worth noting that this is the second time Apple has sat out a major AI-related initiative. Last time (July 21), the Cupertino giant was missing from the White House's announcement that it had secured voluntary commitments from several leading companies in the AI field to manage the risks posed by artificial intelligence.

Google announced the new AI-focused association in a blog post, which also answers an obvious question.


What’s a frontier model?


As explained in the same Google blog post, frontier models are defined as "large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models, and can perform a wide variety of tasks".

There are membership criteria for organizations that want to join. They have to:

  1. Develop and deploy frontier models (as defined by the Forum).
  2. Demonstrate a strong commitment to frontier model safety, including through technical and institutional approaches.
  3. Be willing to contribute to advancing the Forum's efforts, including by participating in joint initiatives and supporting the development and functioning of the initiative.

What’s their goal, again?


There are four core objectives, according to members of the Frontier Model Forum. Here they are in their own words:

  1. Advancing AI safety research to promote responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.
  2. Identifying best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations, and impact of the technology.
  3. Collaborating with policymakers, academics, civil society and companies to share knowledge about trust and safety risks.
  4. Supporting efforts to develop applications that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.

OpenAI's Vice President of Global Affairs, Anna Makanju, also commented on the announcement.


Some are skeptical


Emily Bender, a computational linguist at the University of Washington and an AI expert, sees these pledges from tech giants as "an attempt to avoid regulation; to assert the ability to self-regulate, which I'm very skeptical of". According to her, "The regulation needs to come externally. It needs to be enacted by the government representing the people to constrain what these corporations can do".
