Apple says security is a top priority for its in-house AI, though some experts say “it remains to be seen.”

Apple unveiled Apple Intelligence, its much-anticipated artificial intelligence system, on Monday at its annual developers conference. CEO Tim Cook said the system will automate tasks, customise user experiences, and set a “new standard for privacy in AI.”

Apple says security is the top priority for its in-house AI, but the company’s partnership with OpenAI has drawn considerable criticism. Since its launch in November 2022, OpenAI’s ChatGPT has raised privacy concerns because it collects user data to train its models without explicit consent. Users were not able to opt out of this data collection until April 2023.

Apple says the ChatGPT partnership will be used only for specific tasks, such as drafting emails and other documents, and only with the user’s express consent. Security professionals, however, will be watching closely to see how these and other concerns play out.

“Apple is saying a lot of the right things,” said Cliff Steinhauer, the National Cybersecurity Alliance’s director of information security and engagement. “But it remains to be seen how it’s implemented.”

Apple entered the generative AI race later than rivals Microsoft, Amazon, and Google, whose stocks have benefited from investor confidence in their AI projects. Until Monday, Apple had not built generative AI into any of its marquee consumer devices.

Speaking at Monday’s event, Cook said the delay was deliberate, intended to let the company “apply this technology in a responsible way.” While competitors raced to ship products, Apple spent the past several years building most Apple Intelligence features in-house, using its own technology and core models. The strategy is meant to ensure that as little user data as possible ever leaves the Apple ecosystem.

Artificial intelligence poses a distinct challenge to Apple’s long-standing emphasis on privacy, because building language models depends on gathering enormous volumes of data. Critics such as Elon Musk argue that integrating AI while protecting user privacy is difficult; Musk has threatened to bar his staff from using Apple products for work if the widely anticipated features go into effect. Some experts, though, disagree.

“With this announcement, Apple is setting an example for how companies can balance data privacy and innovation,” said Gal Ringel, co-founder and CEO of data privacy software company Mine. “The positive reception of this news, compared to other recent AI product releases, demonstrates that prioritising privacy is a strategy that pays off in today’s world.”

Recent AI releases, which mirror Silicon Valley’s infamous “move fast and break things” mentality, have ranged from dysfunctional and frivolous to downright dangerous. Steinhauer observed that Apple appears to be taking a different approach.

“Given the concerns we have seen with AI so far, platforms often release products and then deal with issues as they come up,” he said. “Apple is addressing common problems proactively from the start. It’s the difference between reactive security, which is inherently flawed, and security by design.”

A key component of Apple’s AI privacy guarantees is its newly introduced Private Cloud Compute technology. Most of the processing required to run Apple Intelligence features happens on the device itself. For tasks that require more computing power than the device can provide, Apple officials said on Monday, the company will offload processing to the cloud while guaranteeing that customer data stays protected.

To accomplish this, Apple says it will never store data permanently, will place additional security measures around the data at each endpoint, and will export only the data needed to fulfil each request. According to officials, Apple will also make all tools and software associated with the private cloud publicly available for independent third-party verification.
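For readers who want a concrete picture of the design Apple describes, here is a minimal sketch of on-device-first routing with minimal data export. It is illustrative only: none of these types or function names are Apple’s actual APIs, and the cost model is invented for the example.

```swift
import Foundation

// Hypothetical sketch of the on-device-first routing Apple describes.
// None of these types or names are Apple's real APIs.

enum ComputeTarget {
    case onDevice      // default: data never leaves the phone
    case privateCloud  // fallback for heavier workloads
}

struct AIRequest {
    let prompt: String
    let estimatedCost: Int  // rough measure of compute required
}

struct DeviceCapability {
    let maxCost: Int  // what the local hardware can handle
}

/// Choose where to run a request, preferring the device.
func route(_ request: AIRequest, device: DeviceCapability) -> ComputeTarget {
    request.estimatedCost <= device.maxCost ? .onDevice : .privateCloud
}

/// For cloud-bound requests, export only the fields the task needs;
/// per Apple's stated design, nothing is persisted server-side.
func minimalPayload(for request: AIRequest) -> Data? {
    let payload = ["prompt": request.prompt]  // strip everything else
    return try? JSONSerialization.data(withJSONObject: payload)
}

let request = AIRequest(prompt: "Summarise this email", estimatedCost: 12)
let device = DeviceCapability(maxCost: 10)

switch route(request, device: device) {
case .onDevice:
    print("Running locally; no data leaves the device.")
case .privateCloud:
    print("Offloading \(minimalPayload(for: request)?.count ?? 0) bytes to the private cloud.")
}
```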

Private Cloud Compute is “a significant advancement in AI privacy and security,” said Krishna Vishnubhotla, vice president of product strategy at mobile security platform Zimperium, who singled out the independent verification component as especially important.

“In addition to building user trust, these innovations elevate security standards for mobile devices and apps,” he said.
