
Humanity at risk: Musk, others ring alarm bells over hasty ‘giant AI experiments’


Hundreds of prominent artificial intelligence (AI) researchers and technology figures, including Tesla CEO Elon Musk, have signed an open letter calling on AI labs to pause their giant AI experiments, warning of the “profound risks” these systems pose to society and humanity.

According to the letter, published by the nonprofit Future of Life Institute, AI labs are currently locked in an “out-of-control race” to develop and deploy machine learning systems “that no one — not even their creators — can understand, predict, or reliably control.”

“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” said the open letter.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

The signatories argue that development of such powerful systems should be slowed enough to give researchers the time they need to ensure the systems are safe.

Among the signatories of the letter are author Yuval Noah Harari, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, politician Andrew Yang, and several well-known AI researchers and CEOs, including Stuart Russell, Yoshua Bengio, Gary Marcus, and Emad Mostaque. 

The letter was mainly prompted by the release of GPT-4 from the San Francisco firm OpenAI.

The company says its latest model is much more powerful than the previous version, which was used to power ChatGPT, a bot capable of generating tracts of text from the briefest of prompts.

“Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” says the letter. “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

Musk was an early investor in OpenAI and spent years on its board, and his carmaker Tesla develops AI systems to help power its self-driving technology, among other applications.

The letter, hosted by the Musk-funded Future of Life Institute, was signed by prominent critics of OpenAI as well as competitors, including Stability AI chief Emad Mostaque.

The letter quoted from a blog post by OpenAI chief executive Sam Altman, who suggested that “at some point, it may be important to get independent review before starting to train future systems”.

“We agree. That point is now,” the authors of the open letter wrote.


They called for governments to step in and impose a moratorium if companies failed to agree.

The six-month pause, the letter says, should be used to develop safety protocols and AI governance systems, and to refocus research on making AI systems more accurate, safe, “trustworthy and loyal”.

The letter did not detail the specific dangers posed by GPT-4.


SIFC Wants To Promote Innovation In Agriculture: Event Offers Networking And Exhibitions


A two-day international conference on sustainable agriculture is opening in Karachi.

The conference, organised by Exhibitor TV, Ripple Concept, and the Pakistan Media Development Foundation, is backed by the Green Pakistan Initiative, which has made significant strides in recent months.

Enhancing agricultural productivity with contemporary technology is the goal of this event, which is in line with the Special Investment Facilitation Council’s emphasis on agriculture.

Experts will discuss sustainable techniques and organic farming, and a plenary session will address the Land Information and Management System.

The event will include exhibits that highlight contemporary methods and technology, giving professionals and stakeholders a place to network.

Sindh and Balochistan’s agriculture departments will display their accomplishments in provincial pavilions. The Bank of Punjab, National Bank of Pakistan, and Saudi-Pak Investment Company are among the sponsors who will help make the event possible.


Apple offers a $1 million reward to hack its private AI cloud.


Apple, the multinational technology giant, is offering a reward of up to $1 million to anyone who can breach its Private Cloud Compute, the platform that will power its artificial intelligence (AI) features.

The company recently published a blog post titled “Security research on Private Cloud Compute,” offering rewards to anyone who can find vulnerabilities in the cloud service that could endanger it.

The news coincided with Apple’s planned release of iOS 18.1 and Apple Intelligence on iPhones the following week.

For the first time, the update will bring AI capabilities to the iPhone, such as improvements to Siri, the voice assistant.

The tech giant will use servers built on its own silicon to power Private Cloud Compute, which it describes as “the most advanced security architecture ever deployed for cloud AI compute at scale.”

“In the weeks following our announcement of Apple Intelligence and PCC, we made resources to facilitate this inspection, such as the PCC Virtual Research Environment, available in advance to third-party auditors and a few security researchers,” Apple stated in the blog.

The company has invited researchers, security experts, and anyone else interested to probe the platform for weaknesses.

In addition to the $1 million reward for significant vulnerabilities exploitable through a “remote attack on request data,” the company is offering $250,000 to anyone who can gain access to users’ request data or other sensitive information outside the trust boundary.

Apple went on to say, “We will consider any security issue that has a significant impact to PCC for an Apple Security Bounty reward, even if it doesn’t match a published category, because we care deeply about any compromise to user privacy or security.”

Apple said it would “consider each report based on the quality of the information provided, the evidence of what can be exploited, and the impact to users.”

Anyone interested in participating can submit their research and learn more about the program on the Apple Security Bounty page.


WhatsApp will introduce new tools for sharing music and managing stickers.


According to reports, WhatsApp is developing two new features intended to improve the user experience. One of them is aimed at improving how users organize their sticker collections.

Details suggest that users will soon be able to manage their stickers more quickly: adding stickers to their favorites will move them to the top for easy access, and multiple stickers can be selected at once for deletion.

This upcoming functionality will also let users create their own personalized sticker packs within WhatsApp. Users will have more control over their collections and will be able to add or delete stickers as needed to keep them unique and customized. These personalized sticker packs can also be shared with contacts and in group conversations.

WhatsApp is also rumored to be developing a feature that would let users share music via status updates. When users post a picture or a video to their status, a music button will appear in the drawing editor.

While these capabilities are still in the early stages of development, they are expected to arrive in upcoming updates, giving WhatsApp users more creative options.
