
Stanford researchers terminate ChatGPT-like chatbot Alpaca a week after launch


Artificial intelligence (AI) researchers at Stanford developed their ChatGPT-like chatbot demo, Alpaca, in less than two months, but terminated it citing “hosting costs and the inadequacies of content filters” in the large language model’s (LLM) behaviour.

The termination was announced less than a week after the demo was released, according to The Stanford Daily.

The source code of Stanford’s model, which was developed for less than $600, is publicly available.

According to the researchers, their chatbot performed similarly to OpenAI’s GPT-3.5.

In their announcement, the researchers said that Alpaca is intended only for academic research and not for general use in the near future.

Alpaca researcher Tatsunori Hashimoto of the Computer Science Department said: “We think the interesting work is in developing methods on top of Alpaca [since the dataset itself is just a combination of known ideas], so we don’t have current plans along the lines of making more datasets of the same kind or scaling up the model.”

Alpaca was built on Meta AI’s LLaMA 7B model, and its training data was generated with a method known as self-instruct, in which an existing language model is prompted to produce new instruction-following examples.
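The self-instruct idea can be sketched in a few lines: a small pool of human-written “seed” tasks is formatted into a few-shot prompt, a teacher LLM is asked to produce a new instruction/output pair in the same format, and the parsed reply becomes a training example. The sketch below is an illustration of the general technique, not Stanford’s actual pipeline; `fake_teacher` is a stand-in for a real API call to a teacher model.

```python
# A minimal sketch of self-instruct (illustration, not Stanford's pipeline):
# seed tasks prompt an existing LLM to generate new instruction/output pairs,
# which then become instruction-finetuning training data.

SEED_TASKS = [
    {"instruction": "Translate 'hello' to French.", "output": "bonjour"},
    {"instruction": "List three primary colors.", "output": "red, blue, yellow"},
]

def build_generation_prompt(seed_tasks):
    """Assemble a few-shot prompt asking the teacher model for a new task."""
    examples = "\n\n".join(
        f"Instruction: {t['instruction']}\nOutput: {t['output']}" for t in seed_tasks
    )
    return examples + "\n\nInstruction:"

def self_instruct_round(seed_tasks, teacher):
    """One generation round: prompt the teacher LLM and parse its reply.

    `teacher` is any callable that takes a prompt string and returns text
    in the same 'Instruction: ... / Output: ...' format.
    """
    reply = teacher(build_generation_prompt(seed_tasks))
    instruction, _, output = reply.partition("\nOutput:")
    return {"instruction": instruction.strip(), "output": output.strip()}

# Stand-in for a real teacher-model API call:
def fake_teacher(prompt):
    return " Name a mammal that can fly.\nOutput: the bat"

new_example = self_instruct_round(SEED_TASKS, fake_teacher)
```

In practice a real pipeline would also deduplicate and filter the generated examples before adding them back to the pool.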

Adjunct professor Douwe Kiela noted: “As soon as the LLaMA model came out, the race was on.”

Kiela, who previously worked as an AI researcher at Facebook, said: “Somebody was going to be the first to instruction-finetune the model, and so the Alpaca team was the first … and that’s one of the reasons it kind of went viral.”

“It’s a really, really cool, simple idea, and they executed really well.”

Hashimoto explained that the “LLaMA base model is trained to predict the next word on internet data” and that instruction-finetuning “modifies the model to prefer completions that follow instructions over those that do not.”
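Concretely, instruction-finetuning means continuing training on (prompt, response) pairs where each instruction is wrapped in a fixed prompt template. A minimal sketch follows; the template resembles Alpaca’s published prompt format, but the code is an illustration of the data layout, not the project’s actual training code.

```python
# Toy illustration of instruction-finetuning data layout (not Alpaca's code):
# each example pairs a templated instruction prompt with the desired response,
# and the model is tuned so that it prefers that response as the completion.

PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def to_training_pair(example):
    """Render one instruction example into (prompt, target) text for finetuning."""
    prompt = PROMPT_TEMPLATE.format(instruction=example["instruction"])
    return prompt, example["output"]

prompt, target = to_training_pair(
    {"instruction": "Name the capital of France.", "output": "Paris"}
)
```

During finetuning, the loss is typically computed only on the response tokens, so the model learns to complete the template rather than to reproduce the instruction.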

The source code of Alpaca is available on GitHub, a source-code-sharing platform, where it has been viewed 17,500 times. More than 2,400 people have used the code for their own models.

“I think much of the observed performance of Alpaca comes from LLaMA, and so the base language model is still a key bottleneck,” Hashimoto stated.

As the use of artificial intelligence systems grows by the day, scientists and experts have been debating whether companies should publish their source code, the data and methods used to train their AI models, and, more broadly, how transparent the technology should be.

Kiela was of the view: “I think one of the safest ways to move forward with this technology is to make sure that it is not in too few hands.”

“We need to have places like Stanford, doing cutting-edge research on these large language models in the open. So I thought it was very encouraging that Stanford is still actually one of the big players in this large language model space,” Kiela noted. 


Cybersecurity firm reports exposure of sensitive DeepSeek data on the internet


The New York-based cybersecurity firm Wiz has discovered a cache of sensitive data from the Chinese artificial intelligence company DeepSeek that was mistakenly exposed to the public internet.

In a blog post published on Wednesday, Wiz reported that an examination of DeepSeek’s infrastructure revealed the company had inadvertently exposed more than a million lines of unencrypted data. The material included digital software keys and chat logs that appeared to record prompts sent by users to the company’s free AI assistant.

Wiz’s chief technology officer said DeepSeek promptly secured the data after being notified by his organisation.

“It was removed in under an hour,” said Ami Luttwak. “However, this was exceedingly easy to locate, leading us to believe we are not the sole discoverers.”

DeepSeek did not immediately respond to a request for comment.

DeepSeek’s rapid rise following the launch of its AI assistant has thrilled China and stirred alarm in the United States. The Chinese company’s apparent ability to match OpenAI’s capabilities at a fraction of the cost has raised questions about the business models and profit margins of U.S. AI giants such as Nvidia and Microsoft.

By Monday, its app had surpassed U.S. rival ChatGPT in downloads from Apple’s App Store, prompting a worldwide slide in technology stocks.


WhatsApp begins testing bulk channel management feature


WhatsApp has begun testing a bulk channel management feature on iOS with select beta users, enabling multiple channels to be selected at once and making followed channels easier to manage.

The update lets users perform bulk actions, such as muting many channels at once, marking them as read, and changing their notification settings. If the selected channels are muted, users can re-enable notifications; unmuted channels can likewise be silenced in a single action.

The feature also lets users unfollow many channels simultaneously, streamlining the process of decluttering their channel list. The change is particularly useful for users who follow numerous channels, as reported by WABetaInfo.

Previously, users had to manage each channel individually, making tasks such as muting channels or marking them as read tedious and time-consuming.

The functionality gives users greater flexibility and control over their channel subscriptions and notifications, simplifying what were previously laborious tasks for those who follow many channels.

Accessibility
The bulk management feature is currently available only to a limited number of beta testers who installed the latest WhatsApp beta for iOS through the TestFlight app. WhatsApp, owned by Meta, plans to roll the feature out to a wider user base in the coming weeks.

The update reflects WhatsApp’s ongoing effort to improve the user experience by offering a clearer, more efficient way to manage channels and notifications.


Pakistani internet slowdown: ongoing submarine cable issue


Two weeks on, the fault in the AAA-1 global submarine cable, detected on January 2 near Qatar, has still not been repaired, leaving several Pakistani cities with sluggish internet connections.

According to a spokesperson for Pakistan Telecommunication Company Limited (PTCL), the fault has affected customers’ ability to access social media applications and browse the web. Despite efforts to fix the problem, social networking sites still lag during busy periods.

Internet traffic has been rerouted through alternate channels to lessen the impact, and additional capacity has been provisioned to stabilize the service.

The PTCL spokesperson assured that “internet service across the country is operating normally, and there will be no issues with web browsing,” noting that lag in social media applications is common during peak periods.
