Open Source Language Model Named Dolly 2.0 Trained Similarly To ChatGPT (2024)

Databricks announced the release of Dolly 2.0, which it calls the first open source instruction-tuned language model. It was trained using a methodology similar to InstructGPT's, but with a claimed higher-quality dataset that is 100% open source.

This model is free to use, including for commercial purposes, because every part of the model is 100% open source.

Open Source Instruction Training

What makes ChatGPT able to follow directions is the training it receives using techniques outlined in the InstructGPT research paper.

The breakthrough demonstrated with InstructGPT is that language models don't need ever-larger parameter counts to follow instructions well.

By using human-evaluated question-and-answer training, OpenAI was able to train a better language model using one hundred times fewer parameters than the previous model, GPT-3.

Databricks used a similar approach to create a prompt-and-response dataset they call databricks-dolly-15k.

Their prompt/response dataset was created without scraping web forums or Reddit.

databricks-dolly-15k is a 100% original, human-generated dataset of 15,000 prompt-and-response pairs created by Databricks employees, designed to train the Dolly 2.0 language model in the same way that the ChatGPT model was created with InstructGPT.

The Hugging Face page for the dataset explains how they did it:

“databricks-dolly-15k is an open source dataset of instruction-following records used in training databricks/dolly-v2-12b that was generated by thousands of Databricks employees in several of the behavioral categories outlined in the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization.

…Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories, including the seven outlined in the InstructGPT paper, as well as an open-ended free-form category.

The contributors were instructed to avoid using information from any source on the web with the exception of Wikipedia (for particular subsets of instruction categories), and explicitly instructed to avoid using generative AI in formulating instructions or responses. Examples of each behavior were provided to motivate the types of questions and instructions appropriate to each category.

Halfway through the data generation process, contributors were given the option of answering questions posed by other contributors. They were asked to rephrase the original question and only select questions they could be reasonably expected to answer correctly.”
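The dataset card on Hugging Face lists four fields per record: instruction, context, response, and category. The snippet below is a minimal sketch of what records with that layout look like and how they might be tallied by behavioral category; the example text itself is invented for illustration, not drawn from the actual dataset.

```python
from collections import Counter

# Illustrative records mirroring the field layout described on the
# databricks-dolly-15k dataset card (instruction, context, response,
# category). The content here is made up for demonstration purposes.
records = [
    {"instruction": "Name three uses for a paperclip.",
     "context": "",
     "response": "Holding papers together, resetting devices, improvising a hook.",
     "category": "brainstorming"},
    {"instruction": "Is a tomato a fruit or a vegetable?",
     "context": "",
     "response": "Botanically, a tomato is a fruit.",
     "category": "open_qa"},
    {"instruction": "Summarize the passage in one sentence.",
     "context": "Dolly 2.0 is an open source instruction-tuned model.",
     "response": "Dolly 2.0 is an openly licensed instruction-following model.",
     "category": "summarization"},
]

# Count how many examples fall into each behavioral category.
per_category = Counter(r["category"] for r in records)
print(per_category)
```

In the real dataset, the context field is populated mainly for the categories that permit Wikipedia excerpts (such as closed QA and summarization), and is empty otherwise.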

Databricks claims that this may be the very first human-generated instruction dataset created to train a language model to follow instructions, just as ChatGPT does.

The challenge was to create a 100% original dataset that had zero ties to ChatGPT or any other source with a restrictive license.

Employees were incentivized through a contest to contribute prompt/response pairs across seven categories of tasks, such as brainstorming, classification, and creative writing.

Databricks asserts that the databricks-dolly-15k training set may be superior to the dataset used to train ChatGPT.

They note that although their dataset is smaller than the one used to train the Stanford Alpaca model, their model performed well because the underlying data is of higher quality.

They write:

“Dolly 2.0 model, based on EleutherAI’s pythia-12b, exhibited high-quality instruction following behavior. In hindsight, this isn’t surprising.

Many of the instruction tuning datasets released in recent months contain synthesized data, which often contains hallucinations and factual errors.

databricks-dolly-15k, on the other hand, is generated by professionals, is high quality, and contains long answers to most tasks.

…we don’t expect Dolly to be state-of-the-art in terms of effectiveness.

However, we do expect Dolly and the open source dataset will act as the seed for a multitude of follow-on works, which may serve to bootstrap even more powerful language models.”
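Instruction tuning of this kind works by wrapping each raw instruction in a fixed template before feeding it to the model, so the model learns to generate text after a response marker. The sketch below shows one such template; the exact wording is an assumption modeled on the Alpaca-style format used in Databricks' dolly training repo, so check that repo for the canonical version.

```python
# A minimal sketch of an Alpaca-style instruction prompt template of the
# kind used to fine-tune Dolly; the exact wording is an assumption.
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def format_prompt(instruction: str) -> str:
    """Wrap a raw instruction in the instruction-following template."""
    return PROMPT_TEMPLATE.format(instruction=instruction)

print(format_prompt("Explain what instruction tuning is."))
```

At inference time, the model's completion is read from whatever it generates after the "### Response:" marker.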

Limitations to the Dataset

The GitHub page for the dataset acknowledges that there may be some shortcomings to the dataset.

Wikipedia data was used for some of the training in the context of creating prompts and responses. Thus, it's possible that whatever biases are contained in Wikipedia may end up reflected in the resulting dataset.

Some of the employees who worked to create the dataset were not native speakers of English, which could introduce some anomalies in the dataset.

The demographic makeup of the employees who created the dataset may itself influence the dataset to contain biases that are peculiar to those employees.

Despite those possible shortcomings, Databricks maintains that its dataset is of higher quality.

Additionally, Dolly 2.0 is meant to serve as a starting point for others to create and innovate even better versions.

Databricks Insists that Open Source AI Is Better

One of the motivations behind creating Dolly 2.0 is that users of the dataset can own the models they create and can better safeguard their data by not having to share it with a third party.

They also believe that AI safety should not be concentrated in the hands of three large corporations but spread out among all the stakeholders.

Open source is picking up momentum, and it will be interesting to see where the industry stands within the next two years.

More information on where to download the Dolly 2.0 model and how to use it can be found in their announcement.

Free Dolly: Introducing the World’s First Truly Open Instruction-Tuned LLM

Featured image by Shutterstock/Kamil Macniak
