1 Simon Willison's Weblog

That model was trained in part using their unreleased R1 "reasoning" model. Today they've released R1 itself, along with a whole family of new models derived from that base.

There's a whole lot of stuff in the new release.

DeepSeek-R1-Zero appears to be the base model. It's over 650GB in size and, like most of their other releases, is under a clean MIT license. DeepSeek warn that "DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing." ... so they also released:

DeepSeek-R1, which "incorporates cold-start data before RL" and "achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks". That one is also MIT licensed, and is a similar size.

I don't have the ability to run models larger than about 50GB (I have an M2 with 64GB of RAM), so neither of these two models are something I can easily play with myself. That's where the new distilled models come in.

To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen.

This is a fascinating flex! They have models based on Qwen 2.5 (14B, 32B, Math 1.5B and Math 7B) and Llama 3 (Llama-3.1 8B and Llama 3.3 70B Instruct).

Weirdly those Llama models have an MIT license attached, which I'm not sure is compatible with the underlying Llama license. Qwen models are Apache licensed so maybe MIT is OK?

(I also just noticed the MIT license files say "Copyright (c) 2023 DeepSeek" so they may need to pay a bit more attention to how they copied those in.)

Licensing aside, these distilled models are fascinating beasts.

Running DeepSeek-R1-Distill-Llama-8B-GGUF

Quantized versions are already starting to show up. So far I've tried just one of those, unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF released by Unsloth AI, and it's really fun to play with.

I'm running it using the combination of Ollama, LLM and the llm-ollama plugin.

First I fetched and ran the model using Ollama itself:
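Ollama can pull GGUF models directly from Hugging Face using the hf.co/ prefix, so the command would have looked something like this (a sketch; the Q8_0 quantization tag is my assumption, and other quants are available in the repo):

```bash
# Fetch the distilled 8B GGUF from Hugging Face and start an interactive chat
# (Q8_0 is an assumed quantization level)
ollama run hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0
```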

This downloads around 8.5GB of model data and starts an interactive chat interface.

Once the model has been fetched, LLM can talk to it as well. I prefer using LLM for experiments because it logs everything to SQLite for later exploration.

I installed the plugin and ran it like this:
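Roughly like the following, reusing the same Hugging Face model reference as the Ollama name (a sketch; the exact model ID and quant tag are assumptions):

```bash
# Install the llm-ollama plugin, then send a prompt through LLM
llm install llm-ollama
llm -m 'hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0' \
  'a joke about a pelican and a walrus who run a tea room together'
```

Because every prompt and response lands in LLM's SQLite log, a later llm logs -n 1 will pull the most recent exchange back up.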

Each response starts with a <think>...</think> pseudo-XML tag containing the chain of thought used to help generate the response. For "a joke about a pelican and a walrus who run a tea room together" that started like this:

Okay, so I need to come up with a joke about a pelican and a walrus running a tea room together. Hmm, that's an interesting combination. Let me think about how these two characters might interact in a humorous situation.

First, let's consider their characteristics. Pelicans are known for their long beaks and webbed feet, often seen near the beach or water. Walruses have big teeth, thick fur, and they're generally found in colder climates, like icebergs or snowy areas. So, combining these two into a tea room setting is already a funny image.

[...]
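(As an aside, because the reasoning is wrapped in those literal <think> and </think> markers, it's easy to strip if you only want the final answer. A minimal sketch, assuming the response was saved to a hypothetical response.txt:

```bash
# Drop everything from <think> through </think>, keeping only the answer
sed '/<think>/,/<\/think>/d' response.txt
```
)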
It then thought for 20 paragraphs before outputting the joke! Here's the full transcript and here's the output at the end, after the </think> tag:

The Joke:

A pelican and a walrus decide to open a quaint little tea room together. The walrus, with its big size, struggles to find comfortable chairs, so it sits on the table by accident, knocking over the teapot. Meanwhile, the pelican, trying to help, uses its beak to place saucers on the table, causing a few spills.

After a series of comical mishaps, the walrus looks up and says with a grin, "This isn't so fishy anymore." The pelican smirks and says, "Maybe not, but we do have a lot of krill in our tea!"

Explanation:

- Setup: Pelican and Walrus run an unlikely tea room. Their sizes lead to comical chaos.