QUOTE(kingkingyyk @ Jan 28 2025, 02:23 PM)
If you already have some big-VRAM GPUs, an open-source LLM lets you use that computing power at no cost (well, you could argue there is the TNB bill).
Is 16GB of VRAM sufficient to run o1-level intelligence? Full DeepSeek R1 at home 🥳🥳🥳

Jan 28 2025, 02:31 PM
Senior Member | 3,833 posts | Joined: Oct 2006 | From: Shah Alam

Jan 28 2025, 02:31 PM | #22
Senior Member | 4,894 posts | Joined: May 2008
ok... so world peace achieved?

Jan 28 2025, 02:32 PM
Junior Member | 473 posts | Joined: Sep 2019

Jan 28 2025, 02:32 PM
Junior Member | 239 posts | Joined: May 2022
The speed of China tech is really terrifying.
Luckily such power is in the hands of a highly civilised nation. China is the future of humanity.

Jan 28 2025, 02:32 PM
Elite | 15,694 posts | Joined: Mar 2008
QUOTE(emefbiemef @ Jan 28 2025, 02:23 PM)
https://github.com/deepseek-ai/DeepSeek-R1/...s/benchmark.jpg
Here is the comparison. The full 671B model needs over 400GB of memory to run, which is out of reach for most people. Distillation transfers the knowledge to a smaller model (i.e. by feeding it the QA chain), but a smaller model has far fewer parameters, so it won't generate results as well.

The list of distilled models:
- DeepSeek-R1-Distill-Qwen-1.5B
- DeepSeek-R1-Distill-Qwen-7B
- DeepSeek-R1-Distill-Llama-8B
- DeepSeek-R1-Distill-Qwen-14B
- DeepSeek-R1-Distill-Qwen-32B
- DeepSeek-R1-Distill-Llama-70B

The 32B one runs nicely on an RTX 5090 with room for context (the QA chain is needed to generate further responses), but how many of us can justify buying a five-digit GPU plus a heavy TNB bill just to run this?
This post has been edited by kingkingyyk: Jan 28 2025, 02:33 PM
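
For rough numbers on which of these fits which card, here is a back-of-envelope sketch. The weights-only formula and the ~20% overhead factor for KV cache and activations are assumptions; real usage varies with context length, quantization format, and runtime.

```python
# Rough VRAM estimate for the distilled R1 models at a given bit width.
# Assumption: memory ~= params * bits/8, padded by ~20% for KV cache
# and activations. Real usage depends on context length and runtime.

DISTILLED = {  # parameter counts, in billions
    "DeepSeek-R1-Distill-Qwen-1.5B": 1.5,
    "DeepSeek-R1-Distill-Qwen-7B": 7,
    "DeepSeek-R1-Distill-Llama-8B": 8,
    "DeepSeek-R1-Distill-Qwen-14B": 14,
    "DeepSeek-R1-Distill-Qwen-32B": 32,
    "DeepSeek-R1-Distill-Llama-70B": 70,
}

def vram_gb(params_b: float, bits: int = 4, overhead: float = 1.2) -> float:
    """Estimated GB of VRAM for the weights at `bits`, plus overhead."""
    return params_b * bits / 8 * overhead  # billions of params -> GB directly

for name, size in DISTILLED.items():
    print(f"{name}: ~{vram_gb(size):.1f} GB at 4-bit")
```

By this estimate the 32B lands around 19GB at 4-bit, which is why it just fits a 24GB card, while the 14B at roughly 8GB is the comfortable pick for 16GB cards.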

Jan 28 2025, 02:33 PM
Junior Member | 239 posts | Joined: May 2022
At this rate, a full-fledged offline AI robot is possible
Penamer liked this post

Jan 28 2025, 02:33 PM
Junior Member | 473 posts | Joined: Sep 2019
QUOTE(kurtkob78 @ Jan 28 2025, 02:31 PM)
ah lol. 16GB VRAM is a kid's toy that can only run 4-bit quantized small models. o1 is a big model; you probably need around 400GB of VRAM to run it (at 4-bit, probably). To run the full 32-bit, I don't know how much you'd need. Lazy to calculate.
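
The full-precision number is one line of arithmetic, so here is a quick sketch. It uses R1's published 671B parameter count and counts raw weight storage only; KV cache and activations come on top, which is how ~336GB at 4-bit becomes the ~400GB figure quoted above.

```python
# Raw weight storage for a 671B-parameter model at common precisions.
# Weights only: KV cache and activation memory are extra.

PARAMS_B = 671  # DeepSeek R1 parameter count, in billions

for bits in (4, 8, 16, 32):
    gb = PARAMS_B * bits / 8  # billions of params * bytes/param = GB
    print(f"{bits:>2}-bit: {gb:,.0f} GB")
```

So a full 32-bit copy would need roughly 2.7TB of memory, and even FP16 is about 1.3TB.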

Jan 28 2025, 02:35 PM
Junior Member | 473 posts | Joined: Sep 2019

Jan 28 2025, 02:36 PM
Elite | 15,694 posts | Joined: Mar 2008
QUOTE(kurtkob78 @ Jan 28 2025, 02:31 PM)
https://ollama.com/
Try the 12B/14B models yourself. The general performance is quite acceptable, but given the parameter count they won't work well in depth. The 7900 XTX is the best price-performance card for this purpose: it gives you 24GB of VRAM and costs just 5K.
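
If you would rather poke at it from a script than the CLI, here is a minimal sketch against Ollama's local REST API. It assumes Ollama is installed and running, and that you have already pulled a distilled tag (the `deepseek-r1:14b` tag here is from ollama.com's model library).

```python
# Minimal query against a locally running Ollama server.
# Assumes `ollama pull deepseek-r1:14b` has been done and the server
# is listening on its default port, 11434.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-r1:14b",  # swap for 7b/8b if VRAM is tight
        "prompt": "Explain model distillation in two sentences.",
        "stream": False,  # return a single JSON object instead of a stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```

The same endpoint works for any tag in the distilled list above, so stepping down to a smaller model is just a string change.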

Jan 28 2025, 02:37 PM
Junior Member | 830 posts | Joined: Mar 2010
QUOTE(hellothere131495 @ Jan 28 2025, 02:32 PM)
1.5B 4-bit is slow? What spec are you using? BTW, you probably still need the o1; 1.5B 4-bit is a shit model that hallucinates a lot.
Just an N100 processor, no GPU; it's a TrueNAS machine. Originally installed Ollama just for fun, now considering putting it on my desktop.

Jan 28 2025, 02:50 PM | #31
Senior Member | 4,894 posts | Joined: May 2008

Jan 28 2025, 03:01 PM | #32
Junior Member | 382 posts | Joined: Dec 2008 | From: /k/
So it's better than Nvidia?

Jan 28 2025, 03:01 PM
Junior Member | 90 posts | Joined: May 2022
QUOTE(GagalLand @ Jan 28 2025, 02:33 PM)
I think there is still a long way to go before we can get AI to the level shown in Steven Spielberg's A.I. (2001). What we have now shows the promise of what could be achieved, but the power drain is far too high to put it into a machine that not only needs to run the AI but also needs portable power to move around. Many more things need to improve before we can even get a robot like the one shown in Interstellar (2014).

Then we have other benchmarks to reach, which seem very far out of reach:
- the Robot in Lost in Space (the 1960s TV series version, not the new movie or Netflix version)
- R2-D2
- C-3PO
etc. etc.

Jan 28 2025, 03:03 PM
Junior Member | 90 posts | Joined: May 2022

Jan 28 2025, 03:08 PM | #35
Junior Member | 86 posts | Joined: Jan 2012
Aiya, at home you need a robot vacuum and an automated home system, not this bullcrap to ask "hello, how is the weather tomorrow"

Jan 28 2025, 03:09 PM | #36
Senior Member | 4,894 posts | Joined: May 2008
QUOTE(kaizoku30 @ Jan 28 2025, 03:08 PM)
Aiya, at home you need a robot vacuum and an automated home system, not this bullcrap to ask "hello, how is the weather tomorrow"
Nah man, he's building Skynet at home. Trust me bro
alfiejr liked this post

Jan 28 2025, 03:10 PM | #37
Junior Member | 382 posts | Joined: Dec 2008 | From: /k/
Can someone explain this in an easier way?

Jan 28 2025, 03:13 PM
Junior Member | 90 posts | Joined: May 2022

Jan 28 2025, 03:15 PM | #39
Junior Member | 41 posts | Joined: Oct 2010
can calculate the Toto Supreme winning combo
JonSpark and emefbiemef liked this post

Jan 28 2025, 03:31 PM
Junior Member | 191 posts | Joined: Mar 2007
Can run on ipong?
JonSpark liked this post