The most prestigious law school admissions discussion board in the world.

Nvidia ASHAMED of its current GPU lineup, cancels all reviews

Date: May 11th, 2025 12:10 PM
Author: https://imgur.com/a/o2g8xYK

https://youtu.be/4n_J7jclifM?si=Ndp-VEWuRXlOJIe_

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48921540)



Date: May 12th, 2025 4:33 PM
Author: Oh, you travel?

the days of NVIDIA caring about the consumer gaming market are over. it's all about the AI data center business for them now; anything else is scraps.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924790)



Date: May 12th, 2025 4:36 PM
Author: https://imgur.com/a/o2g8xYK

Google is doing fine without Nvidiashit. Apple too. I wonder how long it will take others to notice.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924793)



Date: May 12th, 2025 4:37 PM
Author: Oh, you travel?

Google is running NVIDIA GB200 NVL72s, champ.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924796)



Date: May 12th, 2025 4:39 PM
Author: https://imgur.com/a/o2g8xYK

I got things mixed up. Apple did it without Nvidia, using Google chips:

https://www.reuters.com/technology/apple-says-it-uses-no-nvidia-gpus-train-its-ai-models-2024-07-29/

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924801)



Date: May 12th, 2025 4:43 PM
Author: Oh, you travel?

Apple is behind in "AI" and Google only sells processing units through its GCP cloud platform. Not remarkable - Apple is a hardware company first and foremost.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924808)



Date: May 12th, 2025 4:44 PM
Author: https://imgur.com/a/o2g8xYK

Apple doesn't seem "behind" in any meaningful way. What opportunities do you think they're missing here?

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924813)



Date: May 12th, 2025 4:45 PM
Author: Oh, you travel?

Apple Intelligence is a joke: https://cybernews.com/tech/iphone-users-disable-apple-intelligence/

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924815)



Date: May 12th, 2025 4:51 PM
Author: https://imgur.com/a/o2g8xYK

Oh wow, an AI gives inaccurate responses? Shoulda used Nvidia, I guess.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924835)



Date: May 12th, 2025 4:47 PM
Author: .,.,,..,..,.,..:,,:,...,:::,.,.,:,.,.:.,:.,:.::,.

It seems that TPUs provide most of Google's training and inference capacity, and that will only become more true as time goes on. Companies like Microsoft going in a similar direction is not a positive sign for Nvidia.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924818)



Date: May 12th, 2025 4:48 PM
Author: Oh, you travel?

?

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924820)



Date: May 12th, 2025 4:50 PM
Author: .,.,,..,..,.,..:,,:,...,:::,.,.,:,.,.:.,:.,:.::,.

Their Nvidia GPUs are primarily for their cloud services. Major AI companies using their own chips cuts Nvidia out of the market as time goes on. They will not retain the market share they have now.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924832)



Date: May 12th, 2025 4:52 PM
Author: https://imgur.com/a/o2g8xYK

cr, their edge was never in hardware but in developer support. They could dump more money into developer support than anyone else, but it was always only a matter of time until developers started doing their own thing.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924836)



Date: May 12th, 2025 4:58 PM
Author: .,.,,..,..,.,..:,,:,...,:::,.,.,:,.,.:.,:.,:.::,.

Right. The advances in AI coding will also cause this to happen sooner than it would otherwise. Even some of the local LLM market will likely migrate to AMD or Intel GPUs as the software improves.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924854)



Date: May 12th, 2025 5:03 PM
Author: Oh, you travel?

now is the time for Intel to really flood the GPU market if they want to escape ignominy. I don't know much about their Arc shit, but I hear it's ok.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924866)



Date: May 12th, 2025 5:06 PM
Author: https://imgur.com/a/o2g8xYK

They are working on dedicated AI chips and aren't pushing GPUs for that purpose, although they let you run local LLMs on them now using their own proprietary software. It's not bad.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924877)



Date: May 12th, 2025 5:08 PM
Author: Oh, you travel?

are you talking about NPUs? they already have those in their latest chips afaik.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924880)



Date: May 12th, 2025 5:10 PM
Author: https://imgur.com/a/o2g8xYK

Yeah, they are going that way. No one thinks Intel chips will be used to train models, but running local inference should be easy. I think the cloudshit is going to become less relevant as the performance of local LLMs improves. You can already get models under 1 GB that run great.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924885)
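
A bit of napkin math backs up the sub-1 GB claim. Weight size is roughly parameters × bits per weight ÷ 8 - a rough sketch that ignores KV cache and runtime overhead, using decimal GB:

```python
# Rough on-disk / in-memory size of a model's weights:
# params * bits_per_weight / 8 bytes (ignores KV cache and runtime overhead).
def weight_gb(n_params: int, bits_per_weight: float) -> float:
    return n_params * bits_per_weight / 8 / 1e9  # decimal GB

# A 1B-parameter model quantized to 4 bits per weight:
print(f"{weight_gb(1_000_000_000, 4):.2f} GB")  # 0.50 GB
```

So a 1B-parameter model at a typical 4-bit quantization is about half a gigabyte, which is how sub-1 GB local models are possible at all.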



Date: May 12th, 2025 6:41 PM
Author: .,.,,..,..,.,..:,,:,...,:::,.,.,:,.,.:.,:.,:.::,.

Yes, and that's still using traditional models where all the weights are in the GPU. For many specialized tasks, the model likely only needs a small amount of additional information in order to predict a particular token well. The neural networks that play particular games, for example, are quite small and could be quickly transferred from an NVMe solid-state drive into GPU memory if it were necessary for predicting the next token. I can imagine mixture-of-experts-type models that retrieve from a database of thousands of expert modules based on the current context. No need for hardware with tons of memory next to the tensor cores.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925085)
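
The idea above can be sketched in a few lines: a toy mixture-of-experts step where only a tiny router stays resident and the chosen expert's weights are fetched on demand. All names are illustrative, and a plain dict stands in for per-expert weight files on an NVMe drive:

```python
import random

random.seed(0)
D, N_EXPERTS = 8, 4  # hidden size, number of experts

def rand_matrix(rows, cols):
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

def matvec(m, v):
    return [sum(w * x for w, x in zip(row, v)) for row in m]

# Only the small router stays resident; each expert's weights live in
# expert_store, a stand-in for per-expert files sitting on an NVMe drive.
expert_store = {i: rand_matrix(D, D) for i in range(N_EXPERTS)}
router = rand_matrix(N_EXPERTS, D)  # N_EXPERTS x D, always in memory

def moe_step(x):
    scores = matvec(router, x)                        # route the current token
    expert_id = max(range(N_EXPERTS), key=scores.__getitem__)
    weights = expert_store[expert_id]                 # "page in" just that expert
    return matvec(weights, x)

print(len(moe_step([random.gauss(0, 1) for _ in range(D)])))  # 8
```

A real system would overlap the NVMe read with compute and cache hot experts, but the shape is the same: the always-resident part is tiny, and only one expert's weights need to be near the tensor cores at a time.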



Date: May 12th, 2025 6:46 PM
Author: https://imgur.com/a/o2g8xYK

tyft

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925095)



Date: May 12th, 2025 6:51 PM
Author: .,.,,..,..,.,..:,,:,...,:::,.,.,:,.,.:.,:.,:.::,.

I am blown away by even current local LLMs. Gemma 3 is great.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925110)



Date: May 12th, 2025 6:52 PM
Author: https://imgur.com/a/o2g8xYK

Gemma is the only one I mess with anymore. I might do some coding at some point but I have no urge to use LLMs for that rn.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925113)



Date: May 12th, 2025 5:03 PM
Author: https://imgur.com/a/o2g8xYK

The two people I know who work in the biz (one at Meta) run local LLMs on MacBooks with unified RAM. There's nothing you can buy for $5k that gives you better performance than an M4 Ultra, or even an M2 Ultra with 64-128 GB of unified RAM.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924868)



Date: May 12th, 2025 6:53 PM
Author: VoteRepublican (A true Chad!! where's your gf/wifew?)

explain why it needs 64 gigs of RAM wtf? there are only 1 billion parameters

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925119)



Date: May 12th, 2025 6:55 PM
Author: https://imgur.com/a/o2g8xYK

You don't, 32 is fine. However, some people think it's nbd to drop $5k on a MacBook, and the models with high capacity are in short supply, so people grab them as they become available. Mac minis are really the best value right now if you want to run an M4 Max or M2 Ultra. You need at least the Max to run LLMs, and if you're getting more than 32 GB you really want the Ultra.

EDIT: oh shit, there's an M3 Ultra now. 180

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925124)
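
The "32 is fine" answer falls out of the same weights-only arithmetic: billions of parameters × bits per weight ÷ 8 ≈ GB of RAM. This is a sketch that ignores context/KV cache and OS overhead, so real headroom requirements are somewhat higher:

```python
# Weights-only RAM estimate: billions of params * bits_per_weight / 8 = GB.
def weights_gb(params_b: float, bits_per_weight: float) -> float:
    return params_b * bits_per_weight / 8

for params_b in (1, 8, 27, 70):
    print(f"{params_b:>2}B params: "
          f"{weights_gb(params_b, 4):6.1f} GB @ 4-bit, "
          f"{weights_gb(params_b, 16):6.1f} GB @ fp16")
# 1B is tiny (0.5 GB at 4-bit) and even a 27B model fits in 32 GB once
# quantized; it's 70B-class models (35 GB at 4-bit, 140 GB at fp16) where
# 64-128 GB of unified RAM starts to matter.
```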



Date: May 12th, 2025 7:05 PM
Author: Oh, you travel?

I just ordered a Core Ultra 9 285K and a 5070 Ti. Rate me.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925151)



Date: May 12th, 2025 7:06 PM
Author: https://imgur.com/a/o2g8xYK

If you got a Z890 chipset you won the gen.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925155)



Date: May 12th, 2025 7:07 PM
Author: Oh, you travel?

I did, but it's one of the mid-range ones, not the crazy ASUS "gamerz" one or whatever.

https://pcpartpicker.com/product/xPtLrH/msi-mpg-z890-carbon-wifi-atx-lga1851-motherboard-mpg-z890-carbon-wifi

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925159)



Date: May 12th, 2025 7:10 PM
Author: https://imgur.com/a/o2g8xYK

That looks fine.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925167)