
Nvidia ASHAMED of its current GPU lineup, cancels all reviews


Date: May 11th, 2025 12:10 PM
Author: transparent ticket booth

https://youtu.be/4n_J7jclifM?si=Ndp-VEWuRXlOJIe_

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48921540)




Date: May 12th, 2025 4:33 PM
Author: Haunting Stubborn Center

The days of NVIDIA caring about the consumer gaming market are over. It's all about AI data centers for them now; anything else is scraps.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924790)




Date: May 12th, 2025 4:36 PM
Author: transparent ticket booth

Google is doing fine without Nvidiashit. Apple too. I wonder how long it will take others to notice.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924793)




Date: May 12th, 2025 4:37 PM
Author: Haunting Stubborn Center

Google is running NVIDIA GB200 NVL72s, champ.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924796)




Date: May 12th, 2025 4:39 PM
Author: transparent ticket booth

I got things mixed up. Apple did it without Nvidia, using Google chips:

https://www.reuters.com/technology/apple-says-it-uses-no-nvidia-gpus-train-its-ai-models-2024-07-29/

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924801)




Date: May 12th, 2025 4:43 PM
Author: Haunting Stubborn Center

Apple is behind in "AI," and Google only sells its tensor processing units through its GCP cloud platform. Not remarkable; Apple is a hardware company first and foremost.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924808)




Date: May 12th, 2025 4:44 PM
Author: transparent ticket booth

Apple doesn't seem "behind" in any meaningful way. What opportunities do you think they're missing here?

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924813)




Date: May 12th, 2025 4:45 PM
Author: Haunting Stubborn Center

Apple Intelligence is a joke: https://cybernews.com/tech/iphone-users-disable-apple-intelligence/

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924815)




Date: May 12th, 2025 4:51 PM
Author: transparent ticket booth

Oh wow an AI gives inaccurate responses? Shoulda used Nvidia I guess.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924835)




Date: May 12th, 2025 4:47 PM
Author: disturbing pit fanboi

It seems that TPUs provide most of Google's training and inference capacity, and that will only become more true as time goes on. Companies like Microsoft going in a similar direction is not a positive thing for Nvidia.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924818)




Date: May 12th, 2025 4:48 PM
Author: Haunting Stubborn Center

?

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924820)




Date: May 12th, 2025 4:50 PM
Author: disturbing pit fanboi

Google's Nvidia GPUs are primarily for its cloud services. Major AI companies using their own chips will cut Nvidia out of the market as time goes on. Nvidia will not retain the market share it has now.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924832)




Date: May 12th, 2025 4:52 PM
Author: transparent ticket booth

cr, their edge was never in hardware but in developer support. They could dump more money into developer support than anyone else, but it was always only a matter of time until developers started doing their own thing.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924836)




Date: May 12th, 2025 4:58 PM
Author: disturbing pit fanboi

Right. The advances in AI coding will also cause this to happen sooner than it would otherwise. Even some of the local LLM market will likely migrate to AMD or Intel GPUs as the software improves.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924854)




Date: May 12th, 2025 5:03 PM
Author: Haunting Stubborn Center

now is the time for Intel to really flood the GPU market if they want to escape ignominy. I don't know much about their Arc shit but I hear it's ok.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924866)




Date: May 12th, 2025 5:06 PM
Author: transparent ticket booth

They are working on dedicated AI chips and aren't pushing GPUs for that purpose, although they let you run local LLMs on them now using their own proprietary software. It's not bad.
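If they mean the OpenVINO GenAI stack (an assumption on my part), running a local model on an Arc GPU looks roughly like this sketch; the model directory is a placeholder for a converted model:

    # Rough sketch of local inference on an Intel Arc GPU via OpenVINO GenAI
    # (assumption: this is the Intel software being referred to).
    # pip install openvino-genai; the model dir below is a placeholder.
    import openvino_genai as ov_genai

    pipe = ov_genai.LLMPipeline("models/gemma-3-1b-it-ov", "GPU")  # "GPU" selects the Arc card
    print(pipe.generate("What does an NPU do?", max_new_tokens=64))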

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924877)




Date: May 12th, 2025 5:08 PM
Author: Haunting Stubborn Center

are you talking about NPUs? they already have those in their latest chips afaik.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924880)




Date: May 12th, 2025 5:10 PM
Author: transparent ticket booth

Yeah, they are going that way. No one thinks Intel chips will be used to train models, but running local inference should be easy. I think the cloudshit is going to become less relevant as local LLM performance improves. You can already get models under 1 GB that run great.
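For example, a sub-1 GB quantized model runs through llama-cpp-python in a few lines (a minimal sketch; the GGUF filename is a placeholder for whatever small model you grab):

    # Minimal local-inference sketch with llama-cpp-python
    # (pip install llama-cpp-python). Any small quantized GGUF model works.
    from llama_cpp import Llama

    llm = Llama(model_path="models/qwen2.5-0.5b-instruct-q4_k_m.gguf", n_ctx=2048)
    out = llm("Q: Why is local inference getting cheaper? A:", max_tokens=64)
    print(out["choices"][0]["text"])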

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924885)




Date: May 12th, 2025 6:41 PM
Author: disturbing pit fanboi

Yes, and that's still using traditional models where all the weights are in the GPU. For many specialized tasks, the model likely needs only a small amount of additional information to predict a particular token well. The neural networks that play particular games, for example, are quite small and could be transferred quickly from an NVMe solid-state drive into GPU memory if that were necessary for predicting the next token. I can imagine mixture-of-experts-type models that retrieve from a database of thousands of expert modules based on the current context. No need for hardware with tons of memory next to the tensor cores.
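A toy sketch of the idea (every name here is hypothetical; this is the shape of it, not any real framework):

    # Hypothetical sketch: route each token to one small expert module and pull
    # only that expert's weights off NVMe, instead of keeping everything in VRAM.
    import numpy as np

    EXPERT_DIR = "experts"  # thousands of small per-task weight files on disk
    _cache = {}             # recently used experts stay resident in RAM

    def load_expert(expert_id: int) -> np.ndarray:
        if expert_id not in _cache:
            # mmap the file: pages stream in from the SSD only as they're touched
            _cache[expert_id] = np.load(f"{EXPERT_DIR}/expert_{expert_id}.npy",
                                        mmap_mode="r")
        return _cache[expert_id]

    def expert_forward(hidden: np.ndarray, expert_id: int) -> np.ndarray:
        return hidden @ load_expert(expert_id)  # weights fetched only when routed to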

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925085)




Date: May 12th, 2025 6:46 PM
Author: transparent ticket booth

tyft

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925095)




Date: May 12th, 2025 6:51 PM
Author: disturbing pit fanboi

I am blown away by even current local LLMs. Gemma 3 is great.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925110)




Date: May 12th, 2025 6:52 PM
Author: transparent ticket booth

Gemma is the only one I mess with anymore. I might do some coding at some point but I have no urge to use LLMs for that rn.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925113)




Date: May 12th, 2025 5:03 PM
Author: transparent ticket booth

The two people I know who work in the biz (one at Meta) run local LLMs on MacBooks with unified RAM. There's nothing you can buy for $5k that gives you better performance than an M4 Ultra, or even an M2 Ultra with 64-128 GB of unified RAM.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924868)




Date: May 12th, 2025 6:53 PM
Author: mischievous exhilarant set turdskin

explain why it needs 64 gigs of RAM wtf? there are only 1 billion parameters

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925119)




Date: May 12th, 2025 6:55 PM
Author: transparent ticket booth

You don't; 32 is fine. However, some people think it's nbd to drop $5k on a MacBook, and the high-memory configurations are in short supply, so people grab them as they become available. Mac Minis are really the best value right now if you want an M4 Max or M2 Ultra. You need at least the Max to run LLMs, and if you're getting more than 32 GB you really want the Ultra.

EDIT: oh shit, there's an M3 Ultra now. 180
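For the RAM question, the back-of-envelope math (approximate; it ignores the KV cache and runtime overhead) is just parameters times bits per parameter:

    # Rough model-memory arithmetic: params * (bits per param) / 8 = bytes.
    def model_size_gb(params_billions: float, bits: float) -> float:
        return params_billions * 1e9 * bits / 8 / 1e9

    print(model_size_gb(1, 16))   # 1B params, fp16   -> ~2 GB, fits anywhere
    print(model_size_gb(27, 4))   # 27B params, 4-bit -> ~13.5 GB
    print(model_size_gb(70, 4))   # 70B params, 4-bit -> ~35 GB; why people want 64 GB+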

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925124)




Date: May 12th, 2025 7:05 PM
Author: Haunting Stubborn Center

I just ordered a Core Ultra 9 285K and a 5070 Ti. Rate me.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925151)




Date: May 12th, 2025 7:06 PM
Author: transparent ticket booth

If you got a Z890 chipset, you won the gen.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925155)




Date: May 12th, 2025 7:07 PM
Author: Haunting Stubborn Center

I did but it's one of the mid-range ones, not the crazy ASUS "gamerz" one or whatever

https://pcpartpicker.com/product/xPtLrH/msi-mpg-z890-carbon-wifi-atx-lga1851-motherboard-mpg-z890-carbon-wifi

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925159)




Date: May 12th, 2025 7:10 PM
Author: transparent ticket booth

That looks fine.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925167)