  The most prestigious law school admissions discussion board in the world.

Nvidia ASHAMED of its current GPU lineup, cancels all reviews



Date: May 11th, 2025 12:10 PM
Author: impertinent carnelian telephone

https://youtu.be/4n_J7jclifM?si=Ndp-VEWuRXlOJIe_

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48921540)




Date: May 12th, 2025 4:33 PM
Author: Vermilion stirring jap

the days of NVIDIA caring about the consumer gaming market are over. it's all about AI Data Center for them now, anything else is scraps.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924790)




Date: May 12th, 2025 4:36 PM
Author: impertinent carnelian telephone

Google is doing fine without Nvidiashit. Apple too. I wonder how long it will take others to notice.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924793)




Date: May 12th, 2025 4:37 PM
Author: Vermilion stirring jap

Google is running NVIDIA GB200 NVL72s champ.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924796)




Date: May 12th, 2025 4:39 PM
Author: impertinent carnelian telephone

I got things mixed up. Apple did it without Nvidia, using Google chips:

https://www.reuters.com/technology/apple-says-it-uses-no-nvidia-gpus-train-its-ai-models-2024-07-29/

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924801)




Date: May 12th, 2025 4:43 PM
Author: Vermilion stirring jap

Apple is behind in "AI" and Google only sells processing units through their GCP cloud platform. Not remarkable - Apple is a hardware company first and foremost.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924808)




Date: May 12th, 2025 4:44 PM
Author: impertinent carnelian telephone

Apple doesn't seem "behind" in any meaningful way. What opportunities do you think they're missing here?

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924813)




Date: May 12th, 2025 4:45 PM
Author: Vermilion stirring jap

Apple Intelligence is a Joke https://cybernews.com/tech/iphone-users-disable-apple-intelligence/

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924815)




Date: May 12th, 2025 4:51 PM
Author: impertinent carnelian telephone

Oh wow an AI gives inaccurate responses? Shoulda used Nvidia I guess.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924835)




Date: May 12th, 2025 4:47 PM
Author: Cerebral chocolate shrine hairy legs

It seems that TPUs provide most of Google's training and inference capacity, and that will only become more true as time goes on. Companies like Microsoft moving in a similar direction is not a positive sign for Nvidia.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924818)




Date: May 12th, 2025 4:48 PM
Author: Vermilion stirring jap

?

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924820)




Date: May 12th, 2025 4:50 PM
Author: Cerebral chocolate shrine hairy legs

Their Nvidia GPUs are primarily for their cloud services. Major AI companies using their own chips cuts Nvidia out of the market as time goes on. They will not retain the market share they have now.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924832)




Date: May 12th, 2025 4:52 PM
Author: impertinent carnelian telephone

cr, their edge was never in hardware but in developer support. They could dump more money into developer support than anyone else, but it was always only a matter of time until developers started doing their own thing.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924836)




Date: May 12th, 2025 4:58 PM
Author: Cerebral chocolate shrine hairy legs

Right. The advances in AI coding will also cause this to happen sooner than it would otherwise. Even some of the local LLM market will likely migrate to AMD or Intel GPUs as the software improves.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924854)




Date: May 12th, 2025 5:03 PM
Author: Vermilion stirring jap

now is the time for Intel to really flood the GPU market if they want to escape ignominy, I don't know much about their Arc shit but I hear it's ok.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924866)




Date: May 12th, 2025 5:06 PM
Author: impertinent carnelian telephone

They are working on dedicated AI chips and aren't pushing GPUs for that purpose, although they let you run local LLMs on them now using their own proprietary software. It's not bad.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924877)




Date: May 12th, 2025 5:08 PM
Author: Vermilion stirring jap

are you talking about NPUs? they already have those in their latest chips afaik.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924880)




Date: May 12th, 2025 5:10 PM
Author: impertinent carnelian telephone

Yeah they are going that way. No one thinks Intel chips will be used to train models, but running local inference should be easy. I think the cloudshit is going to become less relevant as performance of local LLMs improves. You can already get models that are <1gb that run great.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924885)




Date: May 12th, 2025 6:41 PM
Author: Cerebral chocolate shrine hairy legs

Yes, and that’s still using traditional models where all the weights are in the GPU. For many specialized tasks, the model likely only needs a small amount of additional information in order to predict a particular token well. The neural networks that play particular games, for example, are quite small and could be quickly transferred from an NVME solid state drive into GPU memory if it was necessary for predicting the next token. I can imagine mixture of expert type models that retrieve from a database of thousands of expert modules based on the current context. No need for hardware with tons of memory next to the tensor cores.
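The expert-swapping idea can be sketched in a few lines. Everything below (the keyword router, the pickled "expert" files, the function names) is made up purely for illustration; a real mixture-of-experts gate is a learned network, not a keyword match:

```python
# Toy sketch of context-routed expert loading: each "expert" lives on disk
# and only the one the router picks gets pulled into memory per prediction.
import os
import pickle
import tempfile

class Expert:
    def __init__(self, name, weights):
        self.name = name
        self.weights = weights  # stand-in for a small task-specific network

def save_experts(dirpath):
    # Pretend these are thousands of small specialist modules on an NVMe drive.
    for name, w in [("chess", [0.1, 0.2]), ("code", [0.3, 0.4]), ("chat", [0.5, 0.6])]:
        with open(os.path.join(dirpath, name + ".pkl"), "wb") as f:
            pickle.dump(Expert(name, w), f)

def route(context):
    # Trivial keyword router; a real MoE gate would score experts with a network.
    for topic in ("chess", "code"):
        if topic in context:
            return topic
    return "chat"

def predict_next_token(context, dirpath):
    name = route(context)
    with open(os.path.join(dirpath, name + ".pkl"), "rb") as f:
        expert = pickle.load(f)  # only this expert occupies memory now
    return expert.name  # stand-in for the expert's actual token prediction

d = tempfile.mkdtemp()
save_experts(d)
print(predict_next_token("play chess with me", d))  # the chess expert handles it
```

The point of the sketch: resident memory only ever holds one small module plus the router, so the "tons of memory next to the tensor cores" requirement goes away if the per-token expert fetch is fast enough.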

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925085)




Date: May 12th, 2025 6:46 PM
Author: impertinent carnelian telephone

tyft

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925095)




Date: May 12th, 2025 6:51 PM
Author: Cerebral chocolate shrine hairy legs

I am blown away by even current local LLMs. Gemma 3 is great.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925110)




Date: May 12th, 2025 6:52 PM
Author: impertinent carnelian telephone

Gemma is the only one I mess with anymore. I might do some coding at some point but I have no urge to use LLMs for that rn.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925113)




Date: May 12th, 2025 5:03 PM
Author: impertinent carnelian telephone

The two people I know who work in the biz (one at Meta) run local LLMs on MacBooks with unified RAM. There's nothing you can buy for $5k that gives you better performance than an M4 Ultra, or even an M2 Ultra with 64-128gb of unified RAM.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924868)




Date: May 12th, 2025 6:53 PM
Author: Beady-eyed Swollen Turdskin

explain why it needs 64 gig ram wtf? there are only 1 billion parameters

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925119)




Date: May 12th, 2025 6:55 PM
Author: impertinent carnelian telephone

You don't, 32 is fine. However some people think it's nbd to drop $5k on a Macbook, and the models with high capacity are in short supply so people grab them as they become available. Mac Minis are really the best value right now if you want to run M4 Max or M2 Ultra. You need at least the Max to run LLMs, and if you're getting more than 32gb you really want the Ultra

EDIT oh shit there's a M3 Ultra now. 180
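Back-of-the-envelope on the RAM question, counting weights only (ignoring KV cache, activations, and the OS, which all add headroom on top):

```python
# Rough memory math for holding model weights in RAM.
# A 1B-parameter model at fp16 is ~2 GB; 4-bit quantized is ~0.5 GB.
# The 64 GB buys room for much bigger models plus context and everything else.
def weight_gb(params_billion, bits_per_param):
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 1e9  # decimal GB, close enough for sizing

print(weight_gb(1, 16))   # 1B params at fp16   -> 2.0 GB
print(weight_gb(70, 4))   # 70B params at 4-bit -> 35.0 GB, where 64 GB starts to matter
```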

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925124)




Date: May 12th, 2025 7:05 PM
Author: Vermilion stirring jap

I just ordered a Core Ultra 9 285K and a 5070ti. Rate me.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925151)




Date: May 12th, 2025 7:06 PM
Author: impertinent carnelian telephone

If you got a Z890 chipset you won the gen.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925155)




Date: May 12th, 2025 7:07 PM
Author: Vermilion stirring jap

I did but it's one of the mid-range ones, not the crazy ASUS "gamerz" one or whatever

https://pcpartpicker.com/product/xPtLrH/msi-mpg-z890-carbon-wifi-atx-lga1851-motherboard-mpg-z890-carbon-wifi

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925159)




Date: May 12th, 2025 7:10 PM
Author: impertinent carnelian telephone

That looks fine.

(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925167)