Nvidia ASHAMED of its current GPU lineup, cancels all reviews
Date: May 12th, 2025 4:47 PM
Author: .,.,,..,..,.,..:,,:,...,:::,.,.,:,.,.:.,:.,:.::,.
It seems that TPUs provide most of Google's training and inference capacity, and that will only become more true as time goes on. Companies like Microsoft going in a similar direction with their own custom silicon is not a positive thing for Nvidia.
(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924818)
 |
Date: May 12th, 2025 4:50 PM
Author: .,.,,..,..,.,..:,,:,...,:::,.,.,:,.,.:.,:.,:.::,.
Their Nvidia GPUs are primarily for their cloud services. Major AI companies using their own chips cuts Nvidia out of the market as time goes on. Nvidia will not retain the market share it has now.
(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924832)
 |
Date: May 12th, 2025 4:58 PM
Author: .,.,,..,..,.,..:,,:,...,:::,.,.,:,.,.:.,:.,:.::,.
Right. The advances in AI coding will also cause this to happen sooner than it would otherwise. Even some of the local LLM market will likely migrate to AMD or Intel GPUs as the software improves.
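As a minimal sketch of what that software convergence looks like in practice: PyTorch's ROCm build already exposes AMD GPUs through the same "cuda" device name, and Intel GPUs through the "xpu" backend, so the same inference code can target any of the three. This assumes a recent PyTorch build with the relevant backend compiled in.

import torch

def pick_device() -> torch.device:
    # ROCm builds of PyTorch report AMD GPUs through the CUDA API,
    # so this one check covers both Nvidia and AMD cards.
    if torch.cuda.is_available():
        return torch.device("cuda")
    # Intel GPUs use the XPU backend (available in recent PyTorch releases).
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return torch.device("xpu")
    return torch.device("cpu")

device = pick_device()
x = torch.randn(4, 4, device=device)
print(device, (x @ x).sum().item())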
(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48924854)
 |
Date: May 12th, 2025 6:41 PM
Author: .,.,,..,..,.,..:,,:,...,:::,.,.,:,.,.:.,:.,:.::,.
Yes, and that’s still with traditional models where all the weights sit in GPU memory. For many specialized tasks, the model likely needs only a small amount of additional information to predict a particular token well. The neural networks that play particular games, for example, are quite small and could be transferred quickly from an NVMe solid-state drive into GPU memory if needed for predicting the next token. I can imagine mixture-of-experts-type models that retrieve from a database of thousands of expert modules based on the current context. No need for hardware with tons of memory next to the tensor cores.
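A minimal sketch of that retrieve-on-demand idea, assuming hypothetical per-expert weight files (expert_0.pt, expert_1.pt, ...) serialized ahead of time; the router, file layout, and cache size here are all made up for illustration:

import torch
from collections import OrderedDict

class DiskBackedExperts(torch.nn.Module):
    """Keeps only a few experts resident on the GPU; the rest live on NVMe."""
    def __init__(self, num_experts, d_model, expert_dir, cache_size=4, device="cuda"):
        super().__init__()
        self.router = torch.nn.Linear(d_model, num_experts, device=device)
        self.expert_dir = expert_dir
        self.device = device
        self.cache = OrderedDict()          # expert_id -> loaded module
        self.cache_size = cache_size

    def _get_expert(self, idx: int) -> torch.nn.Module:
        if idx in self.cache:
            self.cache.move_to_end(idx)     # mark as recently used
            return self.cache[idx]
        # Pull this expert's weights off the SSD into GPU memory on demand.
        expert = torch.load(f"{self.expert_dir}/expert_{idx}.pt",
                            map_location=self.device, weights_only=False)
        self.cache[idx] = expert
        if len(self.cache) > self.cache_size:
            self.cache.popitem(last=False)  # evict the least recently used
        return expert

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Route the current context (seq_len, d_model) to its top-1 expert.
        idx = int(self.router(x.mean(dim=0)).argmax())
        return self._get_expert(idx)(x)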
(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925085)
 |
Date: May 12th, 2025 6:51 PM
Author: .,.,,..,..,.,..:,,:,...,:::,.,.,:,.,.:.,:.,:.::,.
I am blown away by even current local LLMs. Gemma 3 is great.
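For anyone who wants to try it, a minimal sketch of scripting a local model from Python, assuming Ollama is installed and running and the gemma3 model has been pulled (the prompt is just an example):

import ollama  # official Ollama Python client (pip install ollama)

# Assumes the local Ollama server is up and the model was fetched
# beforehand, e.g. with: ollama pull gemma3
response = ollama.chat(
    model="gemma3",
    messages=[{"role": "user", "content": "Summarize why local LLMs matter."}],
)
print(response["message"]["content"])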
(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925110)
 |
Date: May 12th, 2025 6:55 PM
Author: https://imgur.com/a/o2g8xYK
You don't, 32 GB is fine. However some people think it's nbd to drop $5k on a MacBook, and the high-memory configs are in short supply, so people grab them as they become available. Mac Studios are really the best value right now if you want the M4 Max or M2 Ultra. You need at least a Max to run LLMs, and if you're getting more than 32 GB you really want the Ultra.
EDIT oh shit there's an M3 Ultra now. 180
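For rough sizing, here's a worked example of why 32 GB covers mid-size local models; the quantization bit-widths and the overhead factor are illustrative assumptions:

# Rough unified-memory estimate for running a quantized LLM locally.
def model_memory_gb(params_billion: float, bits_per_weight: float,
                    overhead: float = 1.2) -> float:
    # weights + ~20% assumed slack for KV cache, activations, and runtime
    return params_billion * (bits_per_weight / 8) * overhead

for params, bits in [(12, 4), (27, 4), (70, 4), (70, 8)]:
    print(f"{params}B @ {bits}-bit ≈ {model_memory_gb(params, bits):.1f} GB")

# 12B @ 4-bit ≈ 7.2 GB, 27B @ 4-bit ≈ 16.2 GB  -> fit comfortably in 32 GB
# 70B @ 4-bit ≈ 42.0 GB, 70B @ 8-bit ≈ 84.0 GB -> need a higher-memory config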
(http://www.autoadmit.com/thread.php?thread_id=5723631&forum_id=2#48925124)