we can't ban open source AI so we'll remove the ability to run it locally
Date: December 17th, 2025 9:25 AM Author: merry screenmas
why don't you "ping" your pencil neck you weird little freak
(http://www.autoadmit.com/thread.php?thread_id=5811303&forum_id=2.#49516133)
Date: December 17th, 2025 1:13 PM Author: i gave my cousin head
here are the normie instructions for low iq mos
https://lmstudio.ai/
+
https://huggingface.co/TheDrummer/Big-Tiger-Gemma-27B-v1
also if you can't answer these questions on your own, you probably don't even have a strong enough setup to begin with
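if you'd rather script it than use the chat window: LM Studio can run a local server that speaks the OpenAI API (defaults to http://localhost:1234). a minimal sketch, assuming you've loaded a model in LM Studio and started the server — the model id below is a placeholder, use whatever id LM Studio shows for your loaded model:

# minimal sketch: talk to a model served by LM Studio's local server.
# assumes the openai python package is installed (pip install openai)
# and that LM Studio's server is running on its default port.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's OpenAI-compatible endpoint
    api_key="lm-studio",                  # any non-empty string works locally
)

response = client.chat.completions.create(
    model="big-tiger-gemma-27b-v1",  # placeholder id; check what LM Studio lists
    messages=[{"role": "user", "content": "say hello"}],
)
print(response.choices[0].message.content)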
(http://www.autoadmit.com/thread.php?thread_id=5811303&forum_id=2.#49516771)
Date: December 17th, 2025 1:16 PM Author: incompetent fraud (דירה בתחתונים)
This is really fucking my enterprise shit up.
I had breakfast with a Swiss PhD (redacted) AI guy earlier this year who's designing AI-guided ultrasound software. He basically mentioned this and said to just go fully ML locally and feed it only your own inputs, and not even bother with an LLM because of this. I didn't get it at the time because I was piping this mentally ill Russian bitch who fried my synapses.
(http://www.autoadmit.com/thread.php?thread_id=5811303&forum_id=2.#49516788)
Date: December 17th, 2025 7:49 PM Author: https://i.imgur.com/chK2k5a.jpeg
You need 24GB of VRAM to do inference; 16GB isn't enough. However, unless you are coding there's little reason to go above 24GB. Coding can use more VRAM because going through each iteration of the code generates long context windows. If you run out of context window, the AI will forget what it was doing earlier. This is also why you can't run 15GB models on 16GB of VRAM: the context window spills into system RAM and slows everything down.
48GB of VRAM lets you do more with image and video generation, but it will not give you measurable gains in inference. You can put bigger models on the system, but they probably won't give you better results.
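To make the context-window point concrete, here's a back-of-envelope VRAM estimate (quantized weights plus KV cache). The layer/head numbers are illustrative — roughly a Gemma-2-27B-class config — and the ~0.56 bytes/param figure for Q4 quantization is approximate; plug in the real values from your model's config.json:

# back-of-envelope VRAM estimate: quantized weights + KV cache.
# illustrative numbers only; check the actual config of the model you download.
def model_vram_gb(n_params, bytes_per_param=0.56):
    """Quantized weight memory (Q4-class quants run ~0.56 bytes/param)."""
    return n_params * bytes_per_param / 1e9

def kv_cache_gb(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem=2):
    """KV cache: 2 (K and V) * layers * kv_heads * head_dim * tokens, at fp16."""
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem / 1e9

weights = model_vram_gb(27e9)           # ~15 GB for a 27B model at Q4
kv_8k  = kv_cache_gb(46, 16, 128, 8192)   # ~3 GB at 8k context
kv_32k = kv_cache_gb(46, 16, 128, 32768)  # ~12 GB at 32k context

print(f"weights ~{weights:.1f} GB")
print(f"+ 8k context  ~{weights + kv_8k:.1f} GB (fits in 24 GB)")
print(f"+ 32k context ~{weights + kv_32k:.1f} GB (spills past 24 GB)")

Same model, same weights: the difference between fitting in 24GB and spilling into system RAM is just how long you let the context grow.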
(http://www.autoadmit.com/thread.php?thread_id=5811303&forum_id=2.#49517900)