There is 0% possibility of “AGI” ever occurring
Date: October 18th, 2025 8:11 AM Author: contagious hyperactive business firm tank
AI’s greatest accomplishment will be making millions of otherwise sentient / marginally intelligent people retarded.
It’s just another fucking computer program and that’s all it will ever be.
(http://www.autoadmit.com/thread.php?thread_id=5787428&forum_id=2#49357405)
Date: October 18th, 2025 12:30 PM Author: Irradiated Fishy Den Mad-dog Skullcap
The only thing that would/could stop the eventual development of artificially intelligent minds is the breakdown and collapse of civilization
Which could totally happen
(http://www.autoadmit.com/thread.php?thread_id=5787428&forum_id=2#49357795)
Date: October 18th, 2025 12:36 PM Author: aquamarine dashing set
that's AGI though
make users insanely retarded
everyone uses AI to try not to be retarded
agents do basic tasks that humans can no longer do
(http://www.autoadmit.com/thread.php?thread_id=5787428&forum_id=2#49357811)
Date: April 20th, 2026 1:40 AM Author: kike panopticon
it's a bit like the breathless overselling that went on during the mid-20th century regarding technology, space travel, et cetera.
at the time, it felt like things were moving so rapidly that SURELY our computers and rockets would soon morph into robotic butlers and flying cars a la The Jetsons!
people deadass believed that the Moon 'landing' was an obvious augury of interplanetary space travel and 'colonization' in the near term. even in the 80s, they were still making sci-fi movies set in 'THE YEAR 1999' where people had flying cars and lived on Mars, like this was 100% plausible futurecasting.
the AI thing will fizzle, just like every overhyped tech before it. we'll get all the low-hanging fruit benefits of AI, while 'AGI' talk quietly fades away.
(http://www.autoadmit.com/thread.php?thread_id=5787428&forum_id=2#49828620)
Date: April 20th, 2026 2:42 AM Author: .,.,...,..,.,.,:,,:,...,:::,...,:,.,.:..:.
This is generally a reasonable criticism of technological hype, but the arguments for near-term AGI are more complicated than “we made a lot of progress quickly, therefore we are near the end.”

The success in modeling multiple different modalities with a highly generic architecture and minimal implementation differences tells you something meaningful about intelligence. Gradient descent with transformers works across domains because it’s a tractable approximation of Solomonoff induction: small generalizing circuits make up more of the weight space and are easier for gradient descent to stumble on. It’s basically a dumb circuit search that tends to find, from data, programs that are likely to generalize, and this becomes more true with more data, more epochs, heavier regularization, and more parameters. As compute budgets increase and training techniques become more efficient, the process converges closer and closer to optimal prediction of whatever modality you are training on.

Even if we run out of ideas for improving the training techniques, the parametric circuit search can increasingly be applied to the learning algorithms themselves. There’s really no plausible obstacle that could stop this from happening relatively soon. Manifesting intelligence is no longer contingent on brilliant insights or ideas but on FLOPS and engineering.
(http://www.autoadmit.com/thread.php?thread_id=5787428&forum_id=2#49828660)
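To make the Solomonoff point in the post above concrete, here is a minimal sketch in Python, assuming a drastically restricted program class (seed patterns repeated forever) rather than a universal machine; the names generates, next_bit, and predict are illustrative and come from nothing in the thread or any real training stack. It shows how weighting every program consistent with the observed data by 2^-length makes the shortest consistent hypothesis dominate prediction, which is the sense in which a bias toward small circuits approximates Solomonoff induction.

from itertools import product

def generates(seed: str, observed: str) -> bool:
    """True if repeating `seed` forever reproduces the observed prefix."""
    stream = (seed * (len(observed) // len(seed) + 1))[:len(observed)]
    return stream == observed

def next_bit(seed: str, observed: str) -> str:
    """The bit a repeating-seed 'program' emits right after `observed`."""
    return seed[len(observed) % len(seed)]

def predict(observed: str, max_len: int = 8) -> dict[str, float]:
    """Solomonoff-style posterior over the next bit: every program
    consistent with the data votes with prior weight 2^-(its length)."""
    votes = {"0": 0.0, "1": 0.0}
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            seed = "".join(bits)
            if generates(seed, observed):
                votes[next_bit(seed, observed)] += 2.0 ** -n
    total = votes["0"] + votes["1"]
    return {b: w / total for b, w in votes.items()}

# The short seed "01" carries most of the prior mass, so "0" wins big:
print(predict("010101"))  # roughly {'0': 0.96, '1': 0.04}

Actual Solomonoff induction enumerates all programs for a universal machine and is uncomputable, and SGD obviously enumerates nothing; the sketch is only meant to make tangible why, under a 2^-length prior, short programs dominate prediction.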