There is 0% possibility of “AGI” ever occurring
Date: October 18th, 2025 8:11 AM Author: swashbuckling hyperactive preventive strike
AI’s greatest accomplishment will be making millions of otherwise sentient / marginally intelligent people retarded.
It’s just another fucking computer program and that’s all it will ever be.
(http://www.autoadmit.com/thread.php?thread_id=5787428&forum_id=2#49357405)
Date: October 18th, 2025 12:30 PM Author: smoky menage mediation
The only thing that would/could stop the eventual development of artificially intelligent minds is the breakdown and collapse of civilization
Which could totally happen
(http://www.autoadmit.com/thread.php?thread_id=5787428&forum_id=2#49357795)
Date: October 18th, 2025 12:36 PM Author: comical ivory range
that's AGI though
make users insanely retarded
everyone uses AI to try not to be retarded
agents do basic tasks that humans can no longer do
(http://www.autoadmit.com/thread.php?thread_id=5787428&forum_id=2#49357811)
Date: April 20th, 2026 1:40 AM Author: kike panopticon
it's a bit like the breathless overselling that went on during the mid-20th century regarding technology, space travel, et cetera.
at the time, it felt like things were moving so rapidly that SURELY our computers and rockets would soon morph into robotic butlers and flying cars a la The Jetsons!
people deadass believed that the Moon 'landing' was an obvious auguring of interplanetary space travel and 'colonization' in the near-term. even in the 80s, they were still making sci-fi movies set in 'THE YEAR 1999' where people had flying cars and lived on Mars, like this was 100% plausible futurecasting.
the AI thing will fizzle, just like all tech hype has. we'll get all the low-hanging fruit benefits of AI, while 'AGI' talk quietly fades away.
(http://www.autoadmit.com/thread.php?thread_id=5787428&forum_id=2#49828620)
Date: April 20th, 2026 2:42 AM
Author: .,.,...,..,.,.,:,,:,...,:::,...,:,.,.:..:.
This is generally a reasonable criticism of technological hype, but the arguments for near-term AGI are more complicated than "we made a lot of progress quickly, therefore we are near the end."

The success in modeling multiple different modalities using a highly generic architecture, with minimal implementation differences, tells you something meaningful about intelligence. Gradient descent with transformers works across multiple domains because it's a tractable approximation of Solomonoff induction. Small generalizing circuits make up more of the weight space and are easier for gradient descent to stumble on. It's basically just a dumb circuit search process that tends to find programs, from data, that are likely to generalize. This becomes more true the more data you train on, with more epochs, heavier regularization, and more parameters.

As compute budgets increase and training techniques become more efficient, this process converges closer and closer to optimal prediction of whatever modality you are training on. Even if we run out of ideas for how to improve the training techniques, the parametric circuit search can be increasingly applied to the learning algorithms themselves. There's really no plausible obstacle that could stop this from happening relatively soon. Manifesting intelligence is no longer contingent on brilliant insights or ideas but on FLOPS and engineering.
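The regularization point above can be caricatured in a few lines of numpy (a toy of my own construction, not anything from the post): gradient descent with weight decay is biased toward low-norm, simpler fits, and those tend to generalize better than the unregularized fit that chases the noise.

```python
import numpy as np

# Toy sketch: fit a degree-9 polynomial to 20 noisy samples of a simple
# function, using plain full-batch gradient descent with and without
# L2 weight decay. The decayed run is pushed toward a lower-norm,
# "simpler" solution. All names and hyperparameters here are arbitrary.

rng = np.random.default_rng(0)
x_train = rng.uniform(-1.0, 1.0, 20)
y_train = np.sin(np.pi * x_train) + 0.1 * rng.standard_normal(20)
x_test = np.linspace(-1.0, 1.0, 200)
y_test = np.sin(np.pi * x_test)

def features(x, degree=9):
    # Polynomial feature map: [1, x, x^2, ..., x^degree]
    return np.vander(x, degree + 1, increasing=True)

def fit(X, y, weight_decay, lr=0.05, steps=20000):
    # Gradient descent on mean squared error plus an L2 penalty.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y) + weight_decay * w
        w -= lr * grad
    return w

X_train, X_test = features(x_train), features(x_test)
w_free = fit(X_train, y_train, weight_decay=0.0)   # unregularized
w_reg = fit(X_train, y_train, weight_decay=0.1)    # heavy weight decay

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

print("free :  train %.4f  test %.4f  |w| %.2f" %
      (mse(w_free, X_train, y_train), mse(w_free, X_test, y_test),
       np.linalg.norm(w_free)))
print("decay:  train %.4f  test %.4f  |w| %.2f" %
      (mse(w_reg, X_train, y_train), mse(w_reg, X_test, y_test),
       np.linalg.norm(w_reg)))
```

The unregularized run drives training error lower by spending weight norm on the noise; weight decay trades a little training error for a solution with a much smaller norm. It's a cartoon of the claim, not evidence for it: the post's argument is about circuit search over programs, and this is just ridge-style shrinkage on polynomial coefficients.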
(http://www.autoadmit.com/thread.php?thread_id=5787428&forum_id=2#49828660)