5/29/25 AI thread
Date: May 29th, 2025 12:42 PM Author: Cracking Obsidian Doctorate
https://x.com/MoonL88537/status/1927927988399575070
They don't have world-models. They can't have world-models.
Even their explanations of their own thought processes are flame.
LLMs are not getting us to "AGI," whatever that ends up being. To get there, they would have to be able to build their own real-world, empirical world-models from scratch.
Date: May 29th, 2025 4:10 PM Author: ,.,.,.,....,.,..,.,.,.
SGD with transformers isn't actually doing minimum-description-length learning. For inputs that are consistently structured in the same way, it will learn the underlying program that generalizes to other samples. Certain types of verbal reasoning or inference that consistently appear in "standard" forms are handled well by the models. But when these models fit data, they can learn a mixture of things that is contextually dependent in a way that isn't desirable and produces imperfect generalization. They will not necessarily learn how to combine different programs in a way that generalizes to new samples. SGD with weight normalization can be viewed as a learning process that produces substantial but imperfect generalization. Current LLMs largely bypass this problem by training on everything, but that approach will ultimately fail.
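One way to see the "substantial but imperfect generalization" point in miniature (my own toy sketch, not from the post): weight decay in SGD penalizes weight norm, which acts as a crude complexity penalty loosely in the spirit of minimum description length, but penalizing norm is not the same as preferring the shortest generating program, so nothing forces SGD to recover the sparse rule that actually produced the data.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
true_w = np.zeros(10)
true_w[0] = 2.0                                # sparse "program": y depends on one feature
y = X @ true_w + 0.1 * rng.normal(size=200)

def sgd_fit(X, y, weight_decay, lr=0.01, epochs=200):
    """Per-sample SGD on squared error, with optional L2 weight decay."""
    w = rng.normal(scale=0.5, size=X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            grad = (X[i] @ w - y[i]) * X[i] + weight_decay * w
            w -= lr * grad
    return w

w_plain = sgd_fit(X, y, weight_decay=0.0)
w_decay = sgd_fit(X, y, weight_decay=0.3)

# The decayed solution has a smaller norm (a "cheaper description"),
# but it shrinks all coordinates rather than isolating the true sparse rule.
print(np.linalg.norm(w_plain), np.linalg.norm(w_decay))
```

The point of the toy: the regularizer reliably shrinks the hypothesis, but "small norm" is only a stand-in for "simple program," which is one reason the resulting generalization is substantial yet imperfect.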
Note that I don’t think this means AGI is far away. I think SGD and backprop and transformers are all human designed components of the learning process that are very likely learnable or evolvable.
Date: May 29th, 2025 4:47 PM Author: ,.,.,.,....,.,..,.,.,.
I think you could have a transformer or something similar trained to take in training samples and then write weight updates directly to another network. It would essentially be trained to program the network actually used for predictions. The outer network would be trained such that, after a certain number of samples, the network it is training has the lowest possible generalization error; you move the outer network down a gradient based on that error. This sort of meta-learning is expensive but could likely produce powerful learning algorithms that don't do the stupid things our current ones do.
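A minimal sketch of the setup described above, under my own simplifying assumptions: the "outer" learner is reduced from a transformer to a two-parameter update rule (log learning rate, log weight decay), the inner network is a linear model, and the gradient step on generalization error is replaced by simple random search, an evolutionary stand-in. All names are hypothetical; this only illustrates the optimize-the-optimizer structure.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
w_true = rng.normal(size=d)

def make_split(n):
    X = rng.normal(size=(n, d))
    return X, X @ w_true + 0.1 * rng.normal(size=n)

X_tr, y_tr = make_split(50)
X_val, y_val = make_split(200)      # held-out set defines generalization error

def inner_train(theta, steps=50):
    """Train the inner model using the outer rule's parameters theta.
    theta = (log learning rate, log weight decay): a tiny learned 'program'."""
    lr, wd = np.exp(theta)
    w = np.zeros(d)
    for t in range(steps):
        i = t % len(y_tr)
        grad = (X_tr[i] @ w - y_tr[i]) * X_tr[i] + wd * w
        w -= lr * grad
    return w

def val_loss(theta):
    """Generalization error of the inner model the outer rule produces."""
    w = inner_train(theta)
    return np.mean((X_val @ w - y_val) ** 2)

# Meta-optimization: hill-climb the outer rule on held-out error,
# accepting a candidate only if it generalizes better.
theta = np.array([-4.0, -4.0])      # deliberately poor starting rule
loss = val_loss(theta)
for _ in range(200):
    cand = theta + 0.3 * rng.normal(size=2)
    cand_loss = val_loss(cand)
    if cand_loss < loss:
        theta, loss = cand, cand_loss

print(loss)                          # no worse than the starting rule's error
```

The real proposal would backpropagate through the inner training loop (or learn the rule itself as a network), which is far more expensive; this sketch just shows the two nested loops and the fact that the meta-objective is generalization, not training, error.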
Date: May 29th, 2025 2:23 PM Author: Bearded Wine Queen Of The Night
i am very much a layman in the realms of AI, tech, and even probably philosophy of mind.
but as naysayers and skeptics like you continue to voice their doubts, I'm watching shit happen that was fantasy as recently as covid.
these computers, whatever the fuck you call them, can now express a human personality and demonstrate most of the signs we use to assign the value of intelligence.
"It's mimicry!"
But what human behavior isn't?
"Its a fake kind of intelligence!"
Maybe maybe to the philosopher. Do you think 99% of human laypeople have discriminating enough minds to care?
And from Amazon Alexa to Grok, or whatever, the rapidity of the increase in intelligence has been breathtaking.
Even if that growth curve were to slow down, it suggests that in a few years - five, certainly - these machines will produce all the verifiable "signs" humans use to gauge intelligence in other humans or animals.
Date: May 29th, 2025 1:25 PM Author: Cracking Obsidian Doctorate
In our recent interpretability research, we introduced a new method to trace the thoughts of a large language model. Today, we’re open-sourcing the method so that anyone can build on our research.
Our approach is to generate attribution graphs, which (partially) reveal the steps a model took internally to decide on a particular output. The open-source library we’re releasing supports the generation of attribution graphs on popular open-weights models—and a frontend hosted by Neuronpedia lets you explore the graphs interactively.
https://www.anthropic.com/research/open-source-circuit-tracing
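I haven't verified the released library's actual interface, so the following is only a generic toy illustration of the idea behind attribution graphs: decomposing a model's output into linear contributions from internal features, so you can see which ones drove a particular decision. Every name below is hypothetical and this is not the Anthropic method itself.

```python
import numpy as np

rng = np.random.default_rng(2)
W1 = rng.normal(size=(8, 4))     # toy 2-layer net: 4 inputs, 8 hidden, 3 outputs
W2 = rng.normal(size=(3, 8))

x = rng.normal(size=4)
h = np.maximum(W1 @ x, 0.0)      # ReLU hidden features
logits = W2 @ h

# Direct-effect attribution: each hidden feature's linear contribution to a
# chosen output logit. Edges like these (feature -> output) are the kind of
# thing an attribution graph organizes across many layers.
target = int(np.argmax(logits))
contrib = W2[target] * h         # per-feature contribution to logits[target]

# The decomposition is exact for the final linear layer:
assert np.isclose(contrib.sum(), logits[target])
print(np.argsort(-np.abs(contrib))[:3])  # indices of the 3 most influential features
```

A real attribution graph does this recursively through a trained model (with replacement features, not raw neurons), which is where the "partially reveal" caveat in the announcement comes from.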
this shit seems like total flame
i'm playing with it right now and it doesn't seem to work
Date: May 29th, 2025 2:13 PM Author: Bateful well-lubricated scourge upon the earth stage
this guy is interesting and talks a lot about AI
https://x.com/signulll
Date: May 29th, 2025 2:32 PM Author: Cracking Obsidian Doctorate
https://x.com/dystopiangf/status/1928156746989633625
ℜ𝔞𝔢
@dystopiangf
This week’s Totally Normal Teenage Trends™️:
- Spoke to a researcher at a character AI company. They surveyed high schools & found that a majority of students have friends who are “dating” character AIs
- Teens are identifying as “solosexual,” i.e. they only have “sex” alone