8/5/25 AI thread
Date: August 5th, 2025 11:07 AM Author: philosophy 101 weed discussions
Introducing Genie 3, the most advanced world simulator ever created, enabled by numerous research breakthroughs. 🤯
Featuring high fidelity visuals, 20-24 fps, prompting on the go, world memory, and more.
https://x.com/OfficialLoganK/status/1952732206176112915
Date: August 5th, 2025 11:39 AM
Author: ,.,.,.,,,.,,.,..,.,.,.,.,,.
the hierarchical reasoning paper is interesting and looked like the likely direction to go in. chain of thought is a terrible way to get iterative-depth computation out of a transformer. recurrent circuits that compute for as long as the problem requires are much more like the brain, and are more likely to produce generalization benefits than chain of thought with a verifier (which will only work in the domains you are verifying for).
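a minimal sketch of the idea, not the paper's actual architecture: one small recurrent block applied repeatedly to the same state, with a learned halting signal deciding how many refinement steps each input gets (in the spirit of adaptive computation time). module names and hyperparameters below are illustrative.

```python
import torch
import torch.nn as nn

class RecurrentReasoner(nn.Module):
    """Iterative-depth computation: a shared recurrent circuit applied
    repeatedly, with a learned halting probability deciding when to stop.
    Illustrative sketch only; not the hierarchical reasoning paper's model."""

    def __init__(self, dim: int, max_steps: int = 16, halt_threshold: float = 0.99):
        super().__init__()
        self.cell = nn.GRUCell(dim, dim)   # shared recurrent circuit
        self.halt = nn.Linear(dim, 1)      # per-step halting signal
        self.max_steps = max_steps
        self.halt_threshold = halt_threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim) encoded problem state
        h = torch.zeros_like(x)
        cum_halt = torch.zeros(x.size(0), device=x.device)
        out = torch.zeros_like(x)
        for _ in range(self.max_steps):
            h = self.cell(x, h)                          # one refinement step
            p = torch.sigmoid(self.halt(h)).squeeze(-1)  # probability of stopping now
            still_running = (cum_halt < self.halt_threshold).float()
            out = out + (still_running * p).unsqueeze(-1) * h  # halting-weighted mix of states
            cum_halt = cum_halt + still_running * p
            if bool((cum_halt >= self.halt_threshold).all()):
                break                                    # every example in the batch has halted
        return out

if __name__ == "__main__":
    model = RecurrentReasoner(dim=64)
    x = torch.randn(8, 64)
    print(model(x).shape)  # torch.Size([8, 64])
```

the point of the sketch: compute depth is spent inside the network per input, rather than externalized into generated chain-of-thought tokens that then need a domain-specific verifier.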
Date: August 5th, 2025 12:15 PM
Author: ,.,.,.,,,.,,.,..,.,.,.,.,,.
it seems like the models try to construct a consistent character to respond to a prompt. they are guessing what the best character for a particular prompt is (which can be many things, since they are trained on the entire web), and sometimes the character they settle on isn't appropriate. this doesn't seem surprising and is consistent with other LLM behavior.