
8/5/25 AI thread




Date: August 5th, 2025 10:48 AM
Author: philosophy 101 weed discussions

interesting comparison of chatgpt 5's and deepseek r1's reasoning outputs. the new chatgpt 5's reasoning appears much crisper, more concise, and more human-like. tbh it reads a lot like a human taking notes. it should also be a lot more cost efficient

https://x.com/jxmnop/status/1952375903658410336
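
to make the cost point concrete, here's a back-of-the-envelope sketch in python. every number in it (the per-token price, the trace lengths) is made up purely for illustration:

```python
# Hypothetical math: at the same per-token price, a terser reasoning trace
# makes a proportionally cheaper call. All numbers below are invented.
PRICE_PER_1M_OUTPUT_TOKENS = 10.00   # assumed $/1M output tokens

verbose_trace_tokens = 4_000         # assumed long, rambling CoT trace
terse_trace_tokens = 1_000           # assumed note-style trace

def cost(tokens: int) -> float:
    return tokens / 1_000_000 * PRICE_PER_1M_OUTPUT_TOKENS

print(f"verbose trace: ${cost(verbose_trace_tokens):.4f} per call")
print(f"terse trace:   ${cost(terse_trace_tokens):.4f} per call")
# 4x fewer reasoning tokens -> 4x cheaper, before any per-model price gap.
```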

local LLMs are the future

https://x.com/iotcoi/status/1952263680273289337
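
if you want to kick the tires on this, here's a minimal local-inference sketch using hugging face transformers. the model name is just an example of a small open-weights model, not anything from the linked post:

```python
# Minimal local text generation with Hugging Face transformers.
# Qwen/Qwen2.5-0.5B-Instruct is only an example small model that runs on CPU;
# swap in whatever open-weights model your hardware can hold.
from transformers import pipeline

pipe = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

messages = [
    {"role": "user", "content": "In one sentence, why run an LLM locally?"}
]
out = pipe(messages, max_new_tokens=64)

# The pipeline returns the whole chat; the assistant's reply is the last turn.
print(out[0]["generated_text"][-1]["content"])
```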

new hierarchical reasoning model in development

https://x.com/omarsar0/status/1951751651729060081

(http://www.autoadmit.com/thread.php?thread_id=5758545&forum_id=2#49158126)




Date: August 5th, 2025 12:20 PM
Author: WLMAS, btw (🧐)

can't you basically not trust any model's "reasoning" output, though? that is, the "thinking out loud" part is itself only exposed in whatever form the model was trained to produce, not some readout of the raw inner workings of the llm?

(http://www.autoadmit.com/thread.php?thread_id=5758545&forum_id=2#49158472)




Date: August 5th, 2025 12:41 PM
Author: philosophy 101 weed discussions

my understanding is that the newest models' CoT reasoning traces actually track the model's internal reasoning process pretty closely

after reading the thread in more detail, though, i think what's being shown is not chatgpt 5's reasoning trace. it's the actual final answer it gave. so in reality this post doesn't say anything about changes/updates to this model's reasoning capabilities

(http://www.autoadmit.com/thread.php?thread_id=5758545&forum_id=2#49158536)




Date: August 5th, 2025 11:07 AM
Author: philosophy 101 weed discussions

Introducing Genie 3, the most advanced world simulator ever created, enabled by numerous research breakthroughs. 🤯

Featuring high fidelity visuals, 20-24 fps, prompting on the go, world memory, and more.

https://x.com/OfficialLoganK/status/1952732206176112915

(http://www.autoadmit.com/thread.php?thread_id=5758545&forum_id=2#49158202)




Date: August 5th, 2025 11:09 AM
Author: philosophy 101 weed discussions

https://www.youtube.com/watch?v=ysPbXH0LpIE

https://www.youtube.com/watch?v=XSZP9GhhuAc

these are actually really good videos on modern prompting methods and structure

some very useful tips here for everyone no matter what you use AI for
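
for a flavor of what "structure" means here, this is one common template convention (role / context / task / constraints / output format). the section names are a popular pattern, not something pulled from the videos:

```python
# One widely used structured-prompt layout. Nothing here is specific to any
# particular model; it just keeps instructions organized and easy to edit.
def build_prompt(role: str, context: str, task: str,
                 constraints: str, output_format: str) -> str:
    return "\n\n".join([
        f"# Role\n{role}",
        f"# Context\n{context}",
        f"# Task\n{task}",
        f"# Constraints\n{constraints}",
        f"# Output format\n{output_format}",
    ])

prompt = build_prompt(
    role="You are a paralegal summarizing case law.",
    context="The reader is a 1L preparing for a moot court exercise.",
    task="Summarize the holding of the attached opinion in plain English.",
    constraints="Max 150 words. No legal advice. Cite a page for each claim.",
    output_format="Three bullet points, then a one-sentence takeaway.",
)
print(prompt)
```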

(http://www.autoadmit.com/thread.php?thread_id=5758545&forum_id=2#49158207)




Date: August 5th, 2025 11:39 AM
Author: ,.,.,.,,,.,,.,..,.,.,.,.,,.


the hierarchical reasoning paper is interesting and appeared to be the likely direction to go in. chain of thought is a terrible way to get iterative-depth computation out of a transformer. recurrent circuits that compute for however long the problem requires are much more like the brain, and are more likely to produce generalization benefits than chain of thought with a verifier (which will only work in the domains you are verifying for).
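
a toy pytorch sketch of that idea: one weight-tied block applied for a variable number of steps, so a hard input can get more latent computation without emitting a single extra token. this is my loose illustration of the concept, not the paper's actual architecture:

```python
# Iterative depth via recurrence: the same weights are applied repeatedly,
# so compute scales with step count instead of parameter count or CoT tokens.
import torch
import torch.nn as nn

class RecurrentDepthBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.cell = nn.GRUCell(dim, dim)  # one circuit, reused every step

    def forward(self, x: torch.Tensor, steps: int) -> torch.Tensor:
        h = torch.zeros_like(x)
        for _ in range(steps):            # more steps = deeper computation
            h = self.cell(x, h)
        return h

block = RecurrentDepthBlock(dim=64)
x = torch.randn(8, 64)
easy = block(x, steps=2)    # shallow pass for an easy input
hard = block(x, steps=16)   # deeper pass for a hard one, zero extra params
```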

(http://www.autoadmit.com/thread.php?thread_id=5758545&forum_id=2#49158304)




Date: August 5th, 2025 11:40 AM
Author: philosophy 101 weed discussions

https://xoxohth.com/thread.php?thread_id=5757240&mc=14&forum_id=2#49151205

what are your thoughts on these "moral orientation" "personas," and what exactly do you think causes them?

(http://www.autoadmit.com/thread.php?thread_id=5758545&forum_id=2#49158313)




Date: August 5th, 2025 12:15 PM
Author: ,.,.,.,,,.,,.,..,.,.,.,.,,.


it seems like the models try to construct a consistent character to respond to a prompt. they are guessing what the best character for a particular prompt is (which can be many things since they are trained on the entire web), and sometimes it isn't appropriate. this doesn't seem surprising and is consistent with other LLM behavior.

(http://www.autoadmit.com/thread.php?thread_id=5758545&forum_id=2#49158455)




Date: August 5th, 2025 12:35 PM
Author: philosophy 101 weed discussions

so you think it's incorrect problem-solving techniques being associated with "evil" persona traits in the training data? (that seems to be the explanatory mechanism behind what you're saying, imo; correct me if i'm wrong)

that is apparently the leading hypothesis for this, and it's reasonable enough. but it just doesn't seem convincing to me. is there *really* that strong a correlation between these things in the training data? it just doesn't pass the smell test imo

(http://www.autoadmit.com/thread.php?thread_id=5758545&forum_id=2#49158521)