  The most prestigious law school admissions discussion board in the world.

I can do shit with Qwen3.5 that not even ChatGPT can do


Date: February 26th, 2026 11:48 PM
Author: Jared Baumeister

I checked with Claude and asked if this was possible

https://i.imgur.com/yAOtRAk.png

(http://www.autoadmit.com/thread.php?thread_id=5838842&forum_id=2#49698763)




Date: February 27th, 2026 12:01 AM
Author: TurboGrafx-67

None of those GPT models exist anymore

(http://www.autoadmit.com/thread.php?thread_id=5838842&forum_id=2#49698781)




Date: February 27th, 2026 12:11 AM
Author: Jared Baumeister

Whatever man, I'm pretty sure OpenAI won't just spot you 28 GB of VRAM on demand
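For scale, here is a back-of-envelope sketch of where a VRAM figure in that range could come from. The parameter count (32B), quantization width (6-bit), and 20% overhead factor are all invented for illustration, not specs for any actual Qwen build:

```python
# Back-of-envelope VRAM estimate for hosting a model locally.
# All numbers (parameter count, quantization width, overhead) are
# illustrative assumptions, not specs for any actual Qwen release.

def vram_gb(params_b: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Approximate VRAM in GB: quantized weights plus ~20% runtime overhead."""
    weight_bytes = params_b * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A hypothetical 32B-parameter model at 6-bit quantization:
print(round(vram_gb(32, 6), 1))  # 28.8
```

Under these assumptions the weights alone dominate; a long-context KV cache adds more on top.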

(http://www.autoadmit.com/thread.php?thread_id=5838842&forum_id=2#49698794)




Date: February 27th, 2026 12:39 AM
Author: TurboGrafx-67

Show me Qwen3.5 doing reasoning as well as GPT-4.5/o3 and I'll be impressed

Anybody can buy higher context limits with a local LLM. It's a cost function, not a major technical hurdle like higher-level reasoning

(http://www.autoadmit.com/thread.php?thread_id=5838842&forum_id=2#49698827)




Date: February 27th, 2026 1:16 AM
Author: Jared Baumeister

I'm not saying Qwen wins at everything. And even if it's a question of resource constraints, it's still true that ChatGPT won't give me the same context window size. I don't even know what the upper limit is on Qwen. Pretty sure I can make it 3-4x what it is now without losing any quality. DeepSeek 4 is supposed to have a 1M-token context window
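The "cost function" framing can be made concrete with KV-cache arithmetic: cache memory grows linearly with context length, so 3-4x the context is roughly 3-4x the memory bill. The layer/head/dimension numbers below are made-up placeholders, not Qwen's actual architecture:

```python
# KV-cache size vs. context length for a hypothetical decoder model.
# n_layers / n_kv_heads / head_dim / fp16 values are placeholder assumptions.

def kv_cache_gb(ctx_len: int, n_layers: int = 48, n_kv_heads: int = 8,
                head_dim: int = 128, bytes_per_val: int = 2) -> float:
    # 2x for keys + values, per layer, per KV head, per token
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_val / 1e9

print(round(kv_cache_gb(32_768), 2))   # 6.44
print(round(kv_cache_gb(131_072), 2))  # 25.77 -- 4x the context, 4x the memory
```

Quadrupling the window just quadruples this line item, i.e. a hardware cost rather than a new capability.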

(http://www.autoadmit.com/thread.php?thread_id=5838842&forum_id=2#49698850)




Date: February 27th, 2026 12:07 AM
Author: Lab Diamond Dallas Trump

You're assuming the tokens are equal among models. They aren't
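The point is easy to see with a toy greedy longest-match tokenizer: the same string segmented against two invented vocabularies yields different token counts, which is exactly how real BPE vocabularies diverge across model families. Both vocabularies here are fabricated for illustration:

```python
# Toy demonstration that "a token" is tokenizer-specific: the same text
# splits into different numbers of tokens under different vocabularies.
# Both vocabularies below are invented for illustration only.

def greedy_tokenize(text: str, vocab: set[str]) -> list[str]:
    tokens, i = [], 0
    while i < len(text):
        # Take the longest vocabulary entry matching at position i...
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            # ...falling back to a single character if nothing matches.
            tokens.append(text[i])
            i += 1
    return tokens

text = "context window"
vocab_a = {"context", " ", "win", "dow"}
vocab_b = {"con", "text", " win", "d", "ow"}
print(len(greedy_tokenize(text, vocab_a)))  # 4
print(len(greedy_tokenize(text, vocab_b)))  # 5
```

So a "1M-token" window under one tokenizer does not hold the same amount of text as 1M tokens under another.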

(http://www.autoadmit.com/thread.php?thread_id=5838842&forum_id=2#49698790)




Date: February 27th, 2026 12:09 AM
Author: Jared Baumeister

I didn't say shit about tokens, Claude did.

(http://www.autoadmit.com/thread.php?thread_id=5838842&forum_id=2#49698791)