(...) Using brain analogies (while acknowledging their imperfection), he suggests the transformer architecture resembles “cortical tissue”: extremely plastic and general-purpose, trainable on any modality (audio, video, text), much as biological cortex can be rewired across sensory domains. Reasoning traces in thinking models might correspond to prefrontal cortex function, and fine-tuning with reinforcement learning plays a role analogous to basal ganglia-like structures. However, numerous brain regions have no clear analog in current systems: the hippocampus (critical for memory consolidation), the amygdala (emotions and instincts), and various evolutionarily ancient nuclei. Some structures, such as the cerebellum, may be cognitively irrelevant and safe to skip, but many components simply remain unimplemented. From an engineering perspective, the simple test is: “You’re not going to hire this thing as an intern.” The models exhibit cognitive deficits that every user intuitively senses during interaction, indicating the system is fundamentally incomplete.
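
To make the “cortical tissue” point above concrete, here is a minimal PyTorch sketch (not from the talk; all dimensions and adapter choices are illustrative assumptions): the same stack of transformer layers processes text, audio, or image inputs, with only a thin per-modality embedding adapter in front, mirroring how one general-purpose cortical sheet can be wired to different senses.

```python
# Minimal sketch: one modality-agnostic transformer "sheet".
# All sizes (D_MODEL, vocab, patch shape) are arbitrary illustrative choices.
import torch
import torch.nn as nn

D_MODEL = 256  # shared embedding width (assumption)

# A generic, general-purpose stack of transformer layers.
layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=8, batch_first=True)
cortex = nn.TransformerEncoder(layer, num_layers=4)

# Only the input adapters differ per modality.
text_embed  = nn.Embedding(50_000, D_MODEL)    # token ids -> vectors
audio_embed = nn.Linear(80, D_MODEL)           # e.g. 80-bin mel spectrogram frames
image_embed = nn.Linear(16 * 16 * 3, D_MODEL)  # flattened 16x16 RGB patches

# The identical "tissue" handles every modality once inputs share the space.
text  = cortex(text_embed(torch.randint(0, 50_000, (1, 128))))  # (1, 128, 256)
audio = cortex(audio_embed(torch.randn(1, 300, 80)))            # (1, 300, 256)
image = cortex(image_embed(torch.randn(1, 196, 16 * 16 * 3)))   # (1, 196, 256)
```

The design choice the sketch highlights is that modality specificity lives entirely in the cheap input adapters, while the expensive, trainable core is uniform, which is what makes the rewiring analogy to biological cortex apt.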