Friday, September 26, 2025

How AI Could be Used to Reconstruct Ancient Brains from Their Endocasts

 

I have been wondering recently about the brains of extinct animals. Unfortunately, the soft tissues of these brains almost always decay completely without any traces of fossilization. However, we do, of course, have the interior of the skull for many extinct animals, from dinosaurs to the first mammals to our ancient ancestors. In fact, the inside of the skull is commonly preserved in fossils, and its geometry can give researchers modest clues about its previous contents.

 

Scientists can use the hollowed-out brain cavity of the skull to make inferences about the brains of long-gone species such as Tyrannosaurus rex and Homo erectus. The shape of this cavity can tell us a lot about the T. rex brain, especially when we compare it to that of reptiles and birds. It tells us that Tyrannosaurus had one of the largest brains of all the dinosaurs, and that it had large areas devoted to smell and sight. There are numerous informative details that can be gleaned from brain cases by trained paleontologists. But I suspect that many more details could be uncovered using machine learning.

In this essay, we will discuss how information could be squeezed out of brain endocasts. An endocast is a three-dimensional model or mold of the internal space of the cranial cavity of a skull. These casts are used in paleontology and paleoneurology to study the size, shape, and organization of the central nervous system of extinct animals. Specifically, we will discuss how we might take an endocast, say from an australopithecine, and make reliable guesses about its brain structure using AI. Of course, the interior of the skull is mostly smooth and thus the endocast does not contain gross brain anatomy, but I’m going to assume that it holds lots of hidden information that is invisible to humans, but that AI could discern.

A collage of different types of skulls


The type of AI we would use would be a neural network like a 3D transformer model or 3D CNN. Such an AI system would be trained to look at an endocast and then predict what the corresponding brain should look like (generating plausible probabilistic brain anatomy). I believe that we can employ currently accessible (or obtainable) data to train such an AI system to do this. During training, we would have to show the system many matched pairs. For instance, one pair would be your brain and your endocast. Then we could use mine, and then many hundreds more. The best way to do this might be to use a CT scan of the inside of the skull as a proxy for the endocast and then we could pair this with the MRI of the same brain. Perhaps 500 to 2,000 CT/MRI pairs would be needed.

We would have to give the AI hundreds of examples of these matched pairs to sufficiently train it to use the bone to predict the shape and form of the brain itself. Fortunately, much of this data has already been collected and exists in medical imaging datasets. Then we would collect similar data from apes and monkeys. Training on both humans and primates is crucial because it lets the system interpolate across evolutionary neighbors rather than just extrapolate. After training on this data, the system would be optimized to accept the endocast of a prehistoric human ancestor and produce a statistical 3D estimate of its brain. Cross-validation would come from leaving one species out of training and testing whether the model can predict its brain anatomy from the endocast alone. If it can reconstruct an orangutan brain without ever seeing one during training, for example, then it has captured a genuinely generalizable mapping.
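To make the leave-one-species-out idea concrete, here is a minimal numerical sketch. Everything here is a synthetic stand-in (random vectors in place of real encoded CT/MRI scan features, a ridge-regularized linear map in place of a deep network): fit a predictor on all species but one, then check whether it predicts the held-out species' brain features.

```python
# Toy leave-one-species-out cross-validation for an endocast -> brain predictor.
# All data and dimensions are illustrative stand-ins, not real scan features.
import numpy as np

rng = np.random.default_rng(0)

# 4 "species", 30 specimens each, 16-dim endocast features, 8-dim brain
# features, all linked by one shared underlying mapping plus noise.
species = ["human", "chimp", "gorilla", "orangutan"]
true_map = rng.normal(size=(16, 8))
data = {}
for s in species:
    X = rng.normal(size=(30, 16))
    data[s] = (X, X @ true_map + 0.05 * rng.normal(size=(30, 8)))

def fit_least_squares(X, Y):
    # Ridge-regularized least squares as a minimal stand-in for the network.
    return np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ Y)

errors = {}
for held_out in species:
    train_X = np.vstack([data[s][0] for s in species if s != held_out])
    train_Y = np.vstack([data[s][1] for s in species if s != held_out])
    W = fit_least_squares(train_X, train_Y)
    test_X, test_Y = data[held_out]
    errors[held_out] = float(np.mean((test_X @ W - test_Y) ** 2))

# If the mapping generalizes, held-out error stays near the noise floor.
print(errors)
```

If the model had merely memorized each species rather than capturing a shared skull-to-brain mapping, the held-out error would blow up instead of staying near the noise floor.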

This AI system would use cross-modal learning to analyze the relationships between two data types (bone geometry vs. brain anatomy). This would use a form of supervised learning where the matched pairs provide the system with a question (endocast) and the corresponding ground truth answer (brain anatomy) to check its predictions against. During training, when the AI system gets things wrong, we would weaken the weights responsible for making the mistake, and when it gets things right, we would strengthen them (backpropagation and gradient descent). Over time this feedback would optimize its ability to predict the brain by learning how the geometric features of the interior of skulls map onto brain anatomy. It would embed both skull interiors and brain anatomy in a shared mathematical space, learning the probabilistic mappings necessary to go from one to the other. It is important to remember that when the AI generates a brain, this wouldn’t represent a single “true” brain but a plausible reconstruction. Doing this many times would result in a range of possibilities that most likely capture many facets of the true brain. It could even do so while quantifying the uncertainty with confidence intervals.
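The training feedback described above can be sketched in miniature. This toy uses synthetic data and a single linear layer in place of a deep 3D network, but it shows the core loop: predict, compare against ground truth, and adjust the weights along the gradient of the error.

```python
# Minimal sketch of the supervised update described above: predict brain
# features from endocast features, measure the error against ground truth,
# and nudge the weights down the gradient of that error.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 12))      # encoded endocasts (inputs)
true_W = rng.normal(size=(12, 6))
Y = X @ true_W                      # matched brain encodings (targets)

W = np.zeros((12, 6))               # model weights, initially untrained
lr = 0.05
losses = []
for step in range(300):
    pred = X @ W                    # forward pass: guess the brain
    err = pred - Y                  # how wrong was each guess?
    losses.append(float(np.mean(err ** 2)))
    grad = 2 * X.T @ err / len(X)   # gradient of the mean squared error
    W -= lr * grad                  # strengthen/weaken the responsible weights

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.6f}")
```

The loss falls steadily as the weights absorb the skull-to-brain correlations; in the real system the same feedback signal would flow backward through many layers of a 3D network.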

 

 

A diagram of the human brain


 

 

To be analyzed by the computer, the endocast would have to be turned into numbers using morphometric analysis (i.e., a 3D grid, point cloud, or mesh). Basically, the geometry of each inner skull would have to be mapped, parameterized, and encoded into a long, standardized vector of numbers that can be read into a neural network and compared with other such vectors. This digitization process would also have to be applied to the brains. The brain and endocast representations would then be trained to live in the same latent space via contrastive alignment.
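A minimal sketch of this encoding step, assuming a raw point cloud as input: center it, normalize its size, and summarize it as a fixed-length vector. Real morphometric pipelines use far richer descriptors (landmarks, spherical harmonics, mesh features); this is just the simplest illustration of "geometry in, standardized vector out."

```python
# Illustrative encoding of an endocast point cloud into a fixed-length,
# standardized feature vector that a network can ingest and compare.
import numpy as np

def encode_point_cloud(points, n_bins=16):
    pts = np.asarray(points, dtype=float)
    pts = pts - pts.mean(axis=0)                  # center at the centroid
    scale = np.sqrt((pts ** 2).sum(axis=1)).mean()
    pts = pts / scale                             # normalize overall size
    radii = np.sqrt((pts ** 2).sum(axis=1))
    hist, _ = np.histogram(radii, bins=n_bins, range=(0.0, 2.0))
    return hist / len(pts)                        # shape-only descriptor

rng = np.random.default_rng(2)
sphere = rng.normal(size=(1000, 3))
sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)  # unit-sphere "skull"
vec = encode_point_cloud(sphere)
print(vec.shape, round(float(vec.sum()), 3))
```

Because every endocast is centered and scaled the same way before being histogrammed, two skulls of very different sizes become directly comparable vectors, which is the point of the standardization.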

We could train this system on humans and other primates so that we could ask it to triangulate toward hominins such as Neanderthals, Homo erectus, and Australopithecines. This technique would work for any species where we have a fossil skull interior and living relatives. However, the more ancient the animal, the less fidelity the system will have. Trying to similarly interpolate the brain of a Tyrannosaurus rex based on data from reptiles and birds would be much more difficult. This is because the evolutionary distances involve hundreds of millions of years rather than just a couple million years in the case of our prehistoric ancestors. Nevertheless, using this technique on animals that have been gone for eons could still produce meaningful results.

The AI would use an encoder–decoder architecture, where an encoder would turn the skull into a latent code, and a decoder would turn the code into a predicted brain. This prediction would come from a conditional generative or probabilistic model. The trained model would use transfer learning (generalizing knowledge about humans and apes to hominins) to accomplish allometric domain adaptation. Essentially, I am proposing a form of geometric deep learning that uses morphometrics (the quantitative study of shape: volumes, curves, landmarks) to process non-Euclidean data such as meshes and point clouds. This system tries to build a latent space where skull/endocast features and brain features line up, creating correlational structure. It would not rely on a single topological feature, but rather integrate hundreds of subtle shape cues simultaneously (multimodal fusion) into nonlinear multivariate mappings. The system would embed both skulls and brains into a shared latent space, a kind of statistical arena where structural correspondences can be learned. Learning the brain/endocast correspondence essentially involves a geometry-to-geometry mapping.
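Schematically, the encoder–decoder looks like this. The weights here are random placeholders; in the real system they would be learned from matched CT/MRI pairs, and the encoder/decoder would be deep 3D networks rather than single layers.

```python
# Schematic of the encoder-decoder idea: the encoder compresses a skull
# representation into a small latent code; the decoder expands that code
# into a predicted brain representation. Weights are placeholders.
import numpy as np

rng = np.random.default_rng(3)
d_skull, d_latent, d_brain = 32, 8, 24

W_enc = rng.normal(size=(d_skull, d_latent)) * 0.1
W_dec = rng.normal(size=(d_latent, d_brain)) * 0.1

def encode(skull_vec):
    return np.tanh(skull_vec @ W_enc)   # latent code summarizing the skull

def decode(latent):
    return latent @ W_dec               # predicted brain representation

skull = rng.normal(size=d_skull)        # one encoded endocast
z = encode(skull)
brain_pred = decode(z)
print(z.shape, brain_pred.shape)
```

The bottleneck dimension (here 8) is what forces the model to distill the skull into the shared latent space: only shape cues that actually predict brain anatomy survive the compression.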

I think a properly constructed machine learning system will be able to extract copious information from the endocasts alone. I think it will be enough to make predictions about gross brain anatomy, and not just the brain’s surface (cortical mantle), especially if thousands of human and primate brains can be compared to their endocasts. Even though they look smooth, endocasts contain multiple overlapping latent signals. Humans have trouble integrating these diffuse features simultaneously, but a neural network can. This would allow it to learn non-obvious mappings (e.g., that a particular vault curvature pattern predicts relative cerebellar size).

The endocast may be most informative for reconstructing the outer cortex that lies a few millimeters underneath it. But what about the deeper brain structures? I believe that by using full brains and detailed endocasts, the endocasts themselves might be able to offer plenty of information about the entire brain, including white matter and subcortex. Endocasts reflect subcortical structures indirectly, and this means that with enough training data, a network can map endocast geometry not just to the outer cortical sheet, but to whole-brain region proportions. Given sufficient compute and precision, I think it is possible that endocasts could even be used to make detailed predictions about connectivity or even fine-grained microanatomy. This could move paleoneurology from an “interpretive art” into statistical prediction of whole-brain anatomy. Using not just the endocast, but the entire skull could contribute additional informative data. It may even be possible to squeeze information about the brain out of the full skeleton.

 

 

A Horizontal Section of the Skull’s Brain Case from the Top


A Cross (Sagittal) Section of the Detailed Surface of the Skull Showing the Brain Cavity



Let’s talk about the landmarks on the endocast that the AI would have available to learn from. As you can see from the pictures, there is a lot of detail, and keep in mind that this detail varies considerably from person to person. The folds of the cortex come in the form of gyri (ridges) and sulci (grooves), which can be visible in an endocast. But the level of detail depends heavily on the species, the individual, and the preservation quality. Smaller-brained primates, like macaques and australopithecines, tend to have more pronounced gyral and sulcal impressions on their endocasts than larger-brained hominins, including modern humans. In adult humans, the folds are barely visible, particularly on the upper part of the skull. The clarity of brain surface impressions on endocasts varies with age. Endocasts of developing brains in infants, for example, tend to show more detail than those of adults.

Despite limitations, paleoneurologists routinely use endocasts to study brain evolution in extinct species. They have successfully used the imprints of some major, consistently preserved sulci, such as the lunate sulcus, to track key evolutionary changes in hominin brain organization. However, the interpretation of these surface details remains a complex and sometimes subjective task, which is why using AI could be very helpful. Natural fossil endocasts, such as the famous Taung child (Australopithecus africanus), can have remarkably detailed surface features. For artificially or virtually created endocasts, the resolution of the imaging technique (e.g., CT scan) can dramatically influence the observable detail. New, high-resolution imaging and analysis techniques, though, are continuously improving the reliability of these analyses.

Meninges, blood vessels, and cerebrospinal fluid all exist between the brain and the bone above it and so they obscure cortical contact with bone. Nevertheless, consistent signals remain: endocranial volume, vault proportions, asymmetries, olfactory fossae, vascular channels, and gross lobe outlines. These are exactly the types of geometric data that machine learning excels at exploiting. The endocast can also give clues about the relative size and position of regions of the brain such as the frontal, temporal, parietal, and occipital lobes. However, the connection is weaker in the superior frontal and parietal areas.

There are a great number of measurable endocast traits, features, shapes, curves, and metrics, that can be extracted directly from the bony vault. These include:

  • Overall endocranial volume (ECV)
  • Vault shape and proportions (elongation, globularity)
  • Asymmetries (petalias)
  • Sulcal and gyral impressions (if present)
  • Vascular grooves
  • Cranial base angles and fossae
  • Foramen magnum orientation
  • Olfactory fossa and cribriform plate region
  • Canal openings
  • Cerebellar impressions
  • Brainstem/pons impressions
  • Relative lobe expansion (bulging and bossing, flattening and angulation)
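As a hedged illustration, a few of the traits above could be approximated directly from an endocast point cloud: a volume proxy, vault elongation, and a crude left-right asymmetry score standing in for petalias. Real measurements would use landmarked meshes and proper volume integration rather than these bounding-box shortcuts.

```python
# Crude sketch of extracting a few listed endocast traits from a point cloud:
# a bounding-box volume proxy, vault elongation, and a left-right asymmetry
# score. Real pipelines would use landmarked meshes and true volumes.
import numpy as np

def endocast_metrics(points):
    pts = np.asarray(points, dtype=float)
    pts = pts - pts.mean(axis=0)
    extents = pts.max(axis=0) - pts.min(axis=0)   # bounding box (L, W, H)
    volume_proxy = float(np.prod(extents))
    elongation = float(extents[0] / extents[1])   # length / width
    left = pts[pts[:, 1] < 0]
    right = pts[pts[:, 1] > 0]
    asymmetry = float(abs(len(left) - len(right)) / len(pts))
    return {"volume_proxy": volume_proxy,
            "elongation": elongation,
            "asymmetry": asymmetry}

rng = np.random.default_rng(4)
# Ellipsoidal toy "endocast": twice as long (x) as it is wide (y).
pts = rng.normal(size=(2000, 3)) * np.array([2.0, 1.0, 1.0])
m = endocast_metrics(pts)
print(m)
```

Each of these scalars is exactly the kind of feature that would become one entry in the standardized vector the network consumes, alongside hundreds of subtler cues.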

What could the results of a system like this do for science? For fossils like the Taung child, AI could sharpen our sense of which sulcal impressions are genuine. For Neanderthals, it could provide a statistical measure of parietal expansion. For dinosaurs, it might offer credible intervals for sensory specializations. It is worth mentioning that a machine learning model like the one discussed here could be used on much more recent skulls. It could even play a role in helping to model the brains of deceased humans. Having a recreation of a loved one’s brain could help make their AI counterpart or avatar more authentic. Realistically, this kind of thing should probably only be done with permission, but I wouldn’t mind, and in fact, I would like to grant permission to use my skull and brain for machine learning right here and now.

By turning these cavities into data-rich predictors, AI could breathe new life into the study of extinct cognition and allow us to glimpse the hidden architecture of minds long vanished. While the results will always be probabilistic and uncertain, they could bring new rigor to paleoneurology, transforming smooth stone endocasts into testable models of ancient cognition. The smooth surfaces of fossil skulls, long thought to be mute, may hold hidden records that only modern computation can translate. In doing so, we may begin to see not only the brains, but also the minds, from worlds long vanished.

Tuesday, September 9, 2025

Skull to Face: Using AI to Recreate the Lost Faces of Human Evolution

  

Teaching AI to See the Faces in Fossil Skulls

What did our extinct prehistoric ancestors look like? For more than a century, artists have tried to rebuild ancient human faces from skulls. They were forced to use their imagination and ingenuity to do it. Until recently, that meant the faces of Neanderthals, Australopithecines, Homo floresiensis, and earlier hominins were partly unanswerable questions. Modern AI changes the terms of that question.

The idea is simple. We can show an AI model thousands of paired examples where the input is a 3-D skull and the output is the real face that belonged to it. We start with humans (where matched medical scans exist), then add living apes and monkeys to teach the system how primate skull shape influences the soft tissue above it. Once the model learns those relationships, we can point it at fossil skulls and ask: “Given this bone structure, what faces are most consistent with what you’ve learned?” Even though cartilage and soft tissues don’t fossilize, the skull gives strong hints. Its bony architecture sharply constrains the overall head shape, including forehead slope, brow, zygomatic arches, dental arcade, mandible, nasal aperture, chin, and muscle attachment sites.


If we can feed enough of these skull/face pairs to a machine learning model, it will start to learn the regularities and turn them into a mathematical function. It will learn, for example, how the curve of the zygomatic arch relates to cheek contour, how the nasal aperture and spine relate to the bridge of the nose, and how jaw robustness relates to mouth shape. Widening the training set with non-human primates will keep the model from overfitting to modern humans. The apes and monkeys will also extend the range of skull shapes and soft tissues and provide a phylogenetic bridge so that the model only has to interpolate rather than extrapolate. This project should use a large, highly diverse sample of humans as well as several members of all of the great apes (chimps, bonobos, gorillas, orangutans), lesser apes (gibbons), and possibly monkeys as well. It might help to remove the hair from at least some of the primate 3D models so that the features that are normally covered by fur can be mapped.

This system would effectively be teaching the AI how to translate between skulls and faces (and remember translation is what the “transformer” AI architecture was originally developed to accomplish). So, the AI would learn a skull-to-face “vocabulary” from living species, and then use it to translate fossil skulls. It’s like a police sketch process, except now the skull is the witness.

There are many human artists who have created paleoart depictions of prehistoric human ancestors. Some of these paintings are exquisite and highly evocative. The human artists make their design decisions using the extremely sophisticated neural networks in their heads, but the job would be better accomplished by machines due to their computational advantages. Making a precise statistical prediction about a face from a reference skull involves attention to more features and patterns than humans are capable of keeping in mind at once. There are all kinds of hidden, latent correlations that the human mind does not notice, but that machine learning will pick up on very easily. This is why we should use AI.


Top Row: Vervet Monkey, Australopithecus, Lucy. Middle Row: Homo Habilis, Australopithecus Afarensis, Paranthropus Boisei, Australopithecus Africanus, Australopithecus Afarensis. Bottom Row: Owl Monkey, Cotton Top Tamarin, 


How Would This System Work?

At its core the machine learning system has two halves. The first part of this AI system is a "transformer," which is a type of neural network that pays attention to the geometric relationships in the 3D skull: how different parts of the bone relate to each other and how they shape the face. It would work as a geometric encoder that studies the skull as a true 3-D object rather than a photograph. We would feed it cleaned skull meshes (or dense point clouds) in a standard pose. This 3-D transformer would learn which bony regions co-vary, distilling the skull into a compact, numerical description.

This first system would then feed an image generation system that would create a prediction of the face it expects to lie on top of each skull. Thus, during training, it would try to generate (using an image generation technique called diffusion) a face for each skull, and it would get feedback based on the actual (ground truth) face. Over time, it would learn the “mapping” of skulls to faces and get really good at understanding which skull features correspond to which facial features.

The best fit here for the face generator may be a “conditional diffusion model” that would take the transformer’s latent description of the skull and sample many faces conditioned on it. Most state-of-the-art image generators today are from the “diffusion family.” They dominate because they’re stable to train, scale well, condition cleanly on text or other inputs, and naturally produce diverse samples rather than collapsing to a single look. Adobe Firefly, Midjourney, DALL·E 3, Stable Diffusion, Google’s Imagen line, Runway’s recent models, and many research systems are all diffusion/flow variants under the hood. For a single, simple stack, a diffusion network over a mesh/point representation with the skull encoder’s tokens plugged in via cross-attention is probably the clean, modern choice.
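The conditional-sampling idea can be caricatured in a few lines. Here the "denoiser" is a hand-written stub that pulls noise toward a mean determined by the skull code; a real diffusion model would be a trained network over images or meshes. But the shape of the loop, starting from noise and iteratively denoising under a condition, is the same.

```python
# Toy of conditional sampling: start from pure noise and repeatedly denoise
# toward faces consistent with the skull condition. The denoiser is a stub
# (an assumption for illustration), not a trained network.
import numpy as np

rng = np.random.default_rng(5)
d_face = 10

def denoise_step(x, skull_code, t, n_steps):
    # Stub: the conditional mean is a fixed function of the skull code.
    cond_mean = np.tanh(skull_code).mean() * np.ones(d_face)
    blend = 1.0 / (n_steps - t)        # denoise more aggressively near the end
    x = (1 - blend) * x + blend * cond_mean
    return x + 0.05 * rng.normal(size=d_face)  # keep a little noise for diversity

def sample_face(skull_code, n_steps=50):
    x = rng.normal(size=d_face)        # start from pure noise
    for t in range(n_steps):
        x = denoise_step(x, skull_code, t, n_steps)
    return x

skull_code = rng.normal(size=6)
faces = np.stack([sample_face(skull_code) for _ in range(20)])
# Many distinct faces, all clustered around what the skull implies.
print(float(faces.mean()), float(faces.std()))
```

Because fresh noise is injected at every step, each run yields a slightly different face, which is exactly the diverse-but-constrained ensemble behavior we want from the face generator.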

Hundreds of pairs might get this model off the ground, but it would need thousands of human and primate pairs to achieve real precision. Of course, we wouldn’t want to reinvent the wheel. So, we wouldn’t start entirely from scratch. Instead, we would use something known as transfer learning, where we start with an existing model that’s already been trained on a large amount of 3D shape or image data. These models have a good baseline understanding of general physical patterns, even those related to faces. Then we would fine-tune it with the skull-face pairs so it learns the specialized task of transforming skulls into faces. By leveraging an existing image generation or 3D model and then adapting it, we would save a lot of time and data.
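The transfer-learning setup can be sketched as follows: a "pretrained" feature extractor stays frozen (here it is just random weights standing in for a large 3D/image backbone), and only a small head is fine-tuned on the scarce skull-face pairs.

```python
# Sketch of transfer learning: freeze the pretrained backbone and fine-tune
# only a small task head on limited skull-face data. All data is synthetic.
import numpy as np

rng = np.random.default_rng(6)
W_backbone = rng.normal(size=(20, 16)) * 0.2  # "pretrained" extractor, frozen
true_head = rng.normal(size=(16, 5))

X = rng.normal(size=(150, 20))                # encoded skulls
features = np.tanh(X @ W_backbone)            # frozen backbone features
Y = features @ true_head                      # matched "faces" (synthetic)

init_loss = float(np.mean(Y ** 2))
W_head = np.zeros((16, 5))                    # only this small head is trained
for _ in range(2000):
    err = features @ W_head - Y
    W_head -= 0.1 * features.T @ err / len(X)

final_loss = float(np.mean((features @ W_head - Y) ** 2))
print(f"loss: {init_loss:.3f} -> {final_loss:.6f}")
```

Because only the small head is trained, far fewer paired examples are needed than if every weight in the backbone had to be learned from scratch, which is the whole appeal of transfer learning here.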

Once that mapping is learned, we can turn it toward the past. We can present a well-preserved Neanderthal cranium or the small skull of Homo floresiensis and ask the model to propose faces that are consistent with the bone. Some of the skulls coming from paleoanthropology departments are crushed. However, we already have algorithms that can digitally “uncollapse” a fossil that was crushed in the ground so that the AI model sees a truer shape before it begins to infer a face. The right output is not a single portrait. Bone to face is an underdetermined problem and many faces can fit the same skull. A responsible system would produce an ensemble of possibilities that agree where the skull is informative and vary where it is silent. The bony frame nails down head shape, brow, jawline and overall proportions. The details that live in cartilage and fat, such as the tip of the nose, the lips and the ears, carry wider uncertainty. A good system would make that uncertainty visible and apparent to users.
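Making the uncertainty "visible and apparent" could be as simple as reporting per-feature spread across the ensemble. In this sketch the per-feature noise levels are assumptions chosen to mimic bone-constrained versus soft-tissue features, and the sampler is a stand-in for the real face generator.

```python
# Sketch of uncertainty reporting: generate an ensemble of candidate
# reconstructions and flag features by agreement. Noise levels and the
# sampler are illustrative assumptions, not measured values.
import numpy as np

rng = np.random.default_rng(7)
feature_names = ["brow", "jawline", "head_shape", "nose_tip", "lips", "ears"]
bone_driven = np.array([True, True, True, False, False, False])

def sample_reconstruction(skull_mean):
    # Bone-driven features vary little across samples; soft-tissue features vary a lot.
    noise = np.where(bone_driven, 0.02, 0.5)
    return skull_mean + noise * rng.normal(size=len(skull_mean))

skull_mean = rng.normal(size=6)            # what the bone implies on average
ensemble = np.stack([sample_reconstruction(skull_mean) for _ in range(200)])
spread = ensemble.std(axis=0)

for name, s in zip(feature_names, spread):
    label = "confident" if s < 0.1 else "uncertain"
    print(f"{name:>10}: spread={s:.3f} ({label})")
```

A museum display could color-code exactly this kind of per-feature spread, so viewers see at a glance where the skull speaks strongly and where it is silent.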

Next, we would want to show that transforming skulls to faces works on humans with measurable accuracy and well-calibrated uncertainty. Then we would want to show that this generalizes to other primates by holding out a species (such as orangutans) from training, and then testing the model’s ability to recreate that species’ face after training. Only then would we present fossils. With a team of researchers, or with a sufficiently advanced AI agent, the fossil could be presented alongside expert commentary and comparisons to traditional hand-built reconstructions.


Top Row: Australopithecus Aethiopicus, Australopithecus, Homo Heidelbergensis, Homo Erectus. Middle Row: Homo Habilis, Australopithecus Afarensis, Paranthropus Boisei, Australopithecus Africanus, Australopithecus Afarensis. Bottom Row: Wedge Capped Capuchin, Siamang


What Would This System Give Us?

Several extra sources of information can be used to help reduce guesswork. Forensic science has measured average soft tissue thickness at standard points on the face across age, sex and ancestry. Those tables can be used as anchors. Ancient DNA, where we have it, could inform pigmentation and other features. Known information pertaining to the genes that influence skin, hair and eye color can narrow the palette for Neanderthals and some early modern humans. Comparative anatomy helps too. A specimen placed on a particular branch of the family tree should look like its neighbors on that branch. A reconstruction of Australopithecus should not drift toward a modern human nose, and a reconstruction of early Homo should not drift toward a chimp.

A museum could present the results in a way that reveals both the power and the limits of the method. Imagine standing in front of a fossil cranium and a rotating 3-D viewer. You can toggle through twenty plausible faces derived from the same skull. A simple overlay highlights the regions where the model is most confident in green and least confident in red. A control lets you alter the features based on ancient DNA when it exists. It invites the public to see how bone constrains flesh and where knowledge gives way to uncertainty. Reconstructions should be clearly labeled as probabilistic and training data should be de-identified.

What might we actually see if we were to build this model? Neanderthals provide a best case because we have several complete crania and high-quality DNA. The ensemble would likely be tight where bone speaks strongly and more variable around the nose and lips. The tiny LB1 skull from Flores would show a different pattern, with robust jaws and a short midface but wider uncertainty in soft tissue. A classic Australopithecus like “Mrs. Ples” would land in between ape-like prognathism and human-like flattening. Denisovans are the opposite case: we have remarkable DNA and finger fossils, but no fossil skull, so this technique could not be applied to them (yet).

This project also belongs to a larger idea that modern AI can act as a telescope capable of peering back through time. Accordingly, this technique could be used for dinosaurs or any extinct creatures. We could recreate a T. rex face based on a system trained on birds and reptiles. Furthermore, I have created a genome prediction pipeline that could estimate a plausible genome for when we don’t have a genome for the species in question.

https://www.observedimpulse.com/2025/07/ai-mediated-reconstruction-of-dinosaur.html

Conclusion

I remember back in my twenties, I thought the face of an extinct hominin was one of the great mysteries of science and something I very much longed to see. Although these faces will never be known with finality, with careful methods, we can recover their outlines and likely expressions. We can give museums new tools to teach. We can give readers a stronger sense of kinship. The result is not just a new picture in a textbook. It is a new way to talk with the public about evidence. A single confident portrait invites argument about smiles and haircuts. An interactive ensemble would invite questions about how bone shapes flesh, about what DNA can and cannot tell us, and about how evolution channels variation. It lets us look an ancestor in the eyes while remembering that some parts of that gaze come from rock and measurement, and some parts come from honest uncertainty.

I have always thought it unfortunate that most modern depictions make hominins look brutish. I think the main takeaway from high-fidelity facial recreations is that we will see these people were beautiful and noble. Some of them would’ve been diminutive and adorable. Others would have been intimidating due to their size and robusticity. Some of the phylogenetically older ones might be eerie looking, but their visages would help us see our continuity with the rest of the primate order. But I predict here that we will see the intellect in their eyes, we will find them attractive, and we will see them as cousins and equals. I believe that they will invariably look interesting, making us want to reach out, talk, and engage with them. And I think, in the not-so-distant future, AI could help us have a scientifically accurate, communicative interaction with the avatar of a Homo erectus individual. Before we wrap up here, let’s take a look at which hominin species we actually have skulls from.

Here’s a list of hominin taxa with substantially complete crania/skulls (good enough for 3-D modeling) that this pipeline could take as inputs. They are grouped here roughly by era, with exemplar specimens in brackets.

• Sahelanthropus tchadensis — Toumaï cranium [TM 266-01-060-1]
• Australopithecus anamensis — near-complete cranium [MRD-VP-1/1]
• Australopithecus afarensis — adult A.L. 444-2; juvenile “Selam” [DIK-1-1]
• Australopithecus africanus — “Mrs. Ples” [STS 5]; “Little Foot” [StW 573]
• Australopithecus sediba — MH1 (“Karabo”), MH2 crania
• Paranthropus aethiopicus — “Black Skull” [KNM-WT 17000]
• Paranthropus boisei — “Zinj” [OH 5], KNM-ER 406
• Paranthropus robustus — SK 48; DNH 155
• Kenyanthropus platyops — cranium KNM-WT 40000 (distorted but complete enough)
• Homo habilis — KNM-ER 1813; OH 24 “Twiggy” (reconstructed)
• Homo rudolfensis — KNM-ER 1470
• Dmanisi early Homo (often H. georgicus / early H. erectus) — D2280, D2700, D4500
• Homo ergaster — KNM-ER 3733, 3883; Turkana Boy skull [KNM-WT 15000]
• Homo erectus — Sangiran 17; Zhoukoudian crania; Mojokerto; Ngandong
• Homo heidelbergensis sensu lato (incl. “H. rhodesiensis” / “H. bodoensis”)
• East Asian mid-Pleistocene archaic Homo — Dali, Jinniushan, Maba
• Homo longi (“Dragon Man”) — Harbin cranium (very complete)
• Homo naledi — several near-complete crania (e.g., “Neo”; composite but model-ready)
• Homo neanderthalensis — (e.g. La Chapelle-aux-Saints 1, La Ferrassie 1)
• Homo floresiensis — LB1 (nearly complete), LB6 (partial)
• Homo sapiens (anatomically modern) — Herto (BOU-VP-16/1), Omo 1, Skhul, Qafzeh

Borderline/partial but still useful (reconstructed from crushed pieces; less than “intact”):
• Ardipithecus ramidus [ARA-VP-6/500]
• Australopithecus garhi [BOU-VP-12/130]
• Homo antecessor (excellent face ATD6-69, but not a full skull)

Not yet eligible for recreation due to lack of an intact cranium (as of now):
• Denisovans (parietal fragments + Xiahe mandible; no complete skull)
• Homo luzonensis (teeth and hand/foot bones only)
• Orrorin tugenensis, Ardipithecus kadabba, Australopithecus bahrelghazali (no cranial vaults)

Prehistoric Brains After Trauma: Could Dinosaurs Have Had Mental Disorders?

When we imagine dinosaurs, we picture thunderous hunters and herds of armored giants, instinctive, unstoppable, alien in their otherness. But beneath those ancient scales were brains shaped by the same evolutionary pressures that still mold animal minds today: fear, excitement, competition, and survival. What if we could peer past the fossilized bone into the neurobiological logic that governed a tyrannosaur’s behavior under stress? What if, like mammals and birds, dinosaurs possessed ancient neural circuits tuned to adversity, that recalibrated perception, emotion, and behavior in ways that echo what we now call mental disorders?

A dinosaur in a prison


This blog entry explores the provocative idea that traits like impulsivity, hyper-vigilance, and even cognitive rigidity, hallmarks of conditions like anxiety, schizophrenia, depression, and PTSD, are not modern dysfunctions, but predictive adaptive responses: deeply conserved survival strategies honed across deep time. We will draw from the latest research in comparative neuroscience involving birds, reptiles and others, and consider behavioral shifts under chronic stress that mirror aspects of human psychopathology. By looking backward, through the lens of both paleontology and psychiatry, we might gain a clearer view not only of dinosaur behavior, but of the ancient, adaptive roots of our own minds.

Archosaurs and Their Living Legacy: The Key to Understanding Dinosaur Minds

To understand the minds of stressed dinosaurs, we start with a powerful evolutionary clue: dinosaurs were archosaurs, a group of reptiles that also includes modern birds and crocodilians. While the non-avian dinosaurs disappeared 66 million years ago, their lineage didn’t vanish; it branched and persisted, giving us rich comparative models to work with today. These two groups offer a remarkable contrast: birds are warm-blooded, highly social, and behaviorally flexible, while crocodilians are cold-blooded, solitary, and behaviorally stereotyped. Yet both exhibit complex learning, parental care, and powerful stress responses rooted in deep evolutionary history.

Beyond archosaurs, we can also draw meaningful comparisons from a broader set of vertebrates (fish, amphibians, and reptiles) to trace how animals adapt their brains and behavior in response to adversity. These comparisons let us triangulate what aspects of dinosaur neurobiology may have been ancestral, what was shared, and what may have been unique. By examining how modern animals cope with stress, through changes in vigilance, aggression, impulsivity, and cognitive flexibility, we can begin to piece together a picture of how dinosaurs, too, may have shifted into adaptive mental modes to survive harsh or unpredictable environments.

Stress in Birds and Reptiles: Ancient Roots of Emotional and Cognitive Shifts

While most studies of mental health and stress responses focus on mammals, compelling evidence from birds and reptiles suggests that adaptive emotional and behavioral changes under stress are deeply conserved across vertebrate evolution.

Birds

Birds, especially species like pigeons, crows, zebra finches, and chickens, have demonstrated striking parallels to mammals in how they respond to chronic or early-life stress:

  • Cognitive inflexibility: Stressed birds show reduced ability to shift strategies or adjust to new information, mirroring the rigidity seen in human conditions like anxiety, depression, and schizophrenia.
  • Impulsivity: Repeated exposure to stress in birds leads to riskier decision-making, impulsive feeding, and poor long-term planning, behavioral shifts that prioritize short-term survival over long-term optimization.
  • Vigilance and emotional reactivity: Chronically stressed birds exhibit exaggerated fear responses, heightened startle reactions, and avoidance behaviors, aligning with hyperarousal and anxiety.
  • Reduced neurogenesis and brain volume: Regions like the hippocampus, vital for learning and memory, shrink under chronic stress in birds, paralleling mammalian and human stress responses. The avian homologue of the mammalian prefrontal cortex (PFC) is also affected by stress in ways similar to those seen in mammals.
  • Sensory filtering deficits: There is preliminary evidence that stress disrupts pre-attentive sensory gating in birds, similar to prepulse inhibition (PPI) deficits seen in schizophrenia and PTSD.

Reptiles

Though less studied, reptiles, such as lizards and turtles, also display conserved neuroendocrine responses to stress:

  • HPA-axis analog activation: Like mammals and birds, reptiles possess a hypothalamic-pituitary-adrenal system that releases glucocorticoids under stress. This hormonal cascade alters metabolism, behavior, and reactivity.
  • Behavioral withdrawal and freezing: Under stress, reptiles often reduce exploratory behavior, exhibit rigid threat postures, and favor freezing or defensive aggression, responses likely rooted in ancient survival strategies.
  • Memory and learning impairments: Chronic stress impairs spatial learning and reduces problem-solving flexibility in reptiles, particularly in geckos and turtles.
  • Epigenetic modulation: Some reptilian studies show that early developmental stress can cause lasting changes in gene expression, particularly in genes involved in neuroplasticity and hormone sensitivity.

Together, these findings suggest that adaptive behavioral and neurological responses to adversity (impulsivity, hypervigilance, reduced flexibility, and altered stress reactivity) are not limited to humans or even mammals. They appear in birds and reptiles too, pointing to an ancient, shared toolkit for coping with danger and uncertainty. If these traits emerge consistently in stressed modern archosaurs, it’s highly plausible that dinosaurs exhibited similar neurobehavioral shifts under environmental pressure. These may have served as temporary survival modes, reactive mental states tuned for hostile conditions, just like we see in modern animals today.

DSM Diagnoses That May Have Parallels in Dinosaurs

If you reframe “mental disorders” as context-dependent neuroecological strategies, then yes, Tyrannosaurs and other dinosaurs likely exhibited a full spectrum of stress-calibrated behavioral phenotypes. Not disorders in their world, but adaptive shifts in brain state, shaped by evolution. While we can’t diagnose extinct animals with DSM-5 categories, we can infer plausible analogs to human psychiatric traits.

Anxiety: Modern reptiles and birds show anxiety analogs like vigilance, avoidance, reduced foraging, and freezing when stressed. Dinosaurs almost certainly exhibited similar anxiety-like responses, some individuals more than others, across a continuum.

Depression: In mammals, birds, reptiles, and fish, chronic stress, social defeat, or loss can lead to anhedonia (loss of pleasure-seeking), lethargy or reduced exploration, social withdrawal, as well as appetite and sleep changes. While we can't measure “sadness” in a T. rex, we can infer that low-dominance individuals or those in harsh ecological niches may have displayed reduced foraging, passivity, or increased vulnerability, the functional equivalents of depressive states.

PTSD: If a juvenile T. rex experienced prolonged early-life stress, their perceptual systems might have over-prioritized threat signals, resulting in false positives, exaggerated reactions, or preemptive aggression. This is functionally similar to paranoia or hyperarousal in PTSD.

OCD: Some dinosaurs could have exhibited repetitive, ritualized behavior patterns resembling compulsions. Compulsive behaviors could have led to stereotypies such as pacing, repetitive head bobbing, overgrooming, or self-mutilation. These are common in modern zoo animals, birds in cages, and reptiles in tanks, especially under boredom or stress.

Autism: Some social dinosaur species may have had members with limited sociality: strong solitary focus, low social interest, repetitive behaviors, or reduced communication. This might not have been maladaptive. Such traits may have represented natural variation, helping these individuals focus on food and survival rather than fraternizing, and in certain environments this reduced social engagement might even have enhanced fitness.

ADHD: Hyperactivity and novelty seeking could have been adaptive in certain niches leading to impulsivity, roaming, inattention, and erratic exploration. Birds, reptiles, and even fish show dopaminergic disinhibition after chronic stress.

Bipolar: Some members of a dinosaur species could have cycled from depression to a form of mania. This may have been adaptive because it helped them swing from resting in oppressive times to fully taking advantage of beneficial circumstances.

Psychopathy: In rough, traumatic settings, predatory aggression and low affiliative bonding could have produced an adaptive lack of fear and goal-driven behavior that was non-pathological in context. This could have surfaced as erratic attack behavior, risk-taking, boundary violations, and territorial overexpansion. Stress-altered dopaminergic systems are linked to impulsivity in birds, mammals, and fish.

We can also imagine that dinosaurs could have had panic disorder, startle hyperreactivity, sleep disorders, attachment disorders, and social anxiety. If dinosaurs had rich, plastic brains, as their descendants do, then they almost certainly experienced state shifts we now pathologize in humans. Dinosaurs likely had a wide behavioral range, and under chronic stress or developmental trauma, that range probably skewed toward survival-prioritized mental modes. These behaviors were not "disorders" in their context, but reactive calibrations of the brain, conserved across deep time. The full catalog of dinosaur mental states remains speculative, but the building blocks of modern disorders were almost certainly present.

The Benefits of Cognitive Inflexibility in Response to Stress

Stress reshapes learning and cognition in birds along several interacting axes: timing, intensity, and social context. Acute stressors can sometimes sharpen vigilance and speed up threat learning, but repeated or unpredictable stress tends to tax the HPA axis, elevate corticosterone over longer windows, and push cognition toward faster, noisier strategies.

Song learning is one of the clearest windows into these effects. In species with sensitive periods, stress during tutoring or practice narrows the repertoire, reduces motif accuracy, and increases variability from rendition to rendition. Neuroanatomically, stress perturbs the development and plasticity of nuclei such as HVC and RA and the basal-ganglia–like Area X, where dopaminergic signals normally calibrate error-based refinement; disrupted dopamine and reduced BDNF signaling can degrade the precision of auditory–motor matching. Adults that were stressed as juveniles often carry these fingerprints forward: they are slower to adjust song after auditory feedback manipulations, less able to acquire new syllables, and less consistent in performance when challenged.

Memory systems are similarly tuned. The hippocampus in food-caching and scatter-hoarding birds supports high-resolution spatial memory and shows adult neurogenesis; chronic or early corticosterone exposure reduces neurogenesis, dendritic complexity, and long-term potentiation, yielding poorer cache recovery and less flexible route choice. In decision-making tasks, stressed individuals show more perseveration and slower reversal when contingencies flip, essentially a reduction in cognitive flexibility, while at the same time showing quicker acquisition of simple avoidance responses, an asymmetry that reflects the well-known inverted-U relation between arousal and learning.
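The inverted-U relation between arousal and learning mentioned above (the Yerkes-Dodson law) can be sketched with a toy model. The Gaussian curve shape, the parameter values, and the two task functions below are purely illustrative assumptions, not values fitted to any bird data; the sketch only shows why high stress can spare simple avoidance learning while degrading flexible reversal learning.

```python
import math

def performance(arousal, optimum=0.5, width=0.2):
    """Toy inverted-U curve: performance peaks at an optimal arousal
    level and falls off on either side. Shape and parameters are
    illustrative assumptions."""
    return math.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

# Harder, flexibility-demanding tasks are often modeled with a lower
# optimal arousal and a narrower tolerance than simple tasks, which
# reproduces the asymmetry described above.
def reversal_learning(arousal):
    # Hypothetical "hard task" curve: peaks at low arousal.
    return performance(arousal, optimum=0.3, width=0.15)

def simple_avoidance(arousal):
    # Hypothetical "easy task" curve: tolerates higher arousal.
    return performance(arousal, optimum=0.6, width=0.25)

high_stress = 0.85
print(reversal_learning(high_stress))   # near zero: flexibility collapses
print(simple_avoidance(high_stress))    # comparatively preserved
```

Under this toy model, a chronically stressed (high-arousal) individual keeps most of its simple avoidance performance while its reversal-learning performance collapses, matching the perseveration-plus-fast-avoidance pattern described in the text.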

Attention and learning style also reconfigure. Under stress, juveniles become more neophilic in some contexts (sampling new foods and sites more readily) yet more neophobic in others (warier of handlers, novel objects, or traps), reflecting a reallocation of attention toward cues that best predict immediate survival. In judgment-bias paradigms, birds exposed to chronic stress interpret ambiguous cues more pessimistically, which can dampen exploration and learning from partial feedback.

These cognitive changes are mediated not only by circulating hormones but also by lasting molecular marks. Early adversity can alter glucocorticoid receptor expression and epigenetic regulation in brain regions supporting learning, biasing later stress reactivity and plasticity. This makes these changes look like predictive adaptive responses. Mothers can transmit some of this tuning before hatch via yolk hormones, effectively “pre-setting” offspring for expected environmental volatility. The result is a developmental program that trades fine-grained accuracy and tutor fidelity for independence, rapid sampling, and broad social learning when parental care is unreliable.

Chronic stress in humans leads to similar changes in higher-order brain states that can reduce activity in the PFC and hippocampus (the stress cascade), causing executive deficits and mental inflexibility. I have previously written about how this may have been adaptive in nature (Reser, 2016; 2007). I believe cognitive inflexibility can enhance survival in highly constrained, high-threat environments by favoring reliability, predictability, and resistance to distraction over exploratory flexibility. In other words, in dangerous or unpredictable settings, the cost of being wrong or of switching strategies too easily can be much greater than the cost of being rigid.

Sticking to a narrow set of well-practiced defensive behaviors (e.g., freezing, fleeing, hiding, aggression) may be more reliable than deliberation. In flexible cognition, ambiguity is tolerable; in defensive states, it’s dangerous. Inflexibility reduces hesitation. In abusive environments, seeking novelty or shifting strategies might repeatedly backfire. Cognitive rigidity helps avoid exploration-exploitation errors, keeping an individual locked into “safe” (even if suboptimal) routines. It may have helped animals avoid overthinking or analysis paralysis so that they could simplify things and focus on survival. Essentially, it may have been a heuristic compression strategy under duress. Fascinatingly, chronic stress often impairs song learning in songbirds, leading to simpler repertoires or less accurate mimicry. Stress-exposed birds may also shift from careful, attention-based learning to faster, riskier trial-and-error strategies.

When the environment is hostile, chaotic, or resource-poor, spending time evaluating options or mentally simulating outcomes may become a liability. Cognitive inflexibility strips away nuance in favor of quick threat classification (“friend or foe,” “safe or unsafe”), actionable routines over deliberation, and hyper-salient cues over ambiguous signals. In that sense, it’s not just about freezing out complexity, it’s about optimizing the brain for immediate survival rather than long-term gain. You’re simplifying the model of the world to minimize uncertainty, even if that means overestimating threat or underutilizing social opportunity. There’s a strong parallel here with defensive downshifts in perception and cognition across species, like prey animals that reduce exploratory foraging when they smell a predator. Flexibility is a luxury of safety. In this context, schizophrenia-like features such as vigilance, rigidity, and emotional reactivity become a tightly integrated “defense set” that may play a role in most tetrapod vertebrates. This makes me wonder if T. rex had a brain state comparable with schizophrenia.

In a chronically threatening world, a simpler, more rigid mind may be better tuned to survive. “When the world’s on fire, don’t process, pattern-match and act.” And it scales: from rodents freezing at a twig snap, to humans forming fast heuristics like “everyone’s out to get me” after prolonged adversity. It's not accurate, but it’s efficient, predictive, and in some contexts, life-saving. You might even define schizophrenia not as cognitive breakdown, but as the overactivation of a compressed defensive schema, built for ancestral threat, miscalibrated for modern peace. I imagine dinosaurs could have found use in this inflexibility sometimes too.

Tyrannosaurus rex as a Case in Point

The brain size of Tyranosaurus Rex makes it interesting for the present discussion. T. rex had the largest brain of any dinosaur. In fact, its brain was nearly twice the size of any other dino. It measured around 300 cubic centimeters, placing it around the size of a chimpanzee brain (380 cc) but far smaller than a human’s 1,300 cc. For its time though, it was colossal, especially when compared to the velociraptor’s 15cc brain or the gargantuan dreadnaughtus’ 25cc brain. The T Rex cerebrum would likely have had some cognitive flexibility, adequate for learned routines, territory, and hunting strategies. The brain-to-body ratio (EQ) was small compared to mammals, and many theropod dinosaurs have larger EQs than T Rex, suggesting energy-efficient, heuristic processing, consistent with reactive, low-plasticity cognition. But in the Cretaceous, and all the time that came before it, T Rex may have been one of the smartest animals around. And this might mean that it had much to benefit from downplaying high-level cognition during hostile times. Did stress turn it into an even more savage tyrant king?
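The EQ comparison above can be made concrete with a short sketch. It uses Jerison's classic mammalian baseline (expected brain mass = 0.12 × body mass^(2/3), masses in grams), which is an assumption on my part: paleoneurologists typically score dinosaurs against a reptilian baseline instead, which yields higher values. The body masses below are rough, commonly cited figures, and brain volumes from the text are converted at roughly 1 g per cc.

```python
def encephalization_quotient(brain_g, body_g):
    """EQ = actual brain mass / expected brain mass for body size.
    Uses Jerison's mammalian baseline: expected = 0.12 * body**(2/3),
    with masses in grams. Note: dinosaur EQs in the literature are
    usually computed against a reptilian baseline, giving larger
    numbers; this mammalian version is only for cross-species
    illustration."""
    expected = 0.12 * body_g ** (2 / 3)
    return brain_g / expected

# Rough figures (assumptions; published estimates vary widely):
animals = {
    "T. rex": (300, 7_000_000),   # ~300 cc brain, ~7,000 kg body
    "chimp":  (380, 45_000),      # ~380 cc brain, ~45 kg body
    "human":  (1300, 70_000),     # ~1,300 cc brain, ~70 kg body
}
for name, (brain_g, body_g) in animals.items():
    print(f"{name}: EQ ~ {encephalization_quotient(brain_g, body_g):.2f}")
```

Even under these crude assumptions, the ordering comes out as expected: human EQ exceeds chimpanzee EQ, and T. rex scores far below both on a mammalian scale, which is why absolute brain volume alone overstates how "chimp-like" its cognition would have been.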

Tyrannosaurus rex, like modern mammals and birds, likely possessed an adversity-calibrated neurobehavioral system that could shift individuals toward a “reactive defense phenotype” under chronic environmental stress. This phenotype could have favored heightened vigilance, cognitive rigidity, and impulsive aggression. In T. rex, early-life stress (e.g. poor food availability, high parental competition) likely led to suppressed hippocampal development, prefrontal-like downregulation, and upregulated amygdaloid/defensive circuitry. Juvenile tyrannosaurs exposed to chronic stress may have developed rigid, antisocial behavioral patterns. Delusional-like perceptual misfires are harder to prove, but oversensitivity to ambiguous stimuli (false positives) could have helped avoid ambush or surprise in a volatile ecosystem. Just like a stressed rodent shows prepulse inhibition deficits, a stressed T. rex might have been easily startled, overly reactive, or “trigger-happy” in its threat detection system, trading false alarms for survival.

As we discussed, chronic corticosterone exposure in birds is known to reduce neurogenesis, increase impulsivity, impair spatial memory, and suppress social behavior. As in humans, these changes are brought about through epigenetic modifications (e.g., DNA methylation, histone acetylation) in response to environmental stress. Modern birds exposed to prenatal stress develop smaller brains, reduced sociability, slower learning, and heightened reactivity because of these changes.

Stressed adult birds may reduce investment in parental care, feeding chicks less often, abandoning nests more readily, or investing in fewer offspring overall. In response, early-life stress in young birds often reduces their responsiveness to parental cues while increasing their attention to unrelated birds of the same species (conspecifics). This shift likely reflects an adaptive strategy, helping juveniles learn from peers and integrate into broader social groups when parental care is unreliable. Experiments in species like zebra finches and starlings confirm that stressed chicks attend less to parents and more to non-kin, accelerating independence. Could this have been a strategy also used by dinosaurs like tyrannosaurs?

How closely related was T. rex to birds? Tyrannosaurus rex, like all birds, was a member of the Theropoda, a group of bipedal, mostly carnivorous dinosaurs. Both T. rex and modern birds even belong to a more specialized branch called Coelurosauria, which includes feathered dinosaurs and some of the most cognitively advanced species of the Mesozoic. This makes me feel pretty confident about drawing parallels between birds and Tyrannosaurus, although it is important to mention that they shared a fairly ancient common ancestor. The ancestor of all living birds lived around 100 million years ago, and the common ancestor that birds shared with T. rex would have lived around 160 million years ago. That leaves lots of time for genetic changes in the way the brain responds to stress.

Conclusion

By studying dinosaurs’ closest living relatives, birds and crocodilians, and by examining how stress alters the brains and behaviors of animals across the tree of life, we can begin to build a scientifically grounded picture of how dinosaurs may have responded to adversity. The evidence is clear: chronic stress in birds and reptiles can lead to heightened vigilance, impulsivity, emotional reactivity, and cognitive inflexibility, traits long associated with human mental disorders like anxiety, depression, and schizophrenia. But in nature, these traits often represent adaptive strategies, not pathologies. They reflect survival-oriented recalibrations, predictive shifts that prioritize defense, alertness, and short-term action in the face of threat or instability.

If such responses are found in birds, lizards, and mammals alike, it is reasonable to infer that dinosaurs also possessed ancient neural systems capable of mounting similar behavioral and physiological shifts. In this light, what we label as "mental disorders" in humans may reflect a shared evolutionary toolkit: a repertoire of adaptive responses, once sculpted by danger and deprivation, now mismatched with the modern world.

References:

Reser, J. (2016). Chronic stress, cortical plasticity and neuroecology. Behavioural Processes, 129, 105-115.

Reser, J. (2007). Schizophrenia and phenotypic plasticity: Schizophrenia may represent a predictive, adaptive response to severe environmental adversity that allows both bioenergetic thrift and a defensive behavioral strategy. Medical Hypotheses, 69(2), 383-394.