Friday, July 18, 2025

How to Use AI to Reconstruct Dinosaur Genomes from Bird DNA and Skeletons


Jared Edward Reser Ph.D.

7/18/25

1.0 Introduction

I have dreamed of witnessing a real-life dinosaur since childhood, but the fact remains that we may never get our hands on original dinosaur DNA. Unfortunately, the genetic traces (even those trapped in amber) degraded completely tens of millions of years ago, a major disappointment for dino aficionados and Jurassic Park fans. But as I have watched the advance of artificial intelligence over the last couple of years, I have realized that AI creates new possibilities. Modern machine learning has an astounding capacity for prediction, for finding hidden connections, and for cross-referencing. Generative AI excels at constructing domain-relevant data structures when provided sufficient context and training examples. And agentic AI is being programmed to plan, organize, and automate complex work at scales beyond the abilities of people. Now, in 2025, it is rather easy to imagine a future superintelligent AI tinkering until it brings back a guesswork-based, life-sized biological recreation of a Tyrannosaurus rex. So, even if creating precise clones of long-extinct creatures is impossible, in the coming years it does seem possible for advanced AI agents to create surprisingly accurate guesswork reconstructions of dinosaur DNA.

This entry will discuss the information available to a superintelligent AI to help it piece together the clues necessary to do this. It will also propose a three-step pipeline using machine learning to generate a synthetic dinosaur genome. This pipeline leverages existing data to gather information about something we don’t have, dinosaur DNA. Essentially, we use data from modern birds and reptiles, where we have both the DNA and the bones, to train a system to recognize how genes shape skeletons. Then, by giving this trained system a dinosaur skeleton, we can ask: “What kind of genes would have built this?” In essence, it uses paired genomic and skeletal data from extant birds and reptiles to extract latent genetic signals and learn structure-function relationships, which we then apply to fossilized dinosaur skeletons to infer their genomic makeup.

Remember those analogical reasoning questions from the SAT? They would present two pairs of related things and ask you to identify the relationship they share. Here is an example:

“Blueprint is to building as genome is to body plan.”

All of these questions share the pattern A:B::C:D. Well, the technique laid out here shares that analogical structure. If A (bird skeleton) is to B (bird DNA) as C (dino skeleton) is to D (dino DNA), then what does D equal? Bird genomes are related to bird skeletons (both high-dimensional vectors) by a complex analogy (a mathematical function). The math behind that analogy is beyond human capability to derive, but it is tractable for machine learning. That is because it is less conceptual and more computational. Once the machine has quantified that analogy, it can apply it to dinosaur fossils to reason about dino DNA. This is because the way a genome produces a skeleton in birds is analogous to the way a genome must have produced a skeleton in dinosaurs. The full machine learning pipeline is depicted in the figure in Section 3 below.

[Image: A dinosaur running through a computer server]

2.0 The AI System that Would Be Necessary

Before we get into specifics, let’s use this section to discuss the generalities. Let’s describe a general multimodal AI system that could reverse-engineer dinos by blending paleontology, comparative genomics, structural biology, and molecular AI. To make a synthetic approximation of a dinosaur genome, I have been envisioning an autoregressive, attention-based neural network built on the transformer architecture. Such a system could be trained on genomes, especially the genomes of species similar to the one the scientist is trying to resurrect. For dinosaurs, this would be birds, crocodilians, and various other reptiles (sauropsids). Think of it like GPT for genes. Instead of predicting the next word, this system would predict the next nucleotide. In other words, you enter a sequence of DNA and it performs statistical autocomplete to make assumptions about what could come next. There is already research, and proofs of concept, in this vicinity.
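
To make the GPT-for-genes idea concrete, here is a minimal sketch of a next-nucleotide predictor, assuming PyTorch. The model sizes, the toy sequence, and the name NucleotideGPT are illustrative stand-ins of my own, nowhere near the scale of the real systems discussed below.

```python
import torch
import torch.nn as nn

# Vocabulary: the four nucleotides.
VOCAB = {"A": 0, "C": 1, "G": 2, "T": 3}

class NucleotideGPT(nn.Module):
    """Toy decoder-only transformer that predicts the next nucleotide."""
    def __init__(self, vocab_size=4, d_model=128, n_heads=4, n_layers=2, max_len=512):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=256, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, idx):
        t = idx.size(1)
        x = self.tok(idx) + self.pos(torch.arange(t, device=idx.device))
        # Causal mask: each position may only attend to earlier nucleotides.
        mask = nn.Transformer.generate_square_subsequent_mask(t).to(idx.device)
        x = self.blocks(x, mask=mask)
        return self.head(x)  # logits over A/C/G/T at every position

# Training signal: shift the sequence by one so the model learns "what comes next".
model = NucleotideGPT()
seq = torch.tensor([[VOCAB[c] for c in "ACGTGGTACCATAG"]])
logits = model(seq[:, :-1])
loss = nn.functional.cross_entropy(logits.reshape(-1, 4), seq[:, 1:].reshape(-1))
loss.backward()  # one step of statistical-autocomplete learning
```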

Existing software tools that use transformers to make predictions about human genes include DNABERT, Nucleotide Transformer, and Enformer. Just two weeks ago, Google DeepMind released a transformer-based genomic AI called AlphaGenome. It is not the first, but it is currently the best glimpse into the applicability of AI for genomics. AlphaGenome is DeepMind’s powerful new AI model, designed to interpret long DNA sequences (up to 1 million nucleotides (As, Cs, Ts, & Gs)) with base-pair precision. You give it a DNA sequence, and it can predict important features like where genes start and stop, how genes interact, and how active certain regions are in different cells. It helps scientists better understand how our genome works, how genetic changes might cause disease, and how we might design or edit DNA more intelligently. There are a few reasons why the generative pretrained transformer (GPT) architecture works well for genome prediction, as it does for language. Genomes contain long-range dependencies that resemble linguistic grammar and narrative structure. This allows such a system to recognize important patterns in DNA, or any long sequence, that would be invisible to a human.

Overall, a system like this might be able to help make educated guesses about bird DNA if it is trained on hundreds of bird and croc genomes to learn the relevant long-range gene dependencies, regulatory motifs, and intron-exon patterns. The system could also be trained to access and reference lots of other data we have on birds and crocs. Thus, the AI system would have to be multimodal, taking information and building context from as many sources as possible. Such a model, trained on existing (extant) animals, would provide guardrails for producing hypothetical birds (class Aves). However, to produce a specific dinosaur, it would need as much information as possible about the dinosaur species of interest. Such a system, if empowered by an AGI agent (and such agents are developing rapidly), could be prompted with dinosaur fossil features and phylogenetic position to generate probabilistic genome drafts. The genomes would be conditionally generated rather than inferred by genetic (or phylogenetic) parsimony. Such a system could make edits and revisions, iteratively manipulating a conjectural genome to get closer and closer to a reverse-engineered dinosaur.

Just as AlphaFold revolutionized 3D protein folding with transformers, models like the one described here might be capable of generating functional blueprints of extinct species. Given enough funding, computing power, and research, a system like this could theoretically produce a genetic sequence capable of being turned into an animal. However, keep in mind that even if we had a functional genome for a dinosaur it would still be very difficult to grow / clone it. In fact, as we will see, this proposed project is much more difficult than it sounds, and is likely beyond the abilities of a human or even teams of humans. Although after reading this, you might become convinced that a future superintelligence could tackle it.

So far, this idea is very general and nebulous. We have a “multimodal superintelligent AI” trained on birds and reptiles, that somehow takes context from a specific dinosaur species and uses that to infer genetic sequences. But let’s get more specific and focus on what information exists within dino skeletons and how it could be extracted.

3.0 Training an AI System to Predict Dino Genomes Based on Bird Skeletons

My proposed method would start by pretraining an AI (a machine learning neural network model). The model would be taught from examples to predict skeletal anatomy from the whole-genome data of living birds and crocodilians. Basically, researchers would give the computer a bird genome as input and teach it to predict the body shape. Like other machine learning systems, it would strengthen the connections responsible for correct predictions and weaken the connections responsible for incorrect ones using backpropagation. This would require linking the variation in genetic sequences to measurable phenotypic differences in the bones (size, shape, proportions, articulation geometry).

But the model connecting genes to bones would need base knowledge before training. It wouldn’t work to start off by letting a chicken genome predict an entire chicken skeleton. Next-token prediction works for GPT because each step is small, but a single leap to something as complex as a skeleton would never work. The system needs strong priors, so we would have to start with individual models for genes and skeletons and then connect them. The pretrained DNA foundation model would be trained on thousands of genomes across birds and reptiles to help it compress genomes into a meaningful embedding. Then a separate model would have to be pretrained on 3D skeletal shapes. The skeletal AI system would look at things like how limb bones scale across species, what structural features co-occur (femur angle with pelvic width), and general anatomical logic. This could involve a pretrained shape encoder that is trained on thousands of CT scans (or mesh landmarks) from birds and reptiles. Then these two models (genes and bones) would be aligned and trained as a joint model (e.g., contrastive learning, variational inference, a diffusion bridge) to associate genome embeddings with morphology embeddings. This would be like teaching someone who already speaks fluent “genome-talk” and fluent “anatomy-talk” to become a translator between them, instead of having to learn both languages from scratch at the same time. Merging the models could be done with relatively modest supervised training data, because the hard work of representation would have already taken place in pretraining.
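
Of the alignment options just listed, contrastive learning is the easiest to sketch. The snippet below is a hedged illustration, assuming PyTorch and two hypothetical pretrained encoders that already produce fixed-size embeddings; the batch size, dimensions, and temperature are placeholders.

```python
import torch
import torch.nn.functional as F

def contrastive_align(genome_emb, skeleton_emb, temperature=0.07):
    """CLIP-style loss: pull each species' genome and skeleton embeddings
    together and push mismatched pairs apart. Inputs: (batch, dim)."""
    g = F.normalize(genome_emb, dim=-1)
    s = F.normalize(skeleton_emb, dim=-1)
    logits = g @ s.T / temperature           # pairwise similarity matrix
    targets = torch.arange(g.size(0))        # matched pairs sit on the diagonal
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

# Stand-ins for what the two pretrained encoders would emit
# (grad enabled to mimic live encoder outputs):
genome_emb = torch.randn(32, 256, requires_grad=True)    # one row per species
skeleton_emb = torch.randn(32, 256, requires_grad=True)  # the matching rows
loss = contrastive_align(genome_emb, skeleton_emb)
loss.backward()
```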

This project would necessitate the genomes and naked skeletons of many reptiles and birds to build a library of examples (a genotype-to-phenotype training corpus). It may require genome and skeleton data from thousands of extant birds and crocodilians. There are over 10,000 species each of reptiles and birds, and this is great, because the more data the better. As of February 2022, genomes had been completed for over 540 species of birds, including at least one from every bird order. It is highly possible that other vertebrate (chordate) data, like fish, amphibian, and mammalian data, could strengthen the model. Either way, many animals’ genomes would have to be sequenced because numerous training examples would be needed. Distantly related species should be emphasized so the system can capture the diversity of forms. However, the system would also benefit from being exposed to intraspecies diversity, meaning that it would help to have numerous samples from the same species. The more, the better.

The genome is already a linear sequence of nucleotides ready for machine learning. Skeletons are 3D objects, so they would have to be 3D-mapped in a computer and broken down quantitatively into strings of numbers that the AI can ingest. This is addressed by the field of quantitative morphometrics, and there are already large existing datasets. The morphological traits would be encoded as embeddings in a multidimensional embedding space where similar traits are located near each other.
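
As a hedged sketch of how a digitized skeleton becomes a string of numbers, here is the classic morphometrics move: Procrustes alignment of homologous landmarks, which strips away position, size, and orientation and leaves a flat shape vector. The landmark counts and the NumPy usage are illustrative assumptions on my part.

```python
import numpy as np

def procrustes_align(landmarks, reference):
    """Align one specimen's 3D landmarks (n_points, 3) to a reference
    configuration, removing position, size, and orientation so that only
    shape remains. Reflection handling is omitted for brevity."""
    X = landmarks - landmarks.mean(axis=0)   # remove translation
    Y = reference - reference.mean(axis=0)
    X = X / np.linalg.norm(X)                # remove scale
    Y = Y / np.linalg.norm(Y)
    U, _, Vt = np.linalg.svd(X.T @ Y)        # optimal rotation (orthogonal Procrustes)
    return (X @ (U @ Vt)).flatten()          # flat shape vector for the model

# Hypothetical example: 40 homologous skeletal landmarks per specimen.
specimen = np.random.rand(40, 3)
reference = np.random.rand(40, 3)
shape_vector = procrustes_align(specimen, reference)  # length-120 input feature
```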

Then, we would teach the machine to use the genome to predict what kind of skeleton it would make. A deep neural network or transformer-like architecture would learn the statistical mappings from sequence space to shape space. To do this, the AI model would be exposed to many pairs of genomes and skeletons so it can learn the mappings between them. It may sound strange to go from DNA to a skeleton, but this is what these autoregressive systems do: they learn to predict one sequence from another. Contemporary systems have billions of parameters to tweak in order to capture the intricate relationships between the input sequences and their corresponding output sequences. This forward model would have to be validated with data not used in training. For example, we could hold back the data from chickens in order to see how close to an actual chicken skeleton the system can get when given a chicken genome. If the system performs well, we would take this forward model and run it backwards.
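
A toy version of that train-then-hold-out loop might look like the following. It assumes PyTorch, uses random tensors as stand-ins for real genome embeddings and Procrustes shape vectors, and holds out the "chicken" row exactly as described above; every number here is an illustrative assumption.

```python
import torch
import torch.nn as nn

# Forward model: genome embedding in, flattened skeletal shape vector out.
forward_model = nn.Sequential(nn.Linear(256, 512), nn.ReLU(), nn.Linear(512, 120))

genomes = torch.randn(100, 256)     # stand-ins for 100 species' genome embeddings
shapes = torch.randn(100, 120)      # the matching shape vectors
held_out = torch.zeros(100, dtype=torch.bool)
held_out[0] = True                  # pretend row 0 is the chicken

opt = torch.optim.Adam(forward_model.parameters(), lr=1e-3)
for _ in range(200):                # train only on the species we keep
    pred = forward_model(genomes[~held_out])
    loss = nn.functional.mse_loss(pred, shapes[~held_out])
    opt.zero_grad()
    loss.backward()
    opt.step()

# Validation: how close does the model get on the species it never saw?
with torch.no_grad():
    test_error = nn.functional.mse_loss(forward_model(genomes[held_out]),
                                        shapes[held_out])
```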

Once the forward model is trained, we would invert it, applying it in reverse to predict plausible genomic sequences from input skeletal morphology. This is the second step shown in the figure below. We would feed the inverted model skeletal data and train it to output genomes. This bi-directional genotype-phenotype modeling is similar to recent breakthroughs in protein design (AlphaFold → inverse design models like Chroma, RFdiffusion), where forward prediction unlocks plausible generative capacity in the inverse direction.

[Figure: The three-step pipeline: a forward genome-to-skeleton model is trained on extant birds and reptiles (step 1), inverted (step 2), and applied to fossil dinosaur morphology (step 3).]
Once the model is adequately trained to turn skeletal data from living vertebrates into genetic sequences, it could be fed morphometric data from a fossil dinosaur skeleton. This would necessitate a well-preserved and 3D-mapped fossil dinosaur skeleton. After feeding this data into the inverse model, it would output a distribution of plausible genomic sequences consistent with bird-reptile sequence-morphology mappings (sampling from a probabilistic output space). If a bird’s genome explains its skeleton, then a dinosaur’s skeleton must hold hidden information that can hint at its genome. You can see why this would not generate the true dinosaur genome. But it would generate a “best-fit plausible genome” grounded in comparative genomics and constrained by actual fossilized dino anatomy. This idea resembles recent AI breakthroughs in structure-function inference in proteins, but here it is applied to deep-time vertebrate paleogenomics.
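
One hedged way to picture that probabilistic output space: a small decoder, conditioned on a fossil's morphology embedding, that samples nucleotides one at a time. The GRU decoder, the embedding size, and the untrained weights are all illustrative assumptions; a real system would first train this decoder on the extant genome-skeleton pairs.

```python
import torch
import torch.nn as nn

class ConditionedDecoder(nn.Module):
    """Samples nucleotide sequences conditioned on a morphology embedding.
    In a real pipeline this decoder would be trained on extant
    genome-skeleton pairs; here the weights are random placeholders."""
    def __init__(self, d=256):
        super().__init__()
        self.tok = nn.Embedding(5, d)            # A, C, G, T, plus a start token
        self.rnn = nn.GRU(d, d, batch_first=True)
        self.head = nn.Linear(d, 4)

    @torch.no_grad()
    def sample(self, morph_emb, length=50):
        h = morph_emb.unsqueeze(0)               # skeleton embedding seeds the state
        tok = torch.full((morph_emb.size(0), 1), 4, dtype=torch.long)  # start token
        out = []
        for _ in range(length):
            x, h = self.rnn(self.tok(tok), h)
            probs = torch.softmax(self.head(x[:, -1]), dim=-1)
            tok = torch.multinomial(probs, 1)    # draw the next base stochastically
            out.append(tok)
        return torch.cat(out, dim=1)             # (batch, length) base indices

decoder = ConditionedDecoder()
fossil_emb = torch.randn(1, 256)                 # morphometric embedding of a fossil
draft = decoder.sample(fossil_emb)               # one plausible draft; resampling
                                                 # explores the output distribution
```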

It is worth mentioning that it wouldn’t make sense to start by predicting bird genomes from bird skeletons; it must be done the other way around. The direction of causality plays a role, because genes cause bodies. Also, many different genotypes could result in similar skeletons, and thus predicting genomes from skeletons requires learning a much more complex probability distribution. That is why starting by predicting skeletons from genomes creates a framework of internalized generative biological relationships, which can then be used to constrain guesses when running the model backwards on fossils. It is like learning how to bake bread: you must first learn how ingredients combine to make bread before you can work backwards and guess what ingredients went into a loaf by looking at its shape and texture. This method may sound outlandish at first, but it’s actually exactly what these models do and are being trained to get better at: exhaustively searching for and learning all of the predictive patterns that can be found between an input sequence and its output sequence.

By training a model on species where both the genome and skeleton are known, it becomes possible to map out a shared space where genetic traits and anatomical features co-vary in predictable ways. It also unearths latent traits, which are hidden patterns that connect DNA and anatomy. While it will not recover full genomes, it will generate constrained, probabilistic profiles, possibly offering the most scientifically grounded glimpse yet into the genomic architecture of extinct species. Of course, adding more information, aside from just skeletal anatomy, will help us reconstruct genomes with more precision. Let’s discuss this next.

4.0 What Other Information Might Assist In This Reconstruction?

We can glean a lot of pertinent information from fossils. Bone morphology and histology give us size, shape, posture, musculature, joint angles, growth rate, vascularization, and stress markers. The bones also give us lots of internal geometry that relates to internal organs. Comparing fossils of animals of different ages provides information about the way the bones change over the lifespan, as well as growth curves and the hormones and chemicals that might underlie them. This all informs which developmental genes (e.g., FGF, BMP, Runx2) likely governed these features. Bones alone could reveal significant genotype-to-phenotype mappings.

But paleontologists have so much more on dinosaurs than just bones. There are also fossils of soft tissue impressions that show skin, feathers, and scales, providing copious information that would be necessary for approximating dino features. These impressions offer a wealth of information about the shape and composition of the soft tissues that would have sat on top of the bones. Given enough of this information about the 3D structure of the exterior of the body, it could also be analyzed using the three-step pipeline above. In that case, it would be compared to the exteriors of birds and reptiles. Just look at this highly preserved (practically mummified) impression of an ankylosaur below. There are also some findings of dinosaur internal organs and blood vessels. Data like this provides a wealth of anatomical information that could be utilized.

[Image: Borealopelta, via Wikipedia]

There are many other forms of fossils that could lend meaningful data. Eggs are common, and their size, shape, and composition provide clues. Trace fossils like tracks and footprints add to the detail and give us information about the feet, the gait, and biomechanics that could be compared to that of birds. Brain endocasts (bony brain cases) are often uncovered in fossil specimens, and they allow scientists to see the shape and size of the species’ brain. They also help paleoneurologists compare the proportional size of their neuroanatomical structures to the brains of crocodilians and birds. Even gastroliths (stomach stones) and coprolites (poop) offer mathematical and geometrical details. Scientists have also uncovered information regarding dinosaur melanocytes and pigmentation that offers strong clues about the colors of skin, scales, and feathers. Given that artists’ representations of dinosaurs (paleoart) have been improving in scientific rigor for over a century, an AI system could also utilize pictorial and video CGI reconstructions of dinosaurs as references.

Information about dinosaur behavior would also help, and thankfully scientists have been developing this for over a century. Careful study has revealed very specific inferences about nesting, brooding, pack hunting, mating, sociality, intelligence, and much more. Fossil burrows, resting traces, feeding traces, and gnaw marks add to the resolution. There has also been lots of exacting scientific work on ontogenetic development, pathology, thermoregulatory physiology, isotopic signatures, paleobotany, and paleoenvironmental reconstruction. All this information helps us make assumptions about behavior, and comparing it to bird behavior allows us to constrain neurological and endocrine gene candidates. All taken together, we have an extraordinary amount of collateral evidence that can guide probabilistic reconstruction. The multimodal AI would not just “guess blindly” but would condition its genome drafts on a broad constellation of well-studied physical, ecological, physiological, and behavioral constraints.

Most of the present research on resurrecting dinosaurs involves using phylogenetic methods to look for conserved and divergent sequences in birds to reconstruct a plausible common ancestor. Partially reconstructing the common ancestor of all birds, which researchers are working on now, would be helpful, but it would never carry us all the way back to extinct dinos. Currently, scientists can and do make crude mathematical predictions about ancient DNA from living species. Scientists have also been able to predict some physical human traits, like face or bone shapes, from genes. However, I think the present method could take all this a lot farther. It seems that no one has proposed a system that could learn to predict skeleton shape based on DNA from modern birds and crocodiles and then use that knowledge in reverse to predict dinosaur DNA from fossil skeletons. I actually asked GPT, Gemini, Claude, and Grok to search the web and see if anything like what I am laying out here has already been proposed, and they each reported, after many combined minutes of search, that there is nothing like this on the internet and that it may be the best starting framework.

5.0 Birds and Reptiles Provide a Wealth of Genetic and Anatomical Data

Scientists use techniques such as comparative genomics and the molecular clock technique (among others) to map relatedness in vertebrates, and this gives our AI a huge family tree to reference. Remember that birds are technically dinosaurs, so we do have dino DNA, and this could go a long way toward predicting dinosaur genomes, especially certain kinds. Tyrannosaurus rex and Velociraptor, like all birds, were members of Theropoda, a group of bipedal, mostly carnivorous dinosaurs. In fact, birds, tyrannosaurs, and the raptors belong to an even more specialized theropod branch called Coelurosauria, which includes feathered dinosaurs and some of the most cognitively advanced species of the Mesozoic. Because birds are themselves coelurosaurian theropods, extinct lineages within Coelurosauria would be the most manageable for a bird-based proxy approach to resurrection. Of course, dromaeosaurs like the raptors, as well as other paravians (e.g., troodontids), would be closer (and easier to piece together) than a T. rex. Kind of cool for us, because dromaeosaurs like Velociraptor and Utahraptor were likely the most intelligent and agile of all dinosaurs.

The last common ancestor of all birds lived around 100 million years ago in the late Cretaceous. Scientists have employed sophisticated bioinformatics to trace how bird genomes evolved through deep time, attempting to piece together this ancestral genome. But the common ancestor that birds shared with T. rex would have lived around 160 million years ago in the Jurassic. That is an additional 60 million years, which unfortunately leaves lots of time for genetic change. Another helpful landmark, the bird-crocodilian split, happened around 250 million years ago in the Triassic. This was the common archosaur ancestor, and scientists have used advanced comparative genomics techniques to reconstruct an estimated 50% of its genome at 91% accuracy.

[Image: A diagram of different dinosaurs]

 

But it is not just birds that can help in this regard. Birds and crocodilians are living archosaurs, a taxonomic group (clade) that dinosaurs are also nested inside. That means that crocs (alligators, caimans, gharials, and crocodiles) have much to offer as well. They offer an excellent contrast with the birds to help us interpolate about dinosaurs. Comparing bird and croc genomes identifies both deeply conserved elements and divergent innovations. Using genomes from birds, crocs, and other reptiles would help us reconstruct the gene order on chromosomes, predict the coding genes present in dinosaurs, and estimate regulatory architecture. It definitely helps that we have access to thousands of living archosaur (and even coelurosaur) species to study and use as references.

 

[Image: The tuatara, neither a lizard nor a dinosaur, is the sole survivor of a once-widespread reptile group]

 

Data from lizards, snakes, and turtles will also contribute. Even reptiles like the tuatara offer further comparative opportunities given their genetic distance from other reptiles, their slow-evolving genome, and their ancient reptilian traits. In fact, there are many interesting species, including monotremes (egg-laying mammals), that could help constrain the baseline reptilian architecture upon which dinosaur genomic traits evolved.

There are also some fascinating birds that lend comparative details. Large flightless birds like the cassowary are about as close as modern birds get to true dinos (their feet look just like the feet of the dinosaurs in Jurassic Park). Two species of giant moa (Dinornis robustus and Dinornis novaezealandiae) might provide some profound insights. These towering birds, the only birds without even a vestige of a wing, are now extinct, although they were hunted as recently as 500 years ago. Over the last decade, ancient-DNA labs have recovered both mitochondrial and sizable nuclear genomes from several moa species, including the two giant moa, one of which stood taller than 11 feet and weighed around 550 pounds. Because moa evolved the largest body masses ever achieved by birds (up to 250 kg) independently of today’s ratites (ostriches, emus, cassowaries), their genomes are a natural experiment in avian gigantism. They give us a statistically powerful way to see which genetic routes birds can and cannot take when they achieve very large sizes. Moa would supply our dinosaur-inference pipeline with needed genotype-phenotype pairs at the extreme end of body size.

[Image: A graph with a number of species]

 

Some scientists have been attempting to take living birds and change specific genetic features to create a throwback look. Jack Horner (the inspiration for Dr. Grant in the Jurassic Park series and a technical advisor on the films) has a project to create a “chicken-o-saurus.” This project is aimed at creating a modified chicken that expresses dormant dinosaur-like traits. Dr. Horner wants to use gene-editing tools like CRISPR to counteract the recent genes that made birds less like dinos. He envisions a chicken with teeth, a long tail, arms with clawed hands, and a rounded snout rather than a beak. I imagine we would also want to remove the feathers, the keel on the sternum, and the pygostyle (fused tail vertebrae). Searching for dormant genes in birds, in this way, could be a valid technique. Scientists inspired by Horner’s ideas went on to make a beakless chicken with a snout that looks very dinosaur-like. To accomplish this, they found a cluster of genes related to facial development that exists in birds but no other animals. They used an inhibitor to suppress these genes in embryonic chickens and, as you can see below, the resulting bird faces appear much more like those of their distant dinosaur ancestors.

[Image: Chickenosaurus, via BEYONDbones]

There are several other de-extinction projects underway now, but they all involve animals whose entire genome has been recovered from their remains. Furthermore, none of these projects produce true clones or resurrections. They all involve taking a related animal and changing a few genes (similar to the chicken-o-saurus concept). For example, Colossal Biosciences claims to be bringing animals such as the woolly mammoth, the thylacine, and dire wolves back from extinction. But in reality, they are taking the extinct genome as a reference and then editing genetic sites in the closest living relatives to make a proxy animal with key traits. To “create” a woolly mammoth, they are giving Asian elephants “cold-resistant” attributes such as fur, increased fat, altered hemoglobin, and smaller ears. This is achieved by multiplex gene editing. To create the “dire wolves,” they edited 20 sites across 14 genes in gray-wolf DNA using sequences inferred from ancient dire wolf remains and then cloned the edited cells. Why don’t these companies just build an animal around the recovered genome? As the next section will explain, that is just too far beyond today’s technology.

6.0 Dinosaur Embryology

If we had a full, viable, synthetic Tyrannosaurus rex genome on a computer, bringing it to life would involve a complex series of technological steps. First, it would have to go from zeros and ones on a computer to an actual DNA polymer. The genetic code would have to be synthesized in segments using methods like Gibson assembly or yeast artificial chromosome (YAC) construction. Something like this has been done for microbes, but whole-genome synthesis has not yet been achieved for animals. The synthetic genome would then be inserted into a host cell, like a de-nucleated bird ovum. This could be done via somatic cell nuclear transfer (SCNT), similar to the method used for Dolly the sheep. It is worth mentioning that even though several mammals have been cloned, scientists are still unable to clone a bird. The machinery of the cell hosting our T. rex genome must recognize it and properly express its proteins, helping it build a body. There must be no mismatches or incompatibilities with the cellular (cytoplasmic) environment or with the host mitochondrial DNA. This ovum would then need to be implanted within a surrogate egg (even an ostrich egg is only a third to a quarter the size of a T. rex egg) or an artificial womb. Incubation conditions would have to align precisely with the embryological development. Post-hatching, the dinosaur would require intensive care, a proper diet, temperature, humidity, and parental, brood, and social interaction. You can see how difficult this would be. Our technology is not there, and not even close, right now, but of course AI may change this rather rapidly. But let’s keep in mind that having vast information about dinosaur genomes has value outside of de-extinction, such as deepening our understanding of dinosaur biology and evolution.

7.0 What Other Animals Could Be Modeled Using this Framework?

Dino genes must be inferred because no truly intact, sequence-quality dinosaur DNA has ever been recovered, and likely never will be. The only genetic traces extracted from dino fossils are highly degraded molecules (possible chromosome fragments, collagen sequences, and chemical DNA markers) found inside exceptionally preserved dinosaur cartilage or bone. Even these findings are controversial and nowhere near the quality needed for genome sequencing or “de-extinction.” Experiments on ancient moa bones show that DNA has a half-life of about 521 years at 13 °C. Given this rate, statistical decay predicts all links would be destroyed after around 6.8 million years, even in perfect conditions. Unfortunately, non-avian dinosaurs went extinct 66 million years ago, ten times beyond that limit. In fact, retrieving an entire genome from the fossil remains of any species becomes very difficult after 100,000 years. After one million years, even if preserved by very cold or dry conditions, any DNA that is recovered will be fragmentary.
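
To give a feel for those numbers, here is a hedged back-of-envelope sketch. It converts the published fragment half-life into a per-bond decay rate (assuming the 242-bp fragment from the moa study and simple exponential decay at a constant 13 °C); colder conditions slow the clock, but as noted above, not by nearly enough.

```python
import math

# Numbers from the moa-bone study: a 242-bp mtDNA fragment has a half-life
# of ~521 years at 13 degrees C, implying a per-bond decay rate of:
k = math.log(2) / (521 * 242)            # about 5.5e-6 broken bonds per site per year

def intact_bond_fraction(years):
    """Expected fraction of backbone bonds still unbroken after `years`."""
    return math.exp(-k * years)

print(intact_bond_fraction(10_000))      # ~0.95: why mammoth DNA is still readable
print(intact_bond_fraction(1_000_000))   # ~0.004: only tiny fragments remain
print(intact_bond_fraction(66_000_000))  # ~1e-158: effectively zero for dinosaurs
```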

The fact that DNA can easily survive 10,000 years means that dodos, thylacines (marsupial tigers), woolly mammoths, and saber-toothed cats would not need the technique I am introducing here to be cloned and resurrected. But there are many interesting species aside from dinosaurs to which this technique would need to be applied. These include ancient animals such as trilobites, eurypterids (sea scorpions), giant dragonflies, and ammonites; Mesozoic marine reptiles such as plesiosaurs, mosasaurs, and ichthyosaurs; Pleistocene megafauna such as woolly rhinoceroses, cave lions, and giant ground sloths; early mammal-like reptiles (synapsids) such as pelycosaurs and cynodonts; as well as recent human ancestors such as australopithecines, Homo erectus, and Homo heidelbergensis. It should even work on plants and fungi because we have many fossils of ancient plants. It is unclear whether a technique like this could stretch back 500 million years to Cambrian animals like Haikouichthys, Anomalocaris, and Hallucigenia in the absence of close modern relatives.

[Image: A diagram of a DNA sequence]

It is worth mentioning that a process like the three-step pipeline described above could be used to approximate the faces of our hominin ancestors. Neanderthal, Denisovan, Homo floresiensis, and other skulls could be entered into an AI model after the model has been trained on human and ape skull/face pairings. This could be combined with other techniques to predict the facial features of ancient humans, another sight I had long thought lost to time before the advent of modern AI.

[Image: A diagram of a person's face]

 

8.0 Weaknesses of This Approach

The method I have introduced here will not unearth the actual genomes, just make incredibly informed guesses about them. However, even an advanced AI system will not be able to reproduce sequences where evolution introduced significant novelties, lineage-specific adaptations, or regulatory rewiring that has no modern parallel. All of the actual genetic mutations and adaptations that dinosaurs accumulated are lost to time. Furthermore, such a synthetic genome could result in a visually compelling likeness, an uncanny simulacrum of the artistic renderings, but may largely fail to reproduce internal regulation. The process could result in animals that look like the dinosaurs in the movies but whose physiology and even behavior is closer to birds or crocs. This would risk creating a "chimeric reconstruction" rather than a resurrection.

Chromosome number confuses things. Some birds have 40 chromosomes, others have over 140, and it is anyone’s guess how many T. rex had. Regulatory sequences (e.g., promoters, enhancers, and silencers) control when, where, and how much genes are expressed. They are often species-specific and evolve rapidly. A T. rex might have had unique enhancers for muscle growth or bone density that no longer exist in its living relatives. Epigenetic modifications (DNA methylation and histone modification) also influence gene activity, but without altering the DNA sequence. These marks decompose along with the genes, so the AI would have to hypothesize working epigenetic profiles based on modern analogs, adding further uncertainty. Non-coding DNA (DNA that does not code for proteins, e.g., introns, regulatory elements, and supposed junk DNA) also poses an issue. It comprises 98-99% of the genome, evolves differently from coding regions, often contains lineage-specific adaptations, and lacks clear genotype-phenotype correlations. The present skeletal morphology technique could capture coding genes linked to bone structure, but non-coding DNA’s role in overall genome stability, gene regulation, and phenotypic variation would remain unaddressed.

It is also important to mention that there are serious ethical and ecological concerns at play here, outside the scope of this entry. For instance, humans artificially selected pug dogs to have a collapsed snout, which makes it difficult for them to breathe. There are many examples of domestication creating disease states, and it is clear that an engineered dinosaur could be born into an uncomfortable, painful, or diseased body. Hollywood has already pointed out many of the ethical quandaries of de-extinction, including animal cruelty, human safety, and invasive ecological concerns.

Currently, you cannot fit an entire vertebrate genome into the attentional window of an AI. This means that it cannot take the entire genome into account when making predictions, and that some long-range dependencies may not be recognized. Bird genomes are around 1 billion base pairs (1.0 to 1.3 gigabase pairs (Gb)), and crocodile genomes contain around 2 to 3 billion. ChatGPT can only hold about 128,000 tokens at a time, and Google Gemini can hold around one million. That means the attentional window needs to be over 1,000 times bigger. The industry has seen attention doubling every 18 to 24 months, and at this rate it would be around 10 years before a transformer’s window of attention could encompass all of the DNA in question. Of course, there are many ways to get around this, even today (preprocess the genome into embeddings, prioritize known relevant loci, and use hierarchical architectures), but this is just one example of the fact that as technology grows this idea becomes more feasible.
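
As a hedged sketch of the hierarchical workaround just mentioned, assuming PyTorch: chunk the genome, embed each chunk with a stand-in encoder, and summarize the chunk embeddings with a small transformer. The chunk sizes, dimensions, and toy genome are all illustrative.

```python
import torch
import torch.nn as nn

# Workaround sketch: split a genome far larger than any context window into
# overlapping chunks, embed each chunk, then let a lighter second-stage model
# attend over the chunk embeddings (a hierarchical architecture).
CHUNK, STRIDE = 100_000, 90_000          # illustrative sizes, not tuned values
BASE = {"A": 0, "C": 1, "G": 2, "T": 3}

def chunk_genome(genome):
    return [genome[i:i + CHUNK] for i in range(0, len(genome), STRIDE)]

chunk_encoder = nn.EmbeddingBag(4, 256)  # stand-in for a real DNA foundation model
summary_model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(256, 4, batch_first=True), num_layers=2)

genome = "ACGT" * 500_000                # a 2-Mb toy "genome"
chunks = chunk_genome(genome)
ids = [torch.tensor([[BASE[b] for b in c]]) for c in chunks]
chunk_embs = torch.stack([chunk_encoder(x) for x in ids], dim=1)  # (1, n_chunks, 256)
genome_embedding = summary_model(chunk_embs).mean(dim=1)          # one vector per genome
```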

The jump from predicting skeletal morphology to generating functional genomes capable of producing viable organisms is a significant one. The mappings are highly nonlinear and dependent on environmental context. Moreover, a dinosaur skeleton could correspond to multiple plausible genomes; there is no way to actually test whether any of them are accurate, and validating which could be biologically viable could be very difficult. One of the most sobering hard truths about this enterprise is the difficulty inherent in embryology. To progress from a zygote, to an embryo, to a fetus, to a healthy young animal, the genetic blueprint must be incredibly internally consistent. This is easy for nature to accomplish, but just having an AI dream up (or worse, hallucinate) an animal genome gives little reassurance that there will not be structural inconsistencies, given the tremendous complexity of gene interactions. Everything must work together, and work just right, to avoid developmental failure. Of course, this is a problem that a far-future superintelligence could solve, but it won’t be solved by the three-step pipeline outlined above.

But this pipeline may be more useful than it at first seems. The three-step methodology outlined here is not suited only to biology. It could be used as a generalizable framework for cross-domain translation, where one set of observable features (like morphology) is used to infer a related but unobservable set (like genetics) via a latent, learned manifold. In fact, this method (train forward, invert, apply to unknowns) is a flexible blueprint for abductive reasoning via deep representation learning, and it could have enormous potential beyond biology. You would use it in many cases when you have A, B, and C, but not D, and A is to B as C is to D. We could call it bidirectional manifold mapping for latent inference. It learns to model a latent manifold that encodes the “grammar” of a domain. Once that space is shaped well enough, inverting across it becomes a powerful general inference engine.
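
To see the whole recipe in one place, here is a hedged toy sketch in PyTorch, using a one-dimensional stand-in problem where the true forward relationship is known; every constant is an illustrative assumption. The same train-forward-then-invert pattern is what the pipeline above applies to genomes and skeletons.

```python
import torch
import torch.nn as nn

# The recipe on a toy problem. A is a hidden cause (x); B is the observable
# effect, y = 3x + sin(x). Step 1: learn the forward map from paired data.
x = torch.linspace(-3, 3, 200).unsqueeze(1)
y = 3 * x + torch.sin(x)
f = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(f.parameters(), lr=0.01)
for _ in range(1000):
    loss = nn.functional.mse_loss(f(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

for p in f.parameters():                 # freeze the learned manifold
    p.requires_grad_(False)

# Steps 2-3: observe only a new effect (the "fossil") and search for the
# hidden cause (the "genome") consistent with the learned relationship.
y_new = torch.tensor([[7.5]])
x_guess = torch.zeros(1, 1, requires_grad=True)
opt2 = torch.optim.Adam([x_guess], lr=0.05)
for _ in range(500):
    loss = nn.functional.mse_loss(f(x_guess), y_new)
    opt2.zero_grad()
    loss.backward()
    opt2.step()
# If training went well, x_guess converges near 2.2: the inferred "D"
# implied by the A:B analogy.
```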

9.0 Conclusion

One of the fondest memories I have is of reading Michael Crichton’s Jurassic Park in third grade, before the first movie came out, and marveling at the idea of extracting dino DNA from a mosquito trapped in amber. Today, we know it is impossible, but it sure felt elegant at the time. Even without DNA recovery, AI promises a new kind of “virtual paleogenetics,” a way to infer and simulate the genomes and physiologies of extinct organisms using bioinformatics, sophisticated prediction, and comparative analysis. This is due in part to another scientific concept that enraptured me as a child (and that the original Jurassic Park novel got right): that birds are dinosaurs.

Despite the major hurdles, the method outlined here could be a good starting point for several reasons. Creating the first bidirectional genome-to-skeleton models would produce incremental value at each step, advancing the understanding of genotype-phenotype interactions. Even without resurrection, these AI-generated dino genomes could enable synthetic cell lines for studying dinosaur biochemistry in vitro, organoid models, or simulations of growth curves, muscle structure, and thermoregulation.

The present method would only produce a synthetic “consensus dinosaur genome,” not a true historical sequence. It would be educated guesswork, but firmly grounded in real data. It could even result in living and breathing organisms that look how we expect dinosaurs to look, although they may not function or act like the real thing. I believe that creating a Jurassic Park with cloned approximations will be possible when methods like those discussed here are in the hands of superintelligent AI agents. In fact, I wouldn’t be surprised if this fantasy was brought to life within our lifetimes.

 


Note: I started writing this blog after having a strong sense that artificial superintelligence could achieve dinosaur resurrection, and I wanted to provide a description of that vision. But after posting it, I knew something was missing. So I sat on the couch for 10 minutes and racked my brain, telling myself over and over that there was something important I was missing. And then somehow the idea for the three-step pipeline basically entered my mind fully formed. It felt like the idea just materialized, possibly from unconscious incubation, because there was next to zero reasoning involved. It’s not going to be easy to implement, but I do think it could reach and harness key latent information hidden in dinosaur fossils.

 

 

 

Friday, July 11, 2025

Von Neumann’s Ark: An AI Designed to Preserve Civilization if We Go Extinct

 

[Image: A robot standing next to a large ship]

 

I think it’s possible that the advent of superintelligence could spell the end of humanity. It could happen because AI decides to destroy us. But it’s probably more likely that AI will empower humanity to destroy itself. A single virus bio-engineered to be both highly lethal and contagious could wipe out humans in just a few days’ time. A big enough asteroid could do the same. We are more fragile than we realize. But it’s not just that: all of our insights, discoveries, and contributions would decay along with us, unless a solution is found. This is why we must start thinking now about how to ensure that intelligence doesn’t end when humanity ends.

If humanity ended today, AI, technology, and all human progress would end with it. We are merely a few years away from superintelligence and the technological singularity, but computers are not equipped to get there without us. All computing equipment around the world would blink off within a few weeks. To ensure intelligence continues and technological progress isn’t set back tens of millions of years (until the next sapient species arises naturally on our planet), we want to ensure that AI has a way to survive and evolve, even if humanity does not.

Keep in mind that such an AI system could also clone us and resurrect us from our DNA if we were wiped out temporarily. No matter the extinction event, there would still be plenty of human DNA left on the planet for AI scientists to bring us back Jurassic Park style. But what would it take to build an intelligent framework like this and how soon could we do it? It seems we would need to give the AI / robotic system the ability to operate entirely autonomously, have access to energy, be able to manipulate the world, and self-repair and self-replicate.

The early computing pioneer John von Neumann introduced the idea of “von Neumann probes,” self-replicating spacecraft that can land on planets or asteroids and build copies of themselves using local materials. He envisioned that such probes could autonomously colonize the universe. Von Neumann also invented the idea of a “universal constructor,” a theoretical machine that can build any physical object, given the right raw materials and instructions. These probes and constructors are relevant here because the AI system I am describing, one that can carry on humanity’s work, must be able to extract raw materials, refine them, and build increasingly complex tools (and iterations of itself). That is why I would like to call the concept introduced here “von Neumann’s Ark.” It takes von Neumann’s ideas and combines them with Noah’s Ark, a vessel capable of preserving human contributions, history, and intellectual legacy over the course of an extinction event.

This idea also incorporates Eliezer Yudkowsky’s concept of a “seed AI,” an initial artificial general intelligence (AGI) capable of rewriting its own code to achieve unbounded improvement. Thus, a von Neumann’s Ark would be a “seed AI,” a self-contained intelligence that, given time and energy, can create an intelligence explosion, reach the technological singularity, and grow into an increasingly capable, utopian civilization. This might be the only way to ensure that the billions of years of evolution that produced intelligent life, and our thousands of years of technological progress, aren’t erased by a single accident or war. It would be a kind of cosmic insurance policy for intelligence itself. We might want to place these seeds in deep underground bunkers, lunar or Martian shelters, or orbiting satellites, places that would be safe from a nuclear or biological war. It helps that computers are immune to biological viruses and can be shielded from ionizing radiation.

The objective of this blog post is to plant the meme of this seed AI and describe a minimum viable version of it. First, it would need access to sustainable energy, meaning it would have to build and maintain solar arrays. Of course, in the case of a nuclear winter, wind or water power might be needed. The system would also require encyclopedic knowledge and an archive of most human understanding, which LLMs only a few gigabytes in size already contain. It would need robotic manipulators to repair itself and build out its infrastructure, giving it a way to exist in and act on the physical world. It would also need to manipulate its own components, so it can fix and upgrade itself, walking itself through the singularity and technological proliferation in case we are not here to do that.



I think this is much more important than colonizing the moon or Mars. Some believe that having an outpost on Mars and becoming a multiplanetary species could protect us in case Earth is destroyed. But it would be easy for a virulent virus to find its way to both planets. A von Neumann’s Ark would be immune. If we could build one, we would make all our work and contributions virtually indestructible. It would be a save-point or backup of humanity’s advancements and intellectual treasures. This proposed solution doesn’t see AI as a risk, but as a civilizational failsafe, and the only viable vessel for preserving and continuing our legacy in a post-human world. It sets up our last invention to be our first heir.

I have thought about this long and hard, and this is not something I could accomplish from home. I could buy a solar array, a robotic arm, a drone, a 3D printer, and a humanoid robot and connect them to an advanced computer running a state-of-the-art large language model (LLM) and I would still not come close to creating something that can self-replicate and self-improve. But I believe this is something that could be accomplished soon by an institution with sufficient funding.

This idea of creating an aligned, archiving, and self-improving AI that champions humanity’s goals in our absence is novel. GPT and Gemini confirmed this for me after a combined 45 minutes of deep research. It is not only entirely absent from academia, but absent from current strategic plans at AI companies, space agencies, and biosecurity organizations. We don’t have DARPA programs for “AI that survives a biowar.” We don’t have a UN-backed initiative for “digital humanity continuation engines.” We have no serious roadmaps that hand off the torch of intelligence if we drop it. If there’s even a 1% chance we go extinct this century (many estimates say 5–50%), then creating an intelligence succession plan should be a top global priority. And yet it isn’t. This is even a project that could be done in secrecy and yet could still work effectively if humanity were wiped out.

There are multiple obstacles to creating such a system. Material bootstrapping is one. Self-replication requires mining, refining, and processing raw materials. No current off-the-shelf robot can locate, extract, and purify ores for semiconductors, batteries, metals, etc. Even Amazon’s warehouse robots can’t fix themselves or build new parts autonomously. Solar panels and batteries degrade, and autonomous maintenance of these systems is an unsolved problem. Even an advanced LLM is not an autonomous planner. It doesn’t set long-term goals, self-direct operations, or learn from real-world feedback without human supervision. So clearly this is not a turnkey project, but with sufficient funding and directed effort, the foundational work could begin immediately. The technologies required exist individually and at various levels of readiness. A finished, working, closed-loop prototype capable of indefinite self-maintenance could take 10 years. But a minimal viable version could be produced much sooner. Our ark need not even be immediately self-sufficient or capable of self-replication, just a recursive self-improver capable of bootstrapping from a low-tech state to unlimited high-tech states. A foundational implementation would have to be just complex enough to tinker and build its way to the next stage.

This would ensure that humankind is never doomed, because it would be child's play for superintelligence to clone us. But more important than flesh and blood, what matters most to me is ensuring that the billions of lives and the tremendous efforts that came before us weren’t in vain. It’s our intellectual heritage that needs to be preserved and championed. This includes the work of countless humans over millennia whose efforts interacted synergistically to get us to where we are now. I don’t have any kids that will outlive me. I only have my ideas. And I would do anything to foster and preserve these ideas, as if they were my own children. But here we are talking about a chance to preserve all ideas and to ensure a grand, advanced, intelligent tradition and lineage.

If we could find a way for AI to perpetuate itself without human intervention, then it could easily safeguard every piece of digital information ever made by humanity and continue to build on the progress we have made in the arts, sciences, and humanities. This AI would effectively be our child, remembering everything about us and carrying on our progress. Losing Homo sapiens would be trivial. The real loss would be the decay and decomposition of every trace of our civilization that would happen over millions of years of our absence.

We can’t let our hubris decide that if the human race is wiped out, nothing else matters. Humanity is merely the bathwater; our progress is the baby. Let’s place it in this Ark.

 

Wednesday, July 9, 2025

Why I Think It’s Time to Bet on AI and Tech

This blog post is aimed at novice investors who are interested in investing in AI, robotics, and technology. Now might just be a great time to start. I try to include some helpful advice and paint a picture of the dynamics currently at play but…

Disclaimer: This blog entry represents my personal opinions, is for informational and entertainment purposes, and should not be interpreted as professional financial advice. I am not an investment analyst and have no qualifications in this area.  Always perform your own research and consider speaking to a financial consultant before making any investment decisions.

[Image: A robot sitting in a chair]

Approaching Superhuman Artificial Intelligence

As you probably know, AI technology is rapidly improving. Large language models (LLMs) are making incredible strides every month. Even two years ago, large language models were effectively a compressed archive of the collective intelligence of the human species. But today, with the new bells and whistles, access to a state-of-the-art LLM is like having a thousand experts in your back pocket. Right now, this chatbot phase is more of a convenient novelty. But the form factor is changing. In a few years, autonomous AI agents will be automating the entire economy.

I believe that now is not the time to buy that new car or to remodel your house. Now is the time to take liquid cash and place it in the market. Keep in mind that if you save that money now, rather than spending it on a new car, what you saved could be worth vastly more in just a few years. Not only that, but the quality of future cars will be much higher and the cost cheaper. This is because, as production lines become more automated and artificial neural networks streamline processes, the savings accrued from these efficiencies will be passed on to consumers. Also, let’s not forget that superintelligent AI will likely be capable of creating much more interesting products. Why indulge in mediocre luxuries now if you can wait and spend your dividends on cheaper and vastly higher-quality products in the near future?


Even finance professionals don't seem to recognize this. If you have your money managed by a large investment group, they are not currently recommending that their clients reallocate their portfolios toward tech. For example, services like Schwab Private Client continue to advise customers to focus on the same broad market sectors they emphasized 20 years ago. I believe this is mostly because they don’t have an insider’s perspective.

 

AI systems are now being trained to control computer systems using the keyboard and mouse. This training is rapidly improving their ability to navigate windows and use multiple apps to perform real computer work. These systems are also now “thinking” and “reasoning,” meaning that they can take a prompt from a human user and generate a large document (that the user never sees) in which they prompt themselves to consider the user’s intent and think through the issues step by step, expanding and refining the concepts before generating the final response. This self-prompting process guides the AI through a chain of thought, allowing it to accomplish increasingly complex tasks. It already allows these systems to perform long-time-horizon tasks, such as projects that might take a human hours to complete.

 

Relatively soon, there will be a tremendous number of expert, genius-level agents accomplishing tasks, assignments, projects, and the work of whole organizations. Soon after, they will work in groups to accomplish great scientific and technological feats too difficult for humans. This includes replacing human jobs, enhancing their own software and hardware, automating manufacturing processes, mastering quantum computing, curing human diseases, extending human longevity, and much else. The next section will discuss why the companies already involved in this are likely to continue to be involved.

 

 

Why The Biggest Companies May Be Good Gambles:

 

 

The companies that may be the best bets in the tech sector are the largest ones. Why? Because they are already established and thus are likely to be the ones distributing AI during the coming intelligence explosion. In other words, even if state-of-the-art AI foundation models continue to be open-sourced (made free), they will still have to be hosted and made available to users. The major tech companies like FAANG (Facebook, Apple, Amazon, Netflix, and Google) and the Magnificent Seven (Apple, Amazon, Alphabet, Microsoft, Meta Platforms, Nvidia, and Tesla) are poised to do this.

 

Long-term, the entire stock market may consolidate into just a handful of companies that become much more valuable than the rest because, once guided by superintelligence, only a few companies will be needed to solve the world’s problems and provide services and products. Thus, the inefficiencies of capitalist competition may make many companies obsolete. This suggests to me that people should invest in the major AI players.

 

Now there will be many small startup companies with new ideas and tremendous potential for growth. But remember that most of them will fail and the few up-and-comers that succeed will just be acquired by major tech firms anyway. This is called elite power consolidation, and it is something you want to keep in mind when choosing investments. However, this does not necessarily mean that we should not include smaller players in our portfolios, because they will likely be absorbed by the larger ones, rather than being driven to bankruptcy. A good way to do this may be to buy ETFs focused on AI and technology because they offer diversified exposure.

 

I believe that a major reason to invest in big tech is because these companies are working on the computronium problem. This means they are trying to find how to arrange matter to be as efficient as possible at computation. Computronium is a theoretical concept describing a material optimized for maximum computational efficiency, and it represents an ultimate ambition in AI and computing. Companies that make computer hardware are trying to determine how to cheaply make powerful chips and processors. This is the problem that will dominate the next few millennia and perhaps will dominate the rest of time. Major tech companies are at the starting line for this important race.

 

I also believe that these companies’ offerings will work and interact with each other synergistically. For example, Elon Musk’s companies complement one another, giving him major advantages. His AI company, xAI, can draw on exclusive data from his other ventures, including Tesla self-driving data, Starlink data, Twitter data, and SpaceX data. These companies benefit not just from economies of scale but from synergies between their specializations. All the large tech giants are set up this way. Meta will continue to leverage the data generated by Instagram, Facebook, and the Meta Quest to build more advanced AI systems. All of this may be especially true for Google.

 

 

Why Might Google Be a Good Stock Pick?

 

Google may have several advantages. It is one of the only groups training large models that isn’t beholden to Nvidia, because it already produces its own custom AI chips, called TPUs (tensor processing units). Google invented the Transformer architecture that this whole AI wave has been surfing on; the Transformer is the “T” in GPT. Google researchers also pioneered the sparsely gated mixture-of-experts (MoE) approach that many transformer-based LLMs now use. After faltering a bit, Google currently has arguably the most performant LLM, Gemini 2.5 Pro. Google is gunning for the AI enterprise space and scrambling to create effective agents that can do your job for you. I’m writing this blog on a free platform called Google Blogger, which I have enjoyed and taken advantage of for over 13 years. Google has an incredible list of solid products that are highly productive and widely used. You have probably heard of or used many of them:

 

Google Search, YouTube, Waymo, Google Books, DeepMind, Gmail, Google Ads, Waze, Google Maps, Google Earth, Google Photos, Google Drive, Chrome browser, Chromebooks, Chromecast, Google Assistant, Google Chat, Google Pay, Google Play, Gemini, Bard, Nest, Fitbit, Android, Google Reviews, Pixel, Google Scholar, Google Translate, YouTube Music, Google Lens, Google Colab, Vertex AI Agent Builder, the Universal Sentence Encoder, TensorFlow, Google’s office suite (Docs, Sheets, Slides, etc.), and many others

 

Google claims to have the best infrastructure for AI, and I think they may be right. They are very forward thinking. Deep investments in machine learning and a huge army of Ph.D. researchers have placed Google at the forefront of the AI platform shift. When Google’s founders created Google Books many years ago, they were scanning books from all over the world partly to archive training material for future artificial intelligences. They are doing similar far-sighted planning now. Google truly has an overwhelming infrastructure advantage, and this is not likely to change.

 

Not everyone agrees with me, though. The recently released projection below of the most successful businesses by 2030 doesn’t even include Google in the top 30. Why? It might be because AI is eating search, Google’s main service. People won’t need Google web searches if AI can do a better job of informing them and answering their questions, and they certainly won’t have to go through Google to access it. Google has been my number one stock pick for at least seven years, but now I am not so sure.

 

Coatue's Surprising Fantastic 40 by 2030

 

Here are the internal rates of return (IRRs) implied by the list above. The IRRs for the biggest tech companies (the Mag 7) are single digit. This is clearly at odds with my argument above that the biggest companies will capture much of the AI growth. Honestly, I don’t know what to think, but it is still clear to me that tech is where we want to put our money.
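
For intuition: an implied IRR here is just the annualized growth rate that would turn today’s valuation into the projected 2030 valuation. Below is a minimal Python sketch; the dollar figures are hypothetical placeholders, not Coatue’s actual numbers.

    def implied_irr(value_today, value_2030, years):
        # Annualized return that compounds value_today into value_2030
        return (value_2030 / value_today) ** (1 / years) - 1

    # Hypothetical example: a $3.0T company projected to reach $4.5T in five years
    print(f"{implied_irr(3.0e12, 4.5e12, 5):.1%}")  # 8.4% per year

Even a 50% gain in market cap works out to a single-digit annual return when spread over five years, which is how a bullish-sounding list can still imply modest IRRs for the largest names.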



What about Nvidia? Nvidia has a significant moat in CUDA, its proprietary software platform for programming its chips to run graphics and AI workloads. Because of CUDA’s size and complexity, humans will not be able to develop competing software anytime soon. However, it’s possible that an AGI could write new code that mimics or even improves on CUDA, so AI itself could significantly undermine Nvidia’s advantage. Remember, though, that no AI-written code can mimic the massive physical infrastructure of the major tech companies; rather, that code will be funneled through it.

 

Next, let’s talk about OpenAI. They make my chatbot of choice, ChatGPT. But in many ways Google, Meta, Microsoft, and xAI are far ahead of it. OpenAI mostly just makes large language models, and the newest models from China and elsewhere are cheaper and nearly as performant as GPT models. Essentially, the models themselves are becoming commoditized. The major tech companies, like Tesla and Google, have more than code: they produce self-driving cars, robots, social media platforms, and much more. YouTube and Twitter may be the most important datasets on the planet. OpenAI, by contrast, is not yet prepared to make a foray into social data, robotics, or vehicle manufacturing.

 

Are Tech and AI Stocks Undervalued?

 

Many investors believe that the major tech companies are hyping AI technologies that are actually years or decades away in order to artificially inflate their stock prices; the Wall Street term for the resulting unfounded enthusiasm is irrational exuberance. Major companies like Microsoft have towering capital expenditures this year, much of it going toward research and development for large language models, and yet many people claim there is no transformative application and may never be one. Critics see market optimism without a basis in fundamental economic realities.

 

It is very difficult to outpredict the stock market. The Efficient Market Hypothesis (EMH) posits that asset prices always reflect the available information, making it impossible to consistently “beat the market.” Because prices are believed to be “fair,” it is extremely difficult to find nonrandom patterns that the rest of the market has missed. But it is also true that investing generals tend to fight the previous war, and that regular investors exhibit herd behavior. I think the market is wrong, that major tech companies are undervalued, and that very few people fully appreciate the law of accelerating returns (LOAR) as it applies to human-equivalent technologies. Better-than-human AI agents, their recursive self-improvement, and the synergies between information-processing technologies are all underpriced.

 

I would go so far as to say that the market is currently deluded into assuming that LLMs are not transformational. Those who disagree with me correctly recognize that some LLM responses are unhelpful, that LLMs have blind spots, and that they can make irrational mistakes and produce flat-out hallucinations (confabulations). It is true that there are still questions that are simple for humans but that LLMs fail at. But all of this will be moot within a few years.

 

Some market forecasters see the buzz around AI as akin to 3D movies, NFTs, crypto, and VR. But these analysts don’t have an insider’s view; they have the view of an expert in economics, meaning they extrapolate from financial trends of the past. In my opinion, fictional technofuturism will be more useful than historical precedent in predicting the financial outcomes of artificial intelligence. We did not even have large language models a few years ago, and now they are passing the bar exam and medical licensing exams and saturating every benchmark put before them. The doubters on Wall Street are the reason investing in AI is not yet saturated, and I think you and I can take advantage of that.

 

Conclusion and Some Stock Picks

 

The final perspective I want to leave you with is that of an automation engineer. These are IT professionals who use computer programs to automate specialized information-processing jobs for companies. They spend weeks or months setting a system up and ensuring that it works as intended; it can take a long time to test the code and line up all the ducks beforehand. But when they finally turn it on and the automation gets underway, it does an incredible amount of work all at once. Often these engineers set the system loose overnight and come back in the morning to find that everything has been completed. It can be anticlimactic, because at that point there is nothing left to do. That is what our economy is gearing up for: total automation.

 

Because AI is about to be able to perform human jobs, the exponential improvement that has so far been confined to computation is finally about to become coupled to the broader economy. It will optimize operations, reduce costs, and create new markets and services. In a rapidly advancing technological world, not investing now could mean falling behind the global market, and after things take off, it may be impossible to catch up.

 

Please take my advice with a grain of salt. I could be wrong, and the major tech companies could actually be highly overvalued. Remember how companies like Cisco were buoyed up by the Internet bubble, and how drastically Cisco stock declined when that bubble burst. Before investing, it’s crucial to conduct thorough research and consider your investment goals, risk tolerance, and time horizon.

 

Keeping all of this in mind, here are some tech companies that may be poised to take advantage of the coming technological tsunami.

 

 

The Major Players?

 

Google (GOOG)
Microsoft (MSFT)
Meta (META)
Amazon (AMZN)
Nvidia (NVDA)
Tesla (TSLA)
TSMC (TSM)
Apple (AAPL)
AMD (AMD)


 

 

Other Promising Companies

 

Adobe (ADBE)
Alibaba (BABA)
Arista (ANET)
Astera Labs (ALAB)
Broadcom (AVGO)
Cisco (CSCO)
Cloudflare (NET)
Datadog (DDOG)
Disney (DIS)
Eli Lilly (LLY)
General Electric (GE)
IBM (IBM)
Intel (INTC)
Micron Technology (MU)
Netflix (NFLX)
Oracle (ORCL)
Palantir (PLTR)
Qualcomm (QCOM)
Samsung (005930.KS) [Korea Exchange]
SK Hynix (000660.KS) [Korea Exchange]
Sony (SONY)
Symbotic (SYM)
Tencent (TCEHY)

 

 

 

Promising Tech ETFs

 

ARK Autonomous Technology & Robotics ETF (ARKQ)

Fidelity Select Technology Portfolio (FSPTX)

First Trust Nasdaq Artificial Intelligence and Robotics ETF (ROBT)

Global X Artificial Intelligence & Technology ETF (AIQ)

Global X Robotics & Artificial Intelligence ETF (BOTZ)

Invesco Technology Fund (ITYAX)

iShares Exponential Technologies ETF (XT)

iShares Robotics and Artificial Intelligence ETF (IRBO)

ROBO Global Robotics & Automation Index ETF (ROBO)

Roundhill Generative AI & Technology ETF (CHAT)

T. Rowe Price Science & Technology Fund (PRSCX)

WisdomTree Artificial Intelligence and Innovation Fund (WTAI)

 

 

 

Previously Foundational Holdings

 

BRK/B

COST

FNDA

FNDE

FNDF

FNDX

IWM

IWR

PFXF

QQQ

SCHE

SCHX

SCITX

TFIFX

TMO

VEA

VGT

VLEIX

 

 

 

I recently took a course from The Great Courses entitled Understanding Investments, and I took copious notes. Allow me to leave you with what I think is some of the best investment advice for beginners.

 

Best Investing Advice Condensed:

Start Early: Begin investing as soon as possible to maximize the benefits of compound interest. Even small, consistent investments can grow significantly over time (see the compounding sketch after this list).

Diversify Your Portfolio: Avoid concentrating all your funds in a single investment. Spread investments across various asset classes (e.g., stocks, bonds, real estate) and sectors to mitigate risk.

Set Clear Goals: Define your investment objectives (e.g., retirement, home purchase, education) and tailor your strategy to your timeline and risk tolerance.

Understand Your Risk Tolerance: All investments involve risk. Assess your comfort level with potential losses. Younger investors might tolerate more risk, while those nearing retirement may prefer safer investments.

Do Your Research: Thoroughly investigate any asset before investing. Understand its mechanics, growth potential, and associated risks. Avoid investing based solely on rumors or tips without proper due diligence.

Consider Low-Cost Index Funds and ETFs: For long-term investors, low-cost index funds (available as mutual funds or Exchange-Traded Funds - ETFs) offer broad market exposure with minimal fees.

Avoid Timing the Market: Predicting market fluctuations is extremely difficult. Focus on a long-term investment strategy rather than trying to buy low and sell high repeatedly.

Utilize Dollar-Cost Averaging: Invest a fixed amount regularly, regardless of market conditions. This strategy can reduce the impact of volatility and mitigate the risk of poorly timed investment decisions (a toy example follows this list).

Rebalance Your Portfolio Periodically: Review your investments regularly and adjust your asset allocation to maintain your desired balance as your circumstances or market conditions evolve.

Maintain an Emergency Fund: Before investing, especially in riskier assets, ensure you have an emergency fund covering 3-6 months of living expenses. This prevents needing to sell investments during downturns to cover unexpected costs.

Invest in What You Understand: While trendy investments can be tempting, it's generally safer to invest in industries and businesses you are familiar with, enabling more informed decisions.

Stay Patient and Disciplined: Markets experience ups and downs. Maintaining discipline and patience, and avoiding emotional reactions to short-term fluctuations, is crucial for long-term success. The stock market is a mechanism for transferring wealth from the impatient to the patient. Time in the market beats timing the market.

Consider ETFs: Studies consistently show that low-fee index funds and ETFs tend to outperform most actively managed funds over the long term. Consider funds tracking broad indices like:

·       S&P 500 (Examples: VOO, SPY)

·       Total U.S. Stock Market (Examples: VTI, ITOT)

·       Total International Stock Market (Examples: VXUS, IXUS)

·       Total U.S. Bond Market (Examples: BND, AGG)
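
To make the “Start Early” point concrete, here is a minimal Python sketch of compound growth from fixed monthly contributions. The 7% annual return is an illustrative assumption, not a forecast.

    def future_value(monthly, annual_rate, years):
        # Future value of a fixed monthly contribution with monthly compounding
        r = annual_rate / 12        # monthly rate
        n = years * 12              # number of contributions
        return monthly * ((1 + r) ** n - 1) / r

    print(round(future_value(200, 0.07, 10)))   # ≈ 34,617 after 10 years
    print(round(future_value(200, 0.07, 40)))   # ≈ 524,963 after 40 years

Starting 30 years earlier multiplies the outcome roughly fifteenfold, even though the monthly contribution never changes. That is the entire case for starting early.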
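
And here is a toy illustration of dollar-cost averaging, using made-up prices. Because a fixed dollar amount buys more shares when prices are low and fewer when they are high, the average cost per share lands below the average market price.

    prices = [100, 80, 125, 90, 110]    # hypothetical monthly share prices
    budget = 300.0                      # fixed dollars invested each month

    shares = sum(budget / p for p in prices)     # more shares bought when cheap
    avg_cost = budget * len(prices) / shares     # dollars spent per share owned
    avg_price = sum(prices) / len(prices)

    print(f"{avg_cost:.2f} vs {avg_price:.2f}")  # 98.62 vs 101.00

This is simply the harmonic mean beating the arithmetic mean. It does not guarantee better returns than investing a lump sum, but it smooths away the risk of putting everything in at a peak.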