Pattern Recognition: What Astrologers, Crows, and AI Detectors All Have in Common

AI Detectors

A crow on a telephone pole knows your face. An astrologer at a kitchen table knows your year. A piece of detection software at a university knows that the essay you submitted last Tuesday is statistically suspicious. Three minds, separated by every possible kind of distance, doing variations of the same thing. They are reading patterns. None of them is always right. All of them are doing something very old, very strange, and quietly miraculous.

Two of these pattern-readers are mostly harmless when they are wrong. The third one can ruin your career.

This is a piece about that asymmetry, and what to do about it.

The crow on the telephone pole

In the spring of 2006, the wildlife biologist John Marzluff and a small team of graduate students walked across the campus of the University of Washington wearing rubber masks. The masks were ordinary, the kind you might wear at a Halloween party. The biologists used them to do something that crows have a long memory for. They trapped seven to fifteen crows at each of five sites, banded the birds, and released them. The work took a few hours. The masks came off and went into a bag.

For the next several years, on a roughly six-month rotation, the masks would come back out. A researcher would walk through the same campus wearing the same face. The crows knew. Not just the original birds. The next generation of crows knew too, having learned about the dangerous mask from their parents and from other crows who had seen it before. Almost seven years after the original trapping, the masked researcher was still being scolded and mobbed by crows at every site, even though most of the birds had never been banded and many had never seen the mask before. The percentage of crows who joined the harassment had roughly doubled in seven years.

Brain imaging work by the same research group, published in the Proceedings of the National Academy of Sciences, showed that the regions of the crow brain that lit up when the bird saw the dangerous face were the regions associated with fear and threat. Roughly the same pathways, in their avian equivalent, that light up in the human brain when we see a face we have learned to associate with harm.

The crow recognizes the face. The crow remembers. The crow tells other crows. The pattern, once seen, is held.

What is happening, mechanically, is that the crow’s brain is extracting a signal from a noisy visual environment, comparing it against a stored template, and triggering a response when the match crosses some threshold. Whatever else this is, it is not different in kind from what a modern classifier does when it identifies a face in a photo or a piece of writing as artificial. The substrate is different. The cognitive feat is the same. So is the failure mode.

The brain that sees faces in toast

Human pattern recognition runs on the same basic architecture, but in our case it is so deeply integrated into the cognitive machinery that we mostly do not notice it operating. We see faces in clouds. We see meaning in coincidences. We see a story in a constellation that is, mathematically, just a few stars at very different distances from Earth that happen to lie along a line of sight. The technical name for the tendency to find faces in inanimate objects is pareidolia. The broader tendency to find meaningful patterns in random data is apophenia.

The fusiform face area in the human brain is the small region most directly responsible for face recognition. It will activate at the slightest provocation. Two dots and a line are enough. The headlights and grille of a car can produce a momentary, involuntary perception of a grinning face. The face is not there. The brain is producing it from less information than the world has actually given.

This used to be confusing. Why would evolution build a brain that hallucinates faces in toast?

The answer is that, for almost the entire history of our species, the cost of falsely seeing a tiger that was not there was much lower than the cost of failing to see a tiger that was. A pattern-detection system that errs on the side of finding signal even where there might not be any will produce a lot of false positives. It will also keep its owner alive long enough to reproduce. The selection pressure pushed the threshold down, generation after generation, until we ended up with a species that sees a face in a piece of bark and finds meaning in the position of distant stars at the moment of its own birth.

The pattern recognition is not a bug. It is the foundation of nearly every cognitive ability we admire about ourselves. Scientific discovery is formalized pattern recognition. Language acquisition is pattern recognition. Music is pattern recognition. Love, in some uncomfortable but defensible sense, is pattern recognition.

The same cognitive primitive that lets us read poetry lets us see the Virgin Mary in a tortilla. We did not evolve two systems. We evolved one. And every system built in our image, every algorithm trained to recognize signal in noise, inherits both the gift and the failure mode.

What an astrologer reads

This is the part where most science writing about astrology goes wrong. It treats the practice as a failure mode of human cognition and stops there. The cognitive science literature does describe astrology in terms of well-documented biases. The Barnum effect (the tendency for people to find vague, general descriptions uniquely descriptive of themselves) is real. Confirmation bias (the tendency to remember confirmations and forget disconfirmations) is real. Astrological readings, in controlled tests, do not predict outcomes more accurately than chance.

All of that is true. None of it is the whole picture.

What an astrologer is doing, when she sits across from a client and reads a chart, is performing an extremely refined pattern-recognition act on a very small amount of data. She is given a name, a date, a time, a place, and a face. From that, she constructs a narrative that makes the client feel known. The construction draws on a centuries-old symbolic vocabulary, a sensitivity to the client’s body language and word choice, and the practitioner’s accumulated experience of how human lives tend to go. The accuracy is not in the stars. The accuracy, where it appears, is in the practitioner’s reading of the human across the table and in the client’s willingness to fill in the gaps.

This is not nothing. This is, in fact, a lot. It is what therapists and good doctors and old priests do, with different vocabularies and different framings. The pattern-reading mind looks at a person and tries to extract the meaningful signal from the noise of a life. The astrologer’s tools (planetary positions, sign archetypes, aspect geometries) are a culturally refined notation system for organizing what would otherwise be an overwhelming amount of detail. The notation has no causal link to the planets. The act of using the notation, with attention and care, can produce real insight about the person it is being used on.

The crow does this with faces. The astrologer does it with biographies. The substrate is different. The cognitive feat, once again, is similar.

The error mode is also similar. A crow that has seen a particular face threaten one of its kind will respond to that face for years, even when the face is now attached to a different intention. The astrologer who reads a chart with confidence will sometimes attribute to a transit of Mars something that is, more plausibly, the consequence of a job change. Pattern-recognition systems produce false positives. That is the price of a system tuned to never miss a real signal.

The crucial thing to understand about the astrologer’s false positives, and the crow’s, is that they are mostly soft errors. The crow scolds a face that is no longer dangerous. Nobody dies. The astrologer predicts a transformation that does not arrive in the form she described. The client is mildly disappointed. The cost of being wrong is absorbed by the system and life moves on.

The third pattern-reader does not have this property.

What a detector reads

A modern AI detection tool is a younger, narrower, far less interesting cousin of the crow and the astrologer, but it is doing recognizably the same kind of work. It is also, increasingly, the pattern-reader whose mistakes carry the highest cost.

The two main signals it reads are perplexity and burstiness. Perplexity is a measure of how predictable each word in a passage is, given the words around it. A language model is asked to guess what comes next in a sentence, over and over, and the passage's perplexity score is, roughly, the average surprise of those guesses: the less predictable the words, the higher the score. Human writing tends to score higher on this metric, because real people make unexpected choices, leave odd word juxtapositions, repeat themselves in idiosyncratic ways. Generic AI output tends to score lower, because the model has been trained to produce the most statistically likely next word at every step. Human perplexity scores typically land somewhere between 80 and 100 on the scales these tools report. AI output frequently lands between 20 and 30.
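The arithmetic behind that score is small enough to sketch. Below is a toy version in Python: perplexity is the exponentiated average negative log-probability a model assigned to each actual next word. The probability lists are invented for illustration; a real detector would get them from a language model, not a hand-written list.

```python
import math

def perplexity(token_probs):
    # Perplexity = exp of the average negative log-probability the
    # model assigned to each token it actually saw. Confident
    # (high-probability) guesses push the score down; surprising
    # tokens push it up.
    avg_neg_log = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_neg_log)

# Hypothetical per-token probabilities: a passage the model found
# predictable versus one that kept surprising it.
predictable = [0.8, 0.7, 0.9, 0.8, 0.75]
surprising = [0.1, 0.05, 0.2, 0.08, 0.15]

print(perplexity(predictable))  # low score: text reads as "machine-likely"
print(perplexity(surprising))   # much higher score: text reads as more human
```

The only thing the detector ever sees is that final number, which is why unusual but entirely human writing, like a non-native speaker's careful, conservative word choices, can land in the "machine-likely" range.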

Burstiness measures variation across sentences. Real human writing has wild rhythm shifts. A long, winding clause followed by a fragment. A formal sentence followed by a casual one. Human burstiness scores tend to spread between 0.6 and 1.2. AI output tends to cluster tightly between 0.2 and 0.4. This is, mechanically, what gives certain ChatGPT-written essays their faintly hypnotic, evenly-paced, slightly soporific feel. The sentences are all roughly the same shape.
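Burstiness can be approximated with nothing more than sentence lengths. A common proxy, sketched here in Python, is the coefficient of variation (standard deviation over mean) of sentence lengths in words; commercial detectors use more elaborate features, so treat this as a simplified stand-in, not any vendor's actual formula.

```python
import re
import statistics

def burstiness(text):
    # Split on sentence-ending punctuation, measure each sentence in
    # words, and return std / mean of those lengths. Uniform sentence
    # lengths give a score near zero; wild rhythm shifts give a high one.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.pstdev(lengths) / statistics.mean(lengths)

even = "The cat sat on the mat. The dog lay on the rug. The cow ate in the field."
bursty = ("It rained. The storm that followed, rolling in off the water "
          "with no warning at all, flattened the garden. Silence.")

print(burstiness(even))    # near zero: every sentence the same shape
print(burstiness(bursty))  # around 1: lengths swing between 1 and 17 words
```

Run on the two samples above, the evenly-paced text scores at the bottom of the scale and the rhythmic one near the top, which is the entire intuition behind the metric.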

The detector reads these signals, feeds them into a trained classifier alongside dozens of other features, and outputs a number between zero and one. The number is the model’s guess at the probability that an AI wrote the passage. That is the entire substance of what is happening. There is no understanding involved. There is no semantic comprehension of whether the writing is good or true or interesting. There is just pattern matching, against a learned template of what AI output statistically looks like.
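The final step, turning features into a probability, is a classifier in the most ordinary sense. A minimal logistic version is sketched below; the weights and bias are invented to illustrate the shape of the computation (low perplexity and low burstiness push the output toward 1), and bear no relation to any real product's parameters.

```python
import math

def detector_score(perplexity, burstiness, weights=(-0.05, -3.0), bias=6.0):
    # A toy logistic classifier: weighted sum of features, squashed
    # into a 0..1 "probability of AI". The weights here are
    # illustrative only; low perplexity and low burstiness both
    # raise the score.
    z = bias + weights[0] * perplexity + weights[1] * burstiness
    return 1 / (1 + math.exp(-z))

# Inputs drawn from the ranges quoted above.
print(detector_score(25, 0.3))  # machine-range signals: score near 1
print(detector_score(90, 1.0))  # human-range signals: score near 0
```

Nothing in that function knows what the words mean. A human whose honest features happen to fall in the left-hand region gets the same score as a machine.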

When it works, it is a small marvel of applied statistics. When it fails, it fails for the same reason every other pattern-recognition system fails. False positives. The Stanford research team that ran 91 essays by verified human TOEFL test-takers through seven popular detectors found that, on average, 61.22 percent of the essays were flagged as AI. Eighteen of the essays were flagged by every single tool. Internal audits surfaced through 2026 industry reporting have shown false positive rates above 30 percent for human-written professional writing. A separate 2026 study testing commercial detectors against student writing found false positive rates as high as 83 percent depending on the tool and the writing style.

In other words, this particular pattern-recognition system, when pointed at writing by non-native English speakers, neurodivergent writers, formally trained writers, or anyone who uses Grammarly, will declare a great deal of genuine human writing to be artificial. The wider context of how the failure rates are evolving as the underlying language models get better is worth reading separately. The short version is that AI detection in 2026 is not getting more accurate at the rate vendors imply. In several configurations, the false positive rates are getting worse, not better.

The error mode is the same as the crow’s, the same as the astrologer’s, the same as the brain that sees the Virgin Mary in a tortilla. A pattern-recognition system tuned to find a signal will sometimes find that signal in the absence of its actual cause.

The difference is what happens next.

The cost of the third reader

The crow’s mistake costs the wearer of a mask some shouting birds. The astrologer’s mistake costs the client an evening of slightly-off advice. These errors are absorbed by the people who receive them. The systems live on, the false positives drift away, and tomorrow is mostly fine.

The detector’s mistake costs the person on the other side of the screen something they cannot easily get back.

It is a student called into a misconduct meeting because her TOEFL essay scored 91 percent AI on a tool that flags two-thirds of TOEFL essays. It is a job applicant whose cover letter reads “too clean” and gets quietly moved to the rejection pile by a recruiter who does not even tell her she was screened. It is a freelance writer whose deliverable gets refused payment because the client ran it through GPTZero. It is a graduate student whose visa status hangs on the outcome of a hearing built around an algorithmic score that the QAA, the national agency that oversees academic standards in the UK, has explicitly said institutions should not rely on.

These are not edge cases. They are happening at scale, every day, in every English-speaking country with a university system or a hiring pipeline. The pattern-reader is wrong, the same way the crow and the astrologer are sometimes wrong, but the architecture around the detector treats its readings as evidence rather than as a probabilistic guess. The asymmetry between what the tool actually is (a statistical classifier with documented failure modes) and what it is treated as (a verdict that ends careers) is the thing that makes this third pattern-reader different in kind from the first two.

If the system were less load-bearing, the false positives would be a curiosity. Instead, they are the price of a tool that institutions adopted before they understood its limits, and the people paying that price are not the institutions. They are the writers, the students, the applicants, the freelancers. They are the people whose work has just been mistaken for the output of a machine.

The pattern that breaks the pattern

Here is the part of the story that might be the most interesting.

In every domain where pattern recognition runs at scale, a counter-practice emerges. Crows can be acclimatized to a previously dangerous face by sustained, gentle re-exposure that overwrites the original association. Astrology has, over centuries, developed corrective traditions of reflection and ethical practice that distinguish good readings from confident ones. Modern statistics, in turn, was developed in part as a way of training human pattern-recognition machinery to know when to disbelieve itself.

In the case of AI detectors, the counter-practice is mechanical and immediate. If the detector is reading perplexity and burstiness, then writing whose perplexity and burstiness signals look more like human writing will be read as human, regardless of who or what produced the first draft. This is the entire premise of the humanization category that has emerged over the last three years. UndetectedGPT, one of the more rigorous tools in the space, processes AI-assisted or AI-adjacent writing in a way that adjusts the underlying statistical signals back into the range that detectors associate with human output. The text reads as human because, at the level the detector measures, it now is.

This is not a defeat of pattern recognition. It is the natural consequence of any pattern-recognition system being deployed at scale against people who have a real stake in the outcome. The pattern is read. The pattern becomes known. The pattern can then be reproduced or disrupted by anyone who understands how it is being read. That cycle has played out in counter-surveillance, in advertising, in evolutionary biology, in fashion. It is now playing out in detection, and the people running ahead of the cycle are the ones who understood early that you do not argue with a pattern-reader. You give it the pattern it is looking for.

For the writer whose essay gets flagged, who knows she did not use AI but cannot prove it to a misconduct panel, the humanization layer is not a way of cheating. It is a way of speaking the language the detector understands so the detector reads her writing correctly the first time. For the freelancer whose deliverable gets refused on a false positive, it is the difference between being paid and not being paid for work she actually did. For the job applicant whose cover letter sounds too professional, it is the way her real qualifications make it past a screening layer that would otherwise discard her on a technicality.

None of this is what the detector vendors had in mind when they built their tools. It is, however, the inevitable end state of any pattern-reading system that is treated as authoritative when it is, in fact, fallible. The people on the receiving end of false positives find a way to live around them. The technology that helps them do so becomes infrastructure.

What this is, in the end

The crow, the astrologer, and the detector are not the same. They are three points on a long line of pattern-reading practices, each tuned to a different domain, each producing useful signal much of the time and false positives some of the time. Recognizing that they are doing the same kind of cognitive work does not flatten the differences between them. It does, possibly, suggest a softer attitude toward the first two and a much harder-headed attitude toward the third.

We are pattern-reading creatures all the way down. We see meaning in stars and in toast and in statistical scores spit out by software. We are right often enough that the practice has earned its place in our toolkit. We are wrong often enough that we should hold every reading with at least a little humility, and that humility should scale with the cost of the mistake.

The crow on the telephone pole has been holding the wrong face for seven years and counting. The cost is some shouting. The astrologer holds her client’s life in her hands twice a week. The cost is occasional bad advice. The detector flags eighteen perfectly human essays out of ninety-one and never knows it was wrong. The cost is eighteen students who now have to defend themselves in a system that treats the score as evidence.

The first two readers do not require a counter-practice. The third one does. And the people who have figured this out are not waiting for institutions to catch up.

The reading is not the territory. It never was. But when the reading is the thing you are being judged by, the only sensible response is to make sure the reading is correct.