Movies have taught us that the future of human integration with technology looks like, well, cyborgs. A harsh juxtaposition of the human body augmented or partially replaced with hardware. Critically, though, the entire body - organic and synthetic - is controlled through the brain. There is no extra process required, just seamless control, as though they are one.
The kernel of truth is that element of seamless control. Increasingly, I believe that the future of integration does not include much, if any, hardware. Instead, our brains will adapt to outsourcing various processes and tasks to technology while remaining the orchestrator of those collective tasks and processes.
I’m convinced this is the case because it’s already begun happening. Not just in lab experiments, but globally. It has happened to me, and it has almost certainly happened to you too.
This is your brain on drugs… er, technology
It’s no secret that technology changes our brains. At the level of news headlines, there is a never-ending stream of articles about the detrimental effects of increased screen time: delayed and impaired childhood development[1], decreased sleep quantity and quality[2].
There’s strong data showing that it’s changing the fabric of our society and making us more isolated[3]. But the changes to our brains from screen time are not as simple as better or worse. The human brain, like our muscles, adapts and changes according to use. A marathon runner, a weight-lifter, and a couch potato have radically different musculature. Similarly, it’s not just the number of hours a day that you use a screen that determines the impact on your brain; it’s a combination of how you are using the technology, how old you are, and what your brain has already been trained to do.
This is a topic full of nuance. It’s important to simplify it enough to be easily understood, but we must tread carefully. Over-simplification will lead us to false conclusions - such as the belief that all screen time is detrimental to your health.
The truth is more complicated and far more interesting.
Is screen time actually that bad, or are we just afraid of change?
Let’s clear the air about screen time. It’s become the cultural scapegoat for…just about everything. Toddler acting out? Too many cartoons. Spouse didn’t hear you? Too much TikTok. Can’t sleep? Stop scrolling. Kids won’t talk at the dinner table? Social media killed their social skills. The tricky thing is that many studies have shown, over and over again, that high levels of “screen time” exacerbate each of those problems.
But each of those examples has been a complaint for generations before digital devices existed. The other thing that’s been true for generations is the steady drumbeat of older generations claiming that younger generations are inferior in their social skills, work ethic, and manners. Is screen time just a convenient scapegoat?
Not entirely. There are arguments to be made for why many of the social changes are just changes and not degradations. Predictably, these arguments often divide along generational lines. But instead of debating social change, let’s focus on brain health: activities that increase dementia risk are bad, and those that reduce it are good. That’s the lens we should use to assess screen time.
There is no golden ticket approach to dementia prevention: to meaningfully lower your risk, you need to do many things that reduce it and avoid many things that would increase it. Among these many things is, in fact, screen time - but it turns out that not all screen time is bad for you.
Not all screen time is created equal
The term “screen time” is too broad to be meaningful. Even in scientific literature, it’s used to describe everything from television to video games to work on computers. A closer look at studies (especially their methods sections) reveals important distinctions.
One of the biggest factors is differentiating between passive and active activities. It’s pretty intuitive: passive activities include things like watching television or scrolling on social media. You aren’t using your cognitive capacity or problem-solving so much as you’re enjoying colorful flashing lights with coordinated sounds. Active activities are those that require you to use your cognitive faculties. Drafting a message, researching information to solve a problem, or designing a new visual are just a few examples of activities that require your brain to be fully engaged. Yes, there is intake of colorful flashing lights with coordinated sounds, but those inputs are immediately processed for relevant details to synthesize into the task at hand.
I start here because this is one of the most significant distinctions in whether that so-called screen time is going to raise or lower your risk of dementia. As you’ve likely heard, the more you engage in passive activities, the more your risk of dementia increases. But it’s also true that the more you engage in active activities, such as those described above, the more your risk of dementia decreases[4]. The crux of this can be summarized in a single question: how cognitively engaged is your brain?
Remembering knowledge vs remembering where you found it
Actively engaged screen time, which is protective against dementia, still significantly changes how our brains function and form memories. There are many different types of memory, but for this topic you need to understand content memory and source memory. Content memory is the memory of a specific piece of knowledge, whereas source memory is the memory of where that piece of knowledge is located.
For example, you know that you wrote down the quote from the plumber on the back of the blue envelope, but you don’t remember what the quoted price was. That is a source memory without content memory: you remembered where the information is located, but not the key information itself.
Another example is when you use navigation software. You use it to go to a new location, and pay close attention to your driving the entire way. A few weeks later you need to go back, but aren’t quite confident you’ll remember all of the turns at the right time. You stored the source memory of using the navigation software to get there, but didn’t fully store the content memory of the step-by-step directions.
How we’re outsourcing content knowledge to the internet
The internet is full of content knowledge, but that knowledge is only as valuable as our ability to find it. And thus the rise of search engines, allowing us to find information across the internet in a few keystrokes. We now have access to virtually any piece of knowledge we seek, given enough persistence.
For a long time there’s been a popular notion that you should remember only what you must, to “save space” in your brain. I’ve seen this attributed to a number of famously smart individuals. I don’t think it’s a special insight of any one person so much as a recognition that we won’t remember everything forever and should be intentional about what we commit to memory.
In fact, there are many types of information that we’ve long since stopped trying to commit to memory. Things like transaction logs at a grocery store, tax documents, and birthdates of extended family. We focus on remembering where to find these pieces of information when we need them instead of remembering the details themselves. If we wanted to draw loose category lines around this, we could say that these are pieces of information that, while important, we don’t often need. It’s more efficient to remember that the calendar in the kitchen contains all of the birthdays than to remember 30 birthdays.
There’s a phenomenon known as the “google effect” that runs parallel to this[5]. The google effect is commonly understood to be the tendency of frequent internet users to rely on their ability to find a piece of information on the internet again instead of actually remembering it. It’s also known as digital amnesia, but I think that term is a bit misleading. It’s true that you don’t remember the information itself, because you don’t store the content memory. But, unlike a true amnesia, you do store the source memory. In fact, it’s common for people to remember details of the search query or the webpage where they found the information.
Just as you might have remembered that you wrote the plumber’s quote down on a blue envelope, you might remember that the URL contained the words “plum” and “plumbers” and that the website was purple. The extent to which you experience this transition of storing source knowledge instead of content knowledge is highly dependent on two things: the extent of your prior knowledge, and your age.
Prior knowledge promotes formation of content memory
If you already have a strong knowledge base, you are more likely to store content memory and remember the knowledge itself. If you do not know much about the subject, you are more likely to store source memory and remember where you found the information. Similarly, if you are likely to use the information again in the future, you are more likely to store the knowledge itself, whereas if it seems like you’ll only need that information once, you are more likely to store source memory. Taking these two together, it seems likely to me that which kind of memory you form has a lot to do with how actively you use the information in the moment you learn it.
This isn’t new - the concept of active learning to increase knowledge retention is pedagogically an old story. This is why great educators are so valuable. What is being discovered through these studies is that this pattern recurs when you try to learn knowledge from the internet instead of in a classroom.
I’m speculating here, but I strongly suspect that the extent to which adults[6] store knowledge as content memory has to do with how capable you are of engaging with the topic. When you have a strong knowledge base and learn a new piece of information, there is an entire ecosystem of context available for that knowledge to form relations to. When you lack a pre-existing knowledge ecosystem, you need additional assistance to contextualize and actively engage with the information.
Am I becoming a cyborg with the internet?
The tension between implementing technology while remaining fully capable is not new. We adaptively offload according to the strength of the technology available. Few farmers today need to have the physical conditioning once required to work a field. But as technology has moved closer to the mind, the stakes feel higher. I’m spending so much time understanding the factors underlying the google effect for two reasons.
First, there’s an inherent tension in the idea that active activities on the internet can be dementia-protective, yet result in us not forming content memory. When I consider that the loss of content memory is a hallmark of dementia, I’m concerned that we have a lot left to figure out.
Second, what’s the long-term trajectory of these cognitive changes? Am I increasing my capacity with a tool, outsourcing work, or am I slowly outsourcing myself?
Heavy use of the internet is an inevitable, critical source of information and connection for me. I’m typing this blog from Mexico while actively collaborating with groups across the United States, Canada, the United Kingdom, and continental Europe. I’m okay with outsourcing some dull facts to the internet, and I appreciate the ability to get near-instant feedback. But am I really okay with remembering less because I’m good at crafting queries?
Is this what becoming a cyborg looks like in reality?
[6] These findings are from working-age adults, as vulnerability to the google effect differs dramatically at different ages. Children are quite resilient to it, adolescents are especially vulnerable to it, and older adults seem to experience little, if any, google effect. See reference [5] above.