On Pattern versus Chunking

(Epistemic Status: Endorsed)

When a person encounters something new, they take some time to figure out what it is. What happens next depends on how much data the person brings with them and how much data they gather before defining the thing. If they recognize enough details to fit it into an existing pattern, they will regenerate their understanding of the thing from that pattern, at the cost of errors filtering into their recall. If the thing is sufficiently strange, or their pattern-matching threshold is very high, they'll instead store it as a new "chunk": a primitive that doesn't yet have enough data to be a pattern.

Where this gets interesting is the tradeoff. If you set your pattern-matching threshold very low, you save a lot of space/memory, because everything fits into a smaller set of patterns, but your error rate is high enough that you'll probably just generate noise when trying to process new information. If you set the threshold too high, you'll be very precise about what things are, but you'll take up far too much space and won't be able to access information about the world around you quickly enough to contextualize new things.
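As a toy illustration (not anything the brain literally does), here's a sketch of the space-versus-error tradeoff in Python: each incoming item either matches an existing stored pattern above a similarity threshold, and is later recalled from that pattern with errors, or gets stored as a new chunk of its own.

```python
import random

def similarity(a, b):
    """Jaccard similarity between two feature sets."""
    return len(a & b) / len(a | b)

def encode(items, threshold):
    """Store each item either by matching it to an existing pattern
    (lossy: recall regenerates the item from the pattern) or as a
    new chunk (exact, but takes up its own slot)."""
    stored = []    # patterns/chunks we keep
    recalled = []  # what we would reconstruct later
    for item in items:
        best = max(stored, key=lambda p: similarity(p, item), default=None)
        if best is not None and similarity(best, item) >= threshold:
            recalled.append(best)   # regenerated from the pattern: errors creep in
        else:
            stored.append(item)     # new chunk, recalled exactly
            recalled.append(item)
    errors = sum(orig != rec for orig, rec in zip(items, recalled))
    return len(stored), errors

random.seed(0)
features = range(20)
items = [frozenset(random.sample(features, 6)) for _ in range(200)]

for t in (0.1, 0.5, 0.9):
    stored, errors = encode(items, t)
    print(f"threshold={t}: {stored} representations stored, {errors} recall errors")
```

Running this shows the shape of the tradeoff: a low threshold stores only a handful of representations but misrecalls almost everything, while a high threshold recalls everything exactly at the cost of storing nearly every item separately.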

Another thing that happens: if you garbage collect all your chunks once in a while, you can get new and unusual patterns by relating everything to everything else and then seeing what actually seems to predict things in reality. You can also deconstruct all your patterns into chunks and try to put them back together in different ways. This process is usually mediated by intense experience, though you can do it more slowly through meditation, writing, conversation, and other activities that give you an opportunity to reframe your existing ontology.
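As a hedged sketch of what this "garbage collection" might look like in a toy model (again, purely illustrative, with patterns modeled as feature sets): break pairs of stored patterns into their shared cores and merged forms, then keep only the recombinations that best fit a set of observations.

```python
from itertools import combinations

def recombine(patterns, observations, keep_top=3):
    """Deconstruct patterns into pieces, recombine them, and keep the
    recombinations that best predict what we actually observe."""
    candidates = set()
    for a, b in combinations(patterns, 2):
        candidates.add(a & b)  # the shared core of two patterns
        candidates.add(a | b)  # the two patterns merged
    candidates = {c for c in candidates if c}  # discard empty combinations

    def fit(candidate):
        # average Jaccard similarity to the observations: a crude
        # stand-in for "actually seems to predict things in reality"
        return sum(len(candidate & obs) / len(candidate | obs)
                   for obs in observations) / len(observations)

    return sorted(candidates, key=fit, reverse=True)[:keep_top]

patterns = [frozenset({1, 2, 3}), frozenset({2, 3, 4}), frozenset({5, 6})]
observations = [frozenset({2, 3}), frozenset({2, 3, 4})]
print(recombine(patterns, observations))
```

In this contrived example, the shared core of the first two patterns fits the observations better than any of the original patterns did, which is the point: recombination can surface structure that none of the existing patterns captured on its own.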

Overall, it can sometimes be useful to assess what your base strategy for handling novelty is and see if there's any garbage collection you can do. Playing with your pattern-matching threshold can also generate very different conclusions about how things work: lower it by assuming everything is linked and beautiful and seeing what that generates, or raise it by assuming everything is discrete and separate no matter how elegant it would be if it were together. You can likewise play with the size of chunks, either by only accepting smaller chunks (if you read about something, try to figure out what it's made of), or by only chunking things that are big enough (treat all the primitives that go into a new chunk as patterns: if you read about something, consider which of your existing patterns could generate this chunk, and how the patterns the writer proposes generate similarly big things). I'm using "big" and "small" very loosely here; it's hard to point the mental process of chunking directly at a precise size. So I also advise discarding anything that doesn't cash out into a useful mental motion, but occasionally reviewing to see whether things make sense later.

Discussion Questions: What, generally, is your threshold for pattern-matching things? How, roughly, would you describe the point where you chunk information into bigger blocks to build with? How often do you reframe your ontology, and what strategies do you use for that? If any of this seems difficult to do, what would help convey the mental motions?
