Why are some aggregates "smarter" than their individual components and others are not?

So I had a very broad question which I've been dimly aware of for some time, but I've never asked it. I'd be interested in what people have to say... I'd almost consider posting this on a site like Math Overflow and seeing what people say, even though it's not really what they go for, I think (I would probably phrase it differently). Please chip in your 2 cents.

In many, many fields we have this idea of simple units acting together in cohesion to create some very complicated aggregate body. In some cases, for instance brains, the behavior of a single neuron is thought to be quite simple and "unintelligent", while the overall body displays substantially more complexity of behavior and capacity to adapt favorably to various environments and situations. On the other hand, maybe the clearest example of the opposite is the "stupidity of crowds". A crowd of people is thought to have dramatically less problem-solving ability than a single person. A single person is frequently able to monitor their spending and manage a budget appropriately, for instance, while the California legislature is not.

Other examples of systems may not be so clear cut. For instance I'm not really sure if an electron is smarter than a cloud of electrons. In areas like probabilistic combinatorics, we can frequently create large probabilistic systems composed of very simple components which are coupled together in simple ways, but about which we can say almost nothing in terms of the behavior of the whole system. In statistical physics I suppose it is the opposite -- predicting the motion of a particle, given its local environment, is thought to be extremely difficult, and typically modeled perhaps using Brownian motion, yet we can deterministically model the evolution of the gas as a whole, and develop useful statistics to "characterize" the macrostate.

I suppose that in most of science, the only time we study aggregates of "smart" components are where the components are people or animals. Perhaps we do not recognize other components as smart?

It's not clear to me what precisely is different between the way that societies are built of humans and the way that brains are built of neurons. You might suggest, for one, that humans can move freely, but in most cases it seems that people establish some local network of people they trust and respect, through which they receive information, and these parameters of trust and esteem become established and then fine-tuned as life progresses, perhaps not unlike neural network weights. Obviously the aggregator function is substantially more complicated.

Perhaps you might suggest that for some problems, networks of humans are quite effective -- for instance, we can design space shuttles, and a single individual probably cannot do that. It's only political problems that individuals fail at. Perhaps you can point out, by analogy, the problems that brains fail at... certain long-term risk-reward tradeoffs? Drug addiction?

Of course the answers you get will depend heavily on the formalism you choose for computational ability. What I would like to ask is this:

1) Are brains organized from subunits in terms substantially different from societies / other schemes?
2) Given subunits of a certain design with a certain computational power under some formalism... VC dimension which can be learned efficiently? Topological Entropy of the analogous dynamical system?.... how much computational power does the aggregate possess, when formed under one connection scheme vs. another?

Obviously 2 is going to be pretty hard to answer... and will depend on your answer to 1 which may be contentious. Pitch in your 2 cents.


  1. Single neurons are actually pretty smart, as far as cells go. Dendritic computation and whatnot.

    "people establish some local network of people they trust and respect, through which they receive information, and these parameters of trust and esteem become established and then fine tuned as life progresses, perhaps not unlike neural network weights"


    I do not know a rigorous definition of "Topological Entropy" (although I could churn the connectivity matrix of a network through an algorithm designed to compute entropy on bit strings; that might give you something).

  2. small.world.network.

  3. I've not looked into this, but the canonical "well connected" graph that everyone wants to pretend is happening in the nervous system is "small world network with a power-law distribution of node degrees". Well connected, and fractal. Of course it's not that simple; to get a working cortex you need dozens of interacting cell types organized with almost crystalline precision. But, perhaps hypercolumns are connected between themselves in this fashion.

    I've never thought to plot the information entropy of graphs before, I'm sure someone has but I've not heard of anyone relating it coherently to ... any sort of grand unified theory of complexity.

  4. I'd say maybe there's no a priori requirement that a collection of complicated things interacting in a complex way needs to be "smarter" than its constituent components. Natural selection has helped a lot, especially with the nervous system.

  5. SO, I'm gonna be late but

    define topological entropy as the information entropy of the bit string representing the adjacency matrix

    numerically estimated by the usual procedure of building up a histogram of the distribution of binary words.

    let's set the binary word length to either the number of nodes in the graph, or a word length that evenly divides the number of nodes in the graph.

    compute the histogram over both row vectors and column vectors.

    I select this scheme because it "seems like" it will generally lead to low entropy estimates, and is a consistent well defined way to cut up the connectivity for the entropy computation.

    By this (any) scheme the lowest-entropy graphs are the completely disconnected (all 0) and the complete (all 1) graphs. Neither of these is remotely similar to a brain.

    By this (any) scheme the highest-entropy graphs are the ones created with connectivity matrices filled with random bits, 50% 1 and 50% 0. These graphs are also highly connected, and don't exactly resemble a brain either.

    So, at least at the extremes, topological entropy does not appear to be useful for characterizing the brain.

    I'll try to think about this some more today.
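The scheme above can be sketched numerically. This is a minimal toy implementation, assuming NumPy; the helper name `adjacency_entropy` and the word-encoding details are my own choices for illustration, not anything standard:

```python
import numpy as np

def adjacency_entropy(A, word_len=None):
    """Shannon entropy (bits per word) of the binary words obtained by
    slicing the rows and columns of a 0/1 adjacency matrix.
    word_len defaults to the number of nodes (one word per row/column)."""
    A = np.asarray(A, dtype=int)
    n = A.shape[0]
    if word_len is None:
        word_len = n
    assert n % word_len == 0, "word length must evenly divide node count"

    # Collect words from both row vectors and column vectors.
    words = np.concatenate([A.reshape(-1, word_len),
                            A.T.reshape(-1, word_len)])
    # Encode each binary word as an integer, then histogram.
    codes = words.dot(1 << np.arange(word_len)[::-1])
    _, counts = np.unique(codes, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# The extremes discussed above: empty and complete graphs give zero
# entropy; a dense random graph gives high entropy.
empty = np.zeros((8, 8), dtype=int)
complete = np.ones((8, 8), dtype=int)
rng = np.random.default_rng(0)
rand = rng.integers(0, 2, (8, 8))
print(adjacency_entropy(empty),     # 0.0 (one repeated word)
      adjacency_entropy(complete),  # 0.0
      adjacency_entropy(rand))      # large: many distinct words
```

This reproduces the point about the extremes: both the all-0 and all-1 matrices produce a single repeated word, hence zero entropy, while the random matrix spreads mass over many words.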

  6. there was talk of ants and bees today. they have castes, but no defined connectivity structure. there are many forms that emergent intelligence can take, and I don't think we've found any method of analysis applicable to all of them.

  7. @Everett

    I mean even if we can't evaluate "intelligence", if we could evaluate which systems are well-suited and ill-suited to learning certain computational tasks, that would be interesting.

    Topological entropy is maybe not intuitively related to intelligence... it is some kind of complexity measure for a dynamical system. It has to do with counting the growth in the number of orbits that are epsilon-distinguishable from each other, as you run them for increasingly long periods of time and as epsilon gets small.


    For a particular n and epsilon, you pick as many points as you can so that for any pair of them, there is some time t < n at which their two orbits are at least epsilon apart.

    Intuitively, very stable systems that have very similar orbits shouldn't grow very much, but highly unstable / rapidly branching systems should have large entropy.

    You would generally interpret this as: X is some dynamical system with a nice topology, and hopefully some kind of metric, and the continuous map f : X -> X is the discrete-time evolution operator.

    It's unclear if this is a reasonable thing to use as a measure of how "smart" a system is -- but I know this measure has been used in recent work by Sarnak in studying Möbius randomness / pseudorandomness. There already exists a reasonable body of work studying its basic properties.
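The separated-orbit counting described above can be illustrated with toy code (my own sketch, not anything from the literature): greedily count (n, epsilon)-separated points of the doubling map on [0, 1), whose topological entropy is known to be log 2.

```python
import math

def separated_count(f, points, n, eps):
    """Greedily build an (n, eps)-separated set: keep a point only if,
    for every point already kept, the two orbits are at least eps apart
    at some time t < n. Returns the size of the set found."""
    def orbit(x):
        xs = [x]
        for _ in range(n - 1):
            x = f(x)
            xs.append(x)
        return xs
    kept = []
    for p in points:
        o = orbit(p)
        if all(max(abs(a - b) for a, b in zip(o, k)) >= eps for k in kept):
            kept.append(o)
    return len(kept)

# Doubling map on [0, 1); its topological entropy is log 2.
double = lambda x: (2.0 * x) % 1.0
grid = [i / 1024.0 for i in range(1024)]
for n in (2, 4, 6):
    N = separated_count(double, grid, n, 0.1)
    # log(N)/n tends toward log 2 ~ 0.693 as n grows and eps shrinks
    print(n, N, round(math.log(N) / n, 3))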

  8. there's more stuff about this here:


  9. you have got to be kidding me... I didn't know Topological Entropy was a real thing. I guess I could've googled it, but Beck, where do you learn this crazy math?

  10. i sat through multiple Peter Sarnak talks about this at the IAS, once last year, once in the summer at a pseudorandomness workshop

    I was just reading that other one, I didn't know topological pressure was a thing... this is awesome


  11. yeah I can't read that.

  12. yeah i just got one of the books from the list, i may try to break it down and figure out what's going on, if i find some time

  13. all right, I'll try to plow through this but, let me know if you share this sentiment:

    I really wish math papers were written out in english, instead of this shorthand notation. Notation can be ambiguous. Sometimes you assume notation has one meaning when your readers are accustomed to another. Sometimes your readers have never seen such notation and have no way to query what it might mean. Mathematical shorthand is pretty to look at, but sucks a lot for reading and learning things.

    In short, since I cannot parse or translate into English the notation used on that Topological Pressure page, it's going to be really, really hard for me to read it, even if the idea is conceptually simple.

  14. then again, maybe it's just a poorly written article.

    "The following elementary setting contains germs of notions and results described hereafter"

    seems like a bad sign to me.

  15. I would like to focus on the first question. First, rewind back to before brains, to the beginning of cell cooperation, when nothing was yet streamlined for the cells. They had a primitive communication protocol compared to today, possibly starting out with a bit of synchronization via primitive neurotransmitters. The point is that they slowly evolved into the brain, increasing the efficiency of their communication.

    Now think of the brain as the primitive cell. We currently have communication, yes, but it is still quite new on the evolutionary timeline. You can see that with technology and the internet we are getting a better connection with other brains. Soon, in the eyes of evolution, we may have this ability integrated into the mind itself, slowly blurring the line between the individual brain and the networked minds as a whole. If we continue this trend I don't see how we could stay individuals and not be viewed as one larger entity, just as we currently view the individual cells as this entity we call our mind.

    I hope I was clear about what I am trying to communicate; I am quite tired. If anyone sees any flaws or agrees with this statement, please give some feedback. I'm always up for a discussion.

  16. Perfect sense. Ants-bees-termites-nakedMoleRats have already diminished the sense of the individual.

    Depending on who you talk to, humans are somewhere along the spectrum evolving toward a eusocial structure.

    Science fiction is dotted with explorations of this, though at the moment I can only think of The Borg, the most recent movie version of The Time Machine, or Brave New World. However, if considered seriously, all of these seem a little comical and hokey by our standards. It's a worthy exercise to consider how such a social organism comprised of human or above-human-level intelligence might be organized and stabilized -- definitely one of the better uses of science fiction. Or maybe that's just run-of-the-mill politics.

    sorry if that didn't make sense, I didn't get much sleep last night and just woke up.

  17. oh, but the trivial not-actually-an-answer is that complex systems (even with high topological entropy or whatever that weird thing is) are not inherently intelligent. Intelligence is merely a fitness metric. Systems capable of reproducing and evolving, that are selected by a fitness metric similar to what we use for intelligence, evolve to be intelligent. Systems that cannot undergo selection, or are selected by a fitness function dissimilar to our notions of intelligence, will not become intelligent.

  18. We had a similar discussion in meatspace over here recently, actually.

    with respect to "are brains organized differently than society" it was noted that

    -- connections in the brain are somewhat fixed. at least, there are a finite number of other neurons a cell might connect to, and this number is much smaller than the total number of neurons
    -- neurons are highly specialized in terms of their functional properties

    -- connections in society are somewhat fluid. although most adults settle on a finite number of close professional interactions, social ties are significantly more fluid than neural connections, and there is hardly any prespecified program for these connections
    -- it's unclear if specialization in society approaches that in the brain

    -- connections in insect colonies appear extremely fluid, but this may be due to the fact that the colony is made up of only a handful of castes which contain thousands of functionally identical and interchangeable units
    -- the intelligence of insects appears to be fundamentally different from the intelligence of the brain

    so, something about
    -- fluidity of connections
    -- stability of connections
    -- connection topology
    -- specialization of individual units ( number of castes )
    -- computational capabilities of individual units

  19. question: does characterizing the "computational power" of an aggregate system necessarily require reducing it to one of the existing formal models of computation? Or can you show that an aggregate can implement algorithms from some complexity class X without reducing an existing formalized computing system to the aggregate?

  20. Can you build a set of tools for talking about what complexity classes of algorithms emergent aggregates can implement?

    This seems to be related to your little flocking problem from a while back.