Complexity has been in the back of my mind for a long time and I've struggled to find any satisfying, rigorous books on the topic. This is a short overview of what I suppose is the most popular approach to the subject, dealing chiefly with "complex adaptive systems" (CAS), the most recognizable real-world examples being economics, ecology/species interactions, evolution, and cell biology. This topic has always intrigued me because it seems like the next logical step in understanding the physical world, as opposed to the slow grinding down of matter by the reductionist method. While I appreciate the reductionist attempt at understanding the more complex systems in our physical reality, I see it as one component of a two-stage process. In consciousness, for example, we certainly do need to identify and describe the individual components of the brain, down to an extremely granular level; however, it's the interaction of these components that ultimately matters. We've spent thousands of years zooming in closer and closer, peeling apart cells, molecules, atoms, and the fundamental physical laws that govern them, yet we still have no universal, formalized way of describing how these components self-organize into complex interacting systems with emergent phenomena. The second step should be building these components back up from the ground level to understand how advanced properties emerge from matter.
The gist of the thinking in this book is nothing new, dealing with agents governed by basic rules that develop into hierarchies, specialized groups, and boundaries that organize communication by 'tags'. It's a useful way of thinking about these interactions that can be applied to many known complex systems. I continually wonder whether there are any quantitative aspects to these ideas, i.e., can you formally measure complexity? Would it even be useful if you could? He also mentions finitely generated frameworks, something I want to look into further.
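To make the tag idea concrete for myself, here is a toy sketch (my own construction, not the book's formal model): give each agent a short binary tag and let two agents interact only if their tags nearly match, so the tags act as boundaries structuring who can communicate with whom.

```python
import random

random.seed(0)

# Hypothetical toy model of tag-mediated interaction: each agent is just
# a 4-bit tag, and two agents are 'compatible' (can interact) only if
# their tags differ in at most one position.

TAG_LEN = 4

def make_agent():
    return tuple(random.randint(0, 1) for _ in range(TAG_LEN))

def compatible(a, b, max_mismatch=1):
    # Hamming distance between the two tags, thresholded.
    return sum(x != y for x, y in zip(a, b)) <= max_mismatch

agents = [make_agent() for _ in range(50)]

# Out of 1000 random encounters, count how many pass the tag filter.
encounters = [(random.choice(agents), random.choice(agents))
              for _ in range(1000)]
matched = sum(compatible(a, b) for a, b in encounters)
print(f"tag-compatible encounters: {matched} of 1000")
```

Under these assumptions only about 5/16 of uniformly random pairings are compatible, so even this crude rule carves a population into preferential interaction groups - a cartoon of the boundaries and specialized groups the book describes.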
Another question I have concerns the energetics of complex systems, and the resulting (in a sense, I suppose) teleological implications. Life on earth, I would argue, is the only truly complex system we know of - one that seems defined by the self-preservation of certain organized motifs of matter...basically that of living organisms. From an energy/entropy perspective this could be a natural, spontaneous outcome of the continued interaction of highly organized energy with basic, richly mixed chemical components. It's similar to how, if you bombard water and rice with high energy in the form of heat, the rice boils and cooks, and in the process forms a pattern of evenly shaped, well-organized hexagons in the cooked rice...it seems that during the exchange of heat you are able to drive the entropy of the recipient matter down, into a more ordered state. Likewise on earth, with billions of years of sun bombardment supplying low-entropy, high-energy photons, matter will spontaneously accept the energy and has the opportunity to drive its own entropy lower, into a more specified and ordered state. The strange thing is that this seems to imply that complexity itself is defined by maintaining a lower-entropy state, which requires some form of self-replication, or at least self-maintenance, in exchange for energy. If this didn't occur, matter might be complicated, but not complex - it would be an ever-changing soup unable to establish the consistent interactions/relationships that underlie complexity, hierarchies, and emergent phenomena.
So, starting with a supply of well-organized energy and a mix of chemicals, you might get all kinds of transient assemblages of matter, complete with various agents and rules to govern them, but nothing can truly develop into complexity unless something 'catches on'...within those assemblies, something has to trigger a self-maintenance/replication process, where self-sustaining interactions can take place; otherwise it's defined more by chaos than complexity. Things would just fizzle out... So then the only purpose of complexity is to maintain itself... or at least, that's a necessary condition?
I absolutely hate analogies used to explain scientific concepts in lay-audience books, particularly in physics. This book uses an infinite number of 'clocks' in place of the wave function...winding and changing sizes this way and that to represent phase and amplitude, which was extremely distracting and confusing. I can't imagine this is any clearer to most people than just drawing a sine wave, something the vast majority of people are familiar with on some basic level...rather than constantly rambling about clocks. To add to the confusion, they occasionally supplied equations here and there, making it unclear what level of audience they were targeting. It becomes very difficult to imagine Schrödinger's equation in a universe of clocks - much more difficult than just thinking about an abstract waveform....
It reminds me of an analogy in an MRI book I read, of a man driving a car while throwing hamburgers out the window to another car, which was used to explain energy transfer (or something). At some point, mapping these analogies onto what they're supposed to represent in the real world becomes more complicated and bizarre than just tackling the actual mathematical relationships.
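For what it's worth, the whole clock picture reduces to ordinary complex arithmetic: the hand's angle is the phase, the clock's size is the amplitude, and 'adding clocks tip-to-tail' is just adding complex numbers. A minimal illustration of my own (not from the book):

```python
import cmath

# A 'clock' is a complex number: amplitude = size, phase = hand angle.
def clock(amplitude, phase):
    return amplitude * cmath.exp(1j * phase)

# Two equal-amplitude contributions arriving with different phases:
in_phase = clock(1.0, 0.0) + clock(1.0, 0.0)           # constructive
out_of_phase = clock(1.0, 0.0) + clock(1.0, cmath.pi)  # destructive

# Squared magnitude gives the probability (up to normalization).
print(abs(in_phase) ** 2)      # 4.0
print(abs(out_of_phase) ** 2)  # ~0 (floating-point dust)
```

Interference, the thing all the winding and shrinking of clocks is meant to convey, is nothing more than the second line canceling.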
Wonderful summation of decades of research attempting to render the concept of consciousness into a scientifically tractable problem. The central idea is that consciousness, while being a strictly subjective phenomenon, is nevertheless reportable, and therefore treatable as raw data. When considered in the context of various measurable brain states, the biologic 'signatures of consciousness' can be determined - in other words, the brain activity that is necessary and sufficient to support consciousness.
One of the strengths of the book (something that is often done poorly) is the rigorous picking apart of what consciousness even refers to. This word has been tossed around in so many fields of study, with so many overlapping meanings, that it's difficult to approach in an objective, scientific way. First and foremost, as mentioned, consciousness is a subjective experience...however, it is not an illusion. We all experience it...it is there, we all have a sense of what it refers to in some vague way...it is therefore 'real', and worth probing... But where do vigilance, attention, self-awareness, the ego, and other related concepts fit in? To boil it down, consciousness isn't any of those things per se; it's simply the small number of items that you are mentally experiencing at any given time. Of course you must be vigilant, and the objects in consciousness are the ones selected by your attention 'sieve', but the simplest, most sensible definition is that consciousness is just those things occupying your subjective experience at that moment - your experience. Self-awareness is not necessary - you can experience a sunset without constantly reflecting on your self experiencing it (his example). So then the ego, the self, self-awareness, etc., are just what you get when your consciousness is focused on your own body and mind, which itself becomes a mental construct. Attention, which is out of our control for the most part, may simply act as a gatekeeper for what enters our consciousness, and is based more on primitive subsystems evolved for survival, refined over time, etc. ...anyway, this is a poor summation of his ideas, but simply put, consciousness is what you can report being 'aware' of...
This all fits in with the global workspace theory, which is the main thesis of the book. In a nutshell, consciousness is a unified representation of the most likely physical reality. Your subconscious takes in a never-ending stream of rich sensory data, the neural representation of which is inherently statistical - a continually running/updating stochastic model of the world 'out there', based on sensory stimuli coded in the firing patterns of individual neurons. The job of consciousness is to collapse this model into the most salient and most probable version, represented by a smaller, more specialized subset of neurons. This coincides well with a large number of studies of the visual system in which visual representations can flutter between two states, despite both always being represented in the brain at a lower level (i.e., in the subconscious). Consciousness basically settles for one or the other, but cannot show both at the same time... The purpose of this may be to provide an efficient way to react to different sensory stimuli - if your subconscious is 70% sure a stimulus is present and 30% sure it is not, your brain is more likely to construct a conscious experience of that stimulus being present, and your behavior will follow accordingly, even if it was not actually there... and you will be right 70% of the time.
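That last point can be checked with a toy simulation (my own construction, using the 70/30 numbers above, not a model from the book): always collapsing to the likelier interpretation is right 70% of the time, and notably beats 'probability matching', i.e., sampling a percept in proportion to the evidence.

```python
import random

random.seed(1)

TRIALS = 10_000
P_PRESENT = 0.7  # subconscious certainty that the stimulus is there

collapse_correct = 0  # strategy 1: always report the likelier state
matching_correct = 0  # strategy 2: report 'present' with probability 0.7

for _ in range(TRIALS):
    present = random.random() < P_PRESENT  # ground truth for this trial
    # Collapsing always reports 'present' (the likelier option), so it is
    # correct exactly when the stimulus truly is present.
    collapse_correct += present
    # Probability matching samples its report from the same distribution.
    matching_correct += ((random.random() < P_PRESENT) == present)

print(f"collapse accuracy: {collapse_correct / TRIALS:.2f}")
print(f"matching accuracy: {matching_correct / TRIALS:.2f}")
```

In expectation the first strategy scores 0.70 and the second only 0.7² + 0.3² = 0.58, so settling deterministically on the most probable version beats dithering between the two - perhaps one reason consciousness 'cannot show both at the same time'.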
The revelation of the 'global workspace' model, to me, is that the state of consciousness itself is defined by its content... in other words, your consciousness isn't 'filled' with images of a red lamp - the subjective experience of the red lamp is consciousness itself. There is no consciousness without content. There is no stage, no theatre...no background state. Consciousness is more like an empty balloon that deflates if it does not contain something...The Eastern philosophy ideas of 'clearing the mind' to reach 'pure consciousness' are fruitless, because if you truly emptied out your consciousness of all substance, you simply would be unconscious.
Nevertheless, I haven't exactly squared all of this scientifically...the waveforms that seem to correlate with a conscious experience seem to better explain a new stimulus arriving than consciousness itself...for example, a pattern on my retina may set off a cascade of neural activity leading to specific neurons in my frontal lobes, which construct a unified picture in my mind of the most likely physical reality, but I may then hold that image in my consciousness for any period of time, and I'm not sure what happens to the transient waveforms that first appeared with the image...it may be that this wave is necessary to activate those particular neurons representing that conscious image, but once they are active, they can keep firing as long as that stimulus is attended, which may not be detectable as any particular measurable waveform....
Also, I don't need to mention that, like all scientific approaches to consciousness, no possible explanation of the 'why' of consciousness is given or even attempted...aka the hard problem of consciousness. Even if we identify the minimal, basic physical substrates of consciousness, and even if we are able to convincingly create consciousness (which can never be proven anyway), it will never answer WHY consciousness occurs, or what exactly subjectivity is anyway...why, when this neuron fires, which is just sodium moving across cell membranes, do I 'see' an image of Marilyn Monroe...this will always be in the domain of philosophy, I believe. Nevertheless, I firmly believe, if for no other reason than human curiosity, that consciousness is a real phenomenon and worth studying...anyway, what better things do we have to do with our time on this planet?
Fast-paced space opera about a war between a warrior alien race called the Idirans and The Culture, a mutualistic society of humans and intellectually superior AI 'Minds'. The action centers on a shape-changing mercenary named Horza who is tasked with retrieving for the Idirans a rogue 'Mind' which has crash-landed on a dead planet. The book is full of standard space opera tropes and the usual paper-thin characters, predictable dialogue, and over-the-top violence. The futurology, which is usually the redeeming factor of these books (for me), is a bit dated, having been written in the 80s, with a focus on AI technology, various forms of biotech/engineering, transhuman augmentation, and vague mentions of 'DNA' sprinkled around. Not so groundbreaking to read in current times, and unfortunately, it sort of lacks the goofy fun of the 'cyberspace' tech books by Gibson when viewed retrospectively, i.e., hacking mainframes from the same era. The relationship between the humans and AI within Culture society is probably the most interesting and lasting part of the book. Essentially, humans have harnessed the power of AI to completely automate industry and production, eliminating the need to work and allowing them to focus on art, entertainment, etc. (i.e., culture). Interestingly, Horza cynically sides with the Idirans, the brutish, violent warrior race, because they represent messy, smelly, biological life, while the humans are dependent on 'Minds' and are therefore an 'evolutionary dead-end'. Most of this echoes the popular idea of the singularity, which I don't necessarily agree with, but nevertheless find interesting to think about and still a common topic in sci-fi.