The Fourth Dimension: “spacey” not “time-like”?

What does that mean and where does the idea come from?

I recently (Feb 2024) came across the mathematical conjectures of Marco Pereira, which he calls Lightspeed Expanding Hyperspherical Universe Topology, or LEHU for short. As I am not a mathematician I am not able to critique his extensive (no pun intended) examinations and adaptations/adjustments of the formulae of General and Special Relativity. I do not immediately dismiss his ideas because

  • they seem to be expressed in very rigorously constructed mathematics, and
  • he applies his principles to several diverse sets of astronomical data created by numerous projects run by official government and other research organisations worldwide.

I am disappointed to see that nobody in the mainstream astronomical/astrophysical communities has taken the time to make a principled review of his underlying hypothesis or its mathematical underpinnings. It may well be that his overly assertive and/or aggressive language has put potential reviewers off. (I intend to ask him about this.)

Meanwhile, his basic thesis is that our universe is four-dimensional but the fourth dimension is not “time” as such but a spatial dimension which is expanding at the speed of light. To be clear: this dimension is definitely orthogonal (at right angles) to our “normal” 3D directions (x, y, and z) and commenced its expansion at what is called the Big Bang (BB).

In looking at his diagrammatic representations I understand that his thesis allows changes over time in the space-time distance between two separated points in space to be represented as what happens to locations on the circumference of an expanding circle. The difference in radius between the two circles denoting the start and end of the time period is proportional to how far a photon of light would have travelled in that time. But note that this is a calculational tool of the thesis, and that actual distances will change based on (more or less) Euclidean geometry and the extent to which this is affected by the expansion of the hypersphere.
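As I read it, the geometric bookkeeping behind that picture can be written down very compactly. The following is a minimal sketch in my own notation (not Pereira’s), just to make the “radius grows at lightspeed” idea explicit:

```latex
% A minimal paraphrase of the expanding-hypersphere picture (my notation, not Pereira's):
% the radius of the hypersphere grows at the speed of light from the Big Bang,
R(t) = c\,t ,
% so the difference in radius between two epochs is simply the light-travel
% distance for that interval,
\Delta R = R(t_2) - R(t_1) = c\,(t_2 - t_1) .
```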

From this fundamentally different description of space-time Marco Pereira deduces that many of the currently accepted mathematical descriptions used by physics need to be modified in certain ways and that, furthermore, certain features of our universe, such as gravity, are epoch dependent, ie have really changed during the evolution of our universe so far.

I cannot say more about that, and hopefully I have not misrepresented his thesis with even that short description.

Marco Pereira has placed many of his descriptions and explanations on Quora:

https://hypergeometricaluniverse.quora.com

Why this interests me: what it might mean for MOPECCA

What I like about Marco Pereira’s thesis is how it removes “time” as an ontological thing and lets it be, if you like, a method which we construct and use as a means of describing changes amongst things which permanently exist in their own right. That is how the MOPECCA construes time.

Another thing is that, even if Marco P’s mathematical schema turns out to be the Earth-shattering breakthrough that modern physics and astronomy both need, it will still be true that mathematics is not ontology. However, the ontological implications of this idea are that it very likely reconfigures the constraints around MOPECCA and possibly allows for an explanation of anti-electrons, which were not falling easily out of constraints made of 3D + “time”. I am ‘working’ on that now in my spare moments.

MOPECCA – does the nature of quarks indicate/demonstrate that it is the boundaries of PA, and their abutments, which determine the properties of Qnots?

This thought was prompted by wondering how the entanglements of PA(strong|colour) with PA(electro/magneto) can produce fractional electric charges.

Well, the fractional charges of quarks may only be a result of needing to allocate net positive or net neutral charge amongst the proven grouping of three quarks per stable hadron – ie proton or neutron – and a net negative charge to the electron. The MOPECCA takes protons, neutrons, and atomic nuclear coalitions of these to be Qnots of entangled PAstrong, PAelectro, PAmag, PAweak, and PAvac. Electrons are taken to be Qnots of PAelectro, PAmag, PAweak, and PAvac only.

The tripartite nature of protons and neutrons per QM is attributable to the strong colour force/s of quantum chromodynamics (QCD). These seem to be taken as ontological variations which are given and not otherwise explained.

The MOPECCA allows that the strong(green), strong(blue), and strong(red) components are orthogonal modes of oscillation manifested by PAstrong because it has the fastest intrinsic speed of propagation.

Keep in mind

  1. the MOPECCA assertion that “nothingness” is just a concept with no actual instantiation, and that
  2. each PA is its own unique manifestation of the two opposite directions of bigwards and smallwards which are motions occurring at a unique rate intrinsic to each respective PA.

These give us the basis for understanding that tubes of each PA, where constrained by being entangled/Qnotted with at least one other PA, can sustain all manner of intrinsic resonant oscillations. The strong and weak nuclear forces are described by QM as ‘short range’ forces, whereas the electromagnetic force and gravity are described as, potentially at least, reaching to infinity. The MOPECCA on the other hand describes

  • PAstrong and PAweak as each being unitary but with vastly different internal speeds of propagation, whereas
  • PAelectro and PAmag are everywhere entwined which enforces an intrinsic chirality to their combinations and a fundamental direction to each in relation to the other, and
  • PAvac which is also unitary but slower than all the others such that its filamentary tubes are much more easily disrupted and disconnected than the others.

Note that this idea of disconnections of PAvac should not be thought of as a cutting or chopping, although that in effect is what the outcome is like. Rather it should be seen as differences of resonant oscillation frequencies such that the faster PAs manifest as “stronger” forces which cause PAvac to seem to shrink back from certain encounters with other PA and to reconnect with itself when the Qnot of faster PA has passed by. The paradigm for this would be the passage of an EM photon from one part of the universe to another. It is the ‘unzipping’ of PAvac in front of the photon and PAvac reconnection behind it which governs the speed of passage of the Qnot on its journey.

PA insides versus boundaries

To be parsimonious in the way advocated by William of Occam it is reasonable to assume that each different PA, as a manifestation of existence per se, need only differ from any other in one significant feature. The MOPECCA takes this to be the intrinsic speed of propagation of disturbance through the PA in question. It considers that “c”, the so-called speed of light, is the relevant speed of causality of PAvac, and that each of the other PA has a different intrinsic speed. These different speeds of propagation of disturbance are taken to be the causes of the different strengths of the fundamental physical forces, with c being the slowest of them all, making gravity the weakest force. Occam’s Razor also provokes the MOPECCA to assert the simplest conceivable set of fundamental attributes for PA such that for and within each and all of them there are just two ‘directions’: bigwards and smallwards. It seems feasible that bigwards might have no limit other than encountering a boundary at which the PA abuts another PA. Smallwards is what occurs then, until the smallest/thinnest possible instance of the PA is reached, at which point, the MOPECCA assumes, some sort of ‘bounce’ occurs.

Strings, filaments, tubes

The reason for speaking of “tubes” of PA is that, in order for each different PA to remain connected with every other instance/part of itself – which is a foundation of the MOPECCA – it is necessary that instances of the very smallest possible extent of each respective PA are only transient. A way to understand this is to note that such a region of the PA is effectively a boundary in every direction except where it is immediately connected with two neighbouring regions. It is as near to being a mathematical point as is possible for that PA. As such, if it cannot bounce into bigwards, it seems that disconnection must occur.

The thinking here is somewhat similar to, and much inspired by, that proposed by Gerard ’t Hooft in his essay Time, the Arrow of Time, and Quantum Mechanics. In that paper he considers the fundamental nature of space-time to be analogous to the famous _Game of Life_ computer program of John Conway, but in 3D. I must admit that I was only transiently (grin) able to understand (at least some of) Gerard ’t Hooft’s reasoning about strictly deterministic Planck level quantum life histories, and how this obviates the dire paradox of the so-called “delayed choice” version of quantum non-locality. Fortunately the MOPECCA is not beholden to non-locality as such because c is taken to be the topmost speed only of changes and disturbances of the vacuum. Another deep question I would put to Gerard ’t Hooft is: Why should the whole universe apparently be subject to just one “clock”?

This question is relevant because the way the Game of Life works is that each update of the situation for each cell – which is based on the number of full or empty cells immediately adjacent to it – requires that a stored memory matrix containing the whole game ‘board’ is sequentially analysed and the outcome for each cell is written into a second separate memory matrix. Only when the whole of the new matrix has been filled with cell outcomes can this new arrangement be displayed. Obviously that cannot be how the real world works! (One could argue that this necessity for the running of simulations is a decisive argument against “Matrix” (the movie) type conceptions of reality.)
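For readers who have not seen the double-buffer scheme spelled out, here is a minimal sketch in Python of one synchronous Game of Life update (my own toy illustration using the standard Conway rules on a small wrap-around grid; it is not taken from ’t Hooft’s paper):

```python
import numpy as np

def life_step(board: np.ndarray) -> np.ndarray:
    """One synchronous Game of Life update using a separate output matrix.

    The whole current 'board' is read, and every cell's fate is written into
    a second matrix; only when that second matrix is complete can it replace
    the old one. This is the single global 'clock tick' discussed above.
    """
    new_board = np.zeros_like(board)  # the second, separate memory matrix
    rows, cols = board.shape
    for r in range(rows):
        for c in range(cols):
            # Count the eight neighbours (wrapping at the edges for simplicity).
            neighbours = sum(
                board[(r + dr) % rows, (c + dc) % cols]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if not (dr == 0 and dc == 0)
            )
            # Standard Conway rules: a live cell survives on 2 or 3 neighbours,
            # a dead cell becomes live on exactly 3.
            if board[r, c] == 1:
                new_board[r, c] = 1 if neighbours in (2, 3) else 0
            else:
                new_board[r, c] = 1 if neighbours == 3 else 0
    return new_board  # used only after every cell's outcome has been decided

# Example: a 'glider' on a 6x6 board, advanced by one tick.
board = np.zeros((6, 6), dtype=int)
board[1, 2] = board[2, 3] = board[3, 1] = board[3, 2] = board[3, 3] = 1
board = life_step(board)
```

The point of the sketch is simply that nothing in the new matrix is allowed to influence anything else until the whole sweep is finished, which is exactly the single global “clock” the question above is about.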

Of course Gerard ’t Hooft’s conjecture is not a simulation but involves an intrinsic universal oscillation of space-time on the scale of the Planck length for every pointlike volume of space-time such that Game of Life type rules of absolute existence determine what happens next at each location. Thus his conception, like the MOPECCA, relies on the fact that non-existence entails its own negation. He however assumes that each such point oscillates between existence and potential non-existence with the actual outcome each time rigidly determined by the limited number of possible conformations adjacent to each point. From this he argues that regular structures and patterns of evolution will occur spontaneously in 3D/4D as happens on the 2D board of Game of Life. And that is a very neat idea!

Meanwhile, for the MOPECCA, it seems to imply that each smallest possible stable region of a PA must be adjacent to at least three others and maybe the minimum number is four or even higher than that. This is needed to ensure that any boundary region is adjacent to at least one completely “internal” region of the PA. How big such an internal region must be in order to balance the contraction of adjacent boundaries could potentially be another variable attribute of PA.

Surface tension versus expansion

The challenge for the MOPECCA is to minimise untestable assumptions. Because of this, further speculation about the internal nature of PA is not wise. It does seem however that if bigwards and smallwards are to be coherent concepts then the smallest dimension of a stable and enduring region of any PA would have to be the cross section of a tube whose diameter is at least three or four times the smallest possible manifestation of “smallwards” applicable to that PA.

One simple analogy for visualising how this works is the surface tension of water. Smallwards is a much more drastic attribute for a surface than the van der Waals and other short range interactions of the electrons, etc in water molecules but the net effect must be similar. The extent to which this surface tension

MOPECCA – a dedication

Judith May Browning 1950-2015 was the person who, back in the mid ’eighties, first challenged me with the thought that: The opposite of ‘something’ is not ‘nothing’, it is ‘something else’. Being a bit of a slow learner, it took me about three decades to actually understand the full implications of this realisation. Eventually however, “the penny dropped” and the MOPECCA has since coalesced around this understanding.

The concept of nothingness, I now think, is an anthropocentric conceit. It seems to imply that if we cannot imagine something then it can’t possibly exist. Given that the concept arose as a rhetorical toy long before the modern concept of vacuum was discovered I think it came about as an adjunct to the concept of a supernatural Supreme Being.

I say anthropocentric rather than anthropic because the latter term only says that we see and discover that which is as it appears from our particular viewpoint because it is what it is – _already_, so to speak. We happen to be what we are and where we are and therefore the pre-existing great It appears to us as It does. Anthropocentric on the other hand says that we are special and therefore where we are is ‘special’ in some way that needs further explaining. In short, ‘anthropocentric’ implies everything is ‘about us’ whereas, in a 13.8 thousand million year old universe, we are just lucky observers who happen to have evolved and who are still learning, very slowly, how to properly take responsibility for our own actions.

The MOPECCA is anthropic only. It quite naturally implies that ‘our’ universe is unique only in the sense that it is one amongst potentially infinitely many others which are probably all different and probably very few are connected with each other.

The realisation that nothingness is basically a self contradictory fantasy implies that the vacuum of our universe has properties which constrain, if not actually define, what can happen here. The MOPECCA currently asserts that _c_ is a property of the vacuum (PAvac) which manifests as the fastest speed at which any other PA can disturb PAvac. Occam’s Razor type reduction implies that _c_ is not necessarily a limit applicable to any other PA within or to itself. Insofar as each PA is a unique network of being, inter-penetrated and/or entangled with each other PA out to and beyond the ‘edge’ of our universe, the phenomenon of spooky action at a distance is explained quite naturally.

Evidence for existence of DLS in the brain

My F/b response to Bill Trowbridge about evidence for existence of dynamic logical structures (DLS) in the brain

Bill’s question:

  • Mark A Peaty   “Do you have a good reference with neurological evidence for these DLS ? I’d prefer something general, maybe summarizing all we know about them (if such a thing exists), rather than a deep dive for a specific case. But whatever … use your discretion.
    • Only in cortical columns?
    • Connecting nearby columns?
    • In other areas of the brain?
    • Distributed everywhere?
    • Only in certain area?
    • Are there specializations from place to place?”

My Answer:    Bill, I learned of cortical columns from reading Vernon B Mountcastle’s contribution to the book The Mindful Brain which he and Gerald Edelman wrote. From Gerald Edelman’s contribution I learned of the concept of neuronal group selection (AKA neural Darwinism). G. Edelman used the term repertoires for the informational-causal effectiveness of such coalitions. He also pointed out that neuronal groups as such are by far the most likely to be the underlying parts/bearers of mental information because having a large membership gives:

  • robustness through redundancy and the capacity for “graceful degradation”,
  • widespread interconnectivity across different cortical areas (and elsewhere), which allows exactitude and nuances of meanings, and
  • the possibility of associations, because individual neurons can be members of many different coalitions. There are other useful attributes also but I can’t think of them right now 😉

Jean-Pierre Changeux in his book Neuronal Man called them singularities and explained that their ‘figurative meaning’ is embodied in the locations of their component parts.

Cortical columns are the fundamental subcomponents of neural cell assemblies and they are spread all across the cortex, a bit like the pixels of a digital screen. Each local area of cortex therefore has a two-dimensional array of columns which can each embody different features of two different environmental (or conceptual) variables.

I first read of the fine detail and processing potential of these arrangements in a Scientific American article in the early ’90s. Some researchers had experimented with bats suspended in little swings which set them moving back and forth. Electrodes set into the bats’ cortices showed various two-dimensional representations of things like ‘target’ angular direction versus delay time of echo, ‘target’ direction versus frequency of echo, and so forth. Interlinking of these cortical arrays with other arrays receiving signals from primary processing arrays allows for cross referencing of the analyses performed by primary sensory sheets and synthesis of an analogue representation within the bat’s brain of a moving target insect’s location, velocity, size, and probably other significant features. These ‘other features’ would enable the bat to learn to identify target types and how best to catch them.
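To make the ‘two variables per sheet’ idea concrete, here is a small toy data-structure sketch in Python. Everything specific in it (the direction and echo-delay axes, the bin counts, the Gaussian tuning widths) is an illustrative assumption of mine and is not taken from the bat experiments themselves:

```python
import numpy as np

# A toy 'cortical sheet': a 2D array of column activities, one axis per variable.
# Axis 0: target direction (degrees); axis 1: echo delay (milliseconds).
directions = np.linspace(-60, 60, 25)   # 25 direction bins (illustrative)
delays = np.linspace(1, 20, 40)         # 40 echo-delay bins (illustrative)

def sheet_response(target_direction_deg: float, target_delay_ms: float) -> np.ndarray:
    """Activity of every column as a smooth bump centred on the target.

    Each column is 'tuned' to one (direction, delay) pair; nearby columns
    respond partially, giving a population code that downstream sheets can
    cross-reference with other sheets (direction vs frequency, and so on).
    """
    d_grid, t_grid = np.meshgrid(directions, delays, indexing="ij")
    # The Gaussian tuning widths below are made-up numbers, purely for illustration.
    return np.exp(-((d_grid - target_direction_deg) ** 2) / (2 * 10.0 ** 2)
                  - ((t_grid - target_delay_ms) ** 2) / (2 * 2.0 ** 2))

activity = sheet_response(target_direction_deg=15.0, target_delay_ms=6.5)
peak = np.unravel_index(np.argmax(activity), activity.shape)
print(f"most active column: direction {directions[peak[0]]:.1f} deg, "
      f"delay {delays[peak[1]]:.1f} ms")
```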

This U/T video, from 22:20 onwards for a fair while, deals specifically with Vernon B Mountcastle’s discovery. The earlier parts of the video discuss the cortex and its layers (which are visible under a microscope), ie the similarity of neuron types prominent at specific distances in from the cortical surface.

NB, I am still processing his (Jeff Hawkins’) assertion that each column has a model of the environment embodied/processed within it. I’m thinking this is akin to the idea that each part of a hologram has the whole of the subject image within it.

His statement may be true but I think that in order to make sense of it one needs to consider the effect upon each and every neuron of its participation in each of the (potentially vast number of) such activations it takes part in.

As I understand it the hippocampus is coordinating and sustaining several sets of potential global gestalten, and the prefrontal cortex is ‘deciding’ which is the most important to be fully activated as the most up to date snapshot of self-in-the-world. The basis of this decision rests upon the emotional charge that has been connected to the component memory/sensory data of each representational ensemble. That emotional charge is the ‘feeling’ related to the item and is the cortical representation of the initial emotional reaction to the raw perceptual information.

I read recently that researchers have discovered that the hippocampus outputs its signals in two waves 90° out of phase with each other. I’m guessing that the first output provokes/sustains/updates the currently active global model of self in the world, whereas the second wave sustains all the other items in short term memory and the parts of other items that were part of the model of self in the world earlier in the day. I imagine there is a kind of fading queue of such ‘new’ memories which are sustained until sleepy time by occasional bouts of reactivation due to enhanced spontaneous bursts of signalling by the member neurons. During sleep of course new memories are consolidated, which may well involve the updating of older forms of the representations involved.

I’m hoping you can see why I use the term dynamic logical structures (DLS) for each of the neuronal ensembles which becomes a self sustaining coalition due to the mutual reciprocal “re-entrant” cortical signalling they engage in. A point I feel is important is that, for the time such DLS are active, they fulfil the basic requirements of a real thing which exists. Each one is a process which acts as a pathway for degrading the energy of its particular environment, which is a characteristic of all self-sustaining processes, and they also have effects upon their immediate environment which result, in either the short term or the long term, in increasing the probability of their reactivation in the future.

The danger of utilitarianism

Intrinsic worth versus pure self interest – the shortcomings of utilitarianism

It seems to me that maybe the assertion of intrinsic worth is the cornerstone for any comprehensive ethical system. Religious value systems posit a single supreme being or community of divine beings as the source of value but in the modern era this is not really open to reasonably sceptical people.

I have my doubts that any purely utilitarian way of thinking will really satisfy all reasonable requirements:

  • the apologists for the rich and powerful (eg so called ‘rational economists’) are too strongly tempted to rationalise the greed and excesses of their heroes leading them to support ‘utility monsters’, for example the corporate executive cowboys and bandits who vote themselves millions of dollars in ‘bonuses’ bearing no relation to the value of any services performed; and
  • purely utilitarian thinking ultimately makes people into objects because there is nothing to counterbalance the alienating efficacy of the rational instrumental approach to relationships entailed in a purely utilitarian worldview.

I agree with the writer Terry Pratchett (of Discworld fame) that ultimately there is only one sin: treating another person as a thing! *** (f1.0) I believe that the assertion of intrinsic worth is very reasonable in the light of evolutionary theory about genes and memes and of observations of human behaviour in situations where people can be held responsible for their actions. The assertion of intrinsic worth is none the less exactly that: an assertion, which must be made as the result of conscious decision making. It involves a personal risk, ie that in being consistent with your principles you increase the opportunities for others to cheat on you, but the payoff is in experiencing an affirmation of life and an ever deeper insight into how other people ‘tick’ and how the world works.

***************

*** This means that the bureaucratic ordering and functioning of work organisations is ethical only if sufficient attention is paid to the intrinsic worth and needs of people doing their jobs. That is, the ‘thing’ is the position not the person! It is the role, with its entailed authority and responsibilities. This is true in all cases, ie government and non government.

Further rejection of pre scientific ontology

Second response to Amy’s ontological stance

This was in response to the following question: “Mark A Peaty Would you agree that something cannot bring itself into existence?”

IMO our experience of being here now is intrinsically paradoxical, always was, and always will be. The FACT is we live in an evolving universe which is “bigger” than us in every conceivable way. IT is not limited by our thinking but the best thinking that human beings can do at the moment, in the way of conceiving testable explanations for things, indicates that IT is at least 13.8 billion “years” old. Furthermore it is reasonably asserted that IT’s basic ingredients have been around for all that time.

NOBODY knows what sparked that original expansion but I contend that it is more reasonable to suppose that eternity is a lot bigger than the wildest conjectures of human beings. Complementary to this is the likelihood that existence, or existences (in the sense of primary absolutes which may each be a unique and different manifestation of “IS”), are also not limited to what we might imagine. And think: the existence of anything seems to involve a separation between that which it is, and all that which it is not. This seems to me to imply that for every “something” which you want to know about, there is a “something else”….. *not* nothingness, which is, arguably, a self-contradictory or even incoherent concept.

On the basis of these thoughts I contend that the religious doctrines of the pre scientific universe, which were based on ideas of people whose experience was limited to the powers of the naked senses, should not be imposed on the minds of people now. They are, after all, just conjectures, like everything else which is called metaphysical.

Quite frankly I consider appeals to G/god/s of any gender in relation to existence to be a trivialising of thought about what we are and about the awesome universe we inhabit.

In speaking of scientific method, I, as an ex-Xian, use the term “advent” because that is the term Xians use concerning Jesus of Nazareth and his supposed Godhood. I use the term because the effect of the advent of modern scientific method in human culture on Earth, has been far greater than that of any of the prophets or G/godmen. The application of SM has basically turned us into a different species because it has changed our relationship to this planet and to all other life on Earth.

For the removal of doubt, I think the advent of SM some 400 or so years ago is equivalent in importance to the advent of language with versatile grammar some 150K years ago, and to the advent of fire usage more than a million or two years ago, and to the advent of tool making, however many millions of years ago when that occurred.

Rejecting the CE 1078 ontological argument of Anselm of Canterbury

My response to Amy S re ontological argument – F/b

Recently on the Facebook group Philosophy, Ethics, Sociology & Psychology a discussion thread was started with the following opening question from Tyler W.:

“If mathematics isn’t ontologically real, how can it be used to describe reality via mathematical scientific formulae? If it is ontologically real, why do we need science? The presence of mathematics in science renders science either absurd and invalid (if mathematics has nothing to do with reality), or redundant (if mathematics defines reality). Either way, mathematics falsifies science. What do you all think?”

  • I responded to the following suggestion by Amy S.: “Perhaps it might help to start with defining the ontological argument before jumping into the mathematical side of it?” Amy posted a F/b link to a Y/t video.
  • > This is the direct link to the Y/t video <

Amy S. I watched that video and I disagree with its conclusion and its assumptions. For example, tritely saying that “we can imagine a maximally great being” is like claiming we can imagine infinity, which in fact we do not. What we actually do is visualise for ourselves the biggest thing we think we can and then sort of say: Bigger than that! In other words we use some kind of mental shortcut to which we assign a symbol, after which we just imagine the symbol – and whatever endless algorithmic process is associated with it – and leave it at that. The same goes for zero, which is either a kind of street sign along a number line or it is specially defined as denoting “the empty set”. Let’s be honest, what could be more artificial than an empty set? Surely it is a metaphor, and maybe a powerful one, but in terms of describing material reality it is as problematic as the smile on the face of the Cheshire Cat.

That kind of sleight of hand and mind was OK for the pre scientific universe but now, in the Modern Era, AKA the Anthropocene, it is not good enough.

My emphatic objection to the OP contention is that, while valid mathematical equations are indeed “discovered”, they are based upon mathematico-logical structures which are made out of mathematical objects, not real objects. There is therefore no a priori reason why the real world should conform to the expectations of mathematicians, just like there was no, and still is no, a priori reason why the real world should conform to the expectation and imaginations of medieval scholars.

Arguments against the existence of Consciousness at the physically quantum level of existence

IMO there are potentially several quite coherent arguments against the existence of consciousness (“C”) at the level of existence (= orders of magnitude) described by Quantum Mechanics. Any such coherent arguments are arguments against the conjecture of panpsychism. I think it is important to uncover and set out such arguments in plain English so that other ordinary people like me can concentrate our minds on explanations,  theories, and conjectures, which are in line with modern scientific findings concerning psychology and neuroscience (ie, what Granny Weatherwax called _headology_, what I like to think of as good quality headology anyway).

NB, this is a work in progress so will be edited as I go along. Any constructive comments and criticisms will be gratefully received.

Three approaches I can think of are:

  1. through looking at what quantum mechanics is actually about
    • in general terms of course rather than the hideously complex mathematics it requires as a scientific tool,
  2. through looking at what information is in reality, and
  3. through what might be called mereology which is a technical name for the study of parts and wholes.

1. What quantum mechanics is actually about – in general terms

Quantum mechanics (QM) is the mathematical system which describes the behaviours of the smallest measurable items and amounts of the ultimate constituents of our world. QM uses mathematical structures called fields to describe the fundamental forces of nature and treats what otherwise we call particles as being localised vibrations, rotations, and point-like concentrations of these various fields. We don’t need to go into details here and I am not competent to make pronouncements about that kind of mathematics anyway. There are some important points to consider though:

a/ QM is extremely successful at describing how (electrically) charged particles will move within certain carefully prescribed situations and this has allowed the creation of all the portable digital electronic devices which are now used everywhere in the modern world,

b/ QM successfully describes attributes and behaviours of the ultimately smallest constituents of our universe (that it is possible to detect and measure so far anyway) and these are different from the things of the world that exist at the scale of size that we normally deal with and that we have evolved to sense, to use and to think about.

  • For example it is never possible to know both exactly where a fundamental particle is and its speed and direction of motion (its momentum). (The standard form of this trade-off is sketched just after this list.)
    • The more exact our knowledge of either its location or its momentum is, the less exact is our knowledge of the other attribute.
  • Another example is that a pair of quantum particles can become “entangled” which means they have interacted such that certain of their quantum attributes are interconnected, even though the two particles may  become separated to quite enormous distances.  It has been demonstrated conclusively that testing of one of the quantum attributes, called ‘spin’, of one of the pair of particles affects the other particle such that if the one tested is found to be “spin up” then the other one will be “spin down”, or vice versa.
    • The “weird” aspect of this is that, until a test is done the state (of that attribute) cannot  be known for either particle  and no signal passes between them. In fact the correlated fixing of their respective states  is to all intents and purposes, instantaneous.
    • Technically speaking ‘instantaneous’ in this context means that a signal would have to travel between them faster than the speed of light, “c”,  but c has been shown to be the fastest possible speed for a causal effect in our universe.  Albert Einstein, whose theories of Special Relativity and General Relativity have been experimentally verified and which depend on c being the fastest possible speed in the universe, referred to this inexplicable synchrony as “Spooky action at a distance”.
    • NB: something to note is that the two particles of each pair involved in such an entanglement experiment must not interact with any other particles in the period between their initial entanglement and the test event. 
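For what it is worth, the trade-off described in the first bullet above has a precise standard form, the Heisenberg uncertainty relation (sketched here in ordinary textbook notation, nothing specific to this essay):

```latex
% Heisenberg's uncertainty relation: the spreads in position (x) and momentum (p)
% of a particle cannot both be made arbitrarily small,
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2} ,
% so squeezing \Delta x down forces \Delta p to grow, and vice versa.
```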

The facts of these QM attributes and behaviours have provoked a variety of interpretations concerning what are, or are not, necessary implications of these facts, ie there is a question about the extent to which human consciousness is necessary for QM experimental results to actually occur. (Most of us consider that the existence of our universe for 13.8 billion years before humans came on the scene is a reason to be sceptical about that.) QM however is a statistical prediction system. As mentioned earlier, it is impossible to say exactly both where a particle is and what it is doing, so exactly predicting what individual particles will do is, by definition, impossible.  Thus the success of QM, which has allowed the creation of some of the world’s biggest experimental devices, the particle accelerators, lies in its ability to specify the probabilities  of quantum events occurring.

IMO there are some interesting implications of this statistical determinacy. For one thing it makes it extremely unlikely that the biochemical processes which make up living entities rely on anything approaching the exactitude needed for controlling the paths of particles moving at speeds close to c. The atoms and molecules in the cells of our bodies are vibrating and bouncing together due to the ambient thermal energy of 37 degrees centigrade, so their relative motions approximate to some proportion of the speed of sound in water. Is it not reasonable to assume that this jiggling around, which is the basis of what is called Brownian motion, ensures that any kind of quantum entanglement which occurs lasts only as long as it takes for an atom, ion, or molecule to bounce from one neighbour to another?
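As a rough back-of-envelope check on that intuition (my own numbers: a typical spacing between water molecules of about 0.3 nm, and the roughly 1,500 m/s speed of sound in water mentioned further down):

```latex
% Approximate time between molecular 'bounces' in liquid water:
\tau \;\approx\; \frac{d}{v}
     \;\approx\; \frac{3 \times 10^{-10}\ \text{m}}{1.5 \times 10^{3}\ \text{m/s}}
     \;\approx\; 2 \times 10^{-13}\ \text{s}
% i.e. a few tenths of a picosecond, an extremely short window for any
% delicate entanglement between neighbours to survive.
```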

There is much uninformed speculation and conjecturing that gets written down and posted on social media and in books also which conflates human consciousness with features of the world described by QM. In fact though, the only real similarity they have is that they are both considered “mysterious”.

2. Information

There are many ways of defining information, some simple, some complex; here I want to keep things simple and to the point. So let us say that information in its most general sense is something or other which, as well as being just itself, is about something else, ie something other than itself. Another way to say this is that something can be informative when its appearance indicates something about something else which we would not have known otherwise. 

The simplest way that has been expressed is “information is that which reduces uncertainty”. That is a mighty fine, minimalist, definition but what needs to be added is that it always applies within a context. So the uncertainty of someone or some creature – or some information processing device – about something which may concern them/it is reduced by the appearance of something or the change of something in their surroundings. Another important point is that whatever the informative thing or event is, it has to be somewhere and made out of something, or has to be a change occurring in something which really exists. I hold the belief that anything which really exists must be somewhere now. Some people find that idea hard to accept but I call it reality. I choose to summarise this viewpoint as: information is that part or aspect of the structure of something which can be about something other than itself.
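The phrase “reduces uncertainty” comes from Shannon’s way of quantifying information; a minimal sketch of that standard definition (which is about signals and contexts generally, not about the MOPECCA specifically) is:

```latex
% Shannon entropy: the uncertainty of a source whose outcomes have
% probabilities p_i is
H \;=\; -\sum_i p_i \log_2 p_i \quad \text{bits},
% and a received signal is informative to the extent that it lowers H.
% Example: learning the outcome of a fair coin toss removes
%   H = -\tfrac{1}{2}\log_2\tfrac{1}{2} - \tfrac{1}{2}\log_2\tfrac{1}{2} = 1
% bit of uncertainty.
```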

There are of course a whole bunch of subtleties which can arise to confuse us but IMO the main one is that some ‘things’ can seem to exist, but then seem to disappear but then may seem to come back into existence. I believe something like this definitely happens inside human, and other, brains. This is easily accounted for though if we take the brain in question to be part of the context and realise that, due to neuronal plasticity and epigenetic changes, the brain has been changed in such a way as to be able to recreate the particular distinctive activity whenever a relevant stimulus/signal is received from the environment or through other activity within the particular brain itself (AKA memory!).   

So the crux of the argument from the point of view of information per se, is that quantum particles, be they electrons, photons, protons, or whole atoms or molecules, are simply what they are. Generally speaking they are not about anything other than themselves. It is true that many molecules within living cells are very complex and can have very distinct ways of interacting with other molecules within the cell or outside the cell’s membrane, but these activities are to do with the building and maintenance of the cell and its biochemical interactions with others. Furthermore these interactions are powered, as much as anything, by the random thermal jostling of water and other small molecules which are moving at the speed of sound characteristic of such liquids. For salty water at 37 deg Celsius the speed is somewhere near 1,500 m/s, or between four and five times the speed of sound in air.

The reason for mentioning the high velocity of water molecules inside the cells of our bodies is that there are those who surmise (I don’t think the word theorise is appropriate) that quantum mechanical effects inside what are called microtubules within our brain’s neurons may be the basis of long distance connections between cells. Microtubules actually form the internal ‘skeleton’ of the cell, holding the various organelles within the cell in place and allowing the cell to maintain its shape and/or move. They also form a scaffolding which allows motor molecules to drag large proteins and vesicles along the surface of the microtubule from place to place within the cell. Whilst microtubules are helical structures which do form a tube shape with an interior lumen, there is no reason to suppose that this internal space is sufficiently isolated from the rest of the cytosol to maintain some sort of quantum isolation unit that is spookily entangled with similar units in other cells! I think it is much more reasonable to accept that the local equivalent of constant Brownian motion restricts quantum entanglement of the particles involved to their nearest neighbours.

3. Mereology – the study of parts and wholes

The “panpsychism” concept suffers from a mereological mistake. If mereological is not the right word, my apologies; it is still a mistake.

Reason: if atoms and molecules or whatever all have their own wee bit of  “C”, what is that C about?

Answer: the teeny weeny bit of C is about being that atom or molecule. This strongly provokes the question: Why should a whole bunch of separate little C’s become a C which is about something which is not those separate particles, but is about, not just the amalgamated assemblage of particles, but the world that the assemblage of particles is within?

IMO that question stands until someone can provide a coherent explanation and reasonable description of the main mechanism/process purported to underlie panpsyche. This is never provided however. Panpsychism is always put forward by persons asserting their own disbelief that scientific method has any chance of explaining subjectivity, ie: why it can be like something to be an embodiment of self-awareness.

Mereology is a term used by academic philosophers who use it for academic purposes. Sometimes this involves an attack on the idea of emergent properties, ie the apparent fact that in many situations a collection of smaller things when associated together act collectively in a way that could not be reliably predicted from the properties of the individual constituents. Indeed I have seen the term ‘mereological mistake’ applied to descriptions of the way neurons act together to create representations of things external to the brain. I hope my argument above shows how that cuts much more strongly against vitalistic concepts like panpsychism.

IMO for lay people it is more helpful to speak in terms of the nature of information, as above already, which enables us to describe things in terms of dynamic logical structures and their functional potential.

Response to Stanislav T. clarifying why I do not equate C with (all of) mind

Mind

I take the word _mind_ to be, by and large, “what the brain does”, although I am happy to exclude various biochemical/hormonal processes related to homeostasis from the term mind. I take the view that one’s mind is, effectively, one’s model of the universe and it is made up of dynamic logical structures (DLS) which, when active, represent features of the world, be they things, relationships, perceptual qualities, muscle movement instructions, or whatever else. I take it also as given that DLS can be active without being a direct part of conscious awareness at that particular moment.

Consciousness (C)

In fact it seems that several strands of mental processing can be going on simultaneously in the brain such that, at a particular moment, only a subset of mental activity is part of C. I am satisfied that my memories of my own experiences in a whole lot of situations confirm this, which is why I define C as rememberable awareness. The emphasis on “rememberable” is because, in my understanding at least, the mental process of attending to things involves allocating hippocampal processing space, amongst other things, so that what is significant at the time can be remembered in future. In brief (grin) I understand the basis of what I am calling C or rememberable awareness to be as follows. It is the _updating_ of one’s model of self in the world (hence: UMSITW) and the model (MSITW) is composed of DLS which represent: 1/ currently significant features of the world, 2/ currently significant features of self, and 3/ currently significant relationships between 1 & 2. Because active DLS are self sustaining processes (albeit potentially quite transient) which affect the world around them, they are *things which exist*. This is why there is indeed something within the brain which it is like something to be. QED

Can information be destroyed?

Information can be destroyed

I think it is not true to say that information cannot be destroyed. 

I mean, it may well be true that quantum numbers, or rather the fundamental quantum structural features denoted by the various quantum numbers, may continue to exist forever but they do not necessarily remain in the same structural conformation. The reason this is relevant IMO is that information per se is an aspect of the structure of something or other. In particular we can say that information is embodied in the part or aspect of some structure which can represent something other than itself.

Another way to state this key fact is: information is always about something and always exists within a particular context. It is the context of the situation which allows the particular feature to correspond to the (or the state of the) other thing which it informs about.

One of the assumptions of modern physics is that the total amount of energy of the universe does not change. This cannot be proved but as a working hypothesis it apparently holds true in all the carefully controlled experimental situations investigated so far that: the detected and measured amounts of energy and mass/energy equivalence going into the experiment equal the energy and mass/energy equivalence coming out of it. It also seems to be the case that, to the extent measurable, the quantum numbers – mentioned above – are conserved. It is my understanding that many people take the conservation of quantum numbers “in the universe” to be an indication that information, like energy, is neither created nor destroyed. I think this latter idea is wrong; I think it is based on a conflation of structure with information, which are not the same thing.

I believe this is so because what is not always conserved is the way the quantum numbers are combined. IE, in processes of nuclear fusion and decay for example, quanta related to the weak nuclear force and lepton number either arrive or depart at, or close to, the speed of light and from or to directions that either cannot be known or could only ever be known very imprecisely. In other words there is no mechanism – in principle – by which they can be tracked. Thus the “history” of neutrinos newly arriving to precipitate a decay is simply not known, and the subsequent adventures of neutrinos produced by a fusion can never be written. This means that what might otherwise be taken to be a potential fact or statement of relationship concerning the event is effectively a random arrival, or a randomising disappearance. 

Entropy is a universal fact about our universe. It is essentially a consequence of the seemingly endless expansion of our universe which guarantees that there is always going to be more space available for slow things to be rearranged in and for super fast things to just disappear into, or occasionally, appear out of. At our human, “classical”, scale of things it is clear that information is being lost all the time. The things we make and use, the places and people we know, all change over time; and things and people eventually disappear.

As human beings who live within, through, and by means of a description of the world, we can nonetheless strive to understand and nurture those people, things, and principles we consider most important. That is what philosophy is about and what our daily toil is for. As far as I can see there is no quantum ‘magic’ which can reverse the endless changing and aging of our universe. 

I think the most precious things which have the greatest potential for enduring are good ideas and useful behaviours, in other words beneficial memes. This is what lay philosophy is about!