Thursday, August 23, 2012

System Reaction to Looming Criticality

On approach to a phase transition, the system begins to contemplate and move violently between states of existence. The system feels the torment of the critical point, and in turn sways and jerks between the two states, not knowing how it should behave. Large variability is inherent, and upon reaching the critical point, the system loses its character. No longer is it described accurately by any usual metric; instead, the scales of its measures are its only useful description. In the end, despite all of its torment and rebellious swings, the system falls into place. Only its struggle at criticality is remembered, named for its universal features.
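Stripped of the poetry, the claim is measurable: as a control parameter approaches its critical value, the restoring force vanishes and fluctuations grow beyond any single scale. A minimal sketch of this (a toy bistable system of my own choosing, not any particular physical model):

```python
import numpy as np

rng = np.random.default_rng(1)

def fluctuations(a, steps=200_000, dt=0.01, noise=0.3):
    """Overdamped motion in the potential V(x) = a*x**2/2 + x**4/4.
    a = 0 is the critical point; for a < 0 the single state splits into two."""
    x, samples = 0.0, np.empty(steps)
    for t in range(steps):
        x += (-a * x - x**3) * dt + noise * np.sqrt(dt) * rng.standard_normal()
        samples[t] = x
    return samples.var()

# Sliding toward the critical point: the variance (the "violent swaying")
# grows as the restoring force flattens out.
for a in (1.0, 0.2, 0.02):
    print(f"a = {a}: variance = {fluctuations(a):.3f}")
```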

Friday, February 10, 2012

Graph theory and economies.

I've been teaching myself about graph theory and network dynamics lately. The topics are some of the most fascinating I've come across, and I've been thinking about how the predictions of different graphs may be applied. One application that strikes me as promising is an economy. How to apply graph theory to an economy may be tricky, though. Here, I'll try to explain how I think a model might be developed (which has probably already been done, so this is purely my own intellect at work).
The basic pieces of a graph are vertices and edges. Edges connect vertices, so paths may be determined by following edges between nodes. If a path cannot be found between two nodes, then those two nodes belong to separate, disconnected components; a graph may therefore contain multiple disconnected components. Furthermore, directed graphs contain edges which define how two nodes are connected: there is a one-way path from one node to its neighbor, but there may not exist a path from the neighbor back to the starting node. Lastly, edges may be weighted, so that there can be a greater "flow" of something between two nodes. However, the topology of a graph is what most people study, since edge weights are not usually available.
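A minimal sketch of these pieces in Python (all names here are just for illustration):

```python
from collections import defaultdict

# A directed, weighted graph as an adjacency map:
# edges[u][v] = weight of the one-way edge u -> v.
edges = defaultdict(dict)

def add_edge(u, v, weight=1.0):
    edges[u][v] = weight

def reachable(start):
    """Every node that a path from `start` can reach by following directed edges."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(edges[node])
    return seen

add_edge("A", "B", 2.0)   # one-way: A -> B with "flow" 2.0
add_edge("B", "C")
add_edge("D", "E")        # D and E sit in a separate, disconnected component

print(reachable("A"))     # {'A', 'B', 'C'} -- no path reaches D or E
```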
I think a model for an economy could be built from the individuals or organizations who make trades and the resource flow between them. In this case, a business would be a node, and the people and businesses it pays (for one reason or another) define directed edges from the business to each recipient. If available, these edges may be weighted by the amount of a resource (let's go with money) exchanged from the business to its partners, employees, and governments. Nodes which grow due to interest, like banks, may have an edge directed back into themselves, scaled according to the inward edges. Corporations which hold multiple companies under their umbrella could be represented as black-box nodes: edges go into the node, but each edge may really go to a specific portion of the corporation. This makes corporations sub-graph containers; they are still a node, but the inward and outward edges ping around its companies. Black-box nodes might also be useful for looking at different scaling levels of a very large graph. Organization by government level may provide a decent scaling definition: individuals live within a town, a town in a county, a county in a state, a state in a country, and so on. Metropolitan areas, provinces, territories, etc. may be included similarly at appropriate levels, and a similar approach may be taken on the business side. Governments may act as apexes of their respective levels (city governments for towns, county governments for counties, etc.) since they receive tax revenue from all other nodes within their level. An anarchy-style economy would have no apex.
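Here's how such an economy graph might be wired up, as a hedged sketch using the networkx library; the nodes, weights, and flows are entirely made up:

```python
import networkx as nx

# Toy economy: nodes are people, businesses, and a government apex;
# a directed edge u -> v with weight w means u pays v an amount w per period.
G = nx.DiGraph()
G.add_edge("employer", "worker", weight=3000)   # wages
G.add_edge("worker", "grocer", weight=400)      # purchases
G.add_edge("grocer", "employer", weight=1000)   # wholesale orders
for node in ("employer", "worker", "grocer"):   # taxes flow up to the apex
    G.add_edge(node, "city_gov", weight=100)
G.add_edge("city_gov", "worker", weight=50)     # services and transfers

# Weighted in/out flows of money for each node:
inflow = dict(G.in_degree(weight="weight"))
outflow = dict(G.out_degree(weight="weight"))
print(inflow["city_gov"], outflow["city_gov"])  # 300 50
```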
Some predictions I wish I could explore are:
There could be a huge difference in topology alone between capitalist and communist economies. Consider the US: there may be less flow to the central government and less flow from the central government. Alternatively, consider China: there may be greater flow to the central government along with greater flow from it. Both of these would be coupled with a greater number of edges between non-government nodes in the US than in China. In terms of apex nodes, the US should have a smaller apex (by total inward and outward edge weight) than China.
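One hypothetical way to compare apex sizes across economies, assuming the toy graph representation sketched earlier:

```python
import networkx as nx

def apex_share(G, apex):
    """Fraction of all money flow in the graph that touches the apex node."""
    total = G.size(weight="weight")  # sum of all edge weights
    touching = G.in_degree(apex, weight="weight") + G.out_degree(apex, weight="weight")
    return touching / total

# Sketch: an economy whose apex handles most of the flow.
centralized = nx.DiGraph([("a", "gov", {"weight": 90}), ("gov", "b", {"weight": 80}),
                          ("a", "b", {"weight": 10})])
print(apex_share(centralized, "gov"))  # ~0.94
```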
Transportation networks are structured similarly, with a greater number of transactions (edges) being local (physically shorter paths). Therefore, transportation networks may help determine a framework for an economic network.
Internet companies receive inward edges (money from others) over much greater physical distances. These edges may connect otherwise nearly disconnected regions of the graph, so they may be representative of bridges within an economic graph.
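For the bridge intuition, a quick hedged sketch (networkx's bridge finder works on the undirected skeleton of a graph):

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([("a", "b"), ("b", "c"), ("c", "a"),   # one local economy
                  ("x", "y"), ("y", "z"), ("z", "x")])  # a distant local economy
G.add_edge("c", "x")  # a long-range "internet company" transaction

# A bridge is an edge whose removal disconnects the graph.
print(list(nx.bridges(G)))  # [('c', 'x')]
```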
Changes to economic and tax policy almost always cause disruptions to network structure and to the ability to create edges. Edge creation and the strengthening of inward edges should be signs of a strengthening and/or desirable economy. It'd be interesting to see if there is a measure of these two aspects of edges, relative to the possible number of edges within an economy graph, which identifies a desirable economy (matching subjective interpretations of current measures like unemployment rate). Essentially, economies perform best when the rules are consistent.
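A toy version of such a measure might compare realized edges against possible edges and track average inward strength; this is purely an illustrative stand-in, not a validated economic indicator:

```python
import networkx as nx

def economy_indicators(G):
    """Toy measures: edge density (realized fraction of possible directed edges)
    and mean weighted inflow per node. Illustrative only."""
    n = G.number_of_nodes()
    density = G.number_of_edges() / (n * (n - 1))   # self-loops excluded
    mean_inflow = sum(w for _, w in G.in_degree(weight="weight")) / n
    return density, mean_inflow

G = nx.DiGraph([("a", "b", {"weight": 5}), ("b", "a", {"weight": 3})])
print(economy_indicators(G))  # (1.0, 4.0)
```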
Turmoil within a black-box may call for a swell in that level's apex node. That is, governments may need to grow to help settle the inner workings of their level. When a government should swell, however, may depend on the extent of weakening throughout its level. This suggests it may sometimes be useful to adopt a larger-government model for a time if a level's constituent nodes are too weak, while stronger economies may call for smaller government/apex nodes (in the case of capitalism). Therefore, economic policy might be best implemented conditionally. The difference between big and small government comes down to how much trust and control one is willing to grant an apex.
I have other predictions I've thought of, but I've exhausted my mental capacity for now.


Wednesday, November 30, 2011

Weird & random realizations

Yesterday, my advisor and I were discussing comparative words and how they are ambiguous without some reference. This was all in relation to some correspondence she had with another professor about some phase transition stuff in our evolutionary model. Anyway, we had thrown out the word "warm" as an example. I then realized that when someone refers to something being warm, the default reference is usually the speaker's body temperature. Simple, sure, but I don't recall ever thinking that explicitly.
The other thing I realized is why children who are beginning to talk end words like "mom" and "dad" with an "a". I think it is because they are still developing the muscles which aid in speaking, so it's easier to release those muscles quickly and airily when saying "mom" or "dad". Finishing those words requires restraint on the "m" and "d", especially after working out the "ma" or "da" sounds. It's probably easier for the muscles to repeat a soft sound like those two, and/or making an "ee" sound is just more complicated than the "ah" sound. Therefore, it is probably most common for children to say "mama" and "dada".

Adam D Scott

Center for Neurodynamics
Department of Physics & Astronomy
University of Missouri at St. Louis
http://www.umsl.edu/~neurodyn/students/scott.html

Wednesday, November 2, 2011

existential comment

We will never understand who we are as long as we do not understand what we are composed of. Saying that we are "children of God" or whatever is valid for spiritual considerations. However, I see that as a lazy argument. I don't mean that to be disrespectful; it's just my interpretation, and a fair criticism of the spiritual answer to the question of who we are. That is all for now.


Saturday, July 16, 2011

Huge problem with classical and quantum electrodynamics

I was thinking to myself today about a variety of ideas not related to my research. One problem which came up concerns a charged particle and the electromagnetic field it generates. I suppose the problem I considered is from a classical standpoint. Imagine a charged particle such as a proton or electron. Neither particle decays according to theory, and no one has experimentally seen either decay. These particles may exist forever if left alone. There is energy stored in the mass of the particle, of course, but this is certainly finite. The problem I considered is: how can a finite energy source give rise to fields which carry energy with them, and do so forever? I think this problem is very similar to some problems outlined in undergraduate and graduate books as well as in many published articles, even as recent as 1998 (John David Jackson cites a paper from 1998 which discusses this problem, and the latest edition of his book was published in 1999).
One problem, which I'm not yet sure deals with the same issue I'm considering, is the problem of a self-force. The radiation from a moving electron perturbs, to a decent degree of accuracy, the electron's own trajectory. The mathematics come in the form of the Abraham-Lorentz formula, which describes this radiation reaction force. The approximation only works in certain regimes, since it can mathematically admit multiple, unrealistic solutions. The force arises by considering that the fields generated by the source particle exert a force on the particle itself while it is in motion; since the fields are generated by the particle, this is in effect the particle exerting a force on itself. This perturbation term is actually a useful correction, despite the physical meaning being awkward and, I think, ridiculous.
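For reference, the Abraham-Lorentz force in SI units is

$$ \mathbf{F}_{\mathrm{rad}} = \frac{\mu_0 q^2}{6\pi c}\,\dot{\mathbf{a}} = \frac{q^2}{6\pi\epsilon_0 c^3}\,\dot{\mathbf{a}}, $$

where $\dot{\mathbf{a}}$ is the time derivative of the acceleration. Taken alone as an equation of motion, it admits runaway solutions growing like $e^{t/\tau}$ with $\tau = q^2/(6\pi\epsilon_0 m c^3) \approx 6\times10^{-24}\,\mathrm{s}$ for an electron, which is exactly the kind of unrealistic solution mentioned above.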
Another problem, which I think is related to the self-force problem but is closer to the one I'm considering, is mentioned in David J. Griffiths' undergrad E&M textbook:
  "...the point charges (electrons, say) are given to us ready-made; all we do is move them around. Since we did not put them together, and we cannot take them apart, it is immaterial how much work the process would involve. (Still, the infinite energy of a point charge is a recurring source of embarrassment for electromagnetic theory, afflicting the quantum version as well as the classical. ... Where is the energy, then? Is it stored in the field, ..., or is it stored in the charge...? At the present level, this is simply an unanswerable question: I can tell you what the total energy is, and I can provide you with several different ways to compute it, but it is unnecessary to worry about where the energy is located. In the context of radiation theory (Chapter 11) it is useful (and in General Relativity it is essential) to regard the energy as being stored in the field, ... But in electrostatics one could just as well say it is stored in the charge... The difference is purely a matter of bookkeeping."
To me, his deferral to the matter of bookkeeping seems like a cop-out. Many have attempted to "fix" the Abraham-Lorentz self-force problem with considerations of relativistic effects, and according to a paper by Rohrlich in 1997, the "pathological" solutions can be made to vanish (but still only in special regimes). So there is still a problem with the matter of infinite energy, wherever it may be... I think. No one sounds entirely convincing, even if you ask the experts.
So, what gives?


Wednesday, March 23, 2011

rounding out dissertation plans

So I've been working and playing and studying as usual this semester.  My course in nonlinear dynamics is going swimmingly, partly because I've gone through most of the book we're working out of this semester. :)  My research is gaining steam on both fronts - evolution and neural.  Minecraft has taken over my nightly activities, and on occasion, it's taken over a day or two. :\
Anyway, I've begun working on my dissertation proposal and outlining my first paper for publication.  My dissertation will include three parts: two on the evolution model we have, and one on a neural model previously used in our lab.

The first part on the evolution model will probably focus on cluster (species) activity on even fitness landscapes (all organisms produce the same number of offspring).  This is important in addressing two questions.  The first deals with whether species really form when there is no landscape to determine what is most fit for the organisms.  Generally, natural selection is considered to take place when the environment organisms live in, along with their natural ability to survive in said environment, generates a selection criterion for the organisms.  The selection criterion in our model is determined by the gradient, or landscape, which determines how many offspring a nearby organism may have - their fitness.  If you take away the gradient of fitness, then there is effectively no natural selection.  However, the organisms still mutate each generation as dictated by their mutability, and they generate a diverse set of species.  This configuration of the system should correspond to neutral theory, in which diversity arises randomly.  Although the mating algorithm of our model intrinsically produces species, I will probably explore how distinguishable those species are throughout the simulations.  I suspect that under the condition that every organism in the starting population receives a unique mutability value from a wide range of possible values, the species will go through many complex interactions - making them very inconsistent over many generations.  After what might be considered transience, the surviving organisms will have whittled their competitors down to just a handful of mutabilities.  This should reduce the amount of species interactions, so the species will become much more consistent and distinguishable (few interactions with other species).  (Note that I don't really have sources that discuss species interactions, so this idea will most likely change.)
The second question addressed with this portion is whether there is a "best" mutability even in the case of neutral theory.  I already briefly touched on this idea near the end of the previous paragraph.  My intuition suggests that as the organisms compete, no particular set of mutabilities will survive the full simulation.  My reasoning is that because there is no selection criterion, there should be no "best" mutability, as long as the organisms can avoid the imposed overpopulation density condition (which kills off those too close to another organism).  However, I think my data is already showing a bimodal distribution of surviving mutabilities, which implies that there are two mutabilities optimal for survival.  The tricky thing is that in all landscape situations, I keep seeing approximately the same bimodal distribution of surviving mutabilities.  I don't yet have a reason why this could be happening.
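To poke at this, here is a from-scratch toy of the flat-landscape case - not our lab's actual model, just my own minimal stand-in with a heritable mutability and an overcrowding cull; every parameter here is an illustrative guess:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy neutral evolution: organisms are (position, mutability) pairs. Everyone
# has the same number of offspring (flat landscape); the only "selection" is
# a cull of organisms crowded too close together.
POP, GENS, OFFSPRING, CROWD_RADIUS = 100, 200, 2, 0.002

pos = rng.uniform(0.0, 1.0, POP)
mut = rng.uniform(0.001, 0.1, POP)  # each starting lineage gets its own mutability

for gen in range(GENS):
    # Offspring inherit mutability and mutate their position by it.
    child_pos = np.repeat(pos, OFFSPRING)
    child_mut = np.repeat(mut, OFFSPRING)
    child_pos = child_pos + child_mut * rng.standard_normal(child_pos.size)
    # Overpopulation cull: admit children in random order, rejecting any that
    # land within CROWD_RADIUS of an already-admitted one.
    kept_pos, kept_mut = [], []
    for i in rng.permutation(child_pos.size):
        if all(abs(child_pos[i] - p) > CROWD_RADIUS for p in kept_pos):
            kept_pos.append(child_pos[i])
            kept_mut.append(child_mut[i])
        if len(kept_pos) == POP:
            break
    pos, mut = np.array(kept_pos), np.array(kept_mut)

# Which mutabilities survived 200 generations of neutral drift?
counts, _ = np.histogram(mut, bins=10, range=(0.0, 0.1))
print(counts)
```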
For the species interactions, I'll probably look at the case of no competition, where all organisms are given the same mutability.  I have a simple prediction for this: as mutability increases, species interactions will be very rare at first, but with a large enough mutability, the species will move to a state where they nearly always interact with other species.  Hopefully, there is a small range of mutabilities over which this change occurs, in such a way that it can be modeled as a phase transition (like the solid-to-liquid-to-gas sort of idea).  Perhaps I'll even come up with a sort of kinetic energy analogy for mutability and use the fitness landscape to define a potential energy, so that I can use statistical physics and thermodynamics principles to model it.
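To sketch that analogy in symbols (purely speculative on my part, not an established result): if mutability $\mu$ plays the role of a temperature and the fitness landscape $f(x)$ the role of a (negative) potential, the long-run occupancy of locations might look Boltzmann-like,

$$ P(x) \propto \exp\!\left(\frac{f(x)}{\mu}\right), $$

so that species stay confined near fitness peaks for small $\mu$ and "melt" into constant interaction once $\mu$ is large compared to the landscape's barriers.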

For the neural project, I'll attempt to model glutamate activity such that neurons in a network desynchronize as the synaptic activity of glutamate falls, all while in conditions prone to epileptic activity (neurons synchronizing).  The term in the model which determines coupling between neurons is the synaptic term; from this term, the strength of influence connected neurons have on each other gives rise to the possibility of synchronization.  Luckily, a previous grad student in the lab showed how to produce synchronization in a way that corresponds well with experiments we've done on rats.  The change imposed to synchronize the network is the same one which models seizure activity (reduced potassium conductance).
In the experiments, I noticed that there seem to be several characteristic orchestrations of the local field potential.  The behaviors can be very different, particularly in their endings: the activity may cease abruptly or gradually decay.  Furthermore, the length of seizures may vary from tens of seconds to several minutes.  These behaviors led me to wonder about HOW the network synchronizes and desynchronizes.  There is some literature to back up, more specifically, the idea I considered more abstractly: that glutamate variation changes the coupling strength between neurons.
There are a few things to consider with this.  First, when excitatory neurons fire, they tend to release glutamate, which is an excitatory neurotransmitter.  This increases the potential for post-synaptic neurons to fire an action potential as well, thus influencing when connected neurons may fire (a driving mechanism toward synchrony).  However, each neuron has a limited supply of glutamate in vesicles to release, and requires new glutamate to be supplied and packaged for further synaptic activity.  These processes take time, possibly over periods long enough that the network becomes effectively disconnected and the synchrony of the neurons is lost.  Currently, the Wilson model which I will use fixes the synaptic conductance (coupling strength); it will be my job to determine an effective model for how synaptic conductance varies in time.  Hopefully, the parameters needed are physically plausible within the conditions imposed to produce seizure activity.
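As a starting point for that job, here is a minimal sketch of a time-varying conductance built on a depletion-and-recovery scheme (in the spirit of Tsodyks-Markram short-term depression); the names and parameter values are my own illustrative guesses, not the Wilson model's actual synapse term:

```python
import numpy as np

TAU_REC = 800.0   # ms; time scale to resupply and repackage glutamate
U = 0.4           # fraction of available vesicles released per spike
G_MAX = 1.0       # conductance of a fully stocked synapse (arbitrary units)

def g_syn(spike_times_ms, t_end_ms, dt=1.0):
    """Synaptic conductance over time: each presynaptic spike releases
    glutamate (depleting the vesicle resource R), and R slowly recovers
    toward 1 between spikes. Coupling strength is G_MAX * R."""
    spike_steps = set(int(round(t / dt)) for t in spike_times_ms)
    R, trace = 1.0, []
    for step in range(int(t_end_ms / dt)):
        if step in spike_steps:
            R -= U * R                    # release depletes the supply
        R += (1.0 - R) * dt / TAU_REC     # slow replenishment
        trace.append(G_MAX * R)
    return np.array(trace)

# Seizure-like rapid firing (every 10 ms for 500 ms) runs the supply down,
# weakening the coupling -- a candidate desynchronizing mechanism. After the
# volley stops, the conductance recovers.
trace = g_syn(np.arange(10, 500, 10), t_end_ms=1000)
print(trace[0], trace[499], trace[-1])   # fresh -> depleted -> recovering
```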
Perhaps another detail I can include in my model is that the potassium conductance reduction need only be applied to a small localized subset of the network, thus modeling the experimental system more closely.

SO, yet another long-winded explanation of thoughts, but I think I can turn this into the basis for my dissertation proposal.  Assuming I really do that, then I can say this mission is a success!

Tuesday, November 16, 2010

fixing ideas on dark matter and other cosmos stuff

These questions and answers are from an exchange between Dr. Cheng and myself. Of course, I'm asking the questions and he's answering after the ***. I had the completely wrong idea about dark matter....oops.

First, if dark matter is attributed to the expansion of the universe and dark energy is attributed with an accelerating universe via the cosmological constant, then how are they not directly related?
***Dark matter, just like ordinary matter, is subject to gravitational attraction, while dark energy, to gravitational REPULSION. In our universe there are (4%) ordinary matter/energy (called baryonic matter), (21%) dark matter and (75%) dark energy. So dark matter and dark energy are NOT, under our present understanding, "directly related".

Second, what are the chances that the expansion is not due to vacuum energy? Kari said that some have tried pinning expansion to vacuum energy, but that the vacuum energy is orders of magnitude less than what is needed to achieve what we observe. However, would this vacuum energy need to be handled as nonuniform if one considers that space is warped? Whoever looked into this, how did they handle consideration of vacuum energy? Does it even change according to the warped-ness of space? (pardon the layered questions)
*** Cosmological constant is the name of the math term in Einstein's equation that has the effect of being gravitationally repulsive. Its most probable PHYSICAL interpretation: "it's the energy of the vacuum". But a straightforward calculation shows that the quantum mechanical vacuum energy is 120 orders of magnitude too large compared to the observed amount of dark energy (NOT too small). If it is the cosmological constant, the warped-ness of spacetime will not bring any nonuniformity in dark energy.
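For reference, the term he's describing enters Einstein's field equations as

$$ G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}, $$

and reading $\Lambda$ as vacuum energy assigns it the density $\rho_\Lambda = \Lambda c^2 / (8\pi G)$; the "120 orders of magnitude" is the mismatch between the naive quantum-field-theory estimate of that density and the observed value.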

Third, I read somewhere that there are drag effects of objects orbiting in space. Could a more fluid-like consideration of space give rise to the expanding universe (high/low pressure systems depending on empty/filled space or rotating vortices)?
*** Yes a rotating gravitational source can drag the spacetime around it. But all this is consistently accounted for in the context of general relativistic description of the expanding universe.

Lastly, is dark matter thought to exist as a constant amount; if not, where might it come from?
*** Dark matter is definitely not uniform. In fact, the present understanding of the observed cosmological structure (galaxies, clusters of galaxies, voids...) is built on the idea that structure formation started among the dark matter first (from gravitational clumping); then the baryonic matter falls into the gravitational potential wells formed by dark matter. The favored idea of the origin is that they are cosmological thermal relics (just like the cosmic microwave background radiation).