Tuesday, November 16, 2010

fixing ideas on dark matter and other cosmos stuff

These questions and answers are from an exchange between Dr. Cheng and myself. Of
course, I'm asking the questions and he's answering after the ***. I had
completely the wrong idea about dark matter....oops.

First, if dark matter is attributed to the expansion of the universe
and dark energy is attributed to an accelerating universe via the
cosmological constant, then how are they not directly related?
***Dark matter, just like ordinary matter, is subject to gravitational
attraction, while dark energy, to gravitational REPULSION. In our
universe there are (4%) ordinary matter/energy (called baryonic matter),
(21%) dark matter and (75%) dark energy. So dark matter and dark energy
are NOT, under our present understanding, "directly related".

Second, what are the chances that the expansion is not due to vacuum
energy? Kari said that some have tried pinning expansion to vacuum
energy, but that the vacuum energy is orders of magnitude less than
what is needed to achieve what we observe. However, would this vacuum
energy need to be handled as nonuniform if one considers that space is
warped? Whoever looked into this, how did they handle consideration
of vacuum energy? Does it even change according to the warped-ness of
space? (pardon the layered questions)
*** The cosmological constant is the name of the math term in Einstein's
equation that has the effect of being gravitationally repulsive. Its most
probable PHYSICAL interpretation: "it's the energy of the vacuum". But a
straightforward calculation shows that the quantum mechanical vacuum
energy is 120 orders of magnitude too large compared to the observed
amount of dark energy (NOT too small). If it is the cosmological
constant, the warped-ness of spacetime will not bring any nonuniformity
in dark energy.
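
Just to put rough numbers to that "120 orders of magnitude" (my own
back-of-the-envelope figures, not from Dr. Cheng's email): the observed dark
energy density corresponds to an energy scale around a milli-electron-volt,
while the naive quantum estimate uses the Planck scale,

\rho_{\text{obs}} \sim (10^{-3}\,\text{eV})^4, \qquad
\rho_{\text{vac}} \sim M_{\text{Pl}}^4 \sim (10^{28}\,\text{eV})^4, \qquad
\frac{\rho_{\text{vac}}}{\rho_{\text{obs}}} \sim 10^{124},

which is roughly the mismatch of 120 orders of magnitude he mentions.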

Third, I read somewhere that there are drag effects of objects
orbiting in space. Could a more fluid-like consideration of space
give rise to the expanding universe (high/low pressure systems
depending on empty/filled space or rotating vortices)?
*** Yes a rotating gravitational source can drag the spacetime around
it. But all this is consistently accounted for in the context of general
relativistic description of the expanding universe.

Lastly, is dark matter thought to exist as a constant amount; if not,
where might it come from?
*** Dark matter is definitely not uniform. In fact the present
understanding of the observed cosmological structure (galaxies, clusters
of galaxies, voids...) is built on the idea that structure formation
started among the dark matter first (from gravitational clumping), then
the baryonic matter falls into the gravitational potential wells formed by
dark matter. The favored idea of the origin is that they are cosmological
thermal relics (just like the cosmic microwave background radiation).

Monday, November 15, 2010

slacker

So, I've been thinking a lot lately about various things.
1. How exactly the universe expands.
2. How gravitation works.
3. 4-dimensional cross-product.
4. How to experimentally make sense of critical slowing down.
5. Ion channel desensitization vs. sensory adaptation.
6. Developing apps.
7. Wondering why I don't follow through with any of these ideas...or
at least why I take so long in addressing them.

I'll go through each topic, but perhaps I won't do it all in this
post...that could make for a very long post.

1. I think the current explanation among many astronomers and
cosmologists is that the universe expands because of dark energy and
matter. Dark matter is like gravity, but it works in reverse. What I
don't understand is where this stuff comes from. Can its effects be
attributed to something else? Take this site's explanation, for
example: http://www.physlink.com/education/askexperts/ae404.cfm
Although, "where" the stuff is has been mapped according to
measurements of a galactic supercluster by Hubble:
http://hubblesite.org/newscenter/archive/releases/2007/01/image/a/
I don't think it yet answers my questions though. I certainly don't
think it proves anything...whether dark matter is real or not. It
seems to strongly suggest that it's out there though. It may still be
just a coincidence of currently known forces; although explaining the
observed effects with what we have in our toolbox doesn't seem to work
well enough. Perhaps that just means we need to expand what we know
instead of creating something new. I don't think it is impossible for
us to create a new tool to explain the cosmos, but in the end realize
it is just a special case of tools we already have.
My loosely assembled, guesswork hypothesis is that if the amount of
dark energy in our universe is not fixed, then it must come from
somewhere. That somewhere might be a higher dimension, but then
anything existing in that higher dimension must be losing that energy.
This would imply a conservation of energy among all dimensions. This
would allow the dark stuff to infiltrate our known dimensions and
possibly give rise to the cosmological constant (but this would only
be to accelerate the universe...whatever that really means). If the
amount of dark energy is actually fixed, then perhaps it is diffusing
as the recent Hubble map and subsequent measurements may suggest. BUT if
dark energy is really a manifestation of stuff we
already can measure and know about, then one of two things may happen
(although I'm not sure if they'd be mutually exclusive). Either it
comes about from energy associated with the vacuum or it comes about
because of gravitational effects on space itself...like fluid effects.

I'll have to finish talking about this later. Have to go to UMSL.

Edit to finish:

I talked with an astronomy student in the lab next to mine, and I asked her about dark matter and energy. I need to be careful with the two terms since they are not linked the way typical matter is with the energy we deal with every day. "Light" matter can be related to energy by its mass with everyone's favorite E=mc^2, where E is the energy of a mass of matter, m, and c is the speed of light in vacuum. Dark matter and energy don't have this sort of relation...direct correlation...as far as anyone knows. Dark matter does give rise to the expansion of the universe as I said above, but dark energy is strictly associated with the cosmological constant that possibly accelerates the universe.................I have trouble with this still. They seem to describe the same action, but apparently there's a big difference that I'm not getting.
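
Just as a quick sanity check on the scale of that formula (my own arithmetic, for an electron's rest energy):

E = m_e c^2 \approx (9.11\times 10^{-31}\,\text{kg})\,(3.00\times 10^{8}\,\text{m/s})^2 \approx 8.2\times 10^{-14}\,\text{J} \approx 0.51\,\text{MeV}.

Nothing analogous is known for relating dark matter to dark energy.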

I emailed some questions about this stuff to a professor in my department...supposedly he'd know best when it comes to this stuff. Dr. Ta-Pei Cheng, I'm counting on you!

Friday, July 16, 2010

quantum entanglement and information transfer

I have been reading about the EPR paradox (Einstein, Podolsky and Rosen) and the implications of quantum entanglement. You can think of entanglement like drawing one of two cards, one red and one blue. The two cards are correlated in the sense that if I draw a card and it is red, we know the other is blue. So by checking the card I have drawn, you will know exactly which card is left, the blue card. This concept is the same when talking about two objects in entangled quantum states. If I am on the moon with a particle which may be in state 1 or 2, and you are on the earth with a different particle which may be in state 3 or 4, but our pair of particles may only be in an exclusive combined state of either 1-and-3 or 2-and-4, then if you measure the state of your particle, I need only ask you what your result is to determine the state of mine. HOWEVER, information can only travel at most at the speed of light. SO, if we measure our particles simultaneously, then it seems we should have the possibility of obtaining a result which does not match either of the two possible combinations I listed before (1-and-3 or 2-and-4), since information about your particle and my particle will not be able to reach the other in time to "let the other particle know" that something has changed or that a measurement has occurred. I am not convinced that entanglement can exist at such distances. Furthermore, how does the effect of measurement influence entanglement (by measuring particles or systems, we effectively put our system into a certain state...from which it may then evolve according to the state we measure it in)?
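
Here is a tiny C++ toy of the card analogy as I described it above (this only captures a classical, pre-arranged correlation, not genuine quantum entanglement, and all names in it are mine):

#include <iostream>
#include <random>

int main() {
    // Prepare many "entangled" pairs: by construction the only allowed
    // combinations are (1,3) and (2,4), mirroring the red/blue cards.
    std::mt19937 rng(std::random_device{}());
    std::bernoulli_distribution coin(0.5);

    const int trials = 100000;
    int mismatches = 0;
    for (int i = 0; i < trials; ++i) {
        bool first = coin(rng);            // the shared preparation of the pair
        int moonParticle  = first ? 1 : 2; // my particle on the moon
        int earthParticle = first ? 3 : 4; // your particle on earth
        // Reading either one immediately tells us the other, because the
        // correlation was fixed at preparation, not at measurement time.
        bool consistent = (moonParticle == 1 && earthParticle == 3) ||
                          (moonParticle == 2 && earthParticle == 4);
        if (!consistent) ++mismatches;
    }
    std::cout << "mismatched pairs: " << mismatches << " of " << trials << "\n";
    return 0;
}

This is just the "shared card" picture: the correlation is set when the pair is prepared, so no signal ever needs to travel between the two particles. Whether real entangled particles can be described this way is exactly the kind of question the EPR paper raised.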

critical states

Do we live in a world that is perpetually in a critical state? Is there any difference between "action at a distance" and microscopic forces influencing macroscopic properties and behavior?
If I have a ferromagnetic material near its Curie temperature and change one electron's spin direction, then it should have an effect on another electron's spin at any distance from it. Certainly, the change in spin of the electron I chose first influences its neighboring electrons to some degree; they then influence their neighbors, and so on, until electrons at any distance are influenced to completely change direction. The influential role of the electron I have chosen is merely a fluke of probabilities in my eyes, while certain in the eyes of nature.
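
To make the spin picture concrete, here is a minimal 2D Ising-style Metropolis simulation in C++ (a textbook toy model, not anything from my own work; the lattice size, temperature, and seed are arbitrary choices of mine):

#include <cmath>
#include <iostream>
#include <random>
#include <vector>

// Minimal 2D Ising model with Metropolis updates (J = 1, k_B = 1).
// Near the critical temperature, flipping one spin can trigger changes
// that are felt at arbitrarily large distances (long correlation length).
int main() {
    const int L = 64;            // lattice side length
    const double T = 2.27;       // near the 2D Ising critical temperature
    std::mt19937 rng(12345);
    std::uniform_int_distribution<int> site(0, L - 1);
    std::uniform_real_distribution<double> unit(0.0, 1.0);

    std::vector<int> spin(L * L, 1);  // start fully aligned
    auto idx = [L](int x, int y) { return ((x + L) % L) * L + ((y + L) % L); };

    for (long step = 0; step < 1000000; ++step) {
        int x = site(rng), y = site(rng);
        int s = spin[idx(x, y)];
        int nbSum = spin[idx(x + 1, y)] + spin[idx(x - 1, y)]
                  + spin[idx(x, y + 1)] + spin[idx(x, y - 1)];
        double dE = 2.0 * s * nbSum;  // energy cost of flipping this spin
        if (dE <= 0.0 || unit(rng) < std::exp(-dE / T))
            spin[idx(x, y)] = -s;     // accept the flip
    }

    long m = 0;
    for (int s : spin) m += s;
    std::cout << "magnetization per spin: " << double(m) / (L * L) << "\n";
    return 0;
}

Near the critical temperature the correlation length becomes comparable to the whole lattice, which is the regime where one flipped spin can matter arbitrarily far away.
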
However, is the spin example any different than if I were to blow a feather off a table? My lungs create a pressure change which causes a chain reaction of colliding air molecules, which in the end, along the direction from my mouth to the feather, is just a series of events that can be described microscopically and that affect the feather from a distance. To provide another example of microscopic, local influences causing global reactions, consider social networks.
My undergraduate research advisor, Dr. Ojakangas, would tell his students about some physical law or theory, then ask, "Do you buy that? Because that's all I'm selling!" Now, when I tell some story or give a lecture, I might ask whomever I am talking to, "You buy that? That's what I'm selling!"...or something to that effect. The point is this: after Ojakangas fed us (his Mechanics II students) that line the first time, he told us that he had heard a professor of his say it (I think at CalTech). He apparently liked it, so he made a similar comment to us. I like it as well, so I make the comment to whomever cares to listen to me on occasion. I imagine that by this point, others who have heard Ojakangas' professor, Ojakangas, or myself say this line have said or will say it to others, people who have no idea where the silly line came from. This is like action at a distance. (I'm not going to claim that the source is even Ojakangas' professor; that's just as far back as I know who came up with it!)
An even bigger social network analogy is the internet. Before the internet, one person with a video of something ridiculous would only be able to show it to the people they knew and not too many others. Now, that video can go viral and affect millions of people who have absolutely no direct connection to that person. The internet has provided a way to push the correlation distance between people in the world toward its maximum possible value, just as in the critical state of electrons in a ferromagnetic material near the Curie temperature.
Does this all mean that we live in a constant state of criticality? Where the butterfly effect really changes everything? If not everything, does it at least make great dents in the previous order that existed? Has there ever been order? I suppose when talking about correlation distances of one object or idea influencing another at a distance, we must also consider correlation times. For our brief time on this earth, most of us probably won't cause global changes within our lifetimes, but our actions now may influence the next generations in ways we would not expect. Stories your parents may have told you about their times in life may influence the way you conduct yourself. Your actions may then influence others, which may lead to global implications later, like the leaders of nations deciding between good and evil.

My bet is that we live in a constant state of criticality. I think that every action now influences the current order to be reordered. As for whether there is an end to the criticality, I doubt it. I cannot think of anything on any scale in which microscopic forces do not propagate to influence objects at a distance. However, the time scale on which to consider universal objects may need to be characteristically near infinite. Keep in mind that pockets of objects that do not seem to be influenced are part of the property which determines criticality. Nothing is globally special, no matter the amount of detail you consider.

Wednesday, July 14, 2010

species & fractals, linear v nonlinear

So I've been thinking the past few days about speciation and the splitting that occurs. I wonder if phylogenetically, speciation can be fractal under certain conditions.
A model I'm imagining could, in a very concrete and highly unrealistic starting state, begin with a species splitting into three branches. Each of these three branches would then split into three branches, and so on. I doubt it would be too difficult to show, or at least understand, that this should inevitably lead to a fractal.
In order to add some realism to the model, I would then allow some or all of the three branches to not form. These instances would model extinction of species, in a way. Instead of forming and then dying out, the branch just doesn't form. The rate at which this occurs would depend on how many branches are currently able to form (once a branch has divided, or has failed to branch at all, it would no longer be counted); some random number of the branches that could form will not. I would think this sort of model would retain fractal behavior, but it certainly will not look as "nice" as it would without extinctions.
Another way to make the model more realistic is to allow a variable number of branches, say between 0 and 5 or so, or whatever current estimates on speciation might suggest. This part may destroy the fractal-like behavior, but as I have seen with Barnsley's fern, this variable extension of the phylogeny may be fine.
I'm sure there are plenty of other ideas which may be implemented, but I think what I have listed should be reasonable for a simple model. The real question would then be, how closely does an averaging of many simulations of the model reflect the phylogeny of today's species within all higher levels of taxonomy?
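
Here is a minimal C++ sketch of the branching model described above (the 0-to-5 branch range and the generation count are arbitrary choices of mine, just to make it concrete):

#include <iostream>
#include <random>

int main() {
    // Each generation, every living lineage splits into a random number of
    // daughter branches (0 through 5); drawing 0 plays the role of a lineage
    // that never forms, i.e. extinction at the moment of branching.
    std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<int> daughters(0, 5);

    long lineages = 1;
    for (int gen = 1; gen <= 15 && lineages > 0; ++gen) {
        long next = 0;
        for (long i = 0; i < lineages; ++i)
            next += daughters(rng);
        lineages = next;
        std::cout << "generation " << gen << ": " << lineages << " lineages\n";
    }
    return 0;
}

Averaging the tree shapes from many runs of something like this is what I would want to compare against real phylogenies; recording parent-child links instead of just counts would let me look for the fractal structure directly.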

Another thing I've been thinking about is linear versus nonlinear systems. Primarily, I've been thinking about why some nonlinear systems cannot be linearized, especially globally. Locally, around fixed points, nonlinear systems may be linearized by evaluating the Jacobian matrix at the fixed points. However, what restricts us from taking the nonlinear terms and renaming them as new variables by a change of variables? Certainly, most, if not all, cases will result in a system with more dimensions (each new independent variable gets its own dimension). However, if the nonlinear terms are linearly independent of the linear terms, then could a new system be generated in order to make the original easier to solve? Perhaps this is all bogus because the nonlinear terms may be formed from the linear terms, which would be a case of ALL nonlinear terms being linearly dependent, and that would disallow making a change of variables on the nonlinear terms in order to add a new dimension to the problem.
Another idea related to this is to explore what is needed in order to carry out linearization of a system, as in constructing the Jacobian matrix evaluated at the fixed points. Following along with the steps indicated by Strogatz in his book, Nonlinear Dynamics and Chaos, I found (and am assuming) that the only requirements for a two-dimensional system with arbitrary coupling are that the transformation functions used in the change of variables be differentiable and have inverses which are also differentiable.
For example, let the derivatives of x and y be x' = f(x,y) and y' = g(x,y), and let the change of variables be u = F(x,x*) and v = G(y,y*), where x* and y* are fixed points. Rewriting x and y gives x = Finv(u,x*) and y = Ginv(v,y*), where Finv and Ginv are the inverse functions of F and G. Differentiating u and v with the chain rule gives u' = (dF/dx) x' = (dF/dx) f(x,y) and v' = (dG/dy) y' = (dG/dy) g(x,y), where x and y on the right-hand sides are rewritten in terms of u and v through Finv and Ginv. Expanding u' and v' as Taylor series about the fixed points should give the appropriate Jacobian matrices, from which an analysis of the eigenvalues will determine what sort of behavior can be expected. This is where the differentiability of F, G, and their inverses gets used.
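
In the simplest case, where the change of variables is just the shift u = x - x* and v = y - y*, this reduces to the usual textbook picture (my paraphrase of the standard steps):

u' = x' = f(x^* + u,\; y^* + v) \approx \frac{\partial f}{\partial x}\bigg|_{(x^*,y^*)} u + \frac{\partial f}{\partial y}\bigg|_{(x^*,y^*)} v,

v' = y' = g(x^* + u,\; y^* + v) \approx \frac{\partial g}{\partial x}\bigg|_{(x^*,y^*)} u + \frac{\partial g}{\partial y}\bigg|_{(x^*,y^*)} v,

\begin{pmatrix} u' \\ v' \end{pmatrix} \approx
\begin{pmatrix} \partial f/\partial x & \partial f/\partial y \\ \partial g/\partial x & \partial g/\partial y \end{pmatrix}_{(x^*,y^*)}
\begin{pmatrix} u \\ v \end{pmatrix},

since f and g vanish at the fixed point. The eigenvalues of that Jacobian matrix then classify the fixed point.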

Wednesday, July 7, 2010

Accumulation

So on my flight from Salt Lake City to St. Louis (originally from Portland, OR; the connecting flight was from Salt Lake City), I was daydreaming a bit. While listening to On the Origin of Species by Charles Darwin, read by Richard Dawkins, I was watching the clouds as we sped by them. The audio book had little influence on my thoughts about accumulation at the time...every once in a while, it seemed more like background chatter while I thought about other topics. However, while watching the clouds, I began to think up a simple model for accumulation. Since I know little about how clouds really form, my mind was free to dream up something, whether accurate or not. I'll put the comment documentation for my Matlab instance of this model below:
% Model description:
% On a rectangular space, objects move parallel to the length of the
% space. The object's speed depends upon how many units of the objects
% share the same space; this may be thought of as a density dependent
% velocity. Object units may not diverge from others once they occupy the
% same space, and therefore result in the creation of larger objects.
% Geometric spacing and packing is not taken into account in the first
% instance of this model. However, limitations on the number of objects
% occupying a space and the distribution of those excess object units may
% provide a three-dimensional conceptualization of a "super" object drift.
% -Object creation: Objects will be randomly generated based on a
% proportion of the size of the space and/or the number of units in play.
% Once created, the objects will move and may form larger objects.
% Generation of larger objects may be restricted in some instances of this
% model, resulting in "packing" limitations.
% -Velocity determination: There will be a limiting velocity that will
% dictate the lowest rate of movement. The speed will increase for those
% objects with less units.
% -Accumulation restrictions: Once a maximum accumulation of an object in
% a single space is reached, some number of objects in that space may be
% removed.
% -Geometric conception: Distribution of the units may be to place excess
% units coming into a space into an adjacent space which has the least
% number of units. This may be done by searching first the immediate sides
% of the filled space, then the space directly following the filled space.
% Radiating outward in this manner if all immediate spaces are filled may
% result in a wall or object front.
% -Predictions: In the case of no geometric consideration, there should be
% pockets of great accumulation of the digital object units. Given a long
% enough space in which to drift, these pockets should grow very large and
% therefore move at or near the minimal speed allowed. A randomly
% distributed set of these pockets will exist within the space since they
% cannot merge with each other upon reaching the minimal speed. A complete
% occupation of the space may be possible, but very unlikely.
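
Since I haven't posted the Matlab code itself, here is a rough C++ sketch of the simplest version of the model (no geometric packing; the strip length, spawn rate, and speed rule are my own arbitrary choices):

#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

// 1D strip of cells; each cell holds some number of object units.
// Units in the same cell move together, and a clump's speed drops as it
// grows (density-dependent velocity), down to a minimum speed of 1.
int main() {
    const int length = 200;        // length of the strip
    const int steps = 1000;        // number of time steps
    const double spawnProb = 0.2;  // chance of a new unit appearing each step
    std::mt19937 rng(1);
    std::uniform_real_distribution<double> unit(0.0, 1.0);
    std::uniform_int_distribution<int> anywhere(0, length - 1);

    std::vector<int> cells(length, 0);
    for (int t = 0; t < steps; ++t) {
        // Object creation: occasionally drop one unit at a random position.
        if (unit(rng) < spawnProb) ++cells[anywhere(rng)];

        // Move every clump once per step into a fresh array; clumps that
        // land on the same cell merge and never separate again.
        std::vector<int> next(length, 0);
        for (int i = 0; i < length; ++i) {
            if (cells[i] == 0) continue;
            int speed = std::max(1, 5 - cells[i]);      // small clumps move fast
            int dest = i + speed;
            if (dest < length) next[dest] += cells[i];  // otherwise it drifts off the end
        }
        cells = next;
    }

    // Report the clumps left on the strip.
    for (int i = 0; i < length; ++i)
        if (cells[i] > 0) std::cout << "cell " << i << ": " << cells[i] << " units\n";
    return 0;
}

The expectation, as in the predictions above, is that small fast clumps catch up to big slow ones and get absorbed, leaving a scattering of large, slow-moving pockets.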

Wednesday, June 30, 2010

busy summer

So...a lot has happened since my last post. On the 19th of June I started a five-day intensive course on using the neural modeling program, Neuron, at the University of California at San Diego. It was taught by the creators of the program and of the hoc (pronounced hoak, not hawk) programming language. It was a very good course in a very VERY nice environment. San Diego never saw temperatures above 75 deg. F, and I had to wear a jacket on multiple occasions. My suitemate, Aaron Luchko of the University of Alberta at Edmonton, was pretty great. He is a computer scientist, I figure, working for a company that needs optimization of a program they're working on. After the workshop, I took a day to play around UCSD and in La Jolla. The Salk Institute is amazing, at least judging from the outside and from the quality of science coming out of it. The offices overlook the ocean...jealous much?
From there, I headed north to Portland, OR to attend a workshop on evolution (Evolution 2010). Portland seems like a great city, and the conference center was quite modern and nice. They had a giant Foucault Pendulum outside the main hall we were in. My jaw dropped when I first saw it, but I'm thinking not too many people were as impressed...considering the majority were biologists in some way or another...a few were mathematicians and computer scientists...not sure how many physicists attended though...excluding myself and Bahar. Anyway, the conference was extremely informative, and I now have a reading list that seems like a mountain of work...but I LOVE mountains, so I think I'm up to the challenge.

Wednesday, May 5, 2010

April: workshop, revelations, experiments

I'll talk about the items in the subject line over the next couple or few posts.  I really just want to put a few notes about the experiments I'm doing.
First, make sure to cut the electrode tip for the appropriate diameter.
Second, turn on the air table to get rid of the slow, large-amplitude oscillations in the local field potential (LFP) recording.
Third, figure out how to better troubleshoot the BioAmp Controller pre-amp and amplifier.
Fourth, the brain seems to bulge more and more as time passes after removing the skull window and dura.  How problematic might this be?
Do seizures look different for different regions in the neocortex (i.e., gradually increase then suddenly stop vs. suddenly start then gradually decrease)?
Is there a particular proportion of these types of seizures?  Or do they occur randomly?
Big question I've been asking:  What determines the way in which seizures begin and end?  Is there something that gets charged up, used up, and reaches critical "mass" at the beginning and the end? 
Adam D Scott
Department of Physics & Astronomy
Center for Neurodynamics
University of Missouri at St. Louis
http://www.umsl.edu/~neurodyn/

Sunday, March 28, 2010

I DID IT!

I have FINALLY finished programming the evolution simulation in C++.  The program was originally written for use in Matlab, but since January I have been working to convert it to C++.  A couple of days ago, I finished all but two minor/optional functions from the original program.  I may try to edit one of the most important functions to increase efficiency, but at least the basics are all in and it runs........well, okay, so there's a bug that triggers a problem every tenth or so simulation.  I need to find the bug (something about accessing a value from a vector that doesn't exist...there are several vectors running around in a few different classes, so it may be a while before I find it)...once that's out of the way though...THEN it should REALLY be done...hopefully...please...
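
One generic C++ habit that might help me hunt that bug (nothing specific to my program; the vector name here is made up): std::vector's at() throws std::out_of_range instead of silently reading past the end, so the crash points at the bad access.

#include <iostream>
#include <stdexcept>
#include <vector>

int main() {
    std::vector<int> population = {10, 20, 30};
    std::size_t i = 7;  // an index that doesn't exist
    // population[i] would be undefined behavior and might "work" sometimes;
    // at() fails loudly right where the mistake happens.
    try {
        std::cout << population.at(i) << "\n";
    } catch (const std::out_of_range& e) {
        std::cerr << "bad vector access: " << e.what() << "\n";
    }
    return 0;
}

Swapping [] for at() in the suspect classes should at least tell me which vector is the culprit.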

I have to say though, I'm extremely proud of what I've accomplished considering I have never written a program in C++ that uses classes or standard library containers.  I had to teach myself (and seek guidance from a couple other graduate students...thanks Dave and Daisuke!) to accomplish this.  Now to apply some sweet measures in order to write my first publishable paper on this stuff.

In two weeks, I will start diving into some more experimental tasks with rats and doing electrophysiological work with them (measuring local field potentials in the brains of Sprague-Dawley rats...they're properly anesthetized).  I also need to figure out what I will focus my research on.  Perhaps I'll be able to study something to get another paper out of this.  It'll be really nice to have some experimental work on my CV too...considering I'll probably focus on computational studies.
Adam D Scott

Friday, February 26, 2010

Speciation and the evolution simulation...

I've been recoding the evolution simulation which Bahar and Nate developed in Matlab.  I have never programmed something so complicated.  I have also never really TRULY programmed anything in C++, i.e., used classes, the standard library, and other pre-made classes effectively.  It has taken me several weeks to get to the point of piecing my various classes together and writing out a speciation algorithm.  I have also had a tough time dealing with the standard library vector class.  I think I have gotten to the point where I can use vectors efficiently.  Here's the breakdown of what I need this program to do:
I need to initialize some parameters in order to create the simulation.  Those parameters first help build a fitness landscape which represents locations where the environment benefits some phenotypes more than others.  The landscape axes represent a continuous scale of phenotypes; the area then represents a phenospace to which the landscape corresponds.  Think of a flat map as being the phenospace; your latitude and longitude determine your location in that phenospace.  Mountainous regions represent areas in which the environment would be most beneficial, and the ocean trenches would be regions of poor fitness.
The next part of the simulation is to build an initial population.  The population is a set of indivs which are initialized with some default data characteristics.  Now the main part of the program is ready.  
The primary section of the program is a for loop which iterates the generations.  Within each generation, several events occur.  The first is to find and record the identities of the two nearest indivs in phenospace for each indiv in the population.  The next step is to determine, based on each indiv's neighbors, which indivs are in the same species.  The speciation algorithm is based partly on a reproductive isolation model, which states (in real life) that if no offspring can be conceived by two possible varieties of an organism, then those two varieties are actually different species.  However, our model extends this to include the second nearest neighbor as well - an assumption "dreamt" up by Nate to avoid numerous two-indiv-sized species.  This also means that members of the same species don't necessarily have to be "close" to each other phenotypically.  Other speciation algorithms are certainly possible to implement; it would just take my time to write them out.
The next section of the program then records the information about the species and the indivs.  One important aspect of the indivs is their mutation rate.  The mutation rate determines how much variability their children may have, over a range determined by the distance between the two mates.  This parameter is the primary value which we tweak or distribute differently among the indivs.  As each indiv mates with its nearest neighbor, the "choosing" indiv passes on its mutation rate to the babies it makes.  The number of babies depends on the fitness of the "choosing" parent.
The mating portion takes place after recording the population and species information.  The baby indiv's make up a new population for the next generation iteration.
Once the baby population is made, the parents are no longer needed, so their population is deleted.  Now the baby indiv population is weeded out by several death functions.  There are three ways in which the new indivs are killed off.  One method is an overpopulation limit.  This models organisms' niches in their environment.  Particular organisms occupy important niches and tend not to share a niche with other organisms.  Therefore no indivs may exist within a certain distance of each other in the phenospace.  We refer to this distance as the overpopulation limit.  The next killing method is random death.  This simply kills off indivs at random by determining what percentage (up to a small percentage) of the population will "win" a lottery.  The final killing is of those indivs which have sprung up outside of the phenospace limits.  This simply models a limit to our phenospace.
After the destruction of the selected indivs, the resulting population then cycles back to the beginning of the generation loop.  And the circle of life continues until either the population dies out or the program reaches the maximum number of generations that we have defined at the beginning in the parameters.
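
For a sense of the shape of it, here is a heavily stripped-down C++ sketch of the generation loop (every class, name, and number below is made up for illustration; it is not the actual simulation code):

#include <cmath>
#include <cstddef>
#include <iostream>
#include <random>
#include <vector>

// Hypothetical, simplified stand-ins for the real classes.
struct Indiv {
    double x, y;          // position in phenospace
    double mutationRate;  // passed on to offspring
};

double dist(const Indiv& a, const Indiv& b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

// Index of the nearest other indiv (assumes the population has at least two).
std::size_t nearestNeighbor(const std::vector<Indiv>& pop, std::size_t i) {
    std::size_t best = (i == 0) ? 1 : 0;
    for (std::size_t j = 0; j < pop.size(); ++j)
        if (j != i && dist(pop[i], pop[j]) < dist(pop[i], pop[best]))
            best = j;
    return best;
}

int main() {
    std::mt19937 rng(7);
    std::normal_distribution<double> jitter(0.0, 1.0);

    std::vector<Indiv> pop(50, Indiv{0.0, 0.0, 0.3});
    for (auto& p : pop) { p.x = jitter(rng); p.y = jitter(rng); }

    for (int gen = 0; gen < 100 && pop.size() > 1; ++gen) {
        std::vector<Indiv> babies;
        for (std::size_t i = 0; i < pop.size(); ++i) {
            const Indiv& mate = pop[nearestNeighbor(pop, i)];
            // One baby per parent here; in the real model the litter size
            // depends on the fitness landscape at the parent's location.
            Indiv baby = pop[i];
            double spread = baby.mutationRate * dist(pop[i], mate);
            baby.x += spread * jitter(rng);
            baby.y += spread * jitter(rng);
            babies.push_back(baby);
        }
        // Death rules would go here: overpopulation limit, random death,
        // and removing babies that fall outside the phenospace bounds.
        pop.swap(babies);
    }
    std::cout << "final population size: " << pop.size() << "\n";
    return 0;
}

The real program also tracks species membership, uses the fitness landscape to set litter sizes, and applies the three death rules before the next generation starts.
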
Adam D Scott

Thursday, January 28, 2010

post script

I passed my doctorate qualifier earlier this month. I'm official.

Ubiquity Ch 1-3

I began reading Ubiquity by Mark Buchanan.  My advisor suggested it to me Tuesday when we had our weekly meeting.  I've read through the third chapter and am completely entranced by it.  I tend to fall asleep reading most books or articles, but this book has engaged me greatly.
The book has covered two simple games thus far.  The first is the sand pile game.  Set a rate to drop grains of sand onto a pile.  As the sand falls, avalanches of various sizes will occur.  An avalanche occurs when a single new grain pushes part of the pile past a threshold of instability.  The regions where this instability exists are called fingers of instability.  The resulting distribution of avalanches follows a power law (log(# of avalanches) vs. log(size of avalanche) gives a linear relationship...the slope of which determines the power).  The power law suggests there is no typical size of avalanche, as a bell/normal/Gaussian distribution would suggest, for example.  This game was introduced by Per Bak, Chao Tang, and Kurt Wiesenfeld, although it seems Bak delved much deeper into this than the others, from what I gather from Buchanan.
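
Here is a small C++ sketch of the sand pile game as I understand it from the book (the standard toppling rule on a grid; the grid size, grain count, and seed are my own choices):

#include <iostream>
#include <map>
#include <random>
#include <vector>

// Bak-Tang-Wiesenfeld style sandpile: drop grains one at a time; any cell
// holding 4 or more grains topples, sending one grain to each neighbor.
// The avalanche size is the number of topplings triggered by one grain.
int main() {
    const int L = 50;
    std::vector<std::vector<int>> pile(L, std::vector<int>(L, 0));
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> cell(0, L - 1);

    std::map<long, long> avalancheCounts;  // size -> number of avalanches
    for (long grain = 0; grain < 50000; ++grain) {
        int x = cell(rng), y = cell(rng);
        ++pile[x][y];

        long topplings = 0;
        bool unstable = true;
        while (unstable) {
            unstable = false;
            for (int i = 0; i < L; ++i)
                for (int j = 0; j < L; ++j)
                    if (pile[i][j] >= 4) {
                        pile[i][j] -= 4;
                        if (i > 0) ++pile[i - 1][j];
                        if (i < L - 1) ++pile[i + 1][j];
                        if (j > 0) ++pile[i][j - 1];
                        if (j < L - 1) ++pile[i][j + 1];  // grains falling off the edge are lost
                        ++topplings;
                        unstable = true;
                    }
        }
        if (topplings > 0) ++avalancheCounts[topplings];
    }

    for (const auto& kv : avalancheCounts)
        std::cout << kv.first << " " << kv.second << "\n";
    return 0;
}

Tabulating log(count) against log(avalanche size) from the output is where the straight line, and so the power law, should show up.
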
The other game discussed is a game modeling earthquakes.  The game was originally introduced in 1967 by Burridge and Knopoff.  They used a physical model with a setup in one dimension, but didn't find what they were probably hoping for.  Bak and Tang rekindled the idea in 1989, but used a computer simulation in two dimensions.  The game is set up with a ceiling that is allowed to drift.  Connected to the ceiling are rods, which are also connected to blocks on a floor.  Between the blocks are springs connecting each block with four neighbors.  The game is then set in motion by drifting the ceiling; as the ceiling moves, the rods bend.  When a rod reaches its limit, the connected block moves one unit.  The springs connected to that block then shift the neighboring blocks by one fourth of a unit.  Bak and Tang found this to follow a power law, suggested it described earthquake sizes as being unpredictable, and were excited to show that this game was identical in nature to the sand pile game.  However, others pointed out that their model was conservative, unlike real earthquakes, which do not transfer all energy into motion.  Instead, some energy is lost in heating the rocks, not moving them.  In 1992 a few other scientists, Olami, Feder, and Christensen, redid the Bak and Tang model but allowed for dissipation of energy.  Their game produced results matching data from a 1950s study by Gutenberg and Richter, in which they found that real earthquakes follow a power law distribution.  Earthquakes have no typical scale, since the number of earthquakes versus their intensity follows a power law distribution, and this is matched by both games.
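
And a matching C++ sketch of the dissipative version, in the spirit of the Olami-Feder-Christensen variant described above (again, the lattice size, the dissipation value, and all names are my own choices):

#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

// Earthquake game with dissipation: every block is loaded slowly, and when a
// block's force reaches the threshold it drops to zero and passes a fraction
// alpha of its force to each of its four neighbors.  alpha = 0.25 would be
// the conservative (sand-pile-like) case; smaller alpha models energy lost
// to heating the rocks.
int main() {
    const int L = 40;
    const double alpha = 0.2;
    const double threshold = 1.0;
    std::mt19937 rng(3);
    std::uniform_real_distribution<double> u(0.0, threshold);

    std::vector<std::vector<double>> F(L, std::vector<double>(L, 0.0));
    for (auto& row : F)
        for (auto& f : row) f = u(rng);  // random initial loading

    for (long quake = 0; quake < 20000; ++quake) {
        // Slow drive: load every block by just enough to bring the most
        // loaded block to the threshold.
        double maxF = 0.0;
        for (const auto& row : F)
            for (double f : row) maxF = std::max(maxF, f);
        for (auto& row : F)
            for (auto& f : row) f += threshold - maxF;

        // Relax until no block is above the threshold, counting topplings.
        long size = 0;
        bool unstable = true;
        while (unstable) {
            unstable = false;
            for (int i = 0; i < L; ++i)
                for (int j = 0; j < L; ++j)
                    if (F[i][j] >= threshold) {
                        double f = F[i][j];
                        F[i][j] = 0.0;
                        if (i > 0) F[i - 1][j] += alpha * f;
                        if (i < L - 1) F[i + 1][j] += alpha * f;
                        if (j > 0) F[i][j - 1] += alpha * f;
                        if (j < L - 1) F[i][j + 1] += alpha * f;  // force reaching the edge is lost
                        ++size;
                        unstable = true;
                    }
        }
        std::cout << size << "\n";  // one event size per driving step
    }
    return 0;
}

The histogram of those event sizes is what should be compared against the Gutenberg-Richter power law.
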
The next chapters should go into financial markets, wars, and other awesome world events.  I can't wait.
Adam D Scott