Is there any point in looking at the singular values of the stiffness matrix?
If the elements are data-dependent sums, you probably want to have
some idea of the quality of your model. What are the singular values with the
original FF and Python coefficients? I don't know offhand how to do
that in FF.
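If the matrix can be dumped to plain text, a rough check of the singular values is easy in Python. A minimal sketch, assuming the stiffness matrix was exported as a dense, whitespace-separated file (the file name STIFF.txt is hypothetical):

```python
import numpy as np

# Hypothetical file name; assumes the stiffness matrix was exported
# as a dense, whitespace-separated text file.
K = np.loadtxt("STIFF.txt")

# Singular values only (descending), no singular vectors needed.
sv = np.linalg.svd(K, compute_uv=False)

print("largest  singular value:", sv[0])
print("smallest singular value:", sv[-1])
print("ratio (2-norm condition number):", sv[0] / sv[-1])
```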
Taking the current set of eigenvalues, you have two of large magnitude, with the
rest smaller by maybe a factor of 1e7 or so. Is this what you expect?
To my mind, this large spread is expected, since the eigenvalues should decay quickly with this method.
Besides, the stiffness matrix is invertible, so we can continue with the eigenvalues; there is no need to consider singular values. Some of them can be zero, but this is not a problem.
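If the stiffness matrix is symmetric, its singular values are just the absolute values of its eigenvalues, so the two views should agree anyway. A quick sanity check along those lines, reusing the hypothetical STIFF.txt above:

```python
import numpy as np

K = np.loadtxt("STIFF.txt")          # hypothetical export, as above
K = 0.5 * (K + K.T)                  # symmetrize to remove asymmetric round-off

ev = np.linalg.eigvalsh(K)           # real eigenvalues of the symmetric matrix
sv = np.linalg.svd(K, compute_uv=False)

# For a symmetric matrix, the sorted |eigenvalues| should match the singular values.
print(np.allclose(np.sort(np.abs(ev))[::-1], sv))
print("spread |ev_max| / |ev_min|:", np.abs(ev).max() / np.abs(ev).min())
```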
I guess, empirically, the question remains why one set of coefficients
gives the eigenvalues you expect and another does not.
I just ran this with the FF mass matrix and there was no change; using the Python mass
and the FF stiffness gives negative eigenvalues.
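One way to reproduce that mixing outside FF is to solve the generalized problem directly in SciPy for each combination of matrices. A sketch, again assuming dense text exports with hypothetical file names:

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical file names for the four exports being compared.
K_ff = np.loadtxt("STIFF_FF.txt")
M_ff = np.loadtxt("MASS_FF.txt")
K_py = np.loadtxt("STIFF_PY.txt")
M_py = np.loadtxt("MASS_PY.txt")

def gen_evals(K, M):
    """Eigenvalues of K x = lambda M x, for symmetric K and SPD M."""
    K = 0.5 * (K + K.T)
    M = 0.5 * (M + M.T)
    return eigh(K, M, eigvals_only=True)

for name, K, M in [("FF K / FF M", K_ff, M_ff),
                   ("FF K / PY M", K_ff, M_py),
                   ("PY K / FF M", K_py, M_ff),
                   ("PY K / PY M", K_py, M_py)]:
    ev = gen_evals(K, M)
    print(name, "min:", ev[0], "max:", ev[-1],
          "negatives:", int((ev < 0).sum()))
```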
Just looking again at row 9, some of these
are of opposite sign to the others. It could be that you depend on exact equality/cancellation
of some terms. You could try the FF elements and round them; this may help you
understand your model. I've looked at some cases of models with empirical
correlation matrices and they can be rather difficult.
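If you want to test the cancellation idea concretely, rounding the FF entries to progressively fewer digits and watching what happens to the spectrum is cheap. A minimal sketch (same hypothetical dense exports; note that np.round works on decimal places, not significant digits):

```python
import numpy as np
from scipy.linalg import eigh

K = np.loadtxt("STIFF_FF.txt")   # hypothetical FF exports, as before
M = np.loadtxt("MASS_FF.txt")
K = 0.5 * (K + K.T)
M = 0.5 * (M + M.T)

for digits in (12, 8, 6, 4):
    K_r = np.round(K, decimals=digits)   # round stiffness entries to N decimal places
    ev = eigh(K_r, M, eigvals_only=True)
    print(f"{digits} decimals: min {ev[0]: .3e}  max {ev[-1]: .3e}  "
          f"negatives {int((ev < 0).sum())}")
```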
By the way, when I consider the problem the other way around, STIFF \Phi = \lambda MASS \Phi, I find positive eigenvalues, and the same values as with Python.
Okay, in this case I am in agreement with the eigenvalues calculated.
I think the big negative values in the previous case were also due to round-off errors, since in that case the eigenvalues should be the inverses of these eigenvalues.
By considering the problem in this way, I no longer have a problem using -eps_gen_hermitian.
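That reciprocal relation is easy to confirm numerically: if MASS \Phi = \mu STIFF \Phi and STIFF \Phi = \lambda MASS \Phi share eigenvectors, then \lambda = 1/\mu, so the tiny \mu's become the huge \lambda's where round-off bites. A sketch checking this with the same hypothetical exports (it assumes both matrices are positive definite, since eigh requires an SPD second argument):

```python
import numpy as np
from scipy.linalg import eigh

K = np.loadtxt("STIFF_FF.txt")   # hypothetical FF exports, as before
M = np.loadtxt("MASS_FF.txt")
K = 0.5 * (K + K.T)
M = 0.5 * (M + M.T)

lam = eigh(K, M, eigvals_only=True)   # STIFF phi = lambda MASS phi
mu  = eigh(M, K, eigvals_only=True)   # MASS  phi = mu     STIFF phi

# Up to round-off, the two spectra should be reciprocals of one another.
print(np.allclose(np.sort(lam), np.sort(1.0 / mu)))
```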
Did anything change playing with those parameters?
You can see how sensitive the results are if you look at the different
STIFF files I included here. The "XX" one is the Python stiffness, apparently
rounded. "SS" has the very small values removed from the FF matrix.
"II" additionally takes the floor of the remaining elements. I may have made some mistakes
and these may not be entirely accurate, but you can play with them if you like…
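For reference, variants along the lines of "SS" and "II" can be regenerated mechanically, which makes it easier to repeat the experiment with different cutoffs. A sketch of one way to do that (the 1e-12 threshold and the file names are assumptions, not necessarily what was used for the attached files):

```python
import numpy as np

K = np.loadtxt("STIFF_FF.txt")        # hypothetical FF export

# "SS"-style variant: zero out entries below a small threshold
# (the 1e-12 cutoff is an assumption).
K_ss = np.where(np.abs(K) < 1e-12, 0.0, K)

# "II"-style variant: additionally take the floor of the surviving entries.
K_ii = np.floor(K_ss)

np.savetxt("STIFF_SS.txt", K_ss)
np.savetxt("STIFF_II.txt", K_ii)
```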
There are at least a couple of issues here. One, of course, is how the FF solvers
work on various kinds of data and how to measure or classify the data quality.
Another, however, is your model's utility: do these numbers mean anything,
or are all of them noise? Some of the smaller ones seemed to be
degenerate in pairs; does that mean anything, or is it accidental?
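Whether the apparent pairing of the small eigenvalues is real or accidental can at least be quantified by looking at the relative gaps between consecutive eigenvalues. A rough sketch (same hypothetical exports; the tolerance is an assumption):

```python
import numpy as np
from scipy.linalg import eigh

K = np.loadtxt("STIFF_FF.txt")        # hypothetical exports, as before
M = np.loadtxt("MASS_FF.txt")
ev = eigh(0.5 * (K + K.T), 0.5 * (M + M.T), eigvals_only=True)

# Relative gap between consecutive eigenvalues; near-zero gaps flag
# (numerically) degenerate pairs.
gaps = np.diff(ev) / np.maximum(np.abs(ev[:-1]), 1e-300)
for i, g in enumerate(gaps):
    if g < 1e-6:                      # tolerance is an assumption
        print(f"near-degenerate pair: ev[{i}] = {ev[i]:.6e}, ev[{i+1}] = {ev[i+1]:.6e}")
```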
Edit: and I almost forgot, there is an FF command called display
that seems to show the structure of larger matrices, but I could
not get it to work well with these. Taking out the small values,
you could probably visualize the structure in R or matlab
and see if it makes sense.
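Instead of R or matlab, the same visualization is quick in Python with matplotlib's spy plot after thresholding the small entries. A minimal sketch (file name and cutoff are assumptions):

```python
import numpy as np
import matplotlib.pyplot as plt

K = np.loadtxt("STIFF_FF.txt")                        # hypothetical FF export
K_thresholded = np.where(np.abs(K) < 1e-12, 0.0, K)   # drop very small entries

plt.spy(K_thresholded)                                # nonzero pattern of the matrix
plt.title("stiffness matrix structure after dropping small entries")
plt.show()
```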