Siegfried [JPE, 1970] (UWM)



bscout
02-21-2008, 03:43 PM
Great paper. (http://academic.reed.edu/economics/course_pages/354_s06/Siegfried_JPE_70.pdf)

;)

israelecon
02-21-2008, 03:52 PM
not bad.

Guest Who
02-21-2008, 04:10 PM
Isn't he missing a \delta (in the exponent) in the final limit?

stupidolive
02-21-2008, 04:13 PM
great paper!

asianecon
02-21-2008, 04:26 PM
nevermind

C152dude
02-21-2008, 04:29 PM
Haha.

asianecon
02-21-2008, 04:31 PM
Isn't he missing a \delta (in the exponent) in the final limit?

Yes I think so.


Mathematicians out there: I have some trouble understanding (10) and (11). I actually just found out that there's such a thing as a vector inverse! My question is whether such an inverse is the same for a vector and its transpose, and also whether I'm correct in saying that the result is a vector, in which case (11) is taking the factorial of a vector, which I am not familiar with... (and then he plugs these results into (12))...

But maybe it's part of the irony...


A physics friend of mine just told me that the "geometric product" of two parallel vectors is a scalar, but he doesn't seem to know what exactly that "geometric product" is. Then he mentioned something about Clifford algebra but didn't follow up.
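
For what it's worth, the identity he was probably pointing at (textbook geometric/Clifford algebra, not anything from the paper itself) is that the geometric product of two vectors splits into a scalar part and a bivector part,

\[
uv = u \cdot v + u \wedge v ,
\]

and when $u$ and $v$ are parallel the wedge term vanishes, so $uv = u \cdot v$ is just a scalar. In that setting a nonzero vector even has an inverse, $v^{-1} = v / \|v\|^2$, since $v v = v \cdot v = \|v\|^2$.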

Olm
02-21-2008, 04:34 PM
Rofl!

polkaparty
02-21-2008, 05:53 PM
Mathematicians out there: I have some trouble understanding (10) and (11). I actually just found out that there's such a thing as a vector inverse! My question is whether such an inverse is the same for a vector and its transpose, and also whether I'm correct in saying that the result is a vector, in which case (11) is taking the factorial of a vector, which I am not familiar with... (and then he plugs these results into (12))...

My linear algebra is quite rusty... but general vector spaces do not need to have multiplicative inverses. There is a whole theory about when a given n \times n matrix is invertible, so it's clear that not every n \times n matrix has an inverse in the space of all n \times n matrices.

I don't think the space of m \times n matrices (m \neq n) even has a multiplicative identity, so it can't have inverses.

At the point where the paper starts discussing this it has really devolved into nonsense anyway.... He says something about restricting to the one-dimensional space, in which case inverses will exist via an isomorphism to the field, I believe, but he could have let X be an element of the general linear group (invertible n by n matrices). The spaces are all getting mixed up anyway, since the 0 in equation (10) is a vector while the 0 in equation (9) is a scalar in R, so it doesn't make any sense to combine them into equation (11): the factorial function (or even the gamma function) is not defined on a general vector space (as far as I know)... but it does make sense if you assume he's using the isomorphism mentioned earlier. [addendum 2: provided the field is R or C]
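
Concretely, the identification I have in mind (my reading, not anything Siegfried spells out) is the obvious isomorphism between 1 \times 1 matrices and the field,

\[
[x] \;\longleftrightarrow\; x \in \mathbb{R}, \qquad [x]^{-1} = [1/x] \ \ (x \neq 0), \qquad [x]! := \Gamma(x+1),
\]

so under that reading the inverse in (10) and the factorial in (11) at least type-check before being plugged into (12).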

Anyway, I thought the paper was interesting, but perhaps I don't know enough about econometrics to get it? The entire world needs less complexity (read: less PowerPoint), so I suppose econometrics does too....

Addendum: multiplicative inverses are not the same as the transpose. Just consider the case of invertible 2 by 2 matrices where you have a formula for the inverse.
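
To make the addendum concrete (the standard 2 \times 2 formula, nothing specific to the paper): for an invertible

\[
A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \qquad
A^{-1} = \frac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}, \qquad
A^{T} = \begin{pmatrix} a & c \\ b & d \end{pmatrix},
\]

and the two only coincide when A happens to be orthogonal (a rotation or reflection), not in general.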

asianecon
02-21-2008, 07:11 PM

I was actually referring to the inverse of a column or row vector, rather than something like a generalized inverse of a matrix, since that's what he seems to imply by taking the inverse of X, which he defines as a vector (instead of, say, XX'). ("vector inverse" --> inverse.cdy (http://staff.science.uva.nl/~leo/cinderella/inverse1.html))

polkaparty
02-21-2008, 07:31 PM

Well a row or column 'vector' is just an n \times 1 or 1 \times n matrix, so my discussion above applies.

I was talking about inverses in the algebraic sense of course, which is the relevant definition when discussing the formula $(X^t)^{-1} = (X^{-1})^t$, as used in the paper.
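
(The identity itself is easy to check for an invertible square X, and doesn't need anything from the paper: since $(AB)^t = B^t A^t$,

\[
(X^{-1})^{t} X^{t} = (X X^{-1})^{t} = I^{t} = I ,
\]

so $(X^{-1})^t$ is the inverse of $X^t$, i.e. $(X^t)^{-1} = (X^{-1})^t$.)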

The "inverse" vector described in that link appears completely different than the algebraic inverse, but I've never seen that particular definition before, so I can't be so sure. For reference, based on that java app, it looks like this is the def:

\begin{definition}
Let $x = (a, b)$ be an element of $\mathbb{R}^2$ equipped with the Euclidean inner product $\langle x, y \rangle$. Then a \emph{vector inverse} of $x$ is a vector $y \in \mathbb{R}^2$ such that $\|y\| = \frac{1}{\|x\|}$, or
\[
\|y\| \cdot \|x\| = 1 .
\]
\end{definition}

The last part of the definition, where the norms are multiplicative inverses in the field, is where the name vector inverse must come from.

With this definition vector inverses are not unique. BTW, the definition clearly generalizes to any normed vector space.
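
For instance, in $\mathbb{R}^2$ the vector $x = (2, 0)$ has both $(1/2, 0)$ and $(0, 1/2)$ as vector inverses under this definition, since both have norm $1/2 = 1/\|x\|$ (in fact the whole circle of radius $1/\|x\|$ works).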

Who knows what other crazy stuff those physicists are doing.... Anyway, I don't think the definition I gave is what Siegfried had in mind.

fp3690
02-21-2008, 11:18 PM
Great article! I'm surprised it ever got published in JPE, especially in 1970! Maybe they had a taste for subtle sarcasm.