Double up Gaussian random numbers.
Gaussians & Eigenspace Splitting
Applying fast Walsh–Hadamard transform eigenspace splitting to a vector of random numbers drawn from the Gaussian (normal) distribution doubles the number of Gaussian variables, yet they still pass most tests for the Gaussian distribution.
A Hadamard transform (or any fixed linear transform) of a Gaussian vector is still Gaussian, so the two subvectors you get from splitting the transformed vector have Gaussian distributions, both marginally and jointly, and for the usual iid zero-mean, equal-variance input they will pass the usual Gaussian tests, up to finite-sample effects and exact arithmetic.
1. Setup
You define:

$$x_+ = P_+\,x = (I + H)\,x, \qquad x_- = P_-\,x = (I - H)\,x,$$

where $H$ is the normalized Hadamard transform matrix ($H^\top = H$, $H^2 = I$).

- $H$ has eigenvalues $+1$ and $-1$.
- $P_+ = I + H$ projects onto the $+1$ eigenspace (up to a factor of 2).
- $P_- = I - H$ projects onto the $-1$ eigenspace (also up to a factor of 2).

So $x_+$ and $x_-$ each live entirely in orthogonal subspaces of $\mathbb{R}^n$.
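As a concrete sketch (not from the original derivation), here is the split in NumPy. It uses `scipy.linalg.hadamard`, which returns the Sylvester-construction Hadamard matrix; a practical implementation would use a fast Walsh–Hadamard transform in $O(n \log n)$ instead of a dense matrix.

```python
import numpy as np
from scipy.linalg import hadamard  # Sylvester-construction Hadamard matrix

n = 64                            # must be a power of two
H = hadamard(n) / np.sqrt(n)      # normalized: H == H.T and H @ H == I

rng = np.random.default_rng(0)
x = rng.standard_normal(n)        # x ~ N(0, I_n)

P_plus = np.eye(n) + H            # scaled projector onto the +1 eigenspace
P_minus = np.eye(n) - H           # scaled projector onto the -1 eigenspace

x_plus = P_plus @ x               # x + Hx
x_minus = P_minus @ x             # x - Hx

# Sanity checks: H is an involution and the two images are orthogonal.
assert np.allclose(H @ H, np.eye(n))
assert np.allclose(P_plus @ P_minus, np.zeros((n, n)))
assert abs(x_plus @ x_minus) < 1e-9
```

The last assertion holds for every draw, since $x_+^\top x_- = x^\top (I - H^2)\, x = 0$ exactly.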
2. Distribution of $x_+$ and $x_-$
If $x \sim \mathcal{N}(0, I_n)$ (iid standard Gaussian entries):

- Any linear transform of a Gaussian vector is still Gaussian.
- $P_+$ and $P_-$ are both symmetric, scaled orthogonal projectors:

$$P_+ P_- = (I + H)(I - H) = I - H^2 = 0,$$

showing the images of $P_+$ and $P_-$ are orthogonal.
3. Covariance structure
We can compute:

$$\operatorname{Cov}(x_+) = P_+ \operatorname{Cov}(x)\, P_+^\top = (I + H)(I + H) = I + 2H + H^2 = 2(I + H) = 2P_+,$$

and similarly:

$$\operatorname{Cov}(x_-) = 2(I - H) = 2P_-.$$

The cross-covariance is:

$$\operatorname{Cov}(x_+, x_-) = P_+ \operatorname{Cov}(x)\, P_-^\top = (I + H)(I - H) = I - H^2 = 0.$$

So $x_+$ and $x_-$ are uncorrelated.
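A quick empirical check of these covariance formulas, a sketch reusing the dense-matrix setup from above:

```python
import numpy as np
from scipy.linalg import hadamard

n, trials = 16, 200_000
H = hadamard(n) / np.sqrt(n)
P_plus, P_minus = np.eye(n) + H, np.eye(n) - H

rng = np.random.default_rng(1)
X = rng.standard_normal((trials, n))   # each row is an iid N(0, I_n) draw
Xp, Xm = X @ P_plus, X @ P_minus       # rows of x_+ and x_- (the P's are symmetric)

print(np.max(np.abs(Xp.T @ Xp / trials - 2 * P_plus)))  # -> 0 like 1/sqrt(trials)
print(np.max(np.abs(Xp.T @ Xm / trials)))               # cross-covariance -> 0
```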
4. Gaussian ⇒ independence
For jointly Gaussian vectors, uncorrelatedness ⇒ independence.
Thus, $x_+$ and $x_-$ are independent Gaussian vectors, each living in its own orthogonal subspace, with their own covariance matrices $2P_+$ and $2P_-$.
If you normalize appropriately (e.g. divide by $2$ after expressing each vector in an orthonormal basis of its eigenspace), each has iid standard Gaussian coordinates in its own reduced-dimensional space.
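Concretely, an orthonormal basis of the $+1$ eigenspace can be pulled out of an eigendecomposition. In that basis $x_+$ has $n/2$ free coordinates with variance $4$, so dividing by $2$ recovers iid standard Gaussians; a sketch:

```python
import numpy as np
from scipy.linalg import hadamard

n = 64
H = hadamard(n) / np.sqrt(n)
eigvals, eigvecs = np.linalg.eigh(H)   # eigenvalues are exactly -1 and +1
U_plus = eigvecs[:, eigvals > 0]       # orthonormal basis of the +1 eigenspace

rng = np.random.default_rng(2)
x = rng.standard_normal(n)
x_plus = x + H @ x

# U_plus.T @ x_plus = 2 * U_plus.T @ x, and U_plus.T @ x ~ N(0, I_{n/2}),
# so dividing by 2 leaves n/2 iid standard Gaussian coordinates.
coords = U_plus.T @ x_plus / 2
print(coords.shape)                    # (32,)
```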
5. Passing Gaussian tests
Because $x_+$ and $x_-$ are exactly Gaussian and independent, each one individually:

- Has the correct Gaussian marginal distribution.
- Will pass virtually all Gaussianity tests (KS, Shapiro–Wilk, skewness/kurtosis tests), apart from the usual finite-sample false rejections.
- Any deviations you see will come from numerical floating-point effects, not from the math.
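For instance, here is a sketch that feeds the normalized eigenspace coordinates to two standard tests from `scipy.stats`; rejections beyond the nominal rate would point at floating-point artifacts:

```python
import numpy as np
from scipy.linalg import hadamard
from scipy.stats import shapiro, kstest

n = 64
H = hadamard(n) / np.sqrt(n)
eigvals, eigvecs = np.linalg.eigh(H)
U_plus = eigvecs[:, eigvals > 0]

rng = np.random.default_rng(3)
X = rng.standard_normal((100, n))          # 100 independent draws of x
coords = (X + X @ H) @ U_plus / 2          # 100 x (n/2) iid N(0, 1) values
sample = coords.ravel()

print(shapiro(sample))                     # expect a non-significant p-value
print(kstest(sample, "norm"))              # same here
```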
💡 Subtle note:
If you take $x_+$ and $x_-$ as full $n$-dimensional vectors without removing their zero-projection components (the coordinates along the opposite eigenspace), those components will be deterministically zero and obviously not Gaussian. You have to restrict each to its eigenspace dimension ($n/2$ for the standard Walsh–Hadamard matrix).
Possible uses

1. Communications
- Noise-like signaling:
  You could transmit either $x_+$ or $x_-$ as "noise" over a channel. To a naive observer, it looks like Gaussian noise.
  But a receiver who knows the Hadamard matrix can project onto the two eigenspaces and recover the bit: was this $x_+$ or $x_-$? (See the detector sketch after this list.)
- This is essentially a spread-spectrum watermark: the eigenspace label acts as the hidden payload.
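A minimal detector sketch (the function names here are illustrative, not from the post): since $H x_+ = x_+$ and $H x_- = -x_-$, the sign of $y^\top H y$ recovers the hidden bit.

```python
import numpy as np
from scipy.linalg import hadamard

def send_bit(bit, H, rng):
    """Draw Gaussian noise, then push it into one eigenspace to encode a bit."""
    x = rng.standard_normal(H.shape[0])
    return x + H @ x if bit else x - H @ x

def read_bit(y, H):
    """H y = +y in the +1 eigenspace and -y in the -1 eigenspace,
    so the sign of y.H.y recovers the hidden bit."""
    return int(y @ (H @ y) > 0)

n = 64
H = hadamard(n) / np.sqrt(n)
rng = np.random.default_rng(4)

y = send_bit(1, H, rng)   # looks like plain Gaussian noise on the wire
print(read_bit(y, H))     # -> 1
```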
2. Steganography in noise
- In simulations or media where Gaussian noise is expected, you can hide a binary message by choosing which eigenspace to draw the noise from (sketched after this list).
- Without knowledge of $H$ and the exact projection scheme, the two noise sources are statistically indistinguishable under standard Gaussian tests.
- This is a form of covert channel that survives most naive statistical filtering.
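Extending the single-bit detector to a multi-bit message, one noise block per bit (again a sketch with illustrative names):

```python
import numpy as np
from scipy.linalg import hadamard

n = 64
H = hadamard(n) / np.sqrt(n)
rng = np.random.default_rng(5)

def hide(bits):
    """One noise block per bit; the eigenspace choice carries the message."""
    blocks = [rng.standard_normal(n) for _ in bits]
    return np.concatenate([x + H @ x if b else x - H @ x
                           for b, x in zip(bits, blocks)])

def reveal(signal):
    """Split into blocks and read each eigenspace label back."""
    return [int(y @ (H @ y) > 0) for y in signal.reshape(-1, n)]

message = [1, 0, 1, 1, 0, 0, 1, 0]
print(reveal(hide(message)) == message)   # -> True
```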
3. Keying / authentication
- If you control the noise source, you can embed a "secret key" into it by modulating which eigenspace you sample from.
- At the receiving end, applying $H$ and checking the projection tells you whether the source is authentic (see the sketch below).
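A sketch of that check: for authentic noise sitting in one eigenspace, $|y^\top H y| / \lVert y \rVert^2$ equals $1$ exactly, while for ordinary white noise it concentrates near $0$, so thresholding this statistic authenticates the source (the threshold value below is an arbitrary choice):

```python
import numpy as np
from scipy.linalg import hadamard

def is_authentic(y, H, threshold=0.9):
    """|y.H.y| / |y|^2 is exactly 1 for eigenspace noise and
    O(1/sqrt(n)) for ordinary white noise."""
    return abs(y @ (H @ y)) / (y @ y) > threshold

n = 64
H = hadamard(n) / np.sqrt(n)
rng = np.random.default_rng(6)

x = rng.standard_normal(n)
print(is_authentic(x + H @ x, H))               # -> True  (keyed source)
print(is_authentic(rng.standard_normal(n), H))  # -> False (ordinary noise)
```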
4. Simulation experiments
- In Monte Carlo methods, you could use $x_+$ for one set of runs and $x_-$ for another to guarantee orthogonality between datasets while keeping all marginal Gaussian properties.
- This can help in variance-reduction techniques where you want two streams that look identical in distribution but are guaranteed independent and orthogonal in a known way (see the sketch below).
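A toy sketch of the two-stream idea, here scaled by $1/2$ so each stream is a true projection of the same underlying draws:

```python
import numpy as np
from scipy.linalg import hadamard

n, runs = 64, 10_000
H = hadamard(n) / np.sqrt(n)
rng = np.random.default_rng(7)

X = rng.standard_normal((runs, n))
stream_a = (X + X @ H) / 2        # +1-eigenspace stream (true projection)
stream_b = (X - X @ H) / 2        # -1-eigenspace stream, orthogonal to stream_a

# The same Monte Carlo statistic on both streams: identical in distribution,
# but the two datasets are independent and orthogonal by construction.
def f(z):
    return np.mean(z ** 2, axis=1)

print(f(stream_a).mean(), f(stream_b).mean())   # both approach 0.5
```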
5. Testing
- Use the tainted variables to stress-test statistical procedures:
  - If your statistical model assumes Gaussian white noise, will it notice that the data comes from a restricted eigenspace?
  - Many machine learning models ignore covariance structure; this is a way to check their sensitivity (example below).
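One way to run such a stress test (a sketch): a pooled marginal normality test passes, but any check that knows the structure, such as the correlation between $y$ and $Hy$, flags the restriction immediately.

```python
import numpy as np
from scipy.linalg import hadamard
from scipy.stats import kstest, pearsonr

n = 64
H = hadamard(n) / np.sqrt(n)
rng = np.random.default_rng(8)

X = rng.standard_normal((500, n))
tainted = (X + X @ H) / 2            # white noise restricted to the +1 eigenspace

# A pooled marginal normality test sees nothing wrong (standardize per column)...
z = (tainted / tainted.std(axis=0)).ravel()
print(kstest(z, "norm").pvalue)      # typically non-significant

# ...but a structure-aware check fails instantly: every sample is a fixed
# point of H, so corr(y, Hy) is 1 instead of roughly 0 for white noise.
y = tainted[0]
print(pearsonr(y, H @ y)[0])         # -> 1.0
```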