Posts filed under #bigbambu

Guys, @t4tino23 is running a contest. Go watch his video to find out more! #bigbambu

An extract on #bigbambu

The Fourier transform of a normal density $f$ with mean $\mu$ and standard deviation $\sigma$ is

$\hat{f}(t) = \int_{-\infty}^{\infty} f(x)\, e^{-itx}\, dx = e^{-i\mu t} e^{-\frac{1}{2}(\sigma t)^{2}},$

where $i$ is the imaginary unit. If the mean $\mu = 0$, the first factor is 1, and the Fourier transform is, apart from a constant factor, a normal density on the frequency domain, with mean 0 and standard deviation $1/\sigma$. In particular, the standard normal density $\varphi$ is an eigenfunction of the Fourier transform.

In probability theory, the Fourier transform of the probability distribution of a real-valued random variable $X$ is called the characteristic function of that variable, and can be defined as the expected value of $e^{itX}$, as a function of the real variable $t$ (the frequency parameter of the Fourier transform). This definition can be analytically extended to a complex-valued variable $t$.
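As an illustrative check (not part of the original extract), the closed form above can be verified by Monte Carlo: for a large sample from $\mathcal{N}(\mu, \sigma^2)$, the empirical average of $e^{-itX}$ should be close to $e^{-i\mu t} e^{-\frac{1}{2}(\sigma t)^2}$. The values of $\mu$, $\sigma$, $t$, the seed and the sample size below are arbitrary choices for this sketch.

import numpy as np

# Sketch: Monte Carlo check of the Fourier transform / characteristic function
# of a normal density. mu, sigma, t and the sample size are illustrative values.
rng = np.random.default_rng(0)
mu, sigma = 1.5, 2.0
x = rng.normal(mu, sigma, size=1_000_000)

for t in (0.0, 0.3, 1.0):
    empirical = np.mean(np.exp(-1j * t * x))                     # estimate of E[e^{-itX}]
    closed_form = np.exp(-1j * mu * t - 0.5 * (sigma * t) ** 2)  # formula from the text
    print(t, empirical, closed_form)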

For any positive integer $n$, any normal distribution with mean $\mu$ and variance $\sigma^2$ is the distribution of the sum of $n$ independent normal deviates, each with mean $\mu/n$ and variance $\sigma^2/n$. This property is called infinite divisibility. Conversely, if $X_1$ and $X_2$ are independent random variables and their sum $X_1 + X_2$ has a normal distribution, then both $X_1$ and $X_2$ must be normal deviates. This result is known as Cramér's decomposition theorem, and is equivalent to saying that the convolution of two distributions is normal if and only if both are normal. Cramér's theorem implies that a linear combination of independent non-Gaussian variables will never have an exactly normal distribution, although it may approach it arbitrarily closely.
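A short simulation sketch of the infinite-divisibility property (the values of $n$, $\mu$, $\sigma$ and the replication count are assumptions made for illustration): summing $n$ independent $\mathcal{N}(\mu/n, \sigma^2/n)$ deviates should reproduce the mean and variance of $\mathcal{N}(\mu, \sigma^2)$.

import numpy as np

# Sketch: infinite divisibility of the normal distribution.
# n, mu, sigma and the replication count are illustrative values.
rng = np.random.default_rng(1)
n, mu, sigma = 7, 3.0, 1.5
reps = 500_000

# Each column holds n independent N(mu/n, sigma^2/n) deviates; sum them column-wise.
parts = rng.normal(mu / n, sigma / np.sqrt(n), size=(n, reps))
sums = parts.sum(axis=0)

print(sums.mean(), mu)         # should be close to mu
print(sums.var(), sigma ** 2)  # should be close to sigma^2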

The estimator $\hat{\mu}$ is called the sample mean, since it is the arithmetic mean of all observations. The statistic $\overline{x}$ is complete and sufficient for $\mu$, and therefore by the Lehmann-Scheffé theorem, $\hat{\mu}$ is the uniformly minimum variance unbiased (UMVU) estimator. In finite samples it is distributed normally:

$\hat{\mu} \sim \mathcal{N}(\mu,\, \sigma^{2}/n).$

The variance of this estimator is equal to the $\mu\mu$-element of the inverse Fisher information matrix $\mathcal{I}^{-1}$. This implies that the estimator is finite-sample efficient. Of practical importance is the fact that the standard error of $\hat{\mu}$ is proportional to $1/\sqrt{n}$; that is, if one wishes to decrease the standard error by a factor of 10, one must increase the number of points in the sample by a factor of 100. This fact is widely used in determining sample sizes for opinion polls and the number of trials in Monte Carlo simulations.

From the standpoint of asymptotic theory, $\hat{\mu}$ is consistent: it converges in probability to $\mu$ as $n \to \infty$. The estimator is also asymptotically normal, which is a simple corollary of the fact that it is normal in finite samples:

$\sqrt{n}\,(\hat{\mu} - \mu)\ \xrightarrow{d}\ \mathcal{N}(0,\, \sigma^{2}).$
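The $1/\sqrt{n}$ scaling of the standard error can be seen in a quick simulation (the values of $\mu$, $\sigma$, the sample sizes and the replication count are illustrative assumptions, not from the text): increasing the sample size by a factor of 100 shrinks the empirical standard error of the sample mean by roughly a factor of 10.

import numpy as np

# Sketch: the standard error of the sample mean scales as 1/sqrt(n).
# mu, sigma, the sample sizes and the replication count are illustrative values.
rng = np.random.default_rng(2)
mu, sigma, reps = 0.0, 2.0, 1_000

for n in (100, 10_000):  # 100x more data -> standard error ~10x smaller
    means = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
    print(n, means.std(), sigma / np.sqrt(n))  # empirical vs theoretical standard error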
