In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.
The LLN is important because it guarantees stable long-term results for the averages of some random events. For example, while a casino may lose money on a single spin of the roulette wheel, its earnings will tend towards a predictable percentage over a large number of spins. Any winning streak by a player will eventually be overcome by the parameters of the game. It is important to remember that the law applies only (as its name indicates) when a large number of observations is considered. There is no principle that a small number of observations will coincide with the expected value, or that a streak of one value will immediately be "balanced" by the others (see the gambler's fallacy).
Example
For example, a single roll of a fair, six-sided die produces one of the numbers 1, 2, 3, 4, 5, or 6, each with equal probability. Therefore, the expected value of a single die roll is
(1 + 2 + 3 + 4 + 5 + 6) / 6 = 3.5
According to the law of large numbers, if a large number of six-sided dice are rolled, the average of their values (sometimes called the sample mean) is likely to be close to 3.5, with the precision increasing as more dice are rolled.
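As an illustration (not part of the original article), a short Python simulation makes this concrete; the choice of 100,000 rolls and the fixed seed are arbitrary.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

n = 100_000
rolls = [random.randint(1, 6) for _ in range(n)]
sample_mean = sum(rolls) / n

# With this many rolls, the sample mean lands very close to the
# expected value (1 + 2 + 3 + 4 + 5 + 6) / 6 = 3.5.
print(sample_mean)
```

Rerunning with a larger `n` tightens the result further, which is exactly the trend the law describes.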
It follows from the law of large numbers that the empirical probability of success in a series of Bernoulli trials converges to the theoretical probability. For a Bernoulli random variable, the expected value is the theoretical probability of success, and the average of n such variables (assuming they are independent and identically distributed (i.i.d.)) is precisely the relative frequency.
For example, a fair coin toss is a Bernoulli trial. When a fair coin is flipped once, the theoretical probability that the outcome will be heads is equal to 1/2. Therefore, according to the law of large numbers, the proportion of heads in a "large" number of coin flips should be roughly 1/2. In particular, the proportion of heads after n flips will almost surely converge to 1/2 as n approaches infinity.
Although the proportion of heads (and tails) approaches 1/2, almost surely the absolute difference between the number of heads and tails will become large as the number of flips becomes large. That is, the probability that the absolute difference remains a small number approaches zero as the number of flips becomes large. Also, almost surely the ratio of the absolute difference to the number of flips will approach zero. Intuitively, the expected absolute difference grows, but at a slower rate than the number of flips, as the number of flips grows.
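This distinction can be sketched numerically (my own illustration, assuming a fair coin; the flip count is arbitrary): the proportion converges while the raw head-minus-tail count typically does not.

```python
import random

random.seed(1)  # reproducible run

n = 1_000_000
heads = sum(random.randint(0, 1) for _ in range(n))
tails = n - heads

proportion = heads / n               # converges to 1/2
abs_difference = abs(heads - tails)  # typically grows on the order of sqrt(n)

print(proportion, abs_difference)
```

The ratio `abs_difference / n` shrinks toward zero even when `abs_difference` itself is large, matching the paragraph above.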
History
The Italian mathematician Gerolamo Cardano (1501-1576) stated without proof that the accuracies of empirical statistics tend to improve with the number of trials. This was later formalized as a law of large numbers. A special form of the LLN (for a binary random variable) was first proved by Jacob Bernoulli. It took him more than 20 years to develop a sufficiently rigorous mathematical proof, which was published in his Ars Conjectandi in 1713. He named this his "Golden Theorem", but it became generally known as "Bernoulli's Theorem". This should not be confused with Bernoulli's principle, named after Jacob Bernoulli's nephew Daniel Bernoulli. In 1837, S. D. Poisson further described it under the name "la loi des grands nombres" ("the law of large numbers"). Thereafter, it was known under both names, but the "law of large numbers" is the one most frequently used.
After Bernoulli and Poisson published their efforts, other mathematicians also contributed to the refinement of the law, including Chebyshev, Markov, Borel, Cantelli, Kolmogorov and Khinchin. Markov showed that the law can apply to a random variable that does not have a finite variance under some other, weaker assumption, and Khinchin showed in 1929 that if the series consists of independent identically distributed random variables, it suffices that the expected value exists for the weak law of large numbers to be true. These further studies have given rise to two prominent forms of the LLN. One is called the "weak" law and the other the "strong" law, in reference to two different modes of convergence of the cumulative sample means to the expected value; in particular, as explained below, the strong form implies the weak.
Form
Two different versions of the law of large numbers are described below; they are called the strong law of large numbers and the weak law of large numbers. Stated for the case where X1, X2, ... is an infinite sequence of i.i.d. Lebesgue integrable random variables with expected value E(X1) = E(X2) = ... = µ, both versions of the law state that the sample average

X̄n = (1/n)(X1 + X2 + ... + Xn)

converges to the expected value:

X̄n → µ as n → ∞.
(Lebesgue integrability of Xj means that the expected value E(Xj) exists according to Lebesgue integration and is finite.)
An assumption of finite variance Var(X1) = Var(X2) = ... = σ² < ∞ is not necessary. Large or infinite variance will make the convergence slower, but the LLN holds anyway. This assumption is often used because it makes the proofs easier and shorter.
Mutual independence of the random variables can be replaced by pairwise independence in both versions of the law.
The difference between the strong and the weak version concerns the mode of convergence being asserted. For an interpretation of these modes, see Convergence of random variables.
Weak law
The weak law of large numbers (also called Khinchin's law) states that the sample average converges in probability towards the expected value:

X̄n → µ in probability as n → ∞.

That is, for any positive number ε,

lim (n → ∞) Pr( |X̄n − µ| > ε ) = 0.
Interpreting this result, the weak law states that for any nonzero margin specified, no matter how small, with a sufficiently large sample there will be a very high probability that the average of the observations will be close to the expected value; that is, within the margin.
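Convergence in probability can be sketched numerically (my own illustration; the Exp(1) distribution, the margin ε = 0.1, and the sample sizes are all arbitrary choices): estimate Pr(|X̄n − µ| > ε) by repeated simulation and watch it shrink as n grows.

```python
import random

random.seed(2)

def deviation_prob(n, eps=0.1, trials=2000):
    """Estimate Pr(|sample mean - mu| > eps) for n Exp(1) draws (mu = 1)."""
    count = 0
    for _ in range(trials):
        mean = sum(random.expovariate(1.0) for _ in range(n)) / n
        if abs(mean - 1.0) > eps:
            count += 1
    return count / trials

p_small = deviation_prob(10)    # small sample: deviations beyond eps are common
p_large = deviation_prob(1000)  # large sample: such deviations become rare
print(p_small, p_large)
```

For a fixed margin, the estimated deviation probability drops sharply as n increases, which is the content of the limit above.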
As mentioned earlier, the weak law applies in the case of i.i.d. random variables, but it also applies in some other cases. For example, the variance may be different for each random variable in the series, keeping the expected value constant. If the variances are bounded, then the law applies, as shown by Chebyshev as early as 1867. (If the expected values change during the series, then we can simply apply the law to the average deviation from the respective expected values. The law then states that this converges in probability to zero.) In fact, Chebyshev's proof works so long as the variance of the average of the first n values goes to zero as n goes to infinity. As an example, assume that each random variable in the series follows a Gaussian distribution with mean zero but with a variance that grows with n, and hence is unbounded. At each stage, the average will be normally distributed (as the average of a set of normally distributed variables). The variance of the sum equals the sum of the variances, and the variance of the average is that sum divided by n²; as long as this quantity goes to zero, the law still holds.
An example where the law of large numbers does not apply is the Cauchy distribution. Let each random number equal the tangent of an angle uniformly distributed between −90° and 90°. The median is zero, but the expected value does not exist, and indeed the average of n such variables has the same distribution as one such variable. It does not tend toward zero as n goes to infinity.
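This failure is easy to see numerically (my own sketch; the sample sizes are arbitrary): sample means of Cauchy draws remain as spread out as a single draw, instead of concentrating.

```python
import math
import random

random.seed(3)

def cauchy():
    # tangent of an angle uniform in (-90°, 90°), as in the text
    return math.tan(math.pi * (random.random() - 0.5))

n, reps = 10_000, 500
cauchy_means = [sum(cauchy() for _ in range(n)) / n for _ in range(reps)]

# Each of these means has the same standard Cauchy distribution as a
# single draw, so a large fraction of them still lies outside [-1, 1]
# despite n = 10,000 -- no concentration around any value.
far = sum(1 for m in cauchy_means if abs(m) > 1) / reps
print(far)
```

Repeating the experiment with, say, Uniform(0, 1) draws instead would show essentially all the means pinned near 0.5; the contrast is the point.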
There are also examples of the weak law applying even though the expected value does not exist. See the section on differences between the weak law and the strong law below.
Strong law
The strong law of large numbers states that the sample average converges almost surely to the expected value:

X̄n → µ almost surely as n → ∞.

That is,

Pr( lim (n → ∞) X̄n = µ ) = 1.
What this means is that the probability that, as the number of trials n goes to infinity, the average of the observations converges to the expected value, is equal to one.
The proof is more complex than that of the weak law. This law justifies the intuitive interpretation of the expected value (for Lebesgue integration only) of a random variable sampled repeatedly as the "long-term average".
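The "long-term average" reading can be sketched on a single sample path (my own illustration; the Uniform(0, 1) distribution, the cut-off of 100,000, and the tolerance are arbitrary): late in the sequence, the running average stays near the expected value at every index, not just at one checkpoint.

```python
import random

random.seed(4)

n = 200_000
mu = 0.5  # expected value of Uniform(0, 1)

total = 0.0
late_devs = []
for k in range(1, n + 1):
    total += random.random()
    if k >= 100_000:
        late_devs.append(abs(total / k - mu))

# On this single path, the running average stays close to 0.5 for
# every index past 100,000 -- the "almost sure" flavour of the claim.
print(max(late_devs))
```

The weak law only bounds the deviation at one fixed n; tracking the whole tail of the path, as here, is closer to what the strong law asserts.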
Almost sure convergence is also called strong convergence of random variables. This version is called the strong law because random variables which converge strongly (almost surely) are guaranteed to converge weakly (in probability). The strong law implies the weak law but not vice versa: when the strong law's conditions hold, the variables converge both strongly (almost surely) and weakly (in probability). However, the weak law may hold in conditions where the strong law does not, and then the convergence is only weak (in probability).
To date it has not been possible to prove that the strong law conditions are the same as those of the weak law.
The strong law of large numbers can itself be seen as a special case of the pointwise ergodic theorem.
The strong law applies to independent identically distributed random variables having an expected value (like the weak law). This was proved by Kolmogorov in 1930. It can also apply in other cases. Kolmogorov also showed, in 1933, that if the variables are independent and identically distributed, then for the average to converge almost surely on something (this can be considered another statement of the strong law), it is necessary that they have an expected value (and then of course the average will converge almost surely on that).
If the summands are independent but not identically distributed, then

X̄n − E[X̄n] → 0 almost surely,

provided that each Xk has a finite second moment and

∑ (k = 1 to ∞) (1/k²) Var(Xk) < ∞.
This statement is known as Kolmogorov's strong law, see e.g. Sen & Singer (1993, Theorem 2.3.10).
An example of a series where the weak law applies but not the strong law is when Xk is plus or minus √(k / log log log k) (starting at k large enough that the denominator is positive) with probability 1/2 each. The variance of Xk is then k / log log log k. Kolmogorov's strong law does not apply because the partial sum in his criterion up to k = n is asymptotic to log n / log log log n, and this is unbounded.
If we replace the random variables with Gaussian variables having the same variances, then the average at any point will also be normally distributed. The width of the distribution of the average will tend toward zero (with standard deviation asymptotic to 1 / √(2 log log log n)), but for a given ε there is a probability, not going to zero with n, that the average some time after the n-th trial will come back up to ε. Since this probability does not go to zero, it must have a positive lower bound p(ε), which means there is a probability of at least p(ε) that the average attains ε after n trials. It will happen with probability at least p(ε)/2 before some m which depends on n. But even after m, there is still a probability of at least p(ε) that it will happen. (This seems to indicate that p(ε) = 1 and the average will attain ε an infinite number of times.)
Differences between the weak law and the strong law
The weak law states that for a specified large n, the average X̄n is likely to be near µ. Thus, it leaves open the possibility that |X̄n − µ| > ε happens an infinite number of times, although at infrequent intervals. (Not necessarily |X̄n − µ| ≠ 0 for all n.)
The strong law shows that this almost surely will not occur. In particular, it implies that with probability 1, we have that for any ε > 0 the inequality |X̄n − µ| < ε holds for all large enough n.
The strong law does not hold in the following cases, but the weak law does.
1. Let X be an exponentially distributed random variable with parameter 1. The random variable sin(X) e^X / X has no expected value according to Lebesgue integration, but using conditional convergence and interpreting the integral as a Dirichlet integral, which is an improper Riemann integral, we can say:

E[ sin(X) e^X / X ] = ∫ (0 to ∞) (sin(x)/x) e^x e^(−x) dx = ∫ (0 to ∞) (sin(x)/x) dx = π/2.
2. Let X be a geometrically distributed random variable with probability 0.5. The random variable 2^X (−1)^X / X does not have an expected value in the conventional sense, because the infinite series is not absolutely convergent, but using conditional convergence we can say:

E[ 2^X (−1)^X / X ] = −ln 2.
3. If the cumulative distribution function of a random variable is

1 − F(x) = e / (2x ln x) for x ≥ e,
F(x) = e / (2|x| ln |x|) for x ≤ −e,

then it has no expected value, but the weak law is true.
Uniform law of large numbers
Suppose f(x, θ) is some function defined for θ ∈ Θ and continuous in θ. Then for any fixed θ, the sequence {f(X1, θ), f(X2, θ), ...} will be a sequence of independent and identically distributed random variables, such that the sample mean of this sequence converges in probability to E[f(X, θ)]. This is pointwise (in θ) convergence.
The uniform law of large numbers states the conditions under which the convergence happens uniformly in θ. If

- Θ is compact,
- f(x, θ) is continuous at each θ ∈ Θ for almost all x's, and a measurable function of x at each θ, and
- there exists a dominating function d(x) such that E[d(X)] < ∞ and |f(x, θ)| ≤ d(x) for all θ ∈ Θ,

then E[f(X, θ)] is continuous in θ, and

sup over θ ∈ Θ of | (1/n) ∑ (i = 1 to n) f(Xi, θ) − E[f(X, θ)] | → 0 in probability.
This result is useful for deriving consistency of a large class of estimators (see Extremum estimator).
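A small sketch of the uniform flavour (my own illustration; the choice f(x, θ) = (x − θ)² with X ~ Uniform(0, 1), Θ = [0, 1], and a finite grid over Θ are all assumptions made for the example): the worst-case error over θ becomes small, not just the error at a single θ.

```python
import random

random.seed(5)

n = 100_000
xs = [random.random() for _ in range(n)]  # X ~ Uniform(0, 1)

def true_mean(theta):
    # E[(X - theta)^2] for X ~ Uniform(0, 1) is 1/3 - theta + theta^2
    return 1.0 / 3.0 - theta + theta * theta

thetas = [i / 10 for i in range(11)]  # a grid over the compact set [0, 1]
worst_error = max(
    abs(sum((x - t) ** 2 for x in xs) / n - true_mean(t)) for t in thetas
)

# The supremum over theta of the estimation error is small, which is
# the kind of uniform convergence the theorem asserts (checked here
# only on a finite grid, not the full continuum).
print(worst_error)
```

This uniformity is precisely why sample-average objective functions yield consistent extremum estimators: minimizing the empirical objective is then close to minimizing the true one.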
Borel's law of large numbers
Borel's law of large numbers, named after Émile Borel, states that if an experiment is repeated a large number of times, independently under identical conditions, then the proportion of times that any specified event occurs approximately equals the probability of the event's occurrence on any particular trial; the larger the number of repetitions, the better the approximation. More precisely, if E denotes the event in question, p its probability of occurrence, and Nn(E) the number of times E occurs in the first n trials, then with probability one, Nn(E)/n → p as n → ∞.
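A minimal sketch of Borel's statement (my own illustration; the event "a fair die shows a six" and the trial count are arbitrary choices):

```python
import random

random.seed(6)

n = 100_000
# count how often the event E = "die shows a six" occurs in n trials
event_count = sum(1 for _ in range(n) if random.randint(1, 6) == 6)

frequency = event_count / n  # the proportion N_n(E) / n
print(frequency)  # should land near p = 1/6
```

Increasing `n` pulls the observed frequency ever closer to the single-trial probability, which is the "better approximation with more repetitions" part of the statement.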