Using Random Sample to Find Estimate























I have to use the inversion sampling method to generate a random sample of size 100 from the density $f(x)=\theta x^{\theta - 1}$, $0 < x < 1$, with $\theta = 5$. Here is my function so far:


X <- function(n, theta = 5) {
  # Inverse-transform sampling: the CDF of f(x) = theta * x^(theta - 1) on (0, 1)
  # is F(x) = x^theta, so X = F^{-1}(U) = U^(1/theta) with U ~ Uniform(0, 1).
  U <- runif(n)
  sample <- U^(1 / theta)
  return(sample)
}


So I get $100$ random values when I input $X(100)$. Now I have to use this data to find an estimate for $\theta = 5$. Where do I go from here? Do I make a histogram?
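
A quick visual check along those lines, as a minimal sketch (the seed, bin count, and use of base R graphics are my own choices, not part of the assignment): draw the histogram on the density scale and overlay the target density $f(x) = 5x^4$.

set.seed(123)                                  # illustrative seed, for reproducibility only
xs <- X(100)                                   # 100 draws from the sampler above
hist(xs, freq = FALSE, breaks = 20,            # histogram on the density scale
     main = "Sample vs f(x) = 5x^4", xlab = "x")
curve(5 * x^4, from = 0, to = 1, add = TRUE)   # overlay the target density

If the sampler is correct, the histogram should track the curve, which rises steeply toward $x = 1$.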










statistics sampling sampling-theory






asked Nov 28 at 22:09 by numericalorange












  • If you know $\theta$ equals $5$, why estimate it?
    – StubbornAtom
    Nov 29 at 15:41


















1 Answer






You can derive the maximum likelihood estimator (MLE). The likelihood is
$$
L(\theta; x) = \theta^{n} \Big( \prod_i x_i \Big)^{\theta - 1},
$$

so the log-likelihood is
$$
\ell(\theta) = n \ln \theta + (\theta - 1)\sum_i \ln x_i .
$$

Setting the score to zero,
$$
\ell'(\theta) = \frac{n}{\theta} + \sum_i \ln x_i = 0,
$$

hence the MLE is
$$
\hat{\theta}_{ML} = -\frac{n}{\sum_i \ln x_i}.
$$

(Each $x_i$ lies in $(0,1)$, so $\sum_i \ln x_i < 0$ and the estimate is positive.) Verifying that it indeed maximizes the likelihood,
$$
\ell''(\hat{\theta}_{ML}) = -\frac{n}{\hat{\theta}_{ML}^{\,2}} < 0.
$$
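
A minimal sketch of plugging the simulated sample into this estimator (the sampler X() comes from the question; the seed and object names are illustrative additions):

set.seed(123)                              # illustrative seed, for reproducibility only
xs <- X(100)                               # n = 100 draws generated by inversion
theta_hat <- -length(xs) / sum(log(xs))    # MLE: -n / sum(ln x_i)
print(theta_hat)                           # should land close to the true value theta = 5

With $n = 100$ the estimate will typically fall within roughly $\theta/\sqrt{n} = 0.5$ of the true value, since that is the MLE's approximate standard error here.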






answered Nov 30 at 23:27 by V. Vancak (accepted)