# UVa 10900

## Summary

You are a contestant on a quiz show. At the start, you have \$1. For each correct answer this prize doubles, but once you give a wrong answer, you lose everything. The game ends either when you decide to stop, or after you have answered ${\displaystyle n}$ questions.

Each time you are given a question, you think for a while and come up with a possible answer. You can also estimate the probability ${\displaystyle p}$ that your answer is correct. Based on this ${\displaystyle p}$, you decide whether to stop playing (and take the current prize) or to answer the question.

What is your expected prize if you use an optimal strategy?

Assume that ${\displaystyle p}$ is a random variable uniformly distributed over the interval ${\displaystyle [t,1]}$.

## Explanation

First of all, let us explain the last sentence. Imagine answering a large number of questions and writing down the value ${\displaystyle p}$ each time. The sentence says that the numbers you write down will all lie in the interval ${\displaystyle [t,1]}$ and will be approximately uniformly distributed.

Now, what is the optimal strategy?

Suppose that your current prize is ${\displaystyle C}$, and that the probability of answering the current question correctly is ${\displaystyle p}$. If you stop, you keep ${\displaystyle C}$. If you answer the question, then with probability ${\displaystyle p}$ your prize doubles to ${\displaystyle 2C}$ and you are still in the game with one question fewer to answer; with probability ${\displaystyle (1-p)}$ you gain nothing and you are out.

If this is the last question, the expected prize if you answer it is ${\displaystyle 2pC}$. Answering beats stopping exactly when ${\displaystyle 2pC>C}$, i.e. when ${\displaystyle p>1/2}$: you should answer the question if ${\displaystyle p>1/2}$, otherwise you shouldn't.

Now consider the general case. Let ${\displaystyle f(k)}$ be the value we seek – the expected prize if you have ${\displaystyle k}$ questions left and play optimally. How do we compute it? If you stop, you get ${\displaystyle 2^{n-k}}$ dollars (you have already answered ${\displaystyle n-k}$ questions correctly). If you answer, then with probability ${\displaystyle p}$ you remain in the game, and the expected prize from that point on is ${\displaystyle f(k-1)}$.

When you are in this situation, you will know the exact value of ${\displaystyle p}$, so your expected prize will be ${\displaystyle \max \left(2^{n-k},p\cdot f(k-1)\right)}$.

But when we compute our answer, we don't know which questions you are going to get. What now? We simply take an "average over all possible values of ${\displaystyle p}$". (As there are infinitely many possible values, the "average" is actually an integral.)

We get the following recurrence:

• ${\displaystyle f(0)=2^{n}}$
• ${\displaystyle f(k)={1 \over 1-t}\cdot \int _{t}^{1}\max \left(2^{n-k},p\cdot f(k-1)\right)dp}$

Using this recurrence, we can compute the values ${\displaystyle f(k)}$ for ${\displaystyle k=1,\dots ,n}$ and output the value ${\displaystyle f(n)}$.

## Implementations

An easy way of computing ${\displaystyle f(k)}$: the integrand switches from the constant ${\displaystyle 2^{n-k}}$ to the linear function ${\displaystyle p\cdot f(k-1)}$ at the crossover point ${\displaystyle p^{*}=2^{n-k}/f(k-1)}$ (clamped to the interval ${\displaystyle [t,1]}$). Split the integral into two parts at ${\displaystyle p^{*}}$: on the first part integrate the constant function, on the second part integrate the linear function.
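A minimal sketch of this computation in Python (the function name `expected_prize` is my own choice; the loop runs the recurrence from ${\displaystyle f(0)}$ up to ${\displaystyle f(n)}$ using the two closed-form pieces of the integral):

```python
def expected_prize(n: int, t: float) -> float:
    """Expected prize with n questions and p uniform on [t, 1]."""
    if t >= 1.0:
        return 2.0 ** n  # p is always 1: answer every question
    f = 2.0 ** n         # f(0) = 2^n: all n questions answered correctly
    for k in range(1, n + 1):
        c = 2.0 ** (n - k)                # prize if you stop with k questions left
        p_star = min(max(c / f, t), 1.0)  # crossover point, clamped to [t, 1]
        # constant piece c on [t, p*], linear piece p * f(k-1) on [p*, 1]
        f = (c * (p_star - t) + f * (1.0 - p_star ** 2) / 2.0) / (1.0 - t)
    return f


# Reproduce two of the sample cases:
print(f"{expected_prize(1, 0.5):.3f}")   # 1.500
print(f"{expected_prize(2, 0.6):.3f}")   # 2.560
```

Each step is O(1), so the whole computation is O(n).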

## A More Formal Approach

First we define a couple of random variables. Let ${\displaystyle X_{k}}$ denote the amount that you win when you have ${\displaystyle k}$ questions left, and let ${\displaystyle P_{k}}$ denote the probability that you'll answer the next question correctly when you have ${\displaystyle k}$ questions left.

We want to determine ${\displaystyle E(X_{n})}$, which is the expected prize when there are ${\displaystyle n}$ questions left.

We use ${\displaystyle E(X_{k}|P_{k}=p)}$ to denote the conditional expected value of ${\displaystyle X_{k}}$ given that ${\displaystyle P_{k}=p}$. This is just the expression that was derived above.

${\displaystyle E(X_{k}|P_{k}=p)=\max \left(2^{n-k},p\cdot f(k-1)\right)}$

where ${\displaystyle f(k-1)=E(X_{k-1})}$.

Now, as explained above, we don't know the value of ${\displaystyle p}$. The solution above was to "average over all possible values of p". This is the correct thing to do, but here's the formal justification for this step:

There is a theorem from probability theory, the law of total expectation, which says that ${\displaystyle E(X_{k})=E(E(X_{k}|P_{k}))}$. (See, for example, section 7.5.2, "Computing Expectations by Conditioning", in _A First Course in Probability_ by Sheldon Ross, to see why this theorem holds.)

${\displaystyle E(X_{k})=E(E(X_{k}|P_{k}))=\int _{-\infty }^{\infty }E(X_{k}|P_{k}=p)\cdot f_{P_{k}}(p)dp}$

where ${\displaystyle f_{P_{k}}(p)}$ is the density function of ${\displaystyle P_{k}}$. Since ${\displaystyle P_{k}}$ is uniformly distributed over the interval ${\displaystyle [t,1]}$, it follows that

• ${\displaystyle f_{P_{k}}(p)={1 \over 1-t}}$ for ${\displaystyle t\leq p\leq 1}$, and
• ${\displaystyle f_{P_{k}}(p)=0}$ for ${\displaystyle p<t}$ or ${\displaystyle p>1}$

Plugging the density function and the conditional expected value into the expression for ${\displaystyle E(X_{k})}$ yields the same results as above:

• ${\displaystyle f(0)=E(X_{0})=2^{n}}$
• ${\displaystyle f(k)=E(X_{k})={1 \over 1-t}\cdot \int _{t}^{1}\max \left(2^{n-k},p\cdot f(k-1)\right)dp}$
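Carrying out the integration explicitly gives a closed form (a sketch; ${\displaystyle p^{*}}$ denotes the crossover point between the two branches of the maximum, clamped to ${\displaystyle [t,1]}$):

```latex
p^{*} = \min\left(\max\left(\frac{2^{n-k}}{f(k-1)},\, t\right),\, 1\right),
\qquad
f(k) = \frac{1}{1-t}\left[\, 2^{n-k}\,(p^{*}-t) \;+\; f(k-1)\,\frac{1-(p^{*})^{2}}{2} \,\right]
```

The first term integrates the constant branch over ${\displaystyle [t,p^{*}]}$, the second integrates the linear branch ${\displaystyle p\cdot f(k-1)}$ over ${\displaystyle [p^{*},1]}$.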

## Input

1 0.5
1 0.3
2 0.6
24 0.25
30 0.8
0 0


## Output

1.500
1.357
2.560
230.138
45517159.608