Table of Links
- The Soup Kitchen Problem
- Multi-Unit Posted-Price Mechanisms
- A. Proofs for Section 2 (The Soup Kitchen Problem)
- B. Proofs for Section 3 (Multi-Unit Posted-Price Mechanisms)
A Proofs for Section 2 (The Soup Kitchen Problem)
Proof. The proof of Theorem 1 splits into the two steps above. The first of these steps is Lemma 2, proved subsequently. Observe that when we select L = 0, Lemma 2 tells us that
for µ1, ..., µn denoting the means of the individual random variables D1,...,Dn.
Observe that Lemma 3 also allows us to show
To see this, let n′ ≫ n and begin with µi = µ/n for all i ∈ [n] and µi = 0 for all i ∈ [n′] \ [n]. Then, the procedure of Lemma 3 provides a path along which all µi converge to µ/n′ and demonstrates
Furthermore, observe that by the definition of a binomial distribution,
for X ∼ Binomial(n, µ/n). Therefore, as we’ve demonstrated that E[f(X)] is weakly increasing in n, we have E[f(X)] ≤ E[f(Y )] for Y ∼ Poisson(µ), which is the limiting case as n → ∞. We conclude that Theorem 1 must hold, i.e. for our initial random variable D that is the sum of n independent Di ∈ [0, 1],
E[f(D)] ≤ E[f(X)] ≤ E[f(Y)].
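As a quick numeric illustration of this chain of inequalities (not from the paper: the convex test function f(x) = max(x − β, 0) and the parameter choices β = 2, µ = 1.5 below are illustrative assumptions), one can check that the binomial expectation is weakly increasing in n and bounded above by the Poisson expectation:

```python
# Sanity check: for convex f, E[f(X)] with X ~ Binomial(n, mu/n) increases
# in n toward E[f(Y)] with Y ~ Poisson(mu). Parameters are illustrative.
from math import comb, exp

def f(x, beta=2.0):
    return max(x - beta, 0.0)

def binom_expectation(n, mu):
    p = mu / n
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) * f(k)
               for k in range(n + 1))

def poisson_expectation(mu, cutoff=60):
    # Build the Poisson pmf iteratively to avoid huge factorials.
    pmf, total = exp(-mu), 0.0
    for k in range(cutoff):
        total += pmf * f(k)
        pmf *= mu / (k + 1)
    return total

mu = 1.5
values = [binom_expectation(n, mu) for n in (5, 10, 50, 200)]
assert all(a <= b for a, b in zip(values, values[1:]))  # increasing in n
assert values[-1] <= poisson_expectation(mu)            # Poisson is the limit
```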
Proof. We first introduce some notation. Let µ ≜ E[D], and denote the upper conditional mean µ↑ ≜ E[D | D ≥ κ] and the quantity µ↓ ≜ E[D1(D < κ)]; thus, µ = µ↑P(D ≥ κ) + µ↓.
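A toy numeric check of this decomposition may be helpful (the discrete distribution and threshold below are made up purely for illustration):

```python
# Verify mu = mu_up * P(D >= kappa) + mu_down, where mu_up = E[D | D >= kappa]
# and mu_down = E[D * 1(D < kappa)], for a small made-up discrete distribution.
vals  = [0.0, 0.5, 1.0, 2.0, 3.0]
probs = [0.1, 0.2, 0.3, 0.25, 0.15]
kappa = 1.0

mu      = sum(v * p for v, p in zip(vals, probs))
p_tail  = sum(p for v, p in zip(vals, probs) if v >= kappa)
mu_up   = sum(v * p for v, p in zip(vals, probs) if v >= kappa) / p_tail
mu_down = sum(v * p for v, p in zip(vals, probs) if v < kappa)

assert abs(mu - (mu_up * p_tail + mu_down)) < 1e-12
```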
We will manipulate the parameters of this function G to show that G(µ↑, µ↓) ≤ G(κ, µ↓), allowing us to conclude that P(D ≥ κ) ≤ G(κ, µ↓). To do this, we examine two cases: β ≥ 0 and β < 0. In the case where β ≥ 0, we first exhibit a continuous path in (u↑, u↓) space that preserves the value of G and goes from (µ↑, µ↓) to (κ, t) for some t ≤ µ↓. Then, we apply the fact that G(κ, ·) is weakly increasing to show G(µ↑, µ↓) ≤ G(κ, µ↓). In the case where β < 0, we instead show directly that G(·, µ↓) is decreasing.
Assume that β ≥ 0. Observe that for any fixed u↑ ∈ (β, µ↑], the function G(u↑, ·) is continuous across all inputs u↓ such that
u↑P(D ≥ κ) + u↓ ∈ [0, n].
In other words, G(u↑, ·) is continuous on the closed interval [−u↑P(D ≥ κ), n − u↑P(D ≥ κ)].
Note that the value u↓ = (µ↑ − u↑)P(D ≥ κ) + µ↓ lies in this interval, which we briefly verify by checking that
u↑P(D ≥ κ) + (µ↑ − u↑)P(D ≥ κ) + µ↓ = µ↑P(D ≥ κ) + µ↓ = µ ≜ E[D] ∈ [0, n].
It follows that since G(u↑, ·) is continuous on the interval [−u↑P(D ≥ κ), n − u↑P(D ≥ κ)], it is also continuous on the subinterval [−u↑P(D ≥ κ), (µ↑ − u↑)P(D ≥ κ) + µ↓].
We now observe that when u↓ equals the endpoints of this interval, −u↑P(D ≥ κ) and (µ↑ − u↑)P(D ≥ κ) + µ↓, we obtain values of G that are lower and higher than G(µ↑, µ↓), respectively.
First, when u↓ = −u↑P(D ≥ κ), we have
Therefore, by the intermediate value theorem, there always exists some u↓ within the interval [−u↑P(D ≥ κ), (µ↑ − u↑)P(D ≥ κ) + µ↓] such that G(u↑, u↓) exactly equals G(µ↑, µ↓). We hereafter write u↓ as a function of u↑ so as to always select a u↓ where this equality holds:
G(u↑, u↓(u↑)) = G(µ↑, µ↓).
By definition, varying u↑ keeps the left-hand side of the above equality constant. Thus, if we take the total derivative of G with respect to u↑, it must always equal zero.
We will use this fact to demonstrate that u↓(u↑) is weakly increasing in u↑. To do so, we compute the total derivative as follows:
Observe that the two terms on the right-hand side can be simplified, respectively, as
and
Thus, we have
Lastly, note that G(u↑, ·) is a weakly increasing function, as the numerator of
lets us conclude that
for all u↑ ∈ (κ, µ↑]. Recall additionally that u↓(u↑) ∈ [−u↑P(D ≥ κ), (µ↑ − u↑)P(D ≥ κ) + µ↓], which provides an upper bound on u↓(µ↑):
u↓(µ↑) ≤ (µ↑ − µ↑)P(D ≥ κ) + µ↓ = µ↓.
Hence, u↓(u↑) ≤ µ↓ for all u↑ ∈ (κ, µ↑]. Furthermore, from the facts that G(u↑, u↓(u↑)) = G(µ↑, µ↓) for all u↑ ∈ (κ, µ↑] and that G(u↑, ·) is weakly increasing, we can deduce that
with the bound becoming infinitely weak when κ = β and τ > 0. We observe that the right-hand side can be written as an expectation with respect to a binomial distribution with mean κτ, and so we have
We now examine the case where β < 0. Let us fix the parameter u↓ = µ↓ and observe what happens as we vary the parameter u↑. We first note that since µ↓ ≥ 0, the inequalities
µ ≥ u↑P(D ≥ κ) + µ↓ ≥ 0
hold for any u↑ ∈ [0, µ↑], and so the summation
in the numerator of G(u↑, µ↓) is an expectation of max(i − β, 0) with respect to a binomial distribution with mean u↑P(D ≥ κ) + µ↓. Additionally, if β < 0, then max(i − β, 0) = i − β for all i ∈ {0, 1, ..., n}. Thus, we compute that
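This collapse of the maximum can be checked numerically (a sketch, not from the paper; the Binomial(10, 0.3) distribution and β = −0.7 are illustrative choices): for β < 0, the expectation of max(i − β, 0) over any distribution on {0, 1, ..., n} is simply the mean minus β.

```python
# For beta < 0 and X supported on {0, 1, ..., n}, max(i - beta, 0) = i - beta
# for every i, so E[max(X - beta, 0)] = E[X] - beta. Checked on a binomial.
from math import comb

n, p, beta = 10, 0.3, -0.7
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
lhs = sum(pmf[k] * max(k - beta, 0) for k in range(n + 1))
rhs = n * p - beta  # binomial mean minus beta
assert abs(lhs - rhs) < 1e-12
```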
Taking the partial derivative of G with respect to u↑, we obtain
Authors:
(1) Aadityan Ganesh, Princeton University ([email protected]);
(2) Jason Hartline, Northwestern University ([email protected]);
(3) Atanu R Sinha, Adobe Research ([email protected]);
(4) Matthew vonAllmen, Northwestern University ([email protected]).