Entanglement is Necessary for Optimal Quantum Property Testing
Abstract
There has been a surge of progress in recent years in developing algorithms for testing and learning quantum states that achieve optimal copy complexity [o2015quantum, o2016efficient, haah2017sample, o2017efficient, acharya2019measuring, buadescu2019quantum]. Unfortunately, they require the use of entangled measurements across many copies of the underlying state and thus remain outside the realm of what is currently experimentally feasible. A natural question is whether one can match the copy complexity of such algorithms using only independent—but possibly adaptively chosen—measurements on individual copies.
We answer this in the negative for arguably the most basic quantum testing problem: deciding whether a given d-dimensional quantum state is equal to the maximally mixed state, or else ε-far from it in trace distance. While it is known how to achieve optimal copy complexity O(d/ε²) using entangled measurements, we show that with independent measurements, Ω(d^{4/3}/ε²) copies are necessary, even if the measurements are chosen adaptively. This resolves a question posed in [wright2016learn]. To obtain this lower bound, we develop several new techniques, including a chain-rule-style proof of Paninski’s lower bound for classical uniformity testing, which may be of independent interest.
1 Introduction
This paper considers the problem of quantum state certification. Here, we are given n copies of an unknown mixed state ρ and a description of a known mixed state σ, and our goal is to make measurements on these copies^{1}^{1}1Formally, a measurement is specified by a positive operator-valued measure (POVM), which is given by a set of positive semidefinite Hermitian matrices summing to the identity, and the probability of observing measurement outcome i is equal to tr(M_i ρ). See Definition 3.1 for details. and use the outcomes of these measurements to distinguish whether ρ = σ, or whether ρ is ε-far from σ in trace norm. An important special case of this is when σ is the maximally mixed state, in which case the problem is known as quantum mixedness testing.
This problem is motivated by the need to verify the output of quantum computations. In many applications, a quantum algorithm is designed to prepare some known d-dimensional mixed state σ. However, due to the possibility of noise or device defects, it is unclear whether the output state is truly equal to σ. Quantum state certification allows us to verify the correctness of the quantum algorithm. In addition to this more practical motivation, quantum state certification can be seen as the natural noncommutative analogue of identity testing of (classical) probability distributions, a well-studied problem in statistics and theoretical computer science.
Recently, [o2015quantum] demonstrated that Θ(d/ε²) copies are necessary and sufficient to solve quantum mixedness testing with good confidence. Subsequently, [buadescu2019quantum] demonstrated that the same copy complexity suffices for quantum state certification. Note that these copy complexities are sublinear in the d² parameters defining ρ, and in particular, are less than the Θ(d²/ε²) copies necessary to learn ρ to error ε in trace norm [o2016efficient, haah2017sample].
To achieve these copy complexities, the algorithms in [o2015quantum, buadescu2019quantum] heavily rely on entangled measurements. These powerful measurements allow them to leverage the representation-theoretic structure of the underlying problem to dramatically decrease the copy complexity. However, this power comes with some tradeoffs. Entangled measurements require that all n copies of ρ are measured simultaneously. Thus, all n copies of ρ must be kept in quantum memory without any of them decohering. Additionally, the positive operator-valued measure (POVM) elements that formally define the quantum measurement must all act on the full n-copy space; in particular, the size of the POVM elements scales exponentially with n. Both of these issues are problematic for using any of these algorithms in practice [cotler2020quantum]. Entangled measurements are also necessary for the only known sample-optimal algorithms for quantum tomography [o2016efficient, haah2017sample, o2017efficient].
This leads to the question: can these sample complexities be achieved using weaker forms of measurement? There are two natural classes of such restricted measurements to consider:

an (unentangled) nonadaptive measurement fixes n POVMs ahead of time, measures each copy of ρ using one of these POVMs, then uses the results to make its decision.

an (unentangled) adaptive measurement measures each copy of ρ sequentially, and can potentially choose its next POVM based on the outcomes of the previous measurements.
It is clear that arbitrarily entangled measurements are strictly more general than adaptive measurements, which are in turn strictly more general than nonadaptive ones. However, both nonadaptive and adaptive measurements have the advantage that the quantum memory they require is substantially smaller than what is required for a generic entangled measurement. In particular, only one copy of ρ need be prepared at any given time, as opposed to the n copies that must be created simultaneously if we use general entangled measurements.
Separating the power of entangled vs. nonentangled measurements for such quantum learning and testing tasks was posed as an open problem in [wright2016learn]. In this paper, we demonstrate the first such separations for quantum state certification, and to our knowledge, the first separation between adaptive measurements and entangled measurements without any additional assumptions on the measurements, for any quantum estimation task.
We first show a sharp characterization of the copy complexity of quantum mixedness testing with nonadaptive measurements: {theorem} If only unentangled, nonadaptive measurements are used, Θ(d^{3/2}/ε²) copies are necessary and sufficient to distinguish whether ρ is the maximally mixed state, or whether ρ has trace distance at least ε from the maximally mixed state, with probability at least 2/3. Second, we show that Ω(d^{4/3}/ε²) copies are necessary, even with adaptive measurements. We view this as our main technical contribution. Formally: {theorem} If only unentangled, possibly adaptive, measurements are used, Ω(d^{4/3}/ε²) copies are necessary to distinguish whether ρ is the maximally mixed state, or has trace distance at least ε from the maximally mixed state, with probability at least 2/3. As quantum state certification is a strict generalization of mixedness testing, Theorems 1 and 2 also immediately imply separations for that problem as well. Note that the constant 2/3 in the above theorem statements is arbitrary and can be replaced with any constant greater than 1/2. We also remark that our lower bounds make no assumptions on the number of outcomes of the POVMs used, which can be infinite (see Definition 3.1).
1.1 Overview of our techniques
In this section, we give a highlevel description of our techniques. We start with the lower bounds.
“Lifting” classical lower bounds to quantum ones
Our lower bound instance can be thought of as the natural quantum analogue of Paninski’s for (classical) uniformity testing: {theorem}[Theorem 4, [paninski2008coincidence]] Ω(√d/ε²) samples are necessary to distinguish whether a distribution over [d] is uniform or ε-far from the uniform distribution in total variation distance, with confidence at least 2/3. At a high level, Paninski demonstrates that it is statistically impossible to distinguish between the distribution of n independent draws from the uniform distribution, and the distribution of n independent draws from a random perturbation of the uniform distribution, in which the marginal probability of each element of [d] has been randomly perturbed by ±2ε/d (see Example 2).
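To make the perturbation concrete, here is a small Python sketch of a Paninski-style perturbed distribution. The function names and the exact ±2ε/d perturbation size are our illustrative choices; Paninski fixes the constants so that the result is ε-far from uniform in total variation.

```python
import random

def paninski_alternative(d, eps, z=None):
    """Build a Paninski-style perturbed distribution over [d].

    Elements are paired up; within pair i the probabilities are
    (1 + 2*eps*z_i)/d and (1 - 2*eps*z_i)/d for a random sign z_i,
    so each marginal is perturbed by +-2*eps/d.
    """
    assert d % 2 == 0
    if z is None:
        z = [random.choice([-1, 1]) for _ in range(d // 2)]
    p = []
    for zi in z:
        p.append((1 + 2 * eps * zi) / d)
        p.append((1 - 2 * eps * zi) / d)
    return p

def tv_from_uniform(p):
    # Total variation distance from the uniform distribution over [len(p)].
    d = len(p)
    return 0.5 * sum(abs(pi - 1.0 / d) for pi in p)
```

With this choice of constants, every perturbed distribution is exactly ε-far from uniform, independent of the sign pattern.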
The hard instance we consider can be viewed as the natural quantum analogue of Paninski’s construction. Roughly speaking, rather than simply perturbing the marginal probabilities of every element of [d], which corresponds to randomly perturbing the diagonal entries of the mixed state, we also randomly rotate it (see Construction 3.2). We note that this hard instance is not novel and has been considered before in similar settings [o2015quantum, wright2016learn, haah2017sample]. However, our analysis technique is quite different from previous bounds, especially in the adaptive setting.
The technical crux of Paninski’s lower bound is to upper bound the total variation distance between the null and alternative transcript distributions in terms of the χ² divergence between the two. This turns out to have a simple, explicit form, and can be calculated exactly. This works well because, conditioned on the choice of the random perturbation, both distributions have a product structure, as they consist of independent samples.
This product structure still holds true in the quantum case when we restrict to nonadaptive measurements. This allows us to do a more involved version of Paninski’s calculation in the quantum case and thus obtain the lower bound in Theorem 1.
However, this product structure breaks down completely in the adaptive setting, as now the POVMs, and hence the measurement outcomes that we observe for the t-th copy of ρ, can depend heavily on the previous outcomes. As a result, the χ² divergence between the analogous quantities no longer has a nice, closed form, and it is not clear how to proceed using Paninski’s style of argument.
Instead, inspired by the literature on bandit lower bounds [auer2002nonstochastic, bubeck2012regret], we upper bound the total variation distance between the two transcript distributions by their KL divergence. The primary advantage of doing so is that the KL divergence satisfies the chain rule. This allows us to partially disentangle how much information the t-th copy of ρ gives the algorithm, conditioned on the outcomes of the previous measurements.
At present, this chain-rule formulation of Paninski’s lower bound seems to be somewhat lossy. Even in the classical case, we need additional calculations tailored to Paninski’s instance to recover the Ω(√d/ε²) bound for uniformity testing (see Appendix B), without which our approach only obtains a weaker lower bound (see Section 5). At a high level, this appears to be why we do not obtain a lower bound of Ω(d^{3/2}/ε²) for adaptive measurements. We leave the question of closing this gap as an interesting future direction.
“Projecting” quantum upper bounds to classical ones
While the lower bound techniques we employ are motivated by the lower bounds for classical testing, they do not directly use any of those results. In contrast, to obtain our upper bounds, we demonstrate a direct reduction from nonadaptive mixedness testing to classical uniformity testing. The reduction is as follows. First, we choose a random orthonormal measurement basis. Measuring in this basis induces some distribution over [d]. If ρ is maximally mixed, this distribution is the uniform distribution. Otherwise, if ρ is far from maximally mixed, then by the same concentration of measure phenomena used in the proof of the lower bounds, with high probability this distribution will be Ω(ε/√d)-far from the uniform distribution in total variation distance. Thus, to distinguish these two cases, we can simply run a classical uniformity tester [chan2014optimal, diakonikolas2014testing, canonne2018testing]. See Appendix A for more details.
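The first step of the reduction, measuring in a random basis to induce a classical distribution, can be sketched in a few lines of NumPy. This is a minimal illustration under our own naming, not the paper's algorithm: we draw a Haar-random unitary via the QR decomposition and read off the induced outcome probabilities.

```python
import numpy as np

def haar_basis(d, rng):
    # QR of a complex Gaussian matrix yields a Haar-random unitary,
    # after correcting the phases of R's diagonal.
    g = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(g)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def induced_distribution(rho, basis):
    # Measuring rho in the orthonormal basis {v_j} (columns of `basis`)
    # yields outcome j with probability <v_j| rho |v_j>.
    return np.real(np.einsum('ij,jk,ki->i', basis.conj().T, rho, basis))
```

For the maximally mixed state the induced distribution is exactly uniform, so any deviation from uniformity detected by a classical tester certifies that ρ is not maximally mixed.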
Concentration of measure over the unitary group
In both our lower bounds and upper bounds, it will be crucial to carefully control the deviations of various functions of Haar-random unitary matrices. In fact, specializations of quantities we encounter have been extensively studied in the literature on quantum transport in mesoscopic systems, namely the conductance of a chaotic cavity [brouwer1996diagrammatic, beenakker1997random, blanter2000shot, khoruzhenko2009systematic, al2009statistics], though the tail bounds we need are not captured by these works (see Section 3.3 for more details). Instead, we will rely on more general tail bounds [meckes2013spectral] that follow from log-Sobolev inequalities on the unitary group U(d).
1.2 Related work
The literature on quantum (and classical) testing and learning is vast and we cannot hope to do it justice here; for conciseness we only discuss some of the more relevant works below.
Quantum state certification fits into the general framework of quantum state property testing problems. Here the goal is to infer nontrivial properties of the unknown quantum state, using fewer copies than are necessary to fully learn the state. See [montanaro2016survey] for a more complete survey on property testing of quantum states. Broadly speaking, there are two regimes studied here: the asymptotic regime and the nonasymptotic regime.
In the asymptotic regime, the goal is to precisely characterize the exponential rate at which the error probability decays as the number of copies grows, while d and ε are held fixed and relatively small. In this setting, quantum state certification is commonly referred to as quantum state discrimination. See e.g. [chefles2000quantum, audenaert2008asymptotic, barnett2009quantum] and references within. However, this allows for rates which could depend arbitrarily badly on the dimension.
In contrast, we work in the nonasymptotic regime, where the goal is to precisely characterize the copy complexity as a function of d and ε. The closest works to ours are arguably [o2015quantum] and [buadescu2019quantum]. The former demonstrated that the copy complexity of quantum mixedness testing is Θ(d/ε²), and the latter showed that quantum state certification has the same copy complexity. However, as described previously, the algorithms which achieve these copy complexities heavily rely on entangled measurements.
Another interesting line of work focuses on the case where the measurements are only allowed to be Pauli matrices [flammia2011direct, flammia2012quantum, da2011practical, aolita2015reliable]. Unfortunately, even for pure states, these algorithms require many more copies of ρ than are information-theoretically necessary without this restriction. We note in particular the paper [flammia2012quantum], which gives a lower bound on the copy complexity of the problem even when the Pauli measurements are allowed to be adaptively chosen. However, their techniques do not appear to generalize easily to arbitrary adaptive measurements.
A related task is that of quantum tomography, where the goal is to recover ρ, typically to good fidelity or low trace norm error. The paper [haah2017sample] showed that O(d²/ε² · log(d/ε)) copies suffice to learn ρ to trace error ε, and that Ω(d²/(ε² log(d/ε))) copies are necessary. Independently, [o2016efficient] improved the upper bound to O(d²/ε²). These papers, in addition to [o2017efficient], also discuss the case when ρ is low rank, where improved copy complexity scaling with the rank can be achieved. Notably, all the upper bounds that achieve the tight bound heavily require entanglement. In [haah2017sample], they also demonstrate a stronger lower bound when the measurements are nonadaptive. It is a very interesting question to understand the power of adaptive measurements for this problem as well.
Quantum state certification and quantum mixedness testing are the natural quantum analogues of classical identity testing and uniformity testing, respectively, which both fit into the general setting of (classical) distribution testing. There is again a vast literature on this topic; see e.g. [canonne2017survey, goldreich2017introduction] for a more extensive treatment of the topic. Besides the papers covered previously and in the surveys, we highlight a line of work on testing with conditional sampling oracles [canonne2015testing, chakraborty2016power, canonne2014testing, acharya2014chasm, bhattacharyya2018property, kamath2019anaconda], a classical model of sampling which also allows for adaptive queries. It would be interesting to see if the techniques we develop here can also be used to obtain stronger lower bounds in this setting. Adaptivity also plays a major role in property testing of functions [belovs2016polynomial, chen2017beyond, khot2016n, baleshzar2017optimal, chen2017boolean, belovs2018adaptive], although these problems appear to be technically unrelated to the ones we consider here.
1.3 Miscellaneous Notation
We gather here useful notation for the rest of the paper. Let [d] denote the set {1, …, d}. Given a finite set S, we will use x ∼ S to denote x sampled uniformly at random from S. Given two strings x and y, let x ∘ y denote their concatenation. Given t and a sequence x = (x_1, x_2, …), define x_{<t} ≜ (x_1, …, x_{t−1}); we will also sometimes refer to this as x^{<t}. Also, let x_{≤t} ≜ (x_1, …, x_t).
Given distributions p and q, the total variation distance between p and q is d_TV(p, q) ≜ sup_A |p(A) − q(A)|. If p is absolutely continuous with respect to q, let dp/dq denote the Radon-Nikodym derivative. The KL divergence between p and q is KL(p ∥ q) ≜ E_{x∼p}[log (dp/dq)(x)].
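For discrete distributions, the two divergences above reduce to finite sums; a short Python sketch (our own helper names, included only for concreteness):

```python
import math

def tv_distance(p, q):
    # d_TV(p, q) = (1/2) * sum_x |p(x) - q(x)| for discrete p, q.
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def kl_divergence(p, q):
    # KL(p || q) = sum_x p(x) * log(p(x)/q(x)); requires p << q.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

As a quick check of Pinsker's inequality, used repeatedly below, one can verify numerically that d_TV(p, q)² ≤ KL(p ∥ q)/2 on small examples.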
Let ∥·∥₁, ∥·∥_op, and ∥·∥_HS denote the trace, operator, and Hilbert-Schmidt norms, respectively. Let ρ_mm denote the maximally mixed state. Given a matrix , let . Given and with cycle decomposition , let .
Finally, throughout this work, we will freely abuse notation and use the same symbols to denote probability distributions, their laws, and their density functions.
Roadmap
The rest of the paper is organized as follows:

Section 2— We describe a generic setup that captures Paninski’s and our settings as special cases and provide an overview of the techniques needed to show lower bounds in this setup.

Section 3— We formalize the notion of quantum property testing via adaptive measurements, define our lower bound instance, and perform some preliminary calculations.

Appendix B— A more ad hoc chain rule proof of Paninski’s optimal lower bound.

Appendix C— Various helpful technical facts.
2 Lower Bound Strategies
The lower bounds we show in this work are lower bounds on the number of observations needed to distinguish between a simple null hypothesis and a mixture of alternatives. For instance, in the context of classical uniformity testing, the null hypothesis is that the underlying distribution is the uniform distribution over , and the mixture of alternatives considered in [paninski2008coincidence] is that the underlying distribution was drawn from a particular distribution over distributions which are far in total variation distance from the uniform distribution (see Example 2). In our setting, the null hypothesis is that the underlying state is the maximally mixed state , and the mixture of alternatives will be a particular distribution over quantum states which are far in trace distance from (see Construction 3.2).
Note that in order to obtain dimension-dependent lower bounds, as in classical uniformity testing, it is essential that the alternative hypothesis be a mixture. If the task were instead to distinguish whether the underlying state was the maximally mixed state or some specific alternative state ρ′, then by making independent measurements in the eigenbasis of ρ′, it would take only O(1/ε²) such measurements to tell apart the two scenarios.
For this reason we will be interested in the following abstraction which contains as special cases both Paninski’s lower bound instance for uniformity testing [paninski2008coincidence] and our lower bound instance for mixedness testing, and which itself is a special case of Le Cam’s twopoint method [lecam1973convergence]. We will do this in a few steps. First, we give a general formalism for what it means to perform possibly adaptive measurements: {definition}[Adaptive measurements] Given an underlying space , a natural number , and a (possibly infinite) universe of measurement outcomes, a measurement schedule using measurements is any (potentially random) algorithm which outputs , where each is a potentially random function. We say that is nonadaptive if the choice of is independent of the choice of for all , and we say is adaptive if the choice of depends only on the outcomes of for all . To instantiate this for the quantum setting, we let the underlying space be the set of mixed states, and we restrict the measurement functions to be (possibly adaptively chosen) POVMs. See Definition 3.1 for a formal definition. {definition} A distribution testing task is specified by two disjoint sets in . For any , and any measurement schedule , we say that solves the problem if there exists a (potentially random) postprocessing algorithm so that for any , if , then
where the measurement outcomes are generated by the schedule. For instance, to instantiate the quantum mixedness testing setting, we let the underlying space be the set of mixed states, we let the first set contain only the maximally mixed state, and we let the second set consist of the mixed states with trace distance at least ε from it. Note that the choice of 2/3 for the constant is arbitrary and can be replaced (up to constant factors in the copy complexity) with any constant strictly larger than 1/2. With this, we can now define our lower bound setup: {definition}[Lower Bound Setup: Simple Null vs. Mixture of Alternatives] In the setting of Definition 2, a distinguishing task is specified by a null object, a set of alternate objects parametrized by some index set, and a distribution over that index set.
For any measurement schedule which generates measurement functions f_1, …, f_n, let D_0 and D_1 be distributions over strings of outcomes, which we call transcripts of length n. The distribution D_0 corresponds to the transcript obtained by applying the schedule to the null object. The distribution D_1 corresponds to the transcript obtained by first drawing an alternate object according to the mixing distribution, and then applying the schedule to it. The following is a standard result which allows us to relate this back to property testing: {fact} Let P be a property, and let A be a class of measurement schedules using n measurements. Suppose that there exists a distinguishing task so that for every schedule in A, we have that d_TV(D_0, D_1) < 1/3. Then the distribution testing task cannot be solved with n samples by any algorithm in A. For the remainder of the paper, we will usually implicitly fix a measurement schedule, and just write D_0 and D_1. The properties that we assume (e.g. adaptive or nonadaptive) of this algorithm should be clear from context, if it is relevant.
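The mixture structure is worth emphasizing: conditioned on the perturbation the samples are i.i.d., but the unconditional alternative transcript distribution is not a product measure, because all n samples share the same hidden perturbation. A hypothetical sampling-oracle sketch in Python (names and constants are ours, not the paper's):

```python
import random

def sample_transcript_null(d, n, rng):
    # Null transcript: n i.i.d. draws from the uniform distribution over [d].
    return [rng.randrange(d) for _ in range(n)]

def sample_transcript_alternative(d, n, eps, rng):
    # Alternative transcript: first draw the shared perturbation z,
    # then n i.i.d. draws from the perturbed distribution p_z.
    z = [rng.choice([-1, 1]) for _ in range(d // 2)]
    weights = []
    for zi in z:
        weights += [1 + 2 * eps * zi, 1 - 2 * eps * zi]
    return rng.choices(range(d), weights=weights, k=n)
```

Averaging over z correlates the coordinates of the alternative transcript, which is exactly what makes the distinguishing task hard.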
We next define some important quantities which repeatedly arise in our calculations: {definition} In the setting of Definition 2, for any , define to be the respective conditional laws of the th entry, given preceding transcript . For any , let be the distribution over transcripts from independent observations from .
Assume additionally that are absolutely continuous with respect to , for every . Then, there will exist functions , such that for any , the RadonNikodym derivative satisfies
(1) 
We refer to the functions as likelihood ratio factors.
We emphasize that neither nor any of the alternatives is necessarily a product measure. Indeed, this is one of the crucial difficulties of proving lower bounds in the adaptive setting. In the nonadaptive setting, the picture of Definition 2 simplifies substantially:
[Nonadaptive Testing Lower Bound Setup] In this case, in the notation of Definition 2, the measurement schedule is nonadaptive, so and all are product measures. Consequently, the functions will depend only on and not on the particular transcript , so we will denote the functions by .
Paninski’s lower bound for classical uniformity testing [paninski2008coincidence] is an instance of the nonadaptive setup of Definition 2:
Let us first recall Paninski’s construction. Here the underlying space is the set of distributions over [d]. Uniformity testing is the property of being equal to the uniform distribution over [d]. In the classical “sampling oracle” model of distribution testing, the measurements simply take a distribution p and output an independent sample from p. In particular, the universe of measurement outcomes is [d].
To form Paninski’s lower bound instance, take the mixing distribution to be the uniform distribution over sign patterns z ∈ {±1}^{d/2}. Let the null hypothesis be the uniform distribution u over [d], and let the set of alternate hypotheses be {p_z}, where p_z is the distribution over [d] whose (2i−1)-st and 2i-th marginals are (1 + 2εz_i)/d and (1 − 2εz_i)/d, respectively. Clearly d_TV(p_z, u) = ε for all z.
There is obviously no adaptivity in what the tester does after seeing each new sample. So the family of likelihood ratio factors for which (1) holds is given by
(2) 
The definition of the mixing distribution in our proofs will be straightforward (see Construction 3.2), and by Fact 2, the key technical difficulty is to upper bound the total variation distance between D_0 and D_1 in terms of n. After recording some notation in Section 1.3, in Section 2.1 we overview our approach for doing so in the nonadaptive setting of Definition 2, and in Section 2.2 we describe our techniques for extending these bounds to the generic, adaptive setting of Definition 2.
2.1 NonAdaptive Lower Bounds
It is a standard trick to upper bound the total variation distance between two distributions in terms of the χ² divergence, which is often more amenable to calculation. These calculations are especially straightforward in the nonadaptive setting of Definition 2.
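In symbols, the standard chain of inequalities behind this trick reads as follows; this is a schematic rendering in our notation (D_0 for the null transcript distribution, D_1 for the alternative), not a display from the paper:

```latex
d_{\mathrm{TV}}(\mathcal{D}_0, \mathcal{D}_1)^2
  \;\le\; \tfrac{1}{2}\,\mathrm{KL}(\mathcal{D}_1 \,\|\, \mathcal{D}_0)
  \;\le\; \tfrac{1}{2}\log\bigl(1 + \chi^2(\mathcal{D}_1 \,\|\, \mathcal{D}_0)\bigr)
  \;\le\; \tfrac{1}{2}\,\chi^2(\mathcal{D}_1 \,\|\, \mathcal{D}_0),
\qquad
\chi^2(\mathcal{D}_1 \,\|\, \mathcal{D}_0)
  = \mathbb{E}_{x \sim \mathcal{D}_0}\!\left[\left(\frac{d\mathcal{D}_1}{d\mathcal{D}_0}(x)\right)^{\!2}\right] - 1.
```

The first inequality is Pinsker's; the point of passing to χ² is that the squared likelihood ratio of a mixture expands into an expectation over two independent copies of the mixing variable, which factorizes under the product structure.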
Let D_0 and D_1 be defined as in Definition 2, where the measurement schedule is nonadaptive. As D_0 is therefore a product measure, for every t denote its t-th marginal by D_{0,t}. Then
(3) 
Proof.
The first inequality is just Pinsker’s inequality combined with the fact that chi-squared divergence upper bounds KL divergence. For the latter inequality, it will be convenient to define
(4) 
Then for any , the product structure implies
(5) 
We then get that
(6)  
(7)  
(8) 
where the fourth step follows by (5), the last step follows by Hölder’s inequality, and the third step follows by the fact that for and any ,
(9) 
∎
The upshot of (8) is that the fluctuations of the quantities defined in (4), with respect to the randomness of the mixture, dictate how large n must be for D_0 and D_1 to be distinguishable.
Recalling (2), the quantities take a particularly nice form in Paninski’s setting. There we have
(10) 
Because this quantity is distributed as a shifted, rescaled binomial random variable, it has sub-Gaussian tails, implying that d_TV(D_0, D_1) = o(1) whenever n = o(√d/ε²). While this is not exactly how Paninski’s lower bound was originally proven, concentration of the binomial random variable lies at the heart of the lower bound and formalizes the usual intuition for the √d scaling in the lower bound: to tell whether a distribution is far from uniform, it is necessary to draw Ω(√d) samples just to see some element of [d] appear twice.
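The birthday-paradox intuition in the last sentence can be checked exactly: the probability that n uniform samples from [d] are all distinct stays near 1 for n ≪ √d and decays rapidly beyond that. A small sketch (our own helper, included only to illustrate the √d threshold):

```python
def no_collision_probability(d, n):
    # Exact probability that n i.i.d. uniform samples from [d] are all
    # distinct: prod_{k=0}^{n-1} (d - k)/d, the "birthday" computation.
    p = 1.0
    for k in range(n):
        p *= (d - k) / d
    return p
```

For d = 10000, collisions are essentially absent at n = 10 but near-certain at n = 300, bracketing the √d ≈ 100 threshold.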
2.2 Adaptive Lower Bounds
As was discussed previously and is evident from the proof of Lemma 3, the lack of product structure for D_0 and D_1 in the adaptive setting of Definition 2 makes it infeasible to directly estimate the χ² divergence. Inspired by the literature on bandit lower bounds [auer2002nonstochastic, bubeck2012regret], we instead upper bound the KL divergence between D_0 and D_1, for which we can appeal to the chain rule to tame the extra power afforded by adaptivity. To handle the mixture structure of D_1, we will upper bound each of the resulting conditional KL divergence terms by their corresponding conditional χ² divergence.
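Concretely, the decomposition we appeal to has the following shape, writing D_{b,t}(· | x_{<t}) for the conditional law of the t-th outcome given the preceding transcript; this is our schematic rendering of the chain-rule step, not the paper's exact display:

```latex
\mathrm{KL}(\mathcal{D}_0 \,\|\, \mathcal{D}_1)
  \;=\; \sum_{t=1}^{n} \mathbb{E}_{x_{<t} \sim \mathcal{D}_0}
      \Bigl[ \mathrm{KL}\bigl(\mathcal{D}_{0,t}(\cdot \mid x_{<t}) \,\big\|\, \mathcal{D}_{1,t}(\cdot \mid x_{<t})\bigr) \Bigr]
  \;\le\; \sum_{t=1}^{n} \mathbb{E}_{x_{<t} \sim \mathcal{D}_0}
      \Bigl[ \chi^2\bigl(\mathcal{D}_{0,t}(\cdot \mid x_{<t}) \,\big\|\, \mathcal{D}_{1,t}(\cdot \mid x_{<t})\bigr) \Bigr].
```

Each summand isolates the information revealed by the t-th measurement given the history, which is precisely what lets the argument accommodate adaptively chosen POVMs.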
First, we introduce some notation essential to the calculations in this work.
[Key Quantities] In the generic setup of Definition 2, for any , define
(11) 
The following is a key technical ingredient of this work.
Let be defined as in Definition 2. Then
(12) 
Proof.
The first inequality is Pinsker’s. For the second, by the chain rule for KL divergence and the fact that chi-squared divergence upper bounds KL divergence, the KL divergence between D_0 and D_1 can be written as
(13) 
By definition, the conditional densities satisfy
(14) 
Therefore, we have:
(15)  
(16)  
(17) 
where the first step follows by (14) and the third step follows by a change of measure in the outer expectation.
3 Unentangled Measurements and Lower Bound Instance
In this section we provide some preliminary notions and calculations that are essential to understanding the proofs of Theorems 1 and 2. We first formalize the notion of quantum property testing with unentangled, possibly adaptive measurements in Section 3.1. Then in Section 3.2, we give our lower bound construction and instantiate it in the generic setup of Definition 2. Finally, in Section 3.3, we give some intuition for some of the key quantities that arise.
3.1 Testing with Unentangled Measurements
We first formally define the notion of a POVM with possibly infinite outcome set.
Given space with Borel σ-algebra , let be a regular positive real-valued measure on , and let be a measurable function taking values in the set of psd Hermitian matrices. We will denote the image of under by .
We say that the pair specifies a POVM if and, for any density matrix , the map for specifies a probability measure over . We call the distribution given by this measure the distribution over outcomes from measuring with .^{2}^{2}2This definition looks different from standard ones because we are implicitly invoking the Radon-Nikodym theorem for POVMs on finite-dimensional Hilbert spaces, see e.g. Theorem 3 from [moran2013positive] or Lemma 11 from [chiribella2010barycentric].
Given a POVM , we will refer to the space of measurement outcomes as .
With no meaningful loss in understanding, the reader may simply imagine that all POVMs mentioned henceforth have finitely many outcomes, so that a POVM is simply the data of some finite set of positive semidefinite Hermitian matrices summing to the identity, though our arguments extend to the full generality of Definition 3.1.
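For the finite-outcome picture, the Born-rule sampling implicit in the definition can be sketched directly (a minimal illustration with our own helper names, not the paper's formalism):

```python
import numpy as np

def outcome_probabilities(rho, povm):
    # Born rule: Pr[outcome i] = tr(M_i rho), for a finite POVM given as
    # a list of psd Hermitian matrices summing to the identity.
    probs = np.array([np.real(np.trace(m @ rho)) for m in povm])
    return probs / probs.sum()  # renormalize to guard against round-off

def measure(rho, povm, rng):
    # Sample a single measurement outcome index.
    return rng.choice(len(povm), p=outcome_probabilities(rho, povm))
```

An unentangled schedule then simply calls `measure` once per copy of the state, with the POVM possibly depending on earlier outcomes in the adaptive case.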
Let . An unentangled, possibly adaptive POVM schedule is a type of measurement schedule specified by a (possibly infinite) collection of POVMs where , and for every , denotes the set of all possible transcripts of measurement outcomes for which for all (recall that ). The schedule works in the natural manner: at time for , given a transcript , it measures the th copy of using the POVM .
If in addition the resulting schedule is also a nonadaptive measurement schedule, we say it is an unentangled, nonadaptive POVM schedule.
3.2 Lower Bound Instance
Let 𝐔 denote the Haar measure over the unitary group U(d). In place of the mixture index z from Definition 2, we will denote elements from U(d) by U. Expectations and probabilities will be with respect to the Haar measure unless otherwise specified.
Let Δ denote the diagonal matrix whose first d/2 diagonal entries are equal to 2ε/d, and whose last d/2 diagonal entries are equal to −2ε/d. Let σ denote the maximally mixed state perturbed by Δ. For U ∈ U(d), let σ_U denote the conjugation of σ by U.
Our lower bound instance will be the distribution over densities σ_U for Haar-random U. We remark that this instance, the quantum analogue of Paninski’s lower bound instance [paninski2008coincidence] for classical uniformity testing, has appeared in various forms throughout the quantum learning and testing literature [o2015quantum, wright2016learn, haah2017sample].
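A small NumPy sketch of the hard instance follows. The ±2ε/d perturbation size is our reading of the construction (it makes σ_U exactly ε-far from the maximally mixed state in trace distance); the helper names are ours.

```python
import numpy as np

def haar_unitary(d, rng):
    # Haar-random unitary via QR with phase correction.
    g = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(g)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def hard_instance(d, eps, rng):
    # sigma_U = U diag((1+2*eps)/d, ..., (1-2*eps)/d) U^dagger:
    # a randomly rotated, Paninski-style perturbation of I/d.
    diag = np.array([1 + 2 * eps] * (d // 2) + [1 - 2 * eps] * (d // 2)) / d
    u = haar_unitary(d, rng)
    return u @ np.diag(diag).astype(complex) @ u.conj().T
```

With these constants, every σ_U has unit trace and trace distance exactly ε from I/d, regardless of the rotation U.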
Given , define and . Take any POVM schedule . Given , define and to be the distribution over the measurement outcomes when the first steps of these POVM schedules are applied to the first parts of and respectively. Equivalently, can be regarded as the distribution over sequences of measurement outcomes arising from first sampling according to the Haar measure and then applying the first steps of POVM schedule to copies of .
For any POVM , define
(22) 
is absolutely continuous with respect to , and the family of likelihood ratio factors for which (1) holds for and defined in Construction 3.2 is given by .
Proof.
By taking a disjoint union over for all and transcripts , we can assume without loss of generality that there is some space for which is a subspace of for every . For the product space , equip the t-th factor with the σ-algebra given by the join of all σ-algebras associated to for transcripts of length .
Then the measures in Definition 3.1 for all POVMs induce a measure over . Moreover, by definition, and correspond to probability measures over which are absolutely continuous with respect to .
Because for any nonzero psd Hermitian matrix , absolute continuity of with respect to follows immediately.
By the chain rule for RadonNikodym derivatives, we conclude that
(23) 
as claimed. ∎
For any , the quantities and are given by (11). Given a POVM , also define in the obvious way. Lastly, we record the following basic facts:
For any POVM ,

[label=()]

for any .

For any measurement outcome and , and thus .
3.3 Intuition for
Recall from Example 2.1 that for classical uniformity testing, , and by Lemma 3, the fluctuations of as a random variable in precisely dictate the sample complexity of uniformity testing.
One should therefore think of the distribution of the quantity as a “quantum analogue” of the binomial distribution whose fluctuations are closely related to the scaling of the copy complexity of mixedness testing.
As we will show in Theorem 26, this quantity has small fluctuations and concentrates well, from which it will follow by integration by parts that the two transcript distributions remain indistinguishable for n as large as Ω(d^{4/3}/ε²), yielding the lower bound of Theorem 1.
To get some intuition for where these fluctuations come from, suppose were the orthogonal POVM given by the standard basis. Then
(24) 
where
(25) 
For any fixed , are independent random unit vectors, and the variance of is (see Fact 74). If were all independent, then would thus have variance , suggesting fluctuations as claimed. Of course we do not actually have this independence assumption; in addition, the other key technical challenges we must face to get Theorem 26 are 1) to go beyond just a second moment bound and show sufficiently strong concentration of , and 2) to show this is the case for all POVMs. We do this in Section 7.
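As a sanity check on the kind of quantity involved: for independent random unit vectors u, v in C^d, the overlap |⟨u, v⟩|² has mean 1/d and variance of order 1/d² (it is Beta(1, d−1)-distributed). A quick Monte Carlo sketch with our own parameter choices, not the paper's calculation:

```python
import numpy as np

def overlap_moments(d, trials, rng):
    # Sample pairs of independent uniformly random unit vectors in C^d
    # and record |<u, v>|^2, whose mean is 1/d and variance is Theta(1/d^2).
    vals = []
    for _ in range(trials):
        u = rng.standard_normal(d) + 1j * rng.standard_normal(d)
        v = rng.standard_normal(d) + 1j * rng.standard_normal(d)
        u /= np.linalg.norm(u)
        v /= np.linalg.norm(v)
        vals.append(abs(np.vdot(u, v)) ** 2)
    vals = np.array(vals)
    return vals.mean(), vals.var()
```

Summing d such terms with variance Θ(1/d²) is what suggests the overall fluctuation scale discussed above, modulo the independence caveat in the text.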
4 Proof of NonAdaptive Lower Bound
In this section we prove Theorem 1 by applying Lemma 3; the technical crux of the proof (and of our proof of Theorem 1 in the next section) is the following tail bound, whose proof we defer to Section 7:
[] Fix any POVM . There exists an absolute constant such that for any , we have
(26) 
Proof of Theorem 1.
By Fact 2, it suffices to show that no nonadaptive POVM schedule can solve the distinguishing task given by Construction 3.2 unless n = Ω(d^{3/2}/ε²). For a nonadaptive POVM schedule , let denote the sequence of POVMs that are used. Recalling (22), the likelihood ratio factors for which (1) holds in the nonadaptive setting of Definition 2 are given by . Similarly, denote by .
By Lemma 3, we have