MUTSUO YANASE SIGNED MANUSCRIPT: 18 PAGES OF HIS WORK WITH HUZIHIRO ARAKI, WITH HIS HANDWRITTEN INK CORRECTIONS BEFORE PUBLICATION IN 1960.

ON THE MEASUREMENT OF QUANTUM MECHANICAL OPERATORS

HIS SIGNATURE IS EXTREMELY SCARCE. 

Mutsuo Yanase
Physicist
Born: January 19, 1922, Osaka, Japan
Died: December 7, 2008
Education: The University of Tokyo


Measurement of Quantum Mechanical Operators
Huzihiro Araki and Mutsuo M. Yanase
Phys. Rev. 120, 622 – Published 15 October 1960
ABSTRACT
The limitation on the measurement of an operator imposed by the presence of a conservation law is studied. It is shown that an operator which does not commute with a conserved (additive) quantity cannot be measured exactly (in the sense of von Neumann). It is also shown for a simple case that an approximate measurement of such an operator is possible to any desired accuracy.



HE IS KNOWN FOR A MAJOR RESULT IN QUANTUM MECHANICS:

The Wigner–Araki–Yanase theorem, also known as the WAY theorem, is a result in quantum physics establishing that the presence of a conservation law limits the accuracy with which observables that fail to commute with the conserved quantity can be measured. It is named for the physicists Eugene Wigner, Huzihiro Araki and Mutsuo Yanase.

The theorem can be illustrated with a particle coupled to a measuring apparatus. If the position operator of the particle is q and its momentum operator is p, and if the position and momentum of the apparatus are Q and P respectively, assuming that the total momentum p + P is conserved implies that, in a suitably quantified sense, the particle's position itself cannot be measured. The measurable quantity is its position relative to the measuring apparatus, represented by the operator q - Q. The Wigner–Araki–Yanase theorem generalizes this to the case of two arbitrary observables A and B for the system and an observable C for the apparatus, satisfying the condition that B + C is conserved.


The Wigner–Araki–Yanase (WAY) theorem establishes an important constraint that conservation laws impose on quantum mechanical measurements. We formulate the WAY theorem in the broader context of resource theories, where one is constrained to a subset of quantum mechanical operations described by a symmetry group. Establishing connections with the theory of quantum state discrimination we obtain optimal unitaries describing the measurement of arbitrary observables, explain how prior information can permit perfect measurements that circumvent the WAY constraint, and provide a framework that establishes a natural ordering on measurement apparatuses through a decomposition into asymmetry and charge subsystems.

This Week's Finds in Mathematical Physics (Week 33)
John Baez
With tremendous relief, I have finished writing a book, and will return to putting out This Week's Finds on a roughly weekly basis. Let me briefly describe my book, which took so much more work than I had expected... and then let me start catching up on listing some of the stuff that's cluttering my desk!

1) Gauge Fields, Knots and Gravity, by John Baez and Javier de Muniain, World Scientific Press, to appear in summer 1994.

This book is based on a seminar I taught in 1992-93. We start out assuming the reader is familiar with basic stuff - Maxwell's equations, special relativity, linear algebra and calculus of several variables - and try to prepare the reader to understand recent work on quantum gravity and its relation to knot theory. It proved difficult to do this well in a mere 460 pages. Lots of tantalizing loose ends are left dangling. However, there are copious references so that the reader can pursue various subjects further.

Part 1.    Electromagnetism 

Chapter 1. Maxwell's Equations 
Chapter 2. Manifolds
Chapter 3. Vector Fields
Chapter 4. Differential Forms
Chapter 5. Rewriting Maxwell's Equations
Chapter 6. DeRham Theory in Electromagnetism

Part 2.    Gauge Fields

Chapter 1. Symmetry
Chapter 2. Bundles and Connections
Chapter 3. Curvature and the Yang-Mills Equations
Chapter 4. Chern-Simons Theory
Chapter 5. Link Invariants from Gauge Theory

Part 3.    Gravity

Chapter 1. Semi-Riemannian Geometry
Chapter 2. Einstein's Equations
Chapter 3. Lagrangians for General Relativity
Chapter 4. The ADM Formalism
Chapter 5. The New Variables

2) Quantum Theory: Concepts and Methods, by Asher Peres, Kluwer Academic Publishers, 1994.

As Peres notes, there are many books that teach students how to solve quantum mechanics problems, but not many that tackle the conceptual puzzles that fascinate those interested in the foundations of the subject. His book aims to fill this gap. Of course, it's impossible not to annoy people when writing about something so controversial; for example, fans of Everett will be distressed that Peres' book contains only a brief section on "Everett's interpretation and other bizarre interpretations". However, the book is clear-headed and discusses a lot of interesting topics, so everyone should take a look at it.

Schroedinger's cat, Bell's inequality and Wigner's friend are old chestnuts that everyone puzzling over quantum theory has seen, but there are plenty of popular new chestnuts in this book too, like "quantum cryptography", "quantum teleportation", and the "quantum Zeno effect", all of which would send shivers up and down Einstein's spine. There are also a lot of gems that I hadn't seen, like the Wigner-Araki-Yanase theorem. Let me discuss this theorem a bit.

Roughly, the WAY theorem states that it is impossible to measure an operator that fails to commute with an additive conserved quantity. Let me give an example to clarify this and then give the proof. Say we have a particle with position q and momentum p, and a measuring apparatus with position Q and momentum P. Let's suppose that the total momentum p + P is conserved - which will typically be the case if we count as part of the "apparatus" everything that exerts a force on the particle. Then as a consequence of the WAY theorem we can see that (in a certain sense) it is impossible to measure the particle's position q; all we can measure is its position relative to the apparatus, q - Q.

Of course, whenever a "physics theorem" states that something is impossible one must peer into it and determine the exact assumptions and the exact result! Lots of people have gotten in trouble by citing theorems that seem to show something is impossible without reading the fine print. So let's see what the WAY theorem really says!

It assumes that the Hilbert space for the system is the tensor product of the Hilbert space for the thing being observed - for short, let's call it the "particle" - and the Hilbert space for the measuring apparatus. Assume also that A and B are two observables belonging to the observed system, while C is an observable belonging to the measuring apparatus; suppose that B + C is conserved, and let's try to show that we can only measure A if it commutes with B. (Our assumptions automatically imply that A commutes with C, by the way.)

So, what do we mean when we speak of "measuring A"? Well, there are various things one might mean. The simplest is that if we start the combined system in some tensor product state u(i) ⊗ v, where v is the "waiting and ready" state of the apparatus and u(i) is a state of the observed system that's an eigenvector of A:

Au(i) = a(i)u(i),

then the unitary operator U corresponding to time evolution does the following:

U(u(i) ⊗ v) = u(i) ⊗ v(i)

where the state v(i) of the apparatus is one in which it can be said to have measured the observable A to have value a(i). E.g., the apparatus might have a dial on it, and in the state v(i) the dial reads "a(i)". Of course, we are really only justified in saying a measurement has occurred if the states v(i) are distinct for different values of i.

Note: here the WAY theorem seems to be restricting itself to nondestructive measurements, since the observed system is remaining in the state u(i). If you go through the proof you can see to what extent this is crucial, and how one might modify the theorem if this is not the case.
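
To make the definition concrete, here is a minimal NumPy sketch (my illustration, not part of Baez's post): a CNOT-type interaction realizes exactly this kind of von Neumann measurement of A = sigma_z on a qubit, with a qubit apparatus prepared in the "waiting and ready" state v = |0> and pointer states v(0) = |0>, v(1) = |1>. No conservation law is imposed here; the point is only to show what U(u(i) ⊗ v) = u(i) ⊗ v(i) looks like.

    import numpy as np

    # CNOT with the system as control and the apparatus as target, in the
    # basis |system, apparatus> = |00>, |01>, |10>, |11>.
    U = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, 1, 0]], dtype=complex)

    basis = np.eye(2, dtype=complex)
    v = basis[0]                          # apparatus "waiting and ready" state |0>

    for i in (0, 1):
        u_i = basis[i]                    # eigenvector of sigma_z with eigenvalue (-1)^i
        out = U @ np.kron(u_i, v)
        expected = np.kron(u_i, basis[i]) # u(i) (x) v(i), with distinct pointer states
        print(i, np.allclose(out, expected))   # True, True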

Okay, we have to show that we can only "measure A" in this sense if A commutes with B. We are assuming that B + C is conserved, i.e.,

U*(B + C)U = B + C.

First note that

<u(i), [A,B] u(j)> = (a(i) - a(j)) <u(i), Bu(j)>.

On the other hand, since A and B only act on the Hilbert space for the particle, we also have

<u(i), [A,B] u(j)> = <u(i) ⊗ v, [A,B] u(j) ⊗ v> 

                   = <u(i) ⊗ v, [A,B+C] u(j) ⊗ v>

                   = (a(i) - a(j))  <u(i) ⊗ v, (B+C) u(j) ⊗ v>.
It follows that if a(i) - a(j) isn't zero,

<u(i), Bu(j)> = <u(i) ⊗ v, (B+C) u(j) ⊗ v>

              = <u(i) ⊗ v, U*(B + C)U u(j) ⊗ v>

              = <u(i) ⊗ v(i), (B + C) u(j) ⊗ v(j)> 

              = <u(i), Bu(j)> <v(i), v(j)> + <u(i), u(j)> <v(i), C v(j)>
but the second term vanishes since u(i) are a basis of eigenvectors and u(i) and u(j) correspond to different eigenvalues, so

<u(i), Bu(j)> = <u(i), Bu(j)> <v(i), v(j)>

which means that either <v(i), v(j)> = 1, hence v(i) = v(j) (since they are unit vectors), so that no measurement has really been done, OR that <u(i), B u(j)> = 0, which means (if true for all i,j) that A commutes with B.

So, we have proved the result, using one extra assumption that I didn't mention at the start, namely that the eigenvalues a(i) are distinct.
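
As a sanity check on the algebra (my addition, not Baez's), the key identity <u(i), [A,B] u(j)> = (a(i) - a(j)) <u(i), B u(j)> is easy to verify numerically for random Hermitian matrices:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4

    def random_hermitian(n):
        M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        return (M + M.conj().T) / 2

    A = random_hermitian(n)          # generically has distinct eigenvalues
    B = random_hermitian(n)

    a, U = np.linalg.eigh(A)         # columns of U are the eigenvectors u(i)
    comm = A @ B - B @ A             # [A, B]

    for i in range(n):
        for j in range(n):
            lhs = U[:, i].conj() @ comm @ U[:, j]                 # <u(i), [A,B] u(j)>
            rhs = (a[i] - a[j]) * (U[:, i].conj() @ B @ U[:, j])  # (a(i)-a(j)) <u(i), B u(j)>
            assert np.isclose(lhs, rhs)

    print("Identity <u(i),[A,B]u(j)> = (a(i)-a(j))<u(i),Bu(j)> verified.")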

I can't say that I really understand the argument, although it's easy enough to follow the math. I will have to ponder it more, but it is rather interesting, because it makes more precise (and general) the familiar notion that one can't measure absolute positions, due to the translation-invariance of the laws of physics; this translation invariance is of course what makes momentum be conserved. (What I just wrote makes me wonder if someone has shown a classical analog of the WAY theorem.)

Anyway, here's the table of contents of the book:

Chapter 1: Introduction to Quantum Physics 
 
1-1. The downfall of classical concepts                             3 
1-2. The rise of randomness                                         5 
1-3. Polarized photons                                              7 
1-4. Introducing the quantum language                               9 
1-5. What is a measurement?                                        14 
1-6. Historical remarks                                            18 
1-7. Bibliography                                                  21
 
Chapter 2: Quantum Tests 
 
2-1. What is a quantum system?                                      24 
2-2. Repeatable tests                                               27 
2-3. Maximal quantum tests                                          29 
2-4. Consecutive tests                                              33 
2-5. The principle of interference                                  36 
2-6. Transition amplitudes                                          39 
2-7. Appendix: Bayes's rule of statistical inference                45 
2-8. Bibliography                                                   47
 
Chapter 3: Complex Vector Space 
 
3-1. The superposition principle                                    48 
3-2. Metric properties                                              51 
3-3. Quantum expectation rule                                       54 
3-4. Physical implementation                                        57 
3-5. Determination of a quantum state                               58 
3-6. Measurements and observables                                   62 
3-7. Further algebraic properties                                   67 
3-8. Quantum mixtures                                               72 
3-9. Appendix: Dirac's notation                                     77 
3-10. Bibliography                                                  78
     
Chapter 4: Continuous Variables 
 
4-1. Hilbert space                                                  79 
4-2. Linear operators                                               84 
4-3. Commutators and uncertainty relations                          89 
4-4. Truncated Hilbert space                                        95 
4-5. Spectral theory                                                99 
4-6. Classification of spectra                                     103 
4-7. Appendix: Generalized functions                               106 
4-8. Bibliography                                                  112
 
Chapter 5: Composite Systems  
 
5-1. Quantum correlations                                          115 
5-2. Incomplete tests and partial traces                           121 
5-3. The Schmidt decomposition                                     123 
5-4. Indistinguishable particles                                   126 
5-5. Parastatistics                                                131 
5-6. Fock space                                                    137 
5-7. Second quantization                                           142 
5-8. Bibliography                                                  147
 
Chapter 6: Bell's Theorem  
 
6-1. The dilemma of Einstein, Podolsky, and Rosen                  148 
6-2. Cryptodeterminism                                             155 
6-3. Bell's inequalities                                           160 
6-4. Some fundamental issues                                       167 
6-5. Other quantum inequalities                                    173 
6-6. Higher spins                                                  179 
6-7. Bibliography                                                  185
      
Chapter 7: Contextuality  
 
7-1. Nonlocality versus contextuality                              187 
7-2. Gleason's theorem                                             190 
7-3. The Kochen-Specker theorem                                    196 
7-4. Experimental and logical aspects of inseparability            202 
7-5. Appendix: Computer test for Kochen-Specker contradiction      209
7-6. Bibliography                                                  211

Chapter 8: Spacetime Symmetries  
 
8-1. What is a symmetry?                                           215 
8-2. Wigner's theorem                                              217 
8-3. Continuous transformations                                    220 
8-4. The momentum operator                                         225 
8-5. The Euclidean group                                           229 
8-6. Quantum dynamics                                              237 
8-7. Heisenberg and Dirac pictures                                 242 
8-8. Galilean invariance                                           245 
8-9. Relativistic invariance                                       249 
8-10. Forms of relativistic dynamics                               254 
8-11. Space reflection and time reversal                           257 
8-12. Bibliography                                                 259
     
Chapter 9: Information and Thermodynamics  
 
9-1. Entropy                                                       260 
9-2. Thermodynamic equilibrium                                     266 
9-3. Ideal quantum gas                                             270 
9-4. Some impossible processes                                     275 
9-5. Generalized quantum tests                                     279 
9-6. Neumark's theorem                                             285 
9-7. The limits of objectivity                                     289 
9-8. Quantum cryptography and teleportation                        293 
9-9. Bibliography                                                  296
 
Chapter 10: Semiclassical Methods  
 
10-1. The correspondence principle                                 298 
10-2. Motion and distortion of wave packets                        302 
10-3. Classical action                                             307 
10-4. Quantum mechanics in phase space                             312 
10-5. Koopman's theorem                                            317 
10-6. Compact spaces                                               319 
10-7. Coherent states                                              323 
10-8. Bibliography                                                 330
 
Chapter 11: Chaos and Irreversibility  
 
11-1. Discrete maps                                                332 
11-2. Irreversibility in classical physics                         341 
11-3. Quantum aspects of classical chaos                           347 
11-4. Quantum maps                                                 351 
11-5. Chaotic quantum motion                                       353 
11-6. Evolution of pure states into mixtures                       369 
11-7. Appendix: PostScript code for a map                          370 
11-8. Bibliography                                                 371

Chapter 12: The Measuring Process  
 
12-1. The ambivalent observer                                      373 
12-2. Classical measurement theory                                 378 
12-3. Estimation of a static parameter                             385 
12-4. Time-dependent signals                                       387 
12-5. Quantum Zeno effect                                          392 
12-6. Measurements of finite duration                              400 
12-7. The measurement of time                                      405 
12-8. Time and energy complementarity                              413 
12-9. Incompatible observables                                     417 
12-10. Approximate reality                                         423 
12-11. Bibliography                                                428

3) Loop representations, by Bernd Bruegmann, Max Planck Institute preprint, available as gr-qc/9312001.

This is a nice review article on loop representations of gauge theories. Anyone wanting to jump into the loop representation game would be well advised to start here.

4) The fate of black hole singularities and the parameters of the standard models of particle physics and cosmology, by Lee Smolin, available in LaTeX format as gr-qc/9404011.

This is about Smolin's "evolutionary cosmology" scenario, which I already discussed in week31. Let me just quote the abstract:

A cosmological scenario which explains the values of the parameters of the standard models of elementary particle physics and cosmology is discussed. In this scenario these parameters are set by a process analogous to natural selection which follows naturally from the assumption that the singularities in black holes are removed by quantum effects leading to the creation of new expanding regions of the universe. The suggestion of J. A. Wheeler that the parameters change randomly at such events leads naturally to the conjecture that the parameters have been selected for values that extremize the production of black holes. This leads directly to a prediction, which is that small changes in any of the parameters should lead to a decrease in the number of black holes produced by the universe. On plausible astrophysical assumptions it is found that changes in many of the parameters do lead to a decrease in the number of black holes produced by spiral galaxies. These include the masses of the proton, neutron, electron and neutrino and the weak, strong and electromagnetic coupling constants. Finally, this scenario predicts a natural time scale for cosmology equal to the time over which spiral galaxies maintain appreciable rates of star formation, which is compatible with current observations that Ω = 0.1–0.2


Quantum mechanics is a fundamental theory in physics that provides a description of the physical properties of nature at the scale of atoms and subatomic particles.[2]:1.1 It is the foundation of all quantum physics including quantum chemistry, quantum field theory, quantum technology, and quantum information science.

Classical physics, the description of physics that existed before the theory of relativity and quantum mechanics, describes many aspects of nature at an ordinary (macroscopic) scale, while quantum mechanics explains the aspects of nature at small (atomic and subatomic) scales, for which classical mechanics is insufficient. Most theories in classical physics can be derived from quantum mechanics as an approximation valid at large (macroscopic) scale.[3]

Quantum mechanics differs from classical physics in that energy, momentum, angular momentum, and other quantities of a bound system are restricted to discrete values (quantization), objects have characteristics of both particles and waves (wave-particle duality), and there are limits to how accurately the value of a physical quantity can be predicted prior to its measurement, given a complete set of initial conditions (the uncertainty principle).

Quantum mechanics arose gradually from theories to explain observations which could not be reconciled with classical physics, such as Max Planck's solution in 1900 to the black-body radiation problem, and the correspondence between energy and frequency in Albert Einstein's 1905 paper which explained the photoelectric effect. These early attempts to understand microscopic phenomena, now known as the "old quantum theory", led to the full development of quantum mechanics in the mid-1920s by Niels Bohr, Erwin Schrödinger, Werner Heisenberg, Max Born and others. The modern theory is formulated in various specially developed mathematical formalisms. In one of them, a mathematical entity called the wave function provides information, in the form of probability amplitudes, about what measurements of a particle's energy, momentum, and other physical properties may yield.


Overview and fundamental concepts
Quantum mechanics allows the calculation of probabilities for how physical systems can behave. It is typically applied to microscopic systems: molecules, atoms and sub-atomic particles. Predictions of quantum mechanics have been verified experimentally to an extremely high degree of accuracy.[note 1] A basic mathematical feature of quantum mechanics is that a probability is found by taking the square of the absolute value of a complex number, known as a probability amplitude. This is known as the Born rule, named after physicist Max Born. For example, a quantum particle like an electron can be described by a wave function, which associates to each point in space a probability amplitude. Applying the Born rule to these amplitudes gives a probability density function for the position that the electron will be found to have when an experiment is performed to measure it. The Schrödinger equation relates the collection of probability amplitudes that pertain to one moment of time to the collection of probability amplitudes that pertain to another.

One consequence of the mathematical rules of quantum mechanics is a tradeoff in predictability between different measurable quantities. The most famous form of this uncertainty principle says that no matter how a quantum particle is prepared or how carefully experiments upon it are arranged, it is impossible to have a precise prediction for a measurement of its position and also for a measurement of its momentum.

Another consequence of the mathematical rules of quantum mechanics is the phenomenon of quantum interference, which is often illustrated with the double-slit experiment. In the basic version of this experiment, a coherent light source, such as a laser beam, illuminates a plate pierced by two parallel slits, and the light passing through the slits is observed on a screen behind the plate.[4]:102–111[2]:1.1–1.8 The wave nature of light causes the light waves passing through the two slits to interfere, producing bright and dark bands on the screen – a result that would not be expected if light consisted of classical particles.[4] However, the light is always found to be absorbed at the screen at discrete points, as individual particles rather than waves; the interference pattern appears via the varying density of these particle hits on the screen. Furthermore, versions of the experiment that include detectors at the slits find that each detected photon passes through one slit (as would a classical particle), and not through both slits (as would a wave).[4]:109[5][6] However, such experiments demonstrate that particles do not form the interference pattern if one detects which slit they pass through. Other atomic-scale entities, such as electrons, are found to exhibit the same behavior when fired towards a double slit.[2] This behavior is known as wave-particle duality.

Another counter-intuitive phenomenon predicted by quantum mechanics is quantum tunnelling: a particle that goes up against a potential barrier can cross it, even if its kinetic energy is smaller than the maximum of the potential.[7] In classical mechanics this particle would be trapped. Quantum tunnelling has several important consequences, enabling radioactive decay, nuclear fusion in stars, and applications such as scanning tunnelling microscopy and the tunnel diode.[8]

When quantum systems interact, the result can be the creation of quantum entanglement: their properties become so intertwined that a description of the whole solely in terms of the individual parts is no longer possible. Erwin Schrödinger called entanglement "...the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought".[9] Quantum entanglement enables the counter-intuitive properties of quantum pseudo-telepathy, and can be a valuable resource in communication protocols, such as quantum key distribution and superdense coding.[10] Contrary to popular misconception, entanglement does not allow sending signals faster than light, as demonstrated by the no-communication theorem.[10]

Another possibility opened by entanglement is testing for "hidden variables", hypothetical properties more fundamental than the quantities addressed in quantum theory itself, knowledge of which would allow more exact predictions than quantum theory can provide. A collection of results, most significantly Bell's theorem, have demonstrated that broad classes of such hidden-variable theories are in fact incompatible with quantum physics. According to Bell's theorem, if nature actually operates in accord with any theory of local hidden variables, then the results of a Bell test will be constrained in a particular, quantifiable way. Many Bell tests have been performed, using entangled particles, and they have shown results incompatible with the constraints imposed by local hidden variables.[11][12]

It is not possible to present these concepts in more than a superficial way without introducing the actual mathematics involved; understanding quantum mechanics requires not only manipulating complex numbers, but also linear algebra, differential equations, group theory, and other more advanced subjects.[note 2] Accordingly, this article will present a mathematical formulation of quantum mechanics and survey its application to some useful and oft-studied examples.

Mathematical formulation
Main article: Mathematical formulation of quantum mechanics
In the mathematically rigorous formulation of quantum mechanics developed by Paul Dirac,[15] David Hilbert,[16] John von Neumann,[17] and Hermann Weyl,[18] the state of a quantum mechanical system is a vector ψ belonging to a (separable) Hilbert space H. This vector is postulated to be normalized under the Hilbert space inner product, that is, it obeys ⟨ψ, ψ⟩ = 1, and it is well-defined up to a complex number of modulus 1 (the global phase), that is, ψ and e^{iα}ψ represent the same physical system. In other words, the possible states are points in the projective space of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system – for example, for describing position and momentum the Hilbert space is the space of complex-valued square-integrable functions L²(ℝ), while the Hilbert space for the spin of a single proton is simply the space of two-dimensional complex vectors ℂ² with the usual inner product.

Physical quantities of interest — position, momentum, energy, spin — are represented by observables, which are Hermitian (more precisely, self-adjoint) linear operators acting on the Hilbert space. A quantum state can be an eigenvector of an observable, in which case it is called an eigenstate, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. More generally, a quantum state will be a linear combination of the eigenstates, known as a quantum superposition. When an observable is measured, the result will be one of its eigenvalues with probability given by the Born rule: in the simplest case the eigenvalue λ is non-degenerate and the probability is given by |⟨v_λ, ψ⟩|², where v_λ is its associated eigenvector. More generally, the eigenvalue is degenerate and the probability is given by ⟨ψ, P_λ ψ⟩, where P_λ is the projector onto its associated eigenspace.

After the measurement, if result λ was obtained, the quantum state is postulated to collapse to v_λ, in the non-degenerate case, or to P_λ ψ / √⟨ψ, P_λ ψ⟩, in the general case. The probabilistic nature of quantum mechanics thus stems from the act of measurement. This is one of the most difficult aspects of quantum systems to understand. It was the central topic in the famous Bohr–Einstein debates, in which the two scientists attempted to clarify these fundamental principles by way of thought experiments. In the decades after the formulation of quantum mechanics, the question of what constitutes a "measurement" has been extensively studied. Newer interpretations of quantum mechanics have been formulated that do away with the concept of "wave function collapse" (see, for example, the many-worlds interpretation). The basic idea is that when a quantum system interacts with a measuring apparatus, their respective wave functions become entangled, so that the original quantum system ceases to exist as an independent entity. For details, see the article on measurement in quantum mechanics.[19]
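
The following is a minimal NumPy sketch of the Born rule and the collapse postulate just described (an illustration added here, not part of the article), for a randomly chosen observable and state on a three-dimensional Hilbert space; the eigenvalues of a random Hermitian matrix are generically non-degenerate, so each projector P_λ is rank one.

    import numpy as np

    rng = np.random.default_rng(1)

    # Normalized state vector psi on C^3.
    psi = rng.normal(size=3) + 1j * rng.normal(size=3)
    psi /= np.linalg.norm(psi)

    # Hermitian observable A and its spectral decomposition.
    M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    A = (M + M.conj().T) / 2
    eigvals, eigvecs = np.linalg.eigh(A)

    for lam, v in zip(eigvals, eigvecs.T):
        P_lam = np.outer(v, v.conj())             # projector onto the eigenspace of lam
        prob = np.real(psi.conj() @ P_lam @ psi)  # Born rule: <psi, P_lam psi>
        post = P_lam @ psi / np.sqrt(prob)        # collapsed state if lam is obtained
        print(f"outcome {lam:+.3f}: probability {prob:.3f}, "
              f"collapsed-state norm {np.linalg.norm(post):.3f}")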

The time evolution of a quantum state is described by the Schrödinger equation:

iħ dψ(t)/dt = Hψ(t).
Here H denotes the Hamiltonian, the observable corresponding to the total energy of the system. The constant iħ is introduced so that the Hamiltonian is reduced to the classical Hamiltonian in cases where the quantum system can be approximated by a classical system; the ability to make such an approximation in certain limits is called the correspondence principle.

The solution of this differential equation is given by

ψ(t) = e^{-iHt/ħ} ψ(0).
The operator U(t) = e^{-iHt/ħ} is known as the time-evolution operator, and has the crucial property that it is unitary. This time evolution is deterministic in the sense that – given an initial quantum state ψ(0) – it makes a definite prediction of what the quantum state ψ(t) will be at any later time.[20]
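
A minimal sketch (added here, with ħ set to 1 and an arbitrarily chosen 2 × 2 Hamiltonian) of computing U(t) = e^{-iHt/ħ} numerically and checking that it is unitary and norm-preserving:

    import numpy as np
    from scipy.linalg import expm

    H = np.array([[1.0, 0.5],
                  [0.5, -1.0]])            # a Hermitian Hamiltonian (hbar = 1)
    psi0 = np.array([1.0, 0.0], dtype=complex)

    t = 0.7
    U = expm(-1j * H * t)                  # time-evolution operator U(t)
    psi_t = U @ psi0

    print(np.allclose(U.conj().T @ U, np.eye(2)))   # True: U is unitary
    print(np.linalg.norm(psi_t))                    # 1.0: the norm is preserved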


Fig. 1: Probability densities corresponding to the wave functions of an electron in a hydrogen atom possessing definite energy levels (increasing from the top of the image to the bottom: n = 1, 2, 3, ...) and angular momenta (increasing across from left to right: s, p, d, ...). Denser areas correspond to higher probability density in a position measurement. Such wave functions are directly comparable to Chladni's figures of acoustic modes of vibration in classical physics and are modes of oscillation as well, possessing a sharp energy and thus, a definite frequency. The angular momentum and energy are quantized and take only discrete values like those shown (as is the case for resonant frequencies in acoustics)
Some wave functions produce probability distributions that are independent of time, such as eigenstates of the Hamiltonian. Many systems that are treated dynamically in classical mechanics are described by such "static" wave functions. For example, a single electron in an unexcited atom is pictured classically as a particle moving in a circular trajectory around the atomic nucleus, whereas in quantum mechanics, it is described by a static wave function surrounding the nucleus. For example, the electron wave function for an unexcited hydrogen atom is a spherically symmetric function known as an s orbital (Fig. 1).

Analytic solutions of the Schrödinger equation are known for very few relatively simple model Hamiltonians including the quantum harmonic oscillator, the particle in a box, the dihydrogen cation, and the hydrogen atom. Even the helium atom – which contains just two electrons – has defied all attempts at a fully analytic treatment.

However, there are techniques for finding approximate solutions. One method, called perturbation theory, uses the analytic result for a simple quantum mechanical model to create a result for a related but more complicated model by (for example) the addition of a weak potential energy. Another method is called "semi-classical equation of motion", which applies to systems for which quantum mechanics produces only small deviations from classical behavior. These deviations can then be computed based on the classical motion. This approach is particularly important in the field of quantum chaos.

Uncertainty principle
One consequence of the basic quantum formalism is the uncertainty principle. In its most familiar form, this states that no preparation of a quantum particle can imply simultaneously precise predictions both for a measurement of its position and for a measurement of its momentum.[21][22] Both position and momentum are observables, meaning that they are represented by Hermitian operators. The position operator X̂ and momentum operator P̂ do not commute, but rather satisfy the canonical commutation relation:

[X̂, P̂] = iħ.
Given a quantum state, the Born rule lets us compute expectation values for both X and P, and moreover for powers of them. Defining the uncertainty for an observable by a standard deviation, we have

σ_X = √(⟨X²⟩ - ⟨X⟩²),
and likewise for the momentum:

σ_P = √(⟨P²⟩ - ⟨P⟩²).
The uncertainty principle states that

σ_X σ_P ≥ ħ/2.
Either standard deviation can in principle be made arbitrarily small, but not both simultaneously.[23] This inequality generalizes to arbitrary pairs of self-adjoint operators A and B. The commutator of these two operators is

[A, B] = AB - BA,
and this provides the lower bound on the product of standard deviations:

σ_A σ_B ≥ (1/2) |⟨[A, B]⟩|.
Another consequence of the canonical commutation relation is that the position and momentum operators are Fourier transforms of each other, so that a description of an object according to its momentum is the Fourier transform of its description according to its position. The fact that dependence in momentum is the Fourier transform of the dependence in position means that the momentum operator is equivalent (up to an i/ħ factor) to taking the derivative according to the position, since in Fourier analysis differentiation corresponds to multiplication in the dual space. This is why in quantum equations in position space, the momentum p_i is replaced by -iħ ∂/∂x, and in particular in the non-relativistic Schrödinger equation in position space the momentum-squared term is replaced with a Laplacian times -ħ².[21]
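
A minimal sketch (added here) checking the general bound σ_A σ_B ≥ (1/2)|⟨[A, B]⟩| for the Pauli operators A = σ_x, B = σ_y and a random qubit state:

    import numpy as np

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

    rng = np.random.default_rng(2)
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)                       # random normalized qubit state

    def expval(op):
        return np.real(psi.conj() @ op @ psi)

    def sigma(op):
        return np.sqrt(expval(op @ op) - expval(op) ** 2)

    comm = sx @ sy - sy @ sx                         # [sigma_x, sigma_y] = 2i sigma_z
    lower_bound = 0.5 * abs(psi.conj() @ comm @ psi)
    print(sigma(sx) * sigma(sy), lower_bound)        # left value is never smaller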

Composite systems and entanglement
When two different quantum systems are considered together, the Hilbert space of the combined system is the tensor product of the Hilbert spaces of the two components. For example, let A and B be two quantum systems, with Hilbert spaces H_A and H_B, respectively. The Hilbert space of the composite system is then

H_AB = H_A ⊗ H_B.
If the state for the first system is the vector ψ_A and the state for the second system is ψ_B, then the state of the composite system is

ψ_A ⊗ ψ_B.
Not all states in the joint Hilbert space H_AB can be written in this form, however, because the superposition principle implies that linear combinations of these "separable" or "product states" are also valid. For example, if ψ_A and φ_A are both possible states for system A, and likewise ψ_B and φ_B are both possible states for system B, then

(1/√2) (ψ_A ⊗ ψ_B + φ_A ⊗ φ_B)
is a valid joint state that is not separable. States that are not separable are called entangled.[24][25]

If the state for a composite system is entangled, it is impossible to describe either component system A or system B by a state vector. One can instead define reduced density matrices that describe the statistics that can be obtained by making measurements on either component system alone. This necessarily causes a loss of information, though: knowing the reduced density matrices of the individual systems is not enough to reconstruct the state of the composite system.[24][25] Just as density matrices specify the state of a subsystem of a larger system, analogously, positive operator-valued measures (POVMs) describe the effect on a subsystem of a measurement performed on a larger system. POVMs are extensively used in quantum information theory.[24][26]
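
A minimal sketch (added here) of these ideas for the entangled two-qubit state (|00⟩ + |11⟩)/√2: the reduced density matrix of subsystem A, obtained by a partial trace over B, is maximally mixed, so no state vector describes A alone.

    import numpy as np

    zero = np.array([1.0, 0.0])
    one = np.array([0.0, 1.0])

    psi = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)   # entangled state on C^2 (x) C^2
    rho = np.outer(psi, psi.conj())                                # density matrix of AB

    # Partial trace over B: reshape to indices [a, b, a', b'] and trace over b = b'.
    rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
    print(rho_A)        # 0.5 * identity: maximally mixed reduced state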

As described above, entanglement is a key feature of models of measurement processes in which an apparatus becomes entangled with the system being measured. Systems interacting with the environment in which they reside generally become entangled with that environment, a phenomenon known as quantum decoherence. This can explain why, in practice, quantum effects are difficult to observe in systems larger than microscopic.[27]

Equivalence between formulations
There are many mathematically equivalent formulations of quantum mechanics. One of the oldest and most common is the "transformation theory" proposed by Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics – matrix mechanics (invented by Werner Heisenberg) and wave mechanics (invented by Erwin Schrödinger).[28] An alternative formulation of quantum mechanics is Feynman's path integral formulation, in which a quantum-mechanical amplitude is considered as a sum over all possible classical and non-classical paths between the initial and final states. This is the quantum-mechanical counterpart of the action principle in classical mechanics.

Symmetries and conservation laws
Main article: Noether's theorem
The Hamiltonian H is known as the generator of time evolution, since it defines a unitary time-evolution operator U(t) = e^{-iHt/ħ} for each value of t. From this relation between U(t) and H, it follows that any observable A that commutes with H will be conserved: its expectation value will not change over time. This statement generalizes, as mathematically, any Hermitian operator A can generate a family of unitary operators parameterized by a variable t. Under the evolution generated by A, any observable B that commutes with A will be conserved. Moreover, if B is conserved by evolution under A, then A is conserved under the evolution generated by B. This implies a quantum version of the result proven by Emmy Noether in classical (Lagrangian) mechanics: for every differentiable symmetry of a Hamiltonian, there exists a corresponding conservation law.
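
A minimal sketch (added here, ħ = 1) illustrating the conservation statement: with H = σ_z, the expectation value of σ_z is constant in time, while that of σ_x, which does not commute with H, oscillates.

    import numpy as np
    from scipy.linalg import expm

    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)

    psi0 = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)

    def expval(op, psi):
        return np.real(psi.conj() @ op @ psi)

    for t in (0.0, 0.5, 1.0, 2.0):
        psi_t = expm(-1j * sz * t) @ psi0            # evolution generated by H = sigma_z
        print(f"t={t}: <sigma_z>={expval(sz, psi_t):+.3f}  <sigma_x>={expval(sx, psi_t):+.3f}")
    # <sigma_z> stays fixed (it commutes with H); <sigma_x> oscillates as cos(2t).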

Examples
Free particle
Main article: Free particle

Position space probability density of a Gaussian wave packet moving in one dimension in free space.
The simplest example of a quantum system with a position degree of freedom is a free particle in a single spatial dimension. A free particle is one which is not subject to external influences, so that its Hamiltonian consists only of its kinetic energy:

H = P²/2m = -(ħ²/2m) d²/dx².
The general solution of the Schrödinger equation is given by

ψ(x, t) = (1/√(2π)) ∫_{-∞}^{∞} ψ̂(k, 0) e^{i(kx - ħk²t/2m)} dk,
which is a superposition of all possible plane waves e^{i(kx - ħk²t/2m)}, which are eigenstates of the momentum operator with momentum p = ħk. The coefficients of the superposition are ψ̂(k, 0), which is the Fourier transform of the initial quantum state ψ(x, 0).

It is not possible for the solution to be a single momentum eigenstate, or a single position eigenstate, as these are not normalizable quantum states.[note 3] Instead, we can consider a Gaussian wave packet:

ψ(x, 0) = (πa)^{-1/4} e^{-x²/(2a)}
which has Fourier transform, and therefore momentum distribution

ψ̂(k, 0) = (a/π)^{1/4} e^{-ak²/2}.
We see that as we make a smaller the spread in position gets smaller, but the spread in momentum gets larger. Conversely, by making a larger we make the spread in momentum smaller, but the spread in position gets larger. This illustrates the uncertainty principle.

As we let the Gaussian wave packet evolve in time, we see that its center moves through space at a constant velocity (like a classical particle with no forces acting on it). However, the wave packet will also spread out as time progresses, which means that the position becomes more and more uncertain. The uncertainty in momentum, however, stays constant.[29]
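
A minimal sketch (added here, with ħ = 1 so momentum equals the wavenumber k, and a = 0.5) that computes the position spread of the Gaussian packet on a grid and the momentum spread from its FFT, confirming σ_x σ_k = 1/2:

    import numpy as np

    a = 0.5
    x = np.linspace(-40, 40, 2**14)
    dx = x[1] - x[0]
    psi = (np.pi * a) ** (-0.25) * np.exp(-x**2 / (2 * a))   # Gaussian packet psi(x, 0)

    def spread(grid, density, step):
        norm = np.sum(density) * step
        mean = np.sum(grid * density) * step / norm
        return np.sqrt(np.sum((grid - mean) ** 2 * density) * step / norm)

    k = np.fft.fftshift(np.fft.fftfreq(x.size, d=dx)) * 2 * np.pi
    psi_k = np.fft.fftshift(np.fft.fft(psi))       # only |psi_k|^2 is used, so scale/phase do not matter

    sigma_x = spread(x, np.abs(psi) ** 2, dx)
    sigma_k = spread(k, np.abs(psi_k) ** 2, k[1] - k[0])
    print(sigma_x, sigma_k, sigma_x * sigma_k)     # ~0.5, ~1.0, ~0.5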

Particle in a box

1-dimensional potential energy box (or infinite potential well)
Main article: Particle in a box
The particle in a one-dimensional potential energy box is the most mathematically simple example where restraints lead to the quantization of energy levels. The box is defined as having zero potential energy everywhere inside a certain region, and therefore infinite potential energy everywhere outside that region.[21]:77–78 For the one-dimensional case in the x direction, the time-independent Schrödinger equation may be written

-(ħ²/2m) d²ψ/dx² = Eψ.
With the differential operator defined by

p̂_x = -iħ d/dx
the previous equation is evocative of the classic kinetic energy analogue,

(1/2m) p̂_x² = E,
with state ψ in this case having energy E coincident with the kinetic energy of the particle.

The general solutions of the Schrödinger equation for the particle in a box are

ψ(x) = A e^{ikx} + B e^{-ikx},    E = ħ²k²/(2m)
or, from Euler's formula,

ψ(x) = C sin(kx) + D cos(kx).
The infinite potential walls of the box determine the values of C, D, and k at x = 0 and x = L, where ψ must be zero. Thus, at x = 0,

ψ(0) = 0 = C sin(0) + D cos(0) = D
and D = 0. At x = L,

ψ(L) = 0 = C sin(kL),
in which C cannot be zero as this would conflict with the postulate that ψ has norm 1. Therefore, since sin(kL) = 0, kL must be an integer multiple of π,

k = nπ/L,    n = 1, 2, 3, ….
This constraint on k implies a constraint on the energy levels, yielding

E_n = ħ²π²n²/(2mL²) = n²h²/(8mL²).
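
A minimal sketch (added here, with ħ = m = L = 1) comparing these energy levels with the eigenvalues of a finite-difference discretization of the box Hamiltonian:

    import numpy as np

    N = 1000
    dx = 1.0 / (N + 1)                      # interior grid points only; psi = 0 at the walls
    main = np.full(N, 2.0)
    off = np.full(N - 1, -1.0)
    # Discretized -(1/2) d^2/dx^2 with Dirichlet boundary conditions.
    H = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / (2 * dx**2)

    numeric = np.sort(np.linalg.eigvalsh(H))[:4]
    analytic = np.array([(np.pi * n) ** 2 / 2 for n in range(1, 5)])
    print(numeric)      # ~ 4.93, 19.74, 44.41, 78.96
    print(analytic)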

A finite potential well is the generalization of the infinite potential well problem to potential wells having finite depth. The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem as the wave function is not pinned to zero at the walls of the well. Instead, the wave function must satisfy more complicated mathematical boundary conditions as it is nonzero in regions outside the well. Another related problem is that of the rectangular potential barrier, which furnishes a model for the quantum tunneling effect that plays an important role in the performance of modern technologies such as flash memory and scanning tunneling microscopy.

Harmonic oscillator
Main article: Quantum harmonic oscillator

Some trajectories of a harmonic oscillator (i.e. a ball attached to a spring) in classical mechanics (A-B) and quantum mechanics (C-H). In quantum mechanics, the position of the ball is represented by a wave (called the wave function), with the real part shown in blue and the imaginary part shown in red. Some of the trajectories (such as C, D, E, and F) are standing waves (or "stationary states"). Each standing-wave frequency is proportional to a possible energy level of the oscillator. This "energy quantization" does not occur in classical physics, where the oscillator can have any energy.
As in the classical case, the potential for the quantum harmonic oscillator is given by

V(x) = (1/2) mω²x².
This problem can either be treated by directly solving the Schrödinger equation, which is not trivial, or by using the more elegant "ladder method" first proposed by Paul Dirac. The eigenstates are given by

ψ_n(x) = √(1/(2^n n!)) · (mω/(πħ))^{1/4} · e^{-mωx²/(2ħ)} · H_n(√(mω/ħ) x),    n = 0, 1, 2, …,
where Hn are the Hermite polynomials

H_n(x) = (-1)^n e^{x²} (d^n/dx^n) e^{-x²},
and the corresponding energy levels are

E_n = ħω(n + 1/2).
This is another example illustrating the discretization of energy for bound states.
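
A minimal sketch (added here, with ħ = m = ω = 1) that builds the first few eigenfunctions from the Hermite polynomials and checks numerically that the energy expectation value of ψ_n is n + 1/2:

    import numpy as np
    from scipy.special import eval_hermite, factorial

    x = np.linspace(-12, 12, 4001)
    dx = x[1] - x[0]

    def psi(n):
        # Normalized oscillator eigenfunction for hbar = m = omega = 1.
        norm = 1.0 / np.sqrt(2.0**n * factorial(n) * np.sqrt(np.pi))
        return norm * np.exp(-x**2 / 2) * eval_hermite(n, x)

    for n in range(4):
        p = psi(n)
        d2p = np.gradient(np.gradient(p, dx), dx)        # approximate d^2 psi / dx^2
        energy = np.sum(p * (-0.5 * d2p + 0.5 * x**2 * p)) * dx
        print(n, energy)                                  # ~ n + 0.5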

Mach–Zehnder interferometer

Schematic of a Mach–Zehnder interferometer.
The Mach–Zehnder interferometer (MZI) illustrates the concepts of superposition and interference with linear algebra in dimension 2, rather than differential equations. It can be seen as a simplified version of the double-slit experiment, but it is of interest in its own right, for example in the delayed choice quantum eraser, the Elitzur–Vaidman bomb tester, and in studies of quantum entanglement.[30][31]

We can model a photon going through the interferometer by considering that at each point it can be in a superposition of only two paths: the "lower" path which starts from the left, goes straight through both beam splitters, and ends at the top, and the "upper" path which starts from the bottom, goes straight through both beam splitters, and ends at the right. The quantum state of the photon is therefore a vector ψ ∈ ℂ² that is a superposition of the "lower" path ψ_l = (1, 0)^T and the "upper" path ψ_u = (0, 1)^T, that is, ψ = αψ_l + βψ_u for complex α, β such that |α|² + |β|² = 1.

Both beam splitters are modelled as the unitary matrix B = (1/√2) [[1, i], [i, 1]], which means that when a photon meets the beam splitter it will either stay on the same path with a probability amplitude of 1/√2, or be reflected to the other path with a probability amplitude of i/√2. The phase shifter on the upper arm is modelled as the unitary matrix P = [[1, 0], [0, e^{iΔΦ}]], which means that if the photon is on the "upper" path it will gain a relative phase of ΔΦ, and it will stay unchanged if it is in the lower path.

A photon that enters the interferometer from the left will then end up in the state

BPBψ_l = i e^{iΔΦ/2} (-sin(ΔΦ/2), cos(ΔΦ/2))^T,
and the probabilities that it will be detected at the right or at the top are given respectively by

p(u) = |⟨ψ_u, BPBψ_l⟩|² = cos²(ΔΦ/2),
p(l) = |⟨ψ_l, BPBψ_l⟩|² = sin²(ΔΦ/2).
One can therefore use the Mach–Zehnder interferometer to estimate the phase shift by estimating these probabilities.

It is interesting to consider what would happen if the photon were definitely in either the "lower" or "upper" paths between the beam splitters. This can be accomplished by blocking one of the paths, or equivalently by removing the first beam splitter (and feeding the photon from the left or the bottom, as desired). In both cases there will be no interference between the paths anymore, and the probabilities are given by p(u) = p(l) = 1/2, independently of the phase \Delta\Phi. From this we can conclude that the photon does not take one path or another after the first beam splitter, but rather that it is in a genuine quantum superposition of the two paths.[32]
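To make the linear algebra above concrete, here is a minimal numerical sketch (written in Python with NumPy; the language and library are choices made for this illustration, not something used in the text). It builds the matrices B and P, propagates \psi_l through the interferometer, and reproduces both the interference probabilities and the 1/2 result obtained when the first beam splitter is removed; the phase value 0.7 is arbitrary.

import numpy as np

# Path basis states: "lower" and "upper"
psi_l = np.array([1, 0], dtype=complex)
psi_u = np.array([0, 1], dtype=complex)

B = np.array([[1, 1j], [1j, 1]], dtype=complex) / np.sqrt(2)   # beam splitter
dphi = 0.7                                                     # arbitrary phase shift
P = np.array([[1, 0], [0, np.exp(1j * dphi)]], dtype=complex)  # phase shifter on the upper arm

out = B @ P @ B @ psi_l                     # photon entering from the left
p_u = abs(np.vdot(psi_u, out)) ** 2         # detection probability on the "upper" output
p_l = abs(np.vdot(psi_l, out)) ** 2         # detection probability on the "lower" output
print(p_u, np.cos(dphi / 2) ** 2)           # both print cos^2(dphi/2)
print(p_l, np.sin(dphi / 2) ** 2)           # both print sin^2(dphi/2)

# With the first beam splitter removed there is no interference:
out2 = B @ P @ psi_l
print(abs(np.vdot(psi_u, out2)) ** 2,       # 0.5, independent of dphi
      abs(np.vdot(psi_l, out2)) ** 2)       # 0.5

Sweeping dphi over a grid of values traces out the cos^2(\Delta\Phi/2) interference fringe, which is how the phase shift would be estimated from measured detection frequencies.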

Applications
Main article: Applications of quantum mechanics
Quantum mechanics has had enormous success in explaining many of the features of our universe, with regards to small-scale and discrete quantities and interactions which cannot be explained by classical methods.[note 4] Quantum mechanics is often the only theory that can reveal the individual behaviors of the subatomic particles that make up all forms of matter (electrons, protons, neutrons, photons, and others). Solid-state physics and materials science are dependent upon quantum mechanics.

In many aspects modern technology operates at a scale where quantum effects are significant. Important applications of quantum theory include quantum chemistry, quantum optics, quantum computing, superconducting magnets, light-emitting diodes, the optical amplifier and the laser, the transistor and semiconductors such as the microprocessor, medical and research imaging such as magnetic resonance imaging and electron microscopy.[33] Explanations for many biological and physical phenomena are rooted in the nature of the chemical bond, most notably the macro-molecule DNA.

Relation to other scientific theories
Classical mechanics
The rules of quantum mechanics assert that the state space of a system is a Hilbert space and that observables of the system are Hermitian operators acting on vectors in that space – although they do not tell us which Hilbert space or which operators. These can be chosen appropriately in order to obtain a quantitative description of a quantum system, a necessary step in making physical predictions. An important guide for making these choices is the correspondence principle, a heuristic which states that the predictions of quantum mechanics reduce to those of classical mechanics in the regime of large quantum numbers.[34] One can also start from an established classical model of a particular system, and then try to guess the underlying quantum model that would give rise to the classical model in the correspondence limit. This approach is known as quantization.
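One simple illustration of the correspondence limit (a worked example added here, not part of the original text) uses the harmonic-oscillator levels E_n = \hbar\omega(n + 1/2) quoted earlier: the spacing between adjacent levels, relative to the energy itself, is (E_{n+1} - E_n)/E_n = 1/(n + 1/2), which tends to zero as n grows, so for large quantum numbers the discrete spectrum looks effectively continuous, as the classical description would suggest.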

When quantum mechanics was originally formulated, it was applied to models whose correspondence limit was non-relativistic classical mechanics. For instance, the well-known model of the quantum harmonic oscillator uses an explicitly non-relativistic expression for the kinetic energy of the oscillator, and is thus a quantum version of the classical harmonic oscillator.

Complications arise with chaotic systems, which do not have good quantum numbers, and quantum chaos studies the relationship between classical and quantum descriptions in these systems.

Quantum decoherence is a mechanism through which quantum systems lose coherence, and thus become incapable of displaying many typically quantum effects: quantum superpositions become simply probabilistic mixtures, and quantum entanglement becomes simply classical correlations. Quantum coherence is not typically evident at macroscopic scales, except maybe at temperatures approaching absolute zero at which quantum behavior may manifest macroscopically.[note 5]

Many macroscopic properties of a classical system are a direct consequence of the quantum behavior of its parts. For example, the stability of bulk matter (consisting of atoms and molecules which would quickly collapse under electric forces alone), the rigidity of solids, and the mechanical, thermal, chemical, optical and magnetic properties of matter are all results of the interaction of electric charges under the rules of quantum mechanics.[35]

Special relativity and electrodynamics
Early attempts to merge quantum mechanics with special relativity involved the replacement of the Schrödinger equation with a covariant equation such as the Klein–Gordon equation or the Dirac equation. While these theories were successful in explaining many experimental results, they had certain unsatisfactory qualities stemming from their neglect of the relativistic creation and annihilation of particles. A fully relativistic quantum theory required the development of quantum field theory, which applies quantization to a field (rather than a fixed set of particles). The first complete quantum field theory, quantum electrodynamics, provides a fully quantum description of the electromagnetic interaction. Quantum electrodynamics is, along with general relativity, one of the most accurate physical theories ever devised.[36][37]

The full apparatus of quantum field theory is often unnecessary for describing electrodynamic systems. A simpler approach, one that has been used since the inception of quantum mechanics, is to treat charged particles as quantum mechanical objects being acted on by a classical electromagnetic field. For example, the elementary quantum model of the hydrogen atom describes the electric field of the hydrogen atom using a classical -e^{2}/(4\pi\epsilon_{0}r) Coulomb potential. This "semi-classical" approach fails if quantum fluctuations in the electromagnetic field play an important role, such as in the emission of photons by charged particles.

Quantum field theories for the strong nuclear force and the weak nuclear force have also been developed. The quantum field theory of the strong nuclear force is called quantum chromodynamics, and describes the interactions of subnuclear particles such as quarks and gluons. The weak nuclear force and the electromagnetic force were unified, in their quantized forms, into a single quantum field theory (known as electroweak theory), by the physicists Abdus Salam, Sheldon Glashow and Steven Weinberg.[38]

Relation to general relativity
Even though the predictions of both quantum theory and general relativity have been supported by rigorous and repeated empirical evidence, their abstract formalisms contradict each other and they have proven extremely difficult to incorporate into one consistent, cohesive model. Gravity is negligible in many areas of particle physics, so that unification between general relativity and quantum mechanics is not an urgent issue in those particular applications. However, the lack of a correct theory of quantum gravity is an important issue in physical cosmology and the search by physicists for an elegant "Theory of Everything" (TOE). Consequently, resolving the inconsistencies between both theories has been a major goal of 20th- and 21st-century physics. This TOE would combine not only the models of subatomic physics, but also derive the four fundamental forces of nature from a single force or phenomenon.

One proposal for doing so is string theory, which posits that the point-like particles of particle physics are replaced by one-dimensional objects called strings. String theory describes how these strings propagate through space and interact with each other. On distance scales larger than the string scale, a string looks just like an ordinary particle, with its mass, charge, and other properties determined by the vibrational state of the string. In string theory, one of the many vibrational states of the string corresponds to the graviton, a quantum mechanical particle that carries gravitational force.[39][40]

Another popular theory is loop quantum gravity (LQG), which describes quantum properties of gravity and is thus a theory of quantum spacetime. LQG is an attempt to merge and adapt standard quantum mechanics and standard general relativity. This theory describes space as an extremely fine fabric "woven" of finite loops called spin networks. The evolution of a spin network over time is called a spin foam. The characteristic length scale of a spin foam is the Planck length, approximately 1.616×10⁻³⁵ m, and so lengths shorter than the Planck length are not physically meaningful in LQG.[41]

Philosophical implications
Main article: Interpretations of quantum mechanics
Unsolved problem in physics:
Is there a preferred interpretation of quantum mechanics? How does the quantum description of reality, which includes elements such as the "superposition of states" and "wave function collapse", give rise to the reality we perceive?

Since its inception, the many counter-intuitive aspects and results of quantum mechanics have provoked strong philosophical debates and many interpretations. The arguments centre on the probabilistic nature of quantum mechanics, the difficulties with wavefunction collapse and the related measurement problem, and quantum nonlocality. Perhaps the only consensus that exists about these issues is that there is no consensus. Richard Feynman once said, "I think I can safely say that nobody understands quantum mechanics."[42] According to Steven Weinberg, "There is now in my opinion no entirely satisfactory interpretation of quantum mechanics."[43]

The views of Niels Bohr, Werner Heisenberg and other physicists are often grouped together as the "Copenhagen interpretation".[44][45] According to these views, the probabilistic nature of quantum mechanics is not a temporary feature which will eventually be replaced by a deterministic theory, but is instead a final renunciation of the classical idea of "causality". Bohr in particular emphasized that any well-defined application of the quantum mechanical formalism must always make reference to the experimental arrangement, due to the complementary nature of evidence obtained under different experimental situations. Copenhagen-type interpretations remain popular in the 21st century.[46]

Albert Einstein, himself one of the founders of quantum theory, was troubled by its apparent failure to respect some cherished metaphysical principles, such as determinism and locality. Einstein's long-running exchanges with Bohr about the meaning and status of quantum mechanics are now known as the Bohr–Einstein debates. Einstein believed that underlying quantum mechanics must be a theory that explicitly forbids action at a distance. He argued that quantum mechanics was incomplete, a theory that was valid but not fundamental, analogous to how thermodynamics is valid, but the fundamental theory behind it is statistical mechanics. In 1935, Einstein and his collaborators Boris Podolsky and Nathan Rosen published an argument that the principle of locality implies the incompleteness of quantum mechanics, a thought experiment later termed the Einstein–Podolsky–Rosen paradox.[note 6] In 1964, John Bell showed that EPR's principle of locality, together with determinism, was actually incompatible with quantum mechanics: they implied constraints on the correlations produced by distant systems, now known as Bell inequalities, that can be violated by entangled particles.[51] Since then several experiments have been performed to obtain these correlations, with the result that they do in fact violate Bell inequalities, and thus falsify the conjunction of locality with determinism.[11][12]

Bohmian mechanics shows that it is possible to reformulate quantum mechanics to make it deterministic, at the price of making it explicitly nonlocal. It attributes not only a wave function to a physical system, but in addition a real position, that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation; there is never a collapse of the wave function. This solves the measurement problem.[52]

Everett's many-worlds interpretation, formulated in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes.[53] This is not accomplished by introducing a "new axiom" to quantum mechanics, but by removing the axiom of the collapse of the wave packet. All possible states of the measured system and the measuring apparatus, together with the observer, are present in a real physical quantum superposition. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we don't observe the multiverse as a whole, but only one parallel universe at a time. Exactly how this is supposed to work has been the subject of much debate. Why should we assign probabilities at all to outcomes that are certain to occur in some worlds, and why should the probabilities be given by the Born rule?[54] Everett tried to answer both questions in the paper that introduced many-worlds; his derivation of the Born rule has been criticized as relying on unmotivated assumptions.[55] Since then several other derivations of the Born rule in the many-worlds framework have been proposed. There is no consensus on whether this has been successful.[56][57]

Relational quantum mechanics appeared in the late 1990s as a modern derivative of Copenhagen-type ideas,[58] and QBism was developed some years later.[59]

History
Main article: History of quantum mechanics

Max Planck is considered the father of the quantum theory.
Quantum mechanics was developed in the early decades of the 20th century, driven by the need to explain phenomena that, in some cases, had been observed in earlier times. Scientific inquiry into the wave nature of light began in the 17th and 18th centuries, when scientists such as Robert Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations.[60] In 1803 English polymath Thomas Young described the famous double-slit experiment.[61] This experiment played a major role in the general acceptance of the wave theory of light.

In 1838 Michael Faraday discovered cathode rays. These studies were followed by the 1859 statement of the black-body radiation problem by Gustav Kirchhoff, the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical system can be discrete, and the 1900 quantum hypothesis of Max Planck.[62] Planck's hypothesis that energy is radiated and absorbed in discrete "quanta" (or energy packets) precisely matched the observed patterns of black-body radiation. The word quantum derives from the Latin, meaning "how great" or "how much".[63] According to Planck, quantities of energy could be thought of as divided into "elements" whose size (E) would be proportional to their frequency (ν):

E = h\nu,
where h is Planck's constant. Planck cautiously insisted that this was only an aspect of the processes of absorption and emission of radiation and was not the physical reality of the radiation.[64] In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery.[65] However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect, in which shining light on certain materials can eject electrons from the material. Niels Bohr then developed Planck's ideas about radiation into a model of the hydrogen atom that successfully predicted the spectral lines of hydrogen.[66] Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle (later called the photon), with a discrete amount of energy that depends on its frequency.[67] In his paper "On the Quantum Theory of Radiation," Einstein expanded on the interaction between energy and matter to explain the absorption and emission of energy by atoms. Although overshadowed at the time by his general theory of relativity, this paper articulated the mechanism underlying the stimulated emission of radiation,[68] which became the basis of the laser.

This phase is known as the old quantum theory. Never complete or self-consistent, the old quantum theory was rather a set of heuristic corrections to classical mechanics.[69] The theory is now understood as a semi-classical approximation[70] to modern quantum mechanics.[71] Notable results from this period include, in addition to the work of Planck, Einstein and Bohr mentioned above, Einstein and Peter Debye's work on the specific heat of solids, Bohr and Hendrika Johanna van Leeuwen's proof that classical physics cannot account for diamagnetism, and Arnold Sommerfeld's extension of the Bohr model to include special-relativistic effects.


The 1927 Solvay Conference in Brussels was the fifth world physics conference.
In the mid-1920s quantum mechanics was developed to become the standard formulation for atomic physics. In the summer of 1925, Bohr and Heisenberg published results that closed the old quantum theory. Heisenberg, Max Born, and Pascual Jordan pioneered matrix mechanics. The following year, Erwin Schrödinger suggested a partial differential equation for the wave functions of particles like electrons. And when effectively restricted to a finite region, this equation allowed only certain modes, corresponding to discrete quantum states – whose properties turned out to be exactly the same as implied by matrix mechanics. Born introduced the probabilistic interpretation of Schrödinger's wave function in July 1926.[72] Thus, the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927.[73]

By 1930 quantum mechanics had been further unified and formalized by David Hilbert, Paul Dirac and John von Neumann[74] with greater emphasis on measurement, the statistical nature of our knowledge of reality, and philosophical speculation about the 'observer'. It has since permeated many disciplines, including quantum chemistry, quantum electronics, quantum optics, and quantum information science. It also provides a useful framework for many features of the modern periodic table of elements, and describes the behaviors of atoms during chemical bonding and the flow of electrons in computer semiconductors, and therefore plays a crucial role in many modern technologies. While quantum mechanics was constructed to describe the world of the very small, it is also needed to explain some macroscopic phenomena such as superconductors[75] and superfluids.[76]

Its speculative modern developments include string theory and other attempts to build a quantum theory of gravity.

See also
Physics portal
Angular momentum diagrams (quantum mechanics)
Bra–ket notation
Einstein's thought experiments
Fractional quantum mechanics
List of textbooks on classical and quantum mechanics
Macroscopic quantum phenomena
Phase space formulation
Quantum dynamics
Regularization (physics)
Spherical basis
Two-state quantum system
Notes
 See, for example, Precision tests of QED. The relativistic refinement of quantum mechanics known as quantum electrodynamics (QED) has been shown to agree with experiment to within 1 part in 10⁸ for some atomic properties.
 Physicist John C. Baez cautions, "there's no way to understand the interpretation of quantum mechanics without also being able to solve quantum mechanics problems — to understand the theory, you need to be able to use it (and vice versa)".[13] Carl Sagan outlined the "mathematical underpinning" of quantum mechanics and wrote, "For most physics students, this might occupy them from, say, third grade to early graduate school—roughly 15 years. [...] The job of the popularizer of science, trying to get across some idea of quantum mechanics to a general audience that has not gone through these initiation rites, is daunting. Indeed, there are no successful popularizations of quantum mechanics in my opinion—partly for this reason."[14]
 A momentum eigenstate would be a perfectly monochromatic wave of infinite extent, which is not square-integrable. Likewise, a position eigenstate would be a Dirac delta distribution, not square-integrable and technically not a function at all. Consequently, neither can belong to the particle's Hilbert space. Physicists sometimes introduce fictitious "bases" for a Hilbert space comprising elements outside that space. These are invented for calculational convenience and do not represent physical states.[21]:100–105
 See, for example, the Feynman Lectures on Physics for some of the technological applications which use quantum mechanics, e.g., transistors (vol III, pp. 14–11 ff), integrated circuits, which are follow-on technology in solid-state physics (vol II, pp. 8–6), and lasers (vol III, pp. 9–13).
 see macroscopic quantum phenomena, Bose–Einstein condensate, and Quantum machine
 The published form of the EPR argument was due to Podolsky, and Einstein himself was not satisfied with it. In his own publications and correspondence, Einstein used a different argument to insist that quantum mechanics is an incomplete theory.[47][48][49][50]

This chapter gives a brief introduction to quantum mechanics. Quantum mechanics can be
thought of roughly as the study of physics on very small length scales, although there are
also certain macroscopic systems it directly applies to. The descriptor “quantum” arises
because in contrast with classical mechanics, certain quantities take on only discrete values.
However, some quantities still take on continuous values, as we’ll see.
In quantum mechanics, particles have wavelike properties, and a particular wave equation, the Schrodinger equation, governs how these waves behave. The Schrodinger equation
is different in a few ways from the other wave equations we’ve seen in this book. But these
differences won’t keep us from applying all of our usual strategies for solving a wave equation
and dealing with the resulting solutions.
In some respects, quantum mechanics is just another example of a system governed by a
wave equation. In fact, we will find below that some quantum mechanical systems have exact
analogies to systems we’ve already studied in this book. So the results can be carried over,
with no modifications whatsoever needed. However, although it is fairly straightforward
to deal with the actual waves, there are many things about quantum mechanics that are a
combination of subtle, perplexing, and bizarre. To name a few: the measurement problem,
hidden variables along with Bell’s theorem, and wave-particle duality. You’ll learn all about
these in an actual course on quantum mechanics.
Even though there are many things that are highly confusing about quantum mechanics,
the nice thing is that it’s relatively easy to apply quantum mechanics to a physical system
to figure out how it behaves. There is fortunately no need to understand all of the subtleties
about quantum mechanics in order to use it. Of course, in most cases this isn’t the best
strategy to take; it’s usually not a good idea to blindly forge ahead with something if you
don’t understand what you’re actually working with. But this lack of understanding can
be forgiven in the case of quantum mechanics, because no one really understands it. (Well,
maybe a couple people do, but they’re few and far between.) If the world waited to use
quantum mechanics until it understood it, then we’d be stuck back in the 1920’s. The
bottom line is that quantum mechanics can be used to make predictions that are consistent
with experiment. It hasn’t failed us yet. So it would be foolish not to use it.
The main purpose of this chapter is to demonstrate how similar certain results in quantum mechanics are to earlier results we’ve derived in the book. You actually know a good
deal of quantum mechanics already, whether you realize it or not.
The outline of this chapter is as follows. In Section 10.1 we give a brief history of the
development of quantum mechanics. In Section 10.2 we write down, after some motivation,
the Schrodinger wave equation, both the time-dependent and time-independent forms. In
Section 10.3 we discuss a number of examples. The most important thing to take away from
this section is that all of the examples we discuss have exact analogies in the string/spring
systems earlier in the book. So we technically won’t have to solve anything new here. All
the work has been done before. The only thing new that we’ll have to do is interpret the old
results. In Section 10.4 we discuss the uncertainty principle. As in Section 10.3, we’ll find
that we already did the necessary work earlier in the book. The uncertainty principle turns
out to be a direct consequence of a result from Fourier analysis. But the interpretation of
this result as an uncertainty principle has profound implications in quantum mechanics.
10.1 A brief history
Before discussing the Schrodinger wave equation, let’s take a brief (and by no means comprehensive) look at the historical timeline of how quantum mechanics came about. The
actual history is of course never as clean as an outline like this suggests, but we can at least
get a general idea of how things proceeded.
1900 (Planck): Max Planck proposed that light with frequency ν is emitted in quantized
lumps of energy that come in integral multiples of the quantity,
E = hν = ħω    (1)

where h ≈ 6.63 · 10⁻³⁴ J·s is Planck's constant, and ħ ≡ h/2π = 1.06 · 10⁻³⁴ J·s.
The frequency ν of light is generally very large (on the order of 10¹⁵ s⁻¹ for the visible
spectrum), but the smallness of h wins out, so the hν unit of energy is very small (at least on
an everyday energy scale). The energy is therefore essentially continuous for most purposes.
However, a puzzle in late 19th-century physics was the blackbody radiation problem. In a
nutshell, the issue was that the classical (continuous) theory of light predicted that certain
objects would radiate an infinite amount of energy, which of course can’t be correct. Planck’s
hypothesis of quantized radiation not only got rid of the problem of the infinity, but also
correctly predicted the shape of the power curve as a function of temperature.
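To put a number on the claim above that the hν unit of energy is tiny on everyday scales (a quick back-of-the-envelope check added here; the frequency is just a representative visible-light value):

h = 6.63e-34           # Planck's constant, J*s (value quoted above)
nu = 5e14              # a representative visible-light frequency, Hz (assumed for illustration)
E = h * nu
print(E)               # ~3.3e-19 J per quantum
print(E / 1.602e-19)   # ~2 eV, minuscule on an everyday energy scale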
The results that we derived for electromagnetic waves in Chapter 8 are still true. In
particular, the energy flux is given by the Poynting vector in Eq. 8.47. And E = pc for
a light. Planck’s hypothesis simply adds the information of how many lumps of energy a
wave contains. Although strictly speaking, Planck initially thought that the quantization
was only a function of the emission process and not inherent to the light itself.
1905 (Einstein): Albert Einstein stated that the quantization was in fact inherent to the
light, and that the lumps can be interpreted as particles, which we now call “photons.” This
proposal was a result of his work on the photoelectric effect, which deals with the absorption
of light and the emission of electrons from a material.
We know from Chapter 8 that E = pc for a light wave. (This relation also follows from
Einstein’s 1905 work on relativity, where he showed that E = pc for any massless particle,
an example of which is a photon.) And we also know that ω = ck for a light wave. So
Planck’s E = ¯hω relation becomes
E = ħω  ⇒  pc = ħ(ck)  ⇒  p = ħk    (2)
This result relates the momentum of a photon to the wavenumber of the wave it is associated
with.
1913 (Bohr): Niels Bohr stated that electrons in atoms have wavelike properties. This
correctly explained a few things about hydrogen, in particular the quantized energy levels
that were known.
1924 (de Broglie): Louis de Broglie proposed that all particles are associated with waves,
where the frequency and wavenumber of the wave are given by the same relations we found
above for photons, namely E = ħω and p = ħk. The larger E and p are, the larger ω
and k are. Even for small E and p that are typical of a photon, ω and k are very large
because ħ is so small. So any everyday-sized particle with large (in comparison) energy and
momentum values will have extremely large ω and k values. This (among other reasons)
makes it virtually impossible to observe the wave nature of macroscopic amounts of matter.
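A rough numerical sketch of this point (added here; the masses and speeds below are chosen purely for illustration): the wavelength associated with a particle is λ = 2π/k = h/p, so

h = 6.63e-34                     # Planck's constant, J*s

def de_broglie_wavelength(m, v):
    # lambda = 2*pi/k = h/p for a nonrelativistic particle with momentum p = m*v
    return h / (m * v)

print(de_broglie_wavelength(9.11e-31, 3e6))   # electron at ~1% of c: ~2.4e-10 m (atomic size)
print(de_broglie_wavelength(0.1, 10.0))       # 0.1 kg ball at 10 m/s: ~6.6e-34 m

The everyday object's wavelength is almost twenty orders of magnitude smaller than an atomic nucleus, which is why the wave nature of macroscopic matter never shows up.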
This proposal (that E = ħω and p = ħk also hold for massive particles) was a big step,
because many things that are true for photons are not true for massive (and nonrelativistic)
particles. For example, E = pc (and hence ω = ck) holds only for massless particles (we’ll
see below how ω and k are related for massive particles). But the proposal was a reasonable
one to try. And it turned out to be correct, in view of the fact that the resulting predictions
agree with experiments.
The fact that any particle has a wave associated with it leads to the so-called wave-particle duality. Are things particles, or waves, or both? Well, it depends what you're doing
with them. Sometimes things behave like waves, sometimes they behave like particles. A
vaguely true statement is that things behave like waves until a measurement takes place,
at which point they behave like particles. However, approximately one million things are
left unaddressed in that sentence. The wave-particle duality is one of the things that few
people, if any, understand about quantum mechanics.
1925 (Heisenberg): Werner Heisenberg formulated a version of quantum mechanics that
made use of matrix mechanics. We won’t deal with this matrix formulation (it’s rather
difficult), but instead with the following wave formulation due to Schrodinger (this is a
waves book, after all).
1926 (Schrodinger): Erwin Schrodinger formulated a version of quantum mechanics that
was based on waves. He wrote down a wave equation (the so-called Schrodinger equation)
that governs how the waves evolve in space and time. We’ll deal with this equation in depth
below. Even though the equation is correct, the correct interpretation of what the wave
actually meant was still missing. Initially Schrodinger thought (incorrectly) that the wave
represented the charge density.
1926 (Born): Max Born correctly interpreted Schrodinger’s wave as a probability amplitude. By “amplitude” we mean that the wave must be squared to obtain the desired
probability. More precisely, since the wave (as we’ll see) is in general complex, we need to
square its absolute value. This yields the probability of finding a particle at a given location
(assuming that the wave is written as a function of x).
This probability isn’t a consequence of ignorance, as is the case with virtually every
other example of probability you’re familiar with. For example, in a coin toss, if you
know everything about the initial motion of the coin (velocity, angular velocity), along
with all external influences (air currents, nature of the floor it lands on, etc.), then you
can predict which side will land facing up. Quantum mechanical probabilities aren’t like
this. They aren’t a consequence of missing information. The probabilities are truly random,
and there is no further information (so-called “hidden variables”) that will make things unrandom. The topic of hidden variables includes various theorems (such as Bell’s theorem)
and experimental results that you will learn about in a quantum mechanics course.
1926 (Dirac): Paul Dirac showed that Heisenberg’s and Schrodinger’s versions of quantum
mechanics were equivalent, in that they could both be derived from a more general version
of quantum mechanics.
10.2 The Schrodinger equation
In this section we’ll give a “derivation” of the Schrodinger equation. Our starting point will
be the classical nonrelativistic expression for the energy of a particle, which is the sum of
the kinetic and potential energies. We’ll assume as usual that the potential is a function of
only x. We have

E = K + V = (1/2)mv² + V(x) = p²/2m + V(x).    (3)
We’ll now invoke de Broglie’s claim that all particles can be represented as waves with
frequency ω and wavenumber k, and that E = ħω and p = ħk. This turns the expression
for the energy into

ħω = ħ²k²/2m + V(x).    (4)
A wave with frequency ω and wavenumber k can be written as usual as ψ(x, t) = Ae^{i(kx−ωt)} (the convention is to put a minus sign in front of the ωt). In 3-D we would have ψ(r, t) = Ae^{i(k·r−ωt)}, but let's just deal with 1-D. We now note that

∂ψ/∂t = −iωψ  ⇒  ωψ = i ∂ψ/∂t,    and    ∂²ψ/∂x² = −k²ψ  ⇒  k²ψ = −∂²ψ/∂x².    (5)
If we multiply the energy equation in Eq. (4) by ψ, and then plug in these relations, we
obtain
ħ(ωψ) = (ħ²/2m)(k²ψ) + V(x)ψ  ⇒  iħ ∂ψ/∂t = −(ħ²/2m) ∂²ψ/∂x² + Vψ    (6)
This is the time-dependent Schrodinger equation. If we put the x and t arguments back in,
the equation takes the form,
iħ ∂ψ(x, t)/∂t = −(ħ²/2m) ∂²ψ(x, t)/∂x² + V(x)ψ(x, t).    (7)
In 3-D, the x dependence turns into dependence on all three coordinates (x, y, z), and the ∂²ψ/∂x² term becomes ∇²ψ (the sum of the second derivatives). Remember that Born's (correct) interpretation of ψ(x) is that |ψ(x)|² gives the probability of finding the particle at position x.
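As a quick sanity check of Eq. (7) (a minimal numerical sketch added here, in Python/NumPy, with ħ = m = 1 and V = 0 chosen purely for convenience), one can verify by finite differences that ψ(x, t) = e^{i(kx−ωt)} with ω = ħk²/2m satisfies the time-dependent Schrodinger equation:

import numpy as np

hbar = m = 1.0                       # convenient units
k = 2.0                              # an arbitrary wavenumber
w = hbar * k**2 / (2 * m)            # omega from the free-particle dispersion relation

def psi(x, t):
    return np.exp(1j * (k * x - w * t))

x0, t0, d = 0.3, 0.5, 1e-4           # a sample point and a small step size

dpsi_dt   = (psi(x0, t0 + d) - psi(x0, t0 - d)) / (2 * d)
d2psi_dx2 = (psi(x0 + d, t0) - 2 * psi(x0, t0) + psi(x0 - d, t0)) / d**2

lhs = 1j * hbar * dpsi_dt                    # i*hbar dpsi/dt
rhs = -hbar**2 / (2 * m) * d2psi_dx2         # -(hbar^2/2m) d^2psi/dx^2, with V = 0
print(abs(lhs - rhs))                        # tiny (zero up to finite-difference error)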
Having successfully produced the time-dependent Schrodinger equation, we should ask:
Did the above reasoning actually prove that the Schrodinger equation is valid? No, it didn’t,
for three reasons.
1. The reasoning is based on de Broglie’s assumption that there is a wave associated with
every particle, and also on the assumption that the ω and k of the wave are related to
E and p via Planck’s constant in Eqs. (1) and (2). We had to accept these assumptions
on faith.
2. Said in a different way, it is impossible to actually prove anything in physics. All we
can do is make an educated guess at a theory, and then do experiments to try to show
that the theory is consistent with the real world. The more experiments we do, the
more comfortable we are that the theory is a good one. But we can never be absolutely
sure that we have the correct theory. In fact, odds are that it’s simply the limiting
case of a more correct theory.
3. The Schrodinger equation actually isn’t valid, so there’s certainly no way that we
proved it. Consistent with the above point concerning limiting cases, the quantum
theory based on Schrodinger’s equation is just a limiting theory of a more correct one,
which happens to be quantum field theory (which unifies quantum mechanics with
special relativity). This in turn must be a limiting theory of yet another more correct
one, because it doesn’t incorporate gravity. Eventually there will be one theory that
covers everything (although this point can be debated), but we’re definitely not there
yet.
Due to the “i” that appears in Eq. (6), ψ(x) is complex. And in contrast with waves in
classical mechanics, the entire complex function now matters in quantum mechanics. We
won’t be taking the real part in the end. Up to this point in the book, the use of complex
functions was simply a matter of convenience, because it is easier to work with exponentials
than trig functions. Only the real part mattered (or imaginary part – take your pick, but not
both). But in quantum mechanics the whole complex wavefunction is relevant. However,
the theory is structured in such a way that anything you might want to measure (position,
momentum, energy, etc.) will always turn out to be a real quantity. This is a necessary
feature of any valid theory, of course, because you’re not going to go out and measure a
distance of 2 + 5i meters, or pay an electrical bill of 17 + 6i kilowatt hours.
As mentioned in the introduction to this chapter, there is an endless number of difficult
questions about quantum mechanics that can be discussed. But in this short introduction
to the subject, let’s just accept Schrodinger’s equation as valid, and see where it takes us.
Solving the equation
If we put aside the profound implications of the Schrodinger equation and regard it as
simply a mathematical equation, then it’s just another wave equation. We already know
the solution, of course, because we used the function ψ(x, t) = Aei(kx−ωt)
to produce Eqs.
(5) and (6) in the first place. But let’s pretend that we don’t know this, and let’s solve the
Schrodinger equation as if we were given it out of the blue.
As always, we’ll guess an exponential solution. If we first look at exponential behavior
in the time coordinate, our guess is ψ(x, t) = e^{−iωt}f(x) (the minus sign here is convention). Plugging this into Eq. (7) and canceling the e^{−iωt} yields

ħωf(x) = −(ħ²/2m) ∂²f(x)/∂x² + V(x)f(x).    (8)
But from Eq. (1), we have ħω = E. And we'll now replace f(x) with ψ(x). This might
cause a little confusion, since we’ve already used ψ to denote the entire wavefunction ψ(x, t).
However, it is general convention to also use the letter ψ to denote the spatial part. So we
now have
Eψ(x) = −(ħ²/2m) ∂²ψ(x)/∂x² + V(x)ψ(x)    (9)
This is called the time-independent Schrodinger equation. This equation is more restrictive
than the original time-dependent Schrodinger equation, because it assumes that the particle/wave has a definite energy (that is, a definite ω). In general, a particle can be in a state
that is the superposition of states with various definite energies, just like the motion of a
string can be the superposition of various normal modes with definite ω’s. The same reasoning applies here as with all the other waves we’ve discussed: From Fourier analysis and
from the linearity of the Schrodinger equation, we can build up any general wavefunction
from ones with specific energies. Because of this, it suffices to consider the time-independent
Schrodinger equation. The solutions for that equation form a basis for all possible solutions.1
Continuing with our standard strategy of guessing exponentials, we'll let ψ(x) = Ae^{ikx}. Plugging this into Eq. (9) and canceling the e^{ikx} gives (going back to the ħω instead of E)

ħω = −(ħ²/2m)(−k²) + V(x)  ⇒  ħω = ħ²k²/2m + V(x).    (10)
This is simply Eq. (4), so we’ve ended up back where we started, as expected. However, our
goal here was to show how the Schrodinger equation can be solved from scratch, without
knowing where it came from.
Eq. (10) is (sort of) a dispersion relation. If V(x) is a constant C in a given region, then the relation between ω and k (namely ħω = ħ²k²/2m + C) is independent of x, so we have a nice sinusoidal wavefunction (or exponential, if k is imaginary). However, if V(x) isn't constant, then the wavefunction isn't characterized by a unique wavenumber. So a function of the form e^{ikx} doesn't work as a solution for ψ(x). (A Fourier superposition can certainly work, since any function can be expressed that way, but a single e^{ikx} by itself doesn't work.)
This is similar to the case where the density of a string isn’t constant. We don’t obtain
sinusoidal waves there either.
10.3 Examples
In order to solve for the wavefunction ψ(x) in the time-independent Schrodinger equation
in Eq. (9), we need to be given the potential energy V (x). So let’s now do some examples
with particular functions V (x).
10.3.1 Constant potential
The simplest example is where we have a constant potential, V (x) = V0 in a given region.
Plugging ψ(x) = Ae^{ikx} into Eq. (9) then gives

E = ħ²k²/2m + V0  ⇒  k = √(2m(E − V0))/ħ.    (11)
(We’ve taken the positive square root here. We’ll throw in the minus sign by hand to obtain
the other solution, in the discussion below.) k is a constant, and its real/imaginary nature
depends on the relation between E and V0. If E > V0, then k is real, so we have oscillatory
solutions,
ψ(x) = Ae^{ikx} + Be^{−ikx}.    (12)
But if E < V0, then k is imaginary, so we have exponentially growing or decaying solutions.
If we let κ ≡ |k| = √(2m(V0 − E))/ħ, then ψ(x) takes the form,

ψ(x) = Ae^{κx} + Be^{−κx}.    (13)
We see that it is possible for ψ(x) to be nonzero in a region where E < V0. Since ψ(x) is
the probability amplitude, this implies that it is possible to have a particle with E < V0.
[Footnote 1: The "time-dependent" and "time-independent" qualifiers are a bit of a pain to keep saying, so we usually just say "the Schrodinger equation," and it's generally clear from the context which one we mean.]
This isn’t possible classically, and it is one of the many ways in which quantum mechanics
diverges from classical mechanics. We’ll talk more about this when we discuss the finite
square well in Section 10.3.3.
If E = V0, then this is the one case where the strategy of guessing an exponential function
doesn't work. But if we go back to Eq. (9) we see that E = V0 implies ∂²ψ/∂x² = 0, which in turn implies that ψ is a linear function,

ψ(x) = Ax + B.    (14)
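The three cases can be packaged into a small helper (a sketch added here, with ħ = m = 1 for simplicity) that, given E and V0, returns the real wavenumber k of Eq. (11), the decay constant κ of Eq. (13), or flags the linear case of Eq. (14):

import numpy as np

hbar = m = 1.0   # convenient units

def constant_potential_solution(E, V0):
    """Classify the solution in a region of constant potential V0."""
    if E > V0:
        return "oscillatory", np.sqrt(2 * m * (E - V0)) / hbar   # k of Eq. (11)
    if E < V0:
        return "exponential", np.sqrt(2 * m * (V0 - E)) / hbar   # kappa of Eq. (13)
    return "linear", 0.0                                         # E = V0: psi = Ax + B, Eq. (14)

print(constant_potential_solution(2.0, 1.0))   # ('oscillatory', 1.414...)
print(constant_potential_solution(0.5, 1.0))   # ('exponential', 1.0)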
In all of these cases, the full wavefunction (including the time dependence) for a particle
with a specific value of E is given by
ψ(x, t) = e^{−iωt}ψ(x) = e^{−iEt/ħ}ψ(x)    (15)
Again, we’re using the letter ψ to stand for two different functions here, but the meaning of
each is clear from the number of arguments. Any general wavefunction is built up from a
superposition of the states in Eq. (15) with different values of E, just as the general motion
of a string is built up from various normal modes with different frequencies ω. The fact that a
particle can be in a superposition of states with different energies is another instance where
quantum mechanics diverges from classical mechanics. (Of course, it’s easy for classical
waves to be in a superposition of normal modes with different energies, by Fourier analysis.)
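A short numerical illustration of that point (added here; ħ = m = 1 and the two wavenumbers are arbitrary): for a single definite-energy state |ψ(x, t)|² is independent of time, while for a superposition of two different energies it is not, because the two pieces evolve with different phases e^{−iEt/ħ}.

import numpy as np

hbar = m = 1.0

def plane_wave(k, x, t):
    w = hbar * k**2 / (2 * m)        # Eq. (10) with V = 0
    return np.exp(1j * (k * x - w * t))

x = 0.4
for t in (0.0, 1.0, 2.0):
    single = plane_wave(2.0, x, t)                                # one definite energy
    combo = (plane_wave(2.0, x, t) + plane_wave(3.0, x, t)) / np.sqrt(2)
    print(abs(single)**2, abs(combo)**2)
# |psi|^2 for the single-energy state is 1.0 at every t; for the superposition it
# changes with t, since the two energies accumulate phase at different rates.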
The above E > V0 and E < V0 cases correspond, respectively, to being above or below
the cutoff frequency in the string/spring system we discussed in Section 6.2.2. We have an
oscillatory solution if E (or ω) is above a particular value, and an exponential solution if
E (or ω) is below a particular value. The two setups (quantum mechanical with constant
V0, and string/spring with springs present everywhere) are exactly analogous to each other.
The spatial parts of the solutions are exactly the same (well, before taking the real part
in the string/spring case). The frequencies, however, are different, because the dispersion
relations are different (ħω = ħ²k²/2m + V0 and ω² = c²k² + ω_s², respectively). But this
affects only the rate of oscillation, and not the shape of the function.
The above results hold for any particular region where V (x) is constant. What if the
region extends from, say, x = 0 to x = +∞? If E > V0, the oscillatory solutions are fine,
even though they're not normalizable. That is, the integral of |ψ|² is infinite (at least for any nonzero coefficient in ψ; if the coefficient were zero, then we wouldn't have a particle). So
we can’t make the total probability equal to 1. However, this is fine. The interpretation is
that we simply have a stream of particles extending to infinity. We shouldn’t be too worried
about this divergence, because when dealing with traveling waves on a string (for example,
when discussing reflection and transmission coefficients) we assumed that the sinusoidal
waves extended to ±∞, which of course is impossible in reality.
If E < V0, then the fact that x = +∞ is in the given region implies that the coefficient
A in Eq. (13) must be zero, because otherwise ψ would diverge as x → ∞. So we are left
with only the Be^{−κx} term. (It's one thing to have the integral of |ψ|² diverge, as it did in
the previous paragraph. It’s another thing to have the integral diverge and be dominated
by values at large x. There is then zero probability of finding the particle at a finite value
of x.) If the region where E < V0 is actually the entire x axis, from −∞ to ∞, then the B
coefficient in Eq. (13) must also be zero. So ψ(x) = 0 for all x. In other words, there is no
allowed wavefunction. It is impossible to have a particle with E < V0 everywhere.
10.3.2 Infinite square well
Consider the potential energy,
V(x) = 0 for 0 ≤ x ≤ L,  and  V(x) = ∞ for x < 0 or x > L.    (16)
This is called an "infinite square well," and it is shown in Fig. 1.

[Figure 1: the infinite square well potential; V = 0 inside the well and V = ∞ at the walls.]

The "square" part of the name comes from the right-angled corners and not from the actual shape, since it's a very
(infinitely) tall rectangle. This setup is also called a “particle in a box” (a 1-D box), because
the particle can freely move around inside a given region, but has zero probability of leaving
the region, just like a box. So ψ(x) = 0 outside the box.
The particle does indeed have zero chance of being found outside the region 0 ≤ x ≤ L.
Intuitively, this is reasonable, because the particle would have to climb the infinitely high
potential cliff at the side of the box. Mathematically, this can be derived rigorously, and
we’ll do this below when we discuss the finite square well.
We’ll assume E > 0, because the E < 0 case makes E < V0 everywhere, which isn’t
possible, as we mentioned above. Inside the well, we have V (x) = 0, so this is a special
case of the constant potential discussed above. We therefore have the oscillatory solution
in Eq. (12) (since E > 0), which we will find more convenient here to write in terms of trig
functions,
ψ(x) = A cos kx + B sin kx,  where  E = ħ²k²/2m  ⇒  k = √(2mE)/ħ.    (17)
The coefficients A and B may be complex.
We now claim that ψ must be continuous at the boundaries at x = 0 and x = L. When
dealing with, say, waves on a string, it was obvious that the function ψ(x) representing
the transverse position must be continuous, because otherwise the string would have a
break in it. But it isn’t so obvious with the quantum-mechanical ψ. There doesn’t seem
to be anything horribly wrong with having a discontinuous probability distribution, since
probability isn’t an actual object. However, it is indeed true that the probability distribution
is continuous in this case (and in any other case that isn’t pathological). For now, let’s just
assume that this is true, but we’ll justify it below when we discuss the finite square well.
Since ψ(x) = 0 outside the box, continuity of ψ(x) at x = 0 quickly gives A cos(0) +
B sin(0) = 0 =⇒ A = 0. Continuity at x = L then gives B sin kL = 0 =⇒ kL = nπ, where
n is an integer. So k = nπ/L, and the solution for ψ(x) is ψ(x) = B sin(nπx/L). The full
solution including the time dependence is given by Eq. (15) as
ψ(x, t) = Be^{−iEt/ħ} sin(nπx/L),  where  E = ħ²k²/2m = n²π²ħ²/(2mL²)    (18)
We see that the energies are quantized (that is, they can take on only discrete values) and
indexed by the integer n. The string setup that is analogous to the infinite square well is
a string with fixed ends, which we discussed in Chapter 4 (see Section 4.5.2). In both of
these setups, the boundary conditions yield the same result that an integral number of half
wavelengths fit into the region. So the k values take the same form, k = nπ/L.
The dispersion relation, however, is different. It was simply ω = ck for waves on a
string, whereas it is ħω = ħ²k²/2m for the V(x) = 0 region of the infinite well. But as in the above case of the constant potential, this difference affects only the rate at which the waves oscillate in time. It doesn't affect the spatial shape, which is determined by the
wavenumber k. The wavefunctions for the lowest four energies are shown in Fig. 2 (the vertical separation between the curves is meaningless). These look exactly like the normal modes in the "both ends fixed" case in Fig. 24 in Chapter 4.

[Figure 2: the wavefunctions ψ(x) for n = 1, 2, 3, 4 between x = 0 and x = L.]
The corresponding energies are shown in Fig. 3. Since E ∝ ω = (ħ/2m)k² ∝ n², the gap between the energies grows as n increases. Note that the energies in the case of a string are also proportional to n², because although ω = ck ∝ n, the energy is proportional to ω² (because the time derivative in Eq. (4.50) brings down a factor of ω). So Figs. 2 and 3 both apply to both systems. The difference between the systems is that a string has ω ∝ √E, whereas the quantum mechanical system has ω ∝ E.

[Figure 3: the energies E_n for n = 1, 2, 3, 4, equal to 1, 4, 9, 16 in units of π²ħ²/(2mL²).]
There is no n = 0 state, because from Eq. (18) this would make ψ be identically zero.
That wouldn’t be much of a state, because the probability would be zero everywhere. The
lack of an n = 0 state is consistent with the uncertainty principle (see Section 10.4 below), because such a state would have ∆x∆p = 0 (since ∆x < L, and ∆p = 0 because n = 0 ⇒ k = 0 ⇒ p = ħk = 0), which would violate the principle.
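As a numerical cross-check of Eq. (18) (a sketch added here, not something from the text: Python/NumPy, ħ = m = L = 1 for convenience, and a standard finite-difference approximation of the second derivative), diagonalizing the Hamiltonian on a grid with ψ = 0 at the walls reproduces the quantized energies n²π²ħ²/(2mL²):

import numpy as np

hbar = m = L = 1.0
N = 1000                                   # number of interior grid points
dx = L / (N + 1)

# -(hbar^2/2m) d^2/dx^2 as a tridiagonal matrix, with psi = 0 at x = 0 and x = L
coeff = hbar**2 / (2 * m * dx**2)
H = coeff * (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1))

E_numeric = np.linalg.eigvalsh(H)[:4]
E_exact = np.array([n**2 * np.pi**2 * hbar**2 / (2 * m * L**2) for n in (1, 2, 3, 4)])
print(E_numeric)   # ~4.93, 19.74, 44.41, 78.95
print(E_exact)     # pi^2/2 times 1, 4, 9, 16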
10.3.3 Finite square well
Things get more complicated if we have a finite potential well. For future convenience, we’ll
let x = 0 be located at the center of the well. If we label the ends as ±a, then V (x) is given
by
V(x) = 0 for |x| ≤ a,  and  V(x) = V0 for |x| > a.    (19)
This is shown in Fig. 4. Given V0, there are two basic possibilities for the energy E:
[Figure 4: the finite square well; V = 0 for |x| ≤ a and V = V0 for |x| > a.]
• E > V0 (unbound state): From Eq. (11), the wavenumber k takes the general form of √(2m(E − V(x)))/ħ. This equals √(2mE)/ħ inside the well and √(2m(E − V0))/ħ outside. k is therefore real everywhere, so ψ(x) is an oscillatory function both inside and outside the well. k is larger inside the well, so the wavelength is shorter there. A possible wavefunction might look something like the one in Fig. 5. It is customary to draw the ψ(x) function on top of the E line, although this technically has no meaning because ψ and E have different units.

[Figure 5: a typical unbound-state wavefunction, oscillatory everywhere, drawn relative to the line at energy E > V0.]
The wavefunction extends infinitely in both directions, so the particle can be anywhere.
Hence the name “unbound state.” We’ve drawn an even-function standing wave in
Fig. 5, although in general we’re concerned with traveling waves for unbound states.
These are obtained from superpositions of the standing waves, with a phase thrown
in the time dependence. For traveling waves, the relative sizes of ψ(x) in the different
regions depend on the specifics of how the problem is set up.
• 0 < E < V0 (bound state): The wavenumber k still equals √(2mE)/ħ inside the well and √(2m(E − V0))/ħ outside, but now that latter value is imaginary. So ψ is an oscillatory
function inside the well, but an exponential function outside. Furthermore, it must
be an exponentially decaying function outside, because otherwise it would diverge at
x = ±∞. Since the particle has an exponentially small probability of being found
far away from the well, we call this a “bound state.” We’ll talk more below about
the strange fact that the probability is nonzero in the region outside the well, where
E < V (x).
There is also the third case where E = V0, but this can be obtained as the limit of the
other two cases (more easily as the limit of the bound-state case). The fourth case,
E < 0, isn’t allowed, as we discussed at the end of Section 10.3.1.
In both of these cases, the complete solution for ψ(x) involves solving the boundary
conditions at x = ±a. The procedure is the same for both cases, but let’s concentrate on
the bound-state case here. The boundary conditions are given by the following theorem.
Theorem 10.1 If V(x) is everywhere finite (which is the case for the finite square well), then both ψ(x) and ψ′(x) are everywhere continuous.

Proof: If we solve for ψ′′ in Eq. (9), we see that ψ′′ is always finite (because V(x) is always finite). This implies two things. First, it implies that ψ′ must be continuous, because if ψ′ were discontinuous at a given point, then its derivative ψ′′ would be infinite there (because ψ′ would make a finite jump over zero distance). So half of the theorem is proved.

Second, the finiteness of ψ′′ implies that ψ′ must also be finite everywhere, because if ψ′ were infinite at a given point (excluding x = ±∞), then its derivative ψ′′ would also be infinite there (because ψ′ would make an infinite jump over a finite distance).

Now, since ψ′ is finite everywhere, we can repeat the same reasoning with ψ′ and ψ that we used with ψ′′ and ψ′ in the first paragraph above: Since ψ′ is always finite, we know that ψ must be continuous. So the other half of the theorem is also proved.
Having proved this theorem, let’s outline the general strategy for solving for ψ in the
E < V0 case. The actual task of going through the calculation is left for Problem 10.2. The
calculation is made much easier with the help of Problem 10.1 which states that only even
and odd functions need to be considered.
If we let k ≡ iκ outside the well, then we have κ = √(2m(V0 − E))/ħ, which is real and
positive since E < V0. The general forms of the wavefunctions in the left, middle, and right
regions are
x < −a :   ψ1(x) = A1e^{κx} + B1e^{−κx},
−a < x < a :   ψ2(x) = A2e^{ikx} + B2e^{−ikx},
x > a :   ψ3(x) = A3e^{κx} + B3e^{−κx},    (20)
where
k = √(2mE)/ħ,   and   κ = √(2m(V0 − E))/ħ.    (21)
We’ve given only the x dependence in these wavefunctions. To obtain the full wavefunction
ψ(x, t), all of these waves are multiplied by the same function of t, namely e^{−iωt} = e^{−iEt/ħ}.
We now need to solve for various quantities. How many unknowns do we have, and how
many equations/facts do we have? We have seven unknowns: A1, A2, A3, B1, B2, B3, and
E (which appears in k and κ). And we have seven facts:
• Four boundary conditions at x = ±a, namely continuity of ψ and ψ′ at both points.
• Two boundary conditions at x = ±∞, namely ψ = 0 in both cases.
• One normalization condition, namely ∫_{−∞}^{∞} |ψ|² dx = 1.
As we mentioned at the end of Section 10.3.1, the boundary conditions at ±∞ quickly
tell us that B1 and A3 equal zero. Also, in most cases we’re not concerned with the overall
normalization constant (the usual goal is to find E), so we can ignore the normalization
condition and just find all the other constants in terms of, say, A1. So were’re down to
four equations (the four boundary conditions at x = ±a), and four unknowns (A2, B2, B3,
and E). Furthermore, the even/odd trick discussed in Problem 10.1 cuts things down by a
factor of 2, so we’re down to two equations and two unknowns (the energy E, along with
one of the coefficients), which is quite manageable. The details are left for Problem 10.2,
but let’s get a rough idea here of what the wavefunctions look like.
It turns out that the energies and states are again discrete and can be labeled by an
integer n, just as in the infinite-well case. However, the energies don’t take the simple form
in Eq. (18), although they approximately do if the well is deep. Fig. 6 shows the five states for a well of a particular depth V0. We've drawn each wave relative to the line that represents the energy E_n. Both ψ and ψ′ are continuous at x = ±a, and ψ goes to 0 at x = ±∞.

[Figure 6: the five bound states E1 through E5 of a finite well of depth V0, each wavefunction drawn relative to its own energy line, with V = E = 0 as the baseline.]
We’ve chosen the various parameters (one of which is the depth) so that there are exactly
five states (see Problem 10.2 for the details on this). The deeper the well, the more states
there are.
Consistent with Eq. (20), ψ is indeed oscillatory inside the well (that is, the curvature
is toward the x axis), and exponentially decaying outside the well (the curvature is away
from the x axis). As E increases, Eq. (21) tells us that k increases (so the wiggles inside
the well have shorter wavelengths), and also that κ decreases (so the exponential decay is
slower). These facts are evident in Fig. 6. The exact details of the waves depend on various
parameters, but the number of bumps equals n.
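To see the discrete energies emerge concretely, here is a numerical sketch (Python/NumPy, with ħ = m = a = 1 and a well depth V0 = 20 chosen purely for illustration). For this geometry, the boundary-condition matching described above reduces to the standard conditions κ = k tan(ka) for the even states and κ = −k cot(ka) for the odd states; since that algebra is deferred to Problem 10.2, treat these conditions as quoted rather than derived here. A simple scan-and-bisect root finder then gives the allowed energies:

import numpy as np

hbar = m = a = 1.0
V0 = 20.0    # well depth; with these numbers there happen to be exactly five bound states

def k_in(E):   return np.sqrt(2 * m * E) / hbar           # wavenumber inside the well, Eq. (21)
def kappa(E):  return np.sqrt(2 * m * (V0 - E)) / hbar    # decay constant outside, Eq. (21)

# Matching conditions, rewritten so they have no tan/cot singularities:
#   even states: k sin(ka) - kappa cos(ka) = 0   (equivalent to kappa =  k tan(ka))
#   odd  states: k cos(ka) + kappa sin(ka) = 0   (equivalent to kappa = -k cot(ka))
def f_even(E): return k_in(E) * np.sin(k_in(E) * a) - kappa(E) * np.cos(k_in(E) * a)
def f_odd(E):  return k_in(E) * np.cos(k_in(E) * a) + kappa(E) * np.sin(k_in(E) * a)

def energies(f, n_grid=20000, n_bisect=60):
    E = np.linspace(1e-9, V0 - 1e-9, n_grid)
    v = f(E)
    roots = []
    for i in np.where(np.sign(v[:-1]) != np.sign(v[1:]))[0]:
        lo, hi = E[i], E[i + 1]
        for _ in range(n_bisect):                # plain bisection on the bracketed root
            mid = 0.5 * (lo + hi)
            if np.sign(f(mid)) == np.sign(f(lo)):
                lo = mid
            else:
                hi = mid
        roots.append(0.5 * (lo + hi))
    return roots

print(sorted(energies(f_even) + energies(f_odd)))   # five discrete energies between 0 and V0

Increasing V0 adds more roots, in line with the statement above that deeper wells hold more states; in the very deep limit the energies approach the infinite-well values of Eq. (18) with L = 2a.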