Mass and Binding Energy of the \(P_{c\bar c}\) Particle
All right, two minutes. All of the problems are one-liners: you can solve them in one line, but you have to know what to do. Let's quickly discuss question number one. First, the mass of the state. Neglecting inelastic channels, this particle is formed from the \(\Sigma_c^+\) and \(\bar D^0\) constituents.
The mass of the \(P_c^+\) particle is roughly the sum of the masses of its constituents:
\[ M_{P_c^+} \approx M_{\Sigma_c^+} + M_{\bar D^0} \]
The binding energy is defined as the mass difference between the constituents and the particle itself:
\[ E_{\text{binding}} = (M_{\Sigma_c^+} + M_{\bar D^0}) - M_{P_c^+} \]
The notation has been updated from \(P_c\) to \(P_{c\bar c}\) to indicate the charm quark content explicitly.
The \(P_{c\bar c}\) particle is a hadronic molecule, meaning it is a bound state of \(\Sigma_c^+\) and \(\bar D^0\). The binding energy is small, so the state sits near the threshold of its constituents.
The heavy quarks (charm) are explicitly listed in the notation because they are conserved quantities in decays, while light quarks can be inferred from the context.
For example:
- \(\Sigma_c\) indicates a charmed baryon with two light quarks (e.g., \(uu\), \(ud\), or \(dd\)).
- \(P_{c\bar c}\) denotes a pentaquark state with a charm quark and a charm antiquark plus three light quarks.
In this case, the \(P_c^+\) state has the light-quark content \(uud\) (the \(u\) and \(d\) from the \(\Sigma_c^+\) plus the \(u\) from the \(\bar D^0\)).
The mass of the \(P_c^+\) is approximately the threshold mass of \(\Sigma_c^+\) and \(D^0\), since the binding energy is small. This is why we say the state “sits at the threshold.”
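As a quick numeric illustration of the two formulas above (a minimal sketch: the constituent masses are approximate, and the pentaquark mass used here is a hypothetical placeholder a few MeV below threshold, not a quoted measurement):

```python
# Numeric illustration of the threshold and binding-energy formulas above.
# Constituent masses are approximate (MeV); the pentaquark mass is a
# hypothetical placeholder a few MeV below threshold, not a measured value.
M_SIGMA_C_PLUS = 2453.0   # approximate Sigma_c+ mass, MeV
M_DBAR0 = 1865.0          # approximate anti-D0 mass, MeV

threshold = M_SIGMA_C_PLUS + M_DBAR0      # ~4318 MeV
M_pentaquark = 4310.0                     # hypothetical, just below threshold
E_binding = threshold - M_pentaquark      # (M_Sigma_c + M_Dbar) - M_P

print(f"threshold = {threshold:.0f} MeV, binding energy = {E_binding:.0f} MeV")
```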
Decay Mechanisms and Lifetime of the \(P_{c\bar c}\) Particle
The binding energy is of the order of 10 MeV, so the state sits just below the hadronic threshold. The mass of the \(\Sigma_c^+\) is around 2.45 GeV, and the mass of the \(\bar D^0\) is around 1.87 GeV, so the sum is roughly 4.3 GeV. This is the energy range where such states would be searched for.
For the quantum numbers, assuming the constituents are bound in an \(S\)-wave, the expected spin-parity combinations are \(1/2^-\) and \(3/2^-\). This implies two expected states: one with spin-parity \(1/2^-\) and another with \(3/2^-\).
There is a lower bound on the lifetime of this particle, which arises from the binding of its constituents. Any decay must conserve energy, and in the rest frame of the particle the total energy available is just its mass. Since the \(P_c^+\) lies below the \(\Sigma_c^+ \bar D^0\) threshold, it cannot simply fall apart into its constituents through the strong interaction.
The possible decay channels are:
1. Electromagnetic annihilation of the \(c\bar{c}\) pair; this is highly suppressed because the electromagnetic interaction is much weaker than the strong one.
2. Strong annihilation of the \(c\bar{c}\) pair into lighter quarks; this is also suppressed for quarks as heavy as charm. The heavier the quarks, the smaller the annihilation probability.
The dominant decay mechanism is the weak decay of one of the charm quarks. For example, one charm quark can transition to an \(s\)-quark by emitting a \(W\) boson:
\[ c \to s + W^+ \]
This is the primary decay channel for the \(P_c^+\). The lifetime can be estimated from the weak decay rate of the charm quark.
There is also a transition channel involving the meson component: in a molecule containing a \(\bar D^*\), the \(\bar D^*\) can transition to a \(\bar D\) by emitting a \(\pi^0\) or a photon:
\[ \bar D^* \to \bar D + \pi^0 \quad \text{(or a photon)} \]
If the weak decay is suppressed, this transition is the most likely channel. Inside the bound state its width is smaller than the width of a free \(\bar D^*\), so the lifetime of the bound state is correspondingly longer.
The lifetime of the \(P_c^+\) is thus constrained by the weak decay of its charm-quark constituents or by transitions of its meson component. The exact lifetime depends on the dominant decay channel.
Binding Energy and Decay Suppression in Particle Systems
The width of such a bound state is smaller than the width of the free \(\bar D^*\), so its lifetime is longer than that of the \(\bar D^*\) meson. When two stable particles combine, the resulting bound state can live indefinitely. For example, the deuteron is made of a proton and a neutron. While the proton is stable and the free neutron has a long but finite lifetime, the deuteron itself is stable because the neutron is bound so tightly that its beta decay is no longer energetically allowed.
The binding mechanism suppresses the phase space available for the decay, thereby increasing the particle's lifetime. For instance, a \(\bar D^*\) inside such a bound state would live longer than an isolated \(\bar D^*\) meson. The width \(\Gamma\) of the bound state is smaller than the width of the free \(\bar D^*\), and since the lifetime \(\tau\) is inversely proportional to the width:
\[ \tau \propto \frac{1}{\Gamma} \]
the bound state has a longer lifetime.
A similar effect occurs with the \(\Sigma_c^* \bar D\) system. Instead of fully dissociating, it may transition to a lower-lying configuration, such as \(\Sigma_c \bar D \pi\) or \(\Sigma_c \bar D \gamma\), though this costs a significant amount of energy. The binding energy and the suppression of decays are actively researched topics, with frequent discussions at conferences.
The suppression of decay channels due to binding energy leads to longer-lived states, as seen in deuteron and certain mesonic systems. The exact lifetime depends on the available phase space and decay mechanisms.
The \(\Sigma_c^+ \bar D^0\) molecule can exist, but its stability depends on whether it decays weakly or through radiative transitions. The dominant decay mechanism for such systems is often the weak decay, where a charm quark transitions to a strange quark via \(W\) boson emission:
\[ c \to s + W^+ \]
Alternatively, transitions like \(\bar D^* \to \bar D + \pi^0\) (or photon emission) can occur, with a smaller width than that of the free \(\bar D^*\), leading to a longer lifetime for the bound state.
Kinematic Variables and Invariant Masses in Particle Reactions
For any reaction involving a four-particle system (A, B, C, X), you can define invariant variables that characterize the kinematics. The first variable is the invariant mass squared of the combined system of particles A and B:
\[ s = (p_A + p_B)^2 \]
Here, \(\sqrt{s}\) is the mass of the system of particles A and B. Similarly, \(t\) is the mass squared of the system with particles B and C, and \(u\) is the mass squared of the system with particles A and C.
Only two variables are needed to fully characterize the kinematics of a four-particle process, due to counting degrees of freedom. These can be any pair among \(s\), \(t\), or \(u\), as they are related by the constraint:
\[ s + t + u = m_A^2 + m_B^2 + m_C^2 + m_X^2 \]
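As a sanity check of this constraint, here is a small sketch (my own illustration, not from the lecture): it builds arbitrary four-momenta for the configuration in which \(X\) is incoming and \(A\), \(B\), \(C\) are outgoing, computes the pair invariants, and verifies the sum rule. All masses and momenta are placeholder values.

```python
# Sanity check of s + t + u = m_A^2 + m_B^2 + m_C^2 + m_X^2 for X -> A B C,
# with the pair-invariant-mass definitions used in this section.
# All masses and momenta are arbitrary placeholder values (GeV).
import numpy as np

def mass_sq(p):
    """Minkowski square of a four-vector p = (E, px, py, pz)."""
    return p[0]**2 - np.dot(p[1:], p[1:])

mA, mB, mX = 0.14, 0.50, 3.0
pA3 = np.array([0.3, 0.1, -0.2])
pB3 = np.array([-0.1, 0.4, 0.25])
pA = np.array([np.sqrt(mA**2 + pA3 @ pA3), *pA3])
pB = np.array([np.sqrt(mB**2 + pB3 @ pB3), *pB3])

pX = np.array([mX, 0.0, 0.0, 0.0])        # parent X at rest
pC = pX - pA - pB                          # C takes the remaining four-momentum
mC2 = mass_sq(pC)                          # m_C^2 follows from momentum conservation

s = mass_sq(pA + pB)
t = mass_sq(pB + pC)
u = mass_sq(pA + pC)
print(s + t + u, mX**2 + mA**2 + mB**2 + mC2)   # the two values coincide
```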
The scattering amplitude (or transition amplitude) depends on only two of these variables, such as \(s\) and \(t\), or \(s\) and \(u\). This amplitude is defined on the plane of kinematic variables, with different domains corresponding to different physical processes.
To analyze the kinematics, we examine the signs of \(s\), \(t\), and \(u\). If the invariant mass squared of a two-particle system is positive and above threshold, it corresponds to a physically realizable state; if it is negative (e.g., \(t < 0\) or \(u < 0\)), the exact masses of the particles determine the allowed kinematic ranges.
For example, in a scattering process, \(s > 0\) typically corresponds to the center-of-mass energy squared, while \(t < 0\) or \(u < 0\) may describe momentum transfer in a spacelike regime. The exact placement of kinematic boundaries requires evaluating the masses of the involved particles.
The Mandelstam variables \(s\), \(t\), and \(u\) are not independent — only two are needed to describe the kinematics of a four-particle reaction. The choice depends on the physical process being studied.
The matrix element or scattering amplitude is a function of these variables, and its behavior across different kinematic domains reveals the underlying dynamics of the reaction. For instance, poles in the amplitude may indicate resonant states, while discontinuities reflect thresholds for particle production.
Definition and Properties of Kinematic Variables S, T, and U
The kinematic variables \(t\) and \(u\) are defined analogously as:
\[ t = (p_B + p_C)^2, \qquad u = (p_A + p_C)^2 \]
The variable \(s\) must be positive and greater than the threshold value:
\[ s > (m_A + m_B)^2 \]
This is because \(s\) is the invariant mass squared of a system of two real particles, which is at least the square of the sum of their masses. If the particles have momentum in their common rest frame, the energy of the system exceeds the sum of the masses.
The same logic applies to \(t\) and \(u\): in their respective physical regions these variables are also invariant masses squared of two-particle systems and must satisfy:
\[ t > (m_B + m_C)^2, \qquad u > (m_A + m_C)^2 \]
The sign conventions for these variables depend on the direction of the particle momenta relative to the interaction “blob.” In this definition, the particles \(A\), \(B\), \(C\) are outgoing while \(X\) is incoming, which fixes the signs.
The allowed kinematic domain is constrained by these thresholds: \(s\) must exceed \((m_A + m_B)^2\), and \(t\) and \(u\) must exceed their own thresholds in their channels. The only permitted region in the kinematic plane corresponds to physical configurations where the relevant condition holds.
The variables \(s\), \(t\), and \(u\) are not independent: only two are needed to fully describe the kinematics of a four-particle reaction due to the constraint:
\[ s + t + u = m_A^2 + m_B^2 + m_C^2 + m_X^2 \]
The scattering amplitude depends on two of these variables, such as \(s\) and \(t\), or \(s\) and \(u\), and is defined on the plane of kinematic variables with distinct physical domains.
Kinematic Domains and Physical Contours in Scattering Reactions
The allowed domain for the first kinematic region is where the corresponding invariant is the physical mass squared of a two-particle system, i.e., above threshold. To discuss the kinematics of cross-channel reactions using the same variables, one must modify the definitions of the invariants by flipping momenta to minus momenta for the particles that move to the other side of the reaction.
As mentioned earlier, you could define the momentum directions in two different ways, and the minus signs would then appear in different places. The physical region for the cross channel occurs when \(t\) is above its threshold, since \(t\) then represents the squared mass of a two-particle system. In this scenario \(s\), which pairs particles \(A\) and \(B\), is no longer a physical two-particle mass because those particles sit on different sides of the reaction, and one finds \(s < 0\) and \(u < 0\).
This \(t\)-channel domain is therefore roughly where \(t > 0\) while \(s < 0\) and \(u < 0\); here \(t\) is the defining variable. Another region of interest is where \(u\) exceeds its threshold, corresponding to the \(u\)-channel process involving particles \(A\) and \(C\).
To identify the true physical contours for kinematics, one must calculate the physical ranges of scattering variables. For example, restricting the cosine of the scattering angle to the range \([-1, 1]\) reveals that not all points in the kinematic domain are allowed — only a specific region satisfies the constraints.
The true boundary for scattering is often determined by these restrictions. The variables \(s\), \(t\), and \(u\) are interdependent, with the constraint:
\[ s + t + u = m_A^2 + m_B^2 + m_C^2 + m_X^2 \]
Only two of these variables are needed to fully describe the kinematics of a four-particle reaction. The scattering amplitude depends on combinations such as \((s, t)\) or \((s, u)\), defined on the plane of kinematic variables with distinct physical domains.
Quantum Transition Amplitudes and Analyticity in Scattering Processes
The true boundary for scattering is given by a polynomial condition in the kinematic variables (the boundary function discussed below). In the center is the Dalitz plot, which represents the decay region of the physical domain. There are four regions corresponding to different physical reactions, and it's crucial to recognize that a single amplitude, one matrix element, describes all four.
If you know this function precisely over its domain, it yields a complex transition amplitude (some complex number, e.g., \(1 + 3i\)) for any point. The same matrix element can compute transition amplitudes for different processes, such as decays. This framework works exceptionally well in QED. For instance, the same matrix element describes:
- Compton scattering (photon-electron → photon-electron)
- Electron-positron annihilation into two photons
- Two-photon production of an electron-positron pair
In hadron physics, however, the situation is more complicated due to the lack of a perturbative framework. When modeling the Dalitz plot, we approximate the amplitude by a finite set of resonances, but this leads to unphysical infinities when the amplitude is continued across the rest of its domain. The divergence occurs because we truncate the resonance spectrum; the physics demands an infinite number of resonances to maintain analyticity.
The relationship between the amplitude and its domain implies that infinite sums of resonances are necessary to properly connect different kinematic regions, as seen in the lines of the Dalitz plot.
The scattering amplitude’s behavior in this domain requires operating with infinite sums, as finite approximations fail to capture the full analytic structure.
Analytic Functions and Resonance Properties in Scattering Theory
There has been an effort, spanning 30 to 50 years, to find a set of functions that works nicely everywhere, is analytic, and is reasonable. The most difficult part is constructing something that also accurately describes the data. You may have heard of Regge theory, which is one approach to this problem.
The goal is to derive a complex function with all the desired properties: it should work well in the scattering domains and in the decay domain. However, this program still lacks an exact treatment of resonance properties. For instance, Regge theory in its simplest form predicts resonances with zero width; a key development is implementing resonances with finite widths.
A simple function, the Källén (triangle) function, describes contours of the physical domain.
The Källén function is given by:
\[ \lambda(x, y, z) = x^2 + y^2 + z^2 - 2xy - 2yz - 2zx \]
Solving \(\lambda(s, t, u) = 0\) yields the boundaries of the physical domain. For a fixed value of \(s\) (e.g., \(s = -50\) or \(s = +20\)), this equation provides two solutions. The function is straightforward to implement computationally.
The Kibble function extends this by combining three Källén functions, each corresponding to a different channel:
- \(\lambda_1\) for the first channel
- \(\lambda_2 = \lambda(t, m_X^2, m_X^2)\) for the second channel
- \(\lambda_3 = \lambda(u, m_X^2, m_B^2)\) for the third channel
Here, \(\lambda\) denotes the standard Källén triangle function.
You can visualize these contours using tools like Wolfram Alpha by plotting \(\lambda(s, t, u) = 0\) for fixed \(s\).
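A minimal plotting sketch along the same lines (my own illustration; numpy and matplotlib are assumed, and all numbers are arbitrary placeholders): it draws the zero contour of the Källén function at fixed \(s\), which is how the boundary curves described above can be visualized.

```python
# Plot the zero contour of the Kallen (triangle) function at fixed s,
# following the prescription above; all values are arbitrary placeholders.
import numpy as np
import matplotlib.pyplot as plt

def kallen(x, y, z):
    return x**2 + y**2 + z**2 - 2*x*y - 2*y*z - 2*z*x

s_fixed = 20.0
t = np.linspace(-60.0, 60.0, 400)
u = np.linspace(-60.0, 60.0, 400)
T, U = np.meshgrid(t, u)

plt.contour(T, U, kallen(s_fixed, T, U), levels=[0.0], colors="k")
plt.xlabel("t")
plt.ylabel("u")
plt.title("lambda(s, t, u) = 0 at fixed s")
plt.show()
```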
This approach provides a practical way to map the physical domain while maintaining analyticity, though challenges remain in fully capturing resonance dynamics.
Introduction to Unitarity and Probability Conservation in Scattering Amplitudes
The best book on particle kinematics is written by two authors with unspellable family names. The book is excellent, and if you get a chance to read it, you will learn everything about particle kinematics, particularly the peculiar properties of the Kibble function and the Källén function.
Unitarity is a constraint on scattering amplitudes. In high-energy physics, we do not compute scattering amplitudes from first principles but rather model them, guided by principles that dictate what these amplitudes can be. You cannot write an arbitrary expression for the amplitude that fits the data—there are constraints, one of which is probability conservation.
Probability conservation translates into a mathematical statement about the amplitude, known as unitarity. You may have seen this earlier in particle scattering, particularly in the optical theorem, which relates the imaginary part of the forward amplitude to the total cross section: the imaginary part is proportional to the total cross section.
This relationship is a direct consequence of unitarity and probability conservation.
The optical theorem is a key example of how unitarity manifests in scattering theory. It ensures that the amplitude respects fundamental physical principles, such as the conservation of probability.
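For concreteness, with one common normalization of the amplitude \(A\) (conventions differ by constant factors, and the lecture does not fix them), the optical theorem reads:
\[ \operatorname{Im} A(s, t = 0) = 2\, p\, \sqrt{s}\; \sigma_{\text{tot}}(s) \]
where \(p\) is the center-of-momentum (breakup) momentum introduced below.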
For deeper insights, refer to the mentioned book on particle kinematics, which covers these concepts in detail, including the Kibble and Källén functions.
Analytic Structure and Singularities in Scattering Amplitudes
The principle of unitarity, combined with probability conservation, leads to a powerful constraint on scattering amplitudes. This alone allows us to derive a reasonable approximation for the amplitude that captures resonance phenomena, such as the bumps observed in the spectrum.
Unitarity also determines the analytic structure of the amplitude, including the types of singularities (e.g., cuts, branch points) and their locations in the complex plane. This was briefly discussed in the last lecture, where we examined the properties of analytic functions and their singularities.
The location of these singularities arises from the fact that we are dealing with physical scattering processes, where unitarity imposes specific conditions on the amplitude’s behavior in the complex energy plane.
Unitarity and Scattering Amplitudes
Scattering amplitudes are real analytic functions, and unitarity, through the imaginary part, determines the locations of the cuts in the complex plane. Let's proceed step by step, starting with a discussion of the scattering amplitude and partial waves.
For efficiency, I will first state the key result and then derive it. Unitarity imposes constraints on the full amplitude \(A\), describing a scattering process. The first unitarity condition is:
\[ A - A^{\dagger} = i \int A^{\dagger} A \, d\Phi \]
Terms:
- \(A\): Scattering amplitude.
- \(A^{\dagger}\): Hermitian conjugate of \(A\).
- \(i\): Imaginary unit.
- \(\int d\Phi\): Integration over the phase space of intermediate states.
Diagrammatically, this represents the relationship between the amplitude and its conjugate. For partial waves, the amplitude \(A\) depends on a single variable, simplifying the phase space integration. The expression reduces to:
\[ A - A^{*} = i \rho A^{*} A \]
Here, \(\rho\) encodes the simplified phase space factor (its precise normalization is a convention). The difference between the amplitude and its complex conjugate is \(2i\) times the imaginary part of the amplitude, so unitarity ties the imaginary part to the phase space.
The next step is to derive these expressions systematically, starting from the scattering amplitude and partial wave decomposition. The imaginary part of the amplitude plays a crucial role in determining the analytic structure, including cuts and branch points in the complex plane.
Kinematics and Scattering Angle in Elastic 2-to-2 Process
We are dealing with the 2-to-2 elastic scattering process, where the initial and final particles are the same. The diagram shows an interaction blob, representing some unspecified dynamics. Below it is the kinematic representation in the center-of-momentum frame, where the total momentum is zero.
In this frame, even if the particles have different masses, the magnitudes of their momenta are equal and given by the breakup momentum \(p \equiv |\vec p_1| = |\vec p_2| = \frac{\Lambda^{1/2}(s, m_1^2, m_2^2)}{2\sqrt{s}}\), where \(\Lambda\) is the Källén function:
\[ \Lambda(x, y, z) = x^2 + y^2 + z^2 - 2xy - 2xz - 2yz \]
After the interaction, the masses and breakup momenta remain unchanged—only the scattering angle \(\theta\) changes. The observable manifestation of the interaction dynamics is the angular distribution of the scattered particles.
For a 2-to-2 process, once the energy is fixed, the only remaining variable is the scattering angle. This means the differential cross section \(\frac{d\sigma}{d\Omega}\) encodes all the interaction physics in its angular dependence.
The Källén function \(\Lambda\) ensures kinematic consistency, guaranteeing real-valued momenta in physical regions.
The scattering amplitude depends on two variables: the center-of-mass energy \(\sqrt{s}\) and the angle \(\theta\). The angular distribution reflects the underlying interaction mechanism, making it a key observable in scattering experiments.
Definition and Properties of the Scattering Amplitude
The scattering amplitude is defined through the matrix element of the transition operator \(T\) between the two-particle initial state on the right and the final state on the left:
\[ \langle p_1', p_2' | T | p_1, p_2 \rangle = (2\pi)^4 \, \delta^4(p_1 + p_2 - p_1' - p_2') \, A(s, t) \]
The four-dimensional delta function \(\delta^4(p_1 + p_2 - p_1' - p_2')\) ensures conservation of energy and momentum.
To parameterize the scattering, we use the energy of the system and the scattering angle \(\theta\). The state vectors \(|p_1, p_2\rangle\) represent the two-particle state, and the scattering amplitude depends on the angle \(\theta\).
The identity operator for the two-body phase space is given by:
\[ \int \frac{d^3 p_1}{(2\pi)^3 2E_1} \frac{d^3 p_2}{(2\pi)^3 2E_2} |p_1, p_2\rangle \langle p_1, p_2| \]
This phase space integration covers all possible directions in the sphere, parameterized by the solid angle \(d\Omega = d\cos\theta \, d\phi\). The normalization ensures that the identity operator acts correctly, preserving the state under integration.
The scattering amplitude must satisfy unitarity, which imposes constraints on \(A(s, t)\). The differential cross section \(\frac{d\sigma}{d\Omega}\) encodes the angular dependence of the interaction, making it a key observable in scattering experiments.
The Källén function \(\Lambda(s, m_1^2, m_2^2)\) ensures kinematic consistency, guaranteeing real-valued momenta in physical regions.
In the center-of-momentum frame, the breakup momentum is:
\[ p_1 = p_2 = \frac{\Lambda^{1/2}(s, m_1^2, m_2^2)}{2\sqrt{s}} \]
For elastic scattering, the masses and breakup momenta remain unchanged—only the scattering angle \(\theta\) varies. The scattering amplitude \(A(s, t)\) thus depends on \(\sqrt{s}\) and \(\theta\), fully characterizing the interaction dynamics.
Unitarity Constraint and Scattering Operator
Unitarity comes from the condition that \(S^{\dagger} S = S S^{\dagger} = I\), where \(S\) is the S-matrix and \(S^{\dagger}\) is its adjoint.
For distinguishable scalar particles there are no exchange (cross) terms to worry about. The probability statement is about the full scattering operator being unitary. Part of the scattering operator is the identity, representing no interaction, while the remaining part introduces the interaction operator \(T\), which defines the scattering amplitude.
In field theory, we always deal with the non-trivial part of the amplitude, which is an addition to the identity. The easiest way to proceed is by subtracting the identity from the full scattering operator to isolate \(T\).
This leads to the unitary constraint, which is critical because it constrains partial waves and scattering amplitudes. To see this explicitly, we need to analyze how unitarity restricts the form of \(A(s, t)\).
Derivation of the Unitarity Condition
The unitarity condition constrains our scattering amplitude. To see this, we start from \(S^{\dagger} S = 1\) with \(S = 1 + iT\) and derive the constraint on \(T\).
Let's verify the algebra:
- The product \((1 - iT^\dagger)(1 + iT)\) expands to \(1 + iT - iT^\dagger + T^\dagger T\).
- Setting this equal to \(1\) gives \(i(T - T^\dagger) + T^\dagger T = 0\).
- Multiplying both sides by \(-i\) yields:
\[ T - T^\dagger = i\, T^\dagger T \]
Now we sandwich this operator relation between the final and initial states and insert the identity operator to split the product into intermediate states. The key step is inserting a complete set of intermediate states, which includes all possible momentum configurations of the intermediate particles (at fixed total energy). This requires integrating over the intermediate momenta:
\[ \langle f | T - T^\dagger | i \rangle = i \sum_m \int \langle f | T^\dagger | m \rangle \langle m | T | i \rangle \, \frac{d^3 p^{(m)}}{(2 \pi)^3 2 E_m} \]
Here, the delta functions enforce energy-momentum conservation:
- The first delta function connects the initial state to the intermediate state: \(P_1 + P_2 = P_1^{(m)} + P_2^{(m)}\).
- The second delta function connects the intermediate state to the final state: \(P_1' + P_2' = P_1^{(m)} + P_2^{(m)}\).
By factoring out the overall delta function that relates the initial and final states, the remaining integration is over the intermediate two-body phase space:
\[ d\Phi = (2\pi)^4 \, \delta^{(4)}\!\big(p_1 + p_2 - p_1^{(m)} - p_2^{(m)}\big) \, \frac{d^3 p_1^{(m)}}{(2\pi)^3 2E_1^{(m)}} \, \frac{d^3 p_2^{(m)}}{(2\pi)^3 2E_2^{(m)}} \]
This leads us to the unitarity constraint for the scattering amplitude:
\[ T - T^\dagger = i \int T^\dagger T \, d\Phi \]
where \(d\Phi\) is the phase space measure. This equation ensures that the probability is conserved, reflecting the unitarity of the \(S\)-matrix.
The scattering operator \(S\) must satisfy \(S S^\dagger = I\), where \(S = 1 + iT\). The non-trivial part of the amplitude is encoded in \(T\), which is isolated by subtracting the identity. This constraint is critical for restricting partial waves and the form of the scattering amplitude \(A(s, t)\).
For distinguishable scalar particles, there are no exchange (cross) terms. The probability statement is about the full scattering operator being unitary, with \(T\) representing the interaction part beyond the identity.
Partial Wave Expansion and Its Properties
We arrive at the first equation:
\[ T - T^\dagger = i \int T^\dagger T \, d\Phi \]
The most general form of this equation, without any constraints, is very similar. Instead of integrating over two-body phase space, you must sum over all intermediate states and integrate over \(n\)-body phase space if \(n\)-body processes are allowed. Diagrammatically, this means the amplitude minus its conjugate must equal the sum over all intermediate states, with integration over all possible configurations.
Instead of working with momentum and energy variables, we can simplify the problem by using angles. A function of angle can be approximated by partial waves, which is a convenient series because it converges well, especially in low-energy physics.
There are constraints on these amplitudes: they cannot be large for high angular momentum \(j\) due to a natural suppression related to the size of hadrons. Contributions from high \(j\) values are small, which is why experiments often require only a few partial waves—two, three, sometimes six, but rarely more than ten. This makes partial wave expansion a very practical approximation.
If the sum is taken to infinity (as in Regge theory), the relation becomes exact.
Another key feature is the partial wave amplitude, which is a function of a single variable. For each partial wave, there is one such function, and each has fixed quantum numbers. A major advantage of partial waves is that they do not influence each other—they are effectively decoupled.
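For reference, one standard form of the expansion just described (the normalization factors are a convention, not fixed by the lecture) is:
\[ A(s, t) = \sum_{J=0}^{\infty} (2J + 1)\, a_J(s)\, P_J(\cos\theta) \]
where each partial-wave amplitude \(a_J(s)\) depends on \(s\) alone and \(P_J\) is a Legendre polynomial.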
Partial Wave Unitarity and Phase Space Relations
Partial waves do not interact with each other—they are decoupled. Since quantum numbers in scattering are conserved, partial waves in the initial state are only related to partial waves in the final state. The unitarity constraint reveals that each partial wave relates only to its own output, with no mixing between different partial waves.
To proceed, we insert the partial-wave expansion into the unitarity relation and use the simplified phase space, which gives:
\[ a_j - a_j^{*} = 2i \operatorname{Im} a_j = 2i\, \rho(s)\, a_j^{*} a_j, \qquad \rho(s) = \frac{p}{8 \pi \sqrt{s}} = \frac{\lambda^{1/2}(s, m_1^2, m_2^2)}{16\pi\, s} \]
Here, \(a_j\) is the partial-wave amplitude, \(s\) is the Mandelstam variable (the center-of-mass energy squared), \(p\) is the breakup momentum, and \(\rho\) is the two-body phase space factor (its overall normalization is a convention).
A key simplification involves a “magic formula” (the addition theorem for the Wigner functions; the derivation is omitted but available in references) that connects the initial and final directions through the intermediate state. The initial direction can be taken along the axis (angle zero), while the final state is reached through the transition matrix \(T\). For spinless external particles, the Legendre polynomial \(P_J(\cos \theta)\) is equivalent to the Wigner function \(d^J_{00}(\theta)\), and the full \(D\)-function is
\[ D^J_{\lambda\mu}(\phi, \theta, 0) = e^{-i \lambda \phi} \, d^J_{\lambda\mu}(\theta) . \]
This leads to the useful relation
\[ d^J_{00}(\theta_{13}) = \sum_{\lambda} d^J_{0\lambda}(\theta_{12}) \, d^J_{\lambda 0}(\theta_{23}) \, e^{-i \lambda \phi} , \]
which expands the cosine of the angle between the first and last directions into a composition of intermediate angles. Integrating over all intermediate directions then collapses this sum, so the unitarity relation closes within each partial wave.
The final step involves dividing both sides of the partial-wave unitarity relation by \(a_j a_j^{*}\), leading to:
\[ \frac{1}{a_j^{*}} - \frac{1}{a_j} = 2i \rho \quad\Longrightarrow\quad \operatorname{Im} \frac{1}{a_j} = -\rho \]
This reveals that the imaginary part of the inverse partial-wave amplitude is simply \(-\rho\), the phase space factor. This is a remarkable result, as it directly connects the unitarity constraint to the partial wave structure.
The Wigner \(D\)-functions and Legendre polynomials are essential for relating angular dependencies to partial wave expansions.
The decoupling of partial waves simplifies calculations significantly, as each wave can be treated independently within the unitarity constraint.
Modeling the Real Part of the Amplitude
The imaginary part of the inverse partial-wave amplitude is just \(-\rho\). It's amazing because it tells us exactly what the imaginary part of the inverse amplitude looks like. Let's plot it: here is \(s\) on the horizontal axis, and here is the phase space function that fixes \(\operatorname{Im}(1/a_J)\).
The phase space factor is a function that starts from threshold and approaches \(1/(16\pi)\) at infinity. It has a square-root behavior near threshold, \(\propto \sqrt{s - (m_1 + m_2)^2}\), and then flattens out to the constant \(1/(16\pi)\); explicitly, \(\rho(s) = \lambda^{1/2}(s, m_1^2, m_2^2)/(16\pi s)\), since \(\rho\) is built from the breakup momentum of the channel.
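A small numeric sketch of this phase-space factor (assuming the explicit \(\rho(s) = \lambda^{1/2}(s, m_1^2, m_2^2)/(16\pi s)\) form quoted above; the masses are arbitrary placeholders):

```python
# Two-body phase-space factor rho(s) = lambda^(1/2)(s, m1^2, m2^2) / (16 pi s):
# square-root rise at threshold, approaching the constant 1/(16 pi) at large s.
# Masses are placeholder values (GeV).
import numpy as np

def kallen(x, y, z):
    return x**2 + y**2 + z**2 - 2*x*y - 2*y*z - 2*z*x

def rho(s, m1, m2):
    if s <= (m1 + m2)**2:
        return 0.0                      # closed channel below threshold
    return np.sqrt(kallen(s, m1**2, m2**2)) / (16 * np.pi * s)

m1, m2 = 0.14, 0.50
for s in [(m1 + m2)**2 + 0.01, 1.0, 10.0, 1e4]:
    print(f"s = {s:10.2f}   rho = {rho(s, m1, m2):.5f}")
print("asymptotic value 1/(16 pi) =", 1 / (16 * np.pi))
```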
Now, the key point is modeling. We know the imaginary part, but we do not know the real part. The real part of the amplitude is specific to the interaction. So far, everything applies universally—electromagnetic, strong, or gravitational interactions. Any unitary interaction must have an imaginary part related to phase space. However, the real part must be computed or modeled.
Experimentalists extract the real part by analyzing data. The simplest model takes the real part of the inverse amplitude to be some real function; one common option for this real function is a pole.
For example, we can write:
\[ \frac{1}{a_j} = K^{-1} - i \rho \]
Here, \(K\) is a real function (e.g., a pole term), and \(\rho\) is the phase space factor. This is a minimal model in which the real, interaction-dependent part is described by \(K\), taken here to be a single pole.
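A quick numeric check of this minimal model (my own illustration, with arbitrary values): by construction it satisfies the unitarity relations derived above.

```python
# Quick check: with any real K and phase-space factor rho, the model
# 1/a = 1/K - i*rho automatically satisfies the unitarity relations
# Im(1/a) = -rho  and  Im a = rho * |a|^2.  Values are arbitrary.
K, rho_val = 3.7, 0.019
a = 1.0 / (1.0 / K - 1j * rho_val)

print((1.0 / a).imag, -rho_val)          # equal
print(a.imag, rho_val * abs(a) ** 2)     # equal
```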
The real part of the amplitude encodes the dynamics of the interaction, while the imaginary part is constrained by unitarity and phase space.
Introduction to Resonant Phenomena and K Matrix Approach
The amplitude computed assuming that the \(K\) function is a single pole is called the Breit-Wigner amplitude; more precisely, it is a relativistic Breit-Wigner amplitude.
Modeling the amplitude in terms of pole terms in \(K\) is often called the K-matrix approach. Let me plot this amplitude: the absolute value of \(A\) is on the Y-axis, and \(M\) is a parameter that I put in. \(M\) and the numerator \(G\) are both real numbers; \(G\) determines how broad the peak appears, and \(M\) is often called the bare mass, the mass at which the peak appears.
This describes a resonant phenomenon, where particles collide, form an intermediate resonance for a short moment, and then decay.
The simplest resonance amplitude describes a peak at the bare mass \(M\), with the width controlled by \(G\).
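Concretely, a single-pole \(K\)-matrix produces the familiar form (a sketch of the standard construction; I write the pole residue as \(g^2\), playing the role of the numerator \(G\)):
\[ K(s) = \frac{g^2}{M^2 - s} \quad\Longrightarrow\quad a(s) = \frac{1}{K^{-1}(s) - i \rho(s)} = \frac{g^2}{M^2 - s - i\, g^2 \rho(s)} \]
a relativistic Breit-Wigner-like amplitude that peaks near \(s = M^2\), with the width controlled by \(g^2 \rho\).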
It may not be immediately obvious why we call this a resonance phenomenon; imagine that what is inside the interaction blob is an intermediate resonance.
Resonance Phenomenon and Cross-Section Peaks
The term “resonance phenomenon” may not be immediately clear, but consider an experiment where two particles are brought together, and their interaction probability is measured as a function of energy. By colliding particles at varying energies and counting scattering events, you observe a pattern: at certain energies, the interaction probability (cross section) peaks sharply before decreasing again.
This peak indicates the presence of an intermediate resonant state—a particle that briefly forms during the collision. The cross-section plot shows two peaks corresponding to two poles in the \(K\)-matrix, with a zero between them. The zero arises because \(K\) vanishes at that point.
The \(K\)-matrix describes resonant scattering, where poles correspond to resonant states, and zeros indicate destructive interference.
The formula involves the inverse of \(K\):
\[ A = \frac{K}{1 - i \rho K} = \frac{1}{K^{-1} - i \rho} \]
Here, the poles of \(K\) produce the peaks, and \(K\) has a zero between them. When \(K\) crosses zero, the amplitude vanishes, creating a dip in the cross-section. The poles correspond to the resonant (bare) masses, and the zero propagates through the amplitude squared, ensuring the cross-section drops to zero at that point.
The two-peak structure reflects the underlying dynamics of the resonant intermediate state, where the system briefly forms a particle before decaying. The \(K\)-matrix approach captures this by modeling the amplitude in terms of pole terms and zeros.
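A minimal numeric sketch of this two-pole picture (all parameter values are arbitrary illustrations, and numpy/matplotlib are assumed): the amplitude squared shows two peaks near the bare masses and dips to zero where \(K\) crosses zero.

```python
# Two-pole K-matrix amplitude: |A|^2 shows two peaks near the bare masses
# and a dip to zero where K(s) crosses zero.  All parameters are arbitrary
# illustration values (GeV units).
import numpy as np
import matplotlib.pyplot as plt

def kallen(x, y, z):
    return x**2 + y**2 + z**2 - 2*x*y - 2*y*z - 2*z*x

def rho(s, m1=0.14, m2=0.14):
    lam = np.clip(kallen(s, m1**2, m2**2), 0.0, None)
    return np.sqrt(lam) / (16 * np.pi * s)

def amplitude(s, M1=1.0, M2=1.5, g1=0.4, g2=0.4):
    K = g1**2 / (M1**2 - s) + g2**2 / (M2**2 - s)   # two bare poles
    return K / (1.0 - 1j * rho(s) * K)              # A = K / (1 - i rho K)

s = np.linspace(0.5, 3.5, 2000)
plt.plot(s, np.abs(amplitude(s))**2)
plt.xlabel("s")
plt.ylabel("|A|^2")
plt.show()
```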
Propagator and Interaction Terms in Scattering Amplitude
The zero of \(K\) propagates into the amplitude squared, so the cross-section vanishes at that energy. The energies at which the peaks occur are not exactly at the bare masses, because those parameters enter the expression in a complicated form; the peaks appear very close to these values but not exactly at them, which is why they are called “bare masses.”
Dressing these bare masses through the phase-space insertions is analogous to how bare parameters are dressed by the propagator during renormalization. Let's examine the expression:
\[ A = \frac{1}{K^{-1} - i \rho} \]
Expanding this in a Taylor series gives:
\[ A = K + K i \rho K + K i \rho K i \rho K + \dots \]
We see an infinite series of terms, which helps interpret the roles of \(K\) and \(i\rho\). Here, \(i\rho\) arises from the two-particle phase space, representing two particles propagating, while \(K\) is the elementary (point-like) interaction.
Diagrammatically, this can be represented as:
- \(K\) (elementary interaction)
- \(K + K i \rho K + K i \rho K i \rho K + \dots\) (series of interactions with intermediate propagations)
Here, \(\rho\) corresponds to the two-particle phase space, and \(K\) represents the fundamental interaction vertex.
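A tiny numeric sketch (my own illustration, with arbitrary values) confirming that summing this bubble series reproduces the closed form:

```python
# Check that the bubble series K + K(i rho)K + K(i rho)K(i rho)K + ...
# resums to the closed form 1/(K^{-1} - i*rho).  Arbitrary values with
# |rho*K| < 1 so the geometric series converges quickly.
K, rho_val = 0.8, 0.02
closed = 1.0 / (1.0 / K - 1j * rho_val)

series, term = 0.0 + 0.0j, K
for _ in range(50):
    series += term
    term *= 1j * rho_val * K

print(closed, series)   # the two agree to machine precision
```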
Vertex Amplitude and Scattering Phase in the Complex Plane
Diagrammatically, \(i\rho\) is the two-particle propagation (the phase space) and \(K\) is the elementary interaction. The full amplitude is a complex function. What we have discussed so far is essentially the magnitude of this complex number, but we have not yet addressed the angle: the scattering phase, which is the argument of the scattering amplitude. This phase exhibits interesting behavior, and the amplitude traces a circular path in the complex plane.
From the threshold, the amplitude starts small, increases to its maximal value around \(m_1^2\), then decreases back to zero before repeating the pattern in a second loop. Here, \(A\) represents the imaginary part of the amplitude, plotted as a function of \(s\).
The propagator and interaction terms are given by:
\[ A = \frac{1}{K^{-1} - i \rho} \]
Expanding this as a Taylor series yields:
\[ A = K + K i \rho K + K i \rho K i \rho K + \dots \]
Here, \(i\rho\) corresponds to the two-particle phase space, while \(K\) represents the elementary (point-like) interaction vertex. The series describes repeated interactions with intermediate propagations.
Argand Diagram and Amplitude Representation in the Complex Plane
The maximal value of the amplitude is approached around \(m_1^2\). There is a second circle that the amplitude traces, forming what is called an Argand diagram.
In the homework sheet, it is noted that the Argand diagram simply plots the amplitude or the amplitude multiplied by \(\rho\). The quantity \(F = A \cdot \rho\) is often more convenient to plot.
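A minimal sketch of such an Argand plot (parameters are arbitrary; numpy and matplotlib are assumed), tracing \(F = \rho \cdot a\) in the complex plane for a single-pole \(K\)-matrix amplitude as \(s\) runs above threshold:

```python
# Argand diagram of F = rho * a for a single-pole K-matrix amplitude:
# as s increases above threshold, F traces the unitarity circle
# (radius 1/2, centered at i/2).  Parameters are arbitrary (GeV units).
import numpy as np
import matplotlib.pyplot as plt

def kallen(x, y, z):
    return x**2 + y**2 + z**2 - 2*x*y - 2*y*z - 2*z*x

def rho(s, m1=0.14, m2=0.14):
    return np.sqrt(np.clip(kallen(s, m1**2, m2**2), 0.0, None)) / (16 * np.pi * s)

M, g = 1.0, 0.6
s = np.linspace((0.28)**2 + 1e-4, 3.0, 1000)
F = rho(s) * g**2 / (M**2 - s - 1j * g**2 * rho(s))   # F = rho * a

plt.plot(F.real, F.imag)
plt.xlabel("Re F")
plt.ylabel("Im F")
plt.axis("equal")
plt.show()
```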
Modeling the Real Part of the Amplitude
The quantity \(F = A \cdot \rho\) is often more convenient to plot in the complex plane. The imaginary part of the inverse amplitude is fixed by unitarity, so we only need to model the real part. This real part represents a point-like interaction that must be resummed to all orders.
There are several techniques to model the real part of the amplitude. One approach uses a polynomial expansion, which is referred to as the scattering-length approximation.
Scattering Length and Effective Range Expansion
The model for the real part is a polynomial, and this is called the scattering length approximation. The first term is the scattering length.
The second term involves the breakup momentum \(k\); the expansion can be written as:
\[ K^{-1}(s) = \frac{1}{a} + \frac{1}{2} r_s k^2 + \dots \]
Here, \(a\) is the scattering length, and \(r_s\) is the effective range. This is called the effective range expansion.
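Combining this expansion with the unitarity structure above gives the near-threshold \(S\)-wave amplitude (a sketch of the standard construction; signs and numerical factors differ between conventions):
\[ a_0(s) \approx \left( \frac{1}{a} + \frac{1}{2} r_s k^2 + \dots \; - \; i \rho(s) \right)^{-1} \]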
Effective Range Approximation and Closing Remarks
You can truncate the expansion as \(K^{-1} \approx a^{-1} + \frac{1}{2} r_s k^2\), and this is called the effective range approximation.
I'm out of time. Thank you very much for staying to the end, and I'm sorry for being late. Please feel free to send me an email if something is missing; I really appreciate receiving any feedback from you. I hope you will leave some exercises for me by the end. What about doing the exercises tomorrow, or otherwise on Friday?