Scheduling and Participation for the Exam

We need to fix the day and the format for the exam. The date might still be adjusted between the 23rd and 25th.

Do you know who from our group would take the exam? We would need Marx, Mauritz, and maybe Fibian. I'm not sure whether Fibian needs the credit points for this, but I will ask whether he will come.

Paul said he's finishing his bachelor's, so I asked both of them, and I would like others who are available to participate as well. Do you want to take the exam? We certainly have to get Durvanshu to participate.

Durvanshu is a PhD student in our group who was partly involved in typing up the exercises. He is at CERN every week doing his hardware work. You might have seen him in the first week; he was here. We should include him in the exam. He skipped the entire semester, so he has to work now.

Did you receive my encouragement to look at the symmetry breaking? I saw it this morning and made some notes on the ideas. Moritz, I only glanced at it briefly yesterday because I had a dentist appointment; I sent it on Friday, and yesterday was the first working day.

I set a date with Hanin for Thursday. You are very welcome to bring the material you obtained today and review it again for Ilya's exercise session on Thursday. This matters because it gives you easy points. The exercise is simple, and you might be asked to present it.

We don't need a formal report, but please do collect the points, since they count toward your total for the semester.

Introduction to Lattice QCD and Hadron Spectroscopy

It is a pity we started a bit late, because today's lecture is very exciting. I am particularly excited about lattice QCD and the hadron spectroscopy that we do on the lattice. We will get to that next.

Mapping Scattering Amplitudes from S-Plane to K-Plane

We have discussed a lot about amplitude, which is a function of \(S\). \(S\) is the Mandelstam variable, calculated as \((P_1 + P_2)^2\) for the scattering process. This \(S\) is a convenient variable to describe the scattering amplitude, where the partial wave amplitude \(f\) is a function of this single variable.

Equivalently, one can consider the breakup momentum variable \(K\), which represents the momentum carried by each particle. Here, \(S\) is the squared energy of the system in the center-of-mass frame, while \(K\) is the momentum of each particle in the same frame. The relationship between \(S\) and \(K\) is fixed by the breakup-momentum function given below, and the analytic structure of the amplitude is simpler in terms of \(K\) near the threshold.

Instead of working in the \(S\)-plane, where the amplitude is analytic except for a few singularities, we now examine the \(K\)-plane and relate the structures between the two planes. Specifically, we map the following points from the \(S\)-plane to the \(K\)-plane:

  1. Points 1 and 2: Resonances located on the second Riemann sheet with respect to the threshold in \(S\).
  2. Point 3: A pole on the real axis in the first Riemann sheet, corresponding to a bound state.
  3. Point 4: A virtual state, residing on the unphysical sheet below the threshold.

The goal is to understand how these singularities and structures translate when mapped from the \(S\)-plane to the \(K\)-plane.

\[ S = 4(K^2 + m^2) \] where \(m\) is the mass of the particle. This relation highlights the connection between the energy (\(S\)) and the breakup momentum (\(K\)).

Note

The \(K\)-plane simplifies the analysis near thresholds, making it useful for studying bound states, resonances, and virtual states in scattering amplitudes.

Understanding the Mapping of Real Axis and Threshold in K-Plane

The real axis in the \(S\)-plane gets mapped to the \(K\)-plane as well. It is easier to start by mapping the real axis first and then determine the positions of other features relative to it.

The real axis above the threshold corresponds to the physical region, and this physical region maps onto the real axis of the \(K\)-plane. The threshold itself, \(S = 4m^2\), corresponds to the point \(K = 0\) in the \(K\)-plane. Below the threshold, when \(S\) is smaller than \(4m^2\), the square root of \(S - 4m^2\) becomes imaginary, meaning \(K\) is purely imaginary.

The other square-root factor in the breakup-momentum function does not change sign when crossing the threshold, so it introduces no additional complications there. It does, however, produce a branch point somewhere between zero and the threshold on the real axis of the \(S\)-plane.

The region close to \(S = 0\) is more complicated because of this branch point, but the focus here is on the behavior near the threshold. The key observation is that the physical region in the \(S\)-plane maps directly onto the physical region in the \(K\)-plane, while the real axis below threshold maps onto imaginary values of \(K\).

The mapping simplifies the analysis of scattering amplitudes, particularly near thresholds, by transforming the \(S\)-plane structure into a more tractable form in the \(K\)-plane. The relationship between \(S\) and \(K\) is given by:

\[ S = 4(K^2 + m^2) \]

where \(m\) is the mass of the particle. This equation connects the energy variable \(S\) to the breakup momentum \(K\), making it easier to study bound states, resonances, and virtual states in the \(K\)-plane.
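To make the statement about imaginary \(K\) explicit, one can invert the relation above. This is a short worked step, assuming only the equal-mass relation \(S = 4(K^2 + m^2)\) quoted in the text:

\[
K = \pm\frac{1}{2}\sqrt{S - 4m^2}, \qquad
\begin{cases}
S > 4m^2: & K \ \text{real (physical region)},\\
S = 4m^2: & K = 0 \ \text{(threshold)},\\
S < 4m^2: & K = \pm\frac{i}{2}\sqrt{4m^2 - S} \ \text{(purely imaginary)}.
\end{cases}
\]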

Note

The \(K\)-plane provides a clearer representation of the analytic structure near thresholds, particularly for distinguishing physical and non-physical regions.

Riemann Sheets and Analytic Structure in the K-Plane

Let us look at the function near the threshold, because what happens there is really interesting. If you continue the curve, it acquires a real part again and somehow flips back to the real axis; the dashed line just comes back to the real axis.

Point 4 is straightforward: we follow the dashed line, so it ends up on the imaginary axis. For point 3, my intuition suggested it should land on the opposite part of the imaginary axis, because it lies on a different Riemann sheet. But how do we make that precise?

The value of \(S\) is real below the threshold, but the point lies on the other sheet. For \(S\) below threshold, the two sheets are distinguished by the \(\pm\) sign of the square root: being on the first sheet means the \(+\) sign, and being on the second sheet means the \(-\) sign.

The square root matters here because these are sheets of a function of \(S\). Two points on different sheets have the same value of \(S\), but the square root, and hence \(K\), differs by a sign. So both sheets of the \(S\)-plane get mapped into the single \(K\)-plane; the only difference is the \(+\) or \(-\) sign of \(K\).

Points 3 and 4, as well as 1 and 2, are then straightforward to place. Essentially, everything is connected analytically.

Note

The Riemann sheets in the \(S\)-plane correspond to different signs of the square root when mapped to the \(K\)-plane, but the underlying \(S\) values remain the same.

The relationship between \(S\) and \(K\) is given by:

\[ S = 4(K^2 + m^2) \]

where \(m\) is the mass of the particle. This connects the energy variable \(S\) to the breakup momentum \(K\), clarifying the analytic structure near thresholds.

Complex Plane Transformations and Riemann Sheets

The transformations between the \(S\)-plane and the \(K\)-plane are analytically connected. You can visualize this by drawing lines or curves, such as half-circles, that do not intersect the real axis (the dashed line). The exact shape (a circle, a larger circle, or a deformed curve) does not matter as long as the mapping remains analytic.

The \(K\)-plane is simply a reparametrization of the \(S\)-plane. In terms of the momentum variable there is no square root left, because \(K\) itself is (up to constants) the square root of \(S - 4m^2\). A function with a square root has two Riemann sheets, but when expressed in terms of \(K\), it has a single-sheeted analytic structure. The two-sheeted nature arises from the \(\pm\) sign in front of the square root, which is already built into the \(K\)-plane.

The essential part of the mapping is the squaring operation:

\[ S - 4m^2 = 4K^2 \]

A domain in the \(K\)-plane transforms under squaring into a two-sheeted structure in the \(S\)-plane. This explains why the sheets are connected without discontinuity.

Note

The \(\pm\) sign in the square root determines the Riemann sheet in the \(S\)-plane, while the \(K\)-plane unifies both sheets under a single analytic structure.

The relationship between \(S\) and \(K\) is further clarified by the threshold behavior:

\[ S = 4(K^2 + m^2) \]

where \(m\) is the particle mass. This connects the energy variable \(S\) to the breakup momentum \(K\), revealing the analytic structure near thresholds.

When \(S\) is below threshold, its value remains real but lies on the second Riemann sheet (corresponding to the \(-\) sign). The first sheet corresponds to the \(+\) sign. Importantly, the \(K\)-plane does not distinguish between sheets—it only reflects the sign change in the square root.

The dashed lines in the \(S\)-plane (representing branch cuts) map to specific curves in the \(K\)-plane, ensuring analytic continuity. This is why transformations between the planes preserve the underlying physics while simplifying the mathematical structure.
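As a numerical illustration of this mapping, the following Python sketch places a few hypothetical poles in the \(S\)-plane (a bound state on the first sheet, a virtual state on the second sheet, and a resonance on the second sheet) and maps them into the single \(K\)-plane. The specific numbers are illustrative only, and the sheet convention used here is the common one in which the first (physical) sheet maps to the upper half of the \(K\)-plane and the second sheet to the lower half.

```python
import cmath

m = 1.0                  # particle mass (illustrative units)
threshold = 4 * m**2     # two-particle threshold in S

def k_from_s(s, sheet=1):
    """Map S to the breakup momentum K = (1/2) sqrt(S - 4 m^2).

    The two Riemann sheets of the S-plane correspond to the two signs of the
    square root: sheet 1 is taken as the root with Im K >= 0 (upper half
    K-plane), sheet 2 as the root with Im K <= 0 (lower half K-plane).
    """
    k = 0.5 * cmath.sqrt(s - threshold)
    if (sheet == 1 and k.imag < 0) or (sheet == 2 and k.imag > 0):
        k = -k
    return k

# Illustrative S-plane features (not taken from any real system):
points = {
    "bound state   (S real < 4m^2, sheet I) ": (3.6 + 0.0j, 1),
    "virtual state (S real < 4m^2, sheet II)": (3.6 + 0.0j, 2),
    "resonance     (S complex,     sheet II)": (5.0 - 0.4j, 2),
}

for name, (s, sheet) in points.items():
    k = k_from_s(s, sheet)
    print(f"{name}: S = {s}, K = {k.real:+.3f} {k.imag:+.3f}i")

# Expected pattern: the bound state lands on the positive imaginary K axis,
# the virtual state on the negative imaginary K axis, and the resonance in
# the lower half of the K-plane.
```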

Introduction to QCD and Confinement Phenomenon

The fundamental fields, the basic degrees of freedom of QCD, are quarks and gluons. However, one of the important phenomena in QCD is confinement. People began to understand the dynamics of QCD by studying protons, neutrons, and nuclear interactions, which exhibit the features of confinement.

For comparison, consider electron-proton scattering or electron-electron scattering, where the electromagnetic coupling constant is small:

\[ \frac{e^2}{4\pi} = \frac{1}{137} \]

This small coupling allows perturbative expansions using Feynman diagrams in QED. However, in QCD, the theory is not in a perturbative regime due to confinement.

The degrees of freedom—quarks and gluons—are confined to small scales, forming composite fields such as mesons. This non-perturbative behavior distinguishes QCD from QED, where perturbation theory is applicable.

Note

The confinement phenomenon means that quarks and gluons cannot be isolated as free particles; they only exist within bound states like hadrons.

The transition from perturbative to non-perturbative dynamics in QCD is a key aspect of its study, as it governs the formation of hadronic matter.

Effective Chiral Perturbation Theory and Double Well Potential

At low energy, the effective theory describing interactions is chiral perturbation theory. This is an expansion in the masses and momenta of particles, where the small parameter is not the interaction strength but rather the smallness of masses and momenta.

The theory relies on composite fields (e.g., mesons) as simpler degrees of freedom. A key feature is that when working with these effective fields (fermion-antifermion pairs), the theory exhibits spontaneous symmetry breaking. The potential of this theory has a double-well structure, meaning it has two minima, and the vacuum settles into one of them.

Note

Understanding the precise nature of the sigma field in this context is crucial for accurately describing the vacuum structure and symmetry breaking.

This double-well potential is central to the non-perturbative dynamics of the theory, distinguishing it from perturbative expansions like those in QED.

Classical vs. Quantum Fields and Potential Introduction

The potential can only be introduced when considering fields as classical fields rather than quantum fields. For classical fields, the concept of potential is clear: if a particle moves in a potential, it turns back when its energy is below the potential energy. For example, a ball hitting a wall reflects back.

Quantum fields behave differently. There is a probability for the particle to penetrate the potential barrier due to quantum effects. If a particle approaches a wall, the wave function has a tail inside the wall, decaying exponentially. This is a consequence of the quantum nature of the system.

Note

The classical picture assumes deterministic reflection, while quantum mechanics allows for tunneling through barriers.

Quasi-Classical Expansion and Field Averaging

The wave function has a tail in the wall due to the quantum nature of the system. The relation between particles, quantum fields, and classical fields arises through quasi-classical expansion. In quantum field theory, \(\hbar\) is treated as a small parameter (though in natural units, it is set to 1). This parameter appears in the Lagrangian and action, allowing an expansion in powers of \(\hbar\).

The first term in this expansion gives the average value of the fields, which we identify as the classical field. For example, the field \(\sigma\) can be obtained by performing an \(\hbar\)-expansion of the Lagrangian. The leading term yields not the fluctuating quantum field but the classical average value of \(\sigma\) in space.

At this level, \(\sigma\) is no longer a function of spacetime coordinates \(x\) (i.e., it does not fluctuate) but is instead a fixed value. We can then study the quasi-classical potential of the theory, which emerges from this approximation.

Note

The classical field approximation neglects quantum fluctuations, simplifying the analysis while retaining key physical insights.

Potential, Sigma Field, and Massless Pions

We can analyze the quasi-classical potential of the theory as a function of the sigma field (\(\sigma\)). The potential is visualized in three dimensions, where the x-axis represents the real part of \(\sigma\) and the y-axis represents the imaginary part. These can be treated as two independent fields, \(\sigma_1\) (real) and \(\sigma_2\) (imaginary).

The curvature of the potential indicates the mass of the particle. In this theory, the direction of the potential that has curvature is associated with the sigma field, which becomes heavy and can be integrated out as an unimportant degree of freedom. The key observation is that the directions without curvature in the potential, where the field feels no restoring force, correspond to the pion (\(\pi\)) field.

The vacuum is centered around a non-zero value due to spontaneous symmetry breaking, filling space with a quark condensate represented by the sigma field. On top of this background, there are fluctuations of the massless pion field. This picture illustrates that the vacuum is dominated by the constant sigma field, while the pion fields fluctuate without mass.

This aligns with the Goldstone theorem: when a symmetry is spontaneously broken, massless bosons (pions, in this case) appear in the theory. In QCD, pions are massless in the idealized scenario—their observed mass in reality arises from the Higgs mechanism, which gives quarks their mass.
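To see the massless direction explicitly, here is a minimal worked example with a generic Mexican-hat potential for the two components \(\sigma_1, \sigma_2\); the coefficients \(\lambda\) and \(v\) are illustrative and not taken from the lecture:

\[ V(\sigma_1, \sigma_2) = \frac{\lambda}{4}\left(\sigma_1^2 + \sigma_2^2 - v^2\right)^2 \]

Expanding around the minimum \((\sigma_1, \sigma_2) = (v, 0)\):

\[
\left.\frac{\partial^2 V}{\partial \sigma_1^2}\right|_{(v,0)} = 2\lambda v^2 \quad (\text{massive radial, sigma-like mode}),
\qquad
\left.\frac{\partial^2 V}{\partial \sigma_2^2}\right|_{(v,0)} = 0 \quad (\text{flat direction: massless, pion-like mode}).
\]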

Note

The sigma field represents a non-fluctuating classical average, while the pion fields are massless fluctuations on top of this background, as dictated by spontaneous symmetry breaking.

Spontaneous Symmetry Breaking and Goldstone Bosons

The Higgs mechanism gives quarks mass in real life. If quarks were massless, pions (\(\pi\)), kaons (\(K\)), and the eta meson (\(\eta\)) would also be massless, forming a massless multiplet.

We previously examined the chiral symmetry, checking whether the Lagrangian remains invariant under independent rotations of left-handed and right-handed quarks. With no mass term present, the QCD Lagrangian has a global \(SU(2)_L \times SU(2)_R\) symmetry. This symmetry is spontaneously broken when the system moves to the minimum of the potential.

Introducing independent rotations of the right and left fields corresponds to rotations in the \(\sigma\) field, reflecting the symmetry of the potential. Once the system settles at the minimum, this symmetry is broken—it is a global symmetry. The breaking of this left-right symmetry leads to the appearance of Goldstone bosons, which are pions. These pions are massless in the idealized case.

If the symmetry were larger, the pattern of symmetry breaking would produce additional massless modes. The mass of pions in reality arises from explicit symmetry breaking due to quark masses generated by the Higgs mechanism.

\[ m_u = m_d = 0 \Rightarrow M_{\pi} = 0 \]

This equation shows that if the up (\(u\)) and down (\(d\)) quarks were massless, the pion (\(\pi\)) would also be massless.

Higgs Mechanism and Yukawa Couplings in Mass Generation

If the symmetry is bigger, like \(SU(3)\), we have eight mesons that are massless: three pions, four kaons, and the eta. Now, why are they not massless in reality? Because quarks have mass.

This connects to the Higgs mechanism, as it is the Higgs interaction with fermions that gives them mass through Yukawa couplings. In chiral perturbation theory, we simply put a mass term explicitly into the Lagrangian. Now, with the discovery of the Higgs, we understand that a more fundamental symmetry breaking occurs with the Higgs field: the vacuum is filled with the Higgs condensate, and particles moving through it acquire mass.

For the chiral perturbation theory, the mass term appears explicitly in the Lagrangian and breaks the symmetry explicitly.

Note

The Yukawa coupling in the Higgs context refers to how fermions acquire mass via interaction with the Higgs field. Originally, Yukawa proposed the exchange of pions (\(\pi\)) as the mediator of nuclear forces, but in the Standard Model, Yukawa couplings introduce the mass matrix.

Even if you consider QCD without mass terms, the \(\eta'\) meson still appears massive. The \(\eta'\) is related to the \(U(1)\) part of the symmetry, which is broken in a different way than the \(SU(3)\) part.

Spontaneous Symmetry Breaking and Goldstone Bosons in QCD

The \(\eta'\) meson is related to the \(U(1)\) symmetry. Here, the “S” in both groups (e.g., \(SU(2)\), \(SU(3)\)) stands for “special,” meaning the determinant of the matrix is equal to 1.

For \(SU(2)\), we have:
\[ U(2) = SU(2) \times U(1) \]
For a unitary matrix the determinant is a pure phase, with \(|\det U| = 1\); in the \(SU(2)\) part this phase is fixed to 1, while the extra \(U(1)\) factor carries the phase of the determinant.

Looking at the Lagrangian, it appears invariant under this \(U(1)\) phase transformation; for a global symmetry the derivatives do not affect the phase, so the massless quark Lagrangian is unchanged. However, this symmetry is broken dynamically: quantum effects reveal that the axial \(U(1)\) symmetry is anomalously broken, which is related to the mass of the \(\eta'\).

When examining the light meson octet, the \(\eta'\) stands out as special. The number of generators in \(U(3)\) is 9, with 8 belonging to \(SU(3)\). These 8 generators correspond to the spontaneously broken symmetries, giving rise to 8 Goldstone bosons (the pions, kaons, and eta mesons).

The remaining \(U(1)\) symmetry is seemingly preserved, but anomalies cause the \(\eta'\) to acquire mass, making it distinct from the other Goldstone bosons.

Proton Mass and Quark-Gluon Interactions in QCD

The proton mass is not determined by the bare masses of the quarks but by the energy stored in quark-gluon interactions. In the chiral limit, where quark masses are set to zero, pions and kaons become massless (as they are Goldstone bosons of chiral symmetry breaking), but the proton remains massive. This is because the proton’s mass arises from the dynamics of quark-gluon interactions, not from the quark masses themselves.

The proton is composed of three quarks, but unlike pions, it is not a Goldstone boson. Pions are fluctuations of the quark condensate (the \(\sigma\) field), while protons are excitations of this condensate with additional energy stored in their structure. The mass of the proton is thus a manifestation of the strong interaction energy within QCD.

Important

The mass of hadrons (except Goldstone bosons like pions) is determined by the curvature of the QCD potential. Pions move along flat directions of the potential (no curvature, hence massless), while protons and other hadrons correspond to excitations with non-zero curvature, giving them mass.

The quark-gluon interactions in mesons also exist, but the key difference lies in the nature of their excitations. The quark condensate forms a background, and pions are small fluctuations on this background, whereas protons are more complex excitations with additional energy contributions.

To understand this mathematically, one can examine how the \(\sigma\) field acquires a non-zero vacuum expectation value and how pions emerge as fluctuations along flat directions of the potential. The mass of particles corresponds to the curvature of the potential, and since pions move along directions with no curvature, they remain massless. Protons, however, involve excitations of the condensate itself, leading to their observed mass.

The theory does not inherently contain a mass scale for the proton; its mass emerges dynamically from the strong interaction dynamics of QCD.

Impact of Quark Masses on QCD Theory

The theory itself does not have special values for the quark masses. You can tune the masses, increase them, or even set them to zero, and the theory remains largely unchanged. This is especially true for the light quarks (\(u\) and \(d\)), whose masses are much smaller than the QCD scale of 1 GeV. Introducing small quark masses does not drastically alter the physics.

When quark masses are introduced, they effectively add a linear term to the potential, slightly skewing it. The overall picture remains almost the same, but the minimum on one side becomes more pronounced. In the pion direction, there is now a curvature, making pions slightly massive. The theory, however, remains fundamentally unchanged.

Note

The Mexican hat potential develops a curvature in the pion direction due to quark masses, giving pions a small mass while preserving the structure of QCD.
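As a rough sketch of how the linear term generates a small pion mass, one can tilt the illustrative Mexican-hat potential used above by a term of strength \(c\) (which would be proportional to the quark mass; this explicit form is an assumption for illustration, not given in the lecture):

\[ V(\sigma_1, \sigma_2) = \frac{\lambda}{4}\left(\sigma_1^2 + \sigma_2^2 - v^2\right)^2 - c\,\sigma_1 \]

For small \(c\) the minimum shifts to \(\sigma_1 \approx v + \frac{c}{2\lambda v^2}\), and the curvature in the formerly flat direction becomes

\[ m_\pi^2 = \left.\frac{\partial^2 V}{\partial \sigma_2^2}\right|_{\min} = \lambda\left(\sigma_1^2 - v^2\right) \approx \frac{c}{v}, \]

so the pion mass squared grows linearly with the tilt (i.e., with the quark mass), while the overall shape of the potential barely changes.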

One important quantity often discussed is the ratio of the proton mass to the pion mass. The proton mass is primarily determined by the QCD interactions and does not change significantly even if the quark masses are increased. Computations in lattice QCD depend strongly on the quark masses used; interestingly, calculations become easier when pions are made somewhat heavier by choosing larger quark masses. Despite this, the essential properties of the theory, such as those of the mesons, do not change much.

While details like particle masses and widths do change, the broader characteristics remain stable. This means that even with unphysical pion masses in simulations, we can still extract meaningful insights. The key takeaway is that these variations are a computational tool, not a fundamental limitation.

Important

Lattice QCD computations often use artificially heavy quark masses for numerical stability, but the extracted physics remains representative of the real-world theory.

The mass of particles like pions changes with quark masses, but the underlying dynamics of QCD—such as the role of the quark condensate and chiral symmetry breaking—are preserved. This allows us to study QCD effectively even with non-physical parameters.

Introduction to Lattice QCD and Computational Setup

We work with fields that depend on coordinates \(X\), which is a four-dimensional vector (time + spatial dimensions). The approach involves introducing a grid in both spatial and time dimensions, then computing the Lagrangian action or particle correlations on this grid.

A typical lattice QCD setup has around 200 points in the time dimension and spans about 7 Fermi in each spatial dimension, discretized into 50 points per side. This forms a box with 50 points on each side, evolving over 200 time steps. These values are determined empirically, by numerical testing, to balance computational efficiency and accuracy.
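For orientation, here is a tiny back-of-the-envelope script using the representative numbers above (these are illustrative values, not a specific production setup):

```python
# Rough lattice geometry estimate using the representative numbers above.
L_fm    = 7.0     # spatial box size in fm (Fermi)
n_space = 50      # lattice points per spatial direction
n_time  = 200     # lattice points in the time direction

a = L_fm / n_space             # lattice spacing in fm
sites = n_space**3 * n_time    # total number of lattice sites

print(f"lattice spacing a = {a:.3f} fm")
print(f"total sites       = {sites:,}")   # 50^3 * 200 = 25,000,000
```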

Note

The lattice size and step count are optimized through numerical testing—larger volumes and finer discretization improve accuracy but increase computational cost.

Two primary challenges arise:
1. Finite volume effects (due to the box size).
2. Discretization errors (due to step size).

To mitigate these:
- Increase the volume to reduce finite-volume effects.
- Decrease step size to minimize discretization errors.

Both adjustments come at a higher computational cost. The goal is to use the minimal number of points required for reliable results, as lattice QCD calculations are numerically intensive.

Periodic Boundary Conditions in Lattice Simulations

A lattice size ratio \(L/a\) in the range of roughly 20 to 50 is practical for balancing computational feasibility with the approximation of an infinite, continuous space.

A useful analogy for understanding lattice simulations is to think of a proton placed in a box with periodic boundary conditions. Due to the finite box size and cyclic boundaries, the proton interacts with its mirror images.

Why use periodic boundary conditions? The alternatives, such as hard-wall boundaries where the wave function vanishes at the edges, introduce quantization effects and restrict the wave function’s behavior. Periodic boundary conditions avoid this by allowing the wave function to remain continuous across the box edges.

In quantum field theory, periodic boundary conditions are advantageous because they preserve translational symmetry and avoid the artificial constraints of hard walls. This is particularly important in lattice QCD, where the wave function should not be forced to zero at the boundaries.

Note

Hard-wall boundaries impose vanishing wave functions at the edges, while periodic boundaries allow the wave function to propagate smoothly across the box. The choice depends on the physical scenario being modeled.

Finite Volume Effects and Exponential Suppression

One might assume that a different choice of boundary conditions would be better, but it does not make much of a difference: the distortions caused by putting the system in a finite box are exponentially suppressed for good observables.

I should have started by saying that one can spend a semester, or several semesters, learning quantum chromodynamics on the lattice. It is a rapidly moving field with many interesting techniques, but I can only give you a brief introduction.

Perhaps John Gulava could hold a lecture on this topic next semester. He would be the right person to introduce all the details, as he has deep expertise in this area. For now, let me give you some vague arguments and a taste of the computations, along with common knowledge about finite volume effects.

My claim is that corrections due to the lattice size—for good observables like particle masses or form factors—are exponentially suppressed with the lightest degrees of freedom in the theory, which in this case is the pion.

Note

Finite volume effects are suppressed exponentially rather than polynomially, making lattice simulations viable for extracting physical quantities even with limited computational resources.

Virtual Pions and Proton Interactions

The lightest degrees of freedom in the theory are the pions, so if you place a proton on the lattice, pions give the dominant finite-volume contribution. The proton sits in the box, and pions appear around it as virtual particles: they do not propagate freely but form a cloud around the proton. Since they do not travel far, they can barely sample the boundary of the box.

Pions therefore set the scale for the suppression effects. Another way to see this is to look at the interaction between the proton and its mirror image, which is mediated by pion exchange. The corresponding Yukawa-type potential has the form:

\[ V(R) \sim -\frac{e^{-m_\pi R}}{R} \]

where \(R\) is the distance and \(m_\pi\) is the pion mass. This potential provides another argument for the exponential suppression of finite-volume effects: the pion mass sets the scale, since the pion is the lightest degree of freedom in the theory.
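As a rough numerical estimate (assuming the physical pion mass and a 5 Fermi box; these numbers are not from the lecture):

\[
m_\pi \approx 140~\text{MeV} \approx \frac{1}{1.4~\text{fm}}, \qquad
e^{-m_\pi L} \approx e^{-5/1.4} \approx e^{-3.6} \approx 0.03,
\]

so finite-volume corrections are already at the few-percent level for such a box.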

Note

Virtual pions act as carriers of interactions, forming clouds around protons and mediating forces without propagating as free particles.

Lattice Volume Effects and Discretization

The exponential factor provides another argument for understanding finite-volume effects. Once your lattice is large compared to the inverse mass of the pion (\(1/m_\pi\)), your computations closely approximate infinite-volume quantities. However, these are still box quantities, meaning the physics is constrained to the box. When the box size \(L\) becomes infinite, you approach true infinite-volume physics, and the computations reflect the physical situation.

Discretization error is another consideration. The more steps you take in the lattice, the better the approximation. Increasing the volume of the box significantly would make computations more intensive, but the box is never truly empty. Even if no explicit particles are placed inside, the vacuum itself contains fluctuating quantum fields. These fields interact, and the box is filled with correlated fields, visualized as bubbles appearing and disappearing in the vacuum.

Adjusting discretization by varying grid spacing is important. The lattice spacing \(a\) affects the scales probed in the theory. For large \(a\), confinement may break down, as the theory loses its non-perturbative properties. Therefore, small step sizes are crucial to maintain confinement and ensure accurate results.

Note

Lattice computations are typically performed for multiple discretization values to demonstrate that results are not strongly affected by the choice of spacing.

In lattice QCD studies, the physical box size (e.g., 5 Fermi) is fixed, but computations are repeated for different lattice spacings (e.g., \(a = 0.05, 0.033, 0.02 \ \text{Fermi}\)). This ensures that discretization errors are under control and that the extracted physics is reliable.

Numerical Computation of Discretization Errors

To ensure that results are not significantly affected by discretization error, we probe several lattices. The computation of these errors is a complex topic, often addressed numerically.

A key question that came up is how transitions between neighbouring lattice points are implemented, for example whether the fields have a hopping amplitude in a given direction.

The primary observables computed on the lattice are correlation functions. These are defined as expectation values of operators at different times:

\[ \langle \mathcal{O}(t) \mathcal{O}(0) \rangle \]

Here, each node of the lattice registers field values, and the quantities of interest are derived from these correlations. The computation process involves evaluating these expectation values for operators at time \(t\) and time \(0\).

Note

Numerical methods are essential for handling discretization effects, as analytical solutions are often impractical for complex lattice configurations.

High-Dimensional Integral Challenges in Lattice Field Theory

The correlation function is defined as the expectation value of an operator at time \(t\) and the same operator at time \(0\). Formally, it is given by:

\[ \langle \mathcal{O}(t) \mathcal{O}(0) \rangle = \frac{\int \mathcal{D} \bar{\Psi} \mathcal{D} \Psi \, \mathcal{O}(t) \mathcal{O}(0) \, e^{-S[\bar{\Psi}, \Psi]}}{\int \mathcal{D} \bar{\Psi} \mathcal{D} \Psi \, e^{-S[\bar{\Psi}, \Psi]}} \]

Here, \(S[\bar{\Psi}, \Psi]\) is the Lagrangian action of the theory, and the integral is taken over all configurations of the fields \(\bar{\Psi}\) and \(\Psi\). The denominator serves as a normalization factor.

When discretized on a lattice, the dimensions scale as \((L/a)^3 \times (T/a) \times 4 \times 3\), where \(L\) and \(T\) are spatial and temporal extents, \(a\) is the lattice spacing, \(4\) accounts for Lorentz indices, and \(3\) for color (SU(3)). The field \(\Psi\) is a four-dimensional Dirac spinor with color and flavor degrees of freedom.

The integral becomes extremely high-dimensional: for the lattice sizes quoted above there are on the order of \(10^8\) integration variables per fermion field. Even sampling only 10 points per dimension in a brute-force quadrature would require \(10^{10^8}\) evaluations, which is computationally intractable.
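A quick count of the integration variables for the lattice sizes used above makes the problem concrete (a sketch; flavor factors are ignored and the counting is per fermion field):

```python
# Count the number of integration variables for a naive quadrature.
n_space, n_time = 50, 200    # lattice points (as in the setup above)
dirac, color    = 4, 3       # Dirac and color components per site

dims = n_space**3 * n_time * dirac * color   # variables per fermion field
print(f"integration variables per field ~ {dims:.1e}")   # ~3e8

# A brute-force quadrature with only 10 sample points per dimension would
# need 10**dims function evaluations -- hopeless; hence Monte Carlo methods.
print(f"naive quadrature cost ~ 10^{dims} evaluations")
```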

Important

Direct discretization and sampling methods fail due to the exponential growth in computational cost. Instead, statistical mechanics techniques are employed, leveraging importance sampling to approximate the integral efficiently.

The challenge lies in avoiding brute-force discretization and instead using statistical methods to estimate high-dimensional integrals in lattice field theory.

Monte Carlo Sampling for Integral Evaluation

Instead of discretizing and using equal-spaced grid computation, I would create a sample of \(x\) distributed uniformly from \(a\) to \(b\). The integral of \(F(x)\) can then be approximated as the average value of the function over this sample, multiplied by the interval length:

\[ \int_a^b F(x) \, dx \approx \frac{b - a}{N} \sum_{i=1}^N F(x_i) \]

Here, \(x_i\) are uniformly sampled points in \([a, b]\). This is the Monte Carlo sampling technique, which avoids the need for brute-force discretization.
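A minimal sketch of this estimator in Python, for an integral whose exact value is known; the integrand is just an example chosen for illustration:

```python
import random
import math

def mc_integrate(f, a, b, n=100_000, seed=0):
    """Monte Carlo estimate of the integral of f over [a, b]:
    (b - a) times the average of f over uniform samples."""
    rng = random.Random(seed)
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

# Example: the integral of sin(x) over [0, pi] is exactly 2.
estimate = mc_integrate(math.sin, 0.0, math.pi)
print(f"Monte Carlo estimate: {estimate:.4f}  (exact: 2.0000)")
```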

Important

The key advantage is computational efficiency, especially for high-dimensional integrals where traditional methods fail due to exponential scaling.

This approach leverages statistical mechanics principles, using random sampling to estimate integrals rather than deterministic grid-based methods.

Euclidean Metric for Improved Integral Convergence

This integral is evaluated using the Monte Carlo sampling technique, which is the primary method employed in lattice computations. The procedure involves first generating configurations (samples of the field variables), then evaluating the observable on each configuration and computing the average.

One major challenge in applying this method is the large fluctuation of the integrand due to the phase factor. While the computation remains possible in principle, convergence is poor because of the oscillatory behavior, leading to the so-called sign problem: a complex weight is unsuitable for importance sampling.

To address this, instead of using the Minkowski metric, one rotates time to imaginary time and adopts the Euclidean metric. When computing the operator correlation function \(C(t)\), the oscillating weight then becomes a real, exponentially damped factor, and the convergence of the integral improves significantly because one works with real weights.

The Euclidean metric is introduced by replacing time with imaginary time, which effectively transforms the Lagrangian (or action) into a real-valued quantity. This modification enhances the numerical stability and efficiency of the integral evaluation.

Evolution of Operators in Heisenberg Picture

The correlation function \(C(t)\) is given by:

\[ C(t) = \langle 0 | O(t) O(0) | 0 \rangle = \frac{1}{Z} \int \mathcal{D} \bar{\Psi} \, \mathcal{D} \Psi \; O(t) \, O(0) \, e^{-S[\bar{\Psi}, \Psi]} \]

Here, \(S\) is the Euclidean action of the theory, \(Z\) is the same integral without the operator insertions (the normalization), and the weights are real.

In the Heisenberg picture in Minkowski space, the time evolution of an operator is given by exponentials of the Hamiltonian on both sides, \(O(t) = e^{iHt} O(0) e^{-iHt}\); after the rotation to imaginary time this becomes \(O(t) = e^{Ht} O(0) e^{-Ht}\).

The Hamiltonian gives a definite energy only when acting on its eigenstates. The vacuum \(|0\rangle\) is such an eigenstate, and with the choice \(E_0 = 0\) the evolution factors acting on the vacuum reduce to the identity.

Important

The phase factors are crucial because they encode the time evolution of operators via the Hamiltonian.

The Euclidean metric improves integral convergence by transforming time to imaginary time, making the action real and stabilizing numerical evaluations.

Ground State Energy and Correlation Functions

The Hamiltonian gives zero when acting on the vacuum state \(|0\rangle\) (with the choice \(E_0 = 0\)), so the factors \(e^{-Ht}\) act as the identity on the vacuum. The operator \(O(0)\) applied to the vacuum, however, does not give zero; it creates a superposition of excited states.

To organize the sum over intermediate states, we insert a completeness relation \(\sum_N |N\rangle\langle N|\) between the two operators. Writing the operator at the source with a dagger,

\[ C(t) = \langle 0 | O(t) \, O^\dagger(0) | 0 \rangle \]

the matrix elements then appear properly squared. The correlation function can be expanded as a sum over energy eigenstates \(|N\rangle\) with energies \(E_N\):

\[ C(t) = \sum_N \big| \langle N | O^\dagger(0) | 0 \rangle \big|^2 \, e^{-(E_N - E_0)t} \]

Here, \(E_0\) is the vacuum energy. The exponential factor \(e^{-Ht}\) acts as the identity when applied to the vacuum, assuming \(E_0 = 0\).
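To make the expansion explicit, one can write the Euclidean Heisenberg evolution of the operator and insert the completeness relation; a compact derivation sketch with \(E_0 = 0\):

\[
C(t) = \langle 0 |\, e^{Ht} O(0)\, e^{-Ht} \sum_N |N\rangle\langle N|\, O^\dagger(0) \,| 0 \rangle
     = \sum_N \big|\langle N | O^\dagger(0) | 0 \rangle\big|^2 \, e^{-E_N t} .
\]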

Important

The ground state energy is the reference level from which other energies are measured.

When computing \(C(t)\) on a lattice with discrete time points, the long-time behavior of the correlation function is dominated by the lowest energy state. Higher-energy states are exponentially suppressed, leaving only the contribution from the ground state at large \(t\).

Momentum Quantization and Periodic Boundary Conditions

The long-time behavior of correlation functions is dominated by the ground state because higher-energy states are exponentially suppressed. All non-zero energy terms in the correlation function have time dependence and decay away.

To isolate the ground state energy, we introduce a modified quantity by comparing correlation functions at different times:

\[ \tilde{C}(t) = \frac{C(t)}{C(t+1)} \]

Taking the logarithm of this ratio gives:

\[ \log \tilde{C}(t) \approx E_1 \]

where \(E_1\) is the energy (in lattice units, measured relative to the vacuum) of the lowest state created by the operator. This is a common method to extract energy levels numerically. The energy obtained depends on the operator used:
- For pion-like operators, it approaches the pion mass.
- For vector quantum number operators, it approaches the rho meson mass.
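A small synthetic example of this procedure: we build a toy correlator with a ground state and one excited state, then look at where \(\log\big(C(t)/C(t+1)\big)\) plateaus. All numbers are made up for illustration.

```python
import math

# Toy two-state correlator in lattice units (illustrative values only).
E_ground, E_excited = 0.5, 1.2
A_ground, A_excited = 1.0, 0.3

def corr(t):
    """C(t) = A_ground * exp(-E_ground t) + A_excited * exp(-E_excited t)."""
    return A_ground * math.exp(-E_ground * t) + A_excited * math.exp(-E_excited * t)

for t in range(1, 15):
    eff_energy = math.log(corr(t) / corr(t + 1))
    print(f"t = {t:2d}   effective energy = {eff_energy:.4f}")

# At large t the printed values approach 0.5, the lowest energy created by
# the operator; the excited-state contamination dies out exponentially.
```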

Important

The periodic boundary condition leads to momentum quantization.

For a particle in a box of size \(L\), the wave function must satisfy \(\psi(x + L) = \psi(x)\). For a plane wave \(e^{i p x}\), this implies:

\[ p L = 2 \pi n \]

where \(n\) is an integer. Thus, the momentum is quantized in units of \(2 \pi / L\).

Note

The phase of the wave function must match after traversing the full length \(L\), but not necessarily for smaller distances.

This means the particle’s momentum can only take discrete values:

\[ p_n = \frac{2 \pi n}{L} \]

The step size in momentum space is determined by the system size \(L\) — larger systems allow finer momentum resolution.
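The allowed momenta and the corresponding non-interacting two-particle energies (relevant for the next section) can be tabulated directly. A small sketch with an illustrative mass and box size in lattice units:

```python
import math

L = 20.0   # box size in lattice units (illustrative)
m = 0.2    # particle mass in lattice units (illustrative)

# Quantized momenta p_n = 2*pi*n / L and free back-to-back two-particle
# energies E_n = 2 * sqrt(m^2 + p_n^2) (particles with momenta +p and -p).
for n in range(4):
    p = 2.0 * math.pi * n / L
    E = 2.0 * math.sqrt(m**2 + p**2)
    print(f"n = {n}   p = {p:.4f}   E_free = {E:.4f}")

# With interactions switched on, these levels shift slightly; the shifts
# carry the information about the scattering, as described in the next section.
```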

Discrete Spectrum and Interaction Shifts in Particle Systems

The spacing of the momentum spectrum scales like \(2\pi/L\), where \(L\) is the system size. For a two-particle system, the spectrum is discrete. Consider two particles with momentum modes labeled by \(n = 1, 2, 3, 4\) in a periodic box.

You can visualize this as two particles moving in a circle of length \(L\). This is the one-dimensional case of a box with periodic boundary conditions, and the spectrum remains discrete. The total energy of the system is simply the sum of the individual particle energies when they are non-interacting:

\[ E_{\text{system}} = \sum_i E_i \]

However, when interactions are introduced, the energy of the system shifts slightly. The total energy can still be expressed as a sum, but now an interaction term must be included:

\[ E_{\text{system}} = \sum_i E_i + E_{\text{interaction}} \]

This shift in energy levels due to interactions is the key idea behind lattice spectroscopy. By computing the spectrum first without interactions and then with interactions, the differences between energy levels reveal information about the particles, particularly the phase shift induced by their interactions.

Note

The discrete spectrum arises from quantization in a finite system, while interaction shifts provide insights into particle dynamics.

The method involves comparing the non-interacting and interacting spectra to extract physical properties of the system.

Two-Pion Correlator and Scattering Phase Shift Analysis

The two-pion correlator provides information about the phase shift of the particle interaction. You compute this quantity for two pions and look at where the ratio function \(\tilde{C}(t)\) saturates at large \(t\); the plateau gives a measurement of the energy level.

Next, you perform the same computation for the interacting system and compare it to the non-interacting case. The shift between the two spectra reveals the scattering phase shift.

Note

The scattering phase shift is extracted by comparing the non-interacting and interacting spectra.

The collision dynamics can also be analyzed using alternative techniques in configuration space, where real values of parameters are used. This approach is part of modern lattice QCD, which has significantly advanced our understanding of the hadron spectrum in recent years.

Lattice QCD Advantages in Studying Meson Scattering

We have learned so much about the hadron spectrum from lattice QCD. The lattice provides an isolated setup where there are no experimental effects, allowing us to study processes that are impossible in experiments. For example, while scattering two pions is feasible in experiments, scattering a photon on a sigma meson is not — yet this becomes tractable on the lattice.

The sigma meson, which appears as a broad blob in the \(\pi\pi\) spectrum at low energy, is a candidate for a tetraquark state. In lattice QCD, we can study \(\pi\pi\) scattering with a photon current inserted by computing the correlation function:

\[ \langle \mathcal{O}(t) \mathcal{O}^\dagger(0) \rangle \]

This gives access to the form factor, which reveals the spatial distribution of quarks and gluons inside these mesons.

Exploring Hadron Structure with Lattice QCD and Current Insertion

Lattice QCD allows us to probe the internal dimensions of mesons, revealing how quarks and gluons are distributed inside them. One of the exciting frontiers is understanding how hadrons behave when a current is inserted — how they look and how quarks and gluons are dynamically distributed in space.

This cannot be studied with chiral perturbation theory alone, because it works only with hadronic degrees of freedom, where the quarks are not directly accessible. Form factors cannot be computed from it, and there is no straightforward way to fix the low-energy constants. While some practitioners of chiral perturbation theory might argue otherwise, I am not aware of a method to achieve this.

Unitarity Constraints on Scattering Amplitudes in Resonance Production

Short-lived resonances are produced in experiments in processes involving stable particles. The production amplitude \(\mathcal{P}_I\) is accessible there, but the constraints are much stronger on the scattering amplitude and the elastic amplitudes.

Important

Unitarity imposes severe constraints on the scattering amplitude — it cannot be arbitrary. The amplitude must lie within the unitary circle.

In the previous lecture, a matrix element was written on the board, and a question arose about the imaginary part being positive due to unitarity relations. This specifically applies to the scattering amplitude, not the production amplitude.

\[ \text{Im}(\mathcal{M}) \geq 0 \] where \(\mathcal{M}\) is the scattering amplitude.

The elastic scattering amplitude cannot be chosen arbitrarily; it must satisfy these unitarity bounds.

Elastic Amplitudes and Quark Mass Dependence

The elastic scattering amplitude must lie within the unitary circle, and if it’s elastic, it should be exactly on the circle — it cannot be outside. The imaginary part is always positive. On the lattice, you can access this quantity, which comes with strong constraints.

Elastic amplitudes include processes like \(\pi\pi \to \pi\pi\), \(\pi\pi \to KK\), \(\pi K \to \pi K\), and \(\pi \eta \to \pi \eta\). Additionally, you can control the quark masses, allowing you to make pions massless or massive.

Important

Changing the pion mass significantly alters the elastic region. When pions are massless, the \(2\pi\), \(3\pi\), \(4\pi\), and higher thresholds all coincide. However, if you increase the pion mass, a large region opens where the \(2\pi\) channel is accessible but the higher thresholds (e.g., \(3\pi\) or \(4\pi\)) remain closed.

This creates a broad elastic region where unitarity constraints provide nearly parameter-free parameterizations, enabling deeper physical insights. By varying the quark masses, you can explore how these thresholds shift and how the elastic amplitude behaves in different regimes.

Resonances and Bound States in the K Plane

The parameterization in the \(K\) plane reveals deep physical insights. By adjusting quark masses, particles can transition across the \(K\) plane, allowing bound states to become resonances or virtual states. This analytic continuity shows that these phenomena are intrinsically connected — bound states, virtual states, and resonances are all manifestations of QCD under different conditions.

Important

The quark mass determines whether a state appears as a bound state, resonance, or virtual state. As masses vary, particles “travel” through the \(K\) plane, smoothly transitioning between these regimes.

This demonstrates the unified nature of QCD phenomena, where seemingly distinct states are part of a continuous spectrum tied to quark mass dynamics.

Introduction and Exercise Sheet Announcement

How these states of QCD manifest themselves depends on the quark mass: one kind of state can transition into another as the mass is varied.

Note

The exercise sheet has been prepared and printed for you. I worked on it overnight, and Ilya handled the printing.

Thank you for attending today, and I apologize for running slightly overtime. Well done.