Differences between reflection and
earthquake seismology
Let us examine the scope of
each field in order to understand the differences and similarities a
little better.
Seismology is literally the
study of earthquakes (i.e. seismos comes from the Greek word
meaning earthquake and logos means science, also derived from the
Greek). So, what is seismology and what's the difference between
earthquake seismology and reflection seismology?
1) Causes of waves/propagating
vibrations generated by Earthquake & Reflection Seismology
In Earthquake Seismology:
The uppermost part of the
earth is divided into a rigid shell about 100 km thick that
comprises both rigid mantle and rigid crust. This shell (known as
the lithosphere) is broken into roughly a dozen plates, whose
boundaries do not coincide with the continents.
As these plates move with
respect to each other, you can imagine large stresses build up at
the edges where these plates meet. Plates move relative to one
another at rates of roughly 2 to 10 cm/year.
image of plates in motion
and plate boundary earthquakes
It is obvious in this simple
description that there can be a significant build up of stress as a
result of plate movement. Rocks can accumulate stress to a small
degree, but when the stresses exceed the rock strength the rock
fractures along a plane of weakness (a fault).
When the rocks rebound back to
their original position, or close to it, the sudden motion shakes the surrounding rock,
creating a vibration that propagates in all directions.
In reflection seismology
we tend to use artificially induced vibrations created by a variety
of seismic sources:
SEISMIC RESOLUTION
Vertical Resolution
Seismic resolution is the ability to distinguish separate features; the minimum
distance between 2 features so that the two can be defined separately rather than
as one. In vertical-incidence reflection seismology we think of resolution in the vertical
sense, but it is a concept that can be applied in the horizontal
sense as well.
A yardstick that we can use in the seismic realm is a wavelength.
In order for two nearby reflecting interfaces to be seen well, they have to be
separated by no less than 1/4 wavelength; in other words, the
layer thickness has to be no less than a certain value if we are to
resolve the top and bottom of the layer (the Rayleigh criterion).
However, if we have a good idea of what the geological thicknesses
are we can by additional sophisticated modeling 'improve' resolution down to 1/8 wavelength.
e.g. Velocity of the seismic wave = frequency x wavelength (V = f λ)
shallow earth: 2000 m/s, 50 Hz, λ = 40 m
deep earth: 5000 m/s, 20 Hz, λ = 250 m
(Assumption: seismic signal has one frequency and that seismic waves travel at
one velocity)
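The λ = V/f arithmetic above, together with the λ/4 Rayleigh limit, can be sketched in a few lines of Python (the velocities and frequencies are the example values from the notes):

```python
def wavelength(velocity_m_s, frequency_hz):
    """lambda = V / f, assuming a single frequency and a single velocity."""
    return velocity_m_s / frequency_hz

def rayleigh_limit(velocity_m_s, frequency_hz):
    """Minimum resolvable layer thickness: ~1/4 of the wavelength."""
    return wavelength(velocity_m_s, frequency_hz) / 4.0

# Shallow earth: 2000 m/s at 50 Hz -> lambda = 40 m, resolvable bed ~10 m
# Deep earth:    5000 m/s at 20 Hz -> lambda = 250 m, resolvable bed ~62.5 m
shallow_limit = rayleigh_limit(2000, 50)
deep_limit = rayleigh_limit(5000, 20)
```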
There is a practical limitation in generating high frequencies that can penetrate
large depths.
The earth acts as a natural filter removing the higher frequencies more readily than
the lower frequencies.
In effect the deeper the source of reflections, the lower the frequencies we can
receive from those depths and therefore the lower resolution we appear to have from
great depths such as the middle crust. Often we presume that the lower crust is more
homogeneous, but that may be a human perception born of poor resolution.
One could argue that we could simply increase the power of our source so that
high frequencies could travel farther without being attenuated. However, larger power
sources tend to produce lower frequencies.
(Sheriff, 1995, Figure 7.31, p. 218)
Vertical resolution decreases with the distance traveled (hence depth) by the
ray because attenuation robs the signal of the higher frequency components more readily.
Horizontal Resolution and the First Fresnel Zone (Yilmaz, 1988, p. 470)
Lateral resolution refers to how close two reflecting points can be situated horizontally,
and yet be recognized as two separate points rather than one.
If we only think of rays then we never have any problems with resolving the lateral
extent of features because a ray is infinitely thin, has infinite frequencies, and
can detect all changes.
(We will deal with the concept of rays
versus waves later in the semester).
However, when we deal with waves (reality) a reflection is not energy from just
one point beneath us. A reflection is energy that bounces back at us from a region.
As wavefronts are really non-planar, reflections from a surface are returned from
over a region and over an interval of time. Signal that comes in at about the same
time cannot be separated into individual components. So, we see that reflections
that arrive almost coincident in time at the receiver come from a region.
The area that produces the reflection is known as the First Fresnel Zone: the reflecting
zone in the subsurface insonified by the first quarter of a wavelength. If the wavelength
is large then the zone from which the reflected returns come is larger and the
resolution is lower.
Horizontal resolution depends on the frequency and velocity.
Equation: See handouts from AAPG Explorer and class handouts
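The handout equation is not reproduced here, but one widely used approximation for the radius of the first Fresnel zone is r ≈ (V/2)·√(t/f), with t the two-way traveltime. A sketch (the velocity, time, and frequency values below are illustrative, not from the notes):

```python
from math import sqrt

def fresnel_radius(v_m_s, twt_s, freq_hz):
    """Approximate radius of the first Fresnel zone: r ~ (V/2) * sqrt(t/f)."""
    return (v_m_s / 2.0) * sqrt(twt_s / freq_hz)

# Lower frequency -> larger Fresnel zone -> poorer horizontal resolution
r_30hz = fresnel_radius(3000, 2.0, 30)   # ~387 m
r_10hz = fresnel_radius(3000, 2.0, 10)   # ~671 m
```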
KEY WORDS:
SEISMIC ACQUISITION GEOMETRY
Common-Midpoint Method
Normal moveout is the time difference between any point
on a reflection hyperbola and its apex.
Shotpoint gather: the data collected
from a single shot, gathered together as one record.
Common-Midpoint Method (by W. Harry
Mayne)
Now that we've seen the limitations involved with imaging subsurface data let's
look at the geometrical techniques used in collecting field reflection data. In general,
data collection always aims to collect data with the least amount of noise.
The cornerstone technique for collecting seismic reflection data while reducing noise
is known as the CMP/CDP method (common-midpoint method is the better description,
although CDP is more commonly used).
Seismic Sources and Receivers
When we collect data we generate a seismic signal using, for example, a
water gun, an airgun, dynamite, or a hammer, and collect the data with detectors.
Detectors at sea are known as hydrophones: pressure transducers, i.e. electronic equipment
that converts mechanical energy (pressure) into electrical energy (voltage). Dual-sensor
ocean-bottom cables combine the advantages of geophones and hydrophones.
On land the detectors are known as geophones;
these convert vertical velocity ground motion into electrical energy. You can have
differently oriented geophones to detect the various components of ground motion.
Signal-to-Noise Ratio
All data collected has some background noise such as:
cows walking across a paddock, instrumental self-generated noise, wind knocking
blades of grass against geophones, rain, faulty connections, and induced noise from nearby
power sources. If the reflection data that you want to collect is stronger than the
background noise there's no problem. But what happens when the returning signal
has traveled far, has lost its high frequencies to attenuation, and has suffered geometric spreading?
Harry Mayne thought that noise was on average random, so that if we stacked/summed
various copies of the reflections from one point, the noise would cancel out but
the signal would add coherently.
We describe the ratio of wanted to unwanted data as the signal-to-noise ratio:
if S/N = 1 for x geophones,
S/N is 4 times better for 16x geophones (sqrt(16) = 4).
So Mayne (1962) laid out various receivers/ geophones and shot to all of them simultaneously,
then added the signal returning to all of them. The S/N ratio improves ideally as
the square root of the number of geophones.
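Mayne's argument, that stacking N traces whose noise is independent and random improves S/N roughly as √N, is easy to check numerically. A sketch with synthetic data (the signal frequency, noise level, and random seed are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 25 * np.linspace(0, 0.2, 500))

def stacked_noise_rms(n_traces):
    # Each trace = same signal + independent random noise; stacking
    # (averaging) the traces cancels noise roughly as 1/sqrt(N)
    # while the coherent signal is preserved.
    noise = rng.normal(0, 1.0, (n_traces, signal.size))
    stack = (signal + noise).mean(axis=0)
    return np.sqrt(np.mean((stack - signal) ** 2))

r1 = stacked_noise_rms(1)
r16 = stacked_noise_rms(16)
# residual noise after stacking 16 traces is ~4x smaller -> S/N ~4x better
```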
So, as we move along at fixed distances and fixed shot time intervals, different
receivers receive reflections from the same position at
depth. For a fixed shot, all the data received and collected is known as a shotpoint gather.
Because we want information from the same position on the seafloor we reorder
the traces according to where the wave reflects from the seafloor. This reordering
is known as a Common-Midpoint gather (of traces) or, more commonly, as a
Common-Depth-Point gather (of traces). With further accounting we can associate
the source and receivers with the same midpoints. In the following picture, different
shots with different receivers sample the same position on the seafloor. After three
shots the distribution of CMP is:
Insert link to picture
In detail these curves and lines are composed of signal received from many adjacent
detectors. The vertical axis is TWTT (two-way travel time, in s) and the horizontal
axis is the distance between the individual detector and the source. The data is
received as a series of sampled voltages and displayed with parts of the wiggle
shaded in:
Insert link to picture
How are we going to sum the data if it's on a hyperbolic path? We can calculate
a hyperbolic path and add the data along it, or we can move the traces up by an amount
predicted by the hyperbola equation.
Normal Moveout
In practice, it is easier to shift the traces by ΔTi and
sum them across. We have to estimate the velocity of the medium and move the traces
out until they are horizontal, and then we can sum or stack them:
Insert link to picture
How do we calculate this ΔTi?
ΔTi is known as the normal moveout: the variation of reflection
arrival time because of variation in the shotpoint-to-geophone distance. Normal moveout
is the time difference between any point on a reflection hyperbola and its apex.
The normal moveout correction is equivalent to moving your receiver over to the explosion, i.e.
no offset. Otherwise, you'd have to take many shots in one location, but that takes
a longer period of time.
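For a single flat layer the moveout follows directly from the reflection hyperbola, t(x) = √(t0² + x²/V²), with ΔT = t(x) − t0. A minimal sketch (the offset, zero-offset time, and velocity are illustrative values):

```python
from math import sqrt

def reflection_time(x_m, t0_s, v_m_s):
    """Hyperbolic traveltime: t(x) = sqrt(t0^2 + x^2 / V^2)."""
    return sqrt(t0_s ** 2 + (x_m / v_m_s) ** 2)

def nmo(x_m, t0_s, v_m_s):
    """Normal moveout: time difference between the hyperbola at
    offset x and its apex (x = 0)."""
    return reflection_time(x_m, t0_s, v_m_s) - t0_s

# 1000 m offset, 1 s zero-offset time, 2000 m/s -> ~0.118 s of moveout
dt = nmo(1000, 1.0, 2000)
```

Subtracting this ΔT from each trace flattens the hyperbola so the traces can be stacked.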
SIGNAL PROCESSING
KEY WORDS
Sampling theorem
The theorem determines that you must sample your data at least twice per wavelength
(for the frequency being considered)
Nyquist frequency is half the sampling frequency, or the frequency above which
higher frequencies are aliased (i.e. appear as lower frequencies) because of insufficient
sampling.
Nyquist frequency is 1/2 the sampling frequency:
N_{f} = 1/(2 × SI),
where SI is the sampling interval
Phase: the angle of lag or lead in a sine wave with respect to a reference.
Radian: when the arc equals the radius, the angle subtended is 1 radian.
Introduction
Because of the large amount of information theory developed at MIT in the 1950s
and the advent of digital computers, signal recording is now done digitally (i.e.,
not continuously, but through discrete sampling at fixed time intervals). Processing
is done once the signals are stored digitally.
In order to carry out seismic processing we need to review some basic concepts
in signal theory:
Sampling Theorem
Although the data is collected at discrete intervals of time of the order
of milliseconds, there is a basic limitation to how the sampling is done.
The rule is that you must sample at least twice per wavelength (for the frequency
being considered) if you are to be able to capture the frequency during sampling
(example); this requirement stems from the Sampling
theorem.
e.g. 125 Hz, or 125 oscillations per second. We must sample at least 2 times per
oscillation, i.e. 250 times/sec, i.e. every 4 ms.
(Sampling rate is the frequency at which we sample the seismic signal.)
So, what happens with frequencies over 125 Hz? According to the Sampling Theorem,
N_{f} + Δf will be indistinguishable from N_{f} − Δf.
That is, a 175 Hz signal will appear like a (125 − 50) = 75 Hz signal.
That is, signals of frequency greater than N_{f} will alias as lower frequencies.
example
So, it's critical when you record data that you know which frequencies you
want to keep and which you cannot trust. Once you know these facts you can design
the sampling interval. Just to be on the safe side, and to make sure that you don't
allow any aliased frequencies to come through, the cutoff frequency is sometimes
made half the Nyquist frequency. For example, if SI = 4 ms and N_{f}
= 125 Hz, electronic analog filters are put in place that start to eliminate
signal at about 62.5 Hz.
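The Nyquist and folding arithmetic above can be sketched directly (the 4 ms, 125 Hz, and 175 Hz → 75 Hz values are the example from the notes):

```python
def nyquist(si_s):
    """Nyquist frequency for a sampling interval SI (in seconds)."""
    return 1.0 / (2.0 * si_s)

def aliased_frequency(f_hz, si_s):
    """Apparent frequency after sampling: input frequencies fold
    about the Nyquist frequency."""
    fs = 1.0 / si_s          # sampling frequency
    f = f_hz % fs
    return f if f <= fs / 2 else fs - f

nf = nyquist(0.004)                  # 125 Hz for 4 ms sampling
fa = aliased_frequency(175, 0.004)   # 175 Hz appears as 75 Hz
```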
The Geometrics Strataview seismograph we will use in the field automatically adjusts
its input filters so that the data is not aliased.
Why are all these theorems so necessary? How do you decide what frequency you
want?
Well, that will depend on how much vertical resolution you need to have of the subsurface.
Linear frequency vs. angular
frequency
When we used
V = λ × frequency
we were relating how far a wave travels during one oscillation to its speed.
Now we are considering the behavior of a wave at a fixed point in space and are
seeing how it changes through time. How do we relate the sine of an angle to time? Since
sine waves of a fixed frequency repeat every 360° we can relate time
to degrees, expressed as radians.
We do this by working out in degrees where we are in time along a repeating oscillation.
Insert diagram here
A(t) = A sin(degrees)
How often does the curve cross the 0line?
In terms of degrees we can also express this periodicity:
at sin(0°), sin(180°), sin(360°), sin(540°), etc. Because the sine
wave repeats itself regularly we can use the regular repetition of degrees in a circle
to express the same concept in radians.
So,
A(t) = A sin[(2π/T) t]
= A sin(ωt)
The quantity ω in the brackets is the angular frequency,
related to the ordinary linear frequency we thought of before:
A(t) = A sin[(2πf) t],
where f is the more familiar linear frequency, f = 1/T, expressed in Hz.
We can view angular frequency as how many times per second a single-frequency
wave goes through 2π radians.
Insert link picture of a radian
For a plot of amplitude versus time we have:
Insert link to picture here
(N.B. In its simplest form a wave is described as being composed of sine and
cosine waves of a single frequency added together: Fourier analysis.)
The units of angular frequency are radians per second.
The frequency is expressed in cyclical radians for greater convenience:
ω means that the signal takes T seconds to cycle through 2π
radians.
Now, with angular frequency, we measure how long an oscillation of a fixed frequency
takes to cycle completely back to its beginning point.
By using radians we choose to focus on the angular cyclicity of an oscillation
instead of how far it travels along the horizontal.
Phase Shift
"How far a wave has advanced, in degrees or radians, with respect to
some reference point."
Include link to Figure here
We have displaced the second wave forward in time, or back in time (remember, the wave
oscillates for all times and angles).
As a result of moving this wave forward, each + is now a minus: the second wave
is the first wave with its polarity reversed. How far has the second wave moved forward? π
radians (or 180°).
In dealing with sea-surface reflections we found that waves reversed their polarity.
That's the same as saying that their phase was shifted forward 180° or backward
180°.
What if we shift a sine wave 90° backward?
click here
How far forward do you need to shift it to turn a sine into a cosine? Sine and
cosine functions are phase-shifted versions of each other.
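The claim that sine and cosine differ only by a 90° (π/2) phase shift can be verified numerically:

```python
import numpy as np

# A sine advanced by pi/2 radians (90 degrees) is a cosine.
t = np.linspace(0, 1, 1000)
w = 2 * np.pi * 5                    # angular frequency of a 5 Hz wave
shifted_sine = np.sin(w * t + np.pi / 2)
cosine = np.cos(w * t)
match = bool(np.allclose(shifted_sine, cosine))   # True
```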
Let's now express the phase angle mathematically:
A(t) = A sin[(2π/T)t − φ], where φ is the phase angle.
If the phase is 45°, then
A(t) = A sin[(2π/T)t − π/4]
For example, let the angular frequency be 1 and t equal the phase angle; then
sin[(2π/T)t − π/4]
= sin(0) = 0, i.e. the zero crossing has been delayed to t = π/4.
So, in order to move a signal forward in time, you have to make it arrive earlier,
and to do this you have to apply a negative phase shift, and vice versa.
Fourier Theory
Introduction
As I said before, discretization of the waveform allows us
to analyze seismic data using the digital computer, and the key aim of all processing
is to enhance the geological signal over the background noise. This might mean
(1) filtering unwanted background noise, or
(2) designing
the best source possible.
The digital computer also allows us to filter the data once
it has been collected, but in order to achieve this we must decompose the complex
signal into its simpler parts. This can be done with the help of a procedure known
as Fourier Theory. With Fourier Theory we can describe waves by breaking them
down into frequency and phase components.
There is a theorem (thanks to the illustrious French mathematician
Monsieur Fourier) that a sufficiently long continuous
signal, no matter how complex looking, can always be decomposed
into fundamental frequencies.
We achieve this transformation by working on signals that have
been collected as a function of time and expressing them as a function of frequency.
This transformation views complex signals as the summation of simple
singlefrequency signals. The Principle of Superposition assumes that these signals
can be added up independently of each other.
If we are working on combining the simpler components
of a signal into an end result that looks like the signal we collect and interpret
we call this a Fourier synthesis. However,
if we are decomposing a signal into its simpler components we call this a Fourier analysis.
Not only is it mechanistically simpler to break a complex
signal down into smaller parts, but the arithmetic also becomes much faster on a computer thanks to a very clever
algorithm known as the Fast Fourier Transform, or FFT. We could
also filter in the time domain, but we would need to carry out more individual
steps involving sums and multiplications, whereas in the frequency domain we need
only multiply.
Applications: Filtering
with Fourier
How can Fourier transforms remove unwanted data/noise?
Filtering a frequency out is a complicated procedure known
mathematically as a convolution. However, in the frequency domain, filtering is accomplished
by multiplication.
Let's see how we would represent a single-frequency signal
in the frequency domain:
click here
If we had a more complicated signal:
click here
Now you can also see an even more complicated signal, such as a seismic
source:
click here
Fourier analysis allows the removal of certain frequencies
by multiplying them by 0 in the
frequency domain.
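A sketch of this frequency-domain filtering: transform, multiply the unwanted band by zero, transform back. The 10 Hz "geology" and 60 Hz "noise" frequencies and the 30 Hz cutoff are illustrative choices:

```python
import numpy as np

fs = 1000.0                          # sampling frequency, Hz
t = np.arange(0, 1, 1 / fs)
# Composite signal: 10 Hz "geology" plus 60 Hz "noise"
sig = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)

spec = np.fft.rfft(sig)
freqs = np.fft.rfftfreq(sig.size, 1 / fs)
spec[freqs > 30] = 0                 # multiply the unwanted band by zero
filtered = np.fft.irfft(spec, n=sig.size)

# The 60 Hz component is essentially gone:
residual = np.max(np.abs(filtered - np.sin(2 * np.pi * 10 * t)))
```

In the time domain the same operation would be a convolution; here it is a single multiplication.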
Applications: Spikes and
Fourier
How can Fourier transforms help us design the optimal source?
What is the ideal source that we can use to best resolve subsurface
features?
It is not sufficient to just have high frequencies. The ideal
source must be rich in ALL frequencies. The more frequencies that are added in, the narrower the
source wavelet appears.
A spike.
But what does a spike look like in the frequency domain? What's the dominant frequency?
The best signal is one that has energy at all frequencies ("rich
in frequencies"), or one that has a white spectrum:
click here
Intuitively, one might guess that a single ultra-high-frequency
signal would have the greatest resolving power, because resolution depends on the
wavelength, and the smaller the wavelength, the greater the frequency.
However, a single-frequency signal with many sinusoids makes a very long wavelet
and decreases the resolution. The best source is very short
in the time domain in order to maximize the resolution. Note
that we are not contradicting ourselves: when we explained resolution previously
we spoke of the dominant frequency. Dominant frequency
is the nominal frequency carrying the greatest energy in a wavelet. We single
out dominant frequencies in simple calculations to get a practical sense of the resolving
power of the wavelet. But remember that the wavelet must be very short. While every
reflector in the subsurface is unique and theoretically very thin, a reflector will
produce as many returns as there are pulses in the downgoing wavelet.
That is, for every single interface we will create several pulses in our seismogram,
making interpretation ever more difficult.
Insert link to drawing here
What is the ideal source that we can use to best resolve subsurface
features? How can we shape the signal to make it narrower? Add more frequencies.
In the limit, an infinitely thin signal (known as a spike) is rich in all frequencies
at zero phase. This comes from Fourier synthesis.
If we analyze what happens when we add (superposition), we'll
find that the more frequencies we add the narrower the signal.
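A quick numerical check of this superposition argument: summing zero-phase cosines at more and more frequencies produces an ever narrower central pulse. The frequency spacing and time window below are illustrative choices:

```python
import numpy as np

t = np.linspace(-0.05, 0.05, 1001)

def wavelet(n_freqs):
    # Superpose zero-phase cosines at 10, 20, 30, ... Hz; the more
    # frequencies we add, the narrower the resulting pulse at t = 0.
    freqs = np.arange(10, 10 + n_freqs * 10, 10)
    return sum(np.cos(2 * np.pi * f * t) for f in freqs) / len(freqs)

def half_width(w):
    # width of the region where amplitude exceeds half the peak
    return np.sum(w > 0.5 * w.max()) * (t[1] - t[0])

# 20 frequencies give a much narrower pulse than 3 frequencies
narrowing = half_width(wavelet(20)) < half_width(wavelet(3))
```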
Fourier Analysis:
Amplitude and Phase
However, describing a waveform by its amplitude spectrum
in the frequency domain alone is not sufficient, because we would be losing the
phase information, i.e. when the signal first appears.
Insert link to drawing here
The same mathematics that allows us to break the signal up
into its component frequencies also allows us to break it into phases. We express
the phase relationships on a phase diagram:
In Figure 1 of Yilmaz's book we see three examples of sinusoids
(sine waves with different phases, i.e., also considered cosine waves with different
complementary phase values). As the sinusoid advances in time the phase has a more
negative value, verifying what we saw previously.
Applications of Phase: Constant and linear phase shifts
Interpreters are sometimes more comfortable with data whose
phase has been shifted to 0 phase. This process is known as wavelet shaping, and as
a result the positive-negative couplets are reduced to single peaks. This procedure
requires that we apply a constant
phase shift to all frequencies in the data.
We can see from the following example the effect of applying different phase shifts
on the shape of the wavelet:
click here
Yilmaz has several examples in his book: click here.
Whereas a constant phase shift will change the
signal shape, a linear
shift in the phase will move the signal in time but
not change its shape: click
here, click here
If you want to change both the shape and
arrival time of your wavelets you can combine a linear phase shift with a constant phase shift.
Decibels:
db = ratio of powers: 10 log_{10}(P_{1}/P_{2})
e.g. P_{1} = 100, P_{2} = 10; then
P_{1}/P_{2} = 10
log_{10}10 = 1
db = 10
e.g. P_{1} = 4, P_{2} = 1;
then
P_{1}/P_{2} = 4
log_{10}4 ≈ 0.6
10 log_{10}4 ≈ 6
octave = f, 2f, 4f, 8f
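The decibel arithmetic above in code form:

```python
from math import log10

def db(p1, p2):
    """Decibel ratio of two powers: 10 * log10(P1 / P2)."""
    return 10 * log10(p1 / p2)

ten_db = db(100, 10)    # power ratio 10 -> 10 db
six_db = db(4, 1)       # power ratio 4  -> ~6 db (one factor of 4 in power)
```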
RAYPATHS in the SUBSURFACE
Rays as simplified wave descriptors
Advantages of Rays
I will often analyse a reflection seismology problem in terms
of rays, although sound is more accurately described as travelling as waves. Rays
are easier to visualise than wavefronts. Rays are ideal and have an infinite frequency
content, so they can image all objects regardless of how small they are. That is to
say, a ray of light or sound has only one dimension and therefore is
always smaller than any object on which it impinges. This theoretical approximation
only works if we have waves with infinitely high frequency content.
Rays are one common way to simplify reflection seismology problems.
(Assuming flat wavefronts is another)
Limitation of Rays
However, rays do not explain diffraction.
In an isotropic medium, as I have said, a seismic wave travels in
a direction normal to the wavefronts, and it is convenient to represent this normal
as a seismic ray:
A drawing of raypath vs. wavepath example
We know the limitations of rays in an intuitive way borrowed
from optics, from which the term ray is adopted. In optics we know that a
ray (or pencil of light) will travel around an object and not only travel on a
straight path. When?
When the size of the object is very small in comparison to
the wavelength, the object will cause diffractions.
Figures of diffractions here!
So, rays are a useful concept when the size of the object we
are trying to examine is several times the size of the wavelength. Rays cannot predict
wavefront paths if the obstacle is much smaller than the wavelength. When this situation
occurs, point obstacles (a relative-size problem) scatter energy in all directions
(diffraction), and full wave theory is needed to explain the wavepaths and amplitudes,
which are not the same in all directions.
Rays do not work well at discontinuities.
We can use rays to simulate a diffraction if we apply Huygens'
principle to a point reflector and consider it to radiate energy in all directions
(a later class will include a demonstration of Snell's law).
Interpreting an offset versus traveltime
diagram
As we have seen during applicatin of bandpass filters there is a need to identify
the type of seismic arrival in order to distinguish noise or bad data from the data
which we sish to keep (signal)
Let's begin by examining a typical
offset versus traveltime data set and identifying each
of the key arrivals
as follows:
(N.B. the way we display data in class is commonly to place
the increasing TWTT (s) pointing down, whereas the earthquake seismological community
may do the contrary; click here for such an example.)
There are several arrival geometries:
Direct wave: The direct wave (equation) is a good arrival with which to re-examine the concept of destructive interference,
because beyond a certain distance from the source the D-wave, or direct wave, dies
out. The reason for this is that the 180-degree phase-shifted reflection from the
air-water interface interferes destructively with the ray travelling directly (the D-wave)
from the source to each of the receivers.
click here for a diagram of the D-wave components as a function of source-receiver
distance
Hyperbola
Single-layer reflections in offset-traveltime plots can be
considered to have the geometry of a hyperbola. In the parametric case shown above,
a and b are geophysical parameters:
the horizontal x axis is the source-receiver offset in meters and the vertical y axis
is the TWTT (s). In this formula, we will show later that
b = 1/(4h^{2}) and a = V^{2}/(4h^{2}),
where h is the depth to the reflector and V is the root-mean-square average (V_{RMS})
velocity to the reflector.
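We can check that the parametric hyperbola a t² − b x² = 1, with a = V²/(4h²) and b = 1/(4h²), is the same curve as the single-layer traveltime equation t = √(4h² + x²)/V (the offset, depth, and velocity below are illustrative):

```python
from math import sqrt, isclose

def t_reflection(x, h, v):
    """Two-way reflection time for a single layer: t = sqrt(4h^2 + x^2) / V."""
    return sqrt(4 * h ** 2 + x ** 2) / v

def hyperbola_check(x, h, v):
    # Parametric form a*t^2 - b*x^2 = 1 with a = V^2/(4h^2), b = 1/(4h^2)
    a = v ** 2 / (4 * h ** 2)
    b = 1 / (4 * h ** 2)
    t = t_reflection(x, h, v)
    return isclose(a * t ** 2 - b * x ** 2, 1.0)

ok = hyperbola_check(800, 500, 2000)   # the two forms agree
```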
Precritical arrivals
Precritical reflections:
Postcritical arrivals
Refractions:
The critical distance is calculated from the following diagram.
One definition, for the simple case of a single-layer refractor, is that it is the distance
at which the refracted rays begin arriving before the reflected rays. This distance
is important in some situations because the reflected energy near the critical distance
is phase-distorted. Phase distortion is both useful and harmful. It is useful because
the new wave shape can be interpreted with sophisticated methods to determine the
presence of fluids below the first layer. It is harmful because if we stack
the new wave shapes near the critical distance with wave shapes that are almost undisturbed
at smaller source-receiver offsets, then the differences may lead to destructive interference
and a decrease in signal quality. Hence it is important that we be able to calculate
this critical distance, at least for a simple case:
There are more sophisticated cases where more than one layer is considered, but we
will not derive them in class; I will leave them for a later homework exercise.
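For the single-layer case the usual calculation goes through the critical angle, θc = arcsin(V1/V2), giving a critical distance xc = 2h·tan(θc), where h is the layer thickness. A sketch with illustrative layer values (not from the notes):

```python
from math import asin, tan

def critical_distance(h_m, v1_m_s, v2_m_s):
    """Single-layer refractor: theta_c = asin(V1/V2), x_c = 2 h tan(theta_c).
    Requires V2 > V1 (no critical refraction otherwise)."""
    theta_c = asin(v1_m_s / v2_m_s)
    return 2 * h_m * tan(theta_c)

# 100 m layer, 1500 m/s over 3000 m/s -> theta_c = 30 degrees, x_c ~ 115.5 m
x_c = critical_distance(100, 1500, 3000)
```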
Head Waves
Head waves are the arrivals that appear on the data as postcritical
refractions. One model of the earth views these seismic data as rays which travel
along the interface (critically refracted rays) and which generate rays returning
to the surface at the same velocity. While the geometry of these arrivals can explain
the perceived velocity on the receiver-offset versus traveltime displays, as we said
before these rays do not exist. What really exists are waves which refract near the
interface and generate what are known as whispering gallery waves. These types
of waves are beyond the immediate scope of this course.
Reflection Hyperbola
Let's demonstrate that a reflection from the
bottom of a single layer is perceived as a series of seismic wavelets that line up
along the branch of a theoretical hyperbola. Let's first start by considering the
following diagram:
Let's now derive the equation.
MECHANISMS
THAT CHANGE SIGNAL SIZE AND SHAPE
Let's examine the reasons for which a signal changes shape
and size as it travels through the earth along different paths and distances, and let's
do this in a more systematic way than we have thus far. The signal, as well as its
time of arrival at a certain distance from the source, contains valuable subsurface
information. As we may have mentioned, there are several causes of amplitude change.
They are, namely:
(1) Geometric Spreading
(1a) Processing techniques to compensate for Geometric Spreading
(read Sect. 1.5, Yilmaz)
(2) Attenuation
(3) Reflection Coefficients
(1) Geometric Spreading
(Spherical divergence)
As we move away from an energy source and make recordings of
the strength of the signal, the intensity diminishes. This occurs not only
because individual frequencies are being filtered out by natural attenuation processes
in the earth, but because the energy that was initially concentrated in a very small
volume around the seismic charge travels as a wave and distributes itself over a wavefront
that increases in size in all directions.
Normally, the effects of spherical divergence can be corrected
during the seismic processing by taking into account the distance travelled by each
part of the seismic trace. Distances can be calculated using velocity and traveltime
information.
Geometric spreading makes the amplitude of a signal fall off
in proportion to the distance traveled by the ray, so if the path length
is doubled the amplitude will decrease by a factor of 2 (and the energy by a factor of 4).
(2) Attenuation (Absorption)
With geometric spreading alone, all the energy that leaves
the source arrives at the receiver. With attenuation, however, some of the energy
is absorbed by the medium. Why? Would this occur in a perfectly elastic medium?
This inelastic behavior is called attenuation. In the ball-and-spring
model, some of the bounce has been converted into heat.
Attenuation:
(1) increases with the distance (r) traveled
(2) affects higher frequencies more readily (i.e. high frequencies
are absorbed first)
A(r) = A_{0} e^{−πr/(λQ)},
where r is the distance traveled, Q is the
quality factor, and π/(λQ) is the absorption coefficient.
1/Q is proportional to the decrease in amplitude a wave
experiences each wavelength as it travels through a given material at a given wavelength
and speed.
Q is about 1000 in rocks with low attenuation,
and about 30 to 500 in sediments.
This means that in hard sediments waves can travel three times as far before their
amplitude decreases by the same amount as in soft sediments. Although
Q is smaller at higher frequencies (i.e. greater attenuation at higher frequencies),
for the range of frequencies we commonly encounter in reflection seismology we can
consider Q to be the same.
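The absorption law above in code, comparing a high-Q and a low-Q material over the same path (the path length, wavelength, and Q values are illustrative):

```python
from math import exp, pi

def attenuated_amplitude(a0, r_m, wavelength_m, q):
    """A(r) = A0 * exp(-pi * r / (lambda * Q))."""
    return a0 * exp(-pi * r_m / (wavelength_m * q))

# Same 2000 m path, 40 m wavelength; only Q differs:
rock = attenuated_amplitude(1.0, 2000, 40, 1000)       # ~0.85 survives
sediment = attenuated_amplitude(1.0, 2000, 40, 100)    # ~0.21 survives
```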
(3) Reflection Coefficients
In general when a wave arrives at a surface separating
two media having different elastic properties, it gives rise to reflected and refracted
waves. We have seen that an incident P-wave can produce 4 other waves if the angle
of incidence at the boundary is not 90 degrees. Because energy must be conserved:
E_{P`} = (E_{P`P`} + E_{P`P´}) + (E_{P`S`} + E_{P`S´}),
where the energy of a wave of amplitude A is
E = (1/2) ρ ω^{2} A^{2}
Through this condition we can determine the relationships between
the amplitudes for a normally incident P-wave, i.e. by looking at the conservation
of this "thing" called energy.
Eventually we arrive at an estimate of how much energy comes
out for the energy that goes in. In the case of normally incident P-waves (the reflection
case) this reduces to a ratio of amplitudes before and after reflection:
the reflection coefficient.
Reflection coefficients in conditions of normal incidence
We start by looking at the simplest case, when the angle
of incidence is 90 degrees (normal incidence):
E_{P`P´} / E_{P`}
We know that:
E_{P`} = E_{P`P`}
+ E_{P`P´}
Illustrate this energy point with a drawing
The two key physical properties responsible for producing a
reflection at normal incidence are the velocity and the density of the medium (I'm
not going to prove this). These values multiplied together are known as the acoustic
impedance. If a velocity does not change across a boundary there won't be refraction
but there will be a reflection because the acoustic impedance has changed.
At normal incidence, whatever does not get reflected
gets transmitted:
Reflection coefficient:
A_{P`P´} / A_{P`} = (difference in acoustic impedance) / (sum of acoustic impedance)
A_{P`P´} / A_{P`} = (ρ_{2}V_{2} − ρ_{1}V_{1}) / (ρ_{2}V_{2} + ρ_{1}V_{1})
(ratio of reflected to incident amplitude)
Transmission coefficient:
A_{P`P`} / A_{P`} = 2(ρ_{1}V_{1}) / (ρ_{2}V_{2} + ρ_{1}V_{1})
(ratio of transmitted to incident amplitude
at normal incidence)
These equations are derived from energy relations, which reduce
to amplitude relations by taking the square root of the energy ratio. They are very
useful even up to angles of incidence of about 20 degrees.
Link to image of reflection coefficients
(and Table 3.1 Sheriff )
Example: the sea water/air interface;
calculate its reflection coefficients.
Let's also calculate reflection coefficients in a solid-rock
example where they come out negative.
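As a sketch of these calculations in Python (the air and water values are standard handbook figures; the two rock layers are purely illustrative, not taken from any table):

```python
def reflection_coefficient(rho1, v1, rho2, v2):
    """Normal-incidence reflection coefficient from the acoustic impedances."""
    z1, z2 = rho1 * v1, rho2 * v2
    return (z2 - z1) / (z2 + z1)

def transmission_coefficient(rho1, v1, rho2, v2):
    """Normal-incidence transmission coefficient; note R + T = 1."""
    z1, z2 = rho1 * v1, rho2 * v2
    return 2.0 * z1 / (z2 + z1)

# Sea water (rho ~1030 kg/m^3, Vp ~1500 m/s) up into air (~1.2 kg/m^3, ~330 m/s):
# the impedance drop is enormous, so the coefficient is close to -1.
r_sea_air = reflection_coefficient(1030.0, 1500.0, 1.2, 330.0)

# A soft layer over a harder one (illustrative values): a positive coefficient.
r_rock = reflection_coefficient(2200.0, 2500.0, 2600.0, 4000.0)
```

Swapping the two media reverses the sign of the coefficient, which is the negative-coefficient case mentioned above.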
Summary of all the above effects
If we combine all of the above effects we can semi-quantitatively
express the total control of these factors on the amplitude:
Final amplitude at receiver = Attenuation(r, frequency) × Geometric spreading(r) × Reflection coefficient
For the range of frequencies we will encounter, losses by attenuation
are insignificant by comparison to the effects of geometric spreading.
Reflection Coefficients when the angle of incidence is not
a right angle, i.e., as a function of offset (AVO)
I'd like to show you what happens to the signal energy if you
are not in conditions of normal incidence.
Figure 3.2(b) from Sheriff, p. 78
Other types of waves, known as shear waves, start to appear. So from an incident P
wave you can produce reflected P and S waves as well as refracted P and S waves.
Please refer to the section on Snell's law, where we discuss this in more detail.
Reflection coefficients are far more complicated and depend on V_{S}, V_{P} and density.
Why? Remember that at non-normal incidence you will begin to create S waves.
In basic physics classes you probably encountered the term critical reflection. Beyond
the critical angle of incidence, all the energy that hits a boundary is reflected
back.
A rule of thumb used in the early days for estimating fluid
content from the amplitude-versus-offset patterns of a given reflector
was that reflection coefficient values increasing with offset revealed a lowered
Poisson's ratio in the layer below the origin of the reflection, and that this lowered
Poisson's ratio implied gas. The first part of the argument can be true, but Poisson's
ratio is sensitive to even a small amount of gas, so the amount of gas cannot
be readily ascertained.
Fig. 3.4 from Sheriff, p. 80
Automatic Gain
Control
Signals lose strength through transmission losses, geometric
spreading and attenuation. These could be counteracted exactly if we knew the velocity
of the medium and its anelastic properties. However, we don't always know these
(we could guess; we have for many other problems!).
The Automatic Gain Control technique is cosmetic: it improves
the look of the data, counteracting the effects of amplitude decay in a rather artificial
way, while improving the structural continuity and stabilizing calculations.
(If there are large numbers in the data it suppresses
them to obtain a smoother, more continuous set of amplitudes. Some calculations cannot
handle very small or very large numbers; it's a numerical limitation.)
There are many ways of controlling the amplitude. Two of the
most common are the fixed time gate and the sliding time gate.
1. Fixed Time Gate
1. The signal is divided into fixed intervals, or windows, or
gates. They can be of any size.
2. Within each gate, we square the amplitudes, average the
squares and take the square root:
this is known as the A_{RMS}.
3. We then require the RMS to take a certain value,
e.g. 2000, and find the factor
by which the A_{RMS} has to be multiplied in order to obtain
2000.
4. That factor is applied at the center of each gate and interpolated
between gate centers:
2. Sliding Time Gate
In a sliding time gate a similar process is carried out, but
the value is assigned to any sample, and the calculation is done by sliding the window
over one sample at a time. If the window you use is small it has the effect of boosting
small amplitudes and deteriorating the signal character
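Both schemes can be sketched in a few lines of Python. The target RMS of 2000 follows the example above; the fixed-gate version applies one scalar per gate and omits the interpolation between gate centers for brevity:

```python
import math

def agc_fixed_gate(trace, gate, target=2000.0):
    """Fixed-time-gate AGC: one gain scalar per gate, from that gate's RMS."""
    out = []
    for start in range(0, len(trace), gate):
        window = trace[start:start + gate]
        rms = math.sqrt(sum(a * a for a in window) / len(window))
        gain = target / rms if rms > 0 else 1.0
        out.extend(a * gain for a in window)
    return out

def agc_sliding_gate(trace, half_width, target=2000.0):
    """Sliding-gate AGC: the gain is recomputed for every sample."""
    out = []
    for i in range(len(trace)):
        window = trace[max(0, i - half_width): i + half_width + 1]
        rms = math.sqrt(sum(a * a for a in window) / len(window))
        out.append(trace[i] * (target / rms if rms > 0 else 1.0))
    return out

# Every gate comes out with the target RMS, whatever its input amplitude:
balanced = agc_fixed_gate([100.0, -100.0, 50.0, -50.0], gate=2)
```

Note how the weak second gate is boosted just as strongly as the first, which is exactly the "cosmetic" behavior described above.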
NOISE
Multiples
Seismic energy that has been reflected more than once is known
as a multiple. An unknown amount of the energy received at geophones and hydrophones
may actually be multiple energy. Recall that every time a ray crosses a reflecting
boundary, on the way up and on the way down, there will be a reflection, generating
in principle a myriad of possible reflections.
Multiple noise only really becomes a problem when it arrives coincident with other
important information.
Let's look at different types of multiples:
(1) Sea-bottom multiples (long-path multiples)
We can examine sea-bottom multiples in CDP/CMP/SP
gathers as well as in stacked sections
Include handouts
Multiples in seismic gathers
Sea-bottom multiples can also be seen as deeper reflections,
but ones that travel at lower velocities than a true reflection from those depths
Handout of seafloor multiples
A deeper reflection will produce a flatter hyperbola. The multiple's hyperbola
will be asymptotic to the direct wave.
However, how would a true reflection from the same depth compare?
Remember that if we assume that velocity generally increases with depth, the real
hyperbola will cut across the extension of the direct wave.
Multiples in stacked seismic sections
Multiples are best distinguished when there is a slight dip
to the seafloor. The dip of the multiples increases with the order of the multiple.
(2) Internal multiples (peg-leg multiples)
Both long- and short-path multiples that reflect repeatedly within a layer beneath
the sea surface or land surface.
Internal multiples can be identified in a seismic normal-incidence
profile (stacked seismic section) by looking for pairs of strong reflectors and then
trying to predict later arrivals separated by the time it takes to bounce between
these two reflectors.
(3) Ghosts
In marine seismic data, ghost energy forms part of the signal returned from each reflector.
Ghosts are a follow-on sea-surface reflection. They cannot be eliminated, only
minimized. If the depths of both the receivers and the shots are known, then you can
filter the ghost out of the signal. The total signal comprises four components:
Include ODP drawing here.
Example: the direct wave at sea is destroyed by a ghost
reflection
The direct wave dies out at sea within a certain distance of a few hundred
meters.
Q. At what distance is the difference in arrival time smaller
than the sampling interval ("window of observation") of about 4 ms? Assume
that the hydrophone and a point source are both 3 meters below the surface of
the water.
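One way to check this question numerically, assuming straight-ray geometry, 1500 m/s water, and source and receiver both at 3 m depth as stated:

```python
import math

def ghost_delay(offset, depth=3.0, v_water=1500.0):
    """Arrival-time difference (s) between the direct wave and its
    sea-surface ghost, with source and receiver both at `depth` metres."""
    direct = offset / v_water
    ghost = math.sqrt(offset**2 + (2.0 * depth)**2) / v_water
    return ghost - direct

# The delay is largest at zero offset: 2 x 3 m / 1500 m/s = 4 ms, exactly one
# sample; it only shrinks as offset grows.
for x in (0.0, 50.0, 200.0, 500.0):
    print(f"{x:6.0f} m offset: {ghost_delay(x) * 1e3:.3f} ms")
```

So under these assumptions the ghost falls within one 4-ms sample of the direct wave at every positive offset, which is why the two interfere.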
PROCESSING SEISMIC
REFLECTION DATA
Definitions
There are three velocities that we will see in reflection seismology
processing:
V_{INT}: the velocity of a subsurface layer,
determined from the travel time through a layer of known thickness; in the context
of today's class it is derived from the V_{RMS} and the TWTTs to the top and bottom of
the layer using the Dix equation.
For real cases we tend to use a type of average velocity
that satisfies the equation for a hyperbola: V_{RMS} (another type of average
velocity).
V_{RMS}: the root-mean-square velocity. It is a
type of weighted average velocity. Weighted means that the thicker the layer in which
the ray travels, the greater its contribution to the final estimated V_{RMS}.
V_{STACK}: the velocity used for stacking data, calculated
from the best-fit hyperbola to gather data through any of the techniques that follow.
Click here for a picture of NMO and stacking and here for tutorials on velocity
analysis: by
former
students and those used for former labs

VRMS
Why do we use VRMS?
The equation for a hyperbola in the two-layer case uses V_{RMS},
and the hyperbolic approximation is a good model to deal with reflections, at least
initially.
If we know the V_{RMS} down to the bottom of a layer
and down to the bottom of the previous layer, we can estimate the individual interval
velocity of the last layer with the Dix equation:
V_{INT} = sqrt[ (V_{RMS,2}² t_2 − V_{RMS,1}² t_1) / (t_2 − t_1) ]
where t_1 and t_2 are the zero-offset TWTTs to the top and bottom of the layer.
Remember that for this equation to hold, you must
have conditions of near-normal-incidence reflections.
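The Dix calculation is short enough to write out directly; the RMS velocities and times below are hypothetical values chosen only to illustrate the arithmetic:

```python
import math

def dix_interval_velocity(vrms_top, t_top, vrms_base, t_base):
    """Dix equation: interval velocity of the layer between two reflectors,
    from the RMS velocities and zero-offset TWTTs to its top and base."""
    num = vrms_base**2 * t_base - vrms_top**2 * t_top
    return math.sqrt(num / (t_base - t_top))

# Hypothetical picks: Vrms 1600 m/s at 1.0 s TWTT, 1800 m/s at 1.5 s TWTT.
v_int = dix_interval_velocity(1600.0, 1.0, 1800.0, 1.5)   # ~2145 m/s
```

Notice that the interval velocity comes out well above both RMS values: the deeper layer must be fast to pull the average up.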
Stacking Velocities
When stacking data we use a velocity that is statistically
derived. We assume that the best stacking velocity gives us the highest-amplitude
signal in the data, and we can derive it in several ways:
1. Constant Velocity Analysis
Through trial and error we move out the traces at different
times with a constant velocity (a V_{RMS})
Click here to see the effects of using
different stacking velocities (too high and too low)
2. Semblance Analysis
Measures the similarity of signals across a CDP gather and is
expressed as:
the sum of energy across traces over an interval of time
... normalized to ...
the sum of energies in each trace
As the semblance increases across a gather the stacking improves,
and the output signal preserves all the summed energy. Think of an ideal signal
as simply amplifying the input:
This is done on a trial-and-error basis too.
click here for drawing of Semblance
Analysis
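The normalization described above can be sketched directly (a gather here is just a list of traces, each a list of samples; a perfectly coherent gather scores 1.0):

```python
def semblance(gather):
    """Semblance over a time window: energy of the stacked trace normalized
    by the total energy in the individual traces. 1.0 means identical traces,
    values near 0 mean the traces cancel when summed."""
    n = len(gather)                     # number of traces
    nt = len(gather[0])                 # samples per trace
    num = sum(sum(tr[t] for tr in gather) ** 2 for t in range(nt))
    den = n * sum(a * a for tr in gather for a in tr)
    return num / den if den else 0.0

# Four identical traces stack perfectly; two opposite traces cancel entirely.
coherent = semblance([[1.0, -2.0, 1.0]] * 4)
cancelled = semblance([[1.0, -1.0], [-1.0, 1.0]])
```

In velocity analysis this number is evaluated along trial hyperbolae; the trial velocity that flattens the event gives the coherent case.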
Limitations
V_{STACK} is a cosmetic ("looks good") best-fit velocity, and it's
derived assuming signals do not change shape with offset.
V_{STACK} is distorted by processes that change the signal:
e.g. NMO stretch and the need for an NMO stretch mute (click
here for an example)
We know that the deeper a reflection, the flatter the hyperbola, and the faster the
velocity, the flatter the hyperbola. Therefore if we correct for a shallow
reflector, we'll over-correct the deeper reflector, so we have to apply
a different correction at different depths. However, because we only have a finite
number of samples, where many hyperbolae cross the data the signal becomes artificially
stretched, essentially filtered to pass only low-frequency signals. Remember that
deep signals already have lower frequencies, and therefore we may have lower resolution.
Let's calculate this difference and see what sort of error arises, using the equation
for a hyperbola.
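The calculation suggested above can be done in two lines per reflector, using the hyperbola t(x) = sqrt(t0² + x²/V²); the reflector times and velocities below are hypothetical:

```python
import math

def moveout(t0, x, v):
    """Two-way travel time on the reflection hyperbola t^2 = t0^2 + x^2/v^2."""
    return math.sqrt(t0**2 + (x / v)**2)

# NMO correction (t(x) - t0) at 1500 m offset for a shallow, slow reflector
# versus a deep, fast one (hypothetical Vrms values):
shallow_correction = moveout(0.5, 1500.0, 1600.0) - 0.5   # large
deep_correction = moveout(2.0, 1500.0, 2500.0) - 2.0      # small
```

The shallow event needs a far larger correction than the deep one, so a single correction cannot serve both, and applying the right correction at every time is what stretches the wavelet.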
e.g., near-critical phase changes
e.g., reflected signals can overlap
e.g., there's also random and coherent noise
e.g., we assume that the signal-to-noise ratio is the same in
each trace and down each trace
We've seen how this is done for single-layer cases. How is it done for multiple-layer
cases?
In case you haven't noticed, the equation that we have obtained
for a hyperbola has always been for the single-layer case. If you were to develop
the exact form of a reflection for a double layer, it would be a hideous exercise,
because you'd have to take into account two or more layers with different thicknesses
and different velocities.
Advantages
Multiples will stack at lower velocities than expected, and
on average the signal/noise ratio will improve.
Improving the signal-to-noise ratio means adding signal from
the same point.
Migration
The process by which dipping reflections are plotted in their true
spatial positions rather than directly beneath the midpoint between source
and receiver.

Why do we use migration? Because the assumption of horizontal
layers has been violated. Unmigrated sections give a distorted picture of the subsurface,
the distortion increasing with the amount of dip and structural complexity.
When do we use it ?
1. When we want to remove diffractions:
See figure on diffractions
2. When we want to restore the true dip and position of reflectors
in order to make stacked seismic sections more interpretable.
See figure before and after migration
What are the effects of migration?
1. Migration steepens reflectors.
The dip angle of a reflector in the migrated section is greater
than in the unmigrated section. Migration moves reflectors in the updip direction.
2. Migration shortens reflectors. The length of the
reflector is shorter in the migrated section than in the unmigrated section.
Semicircle Superposition Method of Migration
One way of examining these consequences is to consider the
very simple case of a constant-velocity medium. This method is known as the semicircle
superposition method of migration. The true position of the reflector will be
along the line that is tangent to all the semicircles. Each semicircle represents
the possible positions of the reflection point for one recorded arrival. We swing
arcs whose radius corresponds to the travel time (radius = V × t/2).
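The arc-swinging can be sketched on a simple depth grid (pure Python, constant velocity; the event picks and grid dimensions are made up for illustration):

```python
import math

def semicircle_migrate(events, velocity, dx, nx, nz):
    """Semicircle-superposition migration for a constant-velocity medium.
    `events` is a list of (trace_index, twtt, amplitude) picks; each pick is
    smeared along a semicircle of radius v*t/2 in the output depth grid.
    The true reflector emerges where semicircles interfere constructively."""
    grid = [[0.0] * nx for _ in range(nz)]
    for ix0, t, amp in events:
        radius = velocity * t / 2.0          # one-way distance in metres
        for ix in range(nx):
            horiz = (ix - ix0) * dx
            if abs(horiz) <= radius:
                z = math.sqrt(radius**2 - horiz**2)
                iz = int(round(z / dx))
                if iz < nz:
                    grid[iz][ix] += amp      # smear onto the semicircle
    return grid

# One pick at trace 10, 0.2 s TWTT, v = 2000 m/s -> a semicircle of radius 200 m.
section = semicircle_migrate([(10, 0.2, 1.0)], 2000.0, 20.0, 21, 12)
```

With a single pick the output is just its semicircle; with picks from a real dipping reflector the tangent to all the semicircles is the migrated reflector.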
Limitation
When using migration, one important point to remember is that
toward the edge of your profile the migration cannot be complete. When you restore
reflections to their proper locations you need to restore enough of them that they
interfere constructively to produce the true reflector. If, however, you only have
a partial set of reflections to migrate, you can produce artificial arcs ("migration
smiles").
Insert figure
Application of migration to very complex geology
If your geology is not too complex and you want your results
in time, you can use the migration technique in the time domain (time migration).
If your geology is complex and there are lateral variations
in velocity (not a horizontally layered case), then you'll want a depth-migration
technique. Time-domain algorithms are simply not accurate enough; you'll have
to shoot rays through your model to do depth migration.
If we migrate before stacking, it's because so far we've always
assumed that sources and receivers are coincident. This effect was achieved
during NMO, which implicitly assumed no geological dip (possibly a bad assumption).
So, instead of accumulating errors, why don't we just migrate before the stacking?
(prestack depth migration)
Spatial Aliasing
Reflection seismology attempts to remove noise. A common noise
type is streamer noise, which produces lines across seismic sections. Or
refracted energy. How do we remove these? All we know is that the noise is linear,
with a certain slope. How can we use this information to remove the noise? When we
study this problem we'll also find that if the processing and acquisition have not
been done correctly, misleading stratigraphy may be introduced into the seismic section.
Just as frequencies are aliased, or mistaken, if the sampling
in time is not fine enough, dipping events can also be aliased if we do not have
enough samples of a particular geometry.
Insert figure
How can we avoid ambiguity of dipping events? Add more traces.
What's the maximum shift (dip) that can take place between
traces before you produce an ambiguous interpretation? Half a cycle per trace. How can
we remove this ambiguity? Use a lower frequency, or add more traces.
This problem, known as spatial aliasing, leads to a type
of frequency we have not looked at before. We know of the temporal frequency (oscillations
per second).
Spatial frequency is the number of oscillations per unit distance,
or wavenumber:
1/λ = k (wavenumber)
Now we want to know the number of oscillations over a given
distance (the spatial frequency, in cycles/km, or wavenumber). That is, first you convert
the field into frequencies (frequency spectra).
let's look at the example of spatial aliasing in the overhead:
Insert overhead
For a 36 Hz wavelet, if you have more than 14 ms of shift per
trace you will produce aliasing,
and the same signal can be confused between a negative and a positive
slope!
To eliminate aliasing you can do two things:
(1) use more traces, i.e. decrease the trace separation.
(2) lower the frequency (but you lose resolution)
The Nyquist wavenumber is k_N = 1/(2 × trace interval).
A wavenumber that exceeds Nyquist by Δk folds back and appears as −(k_N − Δk).
The analogy (close, but not exact) is old motion pictures
of stagecoach pursuits: the wheels seem to spin backward because there are not
enough frames per second!
e.g. for a 36 Hz wavelet, T ≈ 28 ms; therefore the shift must
not exceed 14 ms/trace.
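The "half a period per trace" rule is a one-line calculation:

```python
def max_unaliased_shift_ms(freq_hz):
    """Largest trace-to-trace moveout (ms) a dipping event of this dominant
    frequency can have before it aliases: half a period per trace."""
    return 1000.0 * 0.5 / freq_hz

shift = max_unaliased_shift_ms(36.0)   # ~13.9 ms: the "14 ms" rule of thumb above
```

A lower dominant frequency raises this limit, which is exactly remedy (2) above.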
It's a convenient way to look at the data because one
dip becomes a point in a two-dimensional plot. If we could
delete the dips of no interest and reverse the process, we should be able to remove
unwanted dipping noise.
Let's now look in a little more detail at how we can remove
the dips:
In a frequency-wavenumber plot, the farther to the right we go,
the greater the dip, until we reach a point where the dip is so great that we actually
begin to see a negative dip (negative k).
Insert figure
Let's see how we could use this to eliminate real noise:
Insert figure
For a fixed frequency,
the greatest energy would lie where several arrivals have the
same slope. In f-k space, these slopes would appear as:
Insert figure
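The delete-and-reverse idea above can be sketched with a 2D FFT (this is a bare-bones dip filter, not a production f-k algorithm; the section sizes and cutoff dip are made up):

```python
import numpy as np

def fk_dip_filter(section, dt, dx, max_dip):
    """Reject energy dipping more steeply than `max_dip` (s per metre) by
    zeroing the corresponding fan in the f-k plane, then inverse transforming.
    A linear event t = p*x maps to the line k = p*f, so noise of a known
    slope becomes a simple region to mute."""
    spec = np.fft.fft2(section)
    nt, ntr = section.shape
    f = np.fft.fftfreq(nt, d=dt)[:, None]      # temporal frequencies (Hz)
    k = np.fft.fftfreq(ntr, d=dx)[None, :]     # wavenumbers (cycles/m)
    reject = np.abs(k) > max_dip * np.abs(f)   # steeper than the cutoff dip
    spec[reject] = 0.0
    return np.real(np.fft.ifft2(spec))

# A flat (zero-dip) event lives at k = 0 and should pass through untouched:
flat = np.zeros((32, 16))
flat[5, :] = 1.0
filtered = fk_dip_filter(flat, dt=0.004, dx=25.0, max_dip=1e-4)
```

Steeply dipping linear noise falls in the rejected fan and is attenuated, while near-flat reflections survive.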
THEORETICAL
BACKGROUND
Images of
(1) bobcat-mounted auger
and shotgun (San Juan Basin) and
(2) vibrating trucks
There's a trade-off between the energy released
by a seismic source and its dominant frequency. High-frequency sources require less
energy to produce than low-frequency sources. But high-frequency sources are more
easily attenuated with depth.
2) Types of Waves studied
by Earthquake Seismology vs. Reflection Seismology
In Earthquake Seismology we want to
record all the different types of waves created by earthquakes.
Essentially there are two types of seismic
waves: body waves, which propagate through the earth's interior, and surface
waves, which propagate along discontinuities, e.g. the surface of the earth, the bottom of
the ocean, boundaries between geological layers.
The vibrations that are set up naturally involve
either shearing strain or dilations and compressions. These are the two types of
deformation rocks experience.
If, for example, the medium cannot maintain
a type of deformation, e.g. water can't maintain shearing, then certain vibrations
will not be set up.
That is, if we had a homogeneous earth with
no sharp changes anywhere at any scale, we could only detect body waves.
Surface waves are usually the primary cause of destruction and carry the greatest
amount of energy from shallow sources.
Body waves move more quickly than surface waves.
Body waves move more quickly than surface waves.
There are two types of body waves (postulated by S. D. Poisson in 1829):
P waves and S waves.
In P waves, the vibration is in the direction of propagation of the wave;
as a rod moves back and forth, the vibration is carried along its length.
P stands for primary. P waves are about
1.7 times faster than S waves.
S waves, on the other hand, involve a vibration
at right angles to the direction in which the wave propagates. S stands
for secondary, or shear.
The second group of waves depends on the existence
of geological layers. The mathematics is complex and beyond the
realm of this course.
I will limit myself here to saying that as
far as surface waves are concerned we have Love waves and Rayleigh waves. Love and
Rayleigh waves are noise for reflection seismology. Love waves are horizontally
polarized shear waves. Rayleigh waves, named after the British mathematician, produce
retrograde elliptical motion.
In Reflection Seismology, we are concerned
with only P waves (S waves are not generated under conditions of normal incidence,
because none of the motion across an interface can be converted into shearing motion).
V_{P} ≈ 1.7 V_{S}
V_{R} ≈ 0.92 V_{S}
V_{S1} < V_{L} < V_{S2}
e.g., P waves: 1,500 m/s in sea water at 0 °C
P waves: 6,500 m/s in granite
P waves: 8,100 m/s in the upper mantle (peridotite)
P waves: ~4,500 m/s
S waves: 0 m/s in water

3) Differences
in scale between Earthquake and Reflection Seismology
In general there is a difference in scale between
Reflection and Earthquake Seismology.
In Earthquake Seismology, we have very large energy sources and we can study the
earth at the scale of 10,000 km.
In Reflection Seismology, we tend at most to look at
reflections from the Moho (about 30 km deep under continents near sea level).
4) Mathematical
representations of wave propagation
And finally, in Earthquake Seismology we tend
to use more complex mathematical representations, because at that scale of work the
earth is round, whereas in Reflection Seismology it is approximately flat.
Key Words
Energy = (1/2) ρ ω² A²
Amplitude, Frequency and Wavelength are very
important DESCRIPTORS of the behavior of ground motion. We see that they can
help characterize the energy of the motion.
Waves: a perturbation/vibration propagated through a medium
Amplitude: how far the ground oscillates up and
down about the center or line of reference (units: m, volts)
Period: how long we wait for the ground to be at the same
height on three successive occasions (one full cycle). Its inverse is the:
Frequency: how often, in 1 s, we return to the original height
Frequency (f)
Wavelength (λ), "lambda"
Velocity: V = λ × f
Stress: force/unit area
Elasticity
Strain

WAVES
Waves are a type of kinetic energy, or energy
of motion. What is energy? "In physics today we have no knowledge of what energy
is", but we know how to calculate its conservation (Feynman Lectures, Vol.
1, 1963). We can calculate energy and have laws for it (Conservation of Energy), but
that doesn't tell us why things happen.
e.g. imagine you experienced an earthquake.
The ground moved under your feet, up and down, and you could record the elevation
of the floor at each moment (in m). A moment of greater amplitude conveys a sense
of greater motion: greater energy. We can deduce that the energy associated with
the motion of the medium at any point depends on:
ω² (faster motion, greater energy)
ρ (heavier material requires more energy)
A² (greater height, greater energy)
K.E. = (1/2) ρ ω² A²
The 1/2 and the squared terms come from translating our
intuition into the conventional units for energy.
(2) Use of wave descriptors to describe velocity
It's important to make a note here of the
use of frequency and wavelength together.
Velocity of a wave = distance travelled by the
perturbation per unit of time (s); or, said differently, how many wavelengths
(a distance) pass a given point in one second. i.e. how long does one wavelength take to
pass a given fixed reference point in space?
Include sketch of oscillations here
The answer is, by definition, the period of
the wave (a time).
Velocity = λ / T
But 1/T is the number of oscillations per second, or
frequency.
So, Velocity = wavelength × frequency
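Since the wavelength is our yardstick for seismic resolution, it is worth turning this relation around and computing it:

```python
def wavelength(velocity, frequency):
    """lambda = V / f: the resolution yardstick of reflection seismology."""
    return velocity / frequency

# e.g. a 30 Hz wave travelling at 1500 m/s in water has a 50 m wavelength,
# so features much closer together than that start to merge.
lam = wavelength(1500.0, 30.0)
```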
(3) Controls on velocity
Introduction
Wave propagation is the propagation of a deformation
through the rock. This strain (or internal deformation) is not permanent, is very
small, and is completely reversible (elastic strain). In ideal elastic conditions,
the strain experienced by a rock is in direct proportion to the amount of stress
it experiences. The constants of proportionality are the elastic constants, or moduli.
Fig. 2.4
In geological situations
inside rock bodies we always have three principal stresses at right angles to each
other: one vertical (the overburden) and two horizontal.
Formally, stress is a mathematical entity known as a second-order tensor that describes
stress in all directions: in all planes (shear stresses) and at right angles to all
planes (normal stresses). These entities cannot be analysed by
simple vectorial decomposition, but we can describe general stress in terms of three
principal stresses that are known to have a predictable empirical relationship
to faults.
Fig. 2.2
When we say that a rock or sediment has a velocity
we do not mean that the rock travels at a certain speed, but that the deformation,
be it a shearing perturbation as in the case of shear waves or a dilatational perturbation
as in the case of P waves, travels through the medium at a certain pace. For example,
V_{P} in granite is about 6.5 km/s, V_{S} is about 60% of V_{P}, and V_{R}
is about 92% of V_{S}.
V_{P} = sqrt(M/ρ)
where M is a modulus that combines two elastic
constants,
and
V_{S} = sqrt(μ/ρ)
Note that in both cases the velocity varies as the inverse
square root of the density of the medium.
M includes both the dilatational properties
as well as the shearing properties. Why?
Well, when the dilatation propagates
in the direction of wave travel, the material is also deformed in a direction at right
angles to the direction of this propagation.
M is therefore = k + (4/3)μ
V_{S}
Passage of a shear wave, on the other hand,
only requires that we know μ, the shear modulus.
In water, μ = 0 and therefore M = k, which
implies that
V_{P} = sqrt(k/ρ)
V_{S} = sqrt(μ/ρ) = 0
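These two formulas are easy to check numerically. Assuming a bulk modulus for water of about 2.2 GPa (a standard handbook value) and zero shear strength:

```python
import math

def vp(k, mu, rho):
    """P-wave speed: sqrt((k + 4/3 * mu) / rho)."""
    return math.sqrt((k + 4.0 * mu / 3.0) / rho)

def vs(mu, rho):
    """S-wave speed: sqrt(mu / rho); identically zero when mu = 0."""
    return math.sqrt(mu / rho)

# Water: k ~2.2 GPa, mu = 0, rho = 1000 kg/m^3
v_water = vp(2.2e9, 0.0, 1000.0)   # ~1483 m/s, close to the 1500 m/s above
v_shear = vs(0.0, 1000.0)          # 0: water cannot carry shear waves
```

The calculation recovers both facts stated in the text: a P velocity near 1.5 km/s in water, and no S waves at all.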
The velocity of waves through rock is associated with the density and elastic properties
of the medium
Look at Fig. 5.19 from Stacey and Fig. 4.20
from Telford
Where would you predict velocity to be
the greatest? By how much? Calculate!
What's the density of the mantle? And
of the crust?
Now look at Fig. 5.17 from Stacey
Why do our predictions not agree?
The cause of this disparity is that the elastic
moduli increase faster than does density and it is the elastic moduli that dominate.
Elastic constants
Passage of waves through rock depends on rocks being capable
of sustaining certain types of deformation. These types of propagation are described
by a series of constants, or values, that characterize rock elasticity. Each constant
can be expressed in terms of two others: Young's modulus, Poisson's ratio, the bulk
modulus and the shear modulus.
The Stress vector and Geological Stress
Formally, stress is a mathematical entity known as a second-order
tensor that describes stress in all directions: in all planes (shear stresses) and
at right angles to all planes (normal stresses). These entities cannot
be described by simple vectorial decomposition, but we can describe general stress in terms
of three principal stresses that are known to have a predictable empirical
relationship to faults. These we can treat as vectors and describe with an equation,
or by using Mohr's graphical simplification.
Young's modulus: E = (ΔF/A) / (ΔL/L) (Pa)
An elastic constant that reflects the stiffness of earth materials: E is the ratio
of stress to strain. If our aim is to lengthen or shorten a rock without actually
breaking it, then the greater the value of E, the larger the stress that is needed to
achieve the deformation. Strain is non-dimensional, so the units of E are those
of stress.
Bulk modulus: k = −ΔP / (ΔV/V) (Pa)
Another elastic constant, reflecting the resistance of
the material to an overall gain or loss of volume in conditions of hydrostatic stress
(P_H). If P_H increases then the volume decreases, and the volume change is
negative; since P_H always has a positive value, the negative sign on the right-hand
side keeps the relation valid. A material
that has a large bulk modulus can be imagined as consisting of very tightly bound springs
and balls, which will quickly transmit a wave. Ideally, elastic deformation
is instantaneous, but we know that there is some delay in transmitting a deformation.
The stiffer the material, the faster a perturbation should travel.
Poisson's ratio: σ = −(ΔW/W) / (ΔL/L) (dimensionless)
Poisson's ratio (another
dead mathematician) is a ratio that relates deformations in materials at directions
at right angles to each other. Immediately we can predict that perhaps this same
value will tell us something of the ratio of compressional to shear waves, since
they too deform materials in directions that are at right angles to each other. With
σ we can determine the
ratio of transverse contraction to longitudinal extension.
Shear modulus: μ = (ΔF/A) / tan φ (Pa)
This valuable property tells us ahead of time how stiff a material
is against shearing deformation. If a material is very stiff (tightly wound springs), then
it will transmit shear energy very quickly. The shear modulus is the ratio of the
shear stress needed to deform a material by a given angle to the angular strain
(measured as the tangent of the deformation angle). As strain has no units, the
shear modulus has units of Pascals.
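The prediction above, that σ is tied to the velocity ratio, can be made concrete with the standard relation σ = (V_P² − 2V_S²) / (2(V_P² − V_S²)):

```python
def poisson_ratio(vp, vs):
    """Poisson's ratio from the P- and S-wave speeds (dimensionless)."""
    r2 = (vp / vs) ** 2
    return (r2 - 2.0) / (2.0 * (r2 - 1.0))

# For Vp = 1.7 Vs (the ratio quoted earlier), sigma is roughly 0.235.
sigma = poisson_ratio(1.7, 1.0)
```

This is why AVO work cares about σ: anything that changes the V_P/V_S ratio of a layer shows up directly in this number.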

SNELLíS LAW
Definitions
Snell's Law
When a wave crosses a boundary between two isotropic media,
the wave changes direction so that the sine of the angle of incidence (the angle
between the ray and the normal to the boundary) divided by the velocity in the first
medium equals the sine of the angle of refraction divided by the velocity in the
second medium
Refraction
A change in the direction of a seismic ray upon passing into
a medium with a different velocity
Ray parameter
The quantity p = sin θ_i / V_i, which is constant everywhere along a raypath in the
case of horizontal layering.
Reflections
Waves that have returned from a geological interface because
of an impedance contrast and do not enter into the underlying medium.
Diving rays: refracted rays that have been bent
back toward the surface by a zone with a strong velocity gradient.
Head waves: refracted waves that travel along an interface
because the incident wave impinged on the boundary at the critical angle.
Critical angle and critical distance: the angle at which rays begin to be
totally internally reflected and critically refracted; the distance at which the
reflection time equals the refraction time (preferred usage).

Reflection is a specific form of wave behavior.
In the general case we have energy being transmitted into the lower medium at an
angle different from the incident angle. This behavior is known as refraction and is
sufficiently pervasive in reflection seismology data that we should become acquainted
with it.
Refraction and reflection describe the paths
rays take and are based on Snell's Law.
Geometric Confirmation of Snell's Law of Refraction and Reflection (for an isotropic
medium)
Consider two successive wavefronts in a medium
A of velocity V1 that propagate into another medium of velocity V2. The two successive
wavefronts are separated by an interval of time t, and by a distance V1 × t in the
first medium and V2 × t in the second medium.
Let's assume that the boundary between the
two media is stuck together (close to reality), so that if we pull the first medium
up, the second will follow and not separate at the seam, and if you shear the first
medium, the second medium will shear in continuity. This condition implies that all
the energy hitting the boundary will go into propagating the wave, either back into
the first medium or through into the second medium.
Since the second medium is faster, in the
same time t the wavefront will travel a greater distance, although it will travel the
same distance in that unit of time along the interface (a units).
If V1 = V2, the wavefronts in both media are
parallel to each other:
Include mathematical development
of proof
Application of Snell's Law to non-normal incidence (refraction) at an interface
According to Snell's Law we must maintain the horizontal rate of travel of the wavefront
along the interface (i.e. a). We can also see Snell's Law as a law of constancy of the
apparent horizontal speed of the wavefront. Let's add yet another layer.
Include another drawing
t/a is also known as the ray parameter. This
one value characterizes the entire travel path and does not change. The only things
that do change are the angle and the instantaneous velocity of the ray. As you can
see, the wavefront is continually changing direction.
What happens when θ_2 becomes
a right angle?
Include mathematical development
here
The angle of incidence at which the angle of refraction is 90° is known as the
critical angle of refraction, and it's the angle at which the wavefront begins
to travel horizontally.
As we make V2 greater, keeping V1 the same,
the critical angle becomes smaller. That is, if we go from a low-velocity medium,
such as unconsolidated clays (2–3 km/s), into a deeper medium, say through an angular
unconformity into granite, then the large velocity jump will bring us closer to critical
conditions, i.e. the rays will be forced to move horizontally and will not be able
to penetrate more deeply.
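The clay-over-granite case can be worked through with Snell's law directly (2,500 m/s is an illustrative pick from the 2–3 km/s range above, and 6,500 m/s is the granite value quoted earlier):

```python
import math

def refraction_angle(theta1_deg, v1, v2):
    """Snell's law: refraction angle in degrees, or None past the
    critical angle (the refracted ray no longer exists)."""
    s = math.sin(math.radians(theta1_deg)) * v2 / v1
    return math.degrees(math.asin(s)) if s <= 1.0 else None

def critical_angle(v1, v2):
    """Incidence angle at which the refracted ray travels along the interface."""
    return math.degrees(math.asin(v1 / v2))

# Unconsolidated clays (~2500 m/s) over granite (6500 m/s):
theta_c = critical_angle(2500.0, 6500.0)   # ~22.6 degrees
```

So rays hitting the unconformity at anything steeper than about 23° from the normal never penetrate the granite, which is the "forced to move horizontally" behavior described above.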
Snell's Law applied to converted phases
We've said before that unless you impinge at exactly 90° on
an interface between materials that have different velocities,
incident P waves can set up shear waves.
Conservation of the ray parameter still applies
to these cases, and so does Snell's Law.
However, we know that if shear waves are produced
at an interface they will travel more slowly.
If their velocity is lower, what does that imply about the angle of refraction?
Include graphical explanation here
