Monday, August 30, 2010

Isotropic and anisotropic earth media in exploration geophysics

In exploration seismology, seismic wave equations are our theoretical foundation, and we often speak of the acoustic wave equation and the elastic wave equations of isotropic and anisotropic media. The real Earth should be treated as anisotropic: VTI (transversely isotropic with a vertical symmetry axis) or TTI (transversely isotropic with a tilted symmetry axis) media are the best approximations for complex geologic areas, while in relatively simple geological areas all the seismic data processing steps can be carried out under the assumption that the medium is acoustic. This simplification is justified because acoustic wave-equation techniques are much more mature than elastic (isotropic, let alone anisotropic) wave-equation techniques. If we use the elastic wave equation, we must deal with converted waves such as PP, PS, SP, and SS, which adds considerable difficulty to practical seismic data processing.

When we talk about elastic media, we want to distinguish different kinds of elastic media by their physical characteristics. Elasticity is one of the most significant characteristics of an elastic medium. It is described by the fourth-rank elasticity (stiffness) tensor. According to elasticity theory, the elasticity tensor admits only a limited number of symmetry classes, and different symmetries result in different types of earth models. In exploration seismology, the elasticity tensor is used in the form of its corresponding 6 by 6 elastic matrix (Voigt notation), which its symmetries make possible. The following kinds of anisotropic media exist; at the end of the list, isotropic media can be considered a special type of anisotropic media.
1. A generally anisotropic (triclinic) continuum has an elasticity matrix that is symmetric, with 21 independent entries.
2. A monoclinic continuum is one whose symmetry group contains a reflection about a plane through the origin. Its elasticity matrix is also symmetric, with 12 independent entries.
3. An orthotropic continuum possesses three orthogonal symmetry planes. Its elasticity matrix has 9 independent entries.
4. A tetragonal continuum is one whose symmetry group contains a four-fold rotation and a reflection through a plane that contains the axis of rotation. Its elasticity matrix has 6 independent entries.
5. A transversely isotropic continuum is invariant with respect to any rotation about a single axis. Its elasticity matrix has 5 independent parameters. Transversely isotropic media are very important in exploration seismology and reservoir geophysics, since either VTI or TTI media can be regarded as approximations of real sedimentary geology, where the sedimentary layers lie parallel to one another.
6. An isotropic continuum is one whose symmetry group contains all orthogonal transformations. Only two independent parameters are needed to describe its elasticity matrix, and people often conveniently work with the Lamé parameters, which can be expressed as linear combinations of any such pair of parameters, to solve problems.
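To make the parameter counts concrete, here is a minimal NumPy sketch (my own illustration, with made-up numerical values) of the Voigt matrices for the two end members most used in practice, VTI and isotropic:

```python
import numpy as np

def isotropic_stiffness(lam, mu):
    """6x6 Voigt matrix of an isotropic solid from the two Lame parameters."""
    C = np.zeros((6, 6))
    C[:3, :3] = lam                        # normal-stress coupling
    C[0, 0] = C[1, 1] = C[2, 2] = lam + 2 * mu
    C[3, 3] = C[4, 4] = C[5, 5] = mu       # shear moduli
    return C

def vti_stiffness(c11, c13, c33, c44, c66):
    """6x6 Voigt matrix of a VTI medium from its five independent constants."""
    c12 = c11 - 2 * c66                    # fixed by rotational invariance
    C = np.zeros((6, 6))
    C[0, 0] = C[1, 1] = c11
    C[2, 2] = c33
    C[0, 1] = C[1, 0] = c12
    C[0, 2] = C[2, 0] = C[1, 2] = C[2, 1] = c13
    C[3, 3] = C[4, 4] = c44
    C[5, 5] = c66
    return C

# Isotropy is the special case c11 = c33 = lam + 2*mu, c44 = c66 = mu, c13 = lam:
lam, mu = 2.0, 1.0                         # arbitrary units
print(np.allclose(isotropic_stiffness(lam, mu),
                  vti_stiffness(lam + 2*mu, lam, lam + 2*mu, mu, mu)))  # True
```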

Researchers are now using anisotropic techniques more and more in seismic data processing, since practical cases show that anisotropy is necessary in some complex areas. However, anisotropic methods still have many problems and are still under active development.

Sunday, August 29, 2010

More Crew Profiles

1. Name/Title
-Nicky Applewhite/OS

2. Fave food/music/vacation destination
-Fried Chicken/R & B/Atlantic City

3. If you were stuck on a deserted island with only one person from this boat, who would you choose and why?
-Dave DuBois (OBS) because he's a jokester







1. Name/Title
-Rachel Widerman/3rd Mate


2. Fave food/music/constellation
-Grilled Calamari/depends on my mood/Orion

3. If you were stuck on a deserted island with only one person from this boat, who would you choose and why?
-Hervin because he's gonna cook and bring his ipod











1. Name/Title
-Hervin Fuller/Steward

2. Fave food/music/kitchen utensil
-Italian/Opera/12 inch knife

3. If you were stuck on a deserted island with only one person from this boat, who would you choose and why?
-Jason (Bosun) because we like to hang out and we will be good at solving problems






1. Name/Title
-Mike Tatro/Acquisition Leader

2. Fave food/music/tool
-Steak/Country/Monkey Wrench

3. If you were stuck on a deserted island with only one person from this boat, who would you choose and why?
-Carlos (Source Mechanic) because he's my fave

Friday, August 27, 2010

Marine Multi-channel Seismic Processing (part-2)

(4) Pre-process

With the raw shot data and the geometry loaded, we look forward to seeing the data improve, step by step, toward the final cross-section image. The first thing we do is apply an Ormsby bandpass filter to remove noise generated during acquisition (ProMAX module: Bandpass Filter). Remember that we already analyzed the main frequency range of the raw shot data using the Interactive Spectral Analysis module; take those frequencies and use them in the bandpass filter. You can see a big difference after applying this filter to the raw shots. The second thing we do is edit the traces, which includes killing bad channels (ProMAX module: Trace Kill/Reverse) and removing spikes and bursts (ProMAX module: Spike and Noise Edit). Remember that we identified the bad channels with Trace Display when we first got the raw shot data, so we input that information in Trace Kill to get rid of them. After these two steps, we can already see the difference from the raw data. It is much better, isn't it?
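As an illustration of what an Ormsby-style bandpass does, here is a hedged single-trace sketch: a zero-phase trapezoidal filter in the frequency domain. This mimics the idea, not the ProMAX module itself, and the corner frequencies are made-up examples.

```python
import numpy as np

def ormsby_bandpass(trace, dt, f1, f2, f3, f4):
    """Zero-phase trapezoidal (f1-f2-f3-f4 Hz) bandpass of one trace."""
    n = len(trace)
    freqs = np.fft.rfftfreq(n, d=dt)
    # Ramp up f1->f2, flat f2->f3, ramp down f3->f4
    amp = np.clip((freqs - f1) / max(f2 - f1, 1e-9), 0, 1) \
        * np.clip((f4 - freqs) / max(f4 - f3, 1e-9), 0, 1)
    return np.fft.irfft(np.fft.rfft(trace) * amp, n)

# Use the corners read off your own Interactive Spectral Analysis, e.g.:
trace = np.random.randn(8000)                  # stand-in for a raw trace at 2 ms
filtered = ormsby_bandpass(trace, dt=0.002, f1=3, f2=8, f3=60, f4=90)
```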

But not good enough. The third thing we do in the pre-processing is deconvolution. With the help of deconvolution, we can enhance the primaries and suppress the multiples (ProMAX module: Spiking/Predictive Decon). Here we need to test some critical parameters of the deconvolution to figure out which ones give the best results. It takes time! Please be patient, and read the related books and papers to understand how deconvolution works and how it can work better.
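For readers who want to see what predictive deconvolution means mechanically, here is a minimal single-trace sketch of a Wiener prediction-error filter, the textbook idea behind such modules. The parameter names (op_len, gap) are mine, and this is not the ProMAX implementation:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def predictive_decon(trace, op_len, gap, prewhite=0.001):
    """Subtract the predictable (multiple-like) part of a single trace."""
    n = len(trace)
    # Autocorrelation at lags 0 .. gap+op_len-1
    ac = np.correlate(trace, trace, mode="full")[n - 1:n - 1 + gap + op_len]
    ac[0] *= 1.0 + prewhite                    # prewhitening for stability
    # Normal equations R f = g for the prediction filter f
    f = solve_toeplitz(ac[:op_len], ac[gap:gap + op_len])
    predicted = np.convolve(trace, f)[:n]      # prediction of the trace at lag 'gap'
    out = trace.copy()
    out[gap:] -= predicted[:n - gap]
    return out
```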

(5) Velocity Analysis

After the pre-processing flow, we have better-looking data in hand. It is time to do velocity analysis (ProMAX module: Velocity Analysis), which takes a lot of time to complete, so be patient enough to get this step done. First of all, we start with a large CDP interval, for example every 5000 CDPs in a section of 60,000 CDPs. When we conduct the velocity analysis, remember to use the near-trace plot that we made before so that we can recognize the main horizons, and keep the direct-wave, primary, and multiple paths in mind, so as to distinguish the primaries from the multiples. We try our best to keep velocity analysis away from multiples, and honestly, it is not always easy.

Technically, we deal with stacking velocity and interval velocity during velocity analysis; the stacking velocity is lower than the interval velocity. We try our best to keep both of them increasing reasonably with depth, because in common cases the seismic velocities of successive layers increase with depth due to the increase of physical attributes like density. The main quality-control criterion during velocity analysis is to see flat horizons after applying NMO (normal moveout). That is to say, if we pick the accurate velocity for a certain horizon, we will see a flat, coherent event in the corrected trace gather. Sometimes we have an obvious coherent event on which to apply NMO to confirm the velocity we pick, especially in the upper layers, but sometimes not, especially in the lower layers. In this unlucky situation, we use the semblance plot to find the energy-concentration hotspots and, keeping in mind that velocity should increase with depth, pick predictive velocities.
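To make the "flat after NMO" criterion concrete, here is a minimal sketch of the NMO correction itself, using the hyperbolic moveout equation t(x) = sqrt(t0^2 + x^2/v^2) with a single constant trial velocity (real velocity analysis scans many):

```python
import numpy as np

def nmo_correct(gather, offsets, dt, v):
    """NMO-correct a CDP gather (n_traces, n_samples) with trial velocity v."""
    n_tr, n_s = gather.shape
    t0 = np.arange(n_s) * dt                           # zero-offset times
    out = np.zeros_like(gather)
    for i, x in enumerate(offsets):
        t = np.sqrt(t0**2 + (x / v)**2)                # moveout times at offset x
        out[i] = np.interp(t, t0, gather[i], right=0)  # map samples back to t0
    return out

# If v is picked correctly, a primary flattens across the gather; too low a
# velocity over-corrects (the event curves upward), too high under-corrects.
```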

Again, be careful of multiples, because they show up as hotspots in the semblance plot and can get you confused at some points. The distinguishing feature is that they keep the same or similar velocity all the way down with increasing depth, i.e., the multiples' velocity function should be a nearly constant-velocity line from top to bottom. Anyway, try to stay away from multiples during the whole process of velocity analysis. Once we build the brute velocity model with the large CDP interval, we can produce the so-called brute stack. When we want to see more detail in the structures or in something interesting, we need to densify the CDP interval for velocity analysis, for example going to a 2500-CDP or 1000-CDP interval, or an even smaller interval in specific areas, to image relatively smaller structures. So it depends on where the interesting place we want to look at is, how much detail we want to see, and what geological question we want to answer.

To Be Continued, see you next week, part 3!

Thursday, August 26, 2010

Give me the Earth, cut it up, and I'll give you a nautical mile!

We approached the centre of the rise today. There has not been a lot to do since we started collecting multi-channel seismic data. Maybe when we start recovering the OBSs I'll get to go up to the deck more often. We did sight a pod of whales today, though. This was a first for me. The whales were far off, so I couldn't make them out very well; all I saw was the jet of water they made every so often. Apart from this exciting event, all I have been doing during my watch is looking at the monitors, recording OBS crossings, and thinking about how long it'd take to get to the next site. I keep asking myself, "How long will it take before I can take my eyes off the paper I'm reading and record the next crossing?"


But things are different on the Langseth. I am used to working with distances in kilometers; I am Nigerian, and we inherited the British system. Then I got to the United States and learned to intuit the mile: I run in miles, I drive in miles, Google navigator feeds me distances in miles. I get it. Now I have to get used to two new units, distance in "nautical miles" and speed in "knots." That's how sailors of old measured distance and speed, and although we have distance conversions, we still measure speed in knots on this research vessel. So I dig out conversions, run a couple of Google searches, and voila! I discover a very interesting history behind the definitions. I learned during my search that both units of distance, the nautical mile and the kilometer, were defined based on the Earth. The nautical mile is English in origin, and the meter was defined by the French. You get the nautical mile if you slice the Earth in half at the equator, divide the circumference of the resulting circle into 360 degrees, and then divide each degree into 60 minutes: one minute of arc is 1 nautical mile. Same thing for the kilometer: you cut up the Earth, but this time through the North Pole, along a line passing through Paris (for historical reasons); measure the distance from the North Pole to the equator, divide by 10,000, and you get 1 kilometer.

I see now. The ship travels at 5 knots. The knot? A very convenient measure of "nautical" speed: 1 nautical mile per hour. Nautical: anything relating to navigation. We navigate on the seas. The Earth. The distances make sense. 1 nautical mile ~ 2 kilometers ~ 1.2 miles. I am thinking to myself: if I run at my average pace (I do have a best time, but Nike+ tells me I run ~9'30''/mile), how long would it take me to run around the world? I do the math. There are (360 * 60) nautical miles to run. Those give ~(360 * 60 * 1.15) US miles, since 1 nautical mile is more precisely 1.15 statute miles. At my average running pace it would take me ~237,000 minutes. That's about 5 months and 12 days. I think I'll put off running around the world. It's a long night. I'm done with my watch and I want to return to bed, but I begin thinking about why we have to cut the Earth into 360 parts. Why 360 and 60? I am sure there are interesting reasons.
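For anyone who wants to check the back-of-the-envelope math, here it is spelled out in a few lines of Python (the pace and the 1.15 conversion factor are the ones used above):

```python
# Running around the world at ~9'30''/mile; 1 nmi ~ 1.15 statute miles.
nautical_miles = 360 * 60                # minutes of arc around the equator
us_miles = nautical_miles * 1.15         # ~24,840 statute miles
minutes = us_miles * 9.5                 # ~235,980 minutes, i.e. ~237,000
days = minutes / (60 * 24)               # ~164 days, about 5 months 12 days
print(nautical_miles, round(us_miles), round(minutes), round(days))
```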


Wednesday, August 25, 2010

A Brief History of Our Understanding of Planet Earth.

We know surprisingly little about our planet! One reason is that we cannot probe the depths of Earth directly and explore; another is that geological processes occur on much longer timescales than humans are used to dealing with. Much of what we do know, we learned only in the past several decades; we learned how to split the atom before we learned how our own planet worked. Here I provide a brief history of our journey toward understanding our home, planet Earth.

The age of Earth was a subject of scientific speculation for many centuries. Finally, in 1953, Clair Patterson (who began the work at the University of Chicago and completed it at Caltech) determined the now accepted age of Earth: 4,550 million years, plus or minus 70 million. He accomplished this through uranium-lead dating of meteorites, which are leftover building blocks of the planets. But once we discovered how old Earth was, a significant question came to the scientific forefront: if Earth is in fact ancient, then where were all the ancient rocks?

It took quite some time to answer this question, but the story had begun decades earlier with Alfred Wegener, a German meteorologist from the University of Marburg. Wegener developed a theory to explain geologic anomalies such as similar rocks and fossils being located on the east coast of the U.S. and the northwest coast of Africa. His theory was that Earth's continents had once been joined in a large landmass known as Pangaea and had since split apart and drifted to their contemporary locations. This theory opened up another question: what sort of force could cause the continents to move and plow through Earth's crust?

In 1944, Arthur Holmes, an English geologist, published his text Principles of Physical Geology, in which he described how convection currents inside Earth could be the driving force behind the continents' motion in continental drift. Even so, many members of the scientific community still could not accept this as a viable explanation for the movement of continents.

At the time, many thought the seafloor of Earth's oceans was young and mucky from all the sediment eroded off the continents and washed down rivers into the oceans. During the Second World War, a mineralogist from Princeton, Harry Hess, was on board the USS Cape Johnson. On board the Johnson there was a new depth sounder called a fathometer, made to aid in shallow-water maneuvering. Hess realized the scientific potential of this device and never turned it off. To his surprise, he found that the seafloor was not shallow and blanketed with sediment; it was in fact deep and scored everywhere with canyons, trenches, and other rugged features. This was indeed a surprising and exciting discovery.

In the 1950s, oceanographers found the largest and most extensive mountain range on Earth in the middle of the Atlantic Ocean. This mountain range, known as the Mid-Atlantic Ridge, was very interesting: it seemed to run exactly down the middle of the ocean and had a large canyon running down its center. In the 1960s, core samples showed that the seafloor was young at the ridge and got progressively older with distance from it. Harry Hess considered this and concluded that new crust was being formed at the ridge and pushed away from it as newer crust came along behind. The process became known as seafloor spreading.

It was later discovered that where oceanic crust meets continental crust, the oceanic crust is driven underneath the continental crust and sinks into the interior of the planet. These regions were called subduction zones, and their presence explained both where all the sediment had gone (back into the planet's interior) and the youthful age of the seafloor (the oldest seafloor, near the Mariana Trench, is currently only about 175 million years old).

The term “Continental Drift” was then discarded once it was realized that the entire crust moves, not just the continents. Various names were used for the giant separate chunks of crust, including “Crustal Blocks” and “Paving Stones.” In 1968, three American seismologists, in a paper in the Journal of Geophysical Research, called the chunks “Plates” and coined the name for the science that we still use today: “Plate Tectonics.”

Finally it all made sense! Plate tectonics is the surface manifestation of convection currents in Earth’s mantle. This explained where all the ancient rocks on Earth’s surface went: they were recycled back into the interior of the Earth. Plate tectonics gave answers to many questions in geology, and Earth made a lot more sense.

Convection involves upwellings and downwellings, like in a boiling pot of water. Subduction zones are the downwellings in Earth’s convection system, while upwellings known as plumes are thought to exist where hot material rises toward the surface from the very hot interior. These plumes are thought to cause volcanism at the surface in the form of Large Igneous Provinces, such as the Shatsky Rise. We are out here today, continuing our journey of learning and understanding how our planet works. The data collected during this survey will hopefully shed light on what processes produced the Shatsky Rise, and whether it was in fact fed by a plume from Earth’s interior.



Note: Most of the information in this blog post can be found in Bill Bryson's book, A Short History of Nearly Everything.

Monday, August 23, 2010

Anatomy of an Airgun

So far most of the posts have been an introduction to what we do, but little respect has been paid to how we do it. Therefore, today I will discuss the not-so-humble air gun. The gun pictured at right is not one of our guns, but a single gun shown to give you an idea. We use an air gun array composed of 40 guns (similar to the one at right), 36 of which fire in tandem while 4 are kept on standby. The total capacity of the array operating at maximum is 6600 cubic inches of air. The four standby guns are used in situations where we lose power to any of the other guns. However, there is a catch: the system cannot exceed 6600 cubic inches, and each standby gun weighs in at a hefty 180 cubic inches. The largest guns are 360 cubic inches and the smallest are 60. So if a 60 goes out, we simply stop sending air to it, but if a 360 goes out, we turn on two of the standbys to bring the volume back to 6600.

If you need a rough approximation of what 6600 cubic inches of air pressurized to 2000 psi exploding is like, consider that a standard SCUBA tank is pressurized to 3000 psi and holds roughly 80 cubic feet of air at one atmosphere of pressure (80 cubic feet = 138,240 cubic inches). For the array, 6600 cu in * 2000 psi = 1.1*10^6 ft-lb of energy. The SCUBA tank, if it exploded, would be 138,240 * 3000 = 3.456*10^7 ft-lb. The SCUBA tank is an order of magnitude larger, but this does not take away from the power of the air guns: think of the tank as a bomb, whereas the air guns are a controlled source. Regardless of the comparison, the air guns are still dangerous, and they are kept at 2000 psi almost all the time. This requires one powerful air compressor. We refill the air guns every 20 s (or every 50 m of travel, whichever comes first), so we need a large-volume compressor to fill the guns almost instantaneously, ready for the next shot. If we were using the volume of the SCUBA tank, we would need a compressor with an order of magnitude greater volume output and 1.5 times the pressure. The point is, this is not JAWS, and we do not condone exploding SCUBA tanks as a source.
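A quick sanity check of those numbers, using pressure times volume as a rough proxy for stored energy (1 ft = 12 in):

```python
# Order-of-magnitude comparison of the gun array vs. a SCUBA tank.
gun_array = 6600 * 2000            # in^3 * psi = in-lb
scuba = (80 * 12**3) * 3000        # 80 ft^3 = 138,240 in^3 at 3000 psi
print(gun_array / 12)              # ~1.1e6 ft-lb
print(scuba / 12)                  # ~3.5e7 ft-lb, an order of magnitude more
```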

The air guns are relatively deep-penetration sources, operating at 100 to about 1200 Hz, used to identify subsurface geologic layers and define the subsurface structure. In studies that require less resolution but substantial penetration, the air gun is usually preferable to a water gun because it is far more efficient at producing low-frequency energy. It can be used in the fresh or brackish (less saline) water found in lacustrine and estuarine environments. Both air guns and water guns can be used in shallow-water surveys and in relatively deeper-water environments, achieving resolution on the order of 10 to 15 meters and up to 2000 meters of penetration. With proper tuning, the air guns work well in a wide variety of bottom types. Minimum operating water depths of about 10 meters are possible over acoustically “soft” bottoms; in areas with acoustically hard bottoms, deeper operating depths are required. The harder bottoms produce multiples: unwanted reflected energy that travels repeatedly between the sea surface and the seafloor or shallow subsurface and obscures the desired primary reflections.

The air gun requires an air compressor on board the ship. For maximum resolution, the smallest chamber size is used; if maximum penetration is the goal, a larger chamber is configured, but resolution is lessened. The guns have a stable and repeatable pulse in terms of frequency content and amplitude and can be tuned to optimize the source signature. Air guns generate more signal strength than boomer, sparker, and chirp systems. The air gun is towed astern, and the return signals are received by a towed hydrophone array.

This post has been updated. The volume for the guns was misunderstood. The current post reflects the changes.

Migration in seismic data processing

Using OBSs and the streamer, we acquire seismic data. The next step is seismic data processing, which is in fact the central step of our mission. Among the many steps in seismic data processing, migration is considered the critical one; it largely determines the quality of the final result.

What is migration? Concisely, migration is the step that “moves” seismic data recorded at the surface receivers into a subsurface image, which is considered able to describe the structure of the subsurface. Migration was not the first child of seismic data processing: it was born only in the 1930s and grew rapidly from the 1960s and 1970s onward with the development of digital wave-equation techniques. Here I only give a brief description of modern depth migration methods and a comparison among them. For a more detailed chronology of seismic migration and imaging, please refer to “A brief history of seismic migration” by J. Bee Bednar, Geophysics, Vol. 70, No. 3, and for a detailed description of modern depth imaging methods, “An overview of depth imaging in exploration geophysics” by John Etgen et al., Geophysics, Vol. 74, No. 6.

Basically there are two major classes of migration methods: ray-based migration and wave-equation migration. Ray-based migration is built on the high-frequency asymptotic solution of the wave equation, so by its nature it is also wave-equation migration; in practice, however, we still distinguish it from wave-equation migration, since the two follow very different methodologies. Two main methods make up ray-based migration: Kirchhoff migration and beam migration. Kirchhoff migration dominated the petroleum industry through the 1980s and 1990s and is still a living method both in practice and in theoretical research. It has the advantages of great flexibility and a small computational cost. However, since Kirchhoff migration is based on ray tracing, there are serious limitations to its imaging ability, the most obvious being that it uses single arrivals along single raypaths to reconstruct the entire wavefield. Beam migration mainly denotes Gaussian-beam migration, which uses “fat” rays that can overlap one another. Another important feature of beam migration is that it is not dip-limited. But again, because beam migration is based on rays, it may fail to image correctly in complex geological areas.

Wave-equation migration is based on either the acoustic wave equation, which assumes the earth is a fluid, or the elastic wave equation, which assumes the earth is an elastic solid. There are one-way and two-way wave-equation migrations. One-way wave-equation migration (OWEM) applies Green's identity, which expresses the wavefield at a certain time in terms of the wavefield at earlier or later times. The one-way wave equation, derived by separating the two-way wave equation, downward-propagates the wavefields from zero depth and admits no upward-propagating wavefields, which is what “one-way” means. The source and receiver wavefields are downward extrapolated from shallower to deeper depths step by step, and then, applying an imaging condition, we obtain the migrated image of the subsurface. There are mainly four families of methods for the downward extrapolation: implicit finite-difference algorithms, which expand the single-square-root one-way wave equation as an infinite fractional series (truncated in practice) for numerical implementation; stabilized explicit extrapolation methods, which design numerical Green's functions to downward-propagate the one-way wavefield; phase-shift propagation with multiple reference velocities; and dual-space (space-wavenumber) methods, including split-step Fourier (SSF) migration, Fourier finite-difference (FFD) migration, and phase-screen and generalized-screen methods. Because it is by nature an approximation to the two-way wave equation, OWEM suffers from a dip-angle limitation, meaning it has difficulty imaging steep dips and may give poor images in geologically complex areas.

By contrast, two-way wave-equation migration uses the full wavefield, not Green's identity, to reconstruct the subsurface image. When we speak of two-way wave-equation migration, we usually mean reverse-time migration (RTM). There is no high-frequency assumption and no dip-angle limitation in RTM, since it uses the full wave equation and propagates the wavefield in all directions.
For prestack RTM, we forward-propagate the source wavefield in time, backward-propagate the receiver wavefield in time, and then obtain the subsurface image by cross-correlating the source and receiver wavefields. When RTM emerged in the 1980s, it was almost abandoned because of its high computational cost, both in run time and in storage requirements. In recent years, however, with the development of computing hardware and algorithms, such as PC clusters, parallel computing, GPGPU computing, and improvements in storage, RTM has been gaining popularity both in practice and in theoretical research. In fact, RTM is the most accurate algorithm presently available to render a complete and reliable image of subsurface structures. From the early acoustic RTM to recent anisotropic RTM, the method keeps becoming more powerful, and more and more companies are adopting RTM as their primary choice.
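To make the prestack RTM recipe concrete, here is a highly schematic sketch of the cross-correlation imaging condition. The function propagate_one_step stands in for a real 2D acoustic finite-difference kernel (no absorbing boundaries, no stability checks), and the whole thing is illustrative rather than production code:

```python
import numpy as np

def propagate_one_step(p_prev, p_curr, vel, dt, dx):
    """One explicit 2nd-order FD step of the 2D acoustic wave equation."""
    lap = (np.roll(p_curr, 1, 0) + np.roll(p_curr, -1, 0) +
           np.roll(p_curr, 1, 1) + np.roll(p_curr, -1, 1) - 4 * p_curr) / dx**2
    return 2 * p_curr - p_prev + (vel * dt)**2 * lap

def rtm_image(src_wavefields, rec_data_reversed, vel, dt, dx):
    """Zero-lag cross-correlation image: sum over t of S(x,z,t) * R(x,z,t).

    src_wavefields: source-wavefield snapshots saved during forward modeling;
    rec_data_reversed: receiver records, time-reversed, injected at the surface.
    """
    image = np.zeros_like(vel)
    r_prev = np.zeros_like(vel)
    r_curr = np.zeros_like(vel)
    for it, s_field in enumerate(reversed(src_wavefields)):
        r_curr[0, :] += rec_data_reversed[it]       # inject at the receiver row
        r_prev, r_curr = r_curr, propagate_one_step(r_prev, r_curr, vel, dt, dx)
        image += s_field * r_curr                   # imaging condition
    return image
```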

Full-waveform inversion is another emerging technique for seismic imaging and inversion, but it still has many unresolved problems, so I will not introduce it here.

Saturday, August 21, 2010

Marine Multi-channel Seismic Data Processing (part-1)

I think it is time for ProMAX right now. What is ProMAX? It is a software package for processing reflection seismic data, developed since 1989 by Landmark Graphics Corporation, a Halliburton company, and commonly used in the energy industry. Is it free? Unfortunately not! It is even kind of expensive compared with other programs that do the same sort of thing, like SIOSEIS. Another unfriendly thing about ProMAX may be that it runs on certain flavors of UNIX such as Red Hat (note: not every UNIX system can run ProMAX), and I guess most people would prefer a Windows-based program, because they hate command lines and writing scripts! But the good thing is, ProMAX has a graphical user interface. No script writing. Gig 'em! You just start ProMAX from the terminal, and then you can use the program as comfortably as on Windows.

OK, let us get down to the technical business. For processing the marine reflection seismic data on the boat, we follow these workflows:

(1) SEG-D data input
When we get the raw shot data, tape by tape, from the recording system on the seismic boat, they are in SEG-D format, and every .RAW file stored on the tape is one single shot gather (one shot point with 468 channels/traces). We get roughly 3 tapes of raw shot gathers per day, about 18 GB each, with a maximum of 1273 .RAW files on one full tape. Even though we have SEG-D data immediately while the boat is shooting, there is not much to do until we finish a whole seismic line, because we need the processed navigation file in .p190 format to set up the geometry for the subsequent processing, and the .p190 files can only be provided after at least one whole seismic line is completed (sometimes, after several short lines, they process several .p190s together to save money on program license usage). However, we still have things to do: we can take a first look at the raw shot gathers (ProMAX module: Trace Display) to find out the overall situation, including the direct-wave path, the reflected ray paths including primaries and multiples, noise, and bad channels; we can figure out the main frequency range (ProMAX module: Interactive Spectral Analysis); and we can make a near-trace plot using only the nearest group of every shot to give a first glimpse of the geology. From it we can take a first basic look at the major horizons, such as sediment layers, transition layers, volcanic layers, or acoustic basement. Scientists want to see the seismic image as soon as possible, so the near-trace plot is a good thing to show them in near-real time.
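If you are curious what a near-trace plot amounts to outside ProMAX, here is a hedged sketch in Python: take the trace from the nearest channel of every shot and display the traces side by side. The array below is a random stand-in; in reality it would be decoded from the SEG-D tapes (decoding not shown).

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in for decoded shot gathers: (n_shots, n_channels, n_samples).
n_shots, n_channels, n_samples = 100, 468, 1000
shots = np.random.randn(n_shots, n_channels, n_samples).astype(np.float32)

near_channel = 467                       # channel 468 (0-based) is nearest the source
near_traces = shots[:, near_channel, :]  # one trace per shot
plt.imshow(near_traces.T, aspect="auto", cmap="gray",
           extent=[0, n_shots, n_samples * 0.002, 0])   # 2 ms sampling
plt.xlabel("shot number"); plt.ylabel("two-way time (s)")
plt.title("near-trace plot")
plt.show()
```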

(picture below: near-trace plot of MGL1004 MCS line A)

(2) Set up Geometry
When we get the .p190 files for the seismic lines, we are ready to start the whole processing flow. The first thing we need to do is set up the geometry (ProMAX module: 2D Marine Geometry Spreadsheet). We have to provide ProMAX with a lot of information: group interval (12.5 m), shot interval (50 m), sail line azimuth, source depth (9 m), streamer depth (9 m), shot point numbers, source locations (easting X and northing Y), field file IDs, water depth, date, time, near channel number (468), far channel number (1), minimum offset, maximum offset, CDP interval (6.25 m), full fold number (59), etc. Anyway, a lot! It takes some time to fill out the spreadsheet, and we need to be careful to make sure all the information matches up. Sometimes it is tricky, so double check, even triple check!
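As a cross-check on two of those numbers, the CDP interval and the full fold follow from the standard 2D marine geometry relations (a back-of-the-envelope sketch, not how ProMAX computes them):

```python
# Standard 2D marine geometry relations.
group_interval = 12.5        # m
shot_interval = 50.0         # m
n_channels = 468

cdp_interval = group_interval / 2                          # 6.25 m
fold = n_channels * group_interval / (2 * shot_interval)   # 58.5 -> full fold "59"
print(cdp_interval, fold)
```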

(3) Load Geometry
When the geometry set-up, or the spreadsheet, is done, we can load the geometry into the raw shot data (ProMAX module: Inline Geometry Header Load). It takes time! Remember, every tape is 18 GB: it takes almost one hour to load one tape in (maybe faster on a better workstation). After loading all the tapes, or all the raw SEG-D shot files, into ProMAX with the geometry, we are ready to go to the real part of the processing. Hold on a second. I say "real" here because I mean we are starting to polish the raw data, i.e., this is where change happens!

(picture on the left: ProMAX geometry assignment map)

To Be Continued, next week, part-2!

Friday, August 20, 2010

Saving 1,522 lives on the Titanic with technology from the Langseth: fact or fiction?

My girlfriend asked me a question a while ago: "Why do you study ancient volcanism?" I must admit I found it a little difficult to communicate in clear and simple terms the motive behind what I do. "I want to know why Earth works the way it does," I remember explaining. I also tried justifying my interest in using computers to investigate the earth: "You know, developments in existing seismic methods borrow from the fields of mathematics and medical imaging. Who knows, methods developed in this research may someday be used in other fields." I still convince myself that this is true. In reality, most scientists just love asking "why?", and sometimes we get amazing answers that lead to enormous technological benefits, most of which were not planned in the first place. This is a story of how the technology in use on the Langseth inherits a lot from the curiosity and dedication of scientists who asked "why?" Oh, and of how things might have been different on the Titanic with these technologies.

I start with three names. Two are famous; the last maybe not so. They are Albert Einstein, Leonardo da Vinci, and Isidor Rabi. I mention their names because they were pioneers, and their curiosity and research led to three important technologies. Everyone knows Einstein, reputedly the most influential and greatest scientist who ever lived; to him we owe the theories of general and special relativity. Questions like Einstein's "What is gravity?" drove the physics that later led Isidor Rabi and others to pioneer the atomic clock: in our attempt to understand the atomic world, scientists built highly accurate clocks. These clocks are fundamental to the functioning of the Global Positioning System, or GPS.

Isidor Rabi is not so well known, but that doesn't make his contribution less important: he pioneered the work on building accurate atomic clocks. And then there is Leonardo da Vinci. He is more famous for the Mona Lisa, but he was also a scientist and inventor, and to him the field of acoustics owes an experimentalist's curiosity about the behavior of sound waves.


It was Leonardo da Vinci, as early as 1490, who first observed: “If you cause your ship to stop and place the head of a long tube in the water and place the outer extremity to your ear, you will hear ships at a great distance from you,” thus pioneering the basis of acoustic methods. Duayne and Kai have shown how the Langseth conducts seismic experiments with sound sources. Acoustic methods are also used by marine mammal observers (MMOs) to listen to aquatic life. With sound sources, the accurate positioning made available by GPS, and the theory of sound, we can image Earth's interior. See the connection? Curiosity encapsulated in scientific endeavour is the seed of technology. With this technology we can do better science, and we also reap enormous social benefits.

But I still haven't explained the Titanic connection. Yes, I'll admit it: I put in the Titanic connection to get the reader to follow me to the end of this post. But truthfully, let's revisit the history. Apart from the hubris of the engineers, at least that's what the movie Titanic suggests, could our application of science have saved the 1,522 people who perished on board the Titanic? Arguably so. In the years following the tragedy, the Submarine Signal Company of Boston commenced work on developing sonar devices to prevent such navigation hazards, and the first of these devices in the United States was demonstrated in 1914 by Reginald A. Fessenden. So with sound sources we could actually have prevented the disaster, and with GPS we could have very easily located the Titanic and saved more lives. Fact or fiction? Fact!


Thursday, August 19, 2010

Crew Profiles

Hello all! As promised earlier, I now present you with mini interviews I conducted with various crew members. It takes copious amounts of work to keep this ship up and running so it seems only proper to introduce the hard working people that provide a safe and efficient means of data acquisition to science parties. I asked each person 3 main questions and here's what they had to say:

1. Name/Title
-David Ng/Systems Analyst/Programmer

2. Favorite food/music/operating system
-Lobster/Rap and R&B/Ubuntu

3. You're stuck on a deserted island with only one person from this boat, who would you choose and why?
-Mike Duffy because he can cook







1. Name/Title
-Robert Steinhaus/Chief Science Officer

2. Favorite food/music/science
-Mac & Cheese/Classic Rock/ Marine Seismic

3. You're stuck on a deserted island with only one person from this boat, who would you choose and why?
-Captain Landow because people would look for him




1. Name/Title
-Sir David Martinson/Chief Navigation

2. Favorite food/music/port
-Steak/Jazz/Aberdeen, Scotland

3. You're stuck on a deserted island with only one person from this boat, who would you choose and why?
-Pete (1st Engineer) because he can build and fix anything





1. Name/Title
-Bern McKiernan/Chief Acquisition

2. Favorite food/music/tool
-Animal/Mosh-pit music/Leatherman

3. You're stuck on a deserted island with only one person from this boat, who would you choose and why?
-Jason (Boatswain) because he's fun and handy with a small boat





More to come next week!!!!!

Seismic Refraction and Reflection

Today I am going to explain the nature of seismic reflection and refraction, and then briefly how we use them to extract information about the structure of Earth's subsurface. Let me start off by explaining what seismic refraction is.

The speed at which a seismic wave travels through a particular material depends strongly on the density and elastic properties of that material, so seismic waves travel at different speeds through materials with different properties. Generally, the denser the material, the faster seismic waves travel through it. When a seismic wave passes from one material into another, it does not continue in the same direction but bends. This bending of the seismic wave path is known as seismic refraction. It is caused by the difference in seismic wave speed between the two materials and is characterized by Snell’s Law, which is illustrated in the figure to the right. Given an angle of incidence (the angle between the approaching wave path and the line perpendicular to the interface) and the seismic wave speed in each material, Snell’s Law dictates the angle of refraction (the angle between the departing wave path and the line perpendicular to the interface).


All waves(light, sound, etc.) undergo refraction when moving from one material to another. A good everyday example of refraction is when you look at a straw in a glass of water and the straw seems to bend where it enters the water. Well the straw is not actually bending but the path of the light traveling to your eyes from the submerged straw bends slightly as it enters the air from the water. This is because light travels at slightly different speeds through water and through air. It is this refraction (bending) of the path of the light from water to air that makes it appear that the straw is bent.

Now that we know what seismic refraction is, what is seismic reflection? The answer is that seismic reflection is a type of seismic refraction! When a seismic wave travels from a material with a lower wave speed into a material with a higher wave speed, the angle of refraction is larger than the angle of incidence. In this case there exists an angle of incidence at which the angle of refraction is 90 degrees and the refracted wave runs parallel to the interface between the two materials; this angle of incidence is known as the critical angle. When the angle of incidence is larger than the critical angle, the seismic wave is turned back into material 1 and leaves the interface at the same angle at which the incident wave approached it. This post-critical behavior is called total internal reflection. These phenomena are illustrated in the figure just above, where the red wave path is critically refracted and the yellow wave path is reflected.
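Here is a tiny numerical illustration of Snell's Law and the critical angle (the velocities are made-up but representative, e.g. water over sediment):

```python
import numpy as np

v1, v2 = 1500.0, 2500.0             # m/s in material 1 and material 2
theta1 = np.radians(30.0)           # angle of incidence

# Snell's law: sin(theta1)/v1 = sin(theta2)/v2
s = np.sin(theta1) * v2 / v1
theta2 = np.degrees(np.arcsin(s)) if s <= 1 else None  # None -> total reflection
theta_c = np.degrees(np.arcsin(v1 / v2))               # critical angle

print(theta2)    # ~56.4 deg: bent away from the normal, since v2 > v1
print(theta_c)   # ~36.9 deg: beyond this incidence angle, total internal reflection
```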

So how do we use seismic reflection and refraction to extract information about the structure of Earth’s subsurface? In general, the density of material increases with depth in Earth, and therefore the seismic wave speed increases with depth. As seismic waves travel down through the subsurface, they encounter faster and faster materials the deeper they go and, according to Snell’s Law, refract and reflect back up to the surface, where we can record them. By using controlled seismic sources (like the airguns described in Kai’s post), we can send seismic waves into Earth’s subsurface and record the reflections and refractions when they arrive. By measuring when these refractions and reflections arrive, we can determine where the interfaces between differing materials lie in the subsurface, thus generating a cross-sectional view of the subsurface.

Wednesday, August 18, 2010

Arrow of time, thermodynamics, and another transit to Yokohama

The second law of thermodynamics is the only physical law that is time-irreversible. All other laws, such as Newton's equations of motion or Maxwell's equations of electromagnetism, are insensitive to the direction of time (for example, the Earth revolves around the Sun counterclockwise, and if you reversed time, it would revolve clockwise, and this situation is perfectly fine). The second law of thermodynamics tells us that the entropy of a closed system can only increase with time if you leave the system alone. It's like your room getting messier and messier until you make up your mind to clean it. Why only the second law of thermodynamics has this arrow of time is still an unresolved problem in physics. Ludwig Boltzmann, the founder of statistical mechanics, once provided a microscopic justification, which was found to be rather incomplete, and he allegedly committed suicide because of this. Boltzmann's explanation is, however, what is commonly seen in most textbooks on thermodynamics. Here I translate it into a more familiar example using a deck of cards.

Suppose you start with a deck of 52 playing cards nicely ordered from the ace of spades to the king of clubs. You then shuffle it 100 times so that the cards are pretty much randomly ordered. Now you keep shuffling and see if any order pops up. You may occasionally get lucky and see some partially ordered sequence (e.g., all four aces in one place), but your cards will look randomly ordered most of the time, because there are so many combinations of card order that look random. How many different combinations of card order do we have? It's 52x51x50x...x1 = 52! ~ 8x10^67. If you're not familiar with scientific notation, that's about 80000000000000000000000000000000000000000000000000000000000000000000 (67 zeros after the 8) [try WolframAlpha for the exact result], and most of these combinations are pretty much randomly ordered. The initial order you started with is just one out of this huge number, so the probability of coming back to the initial order (the ace of spades to the king of clubs) by random shuffling is extremely small. It's not zero, but really, really small. Therefore, things usually get more disordered with time (i.e., entropy increases), because a more random state is more likely to happen. But since the probability of getting back to the initial state is not exactly zero, some rigorous people are not satisfied with this sort of explanation based on thermodynamic improbability, and the debate over the origin of the arrow of time continues...
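(If you want to verify the card count, two lines of Python do it; the exact value in the comment is what math.factorial returns.)

```python
import math
print(math.factorial(52))   # 80658175170943878571660636856403766975289505440883277824000000000000
print(f"{math.factorial(52):.1e}")   # ~8.1e67, i.e. ~8 x 10^67
```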

Now, why am I talking about this? Isn't this blog supposed to be about the Shatsky Rise cruise? OK, the reason is that we had another medical emergency, and we're currently heading to Yokohama again (this time, fortunately, no death is involved). Having two medical diversions in one cruise is pretty unusual. Will Sager, who has much more sea-going experience than I do, told me that he has been on ~40 cruises over 33 years and had never had bad luck to this degree before. This is only my 10th cruise, and actually my very first as chief scientist, and look what an experience I'm having! But if you think about the thermodynamic improbability above, the likelihood of having two medical diversions in one cruise may not be so small... Or I should think the other way around: having similarly bad luck in the future would be even less likely, so my future cruises may go entirely trouble-free!? Maybe it's time to start writing another sea-going proposal.

NSF has been very sympathetic to our situation, and because no further extension is possible this year (we need to be back in Honolulu by September 14th to hand the OBSs over to another cruise), they agreed to schedule another Shatsky Rise cruise sometime between late 2011 and early 2012 to finish any unaccomplished portion of our planned seismic survey. The science party is all grateful for this thoughtful decision.

Saturday, August 14, 2010

There, in the water! A shark! A torpedo! A maggie?

Ah yes, the magnetometer, or as we call it, Maggie. The magnetometer is a useful tool that has been around for over a century: a scientific instrument used to measure the strength and/or direction of the magnetic field in the vicinity of the instrument. Magnetism varies from place to place, and differences in Earth's magnetic field (which extends into space as the magnetosphere) can be caused by the differing nature of rocks and by the interaction between charged particles from the Sun and the magnetosphere. Magnetometers are a frequent instrument on spacecraft that explore planets, or, in our case, on a sea-going vessel.

To keep it simple: as the liquid metal outer core moves around the solid inner core, it creates a magnetic dipole, with a north and a south pole. The strength of this field would be uniform throughout the system if the Earth were homogeneous. However, the Earth is a heterogeneous system, and this affects the magnetic field. The magnetometer allows us to detect the differences in the magnetic field created by changes within the structures of the Earth: a highly ferrous rock will have a greater effect on the surrounding magnetic field than a non-ferrous rock.

The magnetometers' claim to fame: they discovered that parts of the seafloor were polarized in one direction and parts in the other. Duayne has already covered seafloor spreading and magnetism, so read his post for that. Without magnetometers, the polar reversals would not have been discovered.

When we first begin deploying seismic equipment, Maggie goes in the water first. It is towed about 150 m behind the boat at a depth of about 40 m. Maggie gives us constant updates so we can track changes in magnetic strength as we travel across the survey area. Using the data we collect, we can build a very accurate magnetic profile of Shatsky Rise by the time we complete the survey.

Although the Maggie is a humble looking piece of equipment, we are glad to have it and can thank its predecessors for helping solidify plate tectonics as an accepted theory of Earth's evolution.

Something about the air guns source

We are using air guns as the source for both the OBS and MCS work. Let's talk about the air-gun source, mainly about its signature.

The source signature corresponds to the seismic wavelet of a seismic survey. The seismic wavelet is the far-field response, in particle-motion velocity (land surveys) or pressure (marine surveys), of the energy that propagates from the seismic source. A seismic wavelet should be carefully chosen in exploration seismology, since its bandwidth, length, and shape affect the resolution of the survey, and a good estimate of the seismic wavelet is absolutely critical for seismic inversion.

A seismic wavelet can be described by its amplitude spectrum, which shows its amplitude characteristics, and its phase spectrum, which shows its phase characteristics and can be zero-phase, constant-phase, minimum-phase, mixed-phase, etc. In exploration seismology, we can extract the seismic wavelet from data in three main ways: purely deterministically, purely statistically, or through well-log curves.

In our marine exploration seismology, the air-gun source has a deterministic signature, and we can read the amplitude and phase information directly from the control terminal in the main lab after each shot. From shot to shot, the signature is almost consistent in amplitude and phase.

We then want to know which type of wavelet the air-gun source signature belongs to, or whether it is unique and matches none of the ideal wavelets. Four types of seismic wavelet are commonly used in seismic data processing software: the Ricker wavelet, the Ormsby wavelet, the Klauder wavelet, and the Butterworth wavelet. See the following figures for their main characteristics.




(From top to bottom: Ricker, Ormsby, Klauder, and Butterworth wavelets. A different choice of dominant frequency, or of bandpass and cutoff frequencies, gives a different bandwidth in the frequency domain, and thus slightly different time-domain shapes.)
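For readers who want to experiment, here are hedged sketches of two of these wavelets from their standard textbook definitions (the frequencies are example values, not the ones used in the figures above):

```python
import numpy as np

def ricker(t, f_dom):
    """Ricker wavelet with dominant frequency f_dom (Hz)."""
    a = (np.pi * f_dom * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

def ormsby(t, f1, f2, f3, f4):
    """Ormsby wavelet from its four corner frequencies (Hz)."""
    def term(f):
        return np.pi * f**2 * np.sinc(f * t) ** 2   # np.sinc(x) = sin(pi x)/(pi x)
    return (term(f4) - term(f3)) / (f4 - f3) - (term(f2) - term(f1)) / (f2 - f1)

t = np.arange(-0.1, 0.1, 0.002)           # +/-100 ms at 2 ms sampling
w_ricker = ricker(t, 25.0)                # 25 Hz dominant frequency
w_ormsby = ormsby(t, 5.0, 10.0, 40.0, 60.0)
```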

And the following is the source signature for each air gun.


Although the total source signature is not available, we can see from the shape of the single-gun signatures, and conjecture from their time-delay characteristics, that the air-gun source can be approximated by a minimum-phase Butterworth wavelet with suitable parameters: both have an oscillating, decaying tail, and, most importantly, the minimum-phase Butterworth wavelet is itself physically realisable. The Ricker wavelet is so idealized that although it is often used in wave-equation-based numerical modeling and inversion tests, it is not realistic in practical cases. Another reason we consider our air-gun source signature close to a Butterworth wavelet is that its frequency range extends from low to high frequencies: we can see on the MMOs' monitoring screen that after each shot there is a bright line in the frequency domain extending from very low (~0 Hz) to very high (~48 kHz) frequencies. The Ormsby and Klauder wavelets are bandpass-type, with nearly vertical cutoff edges at the boundaries of their frequency bands; if we let their bands cover the full range, their waveforms would not be far from practical cases. The Butterworth wavelet, however, has soft edges at the boundary of its band; in fact it has a long tail in the frequency domain.

To further explore the characteristics of our air-gun source signature, a more careful mathematical treatment is needed. Perhaps we can arrive at different conclusions and find something new if we can extract the amplitude and phase information in some way, rather than merely using shape-based information, which can be inaccurate in some situations.

Another thing to notice is that the OBSs and the streamers receive different kinds of signals. As the OBSs sit on the seafloor, the wavelet they receive is in the form of particle motion, while the streamers receive pressure signals, although both come from the same air-gun source. The reason is that when the air guns are triggered, energy propagates to the seafloor and the subsurface rocks through the seawater in the form of pressure: a fluid cannot transmit shear stress, so only P-waves can propagate in the seawater. When the energy arrives at the seafloor, it converts to the kinetic energy of particle motion in the subsurface rocks, and shear waves are converted from P-waves where the P-waves meet subsurface reflectors. After some time, reflections travel back up to the seafloor from the subsurface, and at the contact between the seafloor and the seawater the S-waves stop (they cannot enter the fluid); only P-waves propagate up to the streamers and are recorded. As the OBSs are in contact with the seafloor, they receive both S- and P-waves, and thus the signals recorded by the OBSs and the streamers differ in nature.

Friday, August 13, 2010

Quiz: How many iPods does it take to store reflection seismic data?


Hi all, it's another day on the Langseth, and we are still recording multichannel seismic (MCS) data. This is reflection data collected by hydrophones trailing behind the ship. Yesterday, Sam was kind enough to give a window into the actual deployment of the streamer (housing the hydrophones) and the depth-control devices, otherwise known as birds. Today I want to give the reader a feel for the amount of data storage involved in the actual seismic survey. This is the juncture where geophysics meets computer science. I'll keep it brief.

Let's start with the setup of the reflection experiment. Behind the ship we are currently towing a streamer cable that is approximately 6 km long. On this streamer cable there are 468 channels, each recording data every 2 milliseconds. The enormous storage requirements involved in the data acquisition might not be immediately obvious, but a simple order-of-magnitude calculation provides an intuition for how much memory is required. In an earlier blog post by the chief scientist, Jun Korenaga, we learnt that Shatsky Rise is about the size of California. We will be making several trips across the rise, and in total we will collect 3,500 km of reflection data, which will enable us to "see" into the crust beneath the ocean floor.

So what does 3,500 km of MCS data translate to? Quick math: at 20 shots per km (because we shoot every ~50 m), 3,500 km of seismic survey will require 70,000 shots. Each of these shots is recorded by the 468 channels, so we have 468 x 70,000 = 32,760,000 shot traces. Now, as I mentioned, each channel samples data every 2 milliseconds, and for each shot the sampling continues for 16 seconds (so 8,000 samples per trace). Four bytes per sample then gives the total memory required as (32,760,000 * 8,000 * 4) bytes = 1,048,320,000,000 bytes. Large? Maybe not so much: this is actually 976 GB (close to 1 terabyte). I have a 6 GB iPod touch, and if I were to store all the data on this model, I would need ~163 iPods. Yes, that's a lot of iPods. An interesting story is the history of the growth of storage capability for seismic acquisition, but that's for another day. Right now, I need to get back to my watch.
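Here is the same arithmetic as a few lines of Python, for anyone who wants to play with the numbers:

```python
# Reflection-survey data volume, using the figures quoted above.
shots = 3500 * 20                     # 20 shots per km over 3,500 km
traces = shots * 468                  # 468 channels per shot
samples = int(16 / 0.002)             # 16 s record at 2 ms sampling = 8,000
total_bytes = traces * samples * 4    # 4 bytes per sample
print(total_bytes)                    # 1,048,320,000,000 bytes
print(total_bytes / 2**30)            # ~976 GB
print(round(total_bytes / 2**30 / 6)) # ~163 six-gigabyte iPods
```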

Tuesday, August 10, 2010

Streamer out----let's get the multi-channel reflection started


Cheers! Streamer 3 has been going out from the deck since this morning. The streamer is 6 km long, with a maximum of 480 channels at a 12.5 m interval (we are activating just 468 channels for this work). In addition, we need to install birds (streamer depth controllers) on it at certain intervals to control its depth (picture: people installing a red bird onto the yellow streamer, which then goes out into the ocean with the help of the hoisting and transmission machinery on the left-hand side of the picture). Although it will take hours to complete the whole streamer deployment, it means our multi-channel reflection seismic survey is about to start, hopefully around sunset today. Let's do it!
P.S. My name is Jinchang Zhang (people also call me Sam). I am a PhD student in oceanography at Texas A&M University, working with Dr. Will Sager on the formation of Shatsky Rise. We are trying to use various kinds of marine geophysical data, like bathymetry, seismic, magnetic, and gravity, to interpret the geological nature of Shatsky Rise. So the seismic data collected on this cruise mean a lot to my entire PhD study. And I am going to employ ProMAX to process the reflection seismic data of this survey, which I will talk more about later on. See you later!

Sunday, August 8, 2010

Did Somebody Say XBT Party?

I did!! What is an XBT, you ask? Well, XBT stands for Expendable Bathythermograph, and it is a disposable temperature probe that the science team launches from the boat once every twenty-four hours. Our XBT brand of choice is made by Lockheed Martin Sippican and comes in two different flavors: the T-5 (max depth of 1830 m, max vessel speed of 6 knots) and the T-7 (max depth of 760 m, max vessel speed of 15 knots). But what does an XBT do, and what can it tell us? An XBT contains a wire link that transmits data to our main lab's computer, which in turn stores profiles of depth versus water temperature and depth versus sound velocity. These data can then be pooled from several different research vessels to support weather and climate forecasting as well as climate research. But why would we care about the sound velocity in the water? Once we have accurate estimates of the speed of sound through water, scientists can use them to create more precise bathymetric maps. To demonstrate how these depth-vs-temperature and depth-vs-sound-velocity profiles can change vastly across an ocean basin, I present some of our recent findings: the first set of profiles, in red, was collected at the beginning of the trip in close proximity to Oahu. The next set, in blue, was collected off the coast of Japan in transit back to our study area.
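As a footnote for the technically curious, the temperature-to-sound-velocity conversion can be done with one of several empirical formulas; the sketch below uses Medwin's (1975) simplified equation, with the caveat that I am assuming, not asserting, that the lab software uses something equivalent.

```python
# Medwin (1975) simplified sound-speed formula for seawater.
# T: temperature (deg C), S: salinity (psu), z: depth (m).
def sound_speed(T, S, z):
    return (1449.2 + 4.6 * T - 0.055 * T**2 + 0.00029 * T**3
            + (1.34 - 0.010 * T) * (S - 35.0) + 0.016 * z)

print(sound_speed(T=25.0, S=35.0, z=0.0))     # warm surface water, ~1534 m/s
print(sound_speed(T=2.0, S=34.0, z=1500.0))   # cold deep water, ~1481 m/s
```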