\documentclass[12pt]{article}
\usepackage{wsh}
\begin{document}
\bibliographystyle{plain}
\markright{Cross-balancing intersecting surveys --- W.S. Harlan}
\title{Cross-balancing intersecting surveys}
\author{William S. Harlan}
\date{September, 2003}
\maketitle
\section {Introduction}
Tying seismic sections with well picks typically requires
estimating a source wavelet from a sonic log.
Few want to repeat this effort for each dataset with
different recording parameters.
Each frequency of a source wavelet can have a different
amplitude and phase.
A typical phase rotation or amplitude scaling does not
change with frequency and cannot make one arbitrary wavelet
look like another.
Instead we should estimate different wavelets directly
from cross-correlations of all intersecting datasets.
Independent seismic surveys, whether 2D lines or 4D vintages,
intersect each other, but their reflections do not overlay well.
Seismic attributes differ and complicate any stratigraphic interpretation,
particularly of thin beds.
We want to apply wavelet corrections to each of these
datasets so that intersections tie as well as possible.
One approach was independently implemented by
David Mackidd of EnCana, and by Bishop and Nunns \cite{ties:BishopNunns}.
This approach cross-correlates traces from
different surveys near intersection points. From these
cross-correlations, time shifts, phase rotations,
and scale factors are extracted by least squares.
Bulk phase rotations require care with phase unwrapping,
and time shifts are not independent of phase.
These limited corrections may be
robust in the presence of defective spectra,
but at the cost of ignoring arbitrary phase and amplitude
changes with frequency.
Instead, we take the approach of Henry and Mellman
\cite{SEG-1988-S11-6}, and invert cross-correlations directly for
arbitrary wavelets.
This problem is very similar to the more-familiar problem of
surface-consistent deconvolution (Levin \cite{decon:levin}), which
solves for prediction-error filters. I have implemented
a variation on both. I solve the same wavelet equalization problem
as Henry and Mellman, but with a non-quadratic objective function
more like surface-consistent deconvolution,
and with the Gauss-Newton optimization
of Harlan \cite{GEO54-11-14551465}.
\section {Cross-correlations}
Let us label independent seismic lines or datasets with indices $i$ and $j$.
Certain pairs of lines cross at intersections indexed by $m$.
Each $m$ will index a triplet of indices $\{m,i,j\}$.
At each of these intersections $m$, a representative trace $d_i^m(t)$
from line $i$ will be cross-correlated with a trace $d_j^m(t)$ from line $j$,
all as a function of time $t$.
Let us define the cross-correlation $c_{ij}^m(t)$ as a function
of lag time $t$ by
\begin{eqnarray}
c_{ij}^m(t) &=& \int d_i^m(\tau-t) d_j^m(\tau) d \tau \\
&=& \int d_i^m(\tau) d_j^m(\tau+t) d \tau \\
&=& d_i^m(t) \star d_j^m(t) \\
&=& d_i^m(-t) \ast d_j^m(t) .
\end{eqnarray}
Here the star $\star$ is a concise notation for correlation,
and the asterisk $\ast$ for convolution.
Typically cross-correlations will be averaged from multiple traces,
but I will assume a single trace in the derivation of a solution.
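These sign and ordering conventions are easy to get wrong in discrete code. As a quick numerical aside (a NumPy sketch of my own, not part of any published implementation), the full discrete correlation of two traces equals the convolution of the time-reversed first trace with the second, matching $c_{ij}^m(t) = d_i^m(-t) \ast d_j^m(t)$:

```python
import numpy as np

rng = np.random.default_rng(0)
d_i = rng.standard_normal(64)   # representative trace from line i
d_j = rng.standard_normal(64)   # representative trace from line j

# c_ij(t) = sum_tau d_i(tau) d_j(tau + t), evaluated at every lag.
c = np.correlate(d_j, d_i, mode="full")

# The same correlation written as a convolution with the
# time-reversed trace: c_ij(t) = d_i(-t) * d_j(t).
c_conv = np.convolve(d_i[::-1], d_j, mode="full")

assert np.allclose(c, c_conv)
```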
\section {The model}
Assume that a data trace $d_i^m(t)$
is a convolution of a short wavelet $w_i (t)$ that is specific
to the line $i$ with a reflectivity $r^m (t)$ that is specific
to the intersection point $m$. Also assume additive noise
$n_i^m (t)$ that is specific to the trace:
\begin{eqnarray}
d_i^m(t) &=& \int r^m(t-\tau) w_i(\tau) d \tau + n_i^m(t) \\
&=& r^m(t) \ast w_i(t) + n_i^m(t) .
\end{eqnarray}
Assume the reflectivity is white:
\begin{eqnarray}
r^m(t) \star r^m(t) &=& R^m \delta(t) ,
\end{eqnarray}
where $R^m$ is a constant, and $\delta (t)$ is an impulse function.
Assume noise $n_i^m(t)$ is uncorrelated with anything else:
\begin{eqnarray}
n_i^m(t) \star n_i^m(t) &\equiv& N(t) , \\
n_i^m(t) \star n_j^m(t) &=& 0 , \quad \forall i \neq j , \\
n_i^m(t) \star r^m(t) &=& 0 , \quad \forall i , \\
n_i^m(t) \star w_j(t) &=& 0 , \quad \forall i,j .
\end{eqnarray}
A correlation of two traces can then be rewritten as
\begin{eqnarray}
c_{ij}^m(t) &=& d_i^m(t) \star d_j^m(t) \\
&=& [r^m(t) \ast w_i(t) + n_i^m(t)] \star
[r^m(t) \ast w_j(t) + n_j^m(t)] \\
&=& [r^m(-t) \ast w_i(-t) + n_i^m(-t)] \ast
[r^m(t) \ast w_j(t) + n_j^m(t)] \\
&=& [r^m(-t) \ast w_i(-t) \ast r^m(t) \ast w_j(t) ] +
[n_i^m(-t) \ast n_j^m(t)] \\
&=& [r^m(t) \star r^m(t)] \ast [w_i(t)\star w_j(t) ] +
[n_i^m(t) \star n_j^m(t)] \\
&=& R^m [ w_i(t) \star w_j(t) ] + \delta_{i-j} N (t).
\end{eqnarray}
We will only examine cross-correlations where $i \ne j$,
so the discrete delta function $\delta_{i-j}$ will be 0.
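The key step in this derivation, $(r \ast w_i) \star (r \ast w_j) = (r \star r) \ast (w_i \star w_j)$, holds exactly for discrete sequences; only the whiteness of $r^m(t)$ is statistical. The following sketch (wavelet lengths and the random reflectivity are made up for the demonstration) verifies the exact identity numerically:

```python
import numpy as np

def xcorr(a, b):
    # a(t) star b(t): correlation as convolution with a time-reversed
    return np.convolve(a[::-1], b)

rng = np.random.default_rng(1)
r = rng.standard_normal(2000)    # white reflectivity r^m
w_i = rng.standard_normal(25)    # wavelet for line i (illustrative)
w_j = rng.standard_normal(25)    # wavelet for line j

d_i = np.convolve(r, w_i)        # noise-free traces d = r * w
d_j = np.convolve(r, w_j)

# Exact identity behind the derivation:
# (r * w_i) star (r * w_j) = (r star r) * (w_i star w_j)
lhs = xcorr(d_i, d_j)
rhs = np.convolve(xcorr(r, r), xcorr(w_i, w_j))
assert np.allclose(lhs, rhs)

# For white r, (r star r)(t) is approximately R delta(t) with
# R = sum(r**2), so c_ij(t) approximates R [w_i star w_j](t);
# the approximation improves with trace length and averaging.
```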
\section {The objective function}
Thus far the assumptions are very similar to those of Henry and Mellman
\cite{SEG-1988-S11-6}.
They use time shifts as separate parameters
so that their short wavelets are centered around zero-lag.
Their objective function also measures differences
between third-order convolutions of measured correlations with wavelets.
These higher-order terms are easier to optimize with least-squares,
but distribute errors more unpredictably.
I prefer to use longer wavelets to avoid a separate parameterization
of time shifts. I directly minimize errors
in cross-correlations modeled from the estimated wavelets.
We seek the collection of wavelets $w_i(t)$
and scale factors $R^m$ that solve
\begin{eqnarray}
\min_ {w_i(t), R^m }
\sum_{\{m,i,j\}} \int \{ c_{ij}^m(t) - R^m [ w_i(t) \star w_j(t) ] \}^2 dt
+ \epsilon \sum_i \int [w_i (t)]^2 dt .
\end{eqnarray}
The first sum runs over all triplets of indices $\{m,i,j\}$.
The damping suppresses unnecessary frequencies that
do not contribute significantly to the cross-correlations.
The damping factor $\epsilon$ should approximate the ratio of the
expected variance of the noise to that of the wavelets.
Conservatively small values are sufficient for a stable inverse.
I set the variance of a wavelet sample to
1000 times the variance of a sample of noise,
but this ratio can vary orders of magnitude without
significantly affecting the result.
If reflectivities have reasonably consistent strengths, then
one can safely set all scale factors $R^m$ to 1. This implementation
makes this assumption.
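With all $R^m$ fixed to 1, the objective is straightforward to evaluate. A minimal sketch, assuming wavelets stored in a dict keyed by line index and measured correlations keyed by the triplet $(m, i, j)$ (both data structures are invented here for illustration):

```python
import numpy as np

def xcorr(a, b):
    return np.convolve(a[::-1], b)   # a star b

def objective(wavelets, correlations, eps):
    """Damped least-squares misfit with all scale factors R^m set to 1.

    wavelets:     dict mapping line index -> wavelet samples
    correlations: dict mapping (m, i, j) -> measured cross-correlation
    """
    misfit = sum(np.sum((c - xcorr(wavelets[i], wavelets[j])) ** 2)
                 for (m, i, j), c in correlations.items())
    damping = eps * sum(np.sum(w ** 2) for w in wavelets.values())
    return misfit + damping

# Perfectly modeled correlations give zero misfit when eps = 0.
rng = np.random.default_rng(2)
wavelets = {i: rng.standard_normal(15) for i in range(3)}
correlations = {(0, 0, 1): xcorr(wavelets[0], wavelets[1]),
                (1, 1, 2): xcorr(wavelets[1], wavelets[2])}
assert objective(wavelets, correlations, 0.0) < 1e-9
```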
\section {Optimization}
This objective function is a non-quadratic function of the wavelets
$w_i(t)$, but it is very amenable to a Gauss-Newton algorithm
that iteratively approximates the objective function as a quadratic.
Initialize all wavelets to delta functions. Linearize
a perturbation of modeled cross-correlations $\Delta c_{ij}^m(t)$
with respect to perturbed wavelets $\Delta w_i(t)$:
\begin{eqnarray}
\Delta c_{ij}^m(t) &\equiv&
R^m [ w_i(t) + \Delta w_i(t)] \star [w_j(t) + \Delta w_j(t) ] \\
&-& R^m [ w_i(t) \star w_j(t) ] \\
&\approx&
R^m [ w_i(t) \star \Delta w_j(t) + \Delta w_i(t) \star w_j(t) ] .
\end{eqnarray}
With this linearization, and its linear adjoint, the objective
function becomes a least-squares (quadratic) function of the
wavelet perturbations.
We can use a standard conjugate-gradient algorithm to solve for
the wavelet perturbations that best fit the correlations not yet
modeled by reference wavelets.
Perturbations are scaled appropriately (by performing a line search
on the original objective function) before adding to the reference wavelets.
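One way to carry out each Gauss-Newton step is to code the linearized operator and its adjoint, then hand both to a conjugate-gradient solver. The sketch below treats a single intersection with $R^m = 1$ (function names and sizes are my own, not the original implementation); the dot-product test at the end confirms that the adjoint is consistent with the linearization:

```python
import numpy as np

def forward(dwi, dwj, wi, wj):
    # Linearized perturbation of one modeled cross-correlation (R^m = 1):
    # dc = w_i star dw_j + dw_i star w_j, with a star b = conv(rev(a), b).
    return np.convolve(wi[::-1], dwj) + np.convolve(dwi[::-1], wj)

def adjoint(dc, wi, wj):
    # Adjoint of `forward`: maps a correlation residual back to
    # wavelet-perturbation space.
    dwj = np.correlate(dc, wi[::-1], mode="valid")
    dwi = np.correlate(dc, wj, mode="valid")[::-1]
    return dwi, dwj

rng = np.random.default_rng(3)
n = 21
wi, wj = rng.standard_normal(n), rng.standard_normal(n)
dwi, dwj = rng.standard_normal(n), rng.standard_normal(n)
dc = rng.standard_normal(2 * n - 1)

# Dot-product test: <F x, y> == <x, F^T y> up to roundoff.
lhs = np.dot(forward(dwi, dwj, wi, wj), dc)
a_dwi, a_dwj = adjoint(dc, wi, wj)
rhs = np.dot(dwi, a_dwi) + np.dot(dwj, a_dwj)
assert np.allclose(lhs, rhs)
```

A full solver would stack such operators over all triplets $\{m,i,j\}$ and add the damping rows before calling conjugate gradients.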
\section{Correction of data with estimated wavelets}
We can imagine using the estimated wavelets
directly for a least-squares deconvolution of
each line. This, however, would unnecessarily
attempt to whiten the frequency spectra.
Most likely, the data are already whitened as
much as desired. Nor do we want to
compute inverse wavelets explicitly, which
would amplify some otherwise very weak,
noisy frequencies.
To avoid unnecessary alteration of the
spectral color, we want to reconvolve all
deconvolved traces by a wavelet for a
preferred reference line. This reference
line may also be one that has been carefully
tied to sonic logs or formation tops by
synthetic seismograms. If more than one line
is constrained, then I share the same wavelet
for all during optimization. The estimated
wavelet for these constrained lines is set
aside as the reference wavelet.
For each of the unconstrained lines I solve
for a transfer function mapping the estimated
wavelet to the reference wavelet, again with
least-squares damping. Finally the estimated
transfer functions are convolved with the
original data, combining the effects of
deconvolution and reconvolution.
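A damped least-squares transfer function is simplest to sketch in the frequency domain, where the damped normal equations diagonalize. This is only an illustration of the idea; the FFT length, damping value, and function names are assumptions of mine, not the implementation described above:

```python
import numpy as np

def transfer_function(w_est, w_ref, eps=1e-3, nfft=256):
    # Damped least-squares filter t with t * w_est ~ w_ref.
    # In the frequency domain the normal equations diagonalize:
    # T(f) = W_ref(f) conj(W_est(f)) / (|W_est(f)|^2 + eps)
    W_est = np.fft.rfft(w_est, nfft)
    W_ref = np.fft.rfft(w_ref, nfft)
    T = W_ref * np.conj(W_est) / (np.abs(W_est) ** 2 + eps)
    return np.fft.irfft(T, nfft)

# Sanity check: if the estimated wavelet is a spike, the transfer
# function is (nearly) the reference wavelet itself.
w_est = np.zeros(25); w_est[0] = 1.0
w_ref = np.hanning(25)
t = transfer_function(w_est, w_ref)
assert np.allclose(t[:25], w_ref / (1.0 + 1e-3), atol=1e-8)
```

Convolving each original trace with its line's transfer function then combines deconvolution by the estimated wavelet with reconvolution by the reference wavelet.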
We can also use the estimated wavelets for
each line to extract simpler corrections,
such as a bulk time shift, instantaneous
phase rotation, and a scale factor, as
did Bishop and Nunns \cite{ties:BishopNunns}.
These limited corrections should be more stable and
less sensitive to noise. Extracting phase
corrections from estimated wavelets is also
more robust than fitting phase shifts picked
from the data. But for any frequencies
shared by intersecting lines, we know we can
do better and detect more arbitrary differences.
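Bulk time shifts and scale factors fall out of the estimated wavelets directly. A hypothetical sketch of this extraction follows; a constant phase rotation would additionally require the analytic signal, so it is omitted here:

```python
import numpy as np

def bulk_corrections(w, w_ref):
    # Bulk time shift: lag of the largest cross-correlation magnitude.
    c = np.convolve(w[::-1], w_ref)          # w star w_ref at all lags
    shift = int(np.argmax(np.abs(c))) - (len(w) - 1)
    # Scale factor matching RMS amplitudes.
    scale = np.sqrt(np.sum(w_ref ** 2) / np.sum(w ** 2))
    return shift, scale

# A shifted, doubled copy should be recovered exactly.
rng = np.random.default_rng(4)
w = rng.standard_normal(31)
w_ref = 2.0 * np.concatenate([np.zeros(5), w])
shift, scale = bulk_corrections(w, w_ref)
assert shift == 5 and np.isclose(scale, 2.0)
```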
It is possible that certain lines will
contain frequencies not present in any
intersecting lines. These isolated frequencies will
be absent from cross-correlations. A damped
least-squares fit to cross-correlations will
omit these frequencies from the estimated
wavelet. To retain these frequencies, we can
include autocorrelations as well as
cross-correlations in the optimizations.
A final correction to a reference wavelet
may remove the isolated frequencies again.
\bibliography{wsh} \end{document}
% $Id: crossbal.tex,v 1.26 2004/01/08 16:25:39 Harlan Exp $