ORIGINAL_ARTICLE
Magnitude- and distance-dependent design spectra for rock sites based on Iranian acceleration time-histories and comparison with regional design spectra
Ground vibrations during an earthquake can severely damage structures and the equipment housed in them. Many factors, including earthquake magnitude, distance from the fault or epicenter, duration of strong shaking, site soil conditions, and the frequency content of the motion, define the properties of ground motion and its amplification. A deep understanding of the effects of these factors on the response of structures and equipment is essential for safe and economical design. Some of these effects, such as the amplitude of the motion, frequency content, and local soil conditions, are best represented through a response spectrum, which describes the maximum response of a damped single-degree-of-freedom (SDOF) oscillator of varying natural frequency or period to the ground motion. Earthquake ground motion is usually measured by strong-motion instruments, which record the acceleration of the ground. The recorded accelerograms, after corrections for instrument errors and baseline, are integrated to obtain the velocity and displacement time-histories. The maximum response of an SDOF system excited at its base by an acceleration time-history is expressed in terms of only three parameters: (1) the natural frequency of the system, (2) the amount of damping, and (3) the acceleration time-history of the ground motion. Response-spectrum analysis is the dominant contemporary method for dynamic analysis of building structures under seismic loading. The main reasons for its widespread use are its relative simplicity, its inherent conservatism, and its applicability to elastic analysis of complex systems. Since the detailed characteristics of future earthquakes are not known, the majority of earthquake design spectra are obtained by weighted averaging of a set of response spectra from records with similar characteristics, such as soil condition, epicentral distance, magnitude and source mechanism.
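The three-parameter description above can be made concrete with a short numerical sketch. The following Python snippet (our own illustration, not the study's implementation) integrates the SDOF equation of motion with the Newmark average-acceleration scheme and returns the peak total acceleration, i.e. one ordinate of the acceleration response spectrum; all function and variable names are ours.

```python
import math

def sdof_peak_response(accel, dt, period, damping=0.05):
    """Peak absolute (total) acceleration of a damped SDOF oscillator,
    base-excited by the acceleration time-history `accel`, integrated
    with the Newmark average-acceleration method (unit mass)."""
    wn = 2.0 * math.pi / period        # natural circular frequency (rad/s)
    k = wn * wn                        # stiffness per unit mass
    c = 2.0 * damping * wn             # viscous damping per unit mass
    u, v = 0.0, 0.0                    # relative displacement and velocity
    a = -accel[0] - c * v - k * u      # relative acceleration at t = 0
    keff = k + 2.0 * c / dt + 4.0 / dt ** 2
    peak = abs(a + accel[0])
    for ag in accel[1:]:
        peff = (-ag + (4.0 / dt ** 2) * u + (4.0 / dt) * v + a
                + c * ((2.0 / dt) * u + v))
        un = peff / keff
        vn = (2.0 / dt) * (un - u) - v
        an = -ag - c * vn - k * un     # equation of motion, unit mass
        u, v, a = un, vn, an
        peak = max(peak, abs(an + ag)) # total accel = relative + ground
    return peak
```

For a very stiff (short-period) oscillator the peak response approaches the peak ground acceleration, while near resonance it is amplified; repeating the calculation over a grid of periods traces out the response spectrum.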
The design spectrum specifies the design seismic acceleration, velocity or displacement at a given frequency or period, depending on whether it is derived from ground acceleration, velocity or displacement time-histories. For practical applications, design spectra are presented as smooth curves or straight lines. Smoothing is carried out to eliminate the peaks and valleys in the response spectra that are not desirable for design, because of the difficulty of determining the exact frequencies and mode shapes of structures during severe earthquakes, when the structural behavior is most likely nonlinear. Since the peak ground acceleration, velocity, and displacement of different earthquake records differ, the computed responses cannot be averaged on an absolute basis. Thus, normalization is needed to provide a standard basis for averaging. Various procedures are used to normalize response spectra before averaging; the most common is normalization with respect to peak ground motion, so that all ground-motion time-histories share the same peak. Building codes commonly present design spectra in terms of acceleration amplification as a function of period on an arithmetic scale. In this study, data are taken from accelerographic network stations deployed on rock sites of Iran with shear-wave velocity greater than 750 m/s, equivalent to site type I in the Iranian seismic building code. The SeismoSignal software is used for baseline correction and filtering of all the dominant horizontal and vertical components, to reduce the inherent errors of the motions. Of all the ground motions, only 103 vertical and 109 dominant horizontal time-histories are accepted after baseline correction and filtering. The data are classified by different combinations of magnitude and distance ranges.
The epicentral distance is classified as near field (0-35 km), medium distance (35-65 km) and far field (65-100 km), while the earthquake magnitude is classified as small (4.5<M<5.5), medium (5.5<M<6.5) and large (6.5<M<7.5), after which the vertical and horizontal response spectra are computed for each time-history for a 5% damping ratio. The results can be generalized to other damping ratios. Averaging the response spectra yields unsmoothed design spectra; smoothed design spectra are plotted by averaging the acceleration amplification spectra at each frequency. This procedure is repeated for the average plus one standard deviation of both the vertical and horizontal response spectra. Finally, the smoothed design spectra determined in this study are compared with those of the regional attenuation relationships obtained from data for Europe and the Middle East (Ambraseys et al., 2005). The comparisons show relatively good correlation between the spectra obtained in this study and the regional attenuation relationships for periods greater than about 0.19 s, and weak correlation for shorter periods.
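The normalization-and-averaging step described above is straightforward to express in code. A minimal sketch (our own illustration, not the authors' implementation; the population standard deviation is used):

```python
def design_spectrum(spectra, pgas):
    """Mean and mean-plus-one-standard-deviation of PGA-normalized spectra.

    spectra : list of equal-length spectral ordinate lists (one per record)
    pgas    : peak ground acceleration of each record, used to normalize
    """
    norm = [[s / pga for s in spec] for spec, pga in zip(spectra, pgas)]
    n = len(norm)
    mean, mean_plus_std = [], []
    for vals in zip(*norm):            # one period/frequency at a time
        m = sum(vals) / n
        var = sum((v - m) ** 2 for v in vals) / n   # population variance
        mean.append(m)
        mean_plus_std.append(m + var ** 0.5)
    return mean, mean_plus_std
```

The first returned curve corresponds to the average (unsmoothed) design spectrum, the second to the average-plus-one-sigma spectrum discussed in the abstract.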
https://jesphys.ut.ac.ir/article_50625_da9e88a7adaf4a5ee19ac2bc97abd469.pdf
2014-06-22
1
16
10.22059/jesphys.2014.50625
Peak ground acceleration
Time history
Damping
Response spectra
Design spectra
H. R.
Javan-emrooz
hamidjavan@ut.ac.ir
1
Department of Earth Physics, Institute of Geophysics, University of Tehran
LEAD_AUTHOR
M.
Eskandari Ghadi
ghadi@ut.ac.ir
2
School of Civil Engineering, College of Engineering, University of Tehran
AUTHOR
N.
Mirzaei
nmirzaii@ut.ac.ir
3
Department of Earth Physics, Institute of Geophysics, University of Tehran
AUTHOR
ORIGINAL_ARTICLE
Velocity structure of south-east of Iran based on ambient noise analysis
The mixture of natural and artificial seismic sources with random distributions causes a diffuse wave field with random amplitudes and phases, called noise. When noise is analyzed over a long period, it is found to contain surface waves propagating in all directions; thus, ambient noise carries information about surface waves. In recent years, as broadband seismic networks have spread widely around the world, diffuse wave fields have been utilized to obtain surface waves. The data of these fields are recorded in the form of seismic ambient noise and waveforms. Seismic coda waveforms result from multiple scattering of seismic waves in heterogeneous regions, while seismic ambient noise is caused by many types of sources, such as ocean microseisms, atmospheric turbulence (Tanimoto, 1999), storms, volcanic eruptions and so on. Recent studies suggest that surface waves extracted from diffuse wave fields and seismic waveforms are consistent with the Green's function (Wapenaar, 2004). Although the horizontal-to-vertical spectral ratio technique of microtremor measurement has been widely applied in microzonation and site-response studies over the past two decades, the goal of such geotechnical studies differs from that of seismological noise investigations. Campillo and Paul (2003) first calculated group velocities of Rayleigh and Love surface waves from the waveforms of 101 teleseismic earthquakes recorded in the Mexican national seismic network. Investigations of ambient noise for Green's function analysis have since been continued by Shapiro and Campillo (2004; 2005); Schuster et al. (2004); Snieder (2004); Bensen et al. (2007); Wapenaar et al. (2013); and Javan and Movaghari (1392). They showed that it is possible to obtain the Green's function between stations by calculating the cross-correlation function of recorded noise. The characteristics of seismic ambient noise are independent of earthquake occurrence.
This is why ambient noise is so widely used: it provides the opportunity to perform imaging without a source, or passive imaging, in order to study the crustal structure between two stations. Further applications include terrestrial and solar seismology, underwater acoustics, and structural health monitoring (Larose et al., 2008). In this article, we compare the velocity structure derived from ambient-noise surface waves with that derived from earthquake surface waves, based on waveforms from IIEES broadband seismic stations. Broadband seismic stations are usually installed in quiet locations at some distance from significant sources of cultural noise, such as roads, railroads, and machinery. We analyze one year of continuous seismic noise sampled at 50 samples/s. Using ambient noise recorded at the Tabas, Sharakht (Qaenat), Zahedan, Chabahar, and Bandar Abbas broadband seismic stations, the surface-wave Green's function between each station pair was obtained by the cross-correlation technique, and the dispersion curve was calculated through frequency-time analysis. From this curve, a 1-D model of the velocity structure between the two stations was derived. This model was compared with the one obtained from the earthquake of May 11, 2013, which occurred north of Jask in southern Iran. The results show that ambient noise can be used to study the velocity structure of the crust and the upper mantle as well. It is therefore worthwhile to record ambient noise continuously at seismic stations to support fundamental research in seismology.
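The core operation, cross-correlating noise recorded at two stations to recover the inter-station response, can be sketched in a few lines. In this hypothetical demo (station names, delay, and noise levels are invented for illustration), a common random wavefield reaches station B five samples after station A, and the correlation peak recovers that lag:

```python
import random

def cross_correlate(a, b, max_lag):
    """Time-domain cross-correlation of two traces for lags -max_lag..max_lag."""
    n = len(a)
    lags = list(range(-max_lag, max_lag + 1))
    cc = []
    for lag in lags:
        s = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:
                s += a[i] * b[j]
        cc.append(s)
    return lags, cc

# Hypothetical demo: the same random wavefield reaches station B 5 samples
# after station A, buried in independent instrument noise at both stations.
random.seed(0)
src = [random.gauss(0.0, 1.0) for _ in range(500)]
sta_a = [src[i] + 0.2 * random.gauss(0.0, 1.0) for i in range(500)]
sta_b = [(src[i - 5] if i >= 5 else 0.0) + 0.2 * random.gauss(0.0, 1.0)
         for i in range(500)]
```

The lag of the correlation maximum estimates the inter-station traveltime, which is the information the frequency-time analysis then turns into a dispersion curve; real workflows add spectral whitening, stacking over long time spans, and much longer records.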
https://jesphys.ut.ac.ir/article_50627_e4d74df17e8c44333d20a889ab289bbe.pdf
2014-06-22
17
30
10.22059/jesphys.2014.50627
ambient noise
Crustal Structure
Green function
Group velocity
Rayleigh wave
South-east of Iran
R.
Movaghari
1
International Institute of Earthquake Engineering and Seismology (IIEES), Tehran
AUTHOR
G.
Javan-Doloei
javandoloei@iiees.ac.ir
2
International Institute of Earthquake Engineering and Seismology (IIEES), Tehran
LEAD_AUTHOR
M.
Nowrozi
3
International Institute of Earthquake Engineering and Seismology (IIEES), Tehran
AUTHOR
A.
Sadidkhouy
asadid@ut.ac.ir
4
Department of Earth Physics, Institute of Geophysics, University of Tehran
AUTHOR
ORIGINAL_ARTICLE
Simulation of the first August 11, 2012 Ahar-Varzaghan earthquake using the stochastic finite-fault method
On 11 August 2012, the Ahar-Varzaghan region of NW Iran was surprisingly struck by a shallow Mw 6.4 (USGS) earthquake with a pure right-lateral strike-slip mechanism, only about 50 km north of the North Tabriz Fault. An east-west-striking surface rupture about 20 km long was observed in the field by the Geological Survey of Iran. Only 11 minutes later and about 6 km further NW, a second shallow event with Mw 6.2 occurred, showing a NE-SW-oriented oblique thrust mechanism (HRVD). This earthquake sequence provides an opportunity to better understand the processes of active deformation and their causes in NW Iran. In recent years, seismologists have attempted to develop quantitative models of the earthquake rupture process, with the ultimate goal of predicting strong ground motion. The choice of ground-motion model has a significant impact on hazard estimates for an active seismic zone such as NW Iran. Simulation procedures provide a means of including specific information about the earthquake source, the wave propagation path between source and site, and the local site response in an estimate of ground motion. They also provide a means of estimating the dependence of strong ground motions on variations in specific fault parameters. Several methods for simulating strong ground motion are available in the literature, including (i) deterministic methods, (ii) stochastic methods, (iii) empirical Green's function methods, (iv) semi-empirical methods, (v) composite source models, and (vi) hybrid methods. The stochastic method begins with the specification of the Fourier spectrum of ground motion as a function of magnitude and distance. The acceleration spectrum is modeled by a spectrum with an ω² shape, where ω is the angular frequency (Aki, 1967; Brune, 1970; Boore, 1983).
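The ω²-shaped source spectrum the stochastic method starts from can be sketched as follows. This is a toy illustration of the Brune point-source acceleration spectrum only: the 0.49 corner-frequency coefficient and the default shear-wave velocity are conventional choices, not values from this paper, and path, site, and radiation-pattern terms are omitted.

```python
import math

def brune_accel_spectrum(f, m0, stress_drop, beta=3500.0):
    """Far-field acceleration amplitude shape of the omega-squared (Brune)
    source model, up to constant scaling factors.
    Units: f [Hz], m0 [N*m], stress_drop [Pa], beta [m/s]."""
    # Brune corner frequency (0.49 coefficient, SI units)
    fc = 0.49 * beta * (stress_drop / m0) ** (1.0 / 3.0)
    # displacement spectrum ~ m0 / (1 + (f/fc)^2); acceleration adds (2*pi*f)^2
    return (2.0 * math.pi * f) ** 2 * m0 / (1.0 + (f / fc) ** 2)
```

Below the corner frequency the acceleration spectrum grows as f²; well above it the spectrum flattens, which is the behavior the finite-fault summation inherits for each subfault.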
Finite-fault modeling has been an important tool for the prediction of ground motion near the epicenters of large earthquakes (Hartzell, 1978; Irikura, 1983; Joyner and Boore, 1986; Heaton and Hartzell, 1986; Somerville et al., 1991; Tumarkin and Archuleta, 1994; Zeng et al., 1994; Beresnev and Atkinson, 1998). One of the most useful methods of simulating ground motion for a large earthquake is to sum a number of small earthquakes, treated as the subfaults that comprise the large fault. The fault is divided into N subfaults, each considered a small point source (an approach introduced by Hartzell, 1978). The ground motion contributed by each subfault is calculated by the stochastic point-source method and then summed at the observation point, with a proper time delay, to obtain the ground motion from the entire fault. We used the dynamic corner-frequency approach, in which the corner frequency is a function of time and the rupture history controls the frequency content of the simulated time series of each subfault. In this study, we identify the source parameters of the first August 11, 2012 Ahar-Varzaghan earthquake using the stochastic finite-fault method (Motazedian and Atkinson, 2005). From the best-defined aftershock zone and the depth distribution of the aftershocks, using the empirical relations of Wells and Coppersmith (1994), we estimated the causative rupture length as 15 km and the downdip rupture width as 10 km. The simulated results were compared with the recorded ones in both the frequency and time domains. The good agreement between simulations and records, at both low and high frequencies, gives us confidence in our simulation model parameters for NW Iran. The estimated strike and dip of the causative fault are 85º and 83º. The fault plane was divided into 5×5 elements, and rupture propagated from element (i, j) = (4, 3) from east to west. The focal depth is approximately 12 km.
We then obtained the spectral decay parameter (κ) from the slope of the smoothed Fourier amplitude spectrum of acceleration at high frequencies. The best-fit relation for the horizontal component is κ = 0.0002R + 0.047; the kappa factor for the vertical component, estimated by the same procedure, is κ = 0.0002R + 0.034. These relations show that κ0 for the horizontal component is larger than that for the vertical component. This confirms that high frequencies are attenuated much less on the vertical than on the horizontal component, as the vertical component is less sensitive to variations in the shear-wave velocity of near-surface deposits. The clear difference between the vertical and horizontal values suggests that κ0 depends on near-surface, site-specific attenuation effects. In the absence of three-component stations, values obtained from vertical components may be helpful for a first estimate of this parameter. We also calculated residuals for each record at each frequency, where the residual is defined as log(observed PSA) - log(predicted PSA), with PSA the horizontal component of 5%-damped pseudo-acceleration. We sorted the simulated records into two groups, A and B, according to the agreement between the Fourier spectra and response spectra. The simulations of quality A agree better with the observed records than those of quality B. The lowest residuals averaged over all frequencies are from 0.4 to 18.3 Hz for the A-quality simulations and from 1.2 to 18 Hz for the B-quality ones.
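The per-record κ measurement described above amounts to fitting a straight line to the log-amplitude spectrum at high frequencies, since A(f) = A0·exp(-πκf) implies ln A is linear in f with slope -πκ. A minimal sketch (our illustration; the paper's κ(R) coefficients come from many such fits regressed against distance):

```python
import math

def estimate_kappa(freqs, amps):
    """Estimate the spectral decay parameter kappa from the high-frequency
    part of a Fourier acceleration spectrum, assuming A(f) = A0*exp(-pi*kappa*f).
    Least-squares line fit to ln(A) vs f; kappa = -slope / pi."""
    n = len(freqs)
    y = [math.log(a) for a in amps]
    fm = sum(freqs) / n
    ym = sum(y) / n
    slope = (sum((f - fm) * (yi - ym) for f, yi in zip(freqs, y))
             / sum((f - fm) ** 2 for f in freqs))
    return -slope / math.pi
```

In practice the fit is restricted to the band above the source corner frequency and below the noise floor, which is why the abstract speaks of the slope "at higher frequencies".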
https://jesphys.ut.ac.ir/article_50629_1342ddf0cd2d203c4301ae9af57c92d4.pdf
2014-06-22
31
43
10.22059/jesphys.2014.50629
Strong ground motion
Stochastic finite fault method
Ahar-Varzaghan earthquake
NW Iran
M.
Mahood
m.mahood@srbiau.ac.ir
1
Department of Geophysics, Science and Research Branch, Islamic Azad University, Tehran
LEAD_AUTHOR
N.
Akbarzadeh
nafise.k2@gmail.com
2
Department of Geophysics, Science and Research Branch, Islamic Azad University, Tehran
AUTHOR
H.
Hamzehloo
51186888
3
International Institute of Earthquake Engineering and Seismology (IIEES), Tehran
AUTHOR
ORIGINAL_ARTICLE
Velocity analysis using high-resolution bootstrapped differential-semblance
The purpose of velocity analysis is to extract the normal-moveout velocity as a function of zero-offset traveltime at selected CDP locations along the seismic line. Since the results of velocity analysis depend on the coherency estimator, an estimator that provides high velocity resolution is essential. Even though the conventional semblance method, the most popular coherency estimator (Taner and Koehler, 1969), provides a robust velocity spectrum, its tendency to smear velocity peaks as time increases makes accurate velocity estimation difficult. This estimator has resolution limits that cause problems in some cases: it fails to distinguish interfering events in a short time window and in cases of thin bedding (Larner and Celis, 2007). We propose here two new coherency estimators that resolve these limitations at minor extra cost. The estimators are based on a differential-semblance (DS) coefficient (Symes and Carazzone, 1991) that is weighted by the semblance estimator. High resolution is introduced by sorting the traces in the data in a way that highlights the time shifts between adjacent traces within a time gate. The new estimators exploit the redundancy of seismic data in the common-midpoint (CMP) gather to bootstrap the seismic traces in a manner that brings out time shifts between adjacent traces, so as to discriminate time gates built with parameters close to the true stacking parameters. Bootstrapping is a statistical technique used to infer estimates of standard errors and confidence intervals from data samples whose statistical properties are unattainable by simple means. The first proposed estimator is the deterministic bootstrapped differential semblance (BDS), based on a deterministic sorting of the original offset traces that alternates near and far offsets to maximize the time shifts between adjacent traces.
Deterministic sorting that alternates near- and far-offset traces in the time window yields higher resolution than simple bootstrapping applied to the data traces. The second estimator is the product of several BDS terms, the first being the deterministic BDS defined above; the other terms are generated by random sortings of traces that alternate between near and far offsets in an unpredictable manner. The proposed estimators help discriminate among trial parameters, produce a good guess of the flattening parameters, and have direct implications for retrieving velocity information from time gathers. The suggested estimators are tested on synthetic and real data examples to show the gain in resolution they yield, and they are compared with the semblance coefficient. Results show that the deterministic BDS coefficient provides increased resolution with no extra computing effort compared to the BDS coefficient. Further resolution can be achieved by involving several controlled bootstrapping outcomes in the estimator, but this comes at a computing cost nearly proportional to the number of terms in the high-resolution estimator. The high-resolution BDS proves to be an efficient tool for building velocity spectra in time-domain velocity analysis, and it provides more resolution than the conventional semblance estimator. The proposed estimators could be a good substitute for the semblance coefficient, and an economical alternative to other high-resolution estimators, such as eigenvalue methods, which are expensive for dense parameter tracking in high-fold data sets.
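The contrast between conventional semblance and the deterministic BDS idea can be sketched as follows. This is a simplified illustration, not the authors' exact estimator: the normalization of the differential term is our own choice, and the alternating near/far re-sorting is the only "bootstrap" implemented here.

```python
def semblance(gate):
    """Conventional semblance of a time gate (list of equal-length traces)."""
    num = sum(sum(col) ** 2 for col in zip(*gate))
    den = len(gate) * sum(a * a for tr in gate for a in tr)
    return num / den if den else 0.0

def bds(gate):
    """Deterministic bootstrapped differential semblance (sketch): traces are
    re-sorted near, far, near, far, ... to maximize moveout between adjacent
    traces; a differential term penalizes residual trace-to-trace differences;
    the result is weighted by the semblance coefficient."""
    n = len(gate)
    order, lo, hi = [], 0, n - 1
    while lo <= hi:                    # alternate near and far offsets
        order.append(lo); lo += 1
        if lo <= hi:
            order.append(hi); hi -= 1
    g = [gate[i] for i in order]
    diff = sum((a - b) ** 2
               for t1, t2 in zip(g, g[1:])
               for a, b in zip(t1, t2))
    energy = sum(a * a for tr in gate for a in tr)
    ds = 1.0 - diff / (2.0 * energy) if energy else 0.0
    return ds * semblance(gate)
```

A perfectly flattened gate scores 1.0 on both estimators, while a gate flattened with a slightly wrong velocity drops off much faster on the BDS measure, which is the sharpening effect exploited in the paper.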
https://jesphys.ut.ac.ir/article_50631_7e6e4aa96367d4cea0bb5f8a2fc62394.pdf
2014-06-22
45
57
10.22059/jesphys.2014.50631
Bootstrap method
Velocity analysis
Differential semblance
Normal moveout correction
A.
Majidi
1
Department of Geology, Faculty of Science, Urmia University
AUTHOR
H. R.
Siahkoohi
hamid@ut.ac.ir
2
Department of Earth Physics, Institute of Geophysics, University of Tehran
AUTHOR
R.
Nikrouz
3
Department of Geology, Faculty of Science, Urmia University
AUTHOR
ORIGINAL_ARTICLE
Seismic data interpolation via a series of new regularizing functions
Natural signals are continuous; therefore, digitizing is an essential task that enables us to process them with computing tools. According to the Nyquist/Shannon sampling theory, the sampling frequency must be at least twice the maximum frequency contained in the signal being sampled; otherwise, high frequencies may be aliased and result in poor reconstruction. The Nyquist sampling rate makes it possible to reconstruct the original signal exactly from its acquired samples. One way to enhance the efficiency of the sampling process is to use a high sampling rate, but the huge volume of data generated by this approach is a major challenge in many fields, such as seismic exploration; moreover, the sampling equipment sometimes cannot handle the broad frequency band. Seismic data acquisition consists of sampling, in time and in the spatial directions, a wavefield generated by sources such as dynamite. Sampling should follow a regular pattern of receivers; nevertheless, owing to acquisition obstacles, seismic data sets are generally sampled irregularly in the spatial direction(s). This irregularity produces low-quality seismic images that contain artifacts and missing traces. One approach developed to deal with this defect is interpolation of the acquired data onto a regular grid. Through interpolation we can obtain an estimate of the fully sampled desired signal. This approach can also serve as a tool for designing a sparser, and hence more cost-effective, acquisition geometry. Compressive sensing (CS) theory has been developed to let us sample data below the Nyquist rate while still being able to reconstruct them by solving an optimization problem. This theory asserts that signals/images that have a sparse representation in a pre-specified basis or frame can be reconstructed accurately from a small number of their samples.
The principle of CS is based on a Tikhonov-like regularization equation (eq. 1) that utilizes sparsifying regularization terms. In equation (1), the CS sampling operator contains three elements: (i) a sparsifying transform C, which provides a sparse representation of signals/images in the chosen basis; (ii) a measurement matrix M, which for the seismic problem is the identity matrix; and (iii) an undersampling operator S that is incoherent with the sparsifying operator C. The curvelet transform comprises a frame whose elements correlate strongly with the curve-like reflection events present in seismic data, and it can provide a sparse representation of seismic images. The undersampling scheme used in this paper is jittered undersampling, which allows controlling the maximum gap size between known traces; other commonly used schemes are Gaussian random and binary random undersampling. Since undersampling appears in the frequency domain as Gaussian random noise, the interpolation problem can be treated as a nonlinear denoising problem, for which curvelet frames are an optimal choice. Sparsity regularization plays a leading role in CS theory and has also been applied effectively to other problems, such as denoising and deconvolution. A wide range of functions can impose sparsity in the regularization equation, and their performance in interpolating incomplete data is related to how well they match the properties of the initial model. Among the variety of potential functions, the l1-norm is the best known and most commonly used, but a comprehensive study of which function is most efficient for seismic image reconstruction is still needed, owing to the absence of a general potential function. Here we use a general potential function that enables us to compare the efficiency of a wide range of potential functions and find the optimal one for our problem.
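The gap-control property that distinguishes jittered undersampling from fully random schemes is easy to demonstrate (a hypothetical sketch; the block size and jitter amount are illustrative parameters of our own):

```python
import random

def jittered_mask(n, factor, jitter, seed=0):
    """Indices of kept traces under jittered undersampling: keep one trace
    per block of `factor` traces, perturbed by up to +/- `jitter` positions
    around the block center, so the maximum gap stays bounded (unlike a
    fully random selection of the same number of traces)."""
    rng = random.Random(seed)
    keep = set()
    for block in range(0, n, factor):
        center = block + factor // 2
        pos = center + rng.randint(-jitter, jitter)
        keep.add(min(n - 1, max(0, pos)))
    return sorted(keep)
```

With block size `factor` and jitter `j`, no gap between kept traces can exceed `factor + 2*j`, which is exactly the "controlled maximum gap" the abstract attributes to the jitter scheme.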
This regularization function includes lp-norm functions and others as special cases, as presented in Table 1, and it covers both convex and non-convex regularization functions. In this paper we use this potential function to compare the efficiency of different approaches in the CS algorithm. A challenging part of solving regularization problems is setting the best regularization parameter; here, owing to the redundancy of the curvelet transform, assigning a proper parameter faces some difficulties. Many approaches, such as the L-curve, Stein's unbiased risk estimate (SURE), and generalized cross-validation (GCV), have difficulty finding this parameter. Therefore, we turned to nonlinear approaches such as NGCV (nonlinear GCV) and WSURE (weighted SURE). The efficiency of these methods for estimating the regularization parameter and choosing the best potential function is evaluated on a synthetic noisy seismic image. By undersampling this image and removing more than 60% of its traces, the initial/observed model is constructed; this imperfect image serves as our acquired seismic data. In solving equation (1) we use a forward-backward splitting recursion algorithm. Finally, through this algorithm, we determine the optimal potential function and a method for estimating the regularization parameter.
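In its simplest form, with the l1-norm potential and an identity sparsifying basis, the forward-backward splitting recursion mentioned above reduces to repeated gradient steps on the data-misfit term followed by soft thresholding, the proximal operator of the l1-norm. A stripped-down sketch (our own; in the paper the sparsifying transform is a curvelet frame and the potential function is the more general one of Table 1):

```python
def soft(x, t):
    """Soft thresholding: the proximal operator of the l1-norm."""
    return [max(abs(v) - t, 0.0) * (1 if v > 0 else -1) for v in x]

def ista(mask, y, lam, n_iter=200):
    """Forward-backward splitting (ISTA) sketch for a masking operator S:
    minimize 0.5*||S x - y||^2 + lam*||x||_1, where mask[i] marks observed
    samples of y. For a mask, the gradient step simply re-inserts the data
    residual at the observed positions (unit step length)."""
    n = len(y)
    x = [0.0] * n
    for _ in range(n_iter):
        grad = [(x[i] - y[i]) if mask[i] else 0.0 for i in range(n)]
        x = soft([xi - g for xi, g in zip(x, grad)], lam)
    return x
```

Swapping `soft` for the proximal operator of another potential function is exactly how the different regularizers of Table 1 would be compared within the same recursion.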
https://jesphys.ut.ac.ir/article_50636_aef64b927e2acc1a6de0ccdd8906903d.pdf
2014-06-22
59
68
10.22059/jesphys.2014.50636
Seismic data interpolation/reconstruction
Compressive sensing
Sparsity
Curvelet transform
B.
Tavakoli
borhan.tavakoli@ut.ac.ir
1
Department of Earth Physics, Institute of Geophysics, University of Tehran
LEAD_AUTHOR
A.
Gholami
91577787
2
Department of Earth Physics, Institute of Geophysics, University of Tehran
AUTHOR
H.R.
Siahkoohi
hamid@ut.ac.ir
3
Department of Earth Physics, Institute of Geophysics, University of Tehran
AUTHOR
ORIGINAL_ARTICLE
Application of seismic waveform tomography in an engineering seismic cross-hole study
Seismic tomography is an imaging technique that creates maps of subsurface elastic properties, such as P/S-wave velocity, density and attenuation, based on observed seismograms and sophisticated inversion algorithms. Among the different acquisition geometries, seismic cross-hole tomography holds a special position in geophysical surveys, with many applications in the exploration of hydrocarbons, coal and other minerals, and in engineering investigations related to construction. The main goal of these studies is to obtain precise information about the earth structure (layering, layer impedances, faults and fractures) or anomalies (objects, pipes, voids). Traveltime tomography is the conventional approach for converting the traveltimes of particular phases of the waveform (such as P or S arrivals) into the corresponding parameters. Traveltime tomography requires little computational effort, but its results lack high resolution. Seismic waveform tomography is an efficient tool for high-resolution imaging of complex geological structures and has been widely used by researchers in exploration seismology. As waveform tomography exploits the waveforms in addition to the traveltimes, it has superior resolution compared with traveltime tomography, but its computational complexity has limited its everyday use in real-world applications. In this study we focus on the application of waveform tomography to an engineering-purpose seismic cross-hole survey. Our approach relies on solving the acoustic wave equation in the frequency domain and minimizing the residual between the calculated wavefield and the observed seismograms. The frequency-domain approach permits simultaneous-source modeling and the implementation of frequency-dependent absorption mechanisms. It leads to a large system of equations, which can be solved with sparse direct solvers. A mixed-grid finite-difference scheme is used to discretize the continuous second-order hyperbolic acoustic wave equation.
Although elastic modeling is more realistic and closer to the observed data, most researchers prefer the acoustic wave equation over the elastic one because of its lower computational cost. Instead, we pre-process the observed data to make observations and modeling more comparable. This pre-processing includes suppressing phases that cannot be explained by acoustic modeling, such as S waves and Rayleigh waves, and scaling the seismograms to account for the different amplitude-versus-offset behavior of the acoustic and elastic cases. Waveform tomography is a highly nonlinear problem with a very rugged cost function. To overcome this nonlinearity, we solve the problem hierarchically: we start the inversion from the low-frequency components, where the cost function is smoother, and then proceed to higher components, using the lower-frequency inversion results as the initial velocity model for the higher-frequency inversion. A synthetic example is used to test the performance of the algorithm in the absence and presence of noise. The results show that the performance of the current waveform tomography algorithm degrades for noisy data, which implies the importance of denoising before inversion and/or employing regularization. Another strategy that helps control the noise issue is the simultaneous inversion of frequency components in groups, as shown in the real data example. Lastly, a real cross-hole dataset acquired for engineering purposes is studied. The traveltime tomography result is used as the starting model for waveform tomography, and the waveform tomography results agree with the downhole measurements.
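The "large system of equations" produced by frequency-domain discretization can be illustrated in one dimension, where the Helmholtz matrix is tridiagonal and a direct solve is a two-pass Thomas recursion. This is a toy sketch only: the paper uses a 2-D mixed-grid finite-difference operator with a sparse direct solver, and the 10% complex damping factor here is just an illustrative absorption term of our own choosing.

```python
import math

def helmholtz_1d(c, h, freq, src):
    """Solve the 1-D frequency-domain acoustic (Helmholtz) equation
    (omega^2 / c^2) u + u_xx = -s on a grid with spacing h and implicit
    zero (Dirichlet) ends. A small imaginary part of omega implements
    absorption. The tridiagonal system is solved by the Thomas algorithm."""
    omega = 2.0 * math.pi * freq * (1.0 - 0.1j)   # damped angular frequency
    n = len(c)
    sub = [1.0 / h ** 2] * n                      # sub-diagonal
    diag = [(omega / c[i]) ** 2 - 2.0 / h ** 2 for i in range(n)]
    sup = [1.0 / h ** 2] * n                      # super-diagonal
    rhs = [0.0] * n
    rhs[src] = -1.0 / h                           # discrete point source
    for i in range(1, n):                         # forward elimination
        m = sub[i] / diag[i - 1]
        diag[i] -= m * sup[i - 1]
        rhs[i] -= m * rhs[i - 1]
    u = [0.0] * n                                 # back substitution
    u[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (rhs[i] - sup[i] * u[i + 1]) / diag[i]
    return u
```

One such solve per frequency and per (simultaneous) source is the forward-modeling kernel of the inversion; the hierarchical strategy described above simply repeats the misfit minimization over an increasing sequence of these frequencies.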
https://jesphys.ut.ac.ir/article_50632_b0370bee2435e80e4b25bce4176266d5.pdf
2014-06-22
69
82
10.22059/jesphys.2014.50632
Seismic waveform tomography
Cross-hole seismic
Wave Equation
Frequency domain
N.
Amini
navidamini@ut.ac.ir
1
Department of Earth Physics, Institute of Geophysics, University of Tehran
LEAD_AUTHOR
A.
Javaherian
javaheri@ut.ac.ir
2
Department of Petroleum Engineering, Amirkabir University of Technology, Tehran
AUTHOR
ORIGINAL_ARTICLE
Surveying the Ira and Nava faults (southeast of Damavand volcano) using the magnetometry method
The structural evolution of Alborz has been addressed by many researchers worldwide. Central Alborz is located at the bend between the eastern and western Alborz. Damavand volcano is situated in this bending part, along great active faults such as the Mosha, North Baijan, Ask, Nava and Ira faults. The Nava and Ira faults lie on the eastern side of Damavand volcano, trending ESE, parallel to the general trend of the faulted and fractured part of western Alborz. Based on the geological map, both faults are active reverse structures hidden under lavas. Structural studies indicate that the Ira and Nava faults are active and related to other unknown faults with the same trend, as well as to WNW-trending faults that reflect the newly established transtensional mechanism prevailing in the Alborz. Geologically, it is believed that new tectonic events affect the structural evolution of the region, such as a new WNW-trending extensional system activated during the past 5±2 m.y. In this study we applied geophysical methods extensively, combined with existing structural data, to find any expression of the neotectonic systems in the area. Magnetometry, being the most applicable method, was used to survey these discontinuities. The field study concentrated on the faulted and fractured sedimentary bedrock of Alborz, east of Damavand, at an average elevation of about 4000 m. Because of the hard topographic conditions, we could design only two north-south profiles. Total magnetic field variations were measured using a moving proton magnetometer, with a second system as the remote base. More than 286 data points were collected and processed to extract the best model from the reduction-to-the-pole transform, first vertical derivative, upward continuation and analytic signal. The first vertical derivative model is the most reliable output for revealing anomalies of tectonic origin, as this filter enhances the amplitude spectrum at high wavenumbers.
Another advantage of this method is its ability to detect any type of geological or subsurface block movement caused by faulting, folding or other tectonic events. The first vertical derivative proved the best model compared with the others and, correlated with the geological map, reveals many important, previously hidden minor and major faults. Among our findings, two WNW-trending junction faults are remarkable; their signs of normal-strike-slip movement in the Eocene units verify the activation of the new transtensional system. They appear to be minor repeated faults with normal movement of the hanging wall towards the SW. In general, we recognized approximately eight fault mechanisms at the subsurface whose signatures are not shown on the geological maps of the region. Some of them belong to the former reverse system, and two of them accord with the new transtensional system, with a WNW trend and normal movement.
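The first-vertical-derivative filter referred to above is a wavenumber-domain operation: each spectral component of the field is scaled by its wavenumber magnitude, which boosts the short-wavelength anomalies associated with shallow faulted blocks. A sketch for a single profile (our own illustration using a naive DFT; real processing uses 2-D FFT grids, and the 1-D |k| filter only approximates the 2-D sqrt(kx²+ky²) operator):

```python
import cmath, math

def vertical_derivative(profile, dx):
    """First vertical derivative of a potential-field profile via the
    wavenumber domain: multiply the spectrum by |k| and transform back.
    Naive O(n^2) DFT, fine for short demonstration profiles."""
    n = len(profile)
    # forward DFT
    F = [sum(profile[x] * cmath.exp(-2j * cmath.pi * k * x / n)
             for x in range(n)) for k in range(n)]
    # |k| filter (bins above n//2 correspond to negative wavenumbers)
    for k in range(n):
        kk = k if k <= n // 2 else k - n
        F[k] *= abs(2.0 * math.pi * kk / (n * dx))
    # inverse DFT, keep the real part
    return [sum(F[k] * cmath.exp(2j * cmath.pi * k * x / n)
                for k in range(n)).real / n for x in range(n)]
```

Because the filter gain grows linearly with wavenumber, broad regional trends are suppressed while sharp fault-edge anomalies are emphasized, which is why this output discriminated the hidden faults best in the study.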
https://jesphys.ut.ac.ir/article_50634_7c3248c59367a55251f0bb8c83a84f94.pdf
2014-06-22
83
96
10.22059/jesphys.2014.50634
Alborz
Damavand Volcano
Ira and Nava faults
magnetometry
Vertical derivative
B.
Oskooi
boskooi@ut.ac.ir
1
Department of Earth Physics, Institute of Geophysics, University of Tehran
LEAD_AUTHOR
S.
Omidian
omidian@mailanator.com
2
School of Geology, University of Tehran
AUTHOR
ORIGINAL_ARTICLE
Challenges in defining the Bouguer gravity anomaly
Generally, the gravity anomaly is the difference between the observed acceleration of the Earth's gravity and a normal value. Topography (all masses above the geoid) plays a main role in the definition of the gravity anomaly. Depending on how the effect of topography is modeled, different gravity anomalies, such as the free-air and Bouguer anomalies, are obtained. The main goal of the Bouguer anomaly is to remove the gravitational effect of all masses above the geoid (topography and atmosphere). This anomaly is widely used in exploration geophysics. In geodetic applications, in the absence of topography, the Bouguer gravity anomaly is smooth and thus more suitable for interpolation and even for stable downward continuation. On the other hand, the gravity anomaly is the difference between the real gravity at a point and the normal gravity at a corresponding point where the real and normal potentials are the same. In geodesy, the gravity disturbance is defined as the difference between the real gravity observed at a point and the normal gravity at the same point. In much of the geophysical literature, the gravity anomaly is replaced by the gravity disturbance together with a corrective term called the geophysical indirect effect. This correction is computed by applying the free-air (and usually the Bouguer) correction over the geoid–ellipsoid separation, whereas it should be computed by applying only the free-air correction to the separation between the real equipotential surface and its equivalent in the normal gravity field at the gravity observation point. The free-air (FA) correction is used for up/downward continuation of normal gravity. In practice, only the linear approximation, 0.3086 mGal/m, is used, while a second-order FA correction is more realistic than the linear approximation. Note that the FA correction is not a reduction formula for downward continuation of the gravity anomaly. One of the main ambiguities in the definition of the Bouguer gravity anomaly arises from formulating the effect of topography.
The gravitational effect of topography can be split into the Bouguer term, which is the dominant term, plus a minor term, the terrain roughness. In the evaluation of the topographical effect, planar or spherical models of topography can be used. Many studies have shown that the planar and spherical models of topography give very different results for Bouguer anomalies. It has also been shown that the planar topography model (in the form of an infinite Bouguer plate) yields a mathematically and physically meaningless quantity. To compute the terrain correction in geophysics, the gravitational effect of only the masses up to a distance of about 167 km (the Hayford zone) is considered. In principle, the domain of computation of the topographical effect is the whole Earth. Despite the fact that the gravitational effect decreases with distance, the effect of the masses beyond the Hayford zone is large and should be considered. The removal of the topographical masses disturbs the isostatic equilibrium of the crust; as a result, the equipotential surface can move up to several hundred meters. The indirect topographic effect is defined as the effect on gravity of removing the topographical masses. The indirect effect of topography (ITE) in the Bouguer gravity anomaly was first introduced by Vanicek et al. (2004), whose computations show that the numerical values of ITE can reach up to 150 mGal in mountainous areas. In most studies, however, ITE is not taken into account and only the direct topographical effect is considered. In analogy with the topographical effect, the direct and indirect effects of the atmospheric masses should be considered in the computation of the Bouguer gravity anomaly. Usually the gravity effect of the atmosphere is evaluated by the IAG formula, which considers only the direct atmospheric effect as a correction to the gravity anomaly; the indirect atmospheric effect is not discussed in this context. In this study, the method proposed by Sjoberg (2000) is recommended and applied.
In order to investigate the differences between the classic and new Bouguer gravity anomalies, numerical calculations were performed in a mountainous area containing 2385 land gravity observations. The classic planar Bouguer anomalies were computed from Δg_B = g − γ + 0.3086H − 0.1119H + TC, where g and γ are the observed and normal gravity (in mGal), H is the orthometric height of the point (in m) and TC is the terrain correction computed up to the Hayford zone. The new spherical Bouguer anomalies were computed from Δg_B^new = g − γ + FA + DTE + ITE + DAE + IAE, where FA is the second-order free-air correction, DTE is the direct topographical effect (spherical shell plus terrain roughness), ITE is the indirect topographical effect, DAE is the direct atmospherical effect and IAE is the indirect atmospherical effect. The results indicate that there are large differences (over 100 mGal) between the classical and new Bouguer anomalies. The new Bouguer anomalies are less correlated with the terrain heights; therefore, the planar model cannot completely remove the gravitational effect of topography.
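The classic planar reduction described above can be sketched in a few lines. The following Python fragment assumes the standard crustal density of 2670 kg/m³ and the linear free-air gradient of 0.3086 mGal/m; the function names are illustrative, not taken from the paper:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
RHO = 2670.0    # assumed standard crustal density, kg m^-3

def free_air_correction(H):
    """Linear free-air correction: 0.3086 mGal per metre of height H."""
    return 0.3086 * H

def bouguer_plate(H, rho=RHO):
    """Attraction of an infinite Bouguer plate of thickness H (m), in mGal:
    2*pi*G*rho*H, converted from m/s^2 to mGal (1 mGal = 1e-5 m/s^2),
    i.e. about 0.1119 mGal/m for the standard density."""
    return 2.0 * math.pi * G * rho * H * 1e5

def classic_bouguer_anomaly(g, gamma, H, terrain_corr=0.0):
    """Planar Bouguer anomaly (all gravity values in mGal): observed g minus
    normal gravity gamma, plus the free-air correction, minus the plate
    attraction, plus the terrain correction TC."""
    return g - gamma + free_air_correction(H) - bouguer_plate(H) + terrain_corr
```

For a 1000 m station the plate term is roughly 112 mGal, which is why the combined gradient 0.3086 − 0.1119 ≈ 0.1967 mGal/m appears in classic reductions.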
https://jesphys.ut.ac.ir/article_50635_127606fddc0cf5dcabc5fc3121dbb5c2.pdf
2014-06-22
97
111
10.22059/jesphys.2014.50635
gravity anomaly
Geodesy
Geophysics
Bouguer
Indirect effect
M.
Goli
goli@shahroodut.ac.ir
1
Faculty of Civil Engineering, Shahrood University of Technology
LEAD_AUTHOR
ORIGINAL_ARTICLE
Ellipsoidal approximation of the topographical effects in the Earth's gravity field modelling
The topographical effect is one component of the Earth's gravity field that needs to be reliably evaluated in gravity field modeling. The topographical effect can be numerically evaluated from the knowledge of a Digital Terrain Model (DTM). With satellite positioning systems such as GPS, the computation points as well as the DTMs are given in the Gauss ellipsoidal (geodetic) coordinate system (λ, φ, h), called the geodetic longitude, latitude and height, respectively. So far, planar and spherical models of the topography have frequently been used for computing the effect of topographical masses in geodesy and geophysics. In practice, the planar model is widely used in the evaluation of the classical terrain correction. Vanicek et al. (2001) indicated that the planar model of topography (in the form of an infinite Bouguer plate) cannot be applied to the solution of the geodetic boundary value problem. Also, the spherical approximation of the topography may be insufficient for precise determination of the 1-cm geoid. Moreover, the points of interest on and above the Earth's surface as well as the DTMs are presented in the geodetic coordinate system; therefore, the Newton integral and related formulas should be evaluated in terms of the geodetic coordinates. In this study, new exact ellipsoidal formulas for the potential of topography and its vertical gradient, as well as for the second Helmert condensation topography effects, are derived. The Newton integral for computation of the gravitational potential and its vertical gradient has a weak singularity when the computation point is close to the integration point. Following Martinec (1998), the singularity is removed from the numerical integration using the Cauchy algorithm by adding and subtracting the Bouguer terms (the singularity contribution). In the ellipsoidal approximation, the Bouguer terms are computed from an ellipsoidal shell.
The ellipsoidal shell is sufficiently approximated by a shell bounded by two concentric, similar ellipsoids, the so-called homoeoid. The thickness of the homoeoid is equal to the ellipsoidal height of the topography at the point of interest. The roughness terms, due to the deficiency of the ellipsoidal Bouguer shell, can be evaluated by direct numerical integration. The results of the spherical and ellipsoidal models are numerically investigated in Iran (where the highest peak exceeds 5000 m). The selected test area extends from 24° to 40° northern latitude and from 44° to 60° eastern longitude. The near zone of the topographical integrals extends to 4° and the far zone from 4° to 180°. The near zone is divided into three subzones: (1) the innermost zone up to 15 arc-minutes, (2) the middle zone up to 1°, and (3) the outer zone from 1° to 4°. The contributions of the innermost, middle and outer zones are computed with 3", 30" and 5' DEMs, respectively. The far-zone effect is computed by integration over a 30' DTM. The numerical results indicate that the magnitudes of the ellipsoidal corrections (the differences between the ellipsoidal and spherical solutions) are small. The main bulk of this correction is of long wavelength and is due to the Bouguer and distant-zone contributions. Therefore, the ellipsoidal correction can be used for regional and global applications such as regional approximation of the Earth's gravity field. Since the compilation of a 1-cm geoid requires gravity with a precision better than 10 µGal (Martinec, 1998), the ellipsoidal approximation of topography must be used in precise geoid computation, particularly in rugged mountainous areas.
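As a numerical aside, the gap between the planar and spherical models of topography, the spherical shell being the baseline that the ellipsoidal homoeoid refines, can be illustrated with a short sketch. Constants and function names here are assumptions for illustration, not the authors' code:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
RHO = 2670.0       # assumed topographic density, kg m^-3
R = 6371000.0      # mean Earth radius, m

def spherical_shell_attraction(H, r=None):
    """Attraction (mGal) at radius r of a homogeneous spherical shell of
    thickness H resting on a sphere of radius R (the 'spherical Bouguer'
    term). Outside the shell, it attracts like a point mass at the centre."""
    if r is None:
        r = R + H                       # observer standing on top of the shell
    mass = RHO * (4.0 / 3.0) * math.pi * ((R + H)**3 - R**3)
    return G * mass / r**2 * 1e5        # convert m/s^2 to mGal

def planar_plate_attraction(H):
    """Infinite planar Bouguer plate of the same thickness, in mGal."""
    return 2.0 * math.pi * G * RHO * H * 1e5
```

For thin topography the shell attraction at its top approaches 4πGρH, twice the infinite-plate value 2πGρH, a well-known illustration of why the planar and spherical reductions disagree strongly.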
https://jesphys.ut.ac.ir/article_50637_5499df9f589e116559e837d615641359.pdf
2014-06-22
113
124
10.22059/jesphys.2014.50637
Gravity field
Topographical effects
Ellipsoidal approximation
Geodetic coordinate system
M.
Goli
goli@shahroodut.ac.ir
1
Faculty of Civil Engineering, Shahrood University of Technology
LEAD_AUTHOR
M.
Najafi-Alamdari
mnajalm@yahoo.com
2
Department of Hydrography, Islamic Azad University, North Tehran Branch
AUTHOR
ORIGINAL_ARTICLE
Inverse analysis of geomagnetic investigations for local anomaly detection using genetic algorithm
One of the most important goals in geomagnetic investigations is detecting local anomaly locations. The regional anomaly can be simulated as a trend surface, and local anomalies are detected by comparing the measured data with the simulated trend surface. The problem is to find the best coefficients of the trend surface model using inverse methods based on modern optimization techniques, which are faster and more accurate than common methods. The main idea of an inverse method based on a modern optimization approach is to search for a model whose predicted values are as close as possible to the observed ones. Extensive advances in computational techniques have allowed researchers to develop new search strategies for use in optimization problems. The Genetic Algorithm (GA) is one of the evolutionary optimization algorithms, based on a population of chromosomes, and is widely used in engineering optimization problems. Evolutionary algorithms are developed based on swarm intelligence and the social behavior of individuals in nature; the individuals in these algorithms, called agents, are affected by their neighboring agents and by the best agent. In the end, the optimum solution is specified with respect to the objective function. In this paper, a genetic algorithm is used to minimize the differences between the real and simulated data. In order to study the geomagnetic anomaly, a forward model is first developed and then, using the inverse method based on GA, the regional anomaly trend surface is simulated. The objective function is defined as the sum of squared differences between the measured magnetic values and the trend surface T(x, y) = Ax² + By² + Cxy + Dx + Ey + F, where x and y are the positions of the field stations measured by GPS and T is the magnetic value at those positions. A, B, C, D, E and F are unknown coefficients that are determined using the inverse method. According to the objective function, a second-order two-dimensional equation is proposed for simulating the regional anomaly trend surface.
Second-order equations are better than first-order and third- or higher-order ones. First-order (planar) equations are not guaranteed to cover all aspects of the data, while third- or higher-order equations are also not recommended for modeling the data, because over-fitting may occur. Therefore, the second-order equation is the best model for simulating the regional anomaly trend surface. It is important to note that the optimization technique usually performs well in nonlinear forward models. The unknown coefficients of the regional magnetic anomaly trend surface in the Doroh area in southeast Iran were optimized using inverse analysis, and finally the local anomalies were detected. In order to find the locations of local geomagnetic anomalies, the regional anomaly trend is subtracted from the total anomaly, and the potential locations for drilling investigation are then recognized. Our experimental results demonstrate that the GA-based inversion is very promising for solving inverse problems and detecting the local geomagnetic anomaly trend surface, which was validated through drilling investigations. In addition, upward-continuation and reduction-to-the-pole filters, and their combination, which are common filters for detecting local geomagnetic anomaly locations, were used to confirm our results.
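A minimal real-coded GA for fitting the six trend-surface coefficients can be sketched as follows. The population size, selection, crossover and mutation settings are illustrative assumptions, not the parameters used in the study:

```python
import random

def trend(coef, x, y):
    """Second-order 2-D trend surface T(x, y) = Ax^2 + By^2 + Cxy + Dx + Ey + F."""
    A, B, C, D, E, F = coef
    return A*x*x + B*y*y + C*x*y + D*x + E*y + F

def misfit(coef, data):
    """Objective: sum of squared differences between measured and simulated values."""
    return sum((m - trend(coef, x, y))**2 for x, y, m in data)

def ga_fit(data, pop_size=40, n_gen=150, bounds=(-5.0, 5.0), seed=1):
    """Minimal real-coded GA: truncation/tournament-style selection from the
    fitter half, blend crossover, Gaussian mutation, and elitism."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(6)] for _ in range(pop_size)]
    for _ in range(n_gen):
        scored = sorted(pop, key=lambda c: misfit(c, data))
        new = scored[:2]                      # elitism: carry over the two best
        while len(new) < pop_size:
            p1, p2 = rng.sample(scored[:pop_size // 2], 2)  # select from fitter half
            a = rng.random()                  # blend crossover
            child = [a*u + (1.0 - a)*v for u, v in zip(p1, p2)]
            if rng.random() < 0.4:            # Gaussian mutation on one gene
                i = rng.randrange(6)
                child[i] += rng.gauss(0.0, 0.2)
            new.append(child)
        pop = new
    return min(pop, key=lambda c: misfit(c, data))
```

On synthetic data generated from known coefficients, the fitted surface's residuals (measured minus trend) play the role of the local anomalies described above.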
https://jesphys.ut.ac.ir/article_50638_b65ce8b755352aa1d5aeae72934cc131.pdf
2014-06-22
125
138
10.22059/jesphys.2014.50638
Inverse analysis
Local magnetic anomaly
Genetic algorithm
Minimization
H.
Izadi
hossein.izadi.ir@ieee.org
1
Department of Petroleum Exploration Engineering, School of Mining Engineering, College of Engineering, University of Tehran
LEAD_AUTHOR
GH.
Nowrouzi
gnowrouzi@gmail.com
2
Department of Mining Engineering, Faculty of Engineering, University of Birjand
AUTHOR
B.
Roshan Ravan
3
Department of Mine Exploration Engineering, School of Mining Engineering, Isfahan University of Technology
AUTHOR
S.
Shakiba
4
Department of Mine Exploration Engineering, School of Mining Engineering, College of Engineering, University of Tehran
AUTHOR
ORIGINAL_ARTICLE
A statistical-dynamical analysis of the relation between the Mediterranean storm track and the North Atlantic Oscillation based on wave activity diagnostics
The North Atlantic Oscillation (NAO) is one of the most prominent modes of low-frequency variability over the Atlantic basin in the Northern Hemisphere. In the past decades, the NAO has attracted increasing scientific interest because it exerts an important influence on the regional climate and weather in the North Atlantic region and the adjacent continents. Of particular interest is the impact of the NAO on the Mediterranean storm track, through which the NAO can extend its influence to the climate far downstream, including the Middle East and southwest Asia. The problem has previously been studied using energetics, by comparing ensemble averages of the terms involved in the eddy kinetic and available potential energy, where the ensemble averages are taken separately over the critical positive and negative NAO months. Such analysis has yielded specific results regarding the behavior of the transient eddies in the Mediterranean storm track during the two phases of the NAO. For example, the energy flux vectors indicate a stronger source in the central Mediterranean with a stronger sink over the Red Sea and Northeast Africa in the positive NAO. There is, however, a fundamental issue with any energy-based analysis, namely the non-uniqueness of the way the conversion and flux terms are written. As a more powerful diagnostic tool, the wave activity conservation law resolves the non-uniqueness issues encountered in dealing with the conversion terms. In this way, wave activity diagnostics proves useful for investigating the propagation characteristics of stationary and migratory wave disturbances and their interaction with mean flows, as well as for inferring preferred positions of emission and absorption of Rossby waves.
First introduced for waves defined by perturbations with respect to the zonal mean, leading to the Eliassen–Palm (EP) diagnostics, the wave activity conservation law has now been extended to other averages as well as to more general definitions of waves and mean flows with no resort to averaging. In this study, a form of the wave activity and its flux introduced by Esler and Haynes in 1999 is used. The data used are the NCEP/NCAR reanalysis data covering the years 1950–2011 for the winter months from December to February. The critical months are defined on the basis of the monthly NAO index and grouped into two ensembles of 31 positive and 37 negative NAO months. A month is considered critical positive (negative) if its monthly NAO index is greater (smaller) than the mean NAO index by more than one standard deviation. The wave activity and the three components of its flux are computed for all days of each winter season, then the averages are taken and composite maps are prepared for the two ensembles. To investigate the net flux of wave activity into the Mediterranean region, a three-dimensional domain extending vertically from 600 to 200 hPa and horizontally from 15°W to 45°E in longitude and from 30°N to 50°N in latitude is selected. For further analysis, the domain thus defined is divided into three equal subdomains in the west, center and east of the Mediterranean domain. The main results can be summarized as follows. The connection of the Mediterranean storm track to the northeast Atlantic and the north of Europe is stronger in the positive phase of the NAO, whereas the connection of the Mediterranean storm track to the cyclogenesis in the west of the North Atlantic is stronger in the negative phase. In other words, the Mediterranean storm track receives stronger activity from the north and the west in, respectively, the positive and negative phases of the NAO.
In the upper troposphere, the wave activity flux vectors indicate the dominance of anticyclonic (cyclonic) Rossby wave breaking and northward (southward) transfer of momentum in the positive (negative) phase of the NAO over the Mediterranean region. In both phases, while the west and east subdomains act as sinks (receivers) of wave activity, the central subdomain acts as a source (emitter). In accordance with the results from the energetics, the central Mediterranean acts as a considerably stronger source of wave activity in the positive phase. Overall, the results of the wave activity analysis confirm those of the energetics. In particular, southwest Asia is expected to receive a stronger influence from the North Atlantic storm track via the Mediterranean in the positive phase of the NAO. The above results are based solely on the simultaneous analysis of wave activity over the whole North Atlantic and Mediterranean storm tracks as well as southwest Asia in the critical months. It remains to be seen how such results carry over to actual episodes of positive and negative NAO with proper time lags. Such analysis is expected to have the potential to lead to some seasonal forecasting capability.
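The ensemble selection rule used above (a critical month is one whose index lies beyond one standard deviation of the mean) can be expressed compactly. This is a generic sketch of the rule, not the authors' code:

```python
import numpy as np

def critical_months(nao_index):
    """Split months into critical positive/negative NAO ensembles:
    index greater (smaller) than the mean by more than one standard deviation.
    Returns the integer indices of the qualifying months."""
    idx = np.asarray(nao_index, dtype=float)
    mu, sigma = idx.mean(), idx.std()
    pos = np.where(idx > mu + sigma)[0]
    neg = np.where(idx < mu - sigma)[0]
    return pos, neg
```

Applied to the 1950–2011 winter-month index series, a rule of this form yields the two ensembles (31 positive, 37 negative months) over which the composites are averaged.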
https://jesphys.ut.ac.ir/article_50639_040befb7dde0e66c2b92ff2e63aed44d.pdf
2014-06-22
139
152
10.22059/jesphys.2014.50639
Wave activity
EP flux
North Atlantic Oscillation
Critical months
Mediterranean storm track
M.
Rezaeian
1
Department of Space Physics, Institute of Geophysics, University of Tehran
AUTHOR
A.R.
Mohebalhojeh
moheb@ut.ac.ir
2
Department of Space Physics, Institute of Geophysics, University of Tehran
LEAD_AUTHOR
F.
Ahmadi-Givi
ahmadig@ut.ac.ir
3
Department of Space Physics, Institute of Geophysics, University of Tehran
AUTHOR
M.A.
Nasr-Esfahany
mnasr@ut.ac.ir
4
Department of Water Engineering, Shahrekord University
AUTHOR
ORIGINAL_ARTICLE
Sensitivity analysis and comparison of capability of three conceptual models HEC-HMS, HBV and IHACRES in simulating continuous rainfall-runoff in semi-arid basins
Arid and semi-arid regions of the world are confronted with limited water resources. A large part of Iran is arid or semi-arid, and rainfall in such regions is typically meager, irregular and highly variable. This irregularity affects the hydrological cycle and water resources. Investigating the hydrology of arid and semi-arid regions is essential to understand this environment and determine its vulnerability to change. It is obvious that effective water resource management is necessary, and this requires a decision support system that includes modeling tools. Choosing a model requires recognition of the capabilities and limitations of hydrological models at the watershed scale. In this paper, three conceptual continuous rainfall–runoff models, HBV, HEC-HMS and IHACRES, were used for runoff simulation in the semi-arid Azam Harat river basin. The HBV (Hydrologiska Byråns Vattenavdelning) model was first developed at the Swedish Meteorological and Hydrological Institute in 1976. Up to now, runoff simulations of various basins with different hydrological conditions have been evaluated with this model. It simulates both the continuous runoff and single flood events of a basin, dividing the basin into several subbasins; the subdivision is based on the altitude and vegetation of the basin. In this research we used the HBV-Light version, in which a Genetic Algorithm (GA) procedure is used to calibrate the model parameters. The HEC-HMS (Hydrologic Engineering Center - Hydrologic Modeling System) model is a newer version of the HEC-1 model, used for simulation of both continuous and single-event runoff of a basin. One of the main advantages of this model is its simulation of snowmelt in the basin. In this research, the soil moisture accounting algorithm was chosen as the main methodology for simulating runoff based on the fluctuations of rainfall, evapotranspiration and soil moisture losses. The IHACRES model is based on a non-linear loss module and a linear unit hydrograph module.
The simulation process converts precipitation and temperature in each time step to effective rainfall through the non-linear module, and then to surface runoff through the linear unit hydrograph module at the same time step. The evaluation criteria in this study are the Nash coefficient (E), the coefficient of determination (R2), the root mean square error (RMSE) and the bias. The results show that, in the calibration period, the HBV model, with a Nash coefficient of 0.76, a coefficient of determination of 0.77, an RMSE of 0.72 and a bias of -0.004, and the HEC-HMS model, with a Nash coefficient of 0.62, a coefficient of determination of 0.64, an RMSE of 1.3 and a bias of 0.007, have the highest and lowest efficiencies, respectively. In the validation period these values are 0.66, 0.67, 0.8 and -0.15 for the HBV model and 0.55, 0.57, 1.02 and -0.03 for the HEC-HMS model, respectively. Overall, the HBV model shows the best performance in simulating runoff under the watershed conditions in the validation period. In the parameter sensitivity analysis, the most sensitive parameters of the HBV model were UZL, MAXBAS and BETA. In the HEC-HMS model, the soil storage, maximum infiltration and tension storage parameters were the most sensitive, with the greatest effect on the model output. The parameters of the IHACRES model showed equal sensitivity.
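The evaluation criteria listed above can be sketched in a few lines of Python. These follow the standard textbook definitions; the sign convention for the bias (simulated minus observed) is an assumption, as the paper does not state it:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency E: 1 is a perfect fit, 0 means the model
    is no better than predicting the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

def rmse(obs, sim):
    """Root mean square error, in the units of the flow series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim)**2)))

def bias(obs, sim):
    """Mean error (assumed simulated minus observed); negative values
    indicate underestimation of the observed flow."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.mean(sim - obs))
```

Computing these on the calibration and validation series gives the tabulated scores by which HBV, HEC-HMS and IHACRES are ranked.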
https://jesphys.ut.ac.ir/article_50640_6ee24eb3181457562ac83b503af2a136.pdf
2014-06-22
153
172
10.22059/jesphys.2014.50640
Conceptual rainfall
Runoff model
Azam Harat River basin
HBV
HEC-HMS
IHACRES
M.
Yaghoubi
m.yaghobi@ut.ac.ir
1
Department of Water Science and Engineering, Aburaihan Campus, University of Tehran
AUTHOR
A.R.
Massah Bavani
armassah@ut.ac.ir
2
Aburaihan Campus, University of Tehran
LEAD_AUTHOR