Earwig's Copyvio Detector

Settings

This tool attempts to detect copyright violations in articles. In search mode, it will check for similar content elsewhere on the web using Google, external links present in the text of the page, or Turnitin (via EranBot), depending on which options are selected. In comparison mode, the tool will compare the article to a specific webpage without making additional searches, like the Duplication Detector.

Running a full check can take up to a minute if other websites are slow or if the tool is under heavy use. Please be patient. If you get a timeout, wait a moment and refresh the page.

Be aware that other websites can copy from Wikipedia, so check the results carefully, especially for older or well-developed articles. Specific websites can be skipped by adding them to the excluded URL list.

Results generated in 0.634 seconds.
Article:

A figure of merit (FOM) is a quantity used to characterize the performance of a device, system, or method relative to its alternatives.

Examples

Accuracy of a rifle

Audio amplifier figures of merit such as gain or efficiency

Battery life of a laptop computer

Calories per serving

Clock rate of a CPU is often given as a figure of merit, but it is of limited use for comparing different architectures. FLOPS may be a better figure, though it too is not completely representative of CPU performance.

Contrast ratio of an LCD

Frequency response of a speaker

Fill factor of a solar cell

Resolution of the image sensor in a digital camera

Measure of the detection performance of a sonar system, defined as the propagation loss for which a 50% detection probability is achieved

Noise figure of a radio receiver

The thermoelectric figure of merit, zT, a material constant proportional to the efficiency of a thermoelectric couple made with the material

The figure of merit of a digital-to-analog converter, calculated as (power dissipation)/(2^ENOB × effective bandwidth) [J]

Luminous efficacy of lighting

Profit of a company

Residual noise remaining after compensation in an aeromagnetic survey

Heat absorption and transfer quality for a solar cooker

Computational benchmarks are synthetic figures of merit that summarize the speed of algorithms or computers in performing various typical tasks.
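Several of the figures listed above are simple ratios. As one concrete illustration, the digital-to-analog converter FOM can be computed directly; a minimal sketch with illustrative values:

```python
def dac_fom(power_w, enob_bits, bandwidth_hz):
    """DAC figure of merit: power / (2**ENOB * effective bandwidth).

    Lower is better: it is the energy spent per effective
    conversion level and unit of bandwidth.
    """
    return power_w / (2 ** enob_bits * bandwidth_hz)
```

For example, a converter dissipating 10 mW at 10 effective bits over a 1 MHz effective bandwidth scores about 9.8 × 10⁻¹² in these units.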


Source:

ScienceDirect: Figure of Merit

Multivariate AFOMs are numerical parameters for evaluating the performance of an analytical instrument or method.

From: Data Handling in Science and Technology, 2015

Volume 3. K.S. Booksh, ... J. Czege, in Comprehensive Chemometrics, 2009. 3.09.7 Figures of Merit

Analytical figures of merit such as sensitivity, selectivity, limit of detection, and net analyte signal (NAS) are commonly used as transferable metrics for performance characterization of instrumental methods. These common figures of merit are mathematically defined for univariate, multivariate, and multiway data instrumentation.[2,59] All figures of merit are defined as a function of the NAS.[2,59] Although there have been competing definitions of what exactly constitutes 'net' analyte signal in three-way calibration,[20,59–61] the definition by Messick et al.[62] has been found to be the most straightforward extension of Lorber's original definition[63] of multivariate NAS.[59] Simply put, the NAS of an analyte is the part of the analyte signal that is orthogonal to the signal from all interfering species:

(27) NAS_A = (I − R_{−a} R_{−a}^+) r_a

where the vector r_a is the two-dimensional EEM profile or the three-dimensional EEM–time decay profile of analyte A unfolded into a vector, the matrix R_{−a} is a collection of all other spectral profiles unfolded in the same manner as r_a and appended column-wise, I is an identity matrix of appropriate dimension, and the superscript + indicates the Moore–Penrose pseudo-inverse.

The selectivity (SEL) is thus the ratio of the norm of NAS_A to the norm of r_a:

(28) SEL_A = ‖NAS_A‖ / ‖r_a‖

For three-way analysis to mathematically provide accurate estimates of analyte concentration, the selectivity must be > 0. That is, if the analyte is collinear in any dimension with interfering compounds, the NAS and SEL become zero and quantitation becomes impossible.

Just because an analysis is mathematically possible does not mean that the analysis will be of good quality. The figures of merit to describe the quality of analyses, signal-to-noise ratio (S/N) and sensitivity (SEN), are based on the net analyte signal-to-noise ratio.

(29) S/N_A = ‖NAS_A‖ / ‖e_a‖

(30) SEN = ‖NAS_A‖ / c_0

where ‖e_a‖ is the 2-norm of the model residuals, or another estimate of the errors across an entire three-way data cube, and c_0 is nominally unit concentration. Obviously, the more the interferents modeled by PARAFAC overlap with the analyte, the smaller the magnitude of the NAS and the lower the S/N ratio.
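The four quantities above follow directly from their definitions. A minimal NumPy sketch, assuming the profiles have already been unfolded into vectors and a column-wise matrix (all variable names are illustrative):

```python
import numpy as np

def nas_figures_of_merit(r_a, R_minus_a, e_a, c0=1.0):
    """Compute NAS, SEL, S/N, and SEN for analyte a (Eqs. 27-30).

    r_a       : unfolded profile of the analyte (vector)
    R_minus_a : unfolded profiles of all other species, one per column
    e_a       : residual (error) vector; its 2-norm enters Eq. 29
    c0        : nominally unit concentration (Eq. 30)
    """
    I = np.eye(len(r_a))
    # Part of r_a orthogonal to the interferent subspace (Eq. 27).
    nas = (I - R_minus_a @ np.linalg.pinv(R_minus_a)) @ r_a
    sel = np.linalg.norm(nas) / np.linalg.norm(r_a)   # Eq. 28
    snr = np.linalg.norm(nas) / np.linalg.norm(e_a)   # Eq. 29
    sen = np.linalg.norm(nas) / c0                    # Eq. 30
    return nas, sel, snr, sen
```

With an interferent orthogonal to the analyte, SEL is 1; a collinear interferent drives the NAS, and hence SEL, to zero, making quantitation impossible, as noted above.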

Further details regarding the determination of figures of merit and the implications of each competing definition can be found in the critical review by Faber et al.[59]

URL: https://www.sciencedirect.com/science/article/pii/B9780444527011000788

Analytical Figures of Merit

Alejandro C. Olivieri, Graciela M. Escandar, in Practical Three-Way Calibration, 2014. 6.1 Definition of figure of merit

A figure of merit is a quantity used to characterize the performance of a device, system, or method relative to its alternatives. In engineering, figures of merit are often defined for particular materials or devices in order to determine their relative utility for an application. In commerce, such figures are often used as a marketing tool to convince consumers to choose a particular brand. In analytical calibration, they are employed to compare the relative performance of different analytical methodologies, and also to establish detection capabilities, a feature specific to analytical chemistry.

URL: https://www.sciencedirect.com/science/article/pii/B9780124104082000065

Volume 2. Franco Allegrini, Alejandro C. Olivieri, in Comprehensive Chemometrics (Second Edition), 2020. 2.20.1 Introduction

Figures of merit are numerical parameters, usually employed for comparing analytical methods in terms of predictive ability and detection capabilities. The estimation of analytical figures of merit for calibrations based on first- and higher-order data has become an active research field in recent years.[1–23] The starting approach was to extend the well-known univariate calibration concepts to data of increasing complexity. For example, the sensitivity, which is a key figure of merit for qualifying analytical methods, is correctly interpreted in the framework of classical univariate calibration as the change in response (the analytical signal) for a given change in stimulus (the analyte concentration).[4] This concept was extended to multivariate and multi-way calibration using the net analyte signal (NAS) as an analogue of the raw instrumental signal in univariate calibration.[5,6] The notion of net analyte signal is attractive because it measures, in principle, the portion of the overall signal that can be directly employed to quantitate a given analyte in the presence of interferents. However, this approach proved unsuccessful for estimating the sensitivity, particularly in multi-way calibration.[7]

A different approach was thus taken, based on error propagation rather than on signal changes associated with analyte concentration changes.[3] In this approach, a small amount of random perturbation noise is employed to interrogate the calibration model, analyzing how it propagates from the test sample signal to the estimated analyte concentration. The propagation is assumed to take place for an infinitely precise calibration model; that is, the calibration signals and concentrations are assumed to carry no noise. In other words, it is assumed that the only error-propagation source is the noise in the signal of the sample to be quantified. This is a rather artificial view of the error propagation process, but it has two important advantages: (1) it provides operational expressions to estimate the sensitivity in all calibration scenarios, from univariate to multivariate to multi-way, and (2) it is consistent with extensive noise-addition simulations and affords an adequate parameter to measure the prediction error. It also agrees, as it should, with the classical view of the sensitivity in univariate calibration.[1]

Comparison of analytical methodologies using figures of merit demands that they be based on the same units for the measured instrumental signal. The sensitivity, for example, does not meet this requirement, because it has units of signal × concentration⁻¹. The analytical sensitivity is preferable in this regard, because its units are concentration⁻¹, and it is thus independent of the type of measured signal.[8,9] However, both the sensitivity and the analytical sensitivity are derived from the above-mentioned error propagation approach. In the latter, the interrogating random noise used to define them is of the so-called iid type, that is, independently and identically distributed, meaning that its variance is constant for all measured sensors and that there is no correlation among the noise values at neighboring sensors. This is hardly found in practice, at least in principle.[10] Consequently, a relevant issue concerning the classical sensitivity parameters is whether they are useful when the real instrumental noise is not iid. This has only been addressed very recently, with the somewhat expected conclusion that the classical sensitivity parameters are not the best indicators of method performance.[11] Modifications of the sensitivity expressions have been proposed to improve their interpretability and usefulness for comparison purposes, as will be discussed below.[11,12]

Certain figures of merit are not only used for method comparison but also display an intrinsic importance: the detection and quantitation limits, for example, not only allow one to compare different methodologies in terms of detectability;[13,14] they also help the analyst to assess whether a given analytical protocol can be applied to detect low levels of an analyte, in accordance with official documents set by regulating agencies. The definition of the limit of detection has undergone important changes over the years, from the classical "three-sigma" concept to the present approach based on the consideration of: (1) the false detects and false non-detects (or errors of Types 1 and 2), and (2) the propagation of the uncertainty in the setting of the calibration parameters to the analyte prediction, in addition to the uncertainty brought about by the measurement of the test sample signal.[14] Sadly, the literature still shows analytical reports using the old definition of the limit of detection, which greatly complicates the comparison of analytical methods.[15,16]

In this report, some relevant analytical figures of merit will be discussed, valid from univariate to multi-way calibration, including recent advances concerning non-linear first-order calibration based on artificial neural networks (ANN)[17] and non-iid structures for the instrumental noise.[12] It is noteworthy for a field of apparent complexity that the theory is almost complete for most of the studied figures of merit. Some specific issues remain to be investigated, as will become clear below.

It may be noticed that the word merit is usually employed in a positive manner, highlighting the quality of being worthy of reward, as the Oxford Dictionary dictates. However, the term can also be used in the opposite way, meaning that something is worthy of punishment. The quality parameters represented by figures of merit act in a similar manner: analytical methods can be favored or disfavored based on their specific values. In real practice, analysts balance the numerical parameters described in this report together with additional considerations such as cost, time, energy, waste, security, and the potential for automation and/or non-invasive measurements.

URL: https://www.sciencedirect.com/science/article/pii/B9780124095472146128

Volume 2. T. Hancock, C. Smyth, in Comprehensive Chemometrics, 2009. 2.31.4.1.1 Assessing the natural number of clusters using similarity-based k-means and figures of merit

By predicting the co-occurrence matrix, SBK allows for an estimate of the natural number of clusters in a data set, based on figures of merit (FOMs).[19] FOMs use the ensemble ideology to assess the number of clusters. The theory behind FOMs applied with SBK is explained here, after the original FOMs are described.

2.31.4.1.1(i) Figures of merit

An FOM assesses the 'predictive power' of a clustering algorithm by leaving out a variable, p, clustering the data (into k clusters), and then calculating the root mean square error (RMSE) of p relative to the cluster means, RMSE(p,k):

(16) RMSE(p,k) = √[ (1/n) Σ_{r=1}^{k} Σ_{x_i ∈ S_r} (x_{i,p} − x̄_r(p))² ]

where x_{i,p} is the measurement of the pth variable on the ith observational unit, n is the number of observational units, S_r is the set of observational units in the rth cluster, and x̄_r(p) is the mean of variable p for the observational units in the rth cluster. Each variable is omitted in turn and its RMSE is calculated. These RMSEs are summed over all variables to give an aggregate FOM (AFOM):

(17) AFOM(k) = Σ_{p=1}^{P} RMSE(p,k)

Obviously, low values of a clustering algorithm's AFOM indicate that the algorithm has high predictive power.[19] The AFOM is calculated for each k and adjusted for cluster size. The reasoning behind adjusting the AFOM for cluster size is simple: as the number of clusters is increased, the AFOM is artificially decreased, because the clusters are smaller and naturally will not be as spread out as before. The adjusted AFOM mitigates this artificial decrease by dividing the AFOM by (n − k)/n.

2.31.4.1.1(ii) Figures of merit applied with SBK

If the data set is clustered into k clusters and this process is repeated P times, then the AFOM is defined as

(18) AFOM(k) = Σ_{p=1}^{P} (1/n²) Σ_{r=1}^{k} Σ_{i,j ∈ S_r(p)} (c_{i,j} − C̄_r(p))²

where S_r(p) is the set of observational units in cluster r on the pth run, C̄_r(p) is the mean similarity of the observational units in cluster r on the pth run, c_{i,j} is the (i,j)th element of the co-occurrence matrix, and n² is the dimension of the similarity matrix.

Following the same theory described above, here the adjusted FOM is given by

(19) AFOM_adj(k) = AFOM(k) / [P (n² − k)/n²]

We also incorporate P into the adjustment factor to find the mean adjusted FOM. The AFOM_adj is obtained for varying levels of k, and the smallest AFOM_adj indicates the number of clusters. For the sake of parsimony, the elbow of the AFOM_adj curve is selected as the optimal number of clusters.
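The original leave-one-variable-out FOM of Eqs. (16) and (17) can be sketched as follows; the clustering routine is passed in as a function, and the simple threshold clusterer mentioned in the usage note is illustrative only:

```python
import numpy as np

def rmse_fom(X, labels, p):
    """RMSE of the left-out variable p relative to the cluster means (Eq. 16)."""
    n = X.shape[0]
    total = 0.0
    for r in np.unique(labels):
        members = X[labels == r, p]
        total += np.sum((members - members.mean()) ** 2)
    return np.sqrt(total / n)

def aggregate_fom(X, cluster_fn, k):
    """Aggregate FOM (Eq. 17), adjusted for cluster size by (n - k)/n."""
    n, P = X.shape
    afom = 0.0
    for p in range(P):
        # Cluster with variable p left out, then score how well the
        # resulting clusters predict the held-out variable.
        labels = cluster_fn(np.delete(X, p, axis=1), k)
        afom += rmse_fom(X, labels, p)
    return afom / ((n - k) / n)
```

For instance, with two well-separated groups and a threshold clusterer such as `lambda Xm, k: (Xm[:, 0] > 2.5).astype(int)`, the adjusted AFOM is small; in practice it is evaluated over a range of k and the elbow of the curve is taken as the natural number of clusters.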

URL: https://www.sciencedirect.com/science/article/pii/B9780444527011000697

Fundamentals and Analytical Applications of Multiway Calibration

Hai-Long Wu, ... Ru-Qin Yu, in Data Handling in Science and Technology, 2015. 2.5 Figures of Merit

Figures of merit are analytical parameters used for evaluating the performance of a calibration method. Different approaches for computing figures of merit for multiway calibration methods have been discussed in the literature [86]. Here, we briefly describe one of the approaches for computing the sensitivity, the limit of detection (LOD), and the limit of quantitation (LOQ), for three-way and four-way calibration.

In three-way calibration based on the trilinear model, the sensitivity (SEN) for the analyte n can be computed by the following expression [86–88]:

(3) SEN = m / ‖nth row of [(I − Z_u Z_u^+) Z_c]^+‖

where the (IJ) × P matrix Z_c = [b_{c1} ⊙ a_{c1} ⋯ b_{cP} ⊙ a_{cP}] is associated with the P calibrated analytes, and the (IJ) × [(I + J)Q] matrix Z_u = [b_{u1} ⊗ I_a | I_b ⊗ a_{u1} | ⋯ | b_{uQ} ⊗ I_a | I_b ⊗ a_{uQ}] contains the information of the Q unexpected interferents.

In four-way calibration based on the quadrilinear model, the SEN for the analyte n is given by an expression [86] analogous to that of the three-way case:

(4) SEN = m / ‖nth row of [(I − Z_u Z_u^+) Z_c]^+‖

where the (IJK) × P matrix Z_c = [c_{c1} ⊙ b_{c1} ⊙ a_{c1} ⋯ c_{cP} ⊙ b_{cP} ⊙ a_{cP}] is associated with the P calibrated analytes, and the (IJK) × [(I + J + K)Q] matrix Z_u = [c_{u1} ⊗ b_{u1} ⊗ I_a | c_{u1} ⊗ I_b ⊗ a_{u1} | I_c ⊗ b_{u1} ⊗ a_{u1} | ⋯ | c_{uQ} ⊗ b_{uQ} ⊗ I_a | c_{uQ} ⊗ I_b ⊗ a_{uQ} | I_c ⊗ b_{uQ} ⊗ a_{uQ}] contains the information of the Q unexpected interferents.
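A NumPy sketch of the trilinear sensitivity, assuming unit m; the Khatri–Rao helper and the orthonormal test profiles are illustrative, not taken from the chapter:

```python
import numpy as np

def khatri_rao(B, A):
    """Column-wise Kronecker product: builds an (IJ) x P matrix such as Z_c."""
    return np.stack([np.kron(B[:, p], A[:, p]) for p in range(A.shape[1])], axis=1)

def sen_trilinear(A_c, B_c, Z_u, n, m=1.0):
    """SEN_n = m / ||nth row of [(I - Z_u Z_u^+) Z_c]^+||  (trilinear case)."""
    Z_c = khatri_rao(B_c, A_c)
    # Project out the subspace spanned by the unexpected interferents.
    P_orth = np.eye(Z_c.shape[0]) - Z_u @ np.linalg.pinv(Z_u)
    return m / np.linalg.norm(np.linalg.pinv(P_orth @ Z_c)[n, :])
```

With no interferent signal and orthonormal analyte profiles, the sensitivity reduces to m, as expected from the expression.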

In multiway calibration, the LOD and the LOQ can be estimated as follows [86–88]:

(5) LOD = 3.3 √(h_0 s_c² + h_0 s_x²/SEN² + s_x²/SEN²)

(6) LOQ = 10 √(h_0 s_c² + h_0 s_x²/SEN² + s_x²/SEN²)

where h_0 is the sample leverage, s_c² is the variance in the calibration concentrations, s_x² is the variance in the instrumental signals, and SEN is the analyte sensitivity.
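Eqs. (5) and (6) differ only in the 3.3 and 10 multipliers; a minimal sketch with illustrative input values:

```python
import math

def detection_limits(h0, s_c2, s_x2, sen):
    """Return (LOD, LOQ) per Eqs. (5)-(6).

    h0   : sample leverage
    s_c2 : variance of the calibration concentrations
    s_x2 : variance of the instrumental signals
    sen  : analyte sensitivity
    """
    spread = math.sqrt(h0 * s_c2 + h0 * s_x2 / sen**2 + s_x2 / sen**2)
    return 3.3 * spread, 10.0 * spread
```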

For the simulated three-way EEMs data array, the figures of merit obtained by using the three-way calibration method based on the ATLD algorithm are listed in Table 3. For analytes 1 and 2, the sensitivities (SENs) are 24.3 and 25.4 nM⁻¹, respectively; the LODs are 1.0 and 2.6 pM, respectively; and the LOQs are 2.9 and 7.8 pM, respectively. These results indicate the satisfactory sensitivity, LOD, and LOQ of the three-way calibration method based on the ATLD algorithm for the direct quantitative analysis of the two analytes of interest, even in the presence of an uncalibrated interferent and a varying background signal.

Table 3. Figures of Merit Obtained by the Three-Way Calibration Method Based on the ATLD Algorithm with the Number of Components N = 4, for the Simulated EEMs Three-Way Data Array

Figures of merit    Analyte 1    Analyte 2
SEN (L nmol⁻¹)      24.3         25.4
LOD (pM)            1.0          2.6
LOQ (pM)            2.9          7.8

URL: https://www.sciencedirect.com/science/article/pii/B9780444635273000047

Magnetic separations. Jenifer Gómez-Pastora, ... Maciej Zborowski, in Particle Separation Techniques, 2022. 4.6 Maximum energy product of a permanent magnet

The figure of merit important for the application of permanent magnets to magnetic microparticle separations is the "maximum energy product" of the magnet material, often quoted in the literature and provided by commercial vendors. It is the product of the remnant magnetization M and the coercive force H_c at which the magnitude of the product reaches its maximum, in units of the magnet's intrinsic energy density, joule/m³ (J/m³) or, as often, mega-gauss-oersted (MGOe), where 1 MGOe = 7957.75 J/m³. The rapid progress in permanent magnet materials science and their numerous applications has brought that figure to 52 MGOe (0.41 MJ/m³) at affordable prices (Hatch & Stelter, 2001).

URL: https://www.sciencedirect.com/science/article/pii/B978032385486300007X

Theory and Bonding of Inorganic Nonmolecular Systems

José J. Plata, ... Antonio Marquez, in Comprehensive Inorganic Chemistry III (Third Edition), 2023. 3.14.2 Figure of merit, ZT

The thermoelectric figure of merit, ZT, is the most used descriptor to analyze the efficiency of a thermoelectric material. The maximum efficiency, η_max, of a TE generator as a function of ZT_ave was already defined at the beginning of the 20th century [7–9] as

(1) η_max = [(T_h − T_c)/T_h] · [√(1 + ZT_ave) − 1] / [√(1 + ZT_ave) + T_c/T_h]

where T_h and T_c are the hot- and cold-end temperatures, respectively, and T_ave is (T_h + T_c)/2. This expression is only valid if ΔT = T_h − T_c is very small or if it is assumed that Z does not present a temperature dependency, which is not true in most cases. There are different approaches to include the temperature dependency of Z [10]; however, it is usually averaged over the ZT curve as [11]

(2) ZT_ave = [1/(T_h − T_c)] ∫_{T_c}^{T_h} ZT dT

Similarly, the thermoelectric cooling efficiency, η_c, can be calculated as

(3) η_c = [T_h/(T_h − T_c)] · [√(1 + ZT_ave) − T_h/T_c] / [√(1 + ZT_ave) + 1]

For instance, if T_h = 700 K and ΔT = 400 K, a TE material with a ZT_ave of 3 presents an η_max of 25%, which is close to the efficiency of traditional combustion engines. On the other hand, if T_h = 300 K, ΔT = 20 K, and ZT_ave = 3, a thermoelectric cooler could reach a 6% efficiency. As a rule of thumb, TE materials should present ZT > 3 to be competitive against conventional refrigerators and generators.[12]

A simple question arises once a threshold value for ZT is well defined: why is it so difficult to find TE materials whose ZT > 3? To answer that question, the physical properties underlying thermoelectricity, and their interrelation, need to be explored. The TE figure of merit ZT is calculated as

(4) ZT = S²σT / (κ_e + κ_l)

where σ is the electrical conductivity, κ_e and κ_l are the electronic and lattice thermal conductivities, T is the temperature, and S is the Seebeck coefficient. Maximizing ZT is a difficult and complex task: the Wiedemann–Franz law requires κ_e to be proportional to σ, and the Pisarenko relation limits the enlargement of S and σ simultaneously.[13] The phonon-glass electron-crystal (PGEC) paradigm proposed by Slack is the ideal model to decouple the connection between κ_l and the electronic transport properties. This implies that the material should have a low κ_l, as in amorphous materials, but good electrical properties, as in a crystal.[14] Following all these interconnections, it is fair to conclude that calculating each of the properties included in the ZT definition is essential to have a clear idea of the potential performance of a compound as a TE material.

URL: https://www.sciencedirect.com/science/article/pii/B9780128231449001333

Environmental and Agricultural Applications of Atomic Spectroscopy

Michael Thompson, Michael H. Ramsey, in Encyclopedia of Spectroscopy and Spectrometry (Third Edition), 2017. Interference Effects

Another figure of merit that describes the quality of the data produced in atomic spectroscopy is "trueness", which, in the absence of analytical blunders, is synonymous with lack of interference. Interference is defined as the influence on the analytical signal relating to an analyte caused by constituents of the test materials other than the analyte (the concomitants). Broadly, atomic spectroscopy is affected by interference to a remarkably low extent. However, interference is not absent and usually has to be taken seriously. Fortunately, there are well-established analytical techniques for obviating the effects of interference that are eminently applicable to atomic spectroscopy.

Interference effects are often spectral in origin. In atomic emission and absorption they result when a concomitant gives rise either to photons close to the same wavelength as the analyte (spectral overlap) or to photons of another wavelength that inadvertently arrive at the detector (stray light). Methods for overcoming such spectral problems usually involve either a separate estimation of the background to a spectral line, or estimation of the concomitant concentration at another region of the spectrum and the calculation therefrom of an appropriate correction term. These features are built into modern instruments and their software, but the correction procedure may degrade other aspects of performance such as the detection limit. Comparable problems occur in mass spectrometry.

Nonspectral interferences are called matrix effects. In free-atom methods, matrix effects often reflect changes in the efficiency of atomization of the analyte or of excitation of the separate atoms produced in the atom cell. In FAAS and ICPAES, changes in the physical characteristics of the test solutions (e.g. surface tension, viscosity, density) may additionally affect the efficiency of the nebulizer. However, matrix effects seldom cause errors of greater than ±5% and can often be ignored in agricultural and environmental studies. In GFAAS, by contrast, the composition of the matrix is crucial in determining the atomization efficiency, and needs to be carefully controlled. Likewise, in XRF the gross composition of the pellet or bead may affect the intensity of fluorescent X-rays.

There are a number of convenient and effective ways of handling matrix effects. Where the gross composition of the test materials is effectively constant, it is sufficient to calibrate the instrument with matrix-matched calibrators. Even when test materials are also variable, they may sometimes be matched with calibrators by swamping the native matrix with a much larger quantity of a matrix modifier. This technique is commonly used in FAAS, GFAAS and XRF methods. If matrix matching is not possible, the method of analyte additions is always applicable, if somewhat laborious, to methods with linear calibration functions. In XRF a set of complex correction equations based on the major composition is used to correct for matrix effects.

URL: https://www.sciencedirect.com/science/article/pii/B9780128032244001357

Future Directions in Silicon Photonics

Yating Wan, ... John Bowers, in Semiconductors and Semimetals, 2019. 3.2.6 Characteristic temperature

Two figures of merit are primarily used to describe laser performance at elevated temperatures: the maximum operating temperature and the characteristic temperature, T_0. T_0 measures how much a laser's threshold power (P_th) or threshold current (I_th) changes with temperature, and is defined by

P_th = P_0 e^(T/T_0) and I_th = I_0 e^(T/T_0)

for an optically pumped device and an electrically injected device, respectively. While temperature-invariant operation (T_0 = ∞) in a range of 5–70 °C (Fathpour et al., 2004) and maximum operating temperatures up to 220 °C (Kageyama et al., 2011) have been demonstrated for QD lasers with "macroscale" resonators on native substrates, the high ratio of sidewall area to active-region volume deteriorates the temperature performance of microcavity lasers to some extent. Although thermal conductance and injection pumping are difficult to achieve in suspended disks, it is in this configuration that QD WGMs were first observed (Gayral et al., 1999). Among the limited research carried out on the temperature characteristics of InAs QD microdisk lasers on a native GaAs substrate, Ide et al. reported CW-pumped microdisk lasers with a T_0 of 64 K in the temperature range of 130–230 K and of 36 K above 230 K (Ide et al., 2005), while Yang et al. demonstrated a T_0 of 31 K above room temperature under pulsed excitation (Yang et al., 2007). A record-high T_0 of 105 K was reported in a 4-μm diameter microdisk epitaxially grown on Si under continuous optical pumping (Wan et al., 2016c). This was achieved by optimizing the lateral undercut for high light confinement within the disk while maintaining sufficient heat sinking via the underlying pedestal. Normalized lasing spectra of this microdisk laser from 10 to 300 K are compared in Fig. 9A, together with the corresponding L-L curves in Fig. 9B. In the spectra, narrow lines of high-Q WGMs shift to longer wavelengths as temperature increases, with a small temperature coefficient of less than 0.04 nm/K from 10 to 80 K and from 220 to 280 K (Fig. 9C). The lasing line is also determined by the temperature shrinkage of the active-region bandgap. Mode hopping toward longer wavelengths at 80 and 280 K was observed due to thermal redshifting of the gain spectrum. The overall trend follows the theoretical shrinkage of the InAs band gap (Heitz et al., 1999),

ΔE = A · T² / (T + B)

assuming A = 0.00042 eV/K² and B = 199 K.

Fig. 9. Temperature-dependent characteristics of a 4-μm diameter microdisk epitaxially grown on Si: (A) normalized lasing spectra from 10 to 300 K taken at three times the threshold (inset: SEM image of the disk); (B) L-L curves of the lasing peak from 10 to 300 K; (C) lasing wavelength versus temperature and theoretical InAs band-gap shrinkage; (D) threshold power versus temperature (Wan et al., 2016c).

In Fig. 9D, the threshold pump power increases by a factor of ~2 as temperature increases from 10 to 300 K, giving a T_0 of 105 K.
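The T_0 values quoted in this section follow directly from the exponential threshold model defined above; a minimal sketch that extracts T_0 from two threshold measurements (the sample numbers are illustrative):

```python
import math

def characteristic_temperature(T1, I_th1, T2, I_th2):
    """Solve I_th = I_0 * exp(T / T0) for T0 from two (T, I_th) measurements."""
    return (T2 - T1) / math.log(I_th2 / I_th1)
```

A large T_0 means the threshold grows slowly with temperature; a temperature-invariant threshold corresponds to T_0 = ∞.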

Ring resonators provide much improved thermal conductivity and are better suited to electrical injection than microdisks. A similar structure using a ring geometry achieved a record-high lasing temperature of 107 °C and a T_0 of 77 K under continuous optical pumping, with a 6-μm outer diameter and a 2-μm inner diameter (Kryzhanovskaya et al., 2014). To achieve electrical injection in a QD microlaser, a microdisk was the first configuration in which WGMs were observed, at a temperature of 5 K, with a complicated air-bridge structure (Zhang and Hu, 2003). However, the highest temperature to date was 300 K, achieved by burying the disk in benzocyclobutene (BCB) cladding to alleviate the thermal and fragility problems (Mao et al., 2011). By contrast, microring QD lasers show much superior temperature performance, with the highest CW lasing temperature up to 100 °C and a T_0 of 197 K (Fig. 10) (Wan et al., 2017a).

Fig. 10. Measured temperature-dependent L-I curves of an electrically injected microring laser with a 50-μm radius and a 4-μm width. Inset: threshold current versus temperature (Wan et al., 2017a).

In addition, the temperature characteristics have been greatly improved by special techniques such as tunnel injection of electrons and modulation p-doping of the active region (Bhattacharya and Mi, 2007). In tunnel injection, cold electrons are injected by tunneling directly into the QD lasing states, so the carrier distribution is not heated. This enhances T0 by minimizing carrier occupation of the wetting-layer and barrier states (Bhattacharya et al., 2003). p-modulation doping of the QDs is more widely used to reduce the temperature sensitivity. Typically, the separation between the ground state and the first excited state is up to 80 meV in the conduction band, but only around 10 meV in the valence band (Kageyama et al., 2011). At room temperature the electrons are therefore well confined in the dots, while the holes easily thermalize and escape, as the valence-band level spacing is well below kT. This thermalization can be suppressed by supplying extra holes through p-type modulation doping, leading to increased temperature stability. As a direct comparison, two microring lasers were tested at various heatsink temperatures and their L-I characteristics were analyzed in Fig. 11A and B, respectively (Wan et al., 2018a). For the laser in Fig. 11B, the GaAs barriers separating the QDs were modulation p-doped with beryllium to a hole concentration of 5 × 10^17 cm^-3; otherwise the two structures are nominally the same. In Fig. 11A, CW lasing was sustained up to 40 °C for the undoped laser, with T0 ≈ 22 K. By contrast, for the same laser structure grown on a separate wafer but with a modulation p-doped active region, the CW lasing temperature was elevated to 80 °C, with T0 ≈ 103 K near room temperature (20–40 °C) (Fig. 11B).
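The confinement argument above can be put in rough numbers. As a sketch (the calculation is illustrative, not from the source), the Boltzmann factor exp(-ΔE/kT) compares the relative thermal population of the excited state for electrons (level spacing ~80 meV) and holes (~10 meV) at room temperature:

```python
import math

K_B_EV = 8.617e-5          # Boltzmann constant, eV/K
T = 300.0                  # room temperature, K
kT = K_B_EV * T            # ~0.0259 eV (~25.9 meV)

# Level spacings quoted in the text (conduction vs. valence band)
dE_electron = 0.080        # eV, ground to first excited state, conduction band
dE_hole = 0.010           # eV, ground to first excited state, valence band

# Boltzmann factors: relative thermal occupation of the excited state
esc_e = math.exp(-dE_electron / kT)
esc_h = math.exp(-dE_hole / kT)

print(f"kT at 300 K     = {kT * 1000:.1f} meV")
print(f"electron factor = {esc_e:.3f}")
print(f"hole factor     = {esc_h:.3f}")
```

With these numbers the hole factor is roughly fifteen times the electron factor, consistent with the statement that holes thermalize easily while electrons stay confined. Real escape rates also depend on attempt frequencies and band offsets, which this occupation-factor sketch ignores.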

Fig. 11. Temperature characteristics of two QD microring lasers grown on Si with an outer ring radius of 15 μm and a ring waveguide width of 4 μm: L-I curves at varied heatsink temperature for (A) QDs with an intrinsic active region and (B) QDs with a modulation p-doped active region. (C) Threshold current versus heatsink temperature (Wan et al., 2018a).

URL: https://www.sciencedirect.com/science/article/pii/S0080878419300092
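The characteristic temperature T0 quoted throughout this section comes from fitting the empirical relation I_th(T) = I_0 exp(T/T0) to measured threshold currents: the slope of ln(I_th) versus temperature is 1/T0. A minimal sketch with hypothetical data (the threshold currents below are illustrative, not measured values from Wan et al.):

```python
import math

def fit_t0(temps_c, i_th_ma):
    """Least-squares fit of ln(I_th) = ln(I_0) + T/T0; returns T0 in K."""
    ys = [math.log(i) for i in i_th_ma]
    n = len(temps_c)
    mx = sum(temps_c) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(temps_c, ys)) / \
            sum((x - mx) ** 2 for x in temps_c)
    return 1.0 / slope          # T0 is the inverse slope

# Hypothetical threshold currents generated to follow T0 = 103 K exactly
temps = [20, 30, 40, 50, 60, 70, 80]                 # heatsink temperature, °C
i_th = [5.0 * math.exp(t / 103.0) for t in temps]    # mA

print(f"fitted T0 = {fit_t0(temps, i_th):.1f} K")    # → 103.0 K
```

A larger fitted T0 means the threshold current grows more slowly with temperature, which is why the p-doped device (T0 ≈ 103 K) is more temperature stable than the undoped one (T0 ≈ 22 K).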

Advances in Infrared Photodetectors

S.D. Gunapala, ... D.Z. Ting, in Semiconductors and Semimetals, 2011

5.1 Effect of Non-uniformity

The general figure of merit that describes the performance of a large imaging array is the noise equivalent temperature difference NEΔT: the minimum temperature difference across the target that produces a signal-to-noise ratio of unity. It is given by (Kingston, 1978; Zussman et al., 1991)

(2.24) $NE\Delta T = \dfrac{\sqrt{A\,\Delta f}}{D_B^{*}\,(dP_B/dT)}$,

where $D_B^{*}$ is the blackbody detectivity (defined by Eq. 2.22) and $dP_B/dT$ is the change with temperature of the incident integrated blackbody power in the spectral range of the detector. The integrated blackbody power $P_B$, in the spectral range from $\lambda_1$ to $\lambda_2$, can be written as

(2.25) $P_B = A \sin^2(\theta/2)\,\cos\varphi \int_{\lambda_1}^{\lambda_2} W(\lambda)\,d\lambda$,

where $\theta$, $\varphi$, and $W(\lambda)$ are the optical field of view, angle of incidence, and blackbody spectral density, respectively, and are defined by Eqs. 2.7 and 2.8 in sub-section 3.3.
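The integral in Eq. (2.25) can be evaluated numerically using Planck's law for the blackbody spectral density W(λ). A sketch in pure Python, assuming a 300 K background and an 8-12 μm band, with the geometry factor $A \sin^2(\theta/2)\cos\varphi$ set to 1:

```python
import math

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
K = 1.381e-23   # Boltzmann constant, J/K

def planck_exitance(lam, temp):
    """Blackbody spectral exitance W(lambda), W/m^2 per metre of wavelength."""
    return (2 * math.pi * H * C**2 / lam**5) / \
           (math.exp(H * C / (lam * K * temp)) - 1.0)

def band_power(lam1, lam2, temp, n=20000):
    """Trapezoidal integral of W(lambda) from lam1 to lam2, W/m^2."""
    dl = (lam2 - lam1) / n
    total = 0.5 * (planck_exitance(lam1, temp) + planck_exitance(lam2, temp))
    for i in range(1, n):
        total += planck_exitance(lam1 + i * dl, temp)
    return total * dl

T_B = 300.0
p_band = band_power(8e-6, 12e-6, T_B)      # 8-12 um band
p_total = band_power(1e-6, 1000e-6, T_B)   # effectively the full spectrum
sigma_t4 = 5.670e-8 * T_B**4               # Stefan-Boltzmann check, ~459 W/m^2

print(f"8-12 um band power: {p_band:.1f} W/m^2")
print(f"fraction of total : {p_band / sigma_t4:.2f}")
```

As a sanity check, integrating over the whole spectrum recovers the Stefan-Boltzmann total $\sigma T^4$ to within a fraction of a percent; the 8-12 μm band carries roughly a quarter of the total exitance at 300 K.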

Before discussing the array results, it is also important to understand the limitations on FPA imaging performance due to pixel non-uniformities (Levine, 1993). This point has been discussed in detail by Shepherd (1988) for the case of PtSi infrared FPAs (Mooney et al., 1989), which have low response but very high uniformity. Including spatial noise, the NEΔT of a large imaging array, as derived by Shepherd (1988), is given by

(2.26) $NE\Delta T = \dfrac{N_n}{dN_b/dT_b}$,

where $T_b$ is the background temperature and $N_n$ is the total number of noise electrons per pixel, given by

(2.27) $N_n^2 = N_t^2 + N_b^2 + u^2 N_b^2$,

where $N_t$ is the number of photoresponse-independent temporal noise electrons, $N_b$ is the number of shot-noise electrons from the background radiation, and $u$ is the residual non-uniformity after correction by the electronics. The temperature derivative of the background flux can be written to a good approximation as

(2.28) $\dfrac{dN_b}{dT_b} = \dfrac{h c\,N_b}{k\,\bar{\lambda}\,T_b^2}$,

where $\bar{\lambda} = (\lambda_1 + \lambda_2)/2$ is the average wavelength of the spectral band between $\lambda_1$ and $\lambda_2$. When temporal noise dominates, NEΔT reduces to Eq. (2.24). When residual non-uniformity dominates, Eqs. (2.26) and (2.28) reduce to

(2.29) $NE\Delta T = \dfrac{u\,\bar{\lambda}\,T_b^2}{1.44}$,

where the constant 1.44 (= hc/k) is in cm K, $\bar{\lambda}$ is in cm, and $T_b$ is in K. Thus, in spatial-noise-limited operation NEΔT ∝ u, and higher uniformity means better imaging performance.
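Eq. (2.29) can be checked numerically. A sketch in the same mixed units (λ̄ in cm, $T_b$ in K), first recovering the hc/k ≈ 1.44 cm K constant from fundamental values:

```python
import math

H = 6.626e-34    # Planck constant, J s
C = 2.998e10     # speed of light, cm/s
K = 1.381e-23    # Boltzmann constant, J/K

# The constant in Eq. (2.29) is the second radiation constant hc/k, in cm K
c2 = H * C / K
print(f"hc/k = {c2:.3f} cm K")   # ~1.439

def neat_spatial(u, lam_bar_cm, t_b):
    """Spatial-noise-limited NEdT of Eq. (2.29), in kelvin."""
    return u * lam_bar_cm * t_b**2 / c2

t_b = 300.0      # background temperature, K
lam = 10e-4      # 10 um expressed in cm

for u in (1e-3, 1e-4):
    print(f"u = {u:.0e}: NEdT = {neat_spatial(u, lam, t_b) * 1e3:.1f} mK")
```

With $u = 0.1\%$ and $u = 0.01\%$ this gives about 63 mK and 6.3 mK respectively, matching the Levine (1993) example discussed next in the text.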

Levine (1993) has shown as an example that taking $T_b = 300$ K, $\bar{\lambda} = 10\ \mu$m, and $u = 0.1\%$ results in an NEΔT of 63 mK, while an order-of-magnitude improvement in uniformity (i.e., $u = 0.01\%$) gives an NEΔT of 6.3 mK. Using the full expression Eq. (2.27), Levine (1993) has calculated NEΔT as a function of $D^*$, as shown in Fig. 2.34. It is important to note that when $D^* \geq 10^{10}$ cm√Hz/W, the performance is uniformity limited and thus essentially independent of the detectivity, i.e., $D^*$ is not the relevant figure of merit (Grave and Yariv, 1992).
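The uniformity-limited plateau can be illustrated with a toy model (an assumption for illustration, not the source's actual calculation): treat the temporal contribution to NEΔT as inversely proportional to $D^*$ and add it in quadrature to a fixed spatial floor from Eq. (2.29), so NEΔT flattens once spatial noise dominates:

```python
import math

NEAT_SPATIAL = 0.0625   # K; Eq. (2.29) with u = 1e-3, lam = 10 um, T_b = 300 K
C_TEMPORAL = 1.0e9      # K cm sqrt(Hz)/W; arbitrary scale chosen for illustration

def neat_total(d_star):
    """Toy model: temporal NEdT ~ 1/D*, added in quadrature to the spatial floor."""
    neat_t = C_TEMPORAL / d_star
    return math.sqrt(neat_t**2 + NEAT_SPATIAL**2)

for d_star in (1e9, 1e10, 1e11, 1e12):
    print(f"D* = {d_star:.0e}: NEdT = {neat_total(d_star) * 1e3:.1f} mK")
```

With these (assumed) numbers, increasing $D^*$ beyond about $10^{10}$ cm√Hz/W no longer reduces NEΔT, mirroring the behaviour shown in Fig. 2.34.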

Figure 2.34. Noise equivalent temperature difference NEΔT as a function of detectivity $D^*$. The effects of non-uniformity are included for $u = 10^{-3}$ and $10^{-4}$. Note that for $D^* > 10^{10}$ cm√Hz/W, detectivity is not the relevant figure of merit for FPAs (Levine, 1993).

URL: https://www.sciencedirect.com/science/article/pii/B9780123813374000024
