
Paper submission: How to respond to reviewers - Example 1

2013-02-12 09:16  Read (257)  Comments (0)  Category: Learning resources

We thank the reviewers for their careful reading of, and thoughtful comments on, the previous draft. We
have carefully taken their comments into consideration in preparing our revision, which has resulted in a
paper that is clearer, more compelling, and broader. The following summarizes how we responded to the
reviewers' comments.
Reviewer A
1. We have followed Reviewer A's suggestions: we frame the paper in terms of rational choice theory in
general, with citations to that literature (page 1); link updating to social learning theory (page 7); cite
more contemporary literature on deterrence (pages 3 and 13), rational choice (page 1), gender and risk
(page 24), and ethnographies on thrill and risk (page 15); and describe our impulsivity measures (page 19).
We cite an unpublished manuscript sent to us by Lochner (2005) in two footnotes (which is how he cites
us).
2. Reviewer A speculates (point 4) that our experienced certainty measure could be problematic because it
measures the ratio of all past arrests to crimes in the past year, rather than all past arrests to all past
crimes (we lack data on lifetime crimes). We are implicitly assuming that delinquent acts committed
years ago lack salience, whereas prior arrests are remembered. However, to examine the robustness of
this assumption we estimated a model in which we replace our measure with the ratio of arrests in the
past year to crimes committed in the past year, and obtained similar results (see footnote 19), although
this variable has a smaller variance, resulting in less precise estimates.
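Both versions of the experienced-certainty measure reduce to a simple ratio; the sketch below illustrates the construction (the variable names and toy values are hypothetical, not drawn from our data):

```python
def experienced_certainty(arrests, crimes):
    """Ratio of arrests to self-reported crimes; undefined when no crimes."""
    return arrests / crimes if crimes > 0 else None

# Original measure: all past arrests over crimes committed in the past year.
all_past_arrests = 3      # hypothetical respondent totals
crimes_past_year = 10
original = experienced_certainty(all_past_arrests, crimes_past_year)

# Robustness variant: arrests in the past year over crimes in the past year.
arrests_past_year = 1
variant = experienced_certainty(arrests_past_year, crimes_past_year)

print(original, variant)  # → 0.3 0.1
```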
3. With respect to heuristics, we have deemphasized this portion of the paper, given that we cannot strongly
test for heuristics, but were using it mainly to suggest alternatives to Bayesian learning. Reviewer A is
correct that our results only apply to the current perceptions we are modeling and not perceptions we
treat as exogenous. (Note that delinquent peers refers to the past year.)
4. We examined Reviewer A’s hypothesis that naïve people might not contemplate crime, and therefore
including them biases deterrent effects downward. We excluded those who had never offended and also
tried excluding those who said the most serious thing (crime) they think they could do was “nothing.”
Both analyses led to slightly attenuated estimates of rational choice, probably due to sample selection
bias.
5. Reviewer A's observations in point 7 are well taken. We included the violent versus property distinction
because utility theories often emphasize monetary crimes and because the literature has traditionally
drawn this distinction. We believe that the similarity of results across violence and theft strengthens our results,
particularly for those readers who are not as well-versed in the literature as Reviewer A. Nevertheless,
we have added a brief discussion of why our model may be invariant to pecuniary versus violent crimes
and cite the literature on this point (page 17).
6. Reviewer A's point 8 is also well taken. We have explicitly discussed the implications of our low R-squares
in the conclusions.
Reviewer B
1. Reviewer B’s first point—while we agree with it in principle—is quite provocative and challenging. The
trick is to expand on the theoretical link between utility models and criminal justice in the front end, in a
way that is sociologically informative but does not lead the reader astray, given that this is not the thesis
of the paper. We have spent a good deal of time trying to do this by mentioning the critique of
Pashukanis and Garland briefly in the front end (and we do use the suggested quote—thanks for the
suggestion), without disrupting the flow of the paper. We return to these more general issues in the
discussion, including linking our low R-squares to the general policy debates, touching on Garland
(2001) and Whitman (2003).
2. On Reviewer B’s second point, we of course agree completely that institutional and rational choice
should be viewed as complementary. Our point about using rational choice as a baseline was aimed at
those (numerous) institutional types who ignore rational choice. We were not implying that rational
choice and structural accounts are zero sum. We have reworded this discussion in ways that are
consistent with the spirit of Reviewer B’s (more eloquent) paragraph (page 35).
3. On the issue of reliability, we have reported the conventional Cronbach's alphas for our indexes, which
range from .72 to .84. The test-retest reliabilities of perceived certainty of about .25 usually refer to
measurements taken six months or a year apart, which of course confounds true change with pure
unreliability. Our theory and models explicitly assume substantial true change, so such correlations
provide only a lower bound on reliability. For our measures, the correlation for a one-year lag is about
.32. When disaggregated by age group, we find that correlations are lowest for our youngest (11-year-old)
cohort (.25), where we expect true risk to vary widely, and highest for our oldest (15-year-old)
cohort (.45), where we expect experience to structure risk perceptions and increase their stability. These
estimates are consistent with our expectation of substantial true change and moderately high reliability
for our risk estimates—e.g., a true stability of .50 and reliability of .70. This is discussed on page 23 and
footnote 13.
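For readers who want the computation, Cronbach's alpha for an index can be obtained from the item variances and the variance of the summed score. This is a generic sketch with made-up item responses, not our actual indexes:

```python
from statistics import variance

def cronbach_alpha(items):
    """items: list of equal-length score lists, one list per index item.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(variance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical 3-item index scored by 5 respondents.
items = [
    [2, 4, 3, 5, 4],
    [3, 4, 3, 4, 5],
    [2, 5, 4, 5, 4],
]
print(round(cronbach_alpha(items), 2))  # → 0.87
```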
4. On the remaining points, we’ve added a very brief footnote describing vignettes (footnote 20), deleted
the cite to Levitt’s refuted paper, and noted how we computed log of zero (footnote 10). We’ve moved
the mathematical presentation of the random effects negative binomial model to Appendix C (which
perhaps can go on the web page). We agree with the reviewer that a zero inflated negative binomial
model is very attractive (and can be fit for cross-sectional data). But for the random effects panel
version, the model is essentially trying to disentangle three parameterizations of unobserved
heterogeneity—the delta term from the negative binomial, the random individual effect that follows a
beta distribution, and then the zero inflated portion of the model (not to mention the observed sources,
stable covariates and lagged endogenous predictor). It simply tries to squeeze out too much information
from a restricted sample space. We corresponded with Bill Greene on this point (we were using
LIMDEP), who was not surprised at the difficulty of disentangling all these effects. Since we are not
interested in disentangling the unique effects of each source of heterogeneity and we are very interested
in the lagged effects (and hence random effects model, which helps overcome bias in lagged estimates
due to serial correlation), we dropped the zero inflated model.
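The identification problem described above can be sketched in one line: in a zero-inflated random-effects negative binomial, the probability of a zero count mixes all three parameterizations of unobserved heterogeneity (the notation here is illustrative, not the paper's):

P(y_{it} = 0 \mid x_{it}) = \pi_{it} + (1 - \pi_{it}) \, P_{\mathrm{NB}}(y_{it} = 0 \mid x_{it}, \delta, u_i)

where \pi_{it} is the zero-inflation probability, \delta the negative binomial overdispersion term, and u_i the beta-distributed random individual effect. Each component raises the predicted share of zeros, which is why they are only weakly identified in a restricted sample space.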
Reviewer C
1. Reviewer C notes that overdispersion can be caused by unobserved heterogeneity, and suggests a
footnote speculating what that might include. Statisticians tell us that overdispersion can result from a
variety of sources: omitted variables (as Reviewer C notes), for which we suggest genetics, early
socialization, or social ties as prime candidates, but also clustering in the data (e.g., neighborhoods or
families), positive contagion, and rate dependence. We discuss this issue in footnote 16.
2. Reviewer C suggests that we are overstepping our results by showing how they may be of interest to
those interested in social capital and crime, which detracts from our individual-level contributions. We
have accordingly dropped this discussion.
Reviewer D
1. Our analyses differ from prior studies by using very strong measures (with appropriate scales), a large
sample of youth from high risk neighborhoods, and appropriate statistical methods. Previous research
has typically used either weaker measures or smaller low risk samples (such as high school or college
students). We have noted this on page 34.
2. Reviewer D suggests providing descriptive information on the distribution of perceived risk and its joint
distribution with arrest (ratio of arrest to self-reports, our key variable). We have provided two sets of
graphs, which summarize this information. First, a histogram of perceived certainty over time for theft
and violence (Figure 3) reveals the slight decline in perceived risk over time, the censoring at both ends
of the scale (suggesting a tobit model), and some potential heteroskedasticity. (We note that much of
this decline is due to age—the time 2-3 difference is not significant controlling for age.) Second, a
histogram of predicted perceived risk by experienced certainty (ratio of arrests to crimes) derived from
our full models (Figure 4) shows the monotonic relationship between experienced certainty and
perceived risk, the “shell of illusion” for naïve offenders (zero arrests), and the modest effect of control
variables.
3. Reviewer D suggests discussing the precise metric relationship between perceived risk and crime, and
interpreting it in percentage terms. We are unable to do this in our reported model because, consistent with
an expected utility model, we used a weighted average of perceived risk of cost (reward) weighted by the
subjective value of the cost (reward). We agree that this is an important point—especially for policy
discussions—and therefore, reestimated the model disaggregating utility (value) from certainty. We
discuss this result in the conclusions, when we discuss the magnitude of the deterrent effect. This is
done, not by using the raw coefficients (D’s query in point 4), but rather the exponentiated coefficient:
[exp(β) − 1] × 100. (Note that this percentage change is invariant in our sample, as our model contains
no significant interaction effects.)
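The percentage-change calculation above can be computed directly from a count-model coefficient; the coefficient value below is a made-up illustration, not one of our estimates:

```python
import math

def pct_change(beta):
    """Percent change in the expected count per one-unit increase
    in the predictor: [exp(beta) - 1] * 100."""
    return (math.exp(beta) - 1) * 100

# Hypothetical coefficient on perceived certainty.
beta = -0.25
print(round(pct_change(beta), 1))  # → -22.1
```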
4. It would be terrific to examine regional differences in arrest clearance rates and perceived risk; however,
because the neighborhoods are all clustered in the same lower class area of the city, there simply isn’t
sufficient variance in such measures across neighborhoods. A different sampling design would be
needed for that question.
5. We estimated a model in which we included dummies for being out of school and unemployed, plus their
interaction (out of school × unemployed), but the interaction was nonsignificant.
6. As a test of robustness, we estimated a fixed-effects model of perceived risk and found similar overall
results; we report the random effects model so we can estimate effects of stable covariates and gain
statistical power. In our negative binomial models, a fixed effects model loses too much
statistical power to obtain stable estimates with our data.
7. We now report the serial correlation of perceived risk (page 23), and in Figure 3 we show the univariate
change in risk over time. Here, we can see the drop in risk over time (Reviewer D’s point 9). However,
when we control for age, this effect is significant between waves 1 and 2, but not between waves 2 and 3 (page
23).
8. On the Bayesian learning versus Kahneman and Tversky, we have followed Reviewer D’s suggestion
and deemphasized this comparison. We are using heuristics only as an illustration of one theoretical
justification of non-Bayesian learning, rather than as a formal test of the operation of heuristics.
Therefore, we pare back this material, take it out of explicit hypotheses, and note on page 34 that we
cannot rule out heuristics in our estimates of risk formation; we can only rule out the extreme case in
which heuristics dominate so much that no updating occurs. We’ve left the ideas in to provide a
theoretical rationale for not finding updating.

 
