User talk:Neow
Chart on Iraq casualties
In hopes of curbing ongoing reverts, some clarification is probably in order. You are attempting to explain a particular idea with relation to one source and not doing so with the others. You refer to this as something that "changes what's being counted", which is a confused way of putting things, but which you say is only true for IBC. That is false.
- Thanks for these comments. This helps me understand where you're coming from. Sorry for suggesting before that you were biased.
- I'll try to explain below what I'm trying to do. Neow (talk) 00:27, 18 March 2009 (UTC)
The drastic difference between the first two listed figures shows that something about the methods of one or both "changed what they were counting" from what is described to something else after whatever distortion took place. Either the lower source is not covering all of the deaths described due to something in its methods, or the higher source is covering something else above the deaths described due to something in its methods, in which case the descriptions there are "false" in the convoluted meaning you're trying to apply to IBC but not the others.
- The differences in the first two counts have nothing to do with what I'm talking about. Our role as wiki editors is simply to summarize the facts about who says what about the death tolls. Any attempt to guess or analyze why one number is different from another is not relevant. Our role is just to pass on and summarize what each source actually says.
- Anyway, now that I see why you're focusing on the difference in the first two counts, I realize you don't seem to understand what I'm trying to do. I'll try to explain that more clearly below. Neow (talk) 00:27, 18 March 2009 (UTC)
Moreover, all of the sources rely on a reporting method. The surveys rely on reporting by household members and interviewers. There can be error and inaccuracy in this reporting just as with the reporting IBC uses. One example, of many, is that this type of reporting will "exclude" deaths not reported by the households. Thus when you made a chart saying "none" under an arbitrary column for "excluded" deaths, this was incorrect. It should have read "deaths not reported by household members during interviews" or some such. Your column for IBC was also wrong, because IBC also includes deaths not reported by the media (read its page). But there is no reason to include a column for this type of potential measurement error and not every other type.
- I know all of this. I admit the way I tried adding a column to the table was a clumsy and imprecise way of expressing the issue I'm trying to address, and that saying IBC excludes deaths not reported in the media failed to indicate that they do include deaths from official reports (records) that weren't reported in the media. Neow (talk) 00:27, 18 March 2009 (UTC)
Why no column for the inclusion of something that doesn't belong, beyond what is described? If a news article (or a household, or interviewer) reports something that did not take place, or was not that type of death being described here, then this would cause the figures to be covering things beyond what's described, which should equally make the chart "false" in your confused meaning. And this is before we get into anything about what might go on in statistical extrapolations.
There is nothing false about the chart as it stands. You are trying to start getting into issues of how figures are derived and the potential errors inherent in these, and you are doing it in a selective and arbitrary way. These issues are unnecessary for, and too complicated to handle properly in, a set of bullet points. They are explained on the pages of each source and in the text of the page below the chart. Stradov (talk) 10:05, 17 March 2009 (UTC)
- Ok, let me explain the problem I'm trying to address. I'm not trying to get into how figures are derived and potential errors inherent in the methods. Obviously the details on that (whether the estimates are done by asking households if their family members have died, or by looking at the number of shoes sold in Iraq each year and extrapolating to estimate how many fewer people there are buying them) are going to vary for each source and would be too cumbersome to include in a summary table.
- My point is, there's a fundamental difference in the goal of a survey that tries to estimate (by whatever method the survey designers use) the total number of deaths, versus the IBC that simply counts reports of individual people who have died. The IBC basically just says "we have seen reports that at least N individuals died, and although we know there are more people whose deaths weren't reported/recorded, we can't say anything about whether the deaths we could count represent 95% or 5% of all the people who died".
- Remember that our role as wiki editors is only to summarize what each source claims, not to analyze the validity or accuracy of their claims. So the difference here is that in the case of the 3 surveys, they all claim that their number represents that survey's own best estimate of the actual total number of deaths. By contrast, the IBC doesn't claim any such thing. The IBC only provides something like a lower bound on the number of deaths, based on the reports and records they've counted.
- Thus, the 3 survey numbers can be directly compared and contrasted with each other (after adjusting for differences like what time period and what types of deaths they cover) in a way that the IBC cannot. You make this point well yourself when you note that it's not possible that the first two survey numbers could both be correct, because they're both estimating the same thing (violent deaths from March 2003 to June 2006). By contrast, it is possible that any one of the surveys--even the 3rd estimate of 1,033,000 deaths through Aug 2007--could be precisely accurate and at the same time the IBC count of 91,059-99,431 reported deaths (or whatever the IBC count was in Aug 2007) could also be a perfectly complete and accurate count of all known death reports/records, because it could simply be that the other 900K+ deaths were never witnessed by the media (or by the officials who compiled the records that IBC also counts).
- The point is, the 3 surveys are one kind of thing, and the IBC is a totally different kind of thing, so putting all 4 numbers in the same column of a table without indicating that fundamental difference is profoundly misleading.
- That is what I meant when I said that it's false to say that the IBC is a count of violent civilian deaths. It's not! Rather, it is a count of violent-civilian-death reports and records. The surveys, by contrast, are not counts at all. They are statistical estimates. But they are estimates of violent deaths, not (obviously) estimates of how many violent-death reports or records have been counted. This difference is essential to understanding the numbers. Without any indication of this fundamental difference, the table is at least terribly misleading if not outright false.
- So, first a question: do you understand my point that the 3 surveys are one mutually comparable kind of thing, while the IBC is another kind of thing which could potentially be accurate, for what it is, no matter how much larger the actual number of deaths (which the surveys are all somebody's attempt to estimate) may be?
- Assuming you do, how about a challenge: can you come up with an acceptable way to represent this distinction in the table so that it does not make it appear that the survey estimates can be compared directly to the IBC count?
- Having thought this through some more, my current proposal would be something like this:
Source | What was estimated/counted | March 2003 to...
Iraq Family Health Survey | Estimate: 151,000 violent deaths. | June 2006
Lancet survey | Estimate: 601,027 violent deaths out of 654,965 excess deaths. | June 2006
Opinion Research Business survey | Estimate: 1,033,000 violent deaths as a result of the conflict. | August 2007
Iraq Body Count | Count: 91,059 – 99,431 reports/records of violent civilian deaths as a result of the conflict. | February 2009
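To make the count-versus-estimate distinction above concrete, here is a minimal sketch in Python. Every number in it is invented for illustration (the assumed true total, the reported subset, and the idea that a survey estimate lands exactly on the total); none of these figures come from IBC, the Lancet survey, or ORB.

    # Hypothetical illustration only: every number here is invented, not taken from
    # IBC, the Lancet survey, or ORB.

    true_total_deaths = 1_000_000      # the (unknowable) actual total, assumed for illustration
    reported_deaths = 100_000          # assume only these ever appeared in media or official records

    # A passive count (IBC-style) can, at best, completely tally the reported subset.
    ibc_style_count = reported_deaths

    # A survey (Lancet/ORB-style) tries to estimate the whole total, reported or not.
    survey_style_estimate = 1_000_000  # suppose this estimate happens to land exactly on the total

    # Both figures can be simultaneously accurate, because they measure different things.
    assert ibc_style_count == reported_deaths          # a complete count of its subset
    assert survey_style_estimate == true_total_deaths  # a correct estimate of the total

The only point of the sketch is that a complete count of the reported subset and an accurate estimate of the total can coexist without contradiction.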
I think I have understood what you're trying to do. I disagree that the chart as it stands is misleading or that what you are trying to explain is necessary or appropriate for a chart of bullet points like this, where other issues (yours being only one) are described and explained in the main text below and on the pages of each source. You seem to be confused on a number of things. Having read some of the IBC comments on the other studies, I can say you are incorrect that they don't say anything about whether their count could be 95% or 5%. They basically say it's absurd that it could be as low as five percent and is most likely more than half. But I don't think that's relevant here anyway. I think your main confusion is here:
"That is what I meant when I said that it's false to say that the IBC is a count of violent civilian deaths. It's not! Rather, it is a count of violent-civilian-death reports and records. The surveys, by contrast, are not counts at all. They are statistical estimates. But they are estimates of violent deaths, not (obviously) estimates of how many violent-death reports or records have been counted."
Your reasoning here is very confused. How would one count violent civilian deaths in a way that you would admit to being a count of that (and be sure to avoid "reports" and "records"!)? If your first sentence is reasonable, which it is not, then we cannot say the surveys are estimates of violent deaths. They are estimates of the number of violent deaths their respective survey teams would report if they interviewed the number of households they estimate to be in Iraq. Since you have a double standard here, I think you'll see right away the convoluted silliness of putting things this way in this case. It is equally silly to put it that way for IBC and say it is not a count of violent civilian deaths. It is.
I certainly realize that there is a difference in what IBC is doing from the others. The first three are identified as surveys, the fourth a count. In this context, that adequately identifies the difference you are concerned with already. Disparities in these numbers are evident across all the sources, not just the IBC one. There are many "possible" explanations for these. That you can identify one "possible" explanation, among many, for the disparity in the case of IBC that is not one of the many possible explanations for the disparity between the others, is no basis to insist that that one has to be singled out and represented or else the chart is "profoundly misleading". It is still just one "possibility" among many.
"can you come up with an acceptable way to represent this distinction in the table so that it does not make it appear that the survey estimates can be compared directly to the IBC count?"
No. It is too tangential for a chart of this type and there is no reason this distinction should be singled out for representation in the chart while a host of others are not. Each listed source can be directly compared or not. If you directly compare the first two numbers, you immediately hit a brick wall until you dig deeper than the chart and start analyzing things about the sources. Likewise, if you directly compare any of the others. The chart does not give you enough information to do meaningful comparisons with any of them. It just lists some varying figures by some of the most notable sources. There is nothing at all misleading about it. More detailed explanations are for the text below and on the pages. Stradov (talk) 03:34, 18 March 2009 (UTC)
One last point: "91,059 – 99,431 reports/records of violent civilian deaths as a result of the conflict." Now that is actually false. IBC does not give the number of reports/records of violent civilian deaths. There could be 100 reports or records for 1 death, and the IBC would count 1, because they're counting deaths, not reports or records. Stradov (talk) 04:55, 18 March 2009 (UTC)
- Apparently I still wasn't clear enough. You wrote:
- "How would one count violent-civilian-deaths in a way that you would admit to being a count of that (and be sure to avoid "reports" and "records"!)."
- No, that's not what I meant. I get your point now that the IBC is actually counting reported deaths, not death reports--as you said, obviously, if there are 10 reports of the same person's death, they only count one person as being reported dead--but the fundamental difference between a count of a subset of the total deaths (namely, in this case, those that were reported/recorded) and an estimate of actual total deaths remains. I was using "violent-death reports" as a shorthand to mean "violent deaths that were reported", obviously not to mean that if the same death was reported more than once that each report would be counted separately.
- So to be ultra-precise, I'd have to re-word my paragraph above to read:
- That is what I meant when I said that it's false to say that the IBC is a count of violent civilian deaths. It's not! Rather, it is a count of the subset of violent civilian deaths that have been individually reported or recorded. The surveys, by contrast, are not counts of individual deaths at all. None of the surveys counted the number of deaths they report. Rather, they are statistical estimates. But they are estimates of the actual total number of violent deaths, not (obviously) estimates of how many individual deaths have been reported or recorded.
- The point is, the surveys are not estimating how many deaths have been reported or recorded, so they aren't estimates of the number that the IBC would report, so it doesn't even make sense to compare them to the IBC numbers (for example, to say "the IBC count and the ORB survey estimate cannot both be right" would simply be a misunderstanding of what they are), whereas one can at least compare/contrast the surveys with each other in terms of their methods, what they tried to count, etc.
- Putting one totally different item in a column with 3 other items that are comparable is what is so misleading. But you wrote "there is no reason this distinction should be singled out for representation in the chart while a host of others are not." Then here's a challenge: can you identify a single other distinction that would mean it doesn't even make sense to compare the surveys to each other in the same way it obviously doesn't make sense to compare the IBC with any of the surveys? (You obviously can't do that with the first two surveys, at least, because you already compared them yourself when you said they couldn't both be accurate.) Neow (talk) 01:19, 19 March 2009 (UTC)
"So to be ultra-precise, I'd have to re-word my paragraph above to read: That is what I meant when I said that it's false to say that the IBC is a count of violent civilian deaths. It's not! Rather, it is a count of the subset of violent civilian deaths that have been individually reported or recorded."
This is still just very confused and I think will continue to be. The problem is that your premise is wrong so it doesn't really matter how you juggle the words around. If you're trying to count all the change you have around your house, but you miss some under the couch or whatever, does that make it "false" to say you're counting change? Of course not. By your convoluted reasoning, it does. If there's any reason why a count might not be able to count every one of the things it's counting, then it's not counting what it's counting. It's unlikely that any amount of logomachist trickery will make this false premise seem correct.
"But they are estimates of the actual total number of violent deaths"
Not if we apply the kind of thinking you do above. They are then estimates of the number of violent deaths their survey team would report if they interviewed the number of households they estimate to be in Iraq. You can then say that this estimate is of either a subset or superset of the actual total of violent deaths. Otherwise you're being profoundly misleading.
""to say the IBC count and the ORB survey estimate cannot both be right" would simply be a misunderstanding of what they are"
If someone says this, they almost certainly mean "right" in the sense of "they can't both be close to the true number". Perhaps a crudely put statement, but probably not a misunderstanding.
After all, if IBC is "right" in this sense, then ORB's estimate of how many deaths their survey team would report if they interviewed the number of estimated households could be exactly right too. It could simply be that the idea that an estimate of that should come to the same figure as an estimate of the actual total number of deaths is just a misunderstanding of what the estimate is.
Is that clear, or should I re-word?
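As a side note, the "what the survey team would report if they interviewed every household" framing can be pictured with a deliberately simplified sketch. This is not the methodology of any of the actual surveys, which used more involved sampling designs and reported uncertainty ranges; every figure below is an invented assumption used only to show how a sample gets scaled up.

    # Deliberately simplified, hypothetical sketch of how a household survey scales a
    # sample up to a national estimate. Every figure below is invented; the real surveys
    # used more involved sampling designs and reported uncertainty ranges.

    deaths_reported_in_sampled_households = 300
    households_sampled = 2_000
    estimated_households_in_iraq = 4_000_000   # hypothetical nationwide figure

    deaths_per_household = deaths_reported_in_sampled_households / households_sampled
    estimated_total_deaths = deaths_per_household * estimated_households_in_iraq

    print(round(estimated_total_deaths))  # 600000: an estimate of all deaths, reported in the media or not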
"Putting one totally different item in a column with 3 other items that are comparable is what is so misleading."
I guess we could put 2 more things you consider to also be totally different items in to balance that out, but this seems pointless to me as I don't think it's misleading as it stands.
"Then here's a challenge: can you identify a single other distinction that would mean it doesn't even make sense to compare the surveys to each other in the same way it obviously doesn't make sense to compare the IBC with any of the surveys?"
You like these challenges. But this still has a premise problem. I don't agree that it obviously doesn't make sense to compare the IBC with any of these. Comparisons can be sensible or not, depending on what you're trying to accomplish by the comparisons. If the purpose of your comparisons is to explain differences in the figures, then you have a host of considerations in all cases. In the case of comparing one of the others to IBC, one of these considerations would be whether limitations in the IBC counting approach explain all or some of the difference between the figures. That becomes one of the possibilities in that case. If comparisons made sense to do before, with a host of other possibilities, I don't see why the existence of this one now means the exercise does not make sense, let alone obviously. Stradov (talk) 14:16, 19 March 2009 (UTC)
- Wow, you seem quite committed to using the astonishingly convoluted logic on display above to justify your insistence on keeping these items in the table in a form which, I believe, anyone with a neutral point of view (who understood the fact that the surveys are estimates of a total while the IBC is a count of a particular subset of the total) would agree is misleading. Given that I have no way of proving that, well, I give up. I hope you've enjoyed this exercise in ensuring that people who don't have time to study these issues in depth get a skewed sense of the truth--although of course you'd claim you don't believe that's what you're doing. (Here's another challenge if you're still listening: show this page to 10 random people who aren't your friends, and see if any of them support your position.) Neow (talk) 22:55, 19 March 2009 (UTC)
Hi,
You appear to be eligible to vote in the current Arbitration Committee election. The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to enact binding solutions for disputes between editors, primarily related to serious behavioural issues that the community has been unable to resolve. This includes the ability to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail. If you wish to participate, you are welcome to review the candidates' statements and submit your choices on the voting page. For the Election committee, MediaWiki message delivery (talk) 22:13, 30 November 2015 (UTC)
ArbCom Elections 2016: Voting now open!
Hello, Neow. Voting in the 2016 Arbitration Committee elections is open from Monday, 00:00, 21 November through Sunday, 23:59, 4 December to all unblocked users who have registered an account before Wednesday, 00:00, 28 October 2016 and have made at least 150 mainspace edits before Sunday, 00:00, 1 November 2016.
The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.
If you wish to participate in the 2016 election, please review the candidates' statements and submit your choices on the voting page. Mdann52 (talk) 22:08, 21 November 2016 (UTC)
ArbCom 2017 election voter message
Hello, Neow. Voting in the 2017 Arbitration Committee elections is now open until 23.59 on Sunday, 10 December. All users who registered an account before Saturday, 28 October 2017, made at least 150 mainspace edits before Wednesday, 1 November 2017 and are not currently blocked are eligible to vote. Users with alternate accounts may only vote once.
The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.
If you wish to participate in the 2017 election, please review the candidates and submit your choices on the voting page. MediaWiki message delivery (talk) 18:42, 3 December 2017 (UTC)