> Republicans vs. Math, Nate Silver's vindication
Raptavio
post Nov 9 2012, 03:23 PM
Post #1


Advanced Senior Contributor

Group: Members
Posts: 3,515
Member No.: 10,458
Joined: April-27-09

From: Rosemount, MN
Gender: Male
Politics: Very Liberal
Party affiliation: Democrat



This topic by akaCG was emblematic of a trend on the right side of the political sphere in the 2012 election cycle: dismissing the polls with accusations of party-ID "skew."

It was a fundamentally flawed basis for attacking a poll's accuracy; party ID is fluid. For swing voters it changes as often as candidate preference, sometimes even more often. Polls that are randomly sampled are not, by definition, "oversampling" or "undersampling" anyone. That's what a random sample is, and the party-ID break will naturally fall within the margin of error 95% of the time. Weighting polls to previous measures of party ID introduces a real skew (as testament to this, Rasmussen, which does weight its polls, again landed near the bottom of the accuracy list).
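To illustrate the random-sample point, here is a quick simulation of my own (the numbers are hypothetical, not from any real poll): a population whose "true" party-ID break is D+6, polled repeatedly with simple random samples, to see how often the observed break lands within the poll's 95% margin of error.

```python
import random

# Hypothetical population: 53% D vs. 47% R among two-party identifiers (D+6).
TRUE_DEM_SHARE = 0.53
N = 1000                               # respondents per poll
MOE = 1.96 * (0.25 / N) ** 0.5         # 95% margin of error, about 3.1 points

random.seed(42)
trials = 2000
hits = 0
for _ in range(trials):
    dems = sum(random.random() < TRUE_DEM_SHARE for _ in range(N))
    if abs(dems / N - TRUE_DEM_SHARE) <= MOE:
        hits += 1

# The observed break falls inside the margin of error close to 95% of the time.
print(f"share of polls within the MoE: {hits / trials:.1%}")
```

No weighting to a remembered party-ID number is needed for the sample to behave; the "skewed"-looking D+8 or D+10 polls are exactly what random variation around a D+6 electorate produces.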

But many "serious" numbers people on the Right still held their own pictures of nationwide party ID in mind and "adjusted" poll results accordingly. This insistence upon "unskewing" the polls reached its logical conclusion with http://www.unskewedpolls.com, whose election predictions were hilariously optimistic for Republicans.

Meanwhile, poll cruncher Nate Silver based his election models on poll data and math, and hit the bullseye with his forecasts: he correctly predicted every state in the Presidential race and missed only one Senate race. But Silver was derided by conservatives everywhere, and to a lesser extent by the Beltway media in general, as merely having been "lucky" in 2008, with no fact-based refutation of his models whatsoever. It was simply taken as an article of faith that Nate Silver's methodology was flawed because liberal bias freedom Obama sucks.

This particular exchange between myself and akaCG was the genesis of this topic:

QUOTE(Raptavio)
With all but two of the major pollsters (Rasmussen and Voter/Consumer Research) doing their polling without any party ID weighting, this means the party ID resulting from their polling is, essentially, a survey of party ID of the moment. To say "But the actual party ID according to THIS survey taken last month is different, so you oversampled Dems/Reps" is the logical equivalent to saying "But THIS survey shows a different percentage of people voting for Obama vs. Romney, so you oversampled Obama/Romney voters". In other words, it makes no logical sense.


QUOTE(akaCG)
1.
I very much look forward to the battery of late October (the time when pollsters, as well as their clients, need to worry much more about their accuracy rankings than anything else; after all, pollsters' accuracy rankings are based on how close their last-before-the-election poll was, not on how close their poll(s) taken 45-30 days before the election was/were) polls that ... somehow/mysteriously ... will have gone from regularly featuring D +8, +9, +10, + 11, even +14 samplings to regularly featuring D +5, +4, +3, +2 and such samplings.

2.
I also very much look forward to the battery of late October (... see above ...) polls that ... somehow/mysteriously ... will have gone from sporadically featuring 54% women versus 46% men, 16/17% more 18-49 year olds, etc. than there were as of the 2010 Census (either national or state-level) to regularly featuring samplings thereof that actually have some resemblance to said 2010 Census stats.


Now that the vote is complete, we know the actual exit polls:
53% women/47% men voted.
46% of voters were 18-44 years old vs. 39% overall population in the 2010 census.
The party ID break was D+6.

In short, nationwide, the election results mirrored the likely voter models that were being disputed, not only in that exchange, but in the right-wing media as a whole.

The conservative media ignored the polls in favor of what sounded good, and dismissed Nate Silver's math without any real explanation. David Brooks said he was in "silly land." Jonah Goldberg derided him as running a "numbers racket" and compared his models to the poll-ignoring economic ones from the University of Colorado as if they were equivalent. UnskewedPolls.com's own Dean Chambers called his work "bizarre" and "voodoo" and dismissed him as "far-left", "thin" and "effeminate". None of these people found a flaw in Silver's math or in any fact-based aspect of his reasoning, yet their criticisms became an article of faith; all the FOX talking heads were confident of a Romney win, calling Silver's work "scientific gobbledygook".

Peggy Noonan predicted a Romney win based on "the vibrations", the size of some Romney crowds, and the number of Romney signs she saw. George Will predicted a 321-EV Romney win, with Minnesota going to Romney because the marriage amendment would "bring out the evangelicals." Fred Barnes predicted a Romney win based on Gallup's turnout predictions alone, paying no mind to the fact that Gallup was a consistent outlier. (Statistics says never to trust the outliers.) Emmett Tyrrell pre-gloated about a Romney win based not on the polls but on the first debate performance.

I'll spare additional links, but other mathless predictions included Michael Barone's 315-223 Romney victory and the ever-wrong Dick Morris' 363-175 Romney landslide. None of these predictions was based on anything other than an outlier poll or a purely subjective measure.

This came to a head on Election Night when we watched Karl Rove, respected within the right-wing media as a "numbers guy", melt down in real time as he refused to accept the truth that was happening right in front of him.

During the meltdown, FOX's Megyn Kelly questioned whether Karl Rove's calculations were just "Math You Do As A Republican To Make Yourself Feel Better."

But math, like science, has no agenda. It is what it is. It is not influenced by partisan bias, subjective opinion, or wishful thinking. And nobody found any flaw in Nate Silver's math. They chose to deny it for purely subjective reasons. And this was the result.

Rachel Maddow gave one of her best performances Wednesday night on this very topic. A money quote:

QUOTE
[I]f the Republican party and the conservative movement and the conservative media is stuck in a vacuum-sealed, door-locked spin cycle of telling each other what makes them feel good, and denying the factual lived truth of the world, then we are all deprived as a nation of the constructive debate between competing feasible ideas about real problems. Last night the Republicans got shellacked. And they had no idea it was coming. And we saw them in real time, in real, humiliating time, not believe it even as it was happening to them. And unless they are going to secede, they are going to have to pop the factual bubble they have been so happy living inside — if they do not want to get shellacked again. And that will be a painful process for them, I’m sure, but it will be good for the whole country, left, right, and center.


Questions for debate:

Is Nate Silver just lucky, or is his record of accuracy based upon solid mathematical, statistical and scientific principle?

Did the right-wing media, pundits and party faithful not see Obama's victory coming due to their "information bubble"?

Do these failures to predict/accept reality cause secondary tactical or strategic blunders among the GOP campaigns?

Should the right-wing media change their tactics to reflect rather than ignore reality, or is there a psychological benefit to their "I don't care what the facts are, we're going to WIN!" tactic?
Replies (1 - 19)
amf
post Nov 9 2012, 04:41 PM
Post #2


Millennium Mark

Group: Members
Posts: 1,372
Member No.: 1,540
Joined: October-23-03

From: Atlanta, GA
Gender: Male
Politics: Moderate
Party affiliation: Independent



Is Nate Silver just lucky, or is his record of accuracy based upon solid mathematical, statistical and scientific principle?

Not lucky. Smart. Aggregators of the polls -- including Nate Silver and TalkingPointsMemo -- did better than most pollsters, simply because they aggregate polls and, in Nate's case, weight them appropriately based on the past performance of each pollster's "likely voter" model. A sample size of 1000 gets you 95% confidence with a margin of error of about 3.1%. Aggregate five of those and the margin drops to about 1.4%.
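The margin-of-error arithmetic can be checked directly; this is just the standard formula for a proportion from a simple random sample (at the worst case p = 0.5), nothing specific to any pollster:

```python
import math

def moe(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"one poll of 1,000:      +/- {moe(1000):.1%}")   # about 3.1%
print(f"five such polls pooled: +/- {moe(5000):.1%}")   # about 1.4%
```

Because the margin shrinks with the square root of the sample size, pooling five polls roughly halves the error rather than dividing it by five.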

Did the right-wing media, pundits and party faithful not see Obama's victory coming due to their "information bubble"?

Yes. Absolutely. You tell yourself the same things over and over, and everyone you know says the same things; suddenly that's your reality... one not shared by anyone outside the people you meet. Or maybe the pundits know which side of the bread the butter is on, and they didn't want to rock the boat and get vilified by the crazy fringe they were playing to.

Do these failures to predict/accept reality cause secondary tactical or strategic blunders among the GOP campaigns?

Romney visiting Pennsylvania and running ads there a few days before election day? Blunder.

However, I don't see any of these as being the primary cause of their loss. Just screwed up their travel calendar.

The big reason is that no one questioned the crazy ideas the fringe was pushing them to adopt as part of their campaign messaging.

Should the right-wing media change their tactics to reflect rather than ignore reality, or is there a psychological benefit to their "I don't care what the facts are, we're going to WIN!" tactic?

The media is doing what it does to sell tampons and gold coins and cars. Conflict is good. Us vs. them keeps people watching. Facts that make "us" look bad are bad for business.

The problem isn't the media, it's the messages. But not accepting the polls for what they were, that's the media just playing to their crowd.
Eeyore
post Nov 10 2012, 12:48 AM
Post #3


Thaaaaanks for noticin' me

Sponsor
February 2003

Group: Moderators
Posts: 2,483
Member No.: 365
Joined: December-28-02

From: Nashville
Gender: Male
Politics: Liberal
Party affiliation: Democrat



Is Nate Silver just lucky, or is his record of accuracy based upon solid mathematical, statistical and scientific principle?


No, aggregating existing polls is a solid system. It does rile up some of the polling companies, though. Polls are not always right, but they are based on mathematics. I felt like we at America's Debate were sucked into this misinformation this year. It made me second-guess myself, because I think of our community as a bastion against misinformation.

Did the right-wing media, pundits and party faithful not see Obama's victory coming due to their "information bubble"?

Yes. These are faith-based organizations, grounded in the faith that they are right and that they represent what they see as best for the country. Number-crunching is for pointy-headed intellectuals.

Do these failures to predict/accept reality cause secondary tactical or strategic blunders among the GOP campaigns?

Yes, in this case. But see below.

Should the right-wing media change their tactics to reflect rather than ignore reality, or is there a psychological benefit to their "I don't care what the facts are, we're going to WIN!" tactic?

The right-wing media should change its tactics and become better at presenting issues under the ethical standards of journalism. This problem is not confined to right-wing media, but it is far worse at Fox than at other news networks or the main network news broadcasts.

I am not sure that Republican leadership should change its tactics. Leadership is often best characterized as being out ahead of public opinion. Where would Churchill have been if he had chased after public opinion polls? Sure, he was as wrong about retaining the Indian empire as he was right about standing in the face of Nazi aggression. But his stance in the face of Nazi aggression is one of the finest acts of democratic leadership, period. No, if you think you are right, stand by your values and speak clearly in the public square. That is a type of leadership. It is a strength of the Republican Party when it has been at its finest (one that has really vexed me in its effects, which I think have been to the detriment of our nation since 1980). I would say this is the strongest lasting legacy of the party of Lincoln. (I am more supportive of Lincoln's leadership than Reagan's or Bush Jr.'s.)
Raptavio
post Nov 11 2012, 03:09 PM
Post #4


Advanced Senior Contributor

Group: Members
Posts: 3,515
Member No.: 10,458
Joined: April-27-09

From: Rosemount, MN
Gender: Male
Politics: Very Liberal
Party affiliation: Democrat



Is Nate Silver just lucky, or is his record of accuracy based upon solid mathematical, statistical and scientific principle?

Statistics is an area where I have some expertise. Nate Silver is one of the most brilliant statisticians in the field. He is dead-on accurate because of this.
I think the Beltway media have woken up to this fact. As testament, I predict that in the coming cycles you will see the emergence of several Silver wannabes trying the same number-crunching game.
Some of these will be "doing it wrong," weighting their results in favor of their preferred candidate. The people who do this will be exclusively on the Right, and will attempt to create a new "information bubble."
UnskewedPolls.com was the first, and the most ham-handed, of these efforts. More sophisticated, plausible-seeming ones will follow.

Did the right-wing media, pundits and party faithful not see Obama's victory coming due to their "information bubble"?

Karl Rove's meltdown in real time, Mitt Romney's having been so confident in his victory that he didn't even write a concession speech, the amazingly gobsmacked stunned silence of the audience at Romney's Boston campaign HQ, Allen West's desperate legal maneuvers post-loss, the predictions of even ordinarily respectable conservative pundits (like George Will) of a solid Romney victory -- these all point to a shared illusion that completely ignored the reality of the polls in front of them and the math required to get to 270 electoral votes.

Barack Obama had the advantage from June to November. The only glimmer of hope Romney had came in the wake of the first debate (a truly mediocre performance by the President, coupled with a truly awful close, helped, as did a very solid performance by Romney that established credibility), but it never increased his numbers enough to achieve an electoral victory. Yet it seemed everyone (note: hyperbole) on the Right, both at large and here on America's Debate, was supremely confident in a Romney victory.

I don't know of any explanation other than an "information bubble."

Do these failures to predict/accept reality cause secondary tactical or strategic blunders among the GOP campaigns?

I don't think Romney had much to lose by going into Pennsylvania -- he had pretty much saturated the markets in Wisconsin, Ohio, and Virginia. He had money to burn.

I also don't think that the number-crunchers on Romney's campaign held any illusions about where the path to victory lay -- they put their emphasis on the right states.

But I do think that downballot candidates -- particularly what the Left has derisively called "Team Rape," including Akin, Mourdock, McMahon, Berg (ND), Tom Smith, and Joe Walsh -- believed their extreme positions on women's rights would benefit them electorally, and this cost the GOP the Senate in an election year when the Senate was really the GOP's to lose. For crying out loud, they couldn't even capture North Dakota, a state as red as Tabasco sauce sprinkled on a stop sign in a bowl of cherries wrapped in the Chinese flag.

It could be that overconfidence enticed them to devote time and effort to unwinnable races and not put more time and effort into tighter but winnable races, but I don't have specific evidence of that.

Should the right-wing media change their tactics to reflect rather than ignore reality, or is there a psychological benefit to their "I don't care what the facts are, we're going to WIN!" tactic?

There is a psychological benefit: People like to be on the winning side. And people who think they're going to win work harder than people who think they're going to lose.

However, the demoralization of losing when one is so confident of winning can have far-reaching negative effects on future cycles, and in a loss of trust in one's leaders.

I think there's a net negative to the "bubble" there, and so I believe that bursting it will only be to the GOP's benefit. And to the country as a whole.
Eeyore
post Nov 11 2012, 05:07 PM
Post #5


Thaaaaanks for noticin' me

Sponsor
February 2003

Group: Moderators
Posts: 2,483
Member No.: 365
Joined: December-28-02

From: Nashville
Gender: Male
Politics: Liberal
Party affiliation: Democrat



QUOTE(Raptavio @ Nov 11 2012, 09:09 AM) *
Is Nate Silver just lucky, or is his record of accuracy based upon solid mathematical, statistical and scientific principle?

Statistics is an area where I have some expertise. Nate Silver is one of the most brilliant statisticians in the field. He is dead-on accurate because of this.
I think the Beltway media have woken up to this fact. As testament, I predict that in the coming cycles you will see the emergence of several Silver wannabes trying the same number-crunching game.
Some of these will be "doing it wrong," weighting their results in favor of their preferred candidate. The people who do this will be exclusively on the Right, and will attempt to create a new "information bubble."
UnskewedPolls.com was the first, and the most ham-handed, of these efforts. More sophisticated, plausible-seeming ones will follow.


I would add to this prediction that the danger to poll accuracy will be a debate over the quality of the polls being included. When conservative groups attack polls for having a left-leaning methodology, the wrong reaction will be to support right-leaning polls as a correction. Either a poll's science is valid and it should be included, or it is not and it should not be. Attempts to include biased polls in someone's aggregation system are where the science will be taken off track; I expect loud debates over the validity of the polls used in these systems.

This tactic of selective bias has been commonly used, most egregiously in the selection of intelligence for the Iraq war. It is simply a two-wrongs-make-a-right tactic. It is like being an America's Debate member, feeling that some posters are unfairly allowed to break the rules, and then going and doing the same or worse in response.
AuthorMusician
post Nov 11 2012, 06:46 PM
Post #6


Glasses and journalism work for me.

Sponsor
November 2003

Group: Sponsors
Posts: 6,328
Member No.: 297
Joined: December-1-02

From: Blueberry Hill
Gender: Male
Politics: Liberal
Party affiliation: Democrat



Is Nate Silver just lucky, or is his record of accuracy based upon solid mathematical, statistical and scientific principle?

I'm interested enough to read his book down the line. I wonder how many powerful people in the GOP will do the same?

My guess is none. There's some very bad thinking going on up there. It's as if the whole lot got infected by some virus that removes rationality. Did anyone really think Romney had a chance after alienating virtually all minorities, women of birthing age (plus spouses) and college students? Did they really expect a run to the middle right at the end would work? Apparently so, and all I can come up with is a viral infection that targets mostly Republicans. Let's see, probably not an STD but must involve delivery systems that Republicans find irresistible.

Fox News and talk radio! Just say no to that, and things will get better.
akaCG
post Nov 12 2012, 01:05 AM
Post #7


Advanced Senior Contributor

Sponsor
August 2012

Group: Sponsors
Posts: 4,846
Member No.: 10,787
Joined: November-25-09

Gender: Male
Politics: Conservative
Party affiliation: Independent



Is Nate Silver just lucky, or is his record of accuracy based upon solid mathematical, statistical and scientific principle?

'Far as I can tell, Silver is about as consummately professional a psephologist as they come.

Whether he's lucky, and to what degree, is still an open question, however. After all, we're talking about just three election cycles. Statistically speaking, that's a mighty small sample. For instance, is an investor who outperforms the S&P 500 three years in a row really lucky or really good?

And, for goodness's sake, let's drop the "Republicans vs. Math" nonsense. It's just silly.

The statement "1 + 2 = 3" is math. It will always be correct.

The statement "If we assume that A will be equal to 1 and assume B will be equal to 2, then we can assume that A + B will be equal to 3", however, is a prediction model that will turn out to be correct only if its assumptions are. Otherwise, it won't be.

Nate Silver turned out to be right and Gallup and Rasmussen turned out to be wrong not because the former is good at math while the latter aren't, but because the former's assumptions were correct while the latter's assumptions weren't.

ps:

What I would reeeeally like to see is how Silver's model would have performed over the past 13 Presidential and 13 mid-term elections (i.e. since 1961, the year that the number of Electoral College votes became 538 as a result of the 23rd Amendment). Now that would reeeeally help settle the "Is Nate Silver just lucky?" question.

This post has been edited by akaCG: Nov 12 2012, 01:21 AM
amf
post Nov 12 2012, 01:19 AM
Post #8


Millennium Mark

Group: Members
Posts: 1,372
Member No.: 1,540
Joined: October-23-03

From: Atlanta, GA
Gender: Male
Politics: Moderate
Party affiliation: Independent



QUOTE
What I would reeeeally like to see is how Silver's model would have performed over the past 13 Presidential and 13 mid-term elections


Won't happen, because polling data was in short supply right up until telecom costs dropped like a rock following the 2001 recession. It was too expensive to call all over the country on a regular basis. Now it's nearly free, so polls are flowing like water from a hose.

QUOTE
Either the science is valid and a poll is done right and therefore should be included or it is not and should not be. This attempt to include biased polls in someone's aggregation techniques will be where the science will get taken off track. I.e. I expect loud debates over the validity of polls used in these systems.


Nate's system accounts for this by weighting pollsters according to their historical accuracy for certain types of polls. He also weights state polls relative to national polls, so that trends get reflected throughout his system. Part of what helped Nate this year was Sandy... Gallup, which had gone way off course with its national tracking poll for several weeks -- as their tracking poll had done in previous elections as well, by the way -- stopped polling for a week, which made the final aggregates more accurate. Had that not happened, it's possible that he would have incorrectly called Florida, which was the closest race in his model.
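The weighting described above can be sketched as a simple weighted average. The pollster names, vote shares, and weights below are invented for illustration; this is not FiveThirtyEight's actual ratings or method, just the basic idea of discounting historically less accurate pollsters:

```python
# Toy accuracy-based weighting: each poll contributes in proportion to a
# weight derived (hypothetically) from the pollster's past accuracy.
polls = [
    # (pollster, Obama two-party share, weight from past accuracy)
    ("Pollster A", 0.50, 1.0),
    ("Pollster B", 0.48, 0.4),   # historically less accurate -> less weight
    ("Pollster C", 0.51, 0.8),
]

total_weight = sum(w for _, _, w in polls)
estimate = sum(share * w for _, share, w in polls) / total_weight
print(f"weighted aggregate: {estimate:.1%}")   # 50.0% with these toy numbers
```

Note how the low-weight outlier (Pollster B) pulls the aggregate down far less than it would in a plain average; that is the mechanism that limits the damage from a wayward tracking poll.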
Eeyore
post Nov 12 2012, 01:49 AM
Post #9


Thaaaaanks for noticin' me

Sponsor
February 2003

Group: Moderators
Posts: 2,483
Member No.: 365
Joined: December-28-02

From: Nashville
Gender: Male
Politics: Liberal
Party affiliation: Democrat



QUOTE(akaCG @ Nov 11 2012, 07:05 PM) *
And, for goodness's sake, let's drop the "Republicans vs. Math" nonsense. It's just silly.

The statement "1 + 2 = 3" is math. It will always be correct.


I cannot and will not drop what you call silliness. I think it is rampantly evident that the Republican consensus machine has been anti-intellectual and has argued in the face of scientific methodology again and again.

So for me it isn't silly; it is just about the best way to account for how information has been used by my Republican opponents over the past generation.

The evidence is right there in the response to the polling data compiled by Nate Silver. Look at the responses and the reasons offered for why his information was supposedly skewed to the left. It was an ideological response to math (which doesn't have an ideology).
akaCG
post Nov 12 2012, 03:08 AM
Post #10


Advanced Senior Contributor

Sponsor
August 2012

Group: Sponsors
Posts: 4,846
Member No.: 10,787
Joined: November-25-09

Gender: Male
Politics: Conservative
Party affiliation: Independent



QUOTE(amf @ Nov 11 2012, 08:19 PM) *
QUOTE
What I would reeeeally like to see is how Silver's model would have performed over the past 13 Presidential and 13 mid-term elections

Won't happen, because polling data was in short supply right up until telecom costs dropped like a rock following the 2001 recession. It was too expensive to call all over the country on a regular basis. Now it's nearly free, so polls are flowing like water from a hose.
...

Well, then, I guess we'll just have to wait until about the 2052 Presidential Election before we have a statistically large enough sample to help us determine whether Nate Silver is really good or really lucky.



QUOTE(Eeyore @ Nov 11 2012, 08:49 PM) *
QUOTE(akaCG @ Nov 11 2012, 07:05 PM) *
And, for goodness's sake, let's drop the "Republicans vs. Math" nonsense. It's just silly.

The statement "1 + 2 = 3" is math. It will always be correct.

I can not will not drop what you call silliness. ...
...

FWIW, I do believe that you neither can nor will. I really do. At this point, anyway. Whether you will, at some point in the future, be able and willing to, ... well, let's just say that the assumptions built into my "How likely is a self-described Liberal Democrat to entertain the notion that Nate Silver is really lucky as opposed to really good" model point to odds no better than those of winning a "Six Line" bet at roulette.



This post has been edited by akaCG: Nov 12 2012, 03:37 AM
Eeyore
post Nov 12 2012, 03:47 AM
Post #11


Thaaaaanks for noticin' me

Sponsor
February 2003

Group: Moderators
Posts: 2,483
Member No.: 365
Joined: December-28-02

From: Nashville
Gender: Male
Politics: Liberal
Party affiliation: Democrat



QUOTE(akaCG @ Nov 11 2012, 09:08 PM) *
FWIW, I do believe that you neither can nor will. I really do. At this point, anyway. Whether you will, at some point in the future, be able and willing to, ... well, let's just say that the assumptions built into my "How likely is a self-described Liberal Democrat to entertain the notion that Nate Silver is really lucky as opposed to really good" model points to odds that are no better than those of winning a "Six Line" bet at roulette.


No, I most certainly could drop it if it were the right thing to do or if it were silliness. But your calling it silliness does not make it so. This is not a talking point; it is math. The typical Republican conservative move is to recreate math through selective bias to fit the worldview. The typical liberal move is to quake in our boots wondering if math is really math, because the Republicans just sound sooo self-assured.

So, to clarify, your point was that Silver was lucky rather than using sound math. But then it is not really Republicans versus math. Should I spin it to reflect what it sounds like to me: Republicans versus reality?

I want my information to be accurate, and then I will deal with what expert consensus holds as accurate. I will challenge it if I have doubts or questions. I do not have liberal talking points leading my thinking. I discovered Nate Silver during this election season and found his reasoning consistently and rationally presented and explained. He used math, not politics.

So discard me if you will, but I am not buying your silliness argument.

This post has been edited by Eeyore: Nov 12 2012, 03:54 AM
JohnfrmCleveland
post Nov 12 2012, 04:18 AM
Post #12


Master Debater

Sponsor
September 2009

Group: Sponsors
Posts: 2,456
Member No.: 8,090
Joined: November-1-07

From: Cleveland, OH
Gender: Male
Politics: Very Liberal
Party affiliation: None



QUOTE(Eeyore @ Nov 11 2012, 10:47 PM) *
I discovered Nate Silver during this election season and found his reasoning to be consistently and rationally presented and explained. He used math not politics.


This isn't Nate Silver's first go-around at this stuff. Before he moved to the NYTimes, he ran fivethirtyeight.com, a site which still has data up to the 2010 elections. It's sitting there, frozen in time, untouched since he left for his new office.

He was already making a name for himself back in 2008.
Raptavio
post Nov 12 2012, 04:52 AM
Post #13


Advanced Senior Contributor

Group: Members
Posts: 3,515
Member No.: 10,458
Joined: April-27-09

From: Rosemount, MN
Gender: Male
Politics: Very Liberal
Party affiliation: Democrat



QUOTE(akaCG @ Nov 11 2012, 07:05 PM) *
Is Nate Silver just lucky, or is his record of accuracy based upon solid mathematical, statistical and scientific principle?

'Far as I can tell, Silver is about as consummately professional a psephologist as they come.

As far as whether he's lucky, and to what degree, is still an open question, however. After all, we're talking about just 3 election cycles. Statistically speaking, that's a mighty small sample. For instance, is an investor who outperforms the S&P500 3 years in a row really lucky or really good?

And, for goodness's sake, let's drop the "Republicans vs. Math" nonsense. It's just silly.

The statement "1 + 2 = 3" is math. It will always be correct.

The statement "If we assume that A will be equal to 1 and assume B will be equal to 2, then we can assume that A + B will be equal to 3", however, is a prediction model that will turn out to be correct only if its assumptions are. Otherwise, it won't be.

Nate Silver turned out to be right and Gallup and Rasmussen turned out to be wrong not because the former is good at math while the latter aren't, but because the former's assumptions were correct while the latter's assumptions weren't.

ps:

What I would reeeeally like to see is how Silver's model would have performed over the past 13 Presidential and 13 mid-term elections (i.e. since 1961, the year that the number of Electoral College votes became 538 as a result of the 23rd Amendment). Now that would reeeeally help settle the "Is Nate Silver just lucky?" question.


Your error lies in your proclamations about the sample size.

There are three election cycles, yes, but each cycle's sample includes 50 states (40 of which a trained chimp could call, of course) plus 33-34 Senate races on top of those. Nate Silver has shown a 95% rate of accuracy across three election cycles of state-by-state polling. The only race he got wrong this cycle was North Dakota's Senate race. Last Presidential cycle, he got a single state wrong.

Statistically speaking, Nate Silver has demonstrated at p < 0.05 that his methods are sound. Certainly as sound as, or sounder than, any other prediction model in the field right now.

Had you said there's not enough data to say that conclusively prior to Nov. 6, you'd have been right. Not any more.

And that's math. The kind of 1+2=3 math upon which you say everyone agrees, yet upon which everyone clearly didn't agree going into this election and which you are disputing now.

You can do the math yourself. Say 10 Senate races and 10 Presidential states are "battlegrounds" per cycle. That makes 50 unique data points: two Presidential campaigns and three Senate campaigns. Give an extremely generous 85 percent baseline chance of success per race. There are 19,600 possible patterns that make exactly 47 out of 50 correct calls, 1,225 that make 48, 50 that make 49, and 1 that makes 50. Result? p = 0.046. Silver's got the mojo within conventional measures of statistical confidence, even with a very generous hit/miss ratio.
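That arithmetic can be checked in a few lines. This is just the binomial tail probability under the post's own generous assumption of an 85% per-race hit rate for 50 independent races; none of the numbers come from anywhere else:

```python
from math import comb

n, k_min, p = 50, 47, 0.85  # 50 races, 47+ correct, generous 85% per-race chance

# Tail probability: chance of calling 47 or more races correctly by luck alone
p_value = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))
print(round(p_value, 3))  # ~0.046
```

The binomial coefficients match the post's counts (comb(50, 47) == 19600, comb(50, 48) == 1225, and so on), and the tail sum lands at p = 0.046.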

There's also the point that most of the Republican prognostications were not based on math, but on unfounded assumptions about what turnout would be like, based on everything from subjective evaluations of enthusiasm at Romney rallies to the number of campaign signs one saw driving down the street. Or just flat-out guessing, as people like George Will and Dick Morris seemed to do. And Republican campaigns and voters alike seemed to accept the prognostications which were not based on math but which predicted happy things for the GOP without question.

Or am I wrong? Start with your own predictions. Upon what data points were they based? As I recall, you suggested the polls (prior to the 1st debate, of course) showed the President so far ahead because the pollsters were deliberately stacking the deck. I never understood why. Maybe you could explain it.

akaCG
post Nov 12 2012, 04:38 PM
Post #14


*********
Advanced Senior Contributor

Sponsor
August 2012

Group: Sponsors
Posts: 4,846
Member No.: 10,787
Joined: November-25-09

Gender: Male
Politics: Conservative
Party affiliation: Independent



QUOTE(Eeyore @ Nov 11 2012, 10:47 PM) *
...
So, to clarify, your point was that Silver was lucky instead of using sound math. ...
...

No, it wasn't. My point is two-fold:

1.
Both the forecasters/modelers whose predictions were correct (e.g. Nate Silver) and those whose predictions were wrong (I'm not talking about George Will, Peggy Noonan, Joe Scarborough, etc. here; I'm talking about, say, these guys) use math. The difference between the two is not that the former use math while the latter don't. The difference is in the assumptions that each plugged into their model (e.g.: Do we give more weight to national or state-level polls? What co-variance factor do we assign to each individual state's polls? At what rate do we reduce the "weight" of old polls? Do we exclude outlier polls? Do the "internals" of polls matter at all, or do we only look at the "top line" numbers thereof? What weight do we assign to, say, a Gravis Marketing poll versus a, say, NBC/WSJ poll? Do we include economic factors (e.g. GDP, Consumer Confidence, etc.) or do we ignore them? What proportion of "independents" do we assign to the "Lean Dem" and "Lean Rep" categories? Do we use only the last Presidential Election to base our turnout assumptions on, or do we also give some weight to the last mid-term election? Do we take the "incumbency factor" into account? Etcetera, etcetera, etcetera.)
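The sorts of modeling choices listed above can be made concrete with a toy poll aggregator. Everything in this sketch is hypothetical — the polls, the sample-size weighting, and the 7-day half-life for down-weighting old polls are invented assumptions, not any forecaster's actual parameters — but it shows how "at what rate do we reduce the weight of old polls?" becomes a number plugged into a model:

```python
from datetime import date

# Hypothetical polls: (field date, Obama share among decided voters, sample size).
polls = [
    (date(2012, 10, 20), 0.50, 800),
    (date(2012, 10, 28), 0.51, 1200),
    (date(2012, 11, 3), 0.52, 1000),
]

ELECTION_DAY = date(2012, 11, 6)
HALF_LIFE = 7.0  # assumption: a poll's weight halves for every 7 days of age

def weight(poll_date, sample_size):
    """Sample-size weight, decayed by the poll's age on election day."""
    age = (ELECTION_DAY - poll_date).days
    return sample_size * 0.5 ** (age / HALF_LIFE)

total = sum(weight(d, n) for d, _, n in polls)
avg = sum(weight(d, n) * share for d, share, n in polls) / total
print(f"weighted share of decided voters: {avg:.3f}")
```

Change the half-life or the per-pollster weights and the same polls yield a different forecast — which is exactly the point about assumptions.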

2.
It's too early to tell to what degree Nate Silver's model will stand the test of time. He himself, being the consummate professional that he is, realizes and admits it: "I know we're going to have some misses sooner or later," he said, adding that an incorrect forecast on the Senate race in North Dakota is "proof that we can be wrong — and polls can be wrong."

ps:
I thought I'd include the following, from a gentleman whose model also uses plenty of math but whose set of variables and assumptions are notably different from Silver's, and whose model predicted (back in June!) that Obama would get 328 ECVs (bolding mine):
QUOTE
...
The Time-for-Change model predicts the incumbent party candidate’s national-level vote share as a function of three variables: the president’s net approval-disapproval rating in June of the election year; the percent change in GDP from Q1 to Q2 of the election year; and whether the incumbent party has held the presidency for two or more terms. ... Presidential approval data are from the Gallup survey organization, and economic data are from the U.S. Department of Commerce Bureau of Economic Analysis.
...
To do the updating, the model makes a series of assumptions; some for simplification, others to capture what we (think we) know about the dynamics of public opinion during a presidential election campaign. First, I translate every poll result into the proportion favoring Obama or Romney among only those respondents with a preference for either candidate. This sets aside anyone who is undecided – and is why all the survey results on the poll tracker page appear symmetrical. I consider this safer than making guesses about how those voters will ultimately decide on Election Day.

Next, I assume ...
...
The model then assumes ...
...
Finally, the model makes an assumption ...
...

Link: http://votamatic.org/how-it-works/

The remarkable thing about this model is its stability over the course of the past 4 months. Compare that to Nate Silver's fluctuations. Wow.

What I also like about it is its transparency, which makes it possible to check the guy's work, going back to 1948. Nate Silver's (for perfectly understandable commercial reasons), on the other hand, is impossible to reproduce and back-test.

Wish I had come across this guy's model before today. It's now bookmarked, for use next time.

Raptavio
post Nov 12 2012, 06:02 PM
Post #15


*********
Advanced Senior Contributor

Group: Members
Posts: 3,515
Member No.: 10,458
Joined: April-27-09

From: Rosemount, MN
Gender: Male
Politics: Very Liberal
Party affiliation: Democrat



QUOTE(akaCG @ Nov 12 2012, 10:38 AM) *
QUOTE(Eeyore @ Nov 11 2012, 10:47 PM) *
...
So, to clarify, your point was that Silver was lucky instead of using sound math. ...
...

No, it wasn't. My point is two-fold:

1.
Both the forecasters/modelers whose predictions were correct (e.g. Nate Silver) and those whose predictions were wrong (I'm not talking about George Will, Peggy Noonan, Joe Scarborough, etc. here; I'm talking about, say, these guys) use math. The difference between the two is not that the former use math while the latter don't. The difference is in the assumptions that each plugged into their model (e.g.: Do we give more weight to national or state-level polls? What co-variance factor do we assign to each individual state's polls? At what rate do we reduce the "weight" of old polls? Do we exclude outlier polls? Do the "internals" of polls matter at all, or do we only look at the "top line" numbers thereof? What weight do we assign to, say, a Gravis Marketing poll versus a, say, NBC/WSJ poll? Do we include economic factors (e.g. GDP, Consumer Confidence, etc.) or do we ignore them? What proportion of "independents" do we assign to the "Lean Dem" and "Lean Rep" categories? Do we use only the last Presidential Election to base our turnout assumptions on, or do we also give some weight to the last mid-term election? Do we take the "incumbency factor" into account? Etcetera, etcetera, etcetera.)


Actually, the University of Colorado model, to my understanding, doesn't use polls at all, but predicts election outcomes based entirely on economic models and previous election outcomes. The math behind those models is sound, I'm sure, but the models themselves are not connected to polling data in any way. Which makes the model's connection to the most direct data available -- polling -- nonexistent. The flaw in this model should be obvious.

QUOTE
2.
It's too early to tell to what degree Nate Silver's model will stand the test of time. He himself, being the consummate professional that he is, realizes and admits it: "I know we're going to have some misses sooner or later," he said, adding that an incorrect forecast on the Senate race in North Dakota is "proof that we can be wrong — and polls can be wrong."


An error rate is a foregone conclusion. I think that's different than saying the jury's out about whether his model is decisively sound.

QUOTE
I thought I'd include the following, from a gentleman whose model also uses plenty of math but whose set of variables and assumptions are notably different from Silver's, and whose model predicted (back in June!) that Obama would get 328 ECVs (bolding mine):
QUOTE
...
The Time-for-Change model predicts the incumbent party candidate’s national-level vote share as a function of three variables: the president’s net approval-disapproval rating in June of the election year; the percent change in GDP from Q1 to Q2 of the election year; and whether the incumbent party has held the presidency for two or more terms. ... Presidential approval data are from the Gallup survey organization, and economic data are from the U.S. Department of Commerce Bureau of Economic Analysis.
...
To do the updating, the model makes a series of assumptions; some for simplification, others to capture what we (think we) know about the dynamics of public opinion during a presidential election campaign. First, I translate every poll result into the proportion favoring Obama or Romney among only those respondents with a preference for either candidate. This sets aside anyone who is undecided – and is why all the survey results on the poll tracker page appear symmetrical. I consider this safer than making guesses about how those voters will ultimately decide on Election Day.

Next, I assume ...
...
The model then assumes ...
...
Finally, the model makes an assumption ...
...

Link: http://votamatic.org/how-it-works/

The remarkable thing about this model is its stability over the course of the past 4 months. Compare that to Nate Silver's fluctuations. Wow.

What I also like about it is its transparency, which makes it possible to check the guy's work, going back to 1948. Nate Silver's (for perfectly understandable commercial reasons), on the other hand, is impossible to reproduce and back-test.

Wish I had come across this guy's model before today. It's now bookmarked, for use next time.


Wow. Thanks for this link, akaCG. His indicators should be very interesting to examine as well. At first blush, his model appears to be similar but distinct in several important respects.

On statistical assumptions:

All statistics require some kind of assumptions, this is true. But not all assumptions are the same. Assumptions that are far abstracted from raw data (e.g. UC's stuff) are far more tenuous than assumptions that are based on the raw data and their history (e.g., that poll responses are generally truthful, that low response rates don't tend to skew data, that polling landlines only doesn't tend to skew data, that undecideds at date X tend to break by election day based on algorithm Y, that economic factors A, B, and C have impacts D, E and F on turnout).

Which is why any statistician would tell you that Nate Silver's models are far more sound than UC's.

And why we can say that most of the Republican predictions were based on assumptions that actually defied the data (enthusiasm gap, demographic turnout changes).

Hobbes
post Nov 13 2012, 05:00 PM
Post #16


Group Icon

**********
No More Mr. Nice Guy!

Group: Committee Members
Posts: 5,310
Member No.: 1,155
Joined: September-8-03

From: Dallas, TX
Gender: Male
Politics: Conservative
Party affiliation: Independent



QUOTE(Raptavio @ Nov 12 2012, 12:02 PM) *
Actually, the University of Colorado model, to my understanding, doesn't use polls at all, but predicts election outcomes based entirely on economic models and previous election outcomes. The math behind those models is sound, I'm sure, but the models themselves are not connected to polling data in any way. Which makes the model's connection to the most direct data available -- polling -- nonexistent. The flaw in this model should be obvious.


Not a flaw, actually, an advantage. You should base your predictions on past data (other election cycles), not predictions about the upcoming outcome (polls). Polls have a great many flaws. They are inherently subjective--the way the questions are formed, etc. There is often a disconnect in them between what one says and what one does. Polls are never scientifically constructed--ie, with reverse questions to verify answers, etc, questions used to establish a baseline, etc. Such polls quickly get too long, but they also indicate the flaws of polling. Polls are one of the, if not the, weakest ways to establish a claim. They are also cheap and easy though, hence their proliferation, mainly just in politics (well, and BCS football, which is another matter).

In your statistics background, have you studied creating statistically valid polls, Raptavio (real question, not meant as a trap)? Doing so is extremely difficult, and if one wants to spend the time necessary, the poll is invariably too long to be very useful. What always happens is that pollsters throw in all sorts of questions to get as much 'data' as possible, while keeping the number of questions as low as possible so that people will actually respond. This goes directly against what statistics says should be done, which is to focus on as small a set of variables as possible and make the poll as robust as possible to allow for verification of veracity of response. The way polls are used is really more like a very large focus group... asking people all sorts of questions, with little to no verification of veracity. Polls are a little better in elections, because at least they are similar to how voting is actually done, but they still have many, many flaws.

One of the only reasons polls are effective in politics is that the number of people who are truly 'variables' (i.e., might vote differently in each election) is really small, essentially within the poll's margin of error to start with. You know ahead of time that both Democrats and Republicans are going to get about 47% of the vote before you even start. I did notice an increased emphasis on supposedly undecided voters in this election, but then how many of those were truly undecided? If you listened to them during any of the telecasts they were on, they all seemed to have at least a strong preference one way or the other. Which is also one of the flaws in polls... just because somebody says something doesn't mean that's really what they think (ask any married person smile.gif )

FWIW... we had a thread on a similar topic some years ago (good grief... was that really 8 years ago!) in which someone attempted to argue that polls should be preferred to actual results--which defies just about every principle in statistics. Hopefully you will agree (Erasmussimo never did).

Note: not making any claims for or against Silver's model here---just pointing out that not basing something on polls is not inherently flawed. Any model, regardless of what it is based on, needs to be tested against actual data. Silver's has.

FWIW... my original degree is in Petroleum Engineering. Since no one has ever gone 10,000 feet underground and actually studied how oil flows, there are two versions of most reservoir engineering models--one theoretical, based on how physics says things should work, and one empirical, based on samples of how things actually worked during various periods of time. In general, the empirical ones work better, with the theoretical ones playing catch-up. Which makes sense... the empirical ones are based on actual data, inherently accounting for all the thousands of possible variables, whereas the theoretical ones need to figure out what all those variables are, and what their value would be during any particular set of circumstances. Seems a bit relevant to the discussion here, although not an exact analogy. Point being: if someone develops a model based on past experience and demonstrates it is effective, there is no reason to discount it on that basis--it is actually the best measure to use, IF possible and proven.
Raptavio
post Nov 14 2012, 12:47 AM
Post #17


*********
Advanced Senior Contributor

Group: Members
Posts: 3,515
Member No.: 10,458
Joined: April-27-09

From: Rosemount, MN
Gender: Male
Politics: Very Liberal
Party affiliation: Democrat



QUOTE(Hobbes @ Nov 13 2012, 11:00 AM) *
QUOTE(Raptavio @ Nov 12 2012, 12:02 PM) *
Actually the University of Colorado model, to my understanding, doesn't use polls at all, but predict election outcomes based entirely on economic models and previous election outcomes. The math based on those models is sound, I'm sure, but the models themselves are not directly connected to polling data in any way. Which makes the model's connection to the most direct data available -- polling -- nonexistant. The flaw in this model should be obvious.


Not a flaw, actually, an advantage.


w00t.gif

QUOTE
You should base your predictions on past data (other election cycles), not predictions about the upcoming outcome (polls).


Polls are not predictions. They are measures of a sample set of the full data set, showing opinions of a snapshot in time. They are not predictions, they are estimations.

QUOTE
Polls have a great many flaws. They are inherently subjective--the way the questions are formed, etc. There is often a disconnect in them between what one says and what one does. Polls are never scientifically constructed--ie, with reverse questions to verify answers, etc, questions used to establish a baseline, etc. Such polls quickly get too long, but they also indicate the flaws of polling. Polls are one of the, if not the, weakest ways to establish a claim. They are also cheap and easy though, hence their proliferation, mainly just in politics (well, and BCS football, which is another matter).

In your statistics background, have you studied creating statistically valid polls, Raptavio (real question, not meant as a trap)? Doing so is extremely difficult, and if one wants to spend the time necessary, the poll is invariably too long to be very useful. What always happens is that pollsters throw in all sorts of questions, to get as much 'data' as possible,and keep the number of questions as low as possible so that people will actually respond. This goes directly against what statistics says should be done, which is to focus on as small a set of variables as possible, and make the poll as robust as possible to allow for verification of veracity of response. The way polls are used is really more like a very large focus group...asking people all sorts of questions, with little to no verification of veracity. Polls are a little better in elections, because at least they are similar to how voting is actually done, but they still have many many flaws.


In the words of Tony Stark in The Avengers, "Finally, someone who speaks English." You appear to have some important knowledge about statistics as applied to surveys. Cool. But you're wrong about several things.
I have studied creating statistically accurate polls ('statistically valid' isn't exactly a term that applies). You're quite right that care must be taken to word the questions neutrally and to avoid introducing bias through the survey methodology in order to get the most valid poll possible. Even little things like naming one candidate first or mentioning party ID can have an impact. But imperfections in the methodology do not invalidate polls; they merely impact accuracy. And, as has been demonstrated in every Presidential election since 1952, pollsters have learned how to survey in a manner that lands within useful margins of error of actual results. The measures you speak of are not requirements for "scientifically valid" polls or social surveys; they are measures that would increase the accuracy of the poll in a manner that is of, frankly, marginal benefit when conducting political preference polls.
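For reference, the "margin of error" invoked above is, in the textbook simple-random-sample case, just a function of the sample proportion and the sample size. A minimal sketch (real pollsters adjust further for weighting and design effects, which this ignores):

```python
from math import sqrt

def margin_of_error(p_hat, n, z=1.96):
    """95% margin of error for a proportion from a simple random sample."""
    return z * sqrt(p_hat * (1 - p_hat) / n)

# A 1,000-respondent poll showing 50% support:
moe = margin_of_error(0.50, 1000)
print(f"+/- {moe:.1%}")  # about +/- 3.1 points
```

This is why typical ~1,000-person national polls all report roughly the same plus-or-minus 3 points, regardless of who conducts them.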

QUOTE
Note: not making any claims for or against Silver's model here---just pointing out that not basing something on polls is not inherently flawed. Any model, regardless of what it is based on, needs to be tested against actual data. Silver's has.


Not basing something on polls is not inherently flawed (except inasmuch as any model will be flawed). However, it is more abstracted from actual sample data (polls) and is therefore likely to have a wider margin of error than one based on direct sample data.

QUOTE
FWIW...my original degree is in Petroleum Engineering. Since no one has even gone 10,000 feet underground and actually studied how oil flows, there are two versions of most reservoir engineering models---one theoretically, based on how physics etc says things should work, and one empirical, based on samples of how things actually worked during various periods of time. In general, the empirical ones work better, with the theoretical ones playing catchup. Which makes sense...the empirical ones are based on actual data, inherently accounting for all the thousands of possible variables, whereas the theoretical ones need to figure out what all those variables are, and what their value would be during any particular set of circumstances. Seems a bit relevant to the discussion here, although not an exact analogy. Point being if someone develops a model based on past experience, and demonstrates it is effective, no reason to discount it based on that--it is actually the best measure to use, IF possible and proven.


The comparison between UC's model and the polls is not between theory and empirical data, it's between a correlation model (UC's) and a sample dataset (polling). The inherent issue with the correlation model is that correlation does not necessarily mean causation. The inherent issue with sample data is that in political preference, the data change with the vicissitudes of the campaign and current events. However, the advantages of one over the other should, I hope, be obvious.
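To make the distinction concrete, here is a minimal correlation model in the spirit of the fundamentals-only approach — ordinary least squares on entirely invented (GDP growth, incumbent vote share) pairs. The data and the single predictor are made up for illustration and are not the UC model's actual inputs; the point is only that the forecast rides entirely on a historical correlation, with no voter ever being sampled:

```python
# Invented historical pairs: (Q2 GDP growth %, incumbent-party two-party vote %).
data = [(3.1, 53.4), (0.5, 48.9), (2.0, 51.2), (-1.0, 46.1), (1.2, 49.8)]

n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n

# Ordinary least squares slope and intercept.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in data)
         / sum((x - mean_x) ** 2 for x, _ in data))
intercept = mean_y - slope * mean_x

# "Forecast" for a hypothetical election year with 1.5% Q2 growth.
forecast = intercept + slope * 1.5
print(f"predicted incumbent share: {forecast:.1f}%")
```

If the correlation shifts (say, voters stop punishing slow growth), nothing inside such a model can detect it, whereas a poll-based model would pick it up directly from the sample data.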
lo rez
post Nov 14 2012, 07:26 PM
Post #18


******
Senior Contributor

Group: Members
Posts: 387
Member No.: 10,616
Joined: August-8-09

From: Minneapolis, MN
Gender: Male
Politics: Liberal
Party affiliation: Independent



Heads up - Nate Silver is doing a live Q&A over at Deadspin right now.
Hobbes
post Nov 15 2012, 01:10 AM
Post #19


Group Icon

**********
No More Mr. Nice Guy!

Group: Committee Members
Posts: 5,310
Member No.: 1,155
Joined: September-8-03

From: Dallas, TX
Gender: Male
Politics: Conservative
Party affiliation: Independent



QUOTE(Raptavio @ Nov 13 2012, 06:47 PM) *
Polls are not predictions. They are measures of a sample set of the full data set, showing opinions of a snapshot in time. They are not predictions, they are estimations.


Therein lies what everyone needs to remember. They are just samples (everyone gets that), showing opinions at a snapshot in time. Opinions are inherently subjective, and likely to change. Hence the importance of polls being a snapshot in time, thereby recognizing that just such change can, and likely will, happen. This is what can easily make polls less accurate than other measures that rely on more stable quantities, such as the economy and past elections.


QUOTE
...pollsters have learned how to survey in a manner that is within useful margins of error from actual results.


This is mostly true, discarding those that intentionally skew the results for political purposes (didn't hear much about those this cycle).
QUOTE
The measures you speak of are not requirements for "scientifically valid" polls or social surveys; they are measures that wold increase the accuracy of the poll in a manner that is of, frankly, marginal benefit when conducting political preference polls.


This is also mostly true. The big difference between polls and surveys is that with polls, you do get to see the 'actual' results, in full, later. In most surveys, you do not; hence steps need to be taken to determine the accuracy of the data within the survey itself (i.e., perform regression analysis on the results, etc.). For example, in a survey, what you would want to be able to show is that those people who thought, say, the economy was the most important factor, and that Candidate A was better for the economy, actually voted for Candidate A. You would also want multiple questions on the economy to determine that they really felt that way. That is missing from polls... but adding in the number of questions necessary to validate that is exactly what would make the polls too long. This is why I compare polls to focus groups. Both gather a lot of data; neither really includes any methodology to validate that data. Many polls now gather some of this data for cross-sectional purposes (people in the South support Candidate A, people who prioritize health care prefer Candidate B, etc.), but without the real ability to perform any regression analysis. This is what I meant when I was talking about the statistical validity of polls--they lack the data to perform any validation of their results within themselves.

QUOTE
Not basing something on polls is not inherently flawed (except inasmuch as any model will be flawed). However, it is more abstracted from actual sample data (polls) and is therefore likely to have a wider margin of error than one based on direct sample data.


Agree on the first part (the abstractness)... the latter depends on the accuracy of the pseudo-estimators used. It could well be that, say, the economy determines the outcome of the election more accurately than any opinion polls, in which case the model wouldn't have a wider margin of error; it would have a narrower one. This is due to the two factors you mentioned, which I highlighted earlier: polls represent opinions, and can vary over time. Find a more abstract variable that fluctuates less while correlating as well or better with actual results, and you will have a narrower margin of error. Conversely, find one that doesn't, and the margin will be wider. Which is why validating over time is so important.


QUOTE
The comparison between UC's model and the polls is not between theory and empirical data, it's between a correlation model (UC's) and a sample dataset (polling). The inherent issue with the correlation model is that correlation does not necessarily mean causation. The inherent issue with sample data is that in political preference, the data change with the vicissitudes of the campaign and current events. However, the advantages of one over the other should, I hope, be obvious.


Yes, this is why I said it wasn't a perfect analogy. Agree with your points about the two--but I still disagree that this points to any advantage of one over the other. Each has its own set of advantages and disadvantages. Personally, I shy away from the subjectiveness and fluctuations over time inherent in polls (said I was an engineer! smile.gif ). Others might prefer the advantages you mention for polls. Regardless of accuracy (which isn't known for any election until after the fact), polls have another set of advantages that you don't mention. They provide a wealth of data which can either be talked about endlessly (by the media) or spun to indicate advantage (by the politicians). Kind of like comparing the computer models vs. the polls in college football. The computer models don't leave much to talk about, and that is boring... regardless of whether they are more accurate than the polls or not.

This post has been edited by Hobbes: Nov 15 2012, 01:24 AM
AuthorMusician
post Nov 15 2012, 04:31 PM
Post #20


**********
Glasses and journalism work for me.

Sponsor
November 2003

Group: Sponsors
Posts: 6,328
Member No.: 297
Joined: December-1-02

From: Blueberry Hill
Gender: Male
Politics: Liberal
Party affiliation: Democrat



While it is fine to argue the details of stats, this Slate article explains how the Romney campaign failed at doing the math.

Romney actually won the unaffiliated vote, which his campaign went after. The big failing came in underestimating the success of ground-level campaign work (actual shoes on the ground) to bring in the minority votes and some of the white votes. Rather than not understanding the math, the failure in understanding had to do with money versus people. What money can buy (ads) is far less effective than people talking with other people. People vote. The GOP needs to remember that.

The failure of their software at the end of the campaign probably made little difference, but it did show a severe lack of preparation.

Perhaps the second most effective thing that the Democrats did was to provide numbers quickly when asked. The Republicans offered vague notions of believing, and so all will be well. Apparently, not enough voters swallowed that.

So the Republican loss can be summed up as follows:

Did not understand the electorate.

Did not have an effective strategy.

Put belief ahead of reality.

The solutions to these problems should be obvious. Oh yes, and find better IT people.

This post has been edited by AuthorMusician: Nov 15 2012, 04:33 PM
©2002-2010 America's Debate, Inc.  All rights reserved.