If it's not random sampling, it's not a scientific poll. Tracking polls use random sampling, not a set panel. The LA Times thing is not a random-sample poll, i.e., it is not a poll. It is a set panel of participants. A focus group, if you will.
It is a poll, not a focus group, even though the sample is the same. A focus group is a discussion -- there is no discussion here, and it is conducted like any other poll. I have been involved in common-sample tracking polls before -- they are still polls.
The problem here is that the sample is biased and the methodology of weighting by likelihood-to-vote percentages is more than a little suspect.
Actually this is not true. I used to work in the business.
The initial sample was random, and from that they used the same sample over and over to track within it. That does not make it not a poll; in fact, it is a fairly common methodology. You can even ask questions that track individual responses through each wave to monitor changes.
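As a rough illustration of what tracking within a common sample means, here is a toy sketch -- the respondent IDs and answers are invented for illustration, not drawn from any real survey:

```python
# Toy sketch of a common-sample tracking poll: the same respondents are
# asked in each wave, so individual answer changes can be followed over
# time. Respondent IDs and answers are hypothetical.
waves = {
    "r1": ["A", "A", "B"],  # switched between waves 2 and 3
    "r2": ["B", "B", "B"],  # stable across all waves
    "r3": ["A", "B", "A"],  # switched back
}

# Count respondents who changed their answer between waves 2 and 3:
switchers = sum(1 for answers in waves.values() if answers[1] != answers[2])
print(switchers)  # 2
```

This per-respondent tracking is exactly what a fresh random sample each wave cannot give you, and it is why the common-sample design is used at all.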
Set panels can be created from a random initial sample, be voluntary, or use an existing list. You can poll a set panel if you do it properly, and you can get valid results if the panel was created with sound methodology. This is a poll -- just not a good one: the sample quota they used to create the initial sample was not statistically representative, it may not have been managed as the survey continued, and the interpretation is not valid.
I don't call lower-quality things something other than what they are -- I just label them lower quality. This is a case for that.
You can easily poll using the sample from a previously answered poll, and this is what they did. However, you have to maintain representative quotas each time (you can weight them, but you need enough respondents in each group). They started without representative quotas, and we do not know whether this has deteriorated since, as they may have just reused the sample each time without adjusting quotas (as you lose respondents from one group, you have to reduce the others accordingly in the weighting).
Typically you have to start with a massive sample, because you lose some of it each time you go back. They started this with 3,000. This poll is down to 2,500, which in the US is getting low -- a couple more waves and they will be below 2,000. It did not go down as much as it normally would have, however, which suggests no quota adjustments were made, so it could be getting more distorted each time out. An election tracking poll using the first respondents as the sample for 5-6 future polls should start closer to 5,000-10,000 in order to still be valid by the last one. You continue asking everyone but take out enough to weight according to the proper quotas.
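The attrition arithmetic can be sketched like this -- the 90% per-wave retention rate is a hypothetical figure chosen for illustration, not the actual poll's:

```python
# Sketch of panel attrition across waves, assuming a hypothetical
# per-wave retention rate. Numbers are illustrative only.
def remaining_sample(initial: int, retention: float, waves: int) -> list[int]:
    """Expected sample size before wave 1 and after each re-contact wave."""
    sizes = [initial]
    for _ in range(waves):
        sizes.append(round(sizes[-1] * retention))
    return sizes

# A panel starting at 3,000 with 90% retention per wave
# shrinks below 2,000 by the sixth re-contact:
print(remaining_sample(3000, 0.90, 6))

# Starting near 10,000 keeps even the last wave well above 5,000:
print(remaining_sample(10000, 0.90, 6))
```

The point is that attrition compounds, so a panel meant to survive 5-6 waves has to be over-built at the start.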
The issues are having enough sample, generating it randomly in the first place (by telephone, for example), and keeping the quotas accurate and representative. Then you check the quotas with each edition of the survey, lowering all the other quotas proportionally for each respondent you lose.
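A minimal sketch of the quota re-weighting step, assuming hypothetical target quotas and panel counts (not the survey's actual demographics):

```python
# Minimal sketch of quota re-weighting: compute a per-respondent weight
# for each group so the weighted panel matches target population
# proportions. Groups, counts, and targets are hypothetical.
def quota_weights(panel_counts: dict[str, int],
                  targets: dict[str, float]) -> dict[str, float]:
    """Weight per respondent in each group so that the weighted group
    totals match targets (proportions summing to 1.0)."""
    total = sum(panel_counts.values())
    return {g: (targets[g] * total) / panel_counts[g] for g in panel_counts}

# A group that has attrited faster than the others gets weighted up --
# which only works if enough of its members remain:
weights = quota_weights(
    {"18-34": 400, "35-64": 1400, "65+": 700},   # who is left in the panel
    {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}, # target proportions
)
```

This is also where the danger lies: a heavily-attrited group carries a large weight, so a handful of its respondents can swing the headline number.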
Then, when it is done, you must use reasonable assumptions to interpret it. Failure to do any of this properly does not make it not a poll -- it makes it a bad poll.