TED Conversations

This conversation is closed.

Discuss the note to the TED community on the withdrawal of the TEDxWestHollywood license.

For discussion: http://blog.ted.com/2013/04/01/a-note-to-the-ted-community-on-the-withdrawal-of-the-tedxwesthollywood-license



    • Apr 5 2013: I'm pretty sure the decision by TED has been set in stone.

      Just a point of clarification. Regarding the content of the links above, would it be fair to take that as meaning you view Russell Targ's work in the same class as, say, flat earth ideas? I ask because it seems like this thread keeps trying to get a footing on various issues regarding the TEDx event in question. One of the more common discussions, and the one that makes the most sense, IMHO, is 'does Targ's work equate to pseudoscience?' I don't want to assume any points made or not made, so I just wanted clarity. It seems that if this particular topic gets bundled with ideas not at the helm of the discussion, that creates, if not a series of strawmen, then at least an unnecessary albatross for supporters of Targ. I'm in no way saying that is your intention, just a possible perception. Personally, I would just like to see a facilitation of a debate between the accused and the best mind of the other camp. Past that, it's all just treading well-worn ground.
      • Apr 5 2013: I don't know enough about Targ's work to say whether it is in the same class as flat earth ideas or not, so no, it would not be fair to take it that way. My point in posting these links is that the work of Gardner and Sagan is especially helpful in addressing issues of frauds, fallacies, pseudoscience, and science and their social, historical, and intellectual contexts. I think that their work can be invaluable to formulating useful policies that address the issues at hand while at the same time being accessible and comprehensible to the general public. Among the many gifts of Gardner and Sagan was the ability to speak in plain language to a popular audience, something TED has long valued.

        You're right, this discussion has been pulled time and time again to Targ, but TED has said that its decision about the West Hollywood event was not based on the work of specific individuals but rather on the program as a whole. That is why I think it would be best to steer the conversation back to issues of general principles rather than fixating on the work of just one of several presenters. There has been no similar detailed evaluation of the work of Suzanne Taylor, the West Hollywood TEDx organizer, nor of other scheduled presenters. Targ's case may be helpful in illustrating general principles, but the decision was not based on his work and unwarranted focus on it is a distraction, IMHO. There were many other presenters. I do not think it is necessary to facilitate debates with each and every one of them (which would actually be the most fair to them) or with any of them in particular, including Targ.
        • Apr 5 2013: What about Penrose and Hameroff talking at TED about quantum OR? Do you say a scientist of the calibre of Hawking can be involved in pseudoscience too, and TED would have to look at this on a case-by-case basis? Like Penrose talking about twistors or "Before the Big Bang" is fine, but suspecting mind-matter interaction at the quantum level, or that there is something like a Platonic realm of mathematical truths that cannot be accessed purely mechanically (the Gödel argument), would cross the line? Btw, Gödel also presented a purely logical proof for God. Would he have been rejected as a speaker by TED too? Or Bohr, Heisenberg? Max Planck, as he believed matter derives from consciousness?

          How is this not policing ideas out of ideological bias and suppressing public debate, the actual open scientific and cultural process?

          I would love to see Dean Radin talking on TED. Any objections?
    • Apr 5 2013: "At the time of writing, there are three claims in the ESP field which, in my opinion, deserve serious study: (1) that by thought alone humans can (barely) affect random number generators in computers; (2) that people under mild sensory deprivation can receive thoughts or images "projected" at them; and (3) that young children sometimes report the details of a previous life, which upon checking turn out to be accurate and which they could not have known about in any other way than reincarnation."

      ~Carl Sagan, The Demon-Haunted World, Random House, 1995, p. 302.
      • Apr 5 2013: Has any serious and significant scientific progress been made on any of these three issues in the past twenty years? What have been the results (either towards confirming, disconfirming, or both)? How has it been evaluated? Please include reliable sources.
        • Apr 5 2013: Yes, on all fronts. Reincarnation is probably the front on which least progress has been made, inasmuch as it can't really be brought into the lab. Nonetheless Jim Tucker soldiers on, continuing Ian Stevenson's work at the University of Virginia. What kind of sources would you like - equal to the ones you gave for Gardner and Sagan, or proper peer-reviewed stuff - and if the latter, how proper and how peer-reviewed? I don't want to come over as some kind of peer-review fetishist, but I'm sure you'll understand why I ask in advance.
        • Apr 5 2013: Let's start off with the Ganzfeld studies. Thanks to Craig Weiler who gathered up links on this and posted them in one spot:
          http://forum.mind-energy.net/skeptiko-podcast/4967-references-resources-only.html#post145873

          There is more than enough reading there to keep you going for a while, John. Here's a sneak preview of what you'll find:

          http://www.frontiersin.org/quantitative_psychology_and_measurement/10.3389/fpsyg.2011.00117/full

          "Starting from the famous phrase “extraordinary claims require extraordinary evidence,” we will present the evidence supporting the concept that human visual perception may have non-local properties, in other words, that it may operate beyond the space and time constraints of sensory organs, in order to discuss which criteria can be used to define evidence as extraordinary. This evidence has been obtained from seven databases which are related to six different protocols used to test the reality and the functioning of non-local perception, analyzed using both a frequentist and a new Bayesian meta-analysis statistical procedure. According to a frequentist meta-analysis, the null hypothesis can be rejected for all six protocols even if the effect sizes range from 0.007 to 0.28. According to Bayesian meta-analysis, the Bayes factors provides strong evidence to support the alternative hypothesis (H1) over the null hypothesis (H0), but only for three out of the six protocols. We will discuss whether quantitative psychology can contribute to defining the criteria for the acceptance of new scientific ideas in order to avoid the inconclusive controversies between supporters and opponents."
        • Apr 5 2013: "Has any serious and significant scientific progress been made on any of these three issues in the past twenty years?"

          That depends on how you define "serious" and "scientific." IMO, the hypothesis that human intention can affect the output of a random number generator is not a scientific hypothesis, because of a little problem called physics. Nonetheless, it has been studied ad nauseam. Bösch et al. (2006) conducted a meta-analysis of 380 such trials comprising 299,400,000,000 random bits (well, random under the null hypothesis, anyway), which has got to be the largest sample size in the history of statistics. Compared with a value of .5 under the null, the authors found a result of...wait for it... .499997, which was statistically significant, but in the wrong direction. That was using a fixed-effect model. Using a random-effects model, the result was .500035, which was statistically significant, but this time in the intended direction.

          You know, because 299 billion isn't a large enough sample size.

          Source: Bösch et al (2006). Examining Psychokinesis: The Interaction of Human Intention With Random Number Generators—A Meta-Analysis. Psychological Bulletin 132 (4); 497–523.
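
          For a sense of the arithmetic, here is a quick sketch (using only the figures quoted above) of why a proportion that close to .5 can still be statistically significant at that sample size:

          import math

          n = 299_400_000_000       # random bits, as quoted from Bösch et al. (2006)
          p_hat = 0.499997          # the fixed-effect estimate quoted above
          se = math.sqrt(0.25 / n)  # standard error of a proportion under H0: p = .5
          z = (p_hat - 0.5) / se
          print(f"SE = {se:.2e}, z = {z:.2f}")  # SE ≈ 9.1e-07, z ≈ -3.3
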
        • Apr 5 2013: I think they refuted Bösch with this.

          "Reexamining Psychokinesis: Comment on Bösch, Steinkamp, and Boller (2006)"
          Dean Radin (Institute of Noetic Sciences); Roger Nelson and York Dobyns (Princeton University); Joop Houtkooper (Justus Liebig University of Giessen)

          Abstract: H. Bösch, F. Steinkamp, and E. Boller's (2006) review of the evidence for psychokinesis confirms many of the authors' earlier findings. The authors agree with Bösch et al. that existing studies provide statistical evidence for psychokinesis, that the evidence is generally of high methodological quality, and that effect sizes are distributed heterogeneously. Bösch et al. postulated the heterogeneity is attributable to selective reporting and thus that psychokinesis is "not proven." However, Bösch et al. assumed that effect size is entirely independent of sample size. For these experiments, this assumption is incorrect; it also guarantees heterogeneity. The authors maintain that selective reporting is an implausible explanation for the observed data and hence that these studies provide evidence for a genuine psychokinetic effect.

          http://www.deanradin.com/papers/radin_RNGMA_psych_bull.pdf
        • Apr 5 2013: http://jeksite.org/psi/beyond_meta.pdf

          hm. still an open debate? diving in..

          Ugh! True?

          "By the usual methodological standards recommended for experimental research, there have been no well-designed ganzfeld experiments. Based on available data, Rosenthal (1986), Utts (1991), and Dalton (1997b) described 33% as the expected hit rate for a typical ganzfeld experiment where 25% is expected by chance. With this hit rate, a sample size of 201 is needed to have a .8 probability of obtaining a .05 result one-tailed.1 No existing ganzfeld experiments were preplanned with that sample size. The median sample size in recent studies was 40 trials, which has a power under .25."

          From my understanding and experience with psi, it makes sense that smaller sample sizes (shorter runs etc.) would create larger effects, though.
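
          The power figures quoted above can be checked with a short sketch (an exact-binomial version, assuming a one-tailed test at alpha = .05 with the 33% vs. 25% hit rates from the quote):

          from scipy.stats import binom

          def power(n, p0=0.25, p1=0.33, alpha=0.05):
              # Smallest hit count k with P(X >= k | chance) <= alpha...
              k_crit = next(k for k in range(n + 1)
                            if binom.sf(k - 1, n, p0) <= alpha)
              # ...then the probability of reaching it at the claimed hit rate.
              return binom.sf(k_crit - 1, n, p1)

          print(power(201))  # about .8, matching the n = 201 figure quoted
          print(power(40))   # about .2, consistent with "power under .25"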


          Bösch's answer to Radin et al.:

          Abstract: Our meta-analysis, which demonstrated (i) a small, but highly significant overall effect, (ii) a small study effect, and (iii) extreme heterogeneity, has provoked widely differing responses. After considering our respondents' concerns about the possible effects of psychological moderator variables, the potential for missing data, and the difficulties inherent in any meta-analytic data, we reaffirm our view that publication bias is the most parsimonious model to account for all three findings. However, until compulsory registration of trials occurs, it cannot be proven that the effect is in fact attributable to publication bias and it remains up to the individual reader to decide how our results are best and most parsimoniously interpreted.

          https://www.google.de/search?q=10.1.1.132.3260&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:de:official&client=firefox-a#hl=de&client=firefox-a&hs=sRu&rls=org.mozilla:de%3Aofficial&sclient=psy-ab&q=in+the+eye+of+the+beholder+citeseer+b%C3%B6sch&oq=in+the+eye+of+the+beholder+citeseer+b%C3%B6sch&gs_l=serp.3...14718.16632.2.17138.6.6.0.0.0.0.151.602.4j2.6.0.eappsweb..0.0...1.1.8.psy-ab.q
        • Apr 5 2013: Amfortas Titurel quoted Radin et al from their criticism of the Bösch et al RNG meta-analysis:

          "The authors [Radin et al] maintain that selective reporting is an implausible explanation for the observed data and hence that these studies provide evidence for a genuine psychokinetic effect."

          You can follow the link below to view the funnel plot from the Bösch paper. Note the "missing" studies in the lower left-hand corner of the plot. This means that for some reason there are fewer small studies with small effect sizes than there are small studies with large effect sizes. In fields of study where experimenters are not ideologically motivated to save their hypotheses at any cost, this would be accepted for exactly what it looks like: small studies with small effect sizes were unpublished. But of course in parapsychology, the obvious explanation must be wrong, since people must be able to affect the output of random number generators. They just must!

          http://jt512.dyndns.org/images/RNG_funnel_plot.png
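
          The selective-reporting reading of the funnel plot can be illustrated with a toy simulation (hypothetical, not the Bösch data): every study runs on pure chance, but small studies get "published" only when significant. The published record then shows exactly the missing lower-left corner described above, and the small published studies report inflated effects.

          import numpy as np

          rng = np.random.default_rng(0)
          small, large = [], []
          for _ in range(2000):
              n = int(rng.integers(50, 20_000))
              hits = rng.binomial(n, 0.5)      # the null is true in every study
              z = (hits - 0.5 * n) / np.sqrt(0.25 * n)
              if n >= 5000:
                  large.append(hits / n)       # big studies always get published
              elif z > 1.645:
                  small.append(hits / n)       # small ones only if "significant"

          print(f"published small studies: mean effect {np.mean(small) - 0.5:+.4f}")
          print(f"published large studies: mean effect {np.mean(large) - 0.5:+.4f}")
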
      • Apr 5 2013: Thanks. In the interest of satisfying people such as Sandy and Craig, you should provide sources from reputable, peer-reviewed journals.
        • Apr 5 2013: What are you counting as reputable?
        • Apr 5 2013: Did you even look at the literature?

          There are many peer-reviewed articles on that list. And the books listed are also properly referenced works in and of themselves. Both sides of the issue are presented as well. The articles arguing against psi have not been left out. Unlike the skeptics, I see no requirement to limit the information presented to the public.
        • Apr 5 2013: Just to make it easy, since you don't seem to be up to doing any reading, here is an entertaining and informative video that discusses the evidence for psi:
          http://www.youtube.com/watch?feature=player_embedded&v=FMXqyf13HeM

          At 26:03 in the video, there is a slide showing the references. The journals referenced include Science, Nature, International Journal of Neuroscience, Neuroscience Letters, and Physics Essays.
      • Apr 5 2013: Steve, please start with Science, Nature, and the "flagship" journals of major academic disciplines. Thanks again for offering!
        • Apr 5 2013: There's one to be going on with. It's from the Journal of Personality and Social Psychology:
          http://dbem.ws/FeelingFuture.pdf
        • Apr 5 2013: But see Francis (2012), who showed statistically that Bem's positive findings were likely to have been due to publication bias.

          Source: Francis G. (2012). Too good to be true: Publication bias in two prominent studies from experimental psychology. Psychonomic Bulletin & Review, 19, 151-156. DOI 10.3758/s13423-012-0227-9
        • Apr 5 2013: @JT512
          Have you a link to the actual paper? Because often skeptical responses to such evidence are shoddy pieces of work - like the one you cited earlier, where Wiseman pretended he had failed to replicate Sheldrake's results by hiding his data - data which, when examined, showed Wiseman had actually replicated it exactly and had simply added in an ad hoc criterion to invalidate the experiment.
        • Apr 6 2013: Steve, the paper by Greg Francis criticizing the Bem study is behind a paywall, unfortunately. The paper is not "shoddy." Greg applies the Ioannidis and Trikalinos (2007) test for an excess of significant findings to two papers, only one of which (Bem's) was a psi paper. Greg has published criticisms of a number of papers in experimental psychology using the same methodology. He is not a professional psi skeptic in the sense of Wiseman, and in no way was he singling out psi. I have written a brief description of how the test works on my blog, where I applied it to show that yet another experimental psych paper is not credible (link below).
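
          Roughly, the test works like this (a sketch with made-up power values, not the figures Francis computed for Bem): estimate each experiment's power to detect its reported effect, then ask how probable it is that every experiment in the set would come out significant.

          import math

          # Hypothetical per-experiment powers; the real test estimates these
          # from each experiment's reported effect size and sample size.
          powers = [0.6, 0.55, 0.7, 0.5, 0.65, 0.6, 0.55, 0.6, 0.5]
          p_all = math.prod(powers)
          print(f"P(all {len(powers)} significant) = {p_all:.4f}")  # ≈ .0074
          # Below the conventional .10 criterion, an all-significant set of
          # experiments looks "too good to be true", suggesting selective
          # reporting rather than a uniformly detectable effect.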

          Also, it is not the case, as you allege, that Wiseman pretended anything or hid any data in his psychic dog experiments. In his paper he clearly explains that the experiments were designed to test specific claims made by the dog's owner. The tests were properly designed, conducted, and analyzed, and failed to confirm the owner's claims.

          It is not surprising that Sheldrake could find some similar pattern between his and Wiseman's data that appears to support Sheldrake's claim that dogs are psychic. If you examine a dataset enough ways, you are bound to find something that supports your hypothesis. That is precisely why the definition of a "success" and the statistical test to be employed have to be well defined prior to conducting the experiment. That's what Wiseman did, and according to those criteria, which were specified in advance, the dog failed his psychic test.

          http://jt512.dyndns.org/blog?p=130
        • Apr 6 2013: It wasn't a case of examining the data set in a particular way; it was the only sensible way. You count up the seconds the dog is at the window during the time the owner is away and see if there is a correlation with when the owner is coming home. That was the reported phenomenon to be tested - whether the dog waits at the window when the owner is coming home. And when one does that test, the dog is at the window far more when the owner is coming home than at other times. Wiseman got even better results than Sheldrake - dog at window on average 4% of the time when the owner is out and not coming home, and 75% of the time during the period the owner is on the way home - and yet Wiseman just invented a criterion that if the dog went to the window at all for reasons Wiseman could not detect, then he discounted that test and declared it a failure - thus dispensing with all the data that showed the exact effect that was to be tested. It was a bit of a disgrace. Was that not the reason Wiseman had to resign from the SPR or something (before he got the boot)?
          There's an article on Wiseman's shenanigans here: http://www.sheldrake.org/D&C/controversies/Carter_Wiseman.pdf
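
          The tally described here is simple enough to write down; a minimal sketch on made-up per-second data (the toy numbers below echo the 4%/75% split quoted above):

          def window_rates(at_window, returning):
              # Per-second booleans for one absence: was the dog at the window,
              # and was the owner on the way home?
              home = [w for w, r in zip(at_window, returning) if r]
              away = [w for w, r in zip(at_window, returning) if not r]
              return sum(home) / len(home), sum(away) / len(away)

          # Toy absence: 600 s total, owner heads home for the final 120 s.
          returning = [False] * 480 + [True] * 120
          at_window = [s % 25 == 0 for s in range(480)] + [True] * 90 + [False] * 30
          rate_home, rate_away = window_rates(at_window, returning)
          print(f"{rate_home:.0%} at window while returning vs {rate_away:.0%} otherwise")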


          Anyway, as regards the other paper I'd need to see what it says before coming to any judgement - often psi is critiqued in very odd ways and I always think it's good to check out what they've actually done.
      • Apr 5 2013: Thanks, Steve.

        Sorry, Sandy. Videos, regardless of their content, are definitely secondary sources with standards far below those of reputable, peer-reviewed journals. The video medium in particular is highly susceptible to methods of propaganda.
        • Apr 5 2013: John, the articles are listed. It's OK if you insist on being blind to the evidence. Keep moving the goalposts as required.
      • Apr 5 2013: No goalposts have been moved, Sandy, and it is disingenuous of you to suggest that.

        In response to my statement "Please include reliable sources," Steve asked, "What kind of sources would you like - equal to the ones you gave for Gardner and Sagan or proper peer-reviewed stuff - and if the latter how proper and how peer-reviewed?"

        I responded: "In the interest of satisfying people such as Sandy and Craig, you should provide sources from reputable, peer-reviewed journals."

        He then asked: "What are you counting as reputable?"

        I responded: "Steve, please start with Science, Nature, and the 'flagship' journals of major academic disciplines."

        You then offered a link to a video, to which I commented:

        "Sorry, Sandy. Videos, regardless of their content, are definitely secondary sources with standards far below those of reputable, peer-reviewed journals."

        In response, you posted the comment, "John, the articles are listed. It's OK if you insist on being blind to the evidence. Keep moving the goalposts as required."

        The only explanations I can think of for this are:

        A. You are not reading the conversation,
        B. You are reading the conversation but not understanding what is written, or
        C. You are reading the conversation and understanding what is written but choosing to ignore it.

        If none of these are correct, please explain why you are claiming that I keep moving the goalposts.
        • Apr 5 2013: The video gives a list of references to the very journals you suddenly value. The references are there for all to see, available to anyone not purposefully blind to the evidence.
      • Apr 5 2013: jt Fivetwelve wrote, "You know, because 299 billion isn't a large enough sample size."

        Thanks for the information. It's priceless.
      • Apr 6 2013: Steve, my intuition tells me that the fundamental flaw in these dog studies may actually be a significant underestimation of the cognitive (not precognitive) abilities of dogs. There is an increasing amount of research suggesting that animals are far more intelligent than has been assumed.

        Are Animals More Intelligent Than We Think?
        http://www.nytimes.com/2003/11/11/science/are-animals-smarter-than-we-think.html

        The Brains of the Animal Kingdom
        http://online.wsj.com/article/SB10001424127887323869604578370574285382756.html

        It may not be ESP so much as something like IQ.
        • Apr 6 2013: Not having read them, of course, you could just be making up a silly pseudo-explanation. How do you suppose the dog worked out when the owner was on her way home when nobody in the house knew? Perhaps dogs send messages to each other by barking (like in 101 Dalmatians) and JT's friends were in on it and were alerting him using the "twilight bark".

          http://www.youtube.com/watch?v=q3eqofgfDpY
        • Apr 6 2013: Mr. Hoopes, perhaps you could enlighten us as to how normal intelligence could explain the study. I'm sure that you've read up on Sheldrake's study and can explain in detail how he got the results he did and why it was only intelligence as opposed to any kind of psychic ability.

          I also have to say that for someone who doesn't believe in psychic ability, you sure do talk about intuition a lot.
      • Apr 6 2013: Maybe the dog had been perceptive enough to notice and cognitively process information that no one else had. Why consider the dog's knowledge to be a subset of human knowledge when we know that dogs perceive the world in ways radically different from humans? What I'm suggesting is that in some ways the dog may have actually been more intelligent than the people (including the people studying it), and they are not clever enough to figure out how. How's that for a new paradigm?

        You jest, but I wouldn't be so quick to dismiss the "twilight bark." I've listened to roosters communicate with each other across miles. There's nothing psi about it and someone dismissive of animal communication would be likely to miss it entirely.
        • Apr 6 2013: Of course, there could be 101 Dalmatians, sorry, reasons, for the dog always going to the window the most when the owner is coming home - perhaps it sneaked a look in Sheldrake's bag and read the experimental protocol. Maybe JT overheard them talking about the test. Who knows? Not you, certainly, but any old explanation will do. What is it they say: any port in a storm (of cognitive dissonance).
      • Apr 6 2013: We know that dogs can hear audio frequencies that humans cannot. We also know that their olfactory sense is far superior. What if the dog were reacting to signals beyond the perception of humans yet not in the category of psi? Were there experimental controls for that? Animals also have a keen sense of time. My cats are especially good about routines performed at certain times of day. They also pay attention to things (sounds, movements, etc.) that I will only notice because of their reactions. I think underestimated animal intelligence violates Occam's Razor far less than psi.
        • Apr 6 2013: All that was dealt with. Smell, sound, routine, all dealt with. I favour the doggy conspiracy where all the dogs were signalling to each other unbeknownst to the experimenters.
      • Apr 6 2013: Good point. They do that, you know.
